Sample records for factor correction solutions

  1. Angular spectral framework to test full corrections of paraxial solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J

    2015-07-01

Different correction methods for paraxial solutions have been used when such solutions extend beyond the paraxial regime. The authors of those methods were guided either by experience or by an educated hypothesis pertinent to the particular problem they were tackling. This article provides a framework for classifying full-wave correction schemes, so that, for a given solution of the paraxial wave equation, the best available correction scheme can be selected. Some common correction methods are considered and evaluated within the proposed framework. A further notable contribution is a set of necessary conditions that two solutions of the Helmholtz equation must satisfy in order to admit a common solution of the parabolic wave equation as a paraxial approximation of both.

  2. Efficiency for unretained solutes in packed column supercritical fluid chromatography. I. Theory for isothermal conditions and correction factors for carbon dioxide.

    PubMed

    Poe, Donald P

    2005-06-17

    A general theory for efficiency of nonuniform columns with compressible mobile phase fluids is applied to the elution of an unretained solute in packed-column supercritical fluid chromatography (pSFC). The theoretical apparent plate height under isothermal conditions is given by the Knox equation multiplied by a compressibility correction factor f1, which is equal to the ratio of the temporal-to-spatial average densities of the mobile phase. If isothermal conditions are maintained, large pressure drops in pSFC should not result in excessive efficiency losses for elution of unretained solutes.
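The relation described above is a multiplicative one, which a short sketch can make concrete. In this hedged example the Knox coefficients and the density averages are illustrative placeholders, not values from the paper; it shows only how the compressibility factor f1 scales the isothermal Knox plate height:

```python
def knox_reduced_plate_height(nu, A=1.0, B=2.0, C=0.05):
    # Knox equation in reduced variables: h = A*nu^(1/3) + B/nu + C*nu
    # (A, B, C here are generic packed-column values, not from the paper)
    return A * nu ** (1.0 / 3.0) + B / nu + C * nu

def apparent_plate_height(nu, particle_diameter, rho_time_avg, rho_space_avg):
    # f1 = temporal-average / spatial-average mobile-phase density;
    # f1 = 1 for an incompressible fluid, f1 > 1 under a large pressure drop
    f1 = rho_time_avg / rho_space_avg
    return knox_reduced_plate_height(nu) * particle_diameter * f1
```

With equal density averages (f1 = 1) this reduces to the ordinary Knox plate height; a 5% mismatch between the two averages inflates the apparent plate height by the same 5%, which is the sense in which modest pressure drops cost little efficiency.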

  3. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
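As a point of reference for the tabulated factors, the ideal van der Pauw relation itself is easy to evaluate numerically. The sketch below is a generic bisection solve (our illustration, not code from the paper) that recovers the sheet resistance R_s from the two measured four-terminal resistances in the ideal point-contact, uniform-thickness case the correction factors depart from:

```python
import math

def van_der_pauw_sheet_resistance(r_a, r_b):
    """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for R_s by bisection."""
    f = lambda rs: (math.exp(-math.pi * r_a / rs)
                    + math.exp(-math.pi * r_b / rs) - 1.0)
    lo, hi = 1e-9, 1e9  # f(lo) < 0 (tiny R_s), f(hi) > 0 (huge R_s)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a symmetric sample (R_A = R_B = R) this reproduces the textbook result R_s = πR/ln 2; the paper's correction factors then account for finite contact size, sample thickness, and the other non-idealities listed above.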

  4. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  5. Radiation boundary condition and anisotropy correction for finite difference solutions of the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1994-01-01

In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. These excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition, which was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should instead be constructed from the asymptotic solution of the finite difference equation. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow; in all these cases no reflected waves could be detected. The present method of developing a radiation boundary condition is also applicable to higher-order finite difference schemes. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. The
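The grid anisotropy discussed above can be quantified directly from the discrete dispersion relation of the standard five-point Laplacian. The sketch below (our illustration, not the authors' code) evaluates the relative error of the discrete symbol for a plane wave travelling at angle theta; the error is roughly twice as large along a grid axis as along the diagonal, which is exactly the direction-dependent distortion a correction factor must undo:

```python
import math

def discrete_symbol(k, h, theta):
    # Symbol of the 5-point central-difference Laplacian acting on
    # exp(i*(kx*x + ky*y)): (4/h^2)*(sin^2(kx*h/2) + sin^2(ky*h/2))
    kx, ky = k * math.cos(theta), k * math.sin(theta)
    return (4.0 / h ** 2) * (math.sin(kx * h / 2) ** 2
                             + math.sin(ky * h / 2) ** 2)

def anisotropy_error(points_per_wavelength, theta):
    # Relative error of the discrete symbol vs the exact k^2 of the
    # Helmholtz operator, for unit wavelength and grid spacing h
    k = 2.0 * math.pi
    h = 1.0 / points_per_wavelength
    return discrete_symbol(k, h, theta) / k ** 2 - 1.0
```

At five points per wavelength the on-axis error is about -12.5% versus roughly -6.4% on the diagonal, consistent with the paper's observation that the effect is large at coarse resolutions.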

  6. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.

  7. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  8. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  9. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

Analytical forms of the quantum corrections to the quasi-periodic solution of the Sine-Gordon model and the periodic solution of the phi4 model are obtained through zeta-function regularisation, taking into account all the remaining variables of a d-dimensional theory. The qualitative dependence of the quantum corrections on the parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b²f(bx) + C, with b and C arbitrary real constants.

  10. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which, for cracks that propagate out of plane, underestimates the crack growth rates and overestimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 meter/cycle).

  11. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving a corrected reading of 87 dB(A).
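The worked figure in this excerpt is simple additive arithmetic on the dB(A) readings; a minimal sketch (the function name is ours, not from the regulation):

```python
def corrected_sound_level(avg_reading_dba, distance_corr_dba, ground_corr_dba):
    # Per 49 CFR 325.79, the distance (§ 325.73) and ground-surface
    # (§ 325.75) correction factors are added to the uncorrected average.
    return avg_reading_dba + distance_corr_dba + ground_corr_dba
```

`corrected_sound_level(88, -3, +2)` reproduces the 87 dB(A) corrected reading in the example.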

  12. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving a corrected reading of 87 dB(A).

  13. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows: 88 dB(A) uncorrected average of readings, −3 dB(A) distance correction factor, +2 dB(A) ground surface correction factor, giving a corrected reading of 87 dB(A).

  14. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it improves the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs, based on a digital signal processor and field-programmable gate arrays, is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
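Once the S-curve response has been mapped to linear data, the second stage is the standard two-point gain/offset calibration from two uniform (flat-field) exposures. A hedged pure-Python sketch of that standard stage (not the paper's DSP/FPGA implementation; the linearization step is omitted here):

```python
def two_point_calibration(raw_low, raw_high, target_low, target_high):
    """Per-pixel gain/offset from two flat-field frames at known levels."""
    gains, offsets = [], []
    for rl, rh in zip(raw_low, raw_high):
        g = (target_high - target_low) / (rh - rl)
        gains.append(g)
        offsets.append(target_low - g * rl)
    return gains, offsets

def correct_frame(raw, gains, offsets):
    # Apply corrected = gain * raw + offset pixel by pixel
    return [g * r + o for r, g, o in zip(raw, gains, offsets)]
```

Applying the calibration back to either flat-field frame yields a perfectly uniform image, which is the fixed-pattern-noise removal the correction is after; on real (nonlinear) data the quality of the result depends on the preceding linearization.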

  15. What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately

    PubMed Central

    Danek, Amory H.; Wiley, Jennifer

    2017-01-01

    The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions (“false insights”). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks), and rated each of their solutions for Aha! as well as with regard to Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors for the Aha! experience. This study reports three main findings: First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently with regard to magnitude and quality, with correct solutions emerging faster, leading to stronger Aha! experiences, and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness proffered a slightly different emotional coloring to the Aha! experience, with the additional perception of Relief for correct solutions, and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha

  16. What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately.

    PubMed

    Danek, Amory H; Wiley, Jennifer

    2016-01-01

The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions ("false insights"). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks), and rated each of their solutions for Aha! as well as with regard to Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors for the Aha! This study reports three main findings: First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently with regard to magnitude and quality, with correct solutions emerging faster, leading to stronger Aha! experiences, and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness proffered a slightly different emotional coloring to the Aha! experience, with the additional perception of Relief for correct solutions, and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha! experiences are

  17. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed accurately by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user

  18. Selecting the correct solution to a physics problem when given several possibilities

    NASA Astrophysics Data System (ADS)

    Richards, Evan Thomas

Despite decades of research on what learning actions are associated with effective learners (Palincsar and Brown, 1984; Atkinson et al., 2000), the literature has not fully addressed how to cue those actions (particularly within the realm of physics). Recent reforms that integrate incorrect solutions suggest a possible avenue to reach those actions. However, there is only a limited understanding of what actions are invoked by such reforms (Grosse and Renkl, 2007). This paper reports on a study that tasked participants with selecting the correct solution to a physics problem when given three possible solutions, where only one of the solutions was correct and the other two contained errors. Think-aloud protocol data (Ericsson and Simon, 1993) were analyzed using a framework adapted from Palincsar and Brown (1984). The cued actions were indeed connected to those identified in the worked-example literature. Particularly satisfying is the presence of internal consistency checks (i.e., are the solutions self-consistent?), a behavior predicted by the Palincsar and Brown (1984) framework but not explored in the worked-example realm. Participant discussions were also found to be associated with those physics-related solution features that were varied across solutions (such as fundamental principle selection or system and surroundings selections).

  19. Correctional officers' perceptions of a solution-focused training program: potential implications for working with offenders.

    PubMed

    Pan, Peter Jen Der; Deng, Liang-Yu F; Chang, Shona Shih Hua; Jiang, Karen Jye-Ru

    2011-09-01

The purpose of this exploratory study was to examine correctional officers' perceptions and experiences during a solution-focused training program and to initiate development of a modified pattern for correctional officers to use in jails. The study uses grounded theory procedures combined with a follow-up survey. The findings identified six emergent themes: obstacles to doing counseling work in prisons, offenders' amenability to change, correctional officers' self-image, advantages of a solution-focused approach (SFA), potential advantages of applying SFA to offenders, and the need for the consolidation of learning and transformation. Participants perceived the use of solution-focused techniques as appropriate, important, functional, and of only moderate difficulty in interacting with offenders. Finally, a modified pattern was developed for officers to use when working with offenders in jails. Suggestions and recommendations are made for correctional interventions and future studies.

  20. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Factor F is also compared to other correction factors, i.e., F_ASTM and F_JIS.
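For orientation, the baseline that a factor such as F corrects away from is the textbook collinear four-probe result for a thin sheet much larger than the probe spacing. The sketch below encodes only that ideal limit (the finite-disk values of F themselves come from the theory and tables compared in the paper, which we do not reproduce):

```python
import math

def sheet_resistance_ideal(voltage, current):
    # Collinear, equally spaced four-probe array on a thin, laterally
    # infinite sheet: R_s = (pi / ln 2) * V / I ≈ 4.532 * V / I
    return (math.pi / math.log(2.0)) * voltage / current

def resistivity_ideal(voltage, current, thickness):
    # Bulk resistivity of a thin slab: rho = R_s * t
    return sheet_resistance_ideal(voltage, current) * thickness
```

For finite disk samples the π/ln 2 prefactor no longer applies exactly, and a geometry-dependent correction factor such as F (or F_ASTM, F_JIS) takes its place.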

  1. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of the field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and the field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found to be not strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.

  2. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
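The geometric core of an area-averaging correction is easy to illustrate for a simple beam model. In the sketch below we assume a Gaussian radial pressure profile p(r) = p0·exp(−r²/w²); this is our simplification, not the nonlinear KZK field of the paper (whose correction factor also varies with frequency and drive level). The average over a circular element of radius a then has a closed form:

```python
import math

def area_averaging_factor(element_radius, beam_width):
    # Mean of exp(-r^2/w^2) over a disc of radius a:
    #   <p>/p0 = (w^2/a^2) * (1 - exp(-a^2/w^2))
    # The correction factor is the on-axis value over that mean (>= 1).
    a2, w2 = element_radius ** 2, beam_width ** 2
    mean_fraction = (w2 / a2) * (1.0 - math.exp(-a2 / w2))
    return 1.0 / mean_fraction
```

Consistent with the 0.2 mm versus 0.5 mm comparison above, the larger element averages over more of the beam profile and therefore needs the larger correction; capturing the periodic frequency dependence reported in the paper requires the full nonlinear field simulation.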

  3. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  4. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.

  5. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section for production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At leading order in N_c, power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.

  6. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section for production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At leading order in N_c, power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.

  7. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  8. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  9. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

    In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include Intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical, and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach, however, in this situation not all required data are available. External data or further assumptions may be required for estimation. © The Author(s) 2015.

  10. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).

  11. Perturbative corrections to B → D form factors in QCD

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ming; Wei, Yan-Bing; Shen, Yue-Long; Lü, Cai-Dian

    2017-06-01

We compute perturbative QCD corrections to B → D form factors at leading power in Λ/m_b, at large hadronic recoil, from the light-cone sum rules (LCSR) with B-meson distribution amplitudes in HQET. QCD factorization for the vacuum-to-B-meson correlation function with an interpolating current for the D-meson is demonstrated explicitly at one loop with the power counting scheme m_c ~ O(√(Λ m_b)). The jet functions encoding information of the hard-collinear dynamics in the above-mentioned correlation function are complicated by the appearance of an additional hard-collinear scale m_c, compared to the counterparts entering the factorization formula of the vacuum-to-B-meson correlation function for the construction of B → π form factors. Inspecting the next-to-leading-logarithmic sum rules for the form factors of B → Dℓν indicates that perturbative corrections to the hard-collinear functions are more profound than those for the hard functions, with the default theory inputs, in the physical kinematic region. We further compute the subleading power correction induced by the three-particle quark-gluon distribution amplitudes of the B-meson at tree level employing the background gluon field approach. The LCSR predictions for the semileptonic B → Dℓν form factors are then extrapolated to the entire kinematic region with the z-series parametrization. Phenomenological implications of our determinations for the form factors f_BD^{+,0}(q²) are explored by investigating the (differential) branching fractions and the R(D) ratio of B → Dℓν and by determining the CKM matrix element |V_cb| from the total decay rate of B → Dμν_μ.

  12. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  13. Attenuation correction factors for cylindrical, disc and box geometry

    NASA Astrophysics Data System (ADS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.

    2009-08-01

In the present study, attenuation correction factors have been experimentally determined for samples having cylindrical, disc and box geometries and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometry, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.
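The far-field attenuation correction referred to above has a simple closed form for simple geometries. As a minimal sketch (assuming the textbook slab self-attenuation formula, not the authors' HMC method; μ and t values are illustrative):

```python
import math

def slab_attenuation_correction(mu: float, t: float) -> float:
    """Far-field self-attenuation factor for a slab of thickness t (cm)
    with linear attenuation coefficient mu (1/cm):

        f = (1 - exp(-mu*t)) / (mu*t)

    The measured count rate is divided by f to recover the rate an
    unattenuated sample would give.
    """
    x = mu * t
    if x == 0.0:
        return 1.0  # transparent sample: no correction needed
    return (1.0 - math.exp(-x)) / x

# A thin, weakly absorbing sample needs almost no correction;
# an optically thick one needs a large correction.
print(round(slab_attenuation_correction(0.2, 0.1), 4))  # -> 0.9901
print(round(slab_attenuation_correction(1.0, 1.0), 4))  # -> 0.6321
```

The near-field formulations replace this single factor with geometry-dependent integrals over the sample-detector solid angle, which is why they break down at very close configurations.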

  14. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P(cav), the displacement correction factor, P(dis), the wall correction factor, P(wall), the stem correction factor, P(stem), the central electrode correction factor, P(cel), and the overall perturbation correction factor, P(Q). The calculated P(dis) values for PTW30010/30013 chambers were 0.9967 +/- 0.0017, 0.9983 +/- 0.0019, and 0.9980 +/- 0.0019 for (60)Co, 4 MV, and 10 MV photon beams, respectively. The value for a (60)Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P(dis) values showed a substantial discrepancy compared to those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P(wall) values ranged from 0.9994 +/- 0.0020 to 1.0031 +/- 0.0020 for PTW30010 and from 0.9961 +/- 0.0018 to 0.9991 +/- 0.0017 for PTW30011/30012 in the range of (60)Co-10 MV. The P(wall) values for PTW30011/30012 were around 0.3% lower than those of IAEA TRS-398. Also, the chamber responses with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P(stem) values ranged from 0.9945 +/- 0.0014 to 0.9965 +/- 0.0014, but they are not considered in current dosimetry protocols; the values showed no significant dependence on beam quality. P(cel) for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those of IAEA TRS-398.

  15. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  16. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
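As an illustration of how an RCF enters a four-probe measurement, here is a minimal sketch assuming the ideal infinite-thin-sheet factor π/ln 2 ≈ 4.532; finite samples such as those in the paper need the geometry-dependent RCF it derives instead:

```python
import math

def sheet_resistance(v: float, i: float,
                     rcf: float = math.pi / math.log(2)) -> float:
    """Sheet resistance (ohm/sq) from a collinear four-probe reading.

    R_s = RCF * V / I. The default RCF, pi/ln(2) ~ 4.532, holds only
    for an infinitely large, thin sheet; a finite sample's RCF depends
    on its size and the probe spacing.
    """
    return rcf * v / i

def resistivity(v: float, i: float, t: float,
                rcf: float = math.pi / math.log(2)) -> float:
    """Bulk resistivity (ohm*cm) for a uniform film of thickness t (cm)."""
    return sheet_resistance(v, i, rcf) * t

# Hypothetical reading: 1 mV across the inner probes at 1 mA on a
# large thin film.
print(round(sheet_resistance(1e-3, 1e-3), 2))  # -> 4.53 ohm/sq
```

The experimental verification in the paper amounts to checking that, with the correct geometry-dependent RCF substituted for the ideal factor, readings taken at different probe positions and on different samples collapse onto one resistivity.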

  17. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors and the tissue-maximum and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work lead to important conclusions on the use of detector-specific beam correction factors ([Formula: see text]) in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution.

  18. Feedback That Corrects and Contrasts Students' Erroneous Solutions with Expert Ones Improves Expository Instruction for Conceptual Change

    ERIC Educational Resources Information Center

    Asterhan, Christa S. C.; Dotan, Aviv

    2018-01-01

    In the present study, we examined the effects of feedback that corrects and contrasts a student's own erroneous solutions with the canonical, correct one (CEC&C feedback) on learning in a conceptual change task. Sixty undergraduate students received expository instruction about natural selection, which presented the canonical, scientifically…

  19. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of field size on detector response. An equation expressing the relation between published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4% agreement.
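The daisy-chaining and correction steps described in the abstract can be sketched as follows; the reading values and the correction factor below are hypothetical, not from the study:

```python
def daisy_chained_of(m_diode_small: float, m_diode_int: float,
                     m_chamber_int: float, m_chamber_ref: float) -> float:
    """Daisy-chained output factor for a small field.

    The diode covers the small-to-intermediate field range, and the ion
    chamber ties the intermediate field back to the reference field, so
    the diode's response at large fields drops out:

        OF = (M_diode(small)/M_diode(int)) * (M_ch(int)/M_ch(ref))
    """
    return (m_diode_small / m_diode_int) * (m_chamber_int / m_chamber_ref)

def corrected_of(of_measured: float, k_det: float) -> float:
    """Apply a published Monte Carlo detector correction factor."""
    return of_measured * k_det

# Hypothetical readings (arbitrary units) for a small cone,
# an intermediate field, and the reference field:
of_raw = daisy_chained_of(0.62, 0.88, 0.90, 1.00)
print(round(corrected_of(of_raw, 0.95), 3))  # -> 0.602
```

The correction factor k_det is what the study derives, per detector, as a fitted function of field size from published Monte Carlo data.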

  20. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  1. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain ³He filled proportional counters to detect time correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980s there has been a steady improvement in fuel performance. The trend has been to higher burn-up. This requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships to correct for varying fuel composition are consequently being challenged because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately, current modeling methods enable relative response functions to be calculated with high accuracy. Hence modeling provides a more convenient and cost effective means to derive correction factors which are fit for purpose with confidence.
In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence

  2. Wavefront-guided correction of ocular aberrations: Are phase plate and refractive surgery solutions equal?

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Munger, Rejean; Priest, David

    2005-08-01

    Wavefront-guided laser eye surgery has been recently introduced and holds the promise of correcting not only defocus and astigmatism in patients but also higher-order aberrations. Research is just beginning on the implementation of wavefront-guided methods in optical solutions, such as phase-plate-based spectacles, as alternatives to surgery. We investigate the theoretical differences between the implementation of wavefront-guided surgical and phase plate corrections. The residual aberrations of 43 model eyes are calculated after simulated refractive surgery and also after a phase plate is placed in front of the untreated eye. In each case, the current wavefront-guided paradigm that applies a direct map of the ocular aberrations to the correction zone is used. The simulation results demonstrate that an ablation map that is a Zernike fit of a direct transform of the ocular wavefront phase error is not as efficient in correcting refractive errors of sphere, cylinder, spherical aberration, and coma as when the same Zernike coefficients are applied to a phase plate, with statistically significant improvements from 2% to 6%.

  3. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO were 0.25%, 0.6% and 1% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Finite barrier corrections to the PGH solution of Kramers' turnover theory

    NASA Astrophysics Data System (ADS)

    Pollak, Eli; Ianconescu, Reuven

    2014-04-01

    Kramers [Physica 7, 284 (1940)], in his seminal paper, derived expressions for the rate of crossing a barrier in the underdamped limit of weak friction and the moderate to strong friction limit. The challenge of obtaining a uniform expression for the rate, valid for all damping strengths is known as Kramers turnover theory. Two different solutions have been presented. Mel'nikov and Meshkov [J. Chem. Phys. 85, 1018 (1986)] (MM) considered the motion of the particle, treating the friction as a perturbation parameter. Pollak, Grabert, and Hänggi [J. Chem. Phys. 91, 4073 (1989)] (PGH), considered the motion along the unstable mode which is separable from the bath in the barrier region. In practice, the two theories differ in the way an energy loss parameter is estimated. In this paper, we show that previous numerical attempts to resolve the quality of the two approaches were incomplete and that at least for a cubic potential with Ohmic friction, the quality of agreement of both expressions with numerical simulation is similar over a large range of friction strengths and temperatures. Mel'nikov [Phys. Rev. E 48, 3271 (1993)], in a later paper, improved his theory by introducing finite barrier corrections. In this paper we note that previous numerical tests of the finite barrier corrections were also incomplete. They did not employ the exact rate expression, but a harmonic approximation to it. The central part of this paper, is to include finite barrier corrections also within the PGH formalism. Tests on a cubic potential demonstrate that finite barrier corrections significantly improve the agreement of both MM and PGH theories when compared with numerical simulations.

  5. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates differences of 5% and 3% in the 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for

  6. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  7. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_{ {a}} correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  8. Journal Impact Factor: Do the Numerator and Denominator Need Correction?

    PubMed Central

    Liu, Xue-Li; Gai, Shuang-Shuang; Zhou, Jing

    2016-01-01

To correct the incongruence of document types between the numerator and denominator in the traditional impact factor (IF), we make a corresponding adjustment to its formula and present five corrective IFs: IFTotal/Total, IFTotal/AREL, IFAR/AR, IFAREL/AR, and IFAREL/AREL. Based on a survey of researchers in the fields of ophthalmology and mathematics, we obtained the real impact ranking of sample journals in the minds of peer experts. The correlations between the various IFs and the questionnaire scores were analyzed to verify their journal evaluation effects. The results show that it is scientific and reasonable to use the five corrective IFs for journal evaluation in both ophthalmology and mathematics. For ophthalmology, the journal evaluation effects of the five corrective IFs are superior to those of the traditional IF: the corrective effect of IFAR/AR is the best, and IFAREL/AR is better than IFTotal/Total, followed by IFTotal/AREL and IFAREL/AREL. For mathematics, the journal evaluation effect of the traditional IF is superior to those of the five corrective IFs: the corrective effect of IFTotal/Total is the best, IFAREL/AR is better than IFTotal/AREL and IFAREL/AREL, and the corrective effect of IFAR/AR is the worst. In conclusion, not all disciplinary journal IFs need correction. The results in the current paper show that correcting the IF of ophthalmologic journals may be valuable, but it seems to be meaningless for mathematics journals. PMID:26977697
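The corrective IFs above differ only in which document types are admitted to the numerator (citations) and to the denominator (items). A generic sketch, with hypothetical counts and simplified type labels (the paper's AREL category, for instance, is not reproduced exactly here):

```python
def impact_factor(citations_by_type: dict, items_by_type: dict,
                  numerator_types: set, denominator_types: set) -> float:
    """Generic IF: citations received by selected document types divided
    by the count of selected document types (both taken, as in the
    standard IF, over the journal's previous two years).
    """
    num = sum(c for t, c in citations_by_type.items() if t in numerator_types)
    den = sum(n for t, n in items_by_type.items() if t in denominator_types)
    return num / den

# Hypothetical journal: citations and item counts by document type.
cites = {"article": 900, "review": 300, "editorial": 50}
items = {"article": 400, "review": 100, "editorial": 50}

# IF_Total/Total counts everything; IF_AR/AR restricts both sides to
# articles and reviews, removing the numerator/denominator mismatch.
if_total_total = impact_factor(cites, items, set(cites), set(items))
if_ar_ar = impact_factor(cites, items, {"article", "review"},
                         {"article", "review"})
print(round(if_total_total, 2), round(if_ar_ar, 2))  # -> 2.27 2.4
```

The incongruence the paper targets is visible in the example: editorial citations inflate the traditional numerator while editorials are excluded from the citable-item denominator.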

  9. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    Profile count method for estimating cell number in sectioned tissue applies a correction factor for double count (resulting from transection during sectioning) of count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
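The double-count correction for profile counts is commonly computed with Abercrombie's formula T/(T + d), which depends on the count-unit diameter as the abstract notes. A sketch with illustrative diameters (the additional undercount element the authors recommend is not modeled here):

```python
def abercrombie_factor(section_thickness_um: float,
                       unit_diameter_um: float) -> float:
    """Abercrombie double-count correction for profile counts.

    A count unit of diameter d is transected by sectioning and appears
    in more than one section of thickness T; multiplying the raw count
    by T / (T + d) removes the resulting double counts.
    """
    t, d = section_thickness_um, unit_diameter_um
    return t / (t + d)

# 20-um sections, as in the study; the diameter is illustrative.
# A small (~2 um) count unit gives a factor near the published 0.91.
print(round(abercrombie_factor(20.0, 2.0), 2))  # -> 0.91
```

The formula makes the apparent paradox in the abstract concrete: identical published factors for nucleus and nucleolus count units are only consistent with T/(T + d) if both units were assigned the same effective diameter.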

  10. ULR Re-analysed Global GPS Solution for Vertical Land Motion Correction at Tide Gauges

    NASA Astrophysics Data System (ADS)

    Letetrel, C.; Wöppelmann, G.; Bouin, M.; Altamimi, Z.; Martine, F.; Santamaria, A.

    2007-12-01

    The presentation will review the recent results published by Wöppelmann et al. (2007) in Global and Planetary Change. Geocentric sea-level trend estimates were derived from the global GPS analyses conducted at ULR consortium to correct a set of relevant tide gauges from the vertical motion of the land upon which they are settled. The exercise proved worthwhile. The results showed a reduced dispersion of the estimated sea level trends, either regionally or globally, after application of the GPS corrections compared to the corrections derived from the glacio-isostatic adjustment models of Peltier (2004). Here we will focus on two important issues that were not addressed in Wöppelmann et al. (2007). The first issue concerns the noise content of our GPS solutions. Previous works have shown that GPS coordinate time series are subject to significant time-correlated (coloured) noise, with a large predominance of flicker noise (Zhang et al. 1997, Mao et al. 1999, Williams et al. 2004). The presence of coloured noise in a time series has a significant effect on the rate uncertainty, which may otherwise be underestimated by as much as an order of magnitude. We therefore carefully investigate the now 10-year long data set of reanalysed GPS solutions for noise content using the Allan variance technique (Feissel et al. 2007). Preliminary results show that the reanalysed solutions at ULR exhibit far less flicker noise than any other solution published so far in the literature available to us. The percentage of stations with flicker noise drops to only about 20%. These encouraging results advocate for a comprehensive reanalysis strategy with full coherent models over the entire observation data span. Moreover, the noise level reaches the best levels of other geodetic results recently published, namely the VLBI level in the horizontal component and the SLR level in the vertical component (Feissel et al. 2007). The second issue that we would like to address in the presentation

  11. Diaphragm correction factors for the FAC-IR-300 free-air ionization chamber.

    PubMed

    Mohammadi, Seyed Mostafa; Tavakoli-Anbaran, Hossein

    2018-02-01

    A free-air ionization chamber, FAC-IR-300, designed by the Atomic Energy Organization of Iran, is used as the primary Iranian national standard for photon air kerma. For accurate air kerma measurements, the contribution of scattered photons to the total energy released in the collecting volume must be eliminated. One source of scattered photons is the chamber's diaphragm. In this paper, the diaphragm scattering correction factor, k_dia, and the diaphragm transmission correction factor, k_tr, are introduced. These factors correct the measured charge (or current) for photons scattered from the diaphragm surface and for photons penetrating through the diaphragm volume, respectively. The k_dia and k_tr values were estimated by Monte Carlo simulations performed for mono-energetic photons in the energy range of 20-300 keV. According to the simulation results, over this energy range the k_dia values vary between 0.9997 and 0.9948, and the k_tr values decrease from 1.0000 to 0.9965. The corrections grow in significance with increasing energy of the primary photons.
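    Applying the two factors to a reading is a simple product. A minimal sketch (only the 20 and 300 keV endpoint values come from the abstract; the linear interpolation in between is an illustrative assumption, not the published curve):

    ```python
    # Sketch: apply diaphragm scattering (k_dia) and transmission (k_tr)
    # corrections to a measured ionization current. Only the endpoint
    # values at 20 and 300 keV are taken from the abstract; linear
    # interpolation between them is an assumption for illustration.

    def interp(x, x0, x1, y0, y1):
        """Linear interpolation between (x0, y0) and (x1, y1)."""
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    def corrected_current(i_measured, energy_kev):
        k_dia = interp(energy_kev, 20.0, 300.0, 0.9997, 0.9948)
        k_tr = interp(energy_kev, 20.0, 300.0, 1.0000, 0.9965)
        return i_measured * k_dia * k_tr
    ```

    At 20 keV the combined correction is essentially negligible; at 300 keV it lowers the reading by roughly 0.9%.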

  12. Air-braked cycle ergometers: validity of the correction factor for barometric pressure.

    PubMed

    Finn, J P; Maxwell, B F; Withers, R T

    2000-10-01

    Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level was applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% of propagated error associated with errors in the measurement of: (a) temperature, (b) relative humidity, (c) barometric pressure, and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
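    The physical basis of the correction is that the braking force of an air-braked ergometer, and hence the power dissipated at a given flywheel speed, scales with air density. A minimal sketch of a density-ratio correction, assuming the dry-air ideal gas law (the humidity term whose error the study propagates is omitted for brevity, and the function names are illustrative):

    ```python
    # Sketch: rescale an air-braked ergometer power reading, calibrated
    # at standard density, by the ratio of ambient to standard air
    # density. Dry air only; humidity is neglected in this sketch.

    R_DRY_AIR = 287.05  # J/(kg K), specific gas constant of dry air

    def air_density(pressure_pa, temp_k):
        return pressure_pa / (R_DRY_AIR * temp_k)

    def true_power(indicated_watts, pressure_pa, temp_k,
                   std_pressure_pa=101325.0, std_temp_k=293.15):
        # At altitude the air is thinner, so the true dissipated power
        # is lower than the standard-calibration reading suggests.
        rho = air_density(pressure_pa, temp_k)
        rho_std = air_density(std_pressure_pa, std_temp_k)
        return indicated_watts * rho / rho_std
    ```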

  13. On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic

    NASA Astrophysics Data System (ADS)

    Backman, John; Schmeisser, Lauren; Virkkula, Aki; Ogren, John A.; Asmi, Eija; Starkweather, Sandra; Sharma, Sangeeta; Eleftheriadis, Konstantinos; Uttal, Taneil; Jefferson, Anne; Bergin, Michael; Makshtas, Alexander; Tunved, Peter; Fiebig, Markus

    2017-12-01

    Several types of filter-based instruments are used to estimate aerosol light absorption coefficients. Two significant results are presented based on Aethalometer measurements at six Arctic stations from 2012 to 2014. First, an alternative method of post-processing the Aethalometer data is presented, which reduces measurement noise and lowers the detection limit of the instrument more effectively than boxcar averaging. The biggest benefit of this approach can be achieved if instrument drift is minimised. Moreover, by using an attenuation threshold criterion for data post-processing, the relative uncertainty from the electronic noise of the instrument is kept constant. This approach results in a time series with a variable collection time (Δt) but with a constant relative uncertainty with regard to electronic noise in the instrument. An additional advantage of this method is that the detection limit of the instrument is lowered at small aerosol concentrations at the expense of temporal resolution, whereas there is little to no loss in temporal resolution at high aerosol concentrations (> 2.1-6.7 Mm-1 as measured by the Aethalometers). At high aerosol concentrations, minimising the detection limit of the instrument is less critical. Additionally, utilising co-located filter-based absorption photometers, a correction factor is presented for the Arctic that can be used in the Aethalometer corrections available in the literature. A correction factor of 3.45 was calculated for low-elevation Arctic stations. This correction factor harmonises Aethalometer attenuation coefficients with light absorption coefficients as measured by the co-located light absorption photometers. Using one correction factor for Arctic Aethalometers has the advantage that measurements between stations become more inter-comparable.
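    The harmonisation step described above amounts to dividing the attenuation coefficient by the single multiple-scattering correction factor. A minimal sketch (the conversion form sigma_abs = sigma_ATN / C is the standard way such factors are applied; the function name is illustrative):

    ```python
    # Sketch: harmonise an Aethalometer attenuation coefficient with a
    # reference absorption coefficient using the Arctic correction
    # factor C = 3.45 reported in the abstract.

    C_ARCTIC = 3.45

    def absorption_from_attenuation(sigma_atn_mm1):
        """Convert an attenuation coefficient (Mm^-1) to an absorption
        coefficient (Mm^-1) with the single Arctic correction factor."""
        return sigma_atn_mm1 / C_ARCTIC
    ```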

  14. Impact of correction factors in human brain lesion-behavior inference.

    PubMed

    Sperber, Christoph; Karnath, Hans-Otto

    2017-03-01

    Statistical voxel-based lesion-behavior mapping (VLBM) in neurological patients with brain lesions is frequently used to examine the relationship between structure and function of the healthy human brain. Only recently, two simulation studies noted reduced anatomical validity of this method, observing the results of VLBM to be systematically misplaced by about 16 mm. However, both simulation studies differed from VLBM analyses of real data in that they lacked the proper use of two correction factors: lesion size and "sufficient lesion affection." In simulation experiments on a sample of 274 real stroke patients, we found that the use of these two correction factors markedly reduced misplacement compared to uncorrected VLBM. Apparently, the misplacement is due to physiological effects of brain lesion anatomy. Voxel-wise topographies of collateral damage in the real data were generated and used to compute a metric for the inter-voxel relation of brain damage. "Anatomical bias" vectors calculated solely from these inter-voxel relations in the patients' real anatomical data successfully predicted the VLBM misplacement. This finding has the potential to aid the development of new VLBM methods that, through the proper use of correction factors, provide even higher anatomical validity than is currently available. Hum Brain Mapp 38:1692-1701, 2017. © 2017 Wiley Periodicals, Inc.

  15. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in determining the speed of sound as measured by freshmen in an undergraduate physics laboratory. The freshmen's results were compared with the speed of sound determined by a senior student; both used the same instrument, a resonance tube apparatus. The speed of sound obtained by the senior was 333.38 m s-1, deviating from the theoretical value by about 3.98%. The freshmen's results fell into three categories: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on the analysis, several correction factors were suggested: human error in identifying the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density and pressure.
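    The end correction mentioned above can be eliminated algebraically rather than estimated. For a tube closed at one end, the first two resonance lengths satisfy L1 + e = λ/4 and L2 + e = 3λ/4, where e ≈ 0.3 × diameter is the end correction; subtracting cancels e, giving v = 2f(L2 − L1). A minimal sketch of this standard textbook treatment:

    ```python
    # Sketch: speed of sound from two resonance lengths of a tube
    # closed at one end. Subtracting the resonance conditions cancels
    # the end correction e, and e itself can be recovered afterwards.

    def speed_of_sound(freq_hz, l1_m, l2_m):
        # v = 2 * f * (L2 - L1), end correction cancels
        return 2.0 * freq_hz * (l2_m - l1_m)

    def end_correction(l1_m, l2_m):
        # From L1 + e = lambda/4 and L2 + e = 3*lambda/4:
        # e = (L2 - 3*L1) / 2
        return (l2_m - 3.0 * l1_m) / 2.0
    ```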

  16. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents a numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids described by the power-law constitutive model were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation alone can lead to large errors, which depend on the magnitudes of the kinetic-energy and friction-loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. The correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds numbers.
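    The role of the correction factor can be made concrete. An ideal Pitot tube obeys Bernoulli's relation Δp = ½ρv², so v = √(2Δp/ρ); at low Reynolds number viscous effects inflate the measured Δp, which a correction factor C absorbs via Δp = C·½ρv². A minimal sketch with C as a given input (the paper's neural-network correlation for C is not reproduced here):

    ```python
    import math

    # Sketch: velocity from a Pitot tube pressure difference. C = 1
    # recovers the ideal Bernoulli result; C > 1 models the viscous
    # over-reading at low Reynolds number.

    def pitot_velocity(dp_pa, rho, correction=1.0):
        return math.sqrt(2.0 * dp_pa / (rho * correction))
    ```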

  17. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron-beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.

  18. Color correction strategies in optical design

    NASA Astrophysics Data System (ADS)

    Pfisterer, Richard N.; Vorndran, Shelby D.

    2014-12-01

    An overview of color correction strategies is presented. Starting with basic first-order aberration theory, we identify known color-corrected solutions for doublets and triplets. Reviewing the modern approaches of Robb-Mercado, Rayces-Aguilar, and C. de Albuquerque et al., we find that they confirm the existence of glass combinations for doublets and triplets that yield the color-corrected solutions already known to exist. Finally, we explore the use of the y-ȳ diagram in conjunction with aberration theory to identify the solution space of glasses capable of leading to color-corrected solutions in arbitrary optical systems.

  19. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(fclin, fmsr; Qclin, Qmsr) with high precision. Both quantities require knowledge of the field-size- and detector-dependent correction factor k(fclin, fmsr; Qclin, Qmsr). The aim of this study is the determination of this correction factor for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the correction factor for different small square field sizes. In addition, the mean water-to-air stopping-power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor; this perturbation may contribute up to 50% to the correction. Field-dependent changes in stopping-power ratios are negligible. The magnitude of the correction factor is of the order of 1.2 at a field size of 1 × 1 cm(2) for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates by no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm(2), but below this field size there is a steep decrease of the correction factor below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the
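    In the standard small-field formalism, the correction factor is the double ratio of Monte Carlo dose to water and dose to the detector between the clinical field (clin) and the machine-specific reference field (msr). A minimal sketch of that definition (function and argument names are illustrative):

    ```python
    # Sketch: detector-specific small-field correction factor,
    #   k = [Dw(clin)/Dw(msr)] / [Ddet(clin)/Ddet(msr)],
    # computed from Monte Carlo doses to water (Dw) and to the
    # detector's sensitive volume (Ddet) in the two fields.

    def small_field_correction(dw_clin, dw_msr, ddet_clin, ddet_msr):
        return (dw_clin / dw_msr) / (ddet_clin / ddet_msr)
    ```

    A value above unity (e.g. the 1.2 reported for the large-volume chamber at 1 × 1 cm²) means the detector under-responds in the small field and its reading must be scaled up.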

  20. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I have calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. D_max dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ~(0.8-1.0) and ~(0.9-1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ~(3-14)% to ~(0-2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels (< 5%). While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
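    One plausible way to fold the two factors into the hand calculation: since both factors are below unity (curvature and heterogeneity reduce the dose delivered per MU relative to the flat water phantom), dividing the TG-71 monitor units by their product raises the MU toward the Monte Carlo TPS value. This placement of the factors is an assumption for illustration, not the abstract's explicit formula:

    ```python
    # Sketch (assumed form): corrected second-check MU obtained by
    # dividing the flat-phantom TG-71 MU by the geometry and
    # heterogeneity correction factors, both <= 1 per the abstract.

    def corrected_mu(mu_tg71, cf_geometry, cf_heterogeneity):
        return mu_tg71 / (cf_geometry * cf_heterogeneity)
    ```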

  1. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  2. NOTE: Monte Carlo simulation of correction factors for IAEA TLD holders

    NASA Astrophysics Data System (ADS)

    Hultqvist, Martha; Fernández-Varea, José M.; Izewska, Joanna

    2010-03-01

    The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for 60Co γ-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data.

  3. The Etiology of Presbyopia, Contributing Factors, and Future Correction Methods

    NASA Astrophysics Data System (ADS)

    Hickenbotham, Adam Lyle

    Presbyopia has been a complicated problem for clinicians and researchers for centuries. Defining what constitutes presbyopia and what its primary causes are has long been a struggle for the vision and scientific community. Although presbyopia is a normal aging process of the eye, the continuous and gradual loss of accommodation is often dreaded and feared. If presbyopia were considered a disease, its global burden would be enormous, as it affects more than a billion people worldwide. In this dissertation, I explore factors associated with presbyopia and develop a model for explaining its onset. In this model, the onset of presbyopia is associated primarily with three factors: depth of focus, focusing ability (accommodation), and habitual reading (or task) distance. If any of these three factors could be altered sufficiently, the onset of presbyopia could be delayed or prevented. Based on this model, I then examine optical methods that would be effective in correcting presbyopia by expanding depth of focus. Two methods that have been shown to be effective at expanding depth of focus are utilizing a small pupil aperture and generating higher-order aberrations, particularly spherical aberration. I compare these two optical methods through the use of simulated designs, monitor testing, and visual performance metrics, and then apply them in subjects through an adaptive optics system that corrects aberrations via a wavefront aberrometer and deformable mirror. I then summarize my findings and speculate about the future of presbyopia correction.

  4. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  5. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading-order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom-quark-mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections are large for the form factors, which makes the branching ratios B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  6. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
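    The Gauss-Seidel sweeping idea is easiest to see in one dimension, where alternating sweep directions propagate the causal upwind update T_i = min(T_{i-1}, T_{i+1}) + s_i·h from the source. A minimal sketch of the unfactored 1-D case (the paper's factored method applies the same sweeps to the correction factor τ in T = T0·τ; this simplified 1-D solver is illustrative, not the paper's scheme):

    ```python
    # Sketch: 1-D fast sweeping for |T'(x)| = s(x) with a point source.
    # Alternating sweep directions enforce causality; in 1-D two sweeps
    # (one in each direction) suffice to converge.

    def fast_sweep_1d(slowness, h, src):
        n = len(slowness)
        T = [float("inf")] * n
        T[src] = 0.0
        for _ in range(2):
            for i in range(1, n):               # left-to-right sweep
                T[i] = min(T[i], T[i - 1] + slowness[i] * h)
            for i in range(n - 2, -1, -1):      # right-to-left sweep
                T[i] = min(T[i], T[i + 1] + slowness[i] * h)
        return T

    # Constant slowness s = 1 on [0, 1], source at x = 0:
    # T(x) should approximate the distance x.
    T = fast_sweep_1d([1.0] * 101, 0.01, 0)
    ```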

  7. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode differs slightly from the average value obtained from measurements in the different experimental channels of the subcritical assembly, after correction by the Bell and Glasstone spatial factor. Generally, this difference is due to (1) neutron counting errors, (2) geometrical imperfections that are not simulated in the calculational model, and (3) quantities and distributions of material impurities that are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: high (90%) together with medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.
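    The averaging step described above can be sketched generically: each detector channel yields a raw, position-dependent reactivity, a channel-specific correction factor refers it to a common position-independent value, and the corrected channels are averaged. The factor values and the simple multiplicative form below are illustrative assumptions, not the YALINA numbers:

    ```python
    # Sketch: average reactivity over detector channels after applying
    # position-dependent spatial correction factors (illustrative
    # values; the real factors come from transport calculations).

    def corrected_reactivity(raw_rho, factors):
        corrected = [r * f for r, f in zip(raw_rho, factors)]
        return sum(corrected) / len(corrected)
    ```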

  8. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  9. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated from continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters, and an instantaneous friction factor equation was then fitted by mathematical analysis. After verification by comparing single-pass flow stress correction with the traditional average-friction-factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and were validated by multistage relative softening calculations. This research demonstrates broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and for friction optimization in finite element analysis.

  10. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a hydrostatic pressure offset proportional to the height of the air column in the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes at higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account only for large elevations. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
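    The correction itself is the hydrostatic relation Δp = ρ·g·Δh for the air column in the interface tube. A minimal sketch (sea-level air density is an assumed constant; for pressurised lines ρ scales with absolute pressure, which is why higher-pressure tubes show larger absolute errors):

    ```python
    # Sketch: hydrostatic elevation correction for a pressure sensor
    # located a height dh above its pressure tap. The tap pressure is
    # higher than the sensed pressure by rho * g * dh.

    G = 9.80665        # m/s^2, standard gravity
    RHO_AIR = 1.204    # kg/m^3, dry air at ~20 C, sea level (assumed)

    def elevation_correction_pa(dh_m, rho=RHO_AIR):
        return rho * G * dh_m

    def tap_pressure(sensed_pa, dh_m):
        # dh > 0: sensor above tap, so add the column weight back.
        return sensed_pa + elevation_correction_pa(dh_m)
    ```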

  11. 49 CFR 325.73 - Microphone distance correction factors. 1

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the observed sound level reading is— 31 feet (9.5 m) or more but less than 35 feet (10.7 m) −4 35 feet... more but less than 83 feet (25.3 m) +2 [40 FR 42437, Sept. 12, 1975, as amended at 54 FR 50385, Dec. 6... 49 Transportation 5 2013-10-01 2013-10-01 false Microphone distance correction factors. 1 325.73...

  12. 49 CFR 325.73 - Microphone distance correction factors. 1

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the observed sound level reading is— 31 feet (9.5 m) or more but less than 35 feet (10.7 m) −4 35 feet... more but less than 83 feet (25.3 m) +2 [40 FR 42437, Sept. 12, 1975, as amended at 54 FR 50385, Dec. 6... 49 Transportation 5 2012-10-01 2012-10-01 false Microphone distance correction factors. 1 325.73...

  13. 49 CFR 325.73 - Microphone distance correction factors. 1

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the observed sound level reading is— 31 feet (9.5 m) or more but less than 35 feet (10.7 m) −4 35 feet... more but less than 83 feet (25.3 m) +2 [40 FR 42437, Sept. 12, 1975, as amended at 54 FR 50385, Dec. 6... 49 Transportation 5 2014-10-01 2014-10-01 false Microphone distance correction factors. 1 325.73...

  14. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    PubMed

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer ® . The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.
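    The conversion described above can be sketched in two steps: a traverse survey yields the correction factor CF = v_avg / v_sensor for the cross section, and the monitored point velocity times CF times the cross-sectional area gives the volume flow rate. Function names are illustrative:

    ```python
    # Sketch: fixed-point air velocity reading -> airway volume flow
    # rate via a traverse-derived velocity correction factor.

    def correction_factor(v_avg, v_sensor):
        """Ratio of the traverse-averaged velocity to the velocity at
        the fixed sensor location, for the same cross section."""
        return v_avg / v_sensor

    def volume_flow(v_sensor, cf, area_m2):
        """Volume flow rate (m^3/s) from the monitored point velocity,
        the correction factor, and the cross-sectional area."""
        return v_sensor * cf * area_m2
    ```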

  15. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    PubMed Central

    Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed. PMID:29201495

  16. Calculation of Coincidence Summing Correction Factors for an HPGe detector using GEANT4.

    PubMed

    Giubrone, G; Ortiz, J; Gallardo, S; Martorell, S; Bas, M C

    2016-07-01

    The aim of this paper was to calculate the True Coincidence Summing Correction Factors (TSCFs) for an HPGe coaxial detector in order to correct for the summing effect caused by the presence of (88)Y and (60)Co in a multigamma source used to obtain an efficiency calibration curve. Results were obtained for three volumetric sources using the Monte Carlo toolkit GEANT4. The first part of this paper deals with modeling the detector in order to obtain a simulated full-energy-peak efficiency curve; a quantitative comparison between measured and simulated values was made across the entire energy range under study. The TSCFs were then calculated for (88)Y and (60)Co using the full-peak efficiencies obtained with GEANT4. This methodology was subsequently applied to (134)Cs, which presents a complex decay scheme.
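    Once computed, a TSCF is applied multiplicatively to the measured peak count rate before the activity is derived. A minimal sketch of that final step (variable names and the simple one-factor form are illustrative assumptions):

    ```python
    # Sketch: activity from a full-energy peak count rate, corrected
    # for true coincidence summing with a precomputed TSCF.
    #   A = (peak_rate * TSCF) / (efficiency * emission_probability)

    def activity_bq(peak_rate_cps, tscf, efficiency, emission_prob):
        return peak_rate_cps * tscf / (efficiency * emission_prob)
    ```

    A TSCF above unity restores counts lost to summing-out of cascade gammas; a value below unity removes summing-in contributions.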

  17. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  18. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams.

    PubMed

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar; Izewska, Joanna; Hopfgartner, Johannes; Lechner, Wolfgang; Andersen, Claus E; Beierholm, Anders R; Helt-Hansen, Jakob; Mizuno, Hideyuki; Fukumura, Akifumi; Yajima, Kaori; Gouldstone, Clare; Sharpe, Peter; Meghzifene, Ahmed; Palmans, Hugo

    2014-07-01

    The aim of the present study is to provide a comprehensive set of detector-specific correction factors for beam output measurements in small beams, for a wide range of real-time and passive detectors. The detector-specific correction factors determined in this study may be useful as a reference data set for small-beam dosimetry measurements. The dose response of passive and real-time detectors was investigated for small field sizes shaped with a micromultileaf collimator, ranging from 0.6 × 0.6 cm(2) to 4.2 × 4.2 cm(2), and the measurements were extended to larger fields of up to 10 × 10 cm(2). Measurements were performed at 5 cm depth in a 6 MV photon beam. Detectors used included alanine, thermoluminescent dosimeters (TLDs), stereotactic diode, electron diode, photon diode, radiophotoluminescent dosimeters (RPLDs), a radioluminescence detector based on carbon-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, a liquid-filled ion chamber, and a range of small-volume air-filled ionization chambers (volumes ranging from 0.002 cm(3) to 0.3 cm(3)). All detector measurements were corrected for the volume averaging effect and compared with dose ratios determined from alanine to derive detector correction factors that account for beam perturbation related to the non-water equivalence of the detector materials. For the detectors used in this study, volume averaging corrections ranged from unity for the smallest detectors, such as the diodes, through 1.148 for the 0.14 cm(3) air-filled ionization chamber, and were as high as 1.924 for the 0.3 cm(3) ionization chamber. After applying volume averaging corrections, the detector readings were consistent among themselves and with alanine measurements for several small detectors, but they differed for larger detectors, in particular for some small ionization chambers with volumes larger than 0.1 cm(3). The results demonstrate how important it is for the appropriate corrections to be applied in small-beam output measurements.
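As a rough illustration of the volume-averaging correction discussed above, the sketch below assumes a Gaussian lateral beam profile and a one-dimensional detector extent; the sigma and detector lengths are illustrative, not the study's measured profiles:

```python
import math

# Sketch of a volume-averaging correction: the ratio of the dose at the detector
# centre to the dose averaged over the detector extent, for an assumed Gaussian
# lateral beam profile. Parameters are illustrative only.

def volume_averaging_correction(sigma_mm, length_mm, n=2001):
    """k_vol = D(0) / mean of D(x) over the detector extent."""
    xs = [(-length_mm / 2) + i * length_mm / (n - 1) for i in range(n)]
    doses = [math.exp(-x * x / (2 * sigma_mm ** 2)) for x in xs]
    return 1.0 / (sum(doses) / len(doses))

k_small = volume_averaging_correction(sigma_mm=3.0, length_mm=2.0)   # short detector
k_large = volume_averaging_correction(sigma_mm=3.0, length_mm=20.0)  # long chamber
```

The long chamber requires a much larger correction than the short detector, mirroring the trend from diodes to the 0.3 cm(3) chamber reported above.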

  19. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2018-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process whose input data can come from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and 3D scanner data. Biomedical Additive Manufacturing (Bio-AM) models can be manufactured from the CT/MRI data. A Bio-AM model gives a better lead in the preplanning of oral and maxillofacial surgery; however, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and determines a correction factor for Bio-AM models produced by the Fused Deposition Modelling (FDM) technique. In the present work, dry-mandible CT images are acquired with a CT scanner and converted into a 3D CAD model in STL form. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions as the correction factor. This correction factor helps fabricate AM models with the accurate dimensions of the patient's anatomy, increasing the safety and accuracy of preplanning for oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003, and the dimensional error is limited to 0.3%.
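The correction-factor arithmetic reported above is straightforward; the dimensions below are hypothetical, chosen only to reproduce numbers of the same magnitude as the paper's:

```python
# Minimal arithmetic behind the reported numbers: the correction factor is the
# STL-to-printed dimension ratio, and the dimensional error is their relative
# difference. The measurements below are illustrative, not the paper's raw data.

def correction_factor(stl_dim, printed_dim):
    return stl_dim / printed_dim

def dimensional_error_pct(stl_dim, printed_dim):
    return abs(printed_dim - stl_dim) / stl_dim * 100

stl, printed = 50.15, 50.0                 # hypothetical mandible feature, mm
cf = correction_factor(stl, printed)       # ~1.003
err = dimensional_error_pct(stl, printed)  # ~0.3 %
```

Scaling the CAD model by the factor before printing then compensates for the machine's systematic shrinkage.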

  1. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  2. Laser Vision Correction with Q Factor Modification for Keratoconus Management.

    PubMed

    Pahuja, Natasha Kishore; Shetty, Rohit; Sinha Roy, Abhijit; Thakkar, Maithil Mukesh; Jayadev, Chaitra; Nuijts, Rudy Mma; Nagaraja, Harsha

    2017-04-01

    To evaluate the outcomes of corneal laser ablation with Q factor modification for vision correction in patients with progressive keratoconus. In this prospective study, 50 eyes of 50 patients were divided into two groups based on Q factor (>-1 in Group I and ≤-1 in Group II). All patients underwent a detailed ophthalmic examination including uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), subjective acceptance, and corneal topography using the Pentacam. The topolyzer was used to measure the corneal asphericity (Q). Ablation was performed based on the preoperative Q values and thinnest pachymetry to obtain a target of near-normal Q. This was followed by corneal collagen crosslinking to stabilize the progression. Statistically significant improvement (p ≤ 0.05) was observed in refractive, topographic, and Q values post-treatment in both groups. The improvements in higher-order aberrations and total aberrations were statistically significant in both groups; however, spherical aberration showed statistically significant improvement only in Group II. Ablation based on the preoperative Q and pachymetry for a near-normal postoperative Q value appears to be an effective method to improve visual acuity and quality in patients with keratoconus.

  3. Stress Intensity Factor Plasticity Correction for Flaws in Stress Concentration Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, E.; Wilson, W.K.

    2000-02-01

    Plasticity corrections to elastically computed stress intensity factors are often included in brittle fracture evaluation procedures. These corrections are based on the existence of a plastic zone in the vicinity of the crack tip. Such a plastic zone correction is included in the flaw evaluation procedure of Appendix A to Section XI of the ASME Boiler and Pressure Vessel Code. Plasticity effects from the results of elastic and elastic-plastic explicit flaw finite element analyses are examined for various size cracks emanating from the root of a notch in a panel and for cracks located at fillet radii. The results of these calculations provide conditions under which the crack-tip plastic zone correction based on the Irwin plastic zone size overestimates the plasticity effect for crack-like flaws embedded in stress concentration regions in which the elastically computed stress exceeds the yield strength of the material. A failure assessment diagram (FAD) curve is employed to graphically characterize the effect of plasticity on the crack driving force. The Option 1 FAD curve of the Level 3 advanced fracture assessment procedure of British Standard PD 6493:1991, adjusted for stress concentration effects by a term that is a function of the applied load and the ratio of the local radius of curvature at the flaw location to the flaw depth, provides a satisfactory bound to all the FAD curves derived from the explicit flaw finite element calculations. The adjusted FAD curve is a less restrictive plasticity correction than the plastic zone correction of Section XI for flaws embedded in plastic zones at geometric stress concentrators. This enables unnecessary conservatism to be removed from flaw evaluation procedures that utilize plasticity corrections.

  4. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect over the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors arising from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution, in which we modeled the variable, inhomogeneous energy fluence within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving the resulting system of equations, on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors $k_{Q_\text{msr},Q}^{f_\text{smf},f_\text{ref}}$ were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm2, $k_{Q_\text{msr},Q}^{f_\text{smf},f_\text{ref}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the $k_{Q_\text{msr},Q}^{f_\text{smf},f_\text{ref}}$ values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining $k_{Q_\text{msr},Q}^{f_\text{smf},f_\text{ref}}$ from both output-factor measurements and model-based mathematical computation. The proposed method can be useful in clinical settings where MC simulation is not applicable.

  5. Three-Dimensional Thermal Boundary Layer Corrections for Circular Heat Flux Gauges Mounted in a Flat Plate with a Surface Temperature Discontinuity

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Haddad, G. F.; Chen, R.-H.

    2006-01-01

    Three-dimensional Navier-Stokes computational fluid dynamics (CFD) analysis has been performed in an effort to determine thermal boundary layer correction factors for circular convective heat flux gauges (such as Schmidt-Boelter and plug type) mounted flush in a flat plate subjected to a stepwise surface temperature discontinuity. Turbulent flow solutions with temperature-dependent properties are obtained for a free-stream Reynolds number of 1E6 and free-stream Mach numbers of 2 and 4. The effects of gauge diameter and plate surface temperature have been investigated. The 3-D CFD results for the heat flux correction factors are compared to quasi-2D results deduced from constant-property integral solutions and also to 2-D CFD analysis with both constant and variable properties. The role of three-dimensionality and of property variations on the heat flux correction factors has been demonstrated.

  6. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    NASA Astrophysics Data System (ADS)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the $k_{Q_\text{msr},Q_0}^{f_\text{msr},f_\text{ref}}$ and $k_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$ correction factors for a series of ionization chambers, a synthetic microDiamond, and diode dosimeters used for reference and/or output factor (OF) measurements in Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the $k_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$ correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. $k_{Q_\text{msr},Q_0}^{f_\text{msr},f_\text{ref}}$ values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (the latter comprising 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding to the $k_{Q_\text{msr},Q_0}^{f_\text{msr},f_\text{ref}}$ corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity $k_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$ correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity $k_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$ results for the ionization chambers suggested field-size-dependent dose underestimations (significant for the 4 mm field), with magnitude depending on the combination of detector and field size.

  7. New Correction Factors Based on Seasonal Variability of Outdoor Temperature for Estimating Annual Radon Concentrations in UK.

    PubMed

    Daraktchieva, Z

    2017-06-01

    Indoor radon concentrations generally vary with season. Radon gas enters buildings from beneath due to a small air pressure difference between the inside of a house and outdoors. This underpressure, which draws soil gas (including radon) into the house, depends on the difference between the indoor and outdoor temperatures. The variation in a typical house in the UK showed that the mean indoor radon concentration reaches a maximum in January and a minimum in July. Sine functions were used to model the indoor radon data and monthly average outdoor temperatures, covering the period between 2005 and 2014. The analysis showed a strong negative correlation between the modelled indoor radon data and outdoor temperature. This correlation was used to calculate new correction factors that could be used for the estimation of annual radon concentrations in UK homes. The comparison between the results obtained with the new correction factors and the previously published correction factors showed that the new correction factors perform consistently better on the selected data sets.
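A minimal sketch of the seasonal-correction idea, assuming an illustrative sinusoid peaking in January; the annual mean and amplitude are invented, not the UK survey values:

```python
import math

# Sketch: model monthly indoor radon with a sinusoid peaking in January, then
# derive a month-specific factor that scales a short-term measurement to the
# annual mean. ANNUAL_MEAN and AMPLITUDE are illustrative placeholders.

ANNUAL_MEAN, AMPLITUDE = 100.0, 30.0  # Bq/m^3, hypothetical

def modelled_radon(month):  # month: 1 = January ... 12 = December
    return ANNUAL_MEAN + AMPLITUDE * math.cos(2 * math.pi * (month - 1) / 12)

def seasonal_correction_factor(month):
    """Multiply a measurement made in `month` by this to estimate the annual mean."""
    return ANNUAL_MEAN / modelled_radon(month)

jan = seasonal_correction_factor(1)   # < 1: January readings overestimate
jul = seasonal_correction_factor(7)   # > 1: July readings underestimate
```

The study's refinement is to tie the sinusoid's phase and amplitude to outdoor temperature rather than fixing them a priori.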

  8. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2014-02-01

    In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of the correction model to the gradient components locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with the optics. The estimated bias field is subtracted from the raw infrared imagery to compensate for the intensity variations caused by the optics. The proposed method is fundamentally different from existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.

  9. Development of Tissue to Total Mass Correction Factor for Porites divaricata in Calcification Rate Studies

    NASA Astrophysics Data System (ADS)

    Cannone, T. C.; Kelly, S. K.; Foster, K.

    2013-05-01

    One anticipated result of ocean acidification is lower calcification rates of corals. Many studies currently use the buoyant weights of coral nubbins as a means of estimating skeletal weight during non-destructive experiments. The objectives of this study, conducted at the Little Cayman Research Centre, were twofold: (1) to determine whether the purple and yellow color variations of Porites divaricata had similar tissue mass to total mass ratios; and (2) to determine a correction factor for tissue mass based on the total coral mass. T-test comparisons indicated that the tissue to total mass ratios were statistically similar for purple and yellow cohorts, thus allowing them to be grouped together within a given sample population. Linear regression analysis provided a correction factor (r2 = 0.69) to estimate the tissue mass from the total mass, which may eliminate the need to remove tissue during studies and allow subsequent testing on the same nubbins or their return to the natural environment. Additional work is needed in the development of a correction factor for P. divaricata with a higher prediction accuracy.
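The regression step described above can be sketched with ordinary least squares; the nubbin masses below are hypothetical, not the Little Cayman data:

```python
# Sketch: derive a tissue-mass correction factor by linear regression of tissue
# mass on total mass. Data points are hypothetical and for illustration only.

def linear_fit(xs, ys):
    """Ordinary least squares; returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

total = [10.0, 12.5, 15.0, 17.5, 20.0]   # g, hypothetical nubbin total masses
tissue = [1.1, 1.3, 1.7, 1.8, 2.2]       # g, hypothetical tissue masses
slope, intercept, r2 = linear_fit(total, tissue)
# Skeletal mass estimate = total mass - predicted tissue mass, so tissue need
# not be stripped from the nubbin.
```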

  10. Method of Calculating the Correction Factors for Cable Dimensioning in Smart Grids

    NASA Astrophysics Data System (ADS)

    Simutkin, M.; Tuzikova, V.; Tlusty, J.; Tulsky, V.; Muller, Z.

    2017-04-01

    One of the main causes of overloading electrical equipment with higher-harmonic currents is the rapidly increasing number of non-linear electricity consumers. Non-sinusoidal voltages and currents affect the operation of electrical equipment, reducing its lifetime, increasing voltage and power losses in the network, and reducing its capacity. Existing standards on permissible harmonic current emissions cannot guarantee a safe interference level in the power grid. The article presents a method for determining a correction factor to the long-term allowable current of a cable that accounts for this influence. Using mathematical models in the software Elcut, the thermal processes in the cable under non-sinusoidal current flow were described. The theoretical principles, methods, and mathematical models developed in the article allow calculation of the correction factor accounting for the effect of higher harmonics in the current spectrum of network equipment for any type of non-linear load.
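A hedged sketch of one way such a derating correction factor can be computed, assuming illustrative AC-resistance growth with harmonic order rather than the article's Elcut thermal model:

```python
import math

# Hedged sketch of a harmonic derating factor for a cable: higher harmonics heat
# the conductor more per ampere (skin/proximity effects raise AC resistance), so
# the allowable current is reduced. The resistance-ratio model r(h) below is an
# assumed placeholder, not a result from the article.

def derating_factor(harmonic_spectrum, resistance_ratio):
    """harmonic_spectrum: {order: I_h / I_1}; returns k <= 1 multiplying ampacity."""
    heating = sum((ih ** 2) * resistance_ratio(h) for h, ih in harmonic_spectrum.items())
    rms = sum(ih ** 2 for ih in harmonic_spectrum.values())
    return math.sqrt(rms / heating)

r = lambda h: 1.0 + 0.02 * (h ** 1.5 - 1)        # assumed AC-resistance growth
spectrum = {1: 1.0, 3: 0.3, 5: 0.15, 7: 0.08}    # illustrative non-linear load
k = derating_factor(spectrum, r)                 # < 1: allowable current reduced
```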

  11. α'-corrected black holes in String Theory

    NASA Astrophysics Data System (ADS)

    Cano, Pablo A.; Meessen, Patrick; Ortín, Tomás; Ramírez, Pedro F.

    2018-05-01

    We consider the well-known solution of the Heterotic Superstring effective action to zeroth order in α' that describes the intersection of a fundamental string with momentum and a solitonic 5-brane, and which gives a 3-charge, static, extremal, supersymmetric black hole in 5 dimensions upon dimensional reduction on T5. We compute explicitly the first-order in α' corrections to this solution, including SU(2) Yang-Mills fields which can be used to cancel some of these corrections, and we study the main properties of this α'-corrected solution: supersymmetry, values of the near-horizon and asymptotic charges, behavior under α'-corrected T-duality, value of the entropy (using the Wald formula directly in 10 dimensions), the existence of small black holes, etc. The value obtained for the entropy agrees, within the limits of the approximation, with that obtained by microscopic methods. The α' corrections coming from Wald's formula prove crucial for this result.

  12. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed using water-to-graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, k_fl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity, and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons, the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths, predominantly due to the contributions from alpha particles, and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by k_fl = 0.9964 + 0.0024·z_w-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by k_fl = 0.9947 + 0.0024·z_w-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy proton beams.
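The two fitted relations quoted above are linear in water-equivalent depth and can be evaluated directly; the depths below are chosen for illustration only:

```python
# Evaluate the two fitted fluence-correction relations from the abstract at a
# few illustrative water-equivalent depths z (same units as the fits).

def k_fl_fluence(z_w_eq):   # from fluence distributions differential in energy
    return 0.9964 + 0.0024 * z_w_eq

def k_fl_dose(z_w_eq):      # from the ratio of calculated doses
    return 0.9947 + 0.0024 * z_w_eq

for z in (0.0, 1.5, 3.0):
    kf, kd = k_fl_fluence(z), k_fl_dose(z)
    # the two fits differ by a constant offset of 0.0017 at every depth
```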

  13. Assessment and correction of turbidity effects on Raman observations of chemicals in aqueous solutions.

    PubMed

    Sinfield, Joseph V; Monwuba, Chike K

    2014-01-01

    Improvements in diode laser, fiber optic, and data acquisition technologies are enabling increased use of Raman spectroscopic techniques for both in-lab and in situ water analysis. Aqueous media encountered in the natural environment often contain suspended solids that can interfere with spectroscopic measurements, yet removal of these solids, for example via filtration, can have even greater adverse effects on the extent to which subsequent measurements are representative of actual field conditions. In this context, this study focuses on the evaluation of turbidity effects on Raman spectroscopic measurements of two common environmental pollutants in aqueous solution: ammonium nitrate and trichloroethylene. The former is typically encountered in runoff from agricultural operations and is a strong scatterer that has no significant influence on the Raman spectrum of water. The latter is a commonly encountered pollutant at contaminated sites associated with degreasing and cleaning operations and is a weak scatterer that has a significant influence on the Raman spectrum of water. Raman observations of each compound in aqueous solutions of varying turbidity, created by doping samples with silica flour with grain sizes ranging from 1.6 to 5.0 μm, were employed to develop relationships between observed Raman signal strength and turbidity level. Shared characteristics of these relationships were then employed to define generalized correction methods for the effect of turbidity on Raman observations of compounds in aqueous solution.
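As a sketch only: if a calibration showed near-exponential signal loss with turbidity, a correction would simply invert it. Both the model form and the coefficient below are assumptions for illustration, not the paper's fitted relationships:

```python
import math

# Hedged sketch of a turbidity correction for Raman intensities: scattering by
# suspended solids attenuates the observed signal, modelled here (as an
# assumption) as exponential decay with turbidity.

K_ATTEN = 0.004  # per NTU, illustrative coefficient from a hypothetical calibration

def correct_for_turbidity(observed, turbidity_ntu, k=K_ATTEN):
    """Estimate the clear-water Raman intensity from a turbid-sample reading."""
    return observed * math.exp(k * turbidity_ntu)

clear_estimate = correct_for_turbidity(observed=820.0, turbidity_ntu=50.0)
```

In practice the coefficient would be fitted per instrument and band, which is essentially what the turbidity-doping experiments above provide.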

  14. Orbit-product representation and correction of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir

    We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.
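The non-backtracking structure underlying the orbit product can be illustrated with the (unweighted) Hashimoto matrix of a toy graph; the paper's construction additionally weights the edges using the GaBP solution:

```python
# Sketch of the non-backtracking (Hashimoto) edge-adjacency matrix: directed
# edge (u, v) connects to (v, w) whenever w != u. Unweighted toy graph only;
# the orbit-product correction uses a weighted version of this matrix.

def hashimoto_matrix(edges):
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    n = len(directed)
    B = [[0] * n for _ in range(n)]
    for i, (u, v) in enumerate(directed):
        for j, (a, b) in enumerate(directed):
            if a == v and b != u:  # continue the walk without backtracking
                B[i][j] = 1
    return B

B = hashimoto_matrix([(0, 1), (1, 2), (2, 0)])  # a triangle
row_sums = [sum(row) for row in B]
# On a triangle, every directed edge has exactly one non-backtracking successor,
# so the only non-backtracking orbits are the two orientations of the cycle.
```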

  15. Electron fluence correction factors for various materials in clinical electron beams.

    PubMed

    Olivares, M; DeBlois, F; Podgorsak, E B; Seuntjens, J P

    2001-08-01

    Relative to solid water, electron fluence correction factors at the depth of dose maximum in bone, lung, aluminum, and copper for nominal electron beam energies of 9 MeV and 15 MeV of the Clinac 18 accelerator have been determined experimentally and by Monte Carlo calculation. Thermoluminescent dosimeters were used to measure depth doses in these materials. The measured relative dose at dmax in the various materials versus that of solid water, when irradiated with the same number of monitor units, has been used to calculate the ratio of electron fluence for the various materials to that of solid water. The beams of the Clinac 18 were fully characterized using the EGS4/BEAM system. EGSnrc with the relativistic spin option turned on was used to optimize the primary electron energy at the exit window, and to calculate depth doses in the five phantom materials using the optimized phase-space data. Normalizing all depth doses to the dose maximum in solid water stopping power ratio corrected, measured depth doses and calculated depth doses differ by less than +/- 1% at the depth of dose maximum and by less than 4% elsewhere. Monte Carlo calculated ratios of doses in each material to dose in LiF were used to convert the TLD measurements at the dose maximum into dose at the center of the TLD in the phantom material. Fluence perturbation correction factors for a LiF TLD at the depth of dose maximum deduced from these calculations amount to less than 1% for 0.15 mm thick TLDs in low Z materials and are between 1% and 3% for TLDs in Al and Cu phantoms. Electron fluence ratios of the studied materials relative to solid water vary between 0.83+/-0.01 and 1.55+/-0.02 for materials varying in density from 0.27 g/cm3 (lung) to 8.96 g/cm3 (Cu). The difference in electron fluence ratios derived from measurements and calculations ranges from -1.6% to +0.2% at 9 MeV and from -1.9% to +0.2% at 15 MeV and is not significant at the 1sigma level. Excluding the data for Cu, electron

  16. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated region change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions, yielding estimates within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy at smaller gate lengths, the precision of estimates at small gate lengths was not improved over conventional windowing functions.
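Gating and windowing as described above can be sketched with a synthetic rf tone and a plain discrete Fourier transform; this shows only the tapered gate, not the gate-edge correction factor itself:

```python
import cmath, math

# Sketch: gate a synthetic rf record with a Hanning window so the gate edges
# taper to zero, then compute the power spectrum via a direct DFT.

def hanning(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def power_spectrum(samples):
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))) ** 2 for k in range(n // 2)]

fs, f0, n = 100e6, 8e6, 128                       # 100 MHz sampling, 8 MHz tone
rf = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
gated = [s * w for s, w in zip(rf, hanning(n))]   # tapered gate edges
spec = power_spectrum(gated)
peak_bin = max(range(len(spec)), key=spec.__getitem__)  # near f0/fs * n ≈ 10
```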

  17. Universal Binding and Recoil Corrections to Bound State g Factors in Hydrogenlike Ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eides, Michael I.; Martin, Timothy J. S.

    2010-09-03

    The leading relativistic and recoil corrections to bound state g factors of particles with arbitrary spin are calculated. It is shown that these corrections are universal for any spin and depend only on the free particle gyromagnetic ratios. To prove this universality we develop nonrelativistic quantum electrodynamics (NRQED) for charged particles with an arbitrary spin. The coefficients in the NRQED Hamiltonian for higher spin particles are determined only by the requirements of Lorentz invariance and local charge conservation in the respective relativistic theory. For spin one charged particles, the NRQED Hamiltonian follows from the renormalizable QED of the charged vector bosons. We show that universality of the leading relativistic and recoil corrections can be explained with the help of the Bargmann-Michel-Telegdi equation.

  18. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  19. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
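The interleaved grouping idea can be sketched with a toy model. Everything here is an assumption for illustration: the forward model stands in for the scattering experiment, and a simple greedy random search stands in for the per-group GA; this is not the authors' optical setup or their algorithm implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: focus intensity is maximal when the SLM phase
# conjugates a fixed random "medium" phase (illustrative stand-in).
n_segments = 64
medium = rng.uniform(0, 2 * np.pi, n_segments)

def focus_intensity(phase):
    # Normalized intensity at the focus for a phase-only correction mask.
    return np.abs(np.sum(np.exp(1j * (phase - medium)))) ** 2 / n_segments ** 2

def optimize_group(phase, idx, n_iter=40):
    # Stand-in for the per-group GA: random mutation, greedy acceptance.
    best = focus_intensity(phase)
    for _ in range(n_iter):
        trial = phase.copy()
        trial[idx] = (trial[idx] + rng.normal(0, 1.0, idx.size)) % (2 * np.pi)
        val = focus_intensity(trial)
        if val > best:
            phase, best = trial, val
    return phase

# Interleaved segment correction: segments 0, 4, 8, ... form group 0, etc.;
# groups are optimized individually and sequentially, then combined.
n_groups = 4
phase = np.zeros(n_segments)
for g in range(n_groups):
    idx = np.arange(g, n_segments, n_groups)   # interleaved group g
    phase = optimize_group(phase, idx)

improvement = focus_intensity(phase) / max(focus_intensity(np.zeros(n_segments)), 1e-12)
print(f"improvement factor: {improvement:.1f}")
```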

  20. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta’s Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3, respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB*Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
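The bookkeeping in this record is simple arithmetic and can be sketched directly; the chamber readings below are placeholder numbers, not measured data:

```python
# Sketch of the correspondence-factor arithmetic described above.
# All readings are illustrative placeholders.

def correspondence_factor(r_app_raw, r_cal_raw):
    """CFrev = Rapp,raw / Rcal,raw for one applicator on one well chamber."""
    return r_app_raw / r_cal_raw

# Chamber-to-chamber correction: ratio of CFrev measured in the
# SI IVB 1000 to CFrev measured in the SI HDR 1000 Plus.
cf_ivb1000 = correspondence_factor(0.512, 4.90) / correspondence_factor(0.470, 4.83)

# With the correction applied, an IVB 1000 measurement maps back onto the
# published (HDR 1000 Plus based) scale: CFrev = Rapp,raw / (CF_IVB * Rcal,raw).
cf_rev_corrected = 0.512 / (cf_ivb1000 * 4.90)
print(f"CF_IVB = {cf_ivb1000:.3f}, corrected CFrev = {cf_rev_corrected:.4f}")
```

By construction, the corrected IVB 1000 value reproduces the HDR 1000 Plus correspondence factor, which is the point of the correction.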

  1. Spectral correction factors for conventional neutron dosemeters used in high-energy neutron environments.

    PubMed

    Lee, K W; Sheu, R J

    2015-04-01

    High-energy neutrons (>10 MeV) contribute substantially to the dose fraction but result in only a small or negligible response in most conventional moderated-type neutron detectors. Neutron dosemeters used for radiation protection purposes are commonly calibrated with (252)Cf neutron sources and are used in a variety of workplaces, so a workplace-specific correction factor is suggested. In this study, the effect of the neutron spectrum on the accuracy of dose measurements was investigated. A set of neutron spectra representing various neutron environments was selected to study the dose responses of a series of Bonner spheres, including standard and extended-range spheres. By comparing (252)Cf-calibrated dose responses with reference values based on fluence-to-dose conversion coefficients, this paper presents recommendations for neutron field characterisation and appropriate correction factors for responses of conventional neutron dosemeters used in environments with high-energy neutrons. The correction depends on the estimated percentage of high-energy neutrons in the spectrum or the ratio between the measured responses of two Bonner spheres (the 4P6_8 extended-range sphere versus the 6″ standard sphere).
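The logic of a spectrum-dependent correction factor can be sketched numerically. The energy grid, workplace spectrum, detector response, and conversion coefficients below are all illustrative assumptions, not the paper's data:

```python
import numpy as np

# Sketch: workplace-specific correction factor for a moderated dosemeter.
# A dosemeter calibrated with Cf-252 under-responds to high-energy
# (>10 MeV) neutrons; the correction is (reference dose)/(indicated dose).
energy = np.array([0.1, 1.0, 5.0, 20.0, 100.0])      # MeV bin centers (assumed)
fluence = np.array([0.30, 0.30, 0.20, 0.15, 0.05])    # workplace spectrum (assumed)
response = np.array([1.0, 1.0, 0.9, 0.3, 0.1])        # Cf-252-normalised response (assumed)
h_coeff = np.array([0.8, 1.2, 1.5, 1.8, 2.0])         # fluence-to-dose coefficients (assumed)

indicated = np.sum(fluence * response)   # what the calibrated dosemeter reads
reference = np.sum(fluence * h_coeff)    # dose from conversion coefficients
correction = reference / indicated
high_e_fraction = fluence[energy > 10].sum() / fluence.sum()
print(f"high-energy fraction {high_e_fraction:.0%}, correction factor {correction:.2f}")
```

Because the response collapses above 10 MeV while the dose coefficients keep rising, the correction grows with the high-energy fraction of the spectrum, which is why the record ties the correction to that fraction.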

  2. Detecting and correcting the bias of unmeasured factors using perturbation analysis: a data-mining approach.

    PubMed

    Lee, Wen-Chung

    2014-02-05

    The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. Computer simulations show that, as the number of perturbation variables increases from data mining, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.

  3. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
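The viscously corrected scheme can be written schematically. The notation follows the abstract; the quadratic form of δ is the example the abstract itself mentions, with μ > 0 an assumed parameter:

```latex
% Viscously corrected time Incremental Minimization Scheme (schematic):
% given a partition t_0 < t_1 < \dots < t_N and an initial datum X_0,
\[
  X_n \in \operatorname*{argmin}_{x \in X}
  \Big( \mathcal{E}(t_n, x) + \mathsf{d}(X_{n-1}, x) + \delta(X_{n-1}, x) \Big),
  \qquad n = 1, \dots, N,
\]
% with, for example, the quadratic viscous correction
\[
  \delta(x, y) = \mu \, \mathsf{d}^2(x, y), \qquad \mu > 0.
\]
```

The extra term δ penalizes far-distance jumps relative to the plain Energetic scheme (δ = 0), which is how the localized stability condition arises.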

  4. Emerging technology for transonic wind-tunnel-wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1988-01-01

    Several nonlinear transonic codes and a panel method code for wind tunnel/wall interference assessment and correction (WIAC) studies are reviewed. Contrasts between two- and three-dimensional transonic testing factors which affect WIAC procedures are illustrated with airfoil data from the NASA/Langley 0.3-meter transonic cryogenic tunnel and Pathfinder I data. Also, three-dimensional transonic WIAC results for Mach number and angle-of-attack corrections to data from a relatively large 20 deg swept semispan wing in the solid wall NASA/Ames high Reynolds number Channel I are verified by three-dimensional thin-layer Navier-Stokes free-air solutions.

  5. Experimental setup for the determination of the correction factors of the neutron doseratemeters in fast neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Elena; Bercea, Sorin; Dudu, Dorin

    2013-12-16

    The use of the U-120 Cyclotron of IFIN-HH allowed a fast-neutron testing bench to be set up in order to determine the correction factors of doseratemeters dedicated to neutron measurement. This paper deals with the research performed to develop the irradiation facility using the fast-neutron flux generated at the Cyclotron. The facility is presented, together with the results obtained in determining the correction factor for a doseratemeter dedicated to neutron dose equivalent rate measurement.

  6. Quality correction factors of composite IMRT beam deliveries: theoretical considerations.

    PubMed

    Bouchard, Hugo

    2012-11-01

    In the scope of intensity modulated radiation therapy (IMRT) dosimetry using ionization chambers, quality correction factors of plan-class-specific reference (PCSR) fields are theoretically investigated. The symmetry of the problem is studied to provide recommendable criteria for composite beam deliveries where correction factors are minimal and also to establish a theoretical limit for PCSR delivery k(Q) factors. The concept of virtual symmetric collapsed (VSC) beam, being associated to a given modulated composite delivery, is defined in the scope of this investigation. Under symmetrical measurement conditions, any composite delivery has the property of having a k(Q) factor identical to its associated VSC beam. Using this concept of VSC, a fundamental property of IMRT k(Q) factors is demonstrated in the form of a theorem. The sensitivity to the conditions required by the theorem is thoroughly examined. The theorem states that if a composite modulated beam delivery produces a uniform dose distribution in a volume V(cyl) which is symmetric with the cylindrical delivery and all beams fulfill two conditions in V(cyl): (1) the dose modulation function is unchanged along the beam axis, and (2) the dose gradient in the beam direction is constant for a given lateral position; then its associated VSC beam produces no lateral dose gradient in V(cyl), no matter what beam modulation or gantry angles are being used. The examination of the conditions required by the theorem leads to the following results. The effect of the depth-dose gradient not being perfectly constant with depth on the VSC beam lateral dose gradient is found negligible. The effect of the dose modulation function being degraded with depth on the VSC beam lateral dose gradient is found to be only related to scatter and beam hardening, as the theorem holds also for diverging beams.
The use of the symmetry of the problem in the present paper leads to a valuable theorem showing that k(Q) factors of composite IMRT

  7. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for LGK Perfexion. Finally, the PTW microDiamond M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices.
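The Alfonso-formalism arithmetic behind these percentages can be sketched directly; the output factor and detector readings below are placeholder numbers, not the paper's measurements:

```python
# Sketch of the small-field Alfonso formalism used above: the detector
# correction factor k is the reference (e.g. MC-based) output factor
# divided by the measured reading ratio. Inputs are illustrative.

def k_correction(output_factor_ref, m_clin, m_msr):
    """k_{Qclin,Qmsr}^{fclin,fmsr} = Omega / (M_clin / M_msr)."""
    return output_factor_ref / (m_clin / m_msr)

# Placeholder readings for a small field vs. the machine-specific
# reference field; a k of ~1.029 corresponds to a ~2.9% correction,
# the size quoted above for the 5 mm Iris field.
k = k_correction(output_factor_ref=0.70, m_clin=3.40, m_msr=5.00)
print(f"k = {k:.3f}")
```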

  8. Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor

    NASA Astrophysics Data System (ADS)

    Tsushima, H.

    2017-12-01

    For tsunami early warning, I developed a frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter applicable to real-time correction of the tsunami site response. In this study, I assumed that a tsunami waveform at an observing point can be modeled by the convolution of source, path and site effects in the time domain. Under this assumption, the spectral ratio between offshore and the nearby coast can be regarded as the site response (i.e. a frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution with an offshore tsunami waveform provides a tsunami prediction at the coast in real time. In this study, tsunami waveforms calculated by tsunami numerical simulations were used to develop the frequency-dependent tsunami-amplification factor. First, I performed numerical tsunami simulations based on nonlinear shallow-water theory for many tsunamigenic-earthquake scenarios, varying the seismic magnitudes and locations. The resulting tsunami waveforms at offshore and nearby coastal observing points were then used in spectral-ratio analysis. The average of the resulting spectral ratios over the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used in the design of a recursive digital filter that is applicable in the time domain. The above procedure is applied to Miyako Bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between a location at the center of the bay and one outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of the tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309.
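The spectral-ratio averaging step can be sketched with synthetic signals. Everything below is a toy stand-in: random noise replaces the shallow-water simulations, and a fabricated 20-minute resonance replaces the real bay response.

```python
import numpy as np

# Sketch: estimate a frequency-dependent amplification factor as the
# coast/offshore spectral ratio averaged over many scenarios.
rng = np.random.default_rng(1)
n = 512
freqs = np.fft.rfftfreq(n, d=60.0)   # one sample per minute (assumed)

def toy_scenario():
    offshore = rng.standard_normal(n)
    # Pretend the bay doubles amplitude around a 20-min resonance:
    spec = np.fft.rfft(offshore)
    gain = 1.0 + np.exp(-((freqs - 1 / 1200.0) / (1 / 6000.0)) ** 2)
    coast = np.fft.irfft(spec * gain, n)
    return offshore, coast

ratios = []
for _ in range(50):                  # many tsunamigenic-earthquake scenarios
    offshore, coast = toy_scenario()
    ratios.append(np.abs(np.fft.rfft(coast)) / np.abs(np.fft.rfft(offshore)))
amplification = np.mean(ratios, axis=0)

peak_period_min = 1.0 / freqs[np.argmax(amplification[1:]) + 1] / 60.0
print(f"peak amplification at ~{peak_period_min:.0f} min period")
```

In the record's application, the averaged ratio would then be turned into a recursive digital filter so the correction can run sample by sample in real time.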

  9. Intercomparison of methods for coincidence summing corrections in gamma-ray spectrometry--part II (volume sources).

    PubMed

    Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J

    2012-09-01

    The second part of an intercomparison of the coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing corrective factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed.

  10. Thin wing corrections for phase-change heat-transfer data.

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Pitts, J. I.

    1971-01-01

    Since no methods are available for determining the magnitude of the errors incurred when the semi-infinite slab assumption is violated, a computer program was developed to calculate the heat-transfer coefficients on both sides of a finite, one-dimensional slab subject to the boundary conditions ascribed to the phase-change coating technique. The results have been correlated in the form of correction factors to the semi-infinite slab solutions, in terms of the parameters normally used with the technique.
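The semi-infinite slab solution that these correction factors rescale is the classical phase-change-paint relation; a minimal sketch, assuming the standard one-dimensional step-heating form and placeholder material values:

```python
import math

# Sketch: the semi-infinite slab relation behind the phase-change coating
# technique, (T_pc - T_i)/(T_aw - T_i) = 1 - exp(beta^2) * erfc(beta),
# with beta = h * sqrt(t) / sqrt(rho*c*k), solved for h by bisection.
# A finite slab violates the semi-infinite assumption; the record's
# correction factors rescale the h obtained this way. Numbers are
# illustrative placeholders.

def theta(beta):
    # 1 - exp(beta^2)*erfc(beta); asymptotic form for large beta avoids
    # overflow (erfcx(beta) ~ 1/(beta*sqrt(pi)) as beta -> infinity).
    if beta > 25.0:
        return 1.0 - 1.0 / (beta * math.sqrt(math.pi))
    return 1.0 - math.exp(beta * beta) * math.erfc(beta)

def solve_h(theta_meas, t, rho_c_k, lo=1e-3, hi=1e5):
    # Bisection for the heat-transfer coefficient h (theta is monotone in h).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        beta = mid * math.sqrt(t) / math.sqrt(rho_c_k)
        if theta(beta) < theta_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder inputs: melt time 10 s, rho*c*k = 6.0e5 (SI units).
h = solve_h(theta_meas=0.5, t=10.0, rho_c_k=6.0e5)
print(f"h ~ {h:.0f} W/(m^2 K)")
```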

  11. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in‐vivo dosimetry. Despite their popularity, an appropriate implementation of an in‐vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose, such as dmax for the 10×10 cm2 field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field size, target‐to‐surface distance (TSD), and accessories (such as wedges and compensating filters). In some clinical practices, however, no correction factor is used. In this case, a diode‐dosimeter‐based in‐vivo dosimetry program may not serve its purpose effectively, namely, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula to relate the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n‐type) and the newly available QED (p‐type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage of silicon diodes. Radiation damage has been shown to be more serious for n‐type than for p‐type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of the corrections for QED (p‐type) diodes are smaller than for ISORAD detectors. PACS number(s): 87.66.–a, 87.52.–g PMID:11674824
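The multiplicative-correction structure described above can be sketched as a one-line formula; the factor names follow the record's description, and all numeric values are placeholders, not the paper's calibration data:

```python
# Sketch: entrance dose inferred from a diode readout via a calibration
# factor and a chain of multiplicative corrections. Values are placeholders.

def entrance_dose(m_diode, n_cal, c_tsd=1.0, c_field=1.0, c_wedge=1.0):
    """Dose at the reference depth of maximum dose (e.g. dmax, 10x10 cm2).

    m_diode : diode readout
    n_cal   : diode calibration factor (dose per reading, reference setup)
    c_*     : correction factors relative to the reference conditions
    """
    return m_diode * n_cal * c_tsd * c_field * c_wedge

dose = entrance_dose(m_diode=102.0, n_cal=0.98, c_tsd=1.01, c_field=0.99, c_wedge=1.02)
print(f"entrance dose ~ {dose:.1f} cGy")
```

Omitting the corrections (the "no correction factor" practice the record cautions against) amounts to leaving every `c_*` at 1.0.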

  12. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
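The weighting scheme this record describes reduces to a spectrum-weighted mean; the energy grid, mono-energetic factors, and kerma spectrum below are illustrative assumptions, not NMi data:

```python
import numpy as np

# Sketch: an x-ray-quality-dependent correction factor obtained by
# weighting mono-energetic Monte Carlo correction factors with a
# measured air kerma spectrum. All values are illustrative.

e_grid = np.array([40.0, 80.0, 120.0, 160.0, 200.0])        # photon energy, keV
k_mono = np.array([1.004, 1.007, 1.010, 1.014, 1.018])      # mono-energetic k (MC)
kerma_spectrum = np.array([0.05, 0.35, 0.40, 0.15, 0.05])   # measured weights per bin

k_quality = np.sum(kerma_spectrum * k_mono) / np.sum(kerma_spectrum)
print(f"correction factor for this x-ray quality: {k_quality:.4f}")
```

Because the weights come from the measured spectrum of each quality, the same mono-energetic table serves any x-ray quality, which is the flexibility the record emphasizes over interpolating experimental data.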

  13. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  14. SU-F-BRD-15: Quality Correction Factors in Scanned Or Broad Proton Therapy Beams Are Indistinguishable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorriaux, J; Lee, J; ICTEAM Institute, Universite catholique de Louvain, Louvain-la-Neuve

    2015-06-15

    Purpose: The IAEA TRS-398 code of practice details the reference conditions for reference dosimetry of proton beams using ionization chambers and the required beam quality correction factors (kQ). Pencil beam scanning (PBS) requires multiple spots to reproduce the reference conditions. The objective is to demonstrate, using Monte Carlo (MC) calculations, that kQ factors for broad beams can be used for scanned beams under the same reference conditions with no significant additional uncertainty. We consider hereafter the general Alfonso formalism (Alfonso et al, 2008) for non-standard beams. Methods: To approach the reference conditions and the associated dose distributions, PBS must combine many pencil beams with range modulation and shaping techniques different than those used in passive systems (broad beams). This might lead to a different energy spectrum at the measurement point. In order to evaluate the impact of these differences on kQ factors, ion chamber responses are computed with MC (Geant4 9.6) in a dedicated scanned pencil beam (Q-pcsr) producing a 10×10 cm² composite field with a flat dose distribution from 10 to 16 cm depth. Ion chamber responses are also computed by MC in a broad beam with quality Q-ds (double scattering). The dose distribution of Q-pcsr matches the dose distribution of Q-ds. k(Q-pcsr,Q-ds) is computed for a 2×2×0.2 cm³ idealized air cavity and a realistic plane-parallel ion chamber (IC). Results: Under reference conditions, quality correction factors for a scanned composite field versus a broad beam are the same for the air cavity dose response, k(Q-pcsr,Q-ds) = 1.001±0.001, and for a Roos IC, k(Q-pcsr,Q-ds) = 0.999±0.005. Conclusion: Quality correction factors for ion chamber response in scanned and broad proton therapy beams are identical under reference conditions within the calculation uncertainties. The results indicate that quality correction factors published in IAEA TRS-398 can be used for scanned beams in the

  15. On the p(dis) correction factor for cylindrical chambers.

    PubMed

    Andreo, Pedro

    2010-03-07

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor p(dis) for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chambers' response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values on practical high-energy photon dosimetry negligible. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than, say, 0.2% is extremely small. This work also raises concerns about the proposed value of p(dis) for Farmer chambers at the reference quality of 60Co in relation to its impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in p(dis) (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount.
This would yield a severe discrepancy with the current good agreement between

  16. Factors related to stability following the surgical correction of skeletal open bite.

    PubMed

    Ito, Goshi; Koh, Myongsun; Fujita, Tadashi; Shirakura, Maya; Ueda, Hiroshi; Tanne, Kazuo

    2014-05-01

    If a skeletal anterior open bite malocclusion is treated by orthognathic surgery directed only at the mandible, the lower jaw is repositioned upward in a counter-clockwise rotation. However, this procedure has a high risk of relapse. In the present study, the key factors associated with post-surgical stability of corrected skeletal anterior open bite malocclusions were investigated. Eighteen orthognathic patients were subjected to cephalometric analysis to assess the dental and skeletal changes following mandibular surgery for the correction of an anterior open bite. The patients were divided into two groups, determined by an increase or decrease in nasion-menton (N-Me) distance as a consequence of surgery. Changes in overbite, the displacements of molars and positional changes in Menton were evaluated immediately before and after surgery and after a minimum of one year post-operatively. The group with a decreased N-Me distance exhibited a significantly greater backward positioning of the mandible. The group with an increased N-Me distance experienced significantly greater dentoalveolar extrusion of the lower molars. A sufficient mandibular backward repositioning is an effective technique in the prevention of open bite relapse. In addition, it is important not to induce molar extrusion during post-surgical orthodontic treatment to preserve stability of the surgical open bite correction.

  17. The central electrode correction factor for high-Z electrodes in small ionization chambers.

    PubMed

    Muir, B R; Rogers, D W O

    2011-02-01

    Recent Monte Carlo calculations of beam quality conversion factors for ion chambers that use high-Z electrodes [B. R. Muir and D. W. O. Rogers, Med. Phys. 37, 5939-5950 (2010)] have shown large deviations of kQ values from values calculated using the same techniques as the TG-51 and TRS-398 protocols. This report investigates the central electrode correction factor, Pcel, for these chambers. Ionization chambers are modeled and Pcel is calculated using the EGSnrc user code egs_chamber for three cases: in photon and electron beams under reference conditions; as a function of distance from an iridium-192 point source in a water phantom; and as a function of depth in a water phantom on which a 200 kVp x-ray source or 6 MV beam is incident. In photon beams, differences of up to 3% between Pcel calculations for a chamber with a high-Z electrode and those used by TG-51 for a 1 mm diameter aluminum electrode are observed. The central electrode correction factor for a given value of the beam quality specifier is different depending on the amount of filtration of the photon beam. However, in an unfiltered 6 MV beam, Pcel varies by only 0.3% for a chamber with a high-Z electrode as the depth is varied from 1 to 20 cm in water. The difference between Pcel calculations for chambers with high-Z electrodes and TG-51 values for a chamber with an aluminum electrode is up to 0.45% in electron beams. The central electrode correction, which is roughly proportional to the chamber's absorbed-dose sensitivity, is found to be large and variable as a function of distance for chambers with high-Z and aluminum electrodes in low-energy photon fields. In this work, ionization chambers that employ high-Z electrodes have been shown to be problematic in various situations. For beam quality conversion factors, the ratio of Pcel in a beam quality Q to that in a Co-60 beam is required; for some chambers, kQ is significantly different from current dosimetry protocol values because of central

  18. Quality correction factors of composite IMRT beam deliveries: Theoretical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Hugo

    2012-11-15

    Purpose: In the scope of intensity modulated radiation therapy (IMRT) dosimetry using ionization chambers, quality correction factors of plan-class-specific reference (PCSR) fields are theoretically investigated. The symmetry of the problem is studied to provide recommendable criteria for composite beam deliveries where correction factors are minimal and also to establish a theoretical limit for PCSR delivery kQ factors. Methods: The concept of the virtual symmetric collapsed (VSC) beam, associated with a given modulated composite delivery, is defined in the scope of this investigation. Under symmetrical measurement conditions, any composite delivery has the property of having a kQ factor identical to its associated VSC beam. Using this concept of VSC, a fundamental property of IMRT kQ factors is demonstrated in the form of a theorem. The sensitivity to the conditions required by the theorem is thoroughly examined. Results: The theorem states that if a composite modulated beam delivery produces a uniform dose distribution in a volume Vcyl which is symmetric with the cylindrical delivery, and all beams fulfill two conditions in Vcyl: (1) the dose modulation function is unchanged along the beam axis, and (2) the dose gradient in the beam direction is constant for a given lateral position; then its associated VSC beam produces no lateral dose gradient in Vcyl, no matter what beam modulation or gantry angles are being used. The examination of the conditions required by the theorem leads to the following results. The effect of the depth-dose gradient not being perfectly constant with depth on the VSC beam lateral dose gradient is found to be negligible. The effect of the dose modulation function being degraded with depth on the VSC beam lateral dose gradient is found to be only related to scatter and beam hardening, as the theorem holds also for diverging beams. 
Conclusions: The use of the symmetry of the problem in the present paper

  19. Correction factors for ionization chamber measurements with the ‘Valencia’ and ‘large field Valencia’ brachytherapy applicators

    NASA Astrophysics Data System (ADS)

    Gimenez-Alventosa, V.; Gimenez, V.; Ballester, F.; Vijande, J.; Andreo, P.

    2018-06-01

    Treatment of small skin lesions using HDR brachytherapy applicators is a widely used technique. The shielded applicators currently available in clinical practice are based on a tungsten-alloy cup that collimates the source-emitted radiation into a small region, hence protecting nearby tissues. The goal of this manuscript is to evaluate the correction factors required for dose measurements with a plane-parallel ionization chamber typically used in clinical brachytherapy for the ‘Valencia’ and ‘large field Valencia’ shielded applicators. Monte Carlo simulations have been performed using the PENELOPE-2014 system to determine the absorbed dose deposited in a water phantom and in the chamber active volume with a Type A uncertainty of the order of 0.1%. The average energies of the photon spectra arriving at the surface of the water phantom differ by approximately 10%, being 384 keV for the ‘Valencia’ and 343 keV for the ‘large field Valencia’. The ionization chamber correction factors have been obtained for both applicators using three methods, their values depending on the applicator being considered. Using a depth-independent global chamber perturbation correction factor and no shift of the effective point of measurement yields depth-dose differences of up to 1% for the ‘Valencia’ applicator. Calculations using a depth-dependent global perturbation factor, or a shift of the effective point of measurement combined with a constant partial perturbation factor, result in differences of about 0.1% for both applicators. The results emphasize the relevance of carrying out detailed Monte Carlo studies for each shielded brachytherapy applicator and ionization chamber.

  20. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study will simulate the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU:-743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc will also simulate the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library, and therefore only approximates the composition of TLD, composite cork, and Plastic Water, contributing to uncertainties in TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.
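The correction-factor definition used in this abstract (ratio of unperturbed to perturbed dose) can be sketched directly; the numbers below are illustrative placeholders, not results from the study.

```python
def correction_factor(dose_unperturbed, dose_perturbed):
    """Detector correction factor: dose to the undisturbed medium divided
    by the dose scored with the detector present (perturbed case)."""
    return dose_unperturbed / dose_perturbed

# Illustrative values only: a relatively dense TLD in low-density lung
# medium reads high, so the factor falls below unity (the abstract
# reports values as low as ~0.9).
cf = correction_factor(dose_unperturbed=0.90, dose_perturbed=1.00)
```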

  1. Small field detector correction factors: effects of the flattening filter for Elekta and Varian linear accelerators

    PubMed Central

    Liu, Paul Z.Y.; Lee, Christopher; McKenzie, David R.; Suchowerska, Natalka

    2016-01-01

    Flattening filter‐free (FFF) beams are becoming the preferred beam type for stereotactic radiosurgery (SRS) and stereotactic ablative radiation therapy (SABR), as they enable an increase in dose rate and a decrease in treatment time. This work assesses the effects of the flattening filter on small field output factors for 6 MV beams generated by both Elekta and Varian linear accelerators, and determines differences between detector response in flattened (FF) and FFF beams. Relative output factors were measured with a range of detectors (diodes, ionization chambers, radiochromic film, and microDiamond) and referenced to the relative output factors measured with an air core fiber optic dosimeter (FOD), a scintillation dosimeter developed at Chris O'Brien Lifehouse, Sydney. Small field correction factors were generated for both FF and FFF beams. Diode-measured detector response was compared with a recently published mathematical relation to predict diode response corrections in small fields. The effect of flattening filter removal on detector response was quantified using a ratio of relative detector responses in FFF and FF fields for the same field size. The removal of the flattening filter was found to have a small but measurable effect on ionization chamber response with maximum deviations of less than ±0.9% across all field sizes measured. Solid‐state detectors showed an increased dependence on the flattening filter of up to ±1.6%. Measured diode response was within ±1.1% of the published mathematical relation for all fields up to 30 mm, independent of linac type and presence or absence of a flattening filter. For 6 MV beams, detector correction factors are interchangeable when a linac is switched between FF and FFF modes, provided that an additional uncertainty of up to ±1.6% is accepted. PACS number(s): 87.55.km, 87.56.bd, 87.56.Da PMID:27167280

  2. Asymptotic, multigroup flux reconstruction and consistent discontinuity factors

    DOE PAGES

    Trahan, Travis J.; Larsen, Edward W.

    2015-05-12

    Recent theoretical work has led to an asymptotically derived expression for reconstructing the neutron flux from lattice functions and multigroup diffusion solutions. The leading-order asymptotic term is the standard expression for flux reconstruction, i.e., it is the product of a shape function, obtained through a lattice calculation, and the multigroup diffusion solution. The first-order asymptotic correction term is significant only where the gradient of the diffusion solution is not small. Inclusion of this first-order correction term can significantly improve the accuracy of the reconstructed flux. One may define discontinuity factors (DFs) to make certain angular moments of the reconstructed flux continuous across interfaces between assemblies in 1-D. Indeed, the standard assembly discontinuity factors make the zeroth moment (scalar flux) of the reconstructed flux continuous. The inclusion of the correction term in the flux reconstruction provides an additional degree of freedom that can be used to make two angular moments of the reconstructed flux continuous across interfaces by using current DFs in addition to flux DFs. Thus, numerical results demonstrate that using flux and current DFs together can be more accurate than using only flux DFs, and that making the second angular moment continuous can be more accurate than making the zeroth moment continuous.
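A generic 1-D sketch of the reconstruction described above, assuming the leading-order product form plus a gradient-weighted first-order term; the function names, discretization, and test profiles are our assumptions, not the paper's.

```python
import numpy as np

def reconstruct_flux(shape_fn, phi, corr_weight, dx):
    """Asymptotic flux reconstruction (1-D sketch).
    Leading order: shape_fn * phi (lattice shape function times the
    multigroup diffusion solution). First-order correction: a weight
    times d(phi)/dx, significant only where phi has a steep gradient."""
    return shape_fn * phi + corr_weight * np.gradient(phi, dx)

# Assumed illustrative profiles: a decaying diffusion solution and a
# periodic lattice shape function on a uniform grid.
x = np.linspace(0.0, 10.0, 101)
phi = np.exp(-0.3 * x)
shape_fn = 1.0 + 0.1 * np.sin(2 * np.pi * x)
flux = reconstruct_flux(shape_fn, phi, 0.05, x[1] - x[0])
```

With the correction weight set to zero, this reduces to the standard shape-function-times-diffusion reconstruction.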

  3. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by 1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or 2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple/fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the virtual symmetric collapsed (VSC) delivery of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots contains 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector have been found to have corrections under 1% over a broad range of field modulations (0–0.12 for MF and 0–0.5 for UI). Other detectors have been shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  4. Comment on ``Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited''

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Zhang, X.-G.

    2012-09-01

    This is a Comment on the paper by Alam, Wilson, and Johnson [Phys. Rev. B 84, 205106 (2011)], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling charge densities. We point out that the problem considered by the authors can be simply avoided by means of performing certain integrals in a particular order, whereas their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple-scattering theory. We also point out a flaw in their line of reasoning, leading to the expression for the potential inside the bounding sphere of a cell, that makes it inapplicable for certain geometries.

  5. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
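Under the usual small-field formalism (Alfonso et al.), the detector correction factor combines a dose-to-water ratio with a detector-reading ratio; a minimal sketch, with variable names and numbers of our choosing:

```python
def k_qclin_qmsr(dose_w_clin, dose_w_msr, reading_clin, reading_msr):
    """Small-field detector correction factor k(Qclin, Qmsr): the
    dose-to-water ratio between the clinical and machine-specific
    reference fields, divided by the corresponding detector-reading
    ratio. A detector that over-responds in the small field has a
    reading ratio larger than the dose ratio, giving k < 1."""
    return (dose_w_clin / dose_w_msr) / (reading_clin / reading_msr)

# Illustrative numbers only (not values from the study):
k = k_qclin_qmsr(dose_w_clin=0.60, dose_w_msr=1.00,
                 reading_clin=0.65, reading_msr=1.00)
```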

  6. Corridor of existence of thermodynamically consistent solution of the Ornstein-Zernike equation.

    PubMed

    Vorob'ev, V S; Martynov, G A

    2007-07-14

    We obtain the exact equation for a correction to the Ornstein-Zernike (OZ) equation based on the assumption of the uniqueness of thermodynamical functions. We show that this equation is reduced to a differential equation with one arbitrary parameter for the hard sphere model. The compressibility factor within narrow limits of this parameter variation can either coincide with one of the formulas obtained on the basis of analytical solutions of the OZ equation or assume all intermediate values lying in a corridor between these solutions. In particular, we find the value of this parameter when the thermodynamically consistent compressibility factor corresponds to the Carnahan-Starling formula.
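For reference, the Carnahan-Starling compressibility factor mentioned above has the closed form Z = (1 + η + η² − η³)/(1 − η)³, with η the hard-sphere packing fraction; a direct transcription:

```python
def carnahan_starling_Z(eta):
    """Carnahan-Starling compressibility factor Z = pV/(NkT) for the
    hard-sphere fluid; eta is the packing fraction, 0 <= eta < 1."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

# The ideal-gas limit is recovered at zero packing fraction:
print(carnahan_starling_Z(0.0))  # 1.0
```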

  7. Diffusion of Small Solute Particles in Viscous Liquids: Cage Diffusion, a Result of Decoupling of Solute-Solvent Dynamics, Leads to Amplification of Solute Diffusion.

    PubMed

    Acharya, Sayantan; Nandi, Manoj K; Mandal, Arkajit; Sarkar, Sucharita; Bhattacharyya, Sarika Maitra

    2015-08-27

    We study the diffusion of small solute particles through solvent by keeping the solute-solvent interaction repulsive and varying the solvent properties. The study involves computer simulations, development of a new model to describe diffusion of small solutes in a solvent, and also mode coupling theory (MCT) calculations. In a viscous solvent, a small solute diffuses via coupling to the solvent hydrodynamic modes and also through the transient cages formed by the solvent. The model developed can estimate the independent contributions from these two different channels of diffusion. Although the solute diffusion in all the systems shows an amplification, the degree of it increases with solvent viscosity. The model correctly predicts that when the solvent viscosity is high, the solute primarily diffuses by exploiting the solvent cages. In such a scenario the MCT diffusion performed for a static solvent provides a correct estimation of the cage diffusion.

  8. The Impact of Individual and Institutional Factors on Turnover Intent Among Taiwanese Correctional Staff.

    PubMed

    Lai, Yung-Lien

    2017-01-01

    The existing literature on turnover intent among correctional staff conducted in Western societies focuses on the impact of individual-level factors; the possible effects of institutional contexts have been largely overlooked. Moreover, the relationships of various multidimensional conceptualizations of both job satisfaction and organizational commitment to turnover intent are still largely unknown. Using data collected by a self-reported survey of 676 custody staff employed in 22 Taiwanese correctional facilities during April of 2011, the present study expands upon theoretical models developed in Western societies and examines the effects of both individual and institutional factors on turnover intent simultaneously. Results from the use of the hierarchical linear modeling (HLM) statistical method indicate that, at the individual-level, supervisory versus non-supervisory status, job stress, job dangerousness, job satisfaction, and organizational commitment consistently produce a significant association with turnover intent after controlling for personal characteristics. Specifically, three distinct forms of organizational commitment demonstrated an inverse impact on turnover intent. Among institutional-level variables, custody staff who came from a larger facility reported higher likelihood of thinking about quitting their job. © The Author(s) 2015.

  9. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.

  10. Effect of Irrigation Time of Antiseptic Solutions on Bone Cell Viability and Growth Factor Release.

    PubMed

    Sawada, Kosaku; Nakahara, Ken; Haga-Tsujimura, Maiko; Fujioka-Kobayashi, Masako; Iizuka, Tateyuki; Miron, Richard J

    2018-03-01

    Antiseptic solutions are commonly utilized to treat local infection in the oral and maxillofacial region. However, surrounding vital bone is also exposed to antiseptic agents during irrigation, which may have a negative impact on bone survival. The aim of the present study was therefore to investigate the effect of rinsing time with various antiseptic solutions on bone cell viability, as well as the subsequent release of growth factors important for bone regeneration. Bone samples collected from porcine mandible were rinsed in the following commonly utilized antiseptic solutions: povidone-iodine (0.5%), chlorhexidine digluconate (CHX, 0.2%), hydrogen peroxide (1%), and sodium hypochlorite (0.25%) for 1, 5, 10, 20, 30, or 60 minutes and assessed for cell viability and release of growth factors including vascular endothelial growth factor, transforming growth factor beta 1, bone morphogenetic protein 2, receptor activator of nuclear factor kappa-B ligand, and interleukin-1 beta by enzyme-linked immunosorbent assay. In all tested groups, longer exposure to any of the antiseptic solutions markedly increased cell death. Sodium hypochlorite caused the highest cell death, significantly so at all time points. Interestingly, bone cell viability was highest in the CHX group after short-term rinsing of 1, 5, or 10 minutes when compared with the other 4 tested groups. A similar trend was also observed in subsequent growth factor release. The present study demonstrated that of the 4 tested antiseptic solutions, short-term CHX rinsing (ideally within 1 minute) favored bone cell viability and growth factor release. Clinical protocols should be adapted accordingly.

  11. Operator Factorization and the Solution of Second-Order Linear Ordinary Differential Equations

    ERIC Educational Resources Information Center

    Robin, W.

    2007-01-01

    The theory and application of second-order linear ordinary differential equations is reviewed from the standpoint of the operator factorization approach to the solution of ordinary differential equations (ODE). Using the operator factorization approach, the general second-order linear ODE is solved exactly, in quadratures, and the resulting…

  12. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 12: Suitability of plan class specific reference fields for estimating dosimeter correction factors for small clinical CyberKnife fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandervoort, Eric; Christiansen, Eric; Belec, Jaso

    Purpose: The purpose of this work is to investigate the utility of plan class specific reference (PCSR) fields for predicting dosimeter response within isocentric and non-isocentric composite clinical fields using the smallest fields employed by the CyberKnife radiosurgery system. Methods: Monte Carlo dosimeter response correction factors (CFs) were calculated for a plastic scintillator and microchamber dosimeter in 21 clinical fields and 9 candidate plan-class PCSR fields which employ the 5, 7.5 and 10 mm diameter collimators. Measurements were performed in 5 PCSR fields to confirm the predicted relative response of detectors in the same field. Results: Ratios of corrected measured dose in the PCSR fields agree to within 1% of unity. Calculated CFs for isocentric fields agree within 1.5% of those for PCSR fields. Large and variable microchamber CFs are required for non-isocentric fields, with differences as high as 5% between different clinical fields in the same plan class and 4% within the same field depending on the point of measurement. Non-isocentric PCSR fields constructed to have relatively homogeneous dose over a region larger than the detector have very different ion chamber CFs from clinical fields. The plastic scintillator detector has a much more consistent response within each plan class but still requires 3–4% corrections in some fields. Conclusions: While the PCSR field concept is useful for small isocentric fields, this approach may not be appropriate for non-isocentric clinical fields, which exhibit large and variable ion chamber CFs that differ significantly from CFs for homogeneous-field PCSRs.

  13. II. Comment on “Critique and correction of the currently accepted solution of the infinite spherical well in quantum mechanics” by Huang Young-Sea and Thomann Hans-Rudolph

    NASA Astrophysics Data System (ADS)

    Prados, Antonio; Plata, Carlos A.

    2016-12-01

    We comment on the paper "Critique and correction of the currently accepted solution of the infinite spherical well in quantum mechanics" by Huang Young-Sea and Thomann Hans-Rudolph, EPL 115, 60001 (2016).

  14. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  15. An exact solution of a simplified two-phase plume model. [for solid propellant rocket

    NASA Technical Reports Server (NTRS)

    Wang, S.-Y.; Roberts, B. B.

    1974-01-01

    An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make an upper-bound estimate of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.

  16. Empirical Correction for Differences in Chemical Exchange Rates in Hydrogen Exchange-Mass Spectrometry Measurements.

    PubMed

    Toth, Ronald T; Mills, Brittney J; Joshi, Sangeeta B; Esfandiary, Reza; Bishop, Steven M; Middaugh, C Russell; Volkin, David B; Weis, David D

    2017-09-05

    A barrier to the use of hydrogen exchange-mass spectrometry (HX-MS) in many contexts, especially analytical characterization of various protein therapeutic candidates, is that differences in temperature, pH, ionic strength, buffering agent, or other additives can alter chemical exchange rates, making HX data gathered under differing solution conditions difficult to compare. Here, we present data demonstrating that HX chemical exchange rates can be substantially altered not only by the well-established variables of temperature and pH but also by additives including arginine, guanidine, methionine, and thiocyanate. To compensate for these additive effects, we have developed an empirical method to correct the hydrogen-exchange data for these differences. First, differences in chemical exchange rates are measured by use of an unstructured reporter peptide, YPI. An empirical chemical exchange correction factor, determined by use of the HX data from the reporter peptide, is then applied to the HX measurements obtained from a protein of interest under different solution conditions. We demonstrate that the correction is experimentally sound through simulation and in a proof-of-concept experiment using unstructured peptides under slow-exchange conditions (pD 4.5 at ambient temperature). To illustrate its utility, we applied the correction to HX-MS excipient screening data collected for a pharmaceutically relevant IgG4 mAb being characterized to determine the effects of different formulations on backbone dynamics.
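One way to picture the correction described above (our sketch, not the authors' exact formulation): the reporter peptide's intrinsic exchange rates under two solution conditions give a factor that rescales labeling times onto a common chemical-exchange clock. Function and variable names are assumptions.

```python
def exchange_correction_factor(k_reporter_test, k_reporter_reference):
    """Empirical HX correction factor from an unstructured reporter
    peptide (the YPI peptide in the abstract): the ratio of its intrinsic
    chemical exchange rate under the test condition to the rate under
    the reference condition."""
    return k_reporter_test / k_reporter_reference

def rescale_labeling_time(t_label, factor):
    """Map a labeling time measured under the test condition onto the
    reference condition's chemical-exchange timescale."""
    return t_label * factor

# Illustrative: if an additive doubles intrinsic exchange, a 10 min
# labeling time corresponds to 20 min on the reference timescale.
f = exchange_correction_factor(k_reporter_test=2.0, k_reporter_reference=1.0)
t_eq = rescale_labeling_time(10.0, f)
```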

  17. The effects of geology and the impact of seasonal correction factors on indoor radon levels: a case study approach.

    PubMed

    Gillmore, Gavin K; Phillips, Paul S; Denman, Antony R

    2005-01-01

    Geology has been highlighted by a number of authors as a key factor in high indoor radon levels. In the light of this, this study examines the application of seasonal correction factors to indoor radon concentrations in the UK. This practice is based on an extensive database gathered by the National Radiological Protection Board over the years (small-scale surveys began in 1976 and continued with a larger scale survey in 1988) and reflects well known seasonal variations observed in indoor radon levels. However, due to the complexity of underlying geology (the UK arguably has the world's most complex solid and surficial geology over the shortest distances) and considerable variations in permeability of underlying materials it is clear that there are a significant number of occurrences where the application of a seasonal correction factor may give rise to over-estimated or under-estimated radon levels. Therefore, the practice of applying a seasonal correction should be one that is undertaken with caution, or not at all. This work is based on case studies taken from the Northamptonshire region and comparisons made to other permeable geologies in the UK.

  18. Sulfate and sulfide sulfur isotopes (δ34S and δ33S) measured by solution and laser ablation MC-ICP-MS: An enhanced approach using external correction

    USGS Publications Warehouse

    Pribil, Michael; Ridley, William I.; Emsbo, Poul

    2015-01-01

    Isotope ratio measurements by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) commonly use standard-sample bracketing with a single isotope standard for mass bias correction for elements with narrow-range isotope systems, e.g. Cu, Fe, Zn, and Hg. However, sulfur (S) isotopic composition (δ34S) in nature can range from at least −40 to +40‰, potentially exceeding the ability of standard-sample bracketing with a single sulfur isotope standard to accurately correct for mass bias. Isotopic fractionation via solution and laser ablation introduction was determined during sulfate sulfur (Ssulfate) isotope measurements. An external isotope calibration curve was constructed using in-house and National Institute of Standards and Technology (NIST) Ssulfate isotope reference materials (RM) in an attempt to correct for the difference. The ability of external isotope correction for Ssulfate isotope measurements was evaluated by analyzing NIST and United States Geological Survey (USGS) Ssulfate isotope reference materials as unknowns. Differences in δ34Ssulfate between standard-sample bracketing and standard-sample bracketing with external isotope correction for sulfate samples ranged from 0.72‰ to 2.35‰ over a δ34S range of 1.40‰ to 21.17‰. No isotopic differences were observed when analyzing Ssulfide reference materials over a δ34Ssulfide range of −32.1‰ to 17.3‰ and a δ33S range of −16.5‰ to 8.9‰ via laser ablation (LA)-MC-ICP-MS. Here, we identify a possible plasma-induced fractionation for Ssulfate and describe a new method using external isotope calibration corrections with solution and LA-MC-ICP-MS.
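A minimal sketch of delta notation and of an external calibration of the kind described: fit a least-squares line through measured versus accepted δ34S values of the reference materials, then map sample measurements through it. All function names and numbers are illustrative assumptions.

```python
def delta_permil(r_sample, r_standard):
    """Delta notation: (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

def fit_external_calibration(measured, accepted):
    """Least-squares line (slope, intercept) mapping measured delta
    values of reference materials onto their accepted values."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(accepted) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(measured, accepted))
             / sum((x - mx) ** 2 for x in measured))
    return slope, my - slope * mx

def externally_corrected(delta_measured, slope, intercept):
    """Apply the external calibration to a sample measurement."""
    return slope * delta_measured + intercept
```

Simple bracketing with a single standard implicitly assumes a slope of 1; fitting the line across several reference materials relaxes that assumption over a wide δ34S range.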

  19. Comment on: Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonis, Antonios; Zhang, Xiaoguang

    2012-01-01

    This is a comment on the paper by Aftab Alam, Brian G. Wilson, and D. D. Johnson [1], proposing a solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g. space-filling, charge densities. We point out that the problem considered by the authors can be avoided simply by performing certain integrals in a particular order, while their method does not address the genuine NFC problem that arises when the solution of the Poisson equation is attempted within multiple scattering theory. We also point out a flaw in their line of reasoning leading to the expression for the potential inside the bounding sphere of a cell, which makes it inapplicable to certain geometries.

  20. Exact-solution for cone-plate viscometry

    NASA Astrophysics Data System (ADS)

    Giacomin, A. J.; Gilbert, P. H.

    2017-11-01

    The viscosity of a Newtonian fluid is often measured by confining the fluid to the gap between a rotating cone and a fixed disk perpendicular to the cone's axis. We call this experiment cone-plate viscometry. When the cone angle approaches π/2, the viscometer gap is called narrow. The shear stress in the fluid throughout a narrow gap hardly departs from the shear stress exerted on the plate, and we thus call cone-plate flow nearly homogeneous. In this paper, we derive an exact solution for this slight heterogeneity and, from it, the correction factors for the shear rate on the cone and plate, for the torque, and thus for the measured Newtonian viscosity. These factors allow the cone-plate viscometer to be used more accurately, even with cone angles well below π/2. We find the heterogeneity of the cone-plate flow field to be far slighter than previously thought. We next use our exact solution for the velocity to arrive at the exact solution for the temperature rise, due to viscous dissipation, in cone-plate flow subject to isothermal boundaries. Since Newtonian viscosity is a strong function of temperature, we expect our new exact solution for the temperature rise to be useful to those measuring Newtonian viscosity, and especially so to those using wide gaps. We include two worked examples to teach practitioners how to use our main results.
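
    The classical narrow-gap relations that the paper's correction factors refine can be sketched as follows (standard textbook formulas for a small gap angle, with illustrative numbers; not the paper's exact solution):

```python
import math

# Narrow-gap cone-plate estimates (classical textbook formulas, given for
# context only). beta is the small gap angle between cone and plate, in rad.

def shear_rate(omega: float, beta: float) -> float:
    """Nominal shear rate in a narrow cone-plate gap: omega / beta."""
    return omega / beta

def newtonian_viscosity(torque: float, radius: float, omega: float, beta: float) -> float:
    """Viscosity from measured torque: mu = 3 T beta / (2 pi R^3 omega)."""
    return 3.0 * torque * beta / (2.0 * math.pi * radius**3 * omega)

# Illustrative numbers: R = 0.025 m, Omega = 10 rad/s, 2-degree gap, T = 1e-4 N m
beta = math.radians(2.0)
print(shear_rate(10.0, beta))                       # ~286 s^-1
print(newtonian_viscosity(1e-4, 0.025, 10.0, beta))  # Pa s
```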

  1. Method to determine the position-dependent metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    DOEpatents

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  2. The Harrison Diffusion Kinetics Regimes in Solute Grain Boundary Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belova, Irina; Fiedler, T; Kulkarni, Nagraj S

    2012-01-01

    Knowledge of the limits of the principal Harrison kinetics regimes (Types A, B, and C) for grain boundary diffusion is very important for the correct analysis of depth profiles in a tracer diffusion experiment. These regimes for self-diffusion have been extensively studied in the past using the phenomenological Lattice Monte Carlo (LMC) method, with the result that the limits are now well established. The relationship of those self-diffusion limits to the corresponding ones for solute diffusion in the presence of solute segregation to the grain boundaries remains unclear. In the present study, the influence of solute segregation on the limits is investigated with the LMC method for the well-known parallel grain boundary slab model by showing the equivalence of two diffusion models. It is shown which diffusion parameters are useful for identifying the limits of the Harrison kinetics regimes for solute grain boundary diffusion. It is also shown how the segregation factor measured in a diffusion experiment in the Harrison Type-B kinetics regime may differ from the global segregation factor.

  3. Development of a correction factor for Xe-133 vials for use with a dose calibrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gels, G.L.; Piltingsrud, H.V.

    1982-04-01

    Manufacturers of dose calibrators who give calibration settings for various radionuclides sometimes do not specify the type of radionuclide container the calibration is for. The container, moreover, may not be of the same type as those a user might purchase. When these factors are not considered, the activity administered to the patient may be significantly different from that intended. An experiment is described in which calibration factors are determined for the measurement of Xe-133 activity in vials in a dose calibrator. This was accomplished by transferring the Xe-133 from the commercial vials to standard NBS calibration ampules. Based on ten such transfers, the resulting correction factor for the dose calibrator was 1.22.
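
    Applying such a container correction is a one-line calculation; a sketch using the 1.22 factor reported above (the reading value is illustrative):

```python
# Applying a container-specific calibration correction (sketch). The abstract
# reports a factor of 1.22 for Xe-133 in the vials studied; the 10 mCi
# reading below is illustrative.

def corrected_activity(reading_mci: float, vial_factor: float = 1.22) -> float:
    """True activity = dose-calibrator reading x container correction factor."""
    return reading_mci * vial_factor

print(corrected_activity(10.0))  # a 10 mCi reading corresponds to 12.2 mCi
```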

  4. Method for Correcting Control Surface Angle Measurements in Single Viewpoint Photogrammetry

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W. (Inventor); Barrows, Danny A. (Inventor)

    2006-01-01

    A method of determining a corrected control surface angle for use in single viewpoint photogrammetry to correct control surface angle measurements affected by wing bending. First and second visual targets are spaced apart from one another on a control surface of an aircraft wing. The targets are positioned at a semispan distance along the aircraft wing. A reference target separation distance is determined using single viewpoint photogrammetry for a "wind off" condition. An apparent target separation distance is then computed for "wind on." The difference between the reference and apparent target separation distances is minimized by recomputing the single viewpoint photogrammetric solution for incrementally changed values of target semispan distances. A final single viewpoint photogrammetric solution is then generated that uses the corrected semispan distance that produced the minimized difference between the reference and apparent target separation distances. The final single viewpoint photogrammetric solution set is used to determine the corrected control surface angle.

  5. Ionization correction factors for H II regions in blue compact dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Holovatyi, V. V.; Melekh, B. Ya.

    2002-08-01

    Energy distributions in the spectra of the ionizing nuclei of H II regions at λ ≤ 91.2 nm were calculated. A grid of 270 photoionization models of H II regions was constructed. The free parameters of the model grid are the hydrogen density nH of the nebular gas, the filling factor, the energy Lc-spectrum of the ionizing nuclei, and the metallicity. The chemical compositions from the studies of Izotov et al. were used for model grid initialization. The integral line spectra calculated for the photoionization models were used to determine the electron density ne, electron temperature Te, and ionic concentrations n(A+i)/n(H+) by the nebular gas diagnostic method. The averaged relative ionic abundances n(A+i)/n(H+) thus calculated were used to derive new expressions for ionization correction factors, which we recommend for the determination of abundances in the H II regions of blue compact dwarf galaxies.

  6. Reliability of IGBT in a STATCOM for Harmonic Compensation and Power Factor Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak

    With smart grid integration, there is a need to characterize the reliability of a power system by including the reliability of power semiconductors in grid-related applications. In this paper, the reliability of IGBTs in a STATCOM is presented for two different applications: power factor correction and harmonic elimination. The STATCOM model is developed in EMTP, and analytical equations for the average conduction losses in an IGBT and a diode are derived and compared with experimental data. A commonly used reliability model is used to predict the reliability of the IGBT.

  7. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
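
    For context, the classical scanline bias correction that methods like this improve upon is Terzaghi-style weighting by 1/|cos θ|; a hedged sketch follows (the weight cap is a common practical choice, not a parameter from the paper, whose own method is distribution-based and different):

```python
import math

# Classical Terzaghi weighting for scanline orientation bias (shown for
# context only). theta is the angle between the scanline and the pole
# (normal) of a discontinuity; planes near-parallel to the scanline are
# under-sampled and so receive larger weights.

def terzaghi_weight(theta_deg: float, max_weight: float = 10.0) -> float:
    """Weight 1 / |cos(theta)|, capped to avoid blow-up near 90 degrees."""
    c = abs(math.cos(math.radians(theta_deg)))
    return min(1.0 / c, max_weight) if c > 0 else max_weight

print(terzaghi_weight(0.0))   # 1.0: scanline parallel to the pole, no bias
print(terzaghi_weight(60.0))  # 2.0: each observation counts double
```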

  8. Course Corrections. Experts Offer Solutions to the College Cost Crisis

    ERIC Educational Resources Information Center

    Lumina Foundation for Education, 2005

    2005-01-01

    This paper discusses outsourcing as one solution to the college cost crisis. It is not presented as the solution; rather, it is put forth as an attractive strategy characterized by minimal financial and programmatic risk. To explore the basic policy considerations associated with outsourcing, this paper briefly reviews why institutions consider…

  9. 78 FR 33698 - New Animal Drugs; Dexmedetomidine; Lasalocid; Melengestrol; Monensin; and Tylosin; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... 2013 that appeared in the Federal Register of April 30, 2013. FDA is correcting the approved strengths... correcting the approved strengths of dexmedetomidine hydrochloride injectable solution. This correction is...

  10. C5 nerve palsy after posterior reconstruction surgery: predictive risk factors of the incidence and critical range of correction for kyphosis.

    PubMed

    Kurakawa, Takuto; Miyamoto, Hiroshi; Kaneyama, Shuichi; Sumi, Masatoshi; Uno, Koki

    2016-07-01

    It has been reported that the incidence of post-operative segmental nerve palsy, such as C5 palsy, is higher in posterior reconstruction surgery than in conventional laminoplasty. Correction of kyphosis may be related to such a complication. The aim of this study was to elucidate the risk factors for post-operative C5 palsy and the critical range of sagittal realignment in posterior instrumentation surgery. Eighty-eight patients (mean age 64.0 years) were involved. The diagnoses were: spondylosis with kyphosis in 33, rheumatoid arthritis in 27, athetoid cerebral palsy in 17, and others in 11. The patients were divided into two groups: Group P, patients with post-operative C5 palsy, and Group NP, patients without C5 palsy. The correction angle of kyphosis and the pre-operative diameter of the C4/5 foramen on CT were compared between the two groups. Multivariate logistic regression analysis was used to determine the critical range of realignment and the risk factors affecting the incidence of post-operative C5 palsy. Seventeen (19.3%) of the 88 patients developed C5 palsy. The correction angle of kyphosis in Group P (15.7°) was significantly larger than that in Group NP (4.5°). In Group P, the pre-operative diameter of the intervertebral foramen at C4/5 (3.2 mm) was significantly smaller than that in Group NP (4.1 mm). The multivariate analysis demonstrated that the risk factors were the correction angle and the pre-operative diameter of the C4/5 intervertebral foramen. The logistic regression model showed that a correction angle exceeding 20° was critical for developing the palsy when the C4/5 foraminal diameter reaches 4.1 mm, and that there is a higher risk when the C4/5 foraminal diameter is less than 2.7 mm regardless of any correction. This study has indicated the risk factors for post-operative C5 palsy and the critical range of realignment of the cervical spine after posterior instrumented surgery.

  11. Difficulty Factors, Distribution Effects, and the Least Squares Simplex Data Matrix Solution

    ERIC Educational Resources Information Center

    Ten Berge, Jos M. F.

    1972-01-01

    In the present article it is argued that the Least Squares Simplex Data Matrix Solution does not deal adequately with difficulty factors inasmuch as the theoretical foundation is insufficient. (Author/CB)

  12. On stable exponential cosmological solutions with non-static volume factor in the Einstein-Gauss-Bonnet model

    NASA Astrophysics Data System (ADS)

    Ivashchuk, V. D.; Ernazarov, K. K.

    2017-01-01

    A (n + 1)-dimensional gravitational model with a cosmological constant and a Gauss-Bonnet term is studied. The ansatz of diagonal cosmological metrics is adopted, and solutions with exponential dependence of the scale factors, ai ~ exp(vi t), i = 1, …, n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v1 = v2 = v3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (vi) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.

  13. Factors Associated with Correct and Consistent Insecticide Treated Curtain Use in Iquitos, Peru

    PubMed Central

    Scott, Thomas W.; Elder, John P.; Alexander, Neal; Halsey, Eric S.; McCall, Philip J.

    2016-01-01

    Dengue is an arthropod-borne virus of great public health importance, and control of its mosquito vectors is currently the only available method for prevention. Previous research has suggested that insecticide treated curtains (ITCs) can lower dengue vector infestations in houses. This observational study investigated individual and household-level socio-demographic factors associated with correct and consistent use of ITCs in Iquitos, Peru. A baseline knowledge, attitudes, and practices (KAP) survey was administered to 1,333 study participants, and ITCs were then distributed to 593 households as part of a cluster-randomized trial. Follow-up KAP surveys and ITC-monitoring checklists were conducted at 9, 18, and 27 months post-ITC distribution. At 9 months post-distribution, almost 70% of ITCs were hanging properly (e.g. hanging fully extended or tied up), particularly those hung on walls compared to other locations. Proper ITC hanging dropped at 18 months to 45.7%. The odds of hanging ITCs correctly and consistently were significantly greater among those participants who were housewives, knew three or more correct symptoms of dengue and at least one correct treatment for dengue, knew a relative or close friend who had had dengue, had children sleeping under a mosquito net, or perceived a change in the amount of mosquitoes in the home. Additionally, the odds of recommending ITCs in the future were significantly greater among those who perceived a change in the amount of mosquitoes in the home (e.g. perceived the ITCs to be effective). Despite various challenges associated with the sustained effectiveness of the selected ITCs, almost half of the ITCs were still hanging at 18 months, suggesting a feasible vector control strategy for sustained community use. PMID:26967157

  14. Absorptive corrections for vector mesons: matching to complex mass scheme and longitudinal corrections

    NASA Astrophysics Data System (ADS)

    Jiménez Pérez, L. A.; Toledo Sánchez, G.

    2017-12-01

    Unstable spin-1 particles are properly described by including absorptive corrections to the electromagnetic vertex and propagator, without breaking electromagnetic gauge invariance. We show that the modified propagator can be set in a complex mass form, provided the mass and width parameters, which are properly defined at the pole, are replaced by energy-dependent functions fulfilling the same requirements at the pole. We exemplify the case for the K*(892) vector meson, and find that the mass function deviates by around 2 MeV from the Kπ threshold to the pole, and that the width function exhibits a different behavior compared to the uncorrected energy-dependent width. Considering the τ− → K_S π− ν_τ decay as dominated by the K*(892) and K′*(1410) vectors and one scalar particle, we exhibit the role of the transversal and longitudinal corrections to the vector propagator by obtaining the modified vector and scalar form factors. The modified vector form factor is found to be the same as in the complex mass form, while the scalar form factor receives a modification from the longitudinal correction to the vector propagator. A fit to the experimental Kπ spectrum shows that the phase induced by the presence of this new contribution in the scalar sector improves the description of the experimental data in the troublesome region around 0.7 GeV. Besides that, the correction to the scalar form factor is found to be negligible.

  15. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  16. Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium

    NASA Technical Reports Server (NTRS)

    Erickson, Wayne D.

    1960-01-01

    The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 °F to 600 °F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors to be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law that exist at high pressures may cause a correspondingly significant error in the hypersonic flow parameters when calculated as an ideal gas. For example, the ratio of free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 °F and a pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.
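
    The way such a correction factor is applied can be sketched as follows: compute the ideal-gas isentropic ratio for a monatomic gas (γ = 5/3), then multiply by the tabulated factor. The 1.13 below illustrates the ~13% deviation quoted for one stagnation condition; it is not a general value.

```python
# Sketch: ideal-gas hypersonic static-to-stagnation pressure ratio for a
# monatomic gas (gamma = 5/3), then application of a tabulated real-gas
# correction factor. The factor 1.13 is illustrative of the abstract's
# quoted ~13% deviation at one condition, not a general value.
GAMMA = 5.0 / 3.0

def ideal_pressure_ratio(mach: float, gamma: float = GAMMA) -> float:
    """Isentropic p / p0 for an ideal gas at a given Mach number."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (-gamma / (gamma - 1.0))

def corrected_pressure_ratio(mach: float, factor: float) -> float:
    """Real-gas value = ideal-gas value x tabulated correction factor."""
    return ideal_pressure_ratio(mach) * factor

print(round(ideal_pressure_ratio(5.0), 6))
print(round(corrected_pressure_ratio(5.0, 1.13), 6))
```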

  17. Does an electronic continuum correction improve effective short-range ion-ion interactions in aqueous solution?

    NASA Astrophysics Data System (ADS)

    Bruce, Ellen E.; van der Vegt, Nico F. A.

    2018-06-01

    Non-polarizable force fields for hydrated ions do not always accurately describe short-range ion-ion interactions, frequently leading to artificial ion clustering in bulk aqueous solutions. This can be avoided by adjusting the nonbonded anion-cation or cation-water Lennard-Jones parameters. This approach has been successfully applied to different systems, but the parameterization is demanding owing to the necessity of separate investigations of each ion pair. Alternatively, polarization effects may effectively be accounted for using the electronic continuum correction (ECC) of Leontyev et al. [J. Chem. Phys. 119, 8024 (2003)], which involves scaling the ionic charges with the inverse square root of the water high-frequency dielectric permittivity. ECC has proven to perform well for monovalent salts as well as for divalent salts in water. Its performance, however, for multivalent salts with higher valency remains unexplored. The present work illustrates the applicability of the ECC model to trivalent K3PO4 and divalent K2HPO4 in water. We demonstrate that the ECC models, without additional tuning of force field parameters, provide an accurate description of water-mediated interactions between salt ions. This results in predictions of the osmotic coefficients of aqueous K3PO4 and K2HPO4 solutions in good agreement with experimental data. Analysis of ion pairing thermodynamics in terms of contact ion pair (CIP), solvent-separated ion pair, and double solvent-separated ion pair contributions shows that potassium-phosphate CIP formation is stronger with trivalent than with divalent phosphate ions.
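
    The ECC charge scaling itself is a one-line operation; a minimal sketch, assuming the commonly used electronic permittivity of water ε_el ≈ 1.78 (the square of the refractive index n ≈ 1.33):

```python
import math

# Electronic continuum correction (ECC): ionic charges are scaled by
# 1 / sqrt(eps_el), where eps_el is the high-frequency (electronic)
# dielectric permittivity of the solvent (~1.78 for water, i.e. n^2 with
# refractive index n ~ 1.33). A sketch; force-field details are omitted.

EPS_EL_WATER = 1.78

def ecc_charge(formal_charge: float, eps_el: float = EPS_EL_WATER) -> float:
    """Effective (scaled) charge used in the non-polarizable force field."""
    return formal_charge / math.sqrt(eps_el)

print(round(ecc_charge(-3.0), 3))  # phosphate PO4(3-) in K3PO4
print(round(ecc_charge(+1.0), 3))  # potassium K(+)
```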

  18. Deriving detector-specific correction factors for rectangular small fields using a scintillator detector.

    PubMed

    Qin, Yujiao; Zhong, Hualiang; Wen, Ning; Snyder, Karen; Huang, Yimei; Chetty, Indrin J

    2016-11-08

    The goal of this study was to investigate small field output factors (OFs) for flattening filter-free (FFF) beams on a dedicated stereotactic linear accelerator-based system. From this data, the collimator exchange effect was quantified, and detector-specific correction factors were generated. Output factors for 16 jaw-collimated small fields (from 0.5 to 2 cm) were measured using five different detectors including an ion chamber (CC01), a stereotactic field diode (SFD), a diode detector (Edge), Gafchromic film (EBT3), and a plastic scintillator detector (PSD, W1). Chamber, diodes, and PSD measurements were performed in a Wellhofer water tank, while films were irradiated in solid water at 100 cm source-to-surface distance and 10 cm depth. The collimator exchange effect was quantified for rectangular fields. Monte Carlo (MC) simulations of the measured configurations were also performed using the EGSnrc/DOSXYZnrc code. Output factors measured by the PSD and verified against film and MC calculations were chosen as the benchmark measurements. Compared with the plastic scintillator detector (PSD), the small volume ion chamber (CC01) underestimated output factors by an average of -1.0% ± 4.9% (max. = -11.7% for the 0.5 × 0.5 cm2 square field). The stereotactic diode (SFD) overestimated output factors by 2.5% ± 0.4% (max. = 3.3% for the 0.5 × 1 cm2 rectangular field). The other diode detector (Edge) also overestimated the OFs by an average of 4.2% ± 0.9% (max. = 6.0% for the 1 × 1 cm2 square field). Gafchromic film (EBT3) measurements and MC calculations agreed with the scintillator detector measurements within 0.6% ± 1.8% and 1.2% ± 1.5%, respectively. Across all the X and Y jaw combinations, the average collimator exchange effect was computed: 1.4% ± 1.1% (CC01), 5.8% ± 5.4% (SFD), 5.1% ± 4.8% (Edge diode), 3.5% ± 5.0% (Monte Carlo), 3.8% ± 4.7% (film), and 5.5% ± 5.1% (PSD). Small field detectors should be used with caution with a clear understanding of their

  19. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

    The presence of systematic noise in images from high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate its performance at two levels: (a) the Z'-factor, to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
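
    The general idea of retrospective illumination correction can be sketched as follows (a minimal sketch of the technique, not the paper's published pipeline; function names are illustrative):

```python
import numpy as np

# Minimal retrospective illumination correction: estimate the illumination
# function as the per-pixel mean over many images from a screen, normalize
# it so the overall intensity scale is preserved, and divide each image by it.

def illumination_function(images: np.ndarray) -> np.ndarray:
    """images: stack of shape (n, h, w). Returns the per-pixel mean,
    normalized to have unit average."""
    mean_img = images.mean(axis=0)
    return mean_img / mean_img.mean()

def correct(images: np.ndarray) -> np.ndarray:
    """Divide each image in the stack by the estimated illumination."""
    return images / illumination_function(images)

# Synthetic check: a left-to-right illumination gradient applied to flat fields.
gradient = np.linspace(0.5, 1.5, 64)[None, :] * np.ones((64, 64))
stack = np.stack([100.0 * gradient for _ in range(8)])
corrected = correct(stack)
print(float(corrected.std()) < 1e-6)  # gradient removed: images now flat
```

In practice the illumination estimate is usually smoothed and computed over many fields of view, but the division step is the same.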

  20. The Computation of Orthogonal Independent Cluster Solutions and Their Oblique Analogs in Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    A very general model for the computation of independent cluster solutions in factor analysis is presented. The model is discussed as being either orthogonal or oblique. Furthermore, it is demonstrated that for every orthogonal independent cluster solution there is an oblique analog. Using three illustrative examples, certain generalities are made…

  1. Downward continuation of the free-air gravity anomalies to the ellipsoid using the gradient solution and terrain correction: An attempt of global numerical computations

    NASA Technical Reports Server (NTRS)

    Wang, Y. M.

    1989-01-01

    The formulas for the determination of the coefficients of the spherical harmonic expansion of the disturbing potential of the earth are defined for data given on a sphere. In order to determine the spherical harmonic coefficients, the gravity anomalies have to be analytically continued downward from the earth's surface to a sphere, or at least to the ellipsoid. The goal is to continue the gravity anomalies from the earth's surface down to the ellipsoid using recent elevation models. The basic method for the downward continuation is the gradient solution (the g1 term). The terrain correction was also computed because of the role it can play as a correction term when calculating harmonic coefficients from surface gravity data. The fast Fourier transform was applied to the computations.

  2. Ionization chamber correction factors for MR-linacs

    NASA Astrophysics Data System (ADS)

    Pojtinger, Stefan; Steffen Dohm, Oliver; Kapsch, Ralf-Peter; Thorwarth, Daniela

    2018-06-01

    Previously, readings of air-filled ionization chambers have been described as being influenced by magnetic fields. To use these chambers for dosimetry in magnetic resonance guided radiotherapy (MRgRT), this effect must be taken into account by introducing a correction factor kB. The purpose of this study is to systematically investigate kB for a typical reference setup for commercially available ionization chambers with different magnetic field strengths. The Monte Carlo simulation tool EGSnrc was used to simulate eight commercially available ionization chambers in magnetic fields whose magnetic flux density was in the range of 0–2.5 T. To validate the simulation, the influence of the magnetic field was experimentally determined for a PTW30013 Farmer-type chamber for magnetic flux densities between 0 and 1.425 T. Changes in the detector response of up to 8% depending on the magnetic flux density, on the chamber geometry and on the chamber orientation were obtained. In the experimental setup, a maximum deviation of less than 2% was observed when comparing measured values with simulated values. Dedicated values for two MR-linac systems (ViewRay MRIdian, ViewRay Inc, Cleveland, United States, 0.35 T/6 MV and Elekta Unity, Elekta AB, Stockholm, Sweden, 1.5 T/7 MV) were determined for future use in reference dosimetry. Simulated values for thimble-type chambers are in good agreement with experiments as well as with the results of previous publications. After further experimental validation, the results can be considered for definition of standard protocols for purposes of reference dosimetry in MRgRT.
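
    How such a factor enters reference dosimetry can be sketched with the usual absorbed-dose-to-water formalism (all numerical values below are illustrative, not calibration data):

```python
# How kB enters reference dosimetry (sketch, following the usual
# N_D,w-based formalism; every number below is illustrative).

def dose_to_water(reading: float, n_dw: float, k_q: float, k_b: float) -> float:
    """D_w = M * N_D,w * kQ * kB, where kB corrects the chamber reading
    for the influence of the magnetic field."""
    return reading * n_dw * k_q * k_b

# If the magnetic field raises the reading by 2%, the factor is kB = 1/1.02.
m, n_dw, k_q = 20.5e-9, 5.4e7, 0.99   # charge [C], Gy/C, beam quality factor
print(round(dose_to_water(m, n_dw, k_q, 1.0 / 1.02), 4))  # dose in Gy
```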

  3. Ionization chamber correction factors for MR-linacs.

    PubMed

    Pojtinger, Stefan; Dohm, Oliver Steffen; Kapsch, Ralf-Peter; Thorwarth, Daniela

    2018-06-07

    Previously, readings of air-filled ionization chambers have been described as being influenced by magnetic fields. To use these chambers for dosimetry in magnetic resonance guided radiotherapy (MRgRT), this effect must be taken into account by introducing a correction factor kB. The purpose of this study is to systematically investigate kB for a typical reference setup for commercially available ionization chambers with different magnetic field strengths. The Monte Carlo simulation tool EGSnrc was used to simulate eight commercially available ionization chambers in magnetic fields whose magnetic flux density was in the range of 0-2.5 T. To validate the simulation, the influence of the magnetic field was experimentally determined for a PTW30013 Farmer-type chamber for magnetic flux densities between 0 and 1.425 T. Changes in the detector response of up to 8% depending on the magnetic flux density, on the chamber geometry and on the chamber orientation were obtained. In the experimental setup, a maximum deviation of less than 2% was observed when comparing measured values with simulated values. Dedicated values for two MR-linac systems (ViewRay MRIdian, ViewRay Inc, Cleveland, United States, 0.35 T/6 MV and Elekta Unity, Elekta AB, Stockholm, Sweden, 1.5 T/7 MV) were determined for future use in reference dosimetry. Simulated values for thimble-type chambers are in good agreement with experiments as well as with the results of previous publications. After further experimental validation, the results can be considered for definition of standard protocols for purposes of reference dosimetry in MRgRT.

  4. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  5. Carrier-phase multipath corrections for GPS-based satellite attitude determination

    NASA Technical Reports Server (NTRS)

    Axelrad, A.; Reichert, P.

    2001-01-01

    This paper demonstrates the high degree of spatial repeatability of these errors for a spacecraft environment and describes a correction technique, termed the sky map method, which exploits the spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.

  6. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→ γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η -η' mixing.

  7. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  8. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
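    The problem being solved above can be illustrated with a standard null-space method built on a QR factorization (a minimal sketch for context, not the authors' updating algorithm; function and variable names are illustrative):

    ```python
    import numpy as np

    def lse_nullspace(A, b, C, d):
        """Solve min ||Ax - b||_2 subject to Cx = d (C of full row rank m)
        via a complete QR factorization of C^T: the columns of Q2 span the
        null space of C, so every feasible point is x = Q1 u + Q2 y."""
        m, _ = C.shape
        Q, R = np.linalg.qr(C.T, mode="complete")
        Q1, Q2 = Q[:, :m], Q[:, m:]
        # C x = d becomes R1^T u = d with u = Q1^T x (triangular solve)
        u = np.linalg.solve(R[:m, :].T, d)
        # minimize over the free coordinates y in the null space of C
        y, *_ = np.linalg.lstsq(A @ Q2, b - A @ Q1 @ u, rcond=None)
        return Q1 @ u + Q2 @ y
    ```

    The article's approach instead partitions the reduced problem and updates the triangular factor R of a small subproblem; the sketch above only shows the problem that the updating procedure solves.
    
    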

  9. Correction of measured Gamma-Knife output factors for angular dependence of diode detectors and PinPoint ionization chamber.

    PubMed

    Hršak, Hrvoje; Majer, Marija; Grego, Timor; Bibić, Juraj; Heinrich, Zdravko

    2014-12-01

    Dosimetry for Gamma-Knife requires detectors with high spatial resolution and minimal angular dependence of response. Angular dependence and end effect time for p-type silicon detectors (PTW Diode P and Diode E) and a PTW PinPoint ionization chamber were measured with Gamma-Knife beams. Weighted angular dependence correction factors were calculated for each detector, and the Gamma-Knife output factors were corrected for angular dependence and end effect time. Over the Gamma-Knife beam angle range of 84°-54°, Diode P shows a considerable angular dependence of 9% and 8% for the 18 mm and the 14, 8 and 4 mm collimators, respectively. For Diode E this dependence is about 4% for all collimators. The PinPoint ionization chamber shows an angular dependence of less than 3% for the 18, 14 and 8 mm helmets and of 10% for the 4 mm collimator, due to the volumetric averaging effect in a small photon beam. Corrected output factors for the 14 mm helmet are in very good agreement (within ±0.3%) with published data and the values recommended by the vendor (Elekta AB, Stockholm, Sweden). For the 8 mm collimator the diodes are still in good agreement with the recommended values (within ±0.6%), while the PinPoint chamber reads 3% low. For the 4 mm helmet, Diodes P and E show an over-response of 2.8% and 1.8%, respectively. For the PinPoint chamber, the output factor of the 4 mm collimator is 25% lower than the Elekta value, which is generally not a consequence of angular dependence but of the volumetric averaging effect and the lack of lateral electronic equilibrium. Diodes P and E are a good choice for Gamma-Knife dosimetry. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.

  11. Alternate Solution to Generalized Bernoulli Equations via an Integrating Factor: An Exact Differential Equation Approach

    ERIC Educational Resources Information Center

    Tisdell, C. C.

    2017-01-01

    Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…
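    For context, the classical substitution route, to which the exact-equation approach above offers an alternative, runs as follows (standard textbook material, not the truncated abstract's derivation):

    ```latex
    % Bernoulli equation and its reduction to a linear ODE
    y' + p(x)\,y = q(x)\,y^{n}, \qquad n \neq 0,\, 1.
    % Divide by y^{n} and substitute v = y^{1-n}, so that v' = (1-n)\,y^{-n}\,y':
    v' + (1-n)\,p(x)\,v = (1-n)\,q(x).
    % The linear equation is solved with the integrating factor
    \mu(x) = e^{(1-n)\int p(x)\,dx}, \qquad
    v(x) = \frac{1}{\mu(x)}\left[(1-n)\int \mu(x)\,q(x)\,dx + C\right].
    ```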

  12. Stress Management in Correctional Recreation.

    ERIC Educational Resources Information Center

    Card, Jaclyn A.

    Current economic conditions have created additional sources of stress in the correctional setting. Often, recreation professionals employed in these settings also add to inmate stress. One of the major factors limiting stress management in correctional settings is a lack of understanding of the value, importance, and perceived freedom of leisure.…

  13. Geometrical E-beam proximity correction for raster scan systems

    NASA Astrophysics Data System (ADS)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-04-01

    High pattern fidelity is a basic requirement for the generation of masks containing sub-micron structures and for direct writing. Increasing needs, mainly emerging from OPC at mask level and from x-ray lithography, require a correction of the e-beam proximity effect. Most e-beam writers are raster scan systems. This paper describes a new method for geometrical pattern correction that provides a correction solution for e-beam systems that are not able to apply variable doses.

  14. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC and present a convergence speed factor, which adaptively changes the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The nonuniformity space relativity characteristic was summarized from extensive experimental statistics and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
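    A minimal scene-based NUC sketch in the spirit of the record above (not the authors' algorithm; the step rule tied to scene dynamic range is an illustrative assumption, and all names are hypothetical):

    ```python
    import numpy as np

    def box3(img):
        """3x3 mean filter with edge replication; the spatial low-pass target."""
        p = np.pad(img, 1, mode="edge")
        H, W = img.shape
        return sum(p[1 + i:H + 1 + i, 1 + j:W + 1 + j]
                   for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

    def nuc_update(gain, offset, raw, base_lr=0.2):
        """One LMS-style correction step: push the corrected frame toward its
        spatial low-pass, with a convergence-speed factor that scales the step
        by the scene dynamic range (illustrative heuristic, not the paper's)."""
        corrected = gain * raw + offset
        err = corrected - box3(corrected)               # high-frequency fixed pattern
        lr = base_lr * np.clip(raw.max() - raw.min(), 0.0, 1.0)
        gain -= lr * err * raw
        offset -= lr * err
        return gain, offset, corrected
    ```

    Running this over a frame sequence drives the per-pixel gain and offset so that the high-frequency fixed-pattern noise is progressively suppressed.
    
    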

  15. Dead-time Corrected Disdrometer Data

    DOE Data Explorer

    Bartholomew, Mary Jane

    2008-03-05

    Original and dead-time corrected disdrometer results for observations made at SGP and TWP. The correction is based on the technique discussed in Sheppard and Joe, 1994. In addition, these files contain the calculated radar reflectivity factor, mean Doppler velocity, and attenuation for every measurement, for both the original and dead-time corrected data, at the following wavelengths: 0.316, 0.856, 3.2, 5, and 10 cm (W, K, X, C, S bands). Pavlos Kollias provided the code to do these calculations.
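    The reflectivity-factor calculation from a binned drop-size spectrum can be sketched as follows (a generic illustration, not the code mentioned in the record; the sample area, integration time, and the fall-speed power law are assumptions):

    ```python
    import numpy as np

    def reflectivity_dbz(counts, diam_mm, area_m2=0.005, dt_s=60.0):
        """Radar reflectivity factor Z = sum_i n_i D_i^6 / (A dt v(D_i))
        in mm^6 m^-3 from binned disdrometer drop counts, returned in dBZ."""
        v = 3.78 * diam_mm ** 0.67          # fall speed (m/s), a common power law
        z = np.sum(counts * diam_mm ** 6 / (area_m2 * dt_s * v))
        return 10.0 * np.log10(z)
    ```

    The D^6 dependence is why a few large drops dominate the Rayleigh-regime reflectivity, and why dead-time corrections to the counts matter.
    
    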

  16. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x^2 should be selected if, over the entire concentration range, σ is a constant, σ^2 is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x^2 should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x^2 and 1/y^2 was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
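    The weighted fit underlying this selection rule can be illustrated with a direct weighted least-squares solve (a minimal sketch, not the authors' procedure; variable names are hypothetical):

    ```python
    import numpy as np

    def wls_line(x, y, weights):
        """Fit y = a + b*x by minimizing sum_i w_i (y_i - a - b*x_i)^2,
        solving the weighted normal equations (X^T W X) beta = X^T W y."""
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(weights)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    # 1/x^2 weighting, appropriate when the response SD grows linearly with x
    x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0])
    y = 0.5 + 2.0 * x
    a, b = wls_line(x, y, 1.0 / x ** 2)
    ```

    On exact data any weighting recovers the same line; the choice matters for heteroscedastic noise, where 1/x^2 keeps the low-concentration standards from being swamped by the high end of the curve.
    
    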

  17. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  18. Kerr-Newman black holes with string corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Anthony M.; Larsen, Finn

    We study N = 2 supergravity with higher-derivative corrections that preserve the N = 2 supersymmetry and show that Kerr-Newman black holes are solutions to these theories. Modifications of the black hole entropy due to the higher derivatives are universal and apply even in the BPS and Schwarzschild limits. Our solutions and their entropy are greatly simplified by supersymmetry of the theory even though the black holes generally do not preserve any of the supersymmetry.

  19. Kerr-Newman black holes with string corrections

    DOE PAGES

    Charles, Anthony M.; Larsen, Finn

    2016-10-26

    We study N = 2 supergravity with higher-derivative corrections that preserve the N = 2 supersymmetry and show that Kerr-Newman black holes are solutions to these theories. Modifications of the black hole entropy due to the higher derivatives are universal and apply even in the BPS and Schwarzschild limits. Our solutions and their entropy are greatly simplified by supersymmetry of the theory even though the black holes generally do not preserve any of the supersymmetry.

  20. Iterative CT shading correction with no prior information

    NASA Astrophysics Data System (ADS)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources are generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan®600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical
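    The iterative loop described above (segment, build a template, low-pass the residual, subtract, repeat) can be sketched in a simplified image-domain form; this is an illustrative analogue that deliberately skips the forward-projection and FDK reconstruction steps of the actual method, with all parameters assumed:

    ```python
    import numpy as np

    def gaussian_lowpass(img, sigma):
        """Gaussian low-pass via FFT (periodic boundaries; fine for a sketch)."""
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel))

    def shading_correct(img, threshold, means, sigma=4.0, n_iter=10):
        """Iteratively estimate low-frequency shading as the smoothed residual
        between the image and a two-tissue template, then subtract it."""
        out = img.copy()
        for _ in range(n_iter):
            template = np.where(out < threshold, means[0], means[1])
            out = out - gaussian_lowpass(out - template, sigma)
        return out
    ```

    The iteration matters because the segmentation of a shaded image is imperfect; each pass improves the template, which in turn improves the shading estimate.
    
    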

  1. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: The NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber’s response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum. Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure

  2. Dilatation-dissipation corrections for advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve both the k-omega and k-epsilon model predicted effect of Mach number on spreading rate. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock separated flow with the same values for all closure coefficients.

  3. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
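    The prediction-correction idea can be seen on a scalar toy problem, tracking x*(t) = argmin ½(x − a(t))² (a deliberately simple sketch, not the constrained first-order method of the paper; all names are illustrative):

    ```python
    import numpy as np

    def track(a, adot, h=0.1, alpha=0.5, predict=True, steps=200):
        """Track the minimizer of f(x; t) = 0.5*(x - a(t))^2 as t advances.
        The prediction step moves x along the drift of the optimizer; the
        correction step takes one gradient step on the new cost.
        Returns the average tracking error over the later iterations."""
        x, t, errs = 0.0, 0.0, []
        for _ in range(steps):
            if predict:
                x += adot(t) * h            # prediction: follow the optimizer drift
            t += h
            x -= alpha * (x - a(t))         # correction: gradient step on f(.; t)
            errs.append(abs(x - a(t)))
        return float(np.mean(errs[steps // 2:]))
    ```

    With a(t) = sin t, the prediction step cuts the steady tracking error from O(h) to O(h²), which is the benefit the paper quantifies in a far more general setting.
    
    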

  4. LETTER TO THE EDITOR: On the pdis correction factor for cylindrical chambers

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro

    2010-03-01

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor pdis for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chambers response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values negligible on practical high-energy photon dosimetry. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than say 0.2% is extremely small. This work also raises concerns on the proposed value of pdis for Farmer chambers at the reference quality of 60Co in relation to their impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in pdis (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount. This would yield a severe discrepancy with the current good agreement between

  5. Pollen Aquaporins: The Solute Factor.

    PubMed

    Pérez Di Giorgio, Juliana A; Soto, Gabriela C; Muschietti, Jorge P; Amodeo, Gabriela

    2016-01-01

    In recent years, the biophysical properties and presumed physiological role of aquaporins (AQPs) have been expanded to specialized cells where water and solute exchange are crucial traits. Complex but unique processes such as stomatal movement or pollen hydration and germination have been addressed not only by identifying the specific AQPs involved but also by studying how these proteins integrate and coordinate cellular activities and functions. In this review, we refer specifically to pollen-specific AQPs and analyze what has been assumed in terms of transport properties and what has been found in terms of their physiological role. Unlike that in many other cells, the AQP machinery in mature pollen lacks plasma membrane intrinsic proteins, which are extensively studied for their high water-exchange capacity. Instead, a variety of TIPs and NIPs are expressed in pollen. These findings have altered the initial understanding of AQPs and water exchange to consider specific and diverse solutes that might be critical to sustaining pollen's success. The spatial and temporal distribution of the pollen AQPs also reflects a regulatory mechanism that allows water and solute exchange to be properly adjusted.

  6. Exploring item and higher order factor structure with the Schmid-Leiman solution: syntax codes for SPSS and SAS.

    PubMed

    Wolff, Hans-Georg; Preising, Katja

    2005-02-01

    To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
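    Since the transformation itself is only a few matrix operations, a small numpy sketch may help (an illustration of the standard SLS algebra, not the authors' SPSS/SAS syntax; names are hypothetical):

    ```python
    import numpy as np

    def schmid_leiman(L1, L2):
        """Schmid-Leiman solution for a two-level factor model.

        L1: variables x first-order-factor loading matrix
        L2: first-order x higher-order loading matrix (orthogonal higher order)
        Returns (general, group): direct loadings of the variables on the
        higher-order factor(s), and residualized first-order loadings."""
        h2 = np.sum(L2 ** 2, axis=1)              # first-order factor communalities
        general = L1 @ L2                         # variables' higher-order loadings
        group = L1 * np.sqrt(1.0 - h2)            # orthogonalized group loadings
        return general, group
    ```

    The two blocks together reproduce the model-implied common variance, which is the orthogonalization property that makes the relative impact of the factor levels interpretable.
    
    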

  7. Region of Interest Correction Factors Improve Reliability of Diffusion Imaging Measures Within and Across Scanners and Field Strengths

    PubMed Central

    Venkatraman, Vijay K; Gonzalez, Christopher E.; Landman, Bennett; Goh, Joshua; Reiter, David A.; An, Yang; Resnick, Susan M.

    2017-01-01

    Diffusion tensor imaging (DTI) measures are commonly used as imaging markers to investigate individual differences in relation to behavioral and health-related characteristics. However, the ability to detect reliable associations in cross-sectional or longitudinal studies is limited by the reliability of the diffusion measures. Several studies have examined reliability of diffusion measures within (i.e. intra-site) and across (i.e. inter-site) scanners with mixed results. Our study compares the test-retest reliability of diffusion measures within and across scanners and field strengths in cognitively normal older adults with a follow-up interval less than 2.25 years. Intra-class correlation (ICC) and coefficient of variation (CoV) of fractional anisotropy (FA) and mean diffusivity (MD) were evaluated in sixteen white matter and twenty-six gray matter bilateral regions. The ICC for intra-site reliability (0.32 to 0.96 for FA and 0.18 to 0.95 for MD in white matter regions; 0.27 to 0.89 for MD and 0.03 to 0.79 for FA in gray matter regions) and inter-site reliability (0.28 to 0.95 for FA in white matter regions, 0.02 to 0.86 for MD in gray matter regions) with longer follow-up intervals were similar to earlier studies using shorter follow-up intervals. The reliability of across field strengths comparisons was lower than intra- and inter-site reliability. Within and across scanner comparisons showed that diffusion measures were more stable in larger white matter regions (> 1500 mm3). For gray matter regions, the MD measure showed stability in specific regions and was not dependent on region size. Linear correction factor estimated from cross-sectional or longitudinal data improved the reliability across field strengths. 
Our findings indicate that investigations relating diffusion measures to external variables must consider variable reliability across the distinct regions of interest and that correction factors can be used to improve consistency of measurement across

  8. Strategies for Meeting Correctional Training and Manpower Needs, Four Developmental Projects.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    The Law Enforcement Education Act of 1965 has placed special emphasis on projects involving training and utilization of correctional manpower. The four representative projects reported here give a comprehensive view of the problems of upgrading correctional staff and possible solutions to those problems: (1) "The Developmental Laboratory for…

  9. Factors influencing the adoption of self-management solutions: an interpretive synthesis of the literature on stakeholder experiences.

    PubMed

    Harvey, J; Dopson, S; McManus, R J; Powell, J

    2015-11-13

    In a research context, self-management solutions, which may range from simple book diaries to complex telehealth packages, designed to facilitate patients in managing their long-term conditions, have often shown cost-effectiveness, but their implementation in practice has frequently been challenging. We conducted an interpretive qualitative synthesis of relevant articles identified through systematic searches of bibliographic databases in July 2014. We searched PubMed (Medline/NLM), Web of Science, LISTA (EBSCO), CINAHL, Embase and PsycINFO. Coding and analysis was inductive, using the framework method to code and to categorise themes. We took a sensemaking approach to the interpretation of findings. Fifty-eight articles were selected for synthesis. Results showed that during adoption, factors identified as facilitators by some were experienced as barriers by others, and facilitators could change to barriers for the same adopter, depending on how adopters rationalise the solutions within their context when making decisions about (retaining) adoption. Sometimes, when adopters saw and experienced benefits of a solution, they continued using the solution but changed their minds when they could no longer see the benefits. Thus, adopters placed a positive value on the solution if they could constructively rationalise it (which increased adoption) and attached a negative rationale (decreasing adoption) if the solution did not meet their expectations. Key factors that influenced the way adopters rationalised the solutions consisted of costs and the added value of the solution to them and moral, social, motivational and cultural factors. Considering 'barriers' and 'facilitators' for implementation may be too simplistic. Implementers could instead iteratively re-evaluate how potential facilitators and barriers are being experienced by adopters throughout the implementation process, to help adopters to retain constructive evaluations of the solution. 
Implementers need to pay

  10. A mass-balanced definition of corrected retention volume in gas chromatography.

    PubMed

    Kurganov, A

    2007-05-25

    The mass balance equation of a chromatographic system using a compressible mobile phase has been compiled for the mass flow of the mobile phase instead of the traditional volumetric flow, allowing solution of the equation in analytical form. The relation obtained correlates the retention volume measured under ambient conditions with the partition coefficient of the solute. Compared to the relation for the ideal chromatographic system, the equation derived contains an additional correction term accounting for the compressibility of the mobile phase. When the retention volume is measured at the mean column pressure and column temperature, the correction term reduces to unity and the relation simplifies to that known for the ideal system. This volume, according to the International Union of Pure and Applied Chemistry (IUPAC), is called the corrected retention volume.

  11. Drag Corrections in High-Speed Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Ludwieg, H.

    1947-01-01

    In the vicinity of a body in a wind tunnel, the displacement effect of the wake, due to the finite dimensions of the stream, produces a pressure gradient which evokes a change of drag. In incompressible flow this change of drag is, in general, so small that one does not have to take it into account in wind-tunnel measurements; however, in compressible flow it becomes considerably larger, so that a correction factor is necessary for measured values. Correction factors for a closed tunnel and an open jet with circular cross sections are calculated and compared with the drag corrections already known for high-speed tunnels.

  12. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
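
    A minimal sketch of an SDC time step with the residual-based stopping rule described above. The node choice (equispaced, which for three nodes coincides with Lobatto points), explicit-Euler sweeps, and tolerances are illustrative assumptions, not the paper's configuration:

    ```python
    import numpy as np

    def integration_matrix(nodes):
        """S[m, j] = integral of the j-th Lagrange basis over [t_m, t_{m+1}]."""
        M = len(nodes)
        S = np.zeros((M - 1, M))
        for j in range(M):
            others = np.delete(nodes, j)
            cj = np.poly(others) / np.prod(nodes[j] - others)  # coefficients of l_j
            Cj = np.polyint(cj)
            for m in range(M - 1):
                S[m, j] = np.polyval(Cj, nodes[m + 1]) - np.polyval(Cj, nodes[m])
        return S

    def sdc_step(f, y0, t0, dt, n_nodes=3, max_sweeps=40, rel_tol=1e-10):
        """One SDC step with explicit-Euler correction sweeps; iterate until the
        collocation residual is small relative to the first sweep's residual."""
        t = t0 + dt * np.linspace(0.0, 1.0, n_nodes)
        S = integration_matrix(t)
        y = np.full(n_nodes, float(y0))          # provisional solution at the nodes
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        first_res = None
        for _ in range(max_sweeps):
            for m in range(n_nodes - 1):         # forward-Euler correction sweep
                h = t[m + 1] - t[m]
                y[m + 1] = y[m] + h * (f(t[m], y[m]) - F[m]) + S[m] @ F
            F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
            # residual of the Picard/collocation equation at each node
            res = np.max(np.abs(y0 + np.cumsum(S @ F) - y[1:]))
            if first_res is None:
                first_res = res
            elif res <= rel_tol * first_res:     # resilience-style stopping rule
                break
        return y[-1]
    ```

    For y' = y the converged value matches the three-node collocation (Lobatto IIIA) solution, so the error over one step is set by the collocation accuracy rather than by the low-order sweeps.
    
    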

  15. Single-stage three-phase boost power factor correction circuit for AC-DC converter

    NASA Astrophysics Data System (ADS)

    Azazi, Haitham Z.; Ahmed, Sayed M.; Lashine, Azza E.

    2018-01-01

    This article presents a single-stage three-phase power factor correction (PFC) circuit for an AC-to-DC converter using a single-switch boost regulator, which improves the input power factor (PF), reduces the input current harmonics and decreases the number of required active switches. A novel PFC control strategy, characterised by a simple and low-cost control circuit, was adopted to achieve good dynamic performance, unity input PF and minimal harmonic content in the input current, and it can be applied to low/medium-power converters. Detailed analytical, simulation and experimental studies were therefore conducted. The effectiveness of the proposed controller algorithm is validated by simulation results carried out in the MATLAB/Simulink environment. The proposed system was built and tested in the laboratory using a DSP-DS1104 digital control board with an inductive load. The results revealed that the total harmonic distortion in the supply current was very low. Finally, good agreement between simulation and experimental results was achieved.

  16. Intermediate boundary conditions for LOD, ADI and approximate factorization methods

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.

    1985-01-01

    A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.
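
    A minimal sketch of an LOD step for the 2-D heat equation, the setting discussed above. Homogeneous Dirichlet data are assumed here, so the intermediate boundary condition is trivially zero; the paper's contribution concerns the non-trivial case. Grid size, time step and the dense solver are illustrative choices:

    ```python
    import numpy as np

    def lod_heat_step(U, M):
        """One LOD step for u_t = u_xx + u_yy with homogeneous Dirichlet BCs:
        an implicit sweep in x produces the intermediate solution U*, then an
        implicit sweep in y completes the step."""
        Ustar = np.linalg.solve(M, U)          # x-direction: (I - dt*Axx) U* = U^n
        return np.linalg.solve(M, Ustar.T).T   # y-direction: (I - dt*Ayy) U^{n+1} = U*

    n, dt, steps = 31, 1e-3, 10
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2       # 1-D Dirichlet Laplacian
    M = np.eye(n) - dt * A
    U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # discrete eigenmode
    for _ in range(steps):
        U = lod_heat_step(U, M)
    ```

    Because sin(pi*x_i)sin(pi*y_j) is an exact eigenvector of the discrete operator, each step damps it by exactly 1/(1 - dt*lambda) per directional sweep, which makes the scheme easy to verify.
    
    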

  17. Borehole deviation and correction factor data for selected wells in the eastern Snake River Plain aquifer at and near the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Twining, Brian V.

    2016-11-29

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, has maintained a water-level monitoring program at the Idaho National Laboratory (INL) since 1949. The purpose of the program is to systematically measure and report water-level data to assess the eastern Snake River Plain aquifer and long-term changes in groundwater recharge, discharge, movement, and storage. Water-level data are commonly used to generate potentiometric maps and to infer increases and (or) decreases in the regional groundwater system. Well deviation is one component of water-level data that is often overlooked; it is the result of the well construction and the well not being plumb. Because well deviation generally increases linearly with increasing slant angle, it can suggest artificial anomalies in the water table. To remove the effects of well deviation, the USGS INL Project Office applies a correction factor to water-level data when a well deviation survey indicates a change in the reference elevation of greater than or equal to 0.2 ft. Borehole deviation survey data were considered for 177 wells completed within the eastern Snake River Plain aquifer, but not all wells had deviation survey data available. As of 2016, the USGS INL Project Office database included: 57 wells with gyroscopic survey data; 100 wells with magnetic deviation survey data; 11 wells with erroneous gyroscopic data that were excluded; and 68 wells with no deviation survey data available. Of the 57 wells with gyroscopic deviation surveys, correction factors for 16 wells ranged from 0.20 to 6.07 ft and inclination angles (SANG) ranged from 1.6 to 16.0 degrees. Of the 100 wells with magnetic deviation surveys, correction factors for 21 wells ranged from 0.20 to 5.78 ft and SANG ranged from 1.0 to 13.8 degrees, not including the wells that did not meet the correction factor criterion of greater than or equal to 0.20 ft. Forty-seven wells had
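
    The deviation correction amounts to the difference between along-hole depth and true vertical depth. A sketch using the simple tangential method (the report may use a different survey-integration method, such as minimum curvature; the station layout below is hypothetical):

    ```python
    import math

    def deviation_correction(depths, inclinations_deg):
        """Water-level correction (ft) from a deviation survey: the difference
        between measured (along-hole) depth and true vertical depth, with each
        interval ds contributing ds * (1 - cos(inclination)) (tangential method)."""
        correction = 0.0
        for i in range(1, len(depths)):
            ds = depths[i] - depths[i - 1]
            correction += ds * (1.0 - math.cos(math.radians(inclinations_deg[i])))
        return correction
    ```

    Per the report's criterion, such a correction would be applied to water levels only when it reaches 0.2 ft or more.
    
    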

  18. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  19. Vessel-Mounted ADCP Data Calibration and Correction

    NASA Astrophysics Data System (ADS)

    de Andrade, A. F.; Barreira, L. M.; Violante-Carvalho, N.

    2013-05-01

    A set of scripts for vessel-mounted ADCP (Acoustic Doppler Current Profiler) data processing is presented. The need for corrections to data measured by a ship-mounted ADCP, and the complexities found during installation, implementation and identification of tasks performed by currently available processing systems, were the main motivating factors for developing a system that is more practical to use, open-source and more manageable for the user. The proposed processing system consists of a set of scripts developed in the MATLAB programming language. The system is able to read the binary files produced by the data acquisition program VMDAS (Vessel Mounted Data Acquisition System), proprietary to Teledyne RD Instruments, calculate calibration factors to correct the data, and visualize the data after correction. To use the new system, the ADCP data collected with the VMDAS program need only be in a processing directory, with MATLAB installed on the user's computer. The algorithms developed were extensively tested with ADCP data obtained during the Oceano Sul III (Southern Ocean III - OSIII) cruise, conducted by the Brazilian Navy aboard the R/V "Antares" from March 26th to May 10th 2007, in the oceanic region between the states of São Paulo and Rio Grande do Sul. To read the data, the function rdradcp.m, developed by Rich Pawlowicz and available on his website (http://www.eos.ubc.ca/~rich/#RDADCP), was used. To calculate the calibration factors, the alignment error (α) and sensitivity error (β) in Water Tracking and Bottom Tracking modes, equations deduced by Joyce (1998), Pollard & Read (1989) and Trump & Marmorino (1996) were implemented in MATLAB. To validate the calibration factors obtained with the processing system developed, the parameters were compared with the factors provided by the CODAS (Common Ocean Data Access System, available at http://currents.soest.hawaii.edu/docs/doc/index.html) post-processing program. For the
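
    Once the alignment error α and sensitivity (scale) error β have been estimated, applying them to a measured velocity is a rotation plus rescaling. A sketch of that final step (the sign convention and exact form vary between the references cited above; this is the generic form, not the authors' code):

    ```python
    import math

    def calibrate_velocity(u, v, alpha_deg, beta):
        """Apply ADCP calibration to an east/north velocity pair: rotate by the
        misalignment angle alpha and rescale by (1 + beta)."""
        a = math.radians(alpha_deg)
        scale = 1.0 + beta
        return (scale * (u * math.cos(a) - v * math.sin(a)),
                scale * (u * math.sin(a) + v * math.cos(a)))
    ```
    
    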

  20. The Fundamental Solutions for the Stress Intensity Factors of Modes I, II And III. The Axially Symmetric Problem

    NASA Astrophysics Data System (ADS)

    Rogowski, B.

    2015-05-01

    The subject of the paper is Green's functions for the stress intensity factors of modes I, II and III. Green's functions are defined as a solution to the problem of an elastic, transversely isotropic solid with a penny-shaped or an external crack under general axisymmetric loadings acting along a circumference on the plane parallel to the crack plane. Exact solutions are presented in closed form for the stress intensity factors under each type of axisymmetric ring force as fundamental solutions. Numerical examples are employed and conclusions that can be utilized in engineering practice are formulated.
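
    As an illustration of the kind of closed-form result such ring-load Green's functions integrate to, here is the classical isotropic special case (not the paper's transversely isotropic solution): the mode-I stress intensity factor of a penny-shaped crack of radius a under uniform internal pressure p.

    ```python
    import math

    def k1_penny_crack(pressure, radius):
        """Mode-I stress intensity factor for a penny-shaped crack of radius a
        under uniform internal pressure p (classical isotropic result):
        K_I = 2 * p * sqrt(a / pi)."""
        return 2.0 * pressure * math.sqrt(radius / math.pi)
    ```
    
    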

  1. Geometrical correction factors for heat flux meters

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Papell, S. S.

    1974-01-01

    General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. The local averaging error e(x) is defined as the difference between the measured value of the heat flux and the local value which occurs at the center of the gage. In terms of e(x), a correction procedure is presented which allows a better estimate for the true value of the local heat flux. For many practical problems, it is possible to use relatively large gages to obtain acceptable heat flux measurements.
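
    The local averaging error can be illustrated numerically: the gage reads the average of the flux over its width, and e(x) is that average minus the flux at the gage center. A sketch with an assumed flux profile (the paper derives analytical formulas; this is only a numerical check of the definition):

    ```python
    import numpy as np

    def gage_average(q, x0, half_width, n=2001):
        """Average of the heat-flux profile q over a strip gage centered at x0
        (composite trapezoidal rule)."""
        xs = np.linspace(x0 - half_width, x0 + half_width, n)
        ys = q(xs)
        dx = 2.0 * half_width / (n - 1)
        return (ys.sum() - 0.5 * (ys[0] + ys[-1])) * dx / (2.0 * half_width)

    def averaging_error(q, x0, half_width):
        """e(x0): gage-averaged (measured) flux minus the true local flux q(x0)."""
        return gage_average(q, x0, half_width) - q(x0)
    ```

    For a cosine flux profile centered on the gage the average is sin(w)/w times the peak, so a wide gage under-reads a peaked distribution, which is the effect the correction procedure removes.
    
    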

  2. Fluence correction factor for graphite calorimetry in a clinical high-energy carbon-ion beam.

    PubMed

    Lourenço, A; Thomas, R; Homer, M; Bouchard, H; Rossomme, S; Renaud, J; Kanai, T; Royle, G; Palmans, H

    2017-04-07

    The aim of this work is to develop and adapt a formalism to determine absorbed dose to water from graphite calorimetry measurements in carbon-ion beams. Fluence correction factors, k_fl, needed when using a graphite calorimeter to derive dose to water, were determined in a clinical high-energy carbon-ion beam. Measurements were performed in a 290 MeV/n carbon-ion beam with a field size of 11 × 11 cm², without modulation. In order to sample the beam, a plane-parallel Roos ionization chamber was chosen for its small collecting volume in comparison with the field size. Experimental information on fluence corrections was obtained from depth-dose measurements in water. This procedure was repeated with graphite plates in front of the water phantom. Fluence corrections were also obtained with Monte Carlo simulations through the implementation of three methods based on (i) the fluence distributions differential in energy, (ii) a ratio of calculated doses in water and graphite at equivalent depths and (iii) simulations of the experimental setup. The k_fl term increased with depth from 1.00 at the entrance toward 1.02 at a depth near the Bragg peak, and the average difference between experimental and numerical simulations was about 0.13%. Compared to proton beams, there was no reduction of k_fl due to alpha particles because the secondary particle spectrum is dominated by projectile fragmentation. By developing a practical dose conversion technique, this work contributes to improving the determination of absolute dose to water from graphite calorimetry in carbon-ion beams.
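
    A sketch of the dose-ratio idea behind method (ii): compare the water and graphite depth-dose curves at equivalent depths. The depth-scaling factor, grids and curves below are entirely hypothetical placeholders; the real factor is material- and beam-dependent and the paper's computation is Monte Carlo based:

    ```python
    import numpy as np

    def fluence_correction(z_w, d_w, z_g, d_g, depth_scaling):
        """k_fl(z) as the ratio of dose in water to dose in graphite compared at
        water-equivalent depths. `depth_scaling` converts graphite depths to
        water-equivalent depths (an assumed range-scaling factor); the two curves
        must share a consistent normalization."""
        z_eq = np.asarray(z_g) * depth_scaling    # graphite grid -> water-equivalent
        d_g_on_w = np.interp(z_w, z_eq, d_g)      # graphite dose on the water grid
        return np.asarray(d_w) / d_g_on_w
    ```
    
    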

  3. An Evaluation of Unit and ½ Mass Correction Approaches as a ...

    EPA Pesticide Factsheets

    Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se and 132Ba can produce a false positive on 66Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. A preliminary evaluation indicates that instrumental tuning conditions can impact the observed M+2/M+1 ratio and in turn the false positives generated on Zn, As and Se. Both unit and ½ mass approaches will be evaluated to correct for these false positives relative to the benchmark concentrations estimates from a triple quadrupole ICP-MS using standard solutions. The impact of matrix on these M+2 corrections will be evaluated over multiple analysis days with a focus on evaluating internal standards that mirror the matrix induced shifts in the M+2 ion transmission. The goal of this evaluation is to move away from fixed M+2 corrective approaches and move towards sample specific approaches that mimic the sample matrix induced variability while attempting to address intra-day variability of the M+2 correction factors through the use of internal standards. Oral Presentation via webinar for EPA Laboratory Technical Informati

  4. Backscatter correction factor for megavoltage photon beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Yida; Zhu, Timothy C.

    2011-10-15

    Purpose: For routine clinical dosimetry of photon beams, it is often necessary to know the minimum thickness of backscatter phantom material to ensure that a full backscatter condition exists. Methods: In the case of insufficient backscatter thickness, one can determine the backscatter correction factor, BCF(s,d,t), defined as the ratio of absorbed dose measured on the central axis of a phantom with backscatter thickness t to that with full backscatter, for square field size s and forward depth d. Measurements were performed in SAD geometry for 6 and 15 MV photon beams using a 0.125 cc thimble chamber for field sizes between 10 × 10 and 30 × 30 cm at depths between d_max (1.5 cm for 6 MV and 3 cm for 15 MV) and 20 cm. Results: A convolution method was used to calculate BCF using Monte-Carlo-simulated point-spread kernels generated for clinical photon beams for energies between Co-60 and 24 MV. The convolution calculation agrees with the experimental measurements to within 0.8% with the same physical trend. The value of BCF deviates more from 1 for lower energies and larger field sizes. According to our convolution calculation, the minimum BCF occurs at forward depth d_max and 40 × 40 cm field size: 0.970 for 6 MV and 0.983 for 15 MV. Conclusions: The authors concluded that the required backscatter thickness is 6.0 cm for 6 MV and 4.0 cm for 15 MV for field sizes up to 10 × 10 cm when BCF = 0.998. If 4 cm backscatter thickness is used, BCF is 0.997 and 0.983 for field sizes of 10 × 10 and 40 × 40 cm for 6 MV, and is 0.998 and 0.990 for 10 × 10 and 40 × 40 cm for 15 MV, respectively.

  5. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: determination of the slope safety factor with an analytical solution and searching the critical slip surface with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picks to implement mutation. A computer program for automatic search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the Slope/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions for the slope safety factor given by the ANSYS software. PMID:24782679
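
    For reference, the Fellenius (ordinary) method of slices that the paper uses as its baseline computes the factor of safety as the ratio of resisting to driving moments over the slices. A minimal sketch of that textbook formula (slice geometry below is hypothetical):

    ```python
    import math

    def fellenius_fs(weights, alphas_deg, lengths, c, phi_deg):
        """Factor of safety by Fellenius' ordinary method of slices:
        FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a)),
        with slice weights W, base inclinations a, base lengths l,
        cohesion c and friction angle phi."""
        tan_phi = math.tan(math.radians(phi_deg))
        num = sum(c * l + W * math.cos(math.radians(a)) * tan_phi
                  for W, a, l in zip(weights, alphas_deg, lengths))
        den = sum(W * math.sin(math.radians(a))
                  for W, a in zip(weights, alphas_deg))
        return num / den
    ```

    A search method such as GA (or the paper's Genetic-Traversal Random Method) would minimize this FS over candidate circular slip surfaces.
    
    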

  6. A hybrid filter to mitigate harmonics caused by nonlinear load and resonance caused by power factor correction capacitor

    NASA Astrophysics Data System (ADS)

    Adan, N. F.; Soomro, D. M.

    2017-01-01

    A power factor correction capacitor (PFCC) is commonly installed in industrial applications for power factor correction (PFC). With the expanding use of non-linear equipment such as ASDs, power converters, etc., power factor (PF) improvement has become difficult due to the presence of harmonics. The capacitive impedance of the PFCC may form a resonant circuit with the source inductive reactance at a certain frequency, which is likely to coincide with one of the harmonic frequencies of the load. This condition will trigger large oscillatory currents and voltages that may stress the insulation and cause subsequent damage to the PFCC and equipment connected to the power system (PS). Besides, a high PF cannot be achieved due to power distortion. This paper presents the design of a three-phase hybrid filter consisting of a single tuned passive filter (STPF) and a shunt active power filter (SAPF) to mitigate harmonics and resonance in the PS, studied through simulation using PSCAD/EMTDC software. The SAPF was developed using p-q theory. The hybrid filter resulted in significant improvement in both total harmonic distortion of voltage (THDV) and total demand distortion of current (TDDI), with maximum values of 2.93% and 9.84% respectively, which were within the recommended IEEE 519-2014 standard limits. Regarding PF improvement, the combined filters achieved a PF close to the desired 0.95 for firing-angle (α) values up to 40°.
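
    The two quantities at the heart of this abstract are easy to state directly: the harmonic distortion of a waveform, and the harmonic order at which the PFCC resonates with the source reactance. A sketch of both textbook formulas (the paper's specific system parameters are not reproduced here):

    ```python
    import math

    def thd(fundamental, harmonics):
        """Total harmonic distortion (%) from RMS magnitudes of harmonics 2..N."""
        return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

    def resonant_harmonic(s_short_circuit_mva, q_capacitor_mvar):
        """Approximate harmonic order of parallel resonance between the source
        inductance and a PF-correction capacitor (classic rule of thumb:
        h_r = sqrt(S_sc / Q_c))."""
        return math.sqrt(s_short_circuit_mva / q_capacitor_mvar)
    ```

    If the resonant order lands near a characteristic load harmonic (5th, 7th, ...), the amplification problem described above appears, which is what the hybrid filter is designed to suppress.
    
    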

  7. A radiation quality correction factor k for well-type ionization chambers for the measurement of the reference air kerma rate of (60)Co HDR brachytherapy sources.

    PubMed

    Schüller, Andreas; Meier, Markus; Selbach, Hans-Joachim; Ankerhold, Ulrike

    2015-07-01

    The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor kQ can be determined in order to measure the reference air kerma rate of (60)Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for (192)Ir HDR sources. The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of (60)Co and (192)Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor kQ was determined as the ratio of the calibration coefficients for (60)Co and (192)Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor kQ is 1.19. Note that this value is valid only for chambers with serial number SN ≥ 315 (Nucletron SDS SN ≥ 548) onward. For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor kQ is 1.05. Both kQ values are independent of the source strength in the complete clinically relevant range. The relative expanded uncertainty (k = 2) of kQ is UkQ = 2.1% for both chamber types. The calibration coefficient of a well-type chamber for radiation fields of (60)Co HDR brachytherapy sources can be calculated from a given calibration coefficient for (192)Ir radiation by using a chamber-type-specific radiation quality correction factor kQ. However, the uncertainty of a (60)Co calibration coefficient calculated via kQ is at least twice as large as that for a direct calibration with a (60)Co source.

  8. Benchmarking of software tools for optical proximity correction

    NASA Astrophysics Data System (ADS)

    Jungmann, Angelika; Thiele, Joerg; Friedrich, Christoph M.; Pforr, Rainer; Maurer, Wilhelm

    1998-06-01

    The point when optical proximity correction (OPC) will become a routine procedure for every design is not far away. For such daily use, the requirements for an OPC tool go far beyond the principal functionality of OPC, which has been proven by a number of approaches and is well documented in the literature. In this paper we first discuss the requirements for a productive OPC tool. Against these requirements, a benchmark was performed with three different OPC tools available on the market (OPRX from TVT, OPTISSIMO from aiss and PROTEUS from TMA). Each of these tools uses a different approach to perform the correction (rules, simulation or model). To assess the accuracy of the correction, a test chip was fabricated which contains corrections done by each software tool. The advantages and weaknesses of the several solutions are discussed.

  9. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  11. A general assessment of the physiochemical factors that influence leachables accumulation in pharmaceutical drug products and related solutions.

    PubMed

    Jenke, Dennis

    2011-01-01

    The accumulation of organic compounds associated with plastic materials in pharmaceutical products and their associated solutions has important suitability-for-use consequences for those pharmaceutical solutions, most notably in terms of safety and efficacy. The interaction between the pharmaceutical solution and the plastic material is driven and controlled by the same thermodynamic and kinetic factors that regulate the interaction between the constituents of any comparable two-phased system. These physiochemical factors are delineated in this article, and their application to pharmaceutical products is demonstrated. When drug products are packaged in plastic container systems, substances may leach from the container and accumulate in the product. The magnitude of this leaching, and thus the effect that leachables have on the drug product, is controlled by certain thermodynamic and kinetic processes. These factors are described and detailed in this article.

  12. Does RAIM with Correct Exclusion Produce Unbiased Positions?

    PubMed Central

    Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.

    2017-01-01

    As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion-process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown, although statistical testing is intended to remove biases from the data, that biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely. PMID:28672862

  13. Oligomeric domain structure of human complement factor H by X-ray and neutron solution scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, S.J.; Nealis, A.S.; Sim, R.B.

    1991-03-19

    Factor H is a regulatory component of the complement system. It has a monomer M_r of 150,000. Primary structure analysis shows that the polypeptide is divided into 20 homologous regions, each 60 amino acid residues long. These are independently folding domains and are termed short consensus repeats (SCRs) or complement control protein (CCP) repeats. High-flux synchrotron X-ray and neutron scattering studies were performed in order to define its solution structure in conditions close to physiological. The M_r of factor H was determined as 250,000-320,000, showing that factor H is dimeric. The radius of gyration R_G of native factor H by X-rays or by neutrons in 0% or 100% ²H₂O buffers is not measurable but is greater than 12.5 nm. Two cross-sectional radii of gyration R_XS-1 and R_XS-2 were determined as 3.0-3.1 and 1.8 nm, respectively. Analyses of the cross-sectional intensities show that factor H is composed of two distinct subunits. This model corresponds to an actual R_G of 21-23 nm. The separation between each SCR/CCP in factor H is close to 4 nm. In the solution structure of factor H, the SCR/CCP domains are in a highly extended conformation.
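
    The radius of gyration quoted above is conventionally extracted from the low-angle region of a scattering curve by a Guinier fit. A generic sketch of that standard analysis (synthetic data only; not the authors' processing pipeline, which must handle the non-measurable R_G regime noted above):

    ```python
    import numpy as np

    def guinier_rg(q, intensity):
        """Estimate the radius of gyration R_G from small-angle scattering data
        via a Guinier fit: ln I(q) = ln I0 - (R_G**2 / 3) * q**2,
        valid only at low q * R_G."""
        slope, _intercept = np.polyfit(np.asarray(q) ** 2, np.log(intensity), 1)
        return float(np.sqrt(-3.0 * slope))
    ```
    
    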

  14. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
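
    The correction factor described above amounts to folding a workplace spectrum with two energy-dependent functions: the fluence-to-dose conversion coefficients (the reference) and the calibrated instrument response. A generic sketch of that folding on a shared energy grid (the arrays below are placeholders, not data from IAEA-TRS-403):

    ```python
    import numpy as np

    def spectral_correction_factor(phi, h_star, r_cal):
        """Correction factor for a moderated neutron dose meter in spectrum phi:
        ratio of the true dose equivalent (fluence folded with conversion
        coefficients h_star) to the calibrated reading (fluence folded with the
        source-calibrated response r_cal). All arrays share one energy grid."""
        phi, h_star, r_cal = map(np.asarray, (phi, h_star, r_cal))
        return np.sum(phi * h_star) / np.sum(phi * r_cal)
    ```

    A factor above 1 indicates the meter under-responds in that spectrum, which is the high-energy behavior the paper corrects for.
    
    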

  15. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm2 and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.
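    The quoted resolution gain follows from simple arithmetic on the figures in the abstract, taking the half-pitch Nyquist limit of a bare sensor to equal its pixel pitch (one pixel per half-period):

```python
pixel_size_um = 1.67            # sensor pixel pitch from the abstract
achieved_half_pitch_um = 0.770  # reported half-pitch resolution

# Nyquist-Shannon: the finest half-pitch a bare sensor can sample directly
# is one pixel per half-period, i.e. a half-pitch equal to the pixel size.
nyquist_half_pitch_um = pixel_size_um
improvement = nyquist_half_pitch_um / achieved_half_pitch_um
print(round(improvement, 2))  # → 2.17
```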

  16. Longitudinal measurement of chromatic dispersion along an optical fiber transmission system with a new correction factor

    NASA Astrophysics Data System (ADS)

    Abbasi, Madiha; Imran Baig, Mirza; Shafique Shaikh, Muhammad

    2013-12-01

    OTDR-based techniques have become standard practice for measuring the chromatic dispersion distribution along an optical fiber transmission link. A constructive measurement technique is offered in this paper, in which a four-wavelength bidirectional optical time domain reflectometer (OTDR) is used to compute the chromatic dispersion distribution along an optical fiber transmission system. A novel formulation has been developed to improve the correction factor, which leads to an enhanced and better-defined measurement. The experimental results obtained are in good agreement.

  17. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA.
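    The regression step can be sketched as a polynomial fit to median thresholds by age; the threshold values below are invented for illustration and are not the NHANES medians used in the paper:

```python
import numpy as np

# Hypothetical median better-ear thresholds (dB HL) at one audiometric
# frequency by age -- NOT the actual NHANES medians.
ages = np.array([20, 30, 40, 50, 60, 70, 75])
thresholds = np.array([0.0, 1.0, 3.0, 7.0, 13.0, 22.0, 28.0])

# Fit a simple polynomial to the medians, as the paper does, then tabulate
# age-correction values relative to age 20: C(age) = T(age) - T(20).
model = np.poly1d(np.polyfit(ages, thresholds, 2))
corrections = {a: round(float(model(a) - model(20)), 1)
               for a in range(20, 76, 5)}
print(corrections[20], corrections[60], corrections[75])
```

The correction applied to an audiogram is then the difference between the tabulated value at the current age and at the baseline age, so only threshold shifts beyond age-expected change count toward a standard threshold shift.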

  18. Numerical method for angle-of-incidence correction factors for diffuse radiation incident photovoltaic modules

    DOE PAGES

    Marion, Bill

    2017-03-27

    Here, a numerical method is provided for solving the integral equation for the angle-of-incidence (AOI) correction factor for diffuse radiation incident photovoltaic (PV) modules. The types of diffuse radiation considered include sky, circumsolar, horizon, and ground-reflected. The method permits PV module AOI characteristics to be addressed when calculating AOI losses associated with diffuse radiation. Pseudo code is provided to aid users in the implementation, and results are shown for PV modules with tilt angles from 0° to 90°. Diffuse AOI losses are greatest for small PV module tilt angles. Including AOI losses associated with the diffuse irradiance will improve predictions of PV system performance.
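    A minimal numerical sketch of such an AOI factor for isotropic sky diffuse, using the ASHRAE incidence-angle modifier as a stand-in for measured module AOI characteristics. Marion's paper provides its own pseudo code; this is not it, just the same kind of hemisphere integration:

```python
import numpy as np

def iam_ashrae(aoi_rad, b0=0.05):
    """ASHRAE incidence-angle modifier 1 - b0*(1/cos(aoi) - 1), clipped to
    [0, 1] so grazing incidence cannot go negative. b0 is illustrative."""
    cos_aoi = np.maximum(np.cos(aoi_rad), 1e-6)
    return np.clip(1.0 - b0 * (1.0 / cos_aoi - 1.0), 0.0, 1.0)

def sky_diffuse_aoi_factor(tilt_deg, n=400):
    """Midpoint-rule integration of the AOI correction factor for isotropic
    sky diffuse on a module tilted tilt_deg from horizontal: the ratio of
    IAM-weighted to unweighted cosine-projected sky radiance."""
    beta = np.radians(tilt_deg)
    d_tz, d_phi = (np.pi / 2) / n, (2 * np.pi) / n
    tz, phi = np.meshgrid((np.arange(n) + 0.5) * d_tz,
                          -np.pi + (np.arange(n) + 0.5) * d_phi,
                          indexing="ij")
    # cosine of the angle between each sky element and the module normal
    cos_aoi = np.cos(tz) * np.cos(beta) + np.sin(tz) * np.sin(beta) * np.cos(phi)
    # cosine-weighted solid angle, restricted to sky the module can see
    w = np.where(cos_aoi > 0.0, cos_aoi, 0.0) * np.sin(tz)
    aoi = np.arccos(np.clip(cos_aoi, -1.0, 1.0))
    return float(np.sum(iam_ashrae(aoi) * w) / np.sum(w))

for tilt in (0.0, 30.0, 60.0, 90.0):
    print(tilt, round(sky_diffuse_aoi_factor(tilt), 3))
```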

  19. Immediate Truth--Temporal Contiguity between a Cognitive Problem and Its Solution Determines Experienced Veracity of the Solution

    ERIC Educational Resources Information Center

    Topolinski, Sascha; Reber, Rolf

    2010-01-01

    A temporal contiguity hypothesis for the experience of veracity is tested which states that a solution candidate to a cognitive problem is more likely to be experienced as correct the faster it succeeds the problem. Experiment 1 varied the onset time of the appearance of proposed solutions to anagrams (50 ms vs. 150 ms) and found for both correct…

  20. Influence of Clinical Factors and Magnification Correction on Normal Thickness Profiles of Macular Retinal Layers Using Optical Coherence Tomography.

    PubMed

    Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K W; Yoshimura, Nagahisa

    2016-01-01

    To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} were measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Outer retinal thickness was significantly thinner in females than in males (sr2, 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr2, 0.12 to 0.33) and a negative sr with GCLIPL (sr2, 0.22 to 0.31), GCC (sr2, 0.03 to 0.17), total retina (sr2, 0.07 to 0.17) and outer retina (sr2, 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction.
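    Magnification correction of the analytical area in OCT is commonly done with Bennett's approximation; the sketch below assumes that formula, and the reference axial length is an illustrative value, not necessarily the one used by the RS-3000 or this paper:

```python
def magnification_corrected_diameter(nominal_diameter_mm, axial_length_mm,
                                     reference_axial_length_mm=24.39):
    """Scale a nominal OCT scan diameter for ocular magnification using the
    Bennett approximation: true retinal size t = p * q * s, with ocular
    magnification factor q = 0.01306 * (axial length - 1.82). The nominal
    diameter is assumed calibrated for the reference axial length."""
    q_eye = 0.01306 * (axial_length_mm - 1.82)
    q_ref = 0.01306 * (reference_axial_length_mm - 1.82)
    return nominal_diameter_mm * q_eye / q_ref

# In a long (myopic) eye the nominal 6-mm circle covers a larger retinal
# area, so the analytical circle must be rescaled before comparison.
print(round(magnification_corrected_diameter(6.0, 26.0), 2))  # → 6.43
```

This is why, without correction, the analytical circles sample systematically different retinal regions in long and short eyes, mimicking an axial-length dependence of layer thickness.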

  1. Influence of Clinical Factors and Magnification Correction on Normal Thickness Profiles of Macular Retinal Layers Using Optical Coherence Tomography

    PubMed Central

    Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K. W.; Yoshimura, Nagahisa

    2016-01-01

    Purpose To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. Methods The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} were measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Results Outer retinal thickness was significantly thinner in females than in males (sr2, 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr2, 0.12 to 0.33) and a negative sr with GCLIPL (sr2, 0.22 to 0.31), GCC (sr2, 0.03 to 0.17), total retina (sr2, 0.07 to 0.17) and outer retina (sr2, 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. Conclusions The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction. PMID:26814541

  2. Visual loss after corrective surgery for pediatric scoliosis: incidence and risk factors from a nationwide database.

    PubMed

    De la Garza-Ramos, Rafael; Samdani, Amer F; Sponseller, Paul D; Ain, Michael C; Miller, Neil R; Shaffrey, Christopher I; Sciubba, Daniel M

    2016-04-01

    Perioperative visual loss (POVL) after spinal deformity surgery is an uncommon but severe complication. Data on the incidence and risk factors of this complication after corrective surgery in the pediatric population are limited. The present study aimed to investigate nationwide estimates of POVL after corrective surgery for pediatric scoliosis. This is a retrospective study that uses a nationwide database. The sample includes 42,339 patients under the age of 18 who underwent surgery for idiopathic scoliosis. The outcome measures were incidence of POVL and risk factors. Patients under the age of 18 who underwent elective surgery for idiopathic scoliosis between 2002 and 2011 were identified using the Nationwide Inpatient Sample database. The incidence of POVL (ischemic optic neuropathy, central retinal artery occlusion, or cortical blindness) was estimated after application of discharge weights. Demographics, comorbidities, and operative parameters were compared between patients with and without visual loss. A multivariate logistic regression was performed to identify significant risk factors for POVL development. No funds were received in support of this work. The incidence of POVL was 1.6 per 1,000 procedures (0.16%). Patients with visual loss were significantly more likely to be younger and male, have Medicaid as insurance, and undergo fusion of eight or more spinal levels compared with patients without visual loss. Following multivariate analysis, older patients (odds ratio [OR]: 0.84; 95% confidence interval [CI]: 0.77-0.91) and female patients (OR: 0.08; 95% CI: 0.04-0.14) were significantly less likely to develop POVL compared with younger and male patients. On the other hand, having Medicaid as insurance (OR: 2.13; 95% CI: 1.32-3.45), history of deficiency anemia (OR: 8.64; 95% CI: 5.46-14.31), and fusion of eight or more spinal levels (OR: 2.40; 95% CI: 1.34-4.30) were all independently associated with POVL.
In this nationwide study, the incidence of POVL

  3. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
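    The system-factor correction can be sketched as a one-time calibration on fully developed static speckle (true contrast K = 1), assuming the linear relation K_meas² = β · K_true² whose validity the paper confirms. The simulation of pixel-area averaging below is a crude stand-in for real sensor integration:

```python
import numpy as np

def local_speckle_contrast(img, win=7):
    """Speckle contrast K = std/mean over non-overlapping win x win windows."""
    h, w = (img.shape[0] // win) * win, (img.shape[1] // win) * win
    b = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return b.std(axis=(2, 3)) / b.mean(axis=(2, 3))

rng = np.random.default_rng(0)
fine = rng.exponential(1.0, (560, 560))       # fully developed speckle, K = 1
# Crude model of pixel-area integration: each sensor pixel averages a 2x2
# patch of the underlying speckle field, lowering the measured contrast.
pixels = fine.reshape(280, 2, 280, 2).mean(axis=(1, 3))
k_meas = float(local_speckle_contrast(pixels).mean())   # < 1 from averaging
beta = k_meas ** 2            # calibration on a static target where K_true = 1
k_true = float(np.sqrt(k_meas ** 2 / beta))             # corrected contrast
print(round(k_meas, 2), k_true)
```

Because the correction is a single multiplicative constant, it can be calibrated once and divided out of every subsequent contrast map, which is the linearity the paper verifies.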

  4. Thermodynamics of charged Lifshitz black holes with quadratic corrections

    NASA Astrophysics Data System (ADS)

    Bravo-Gaete, Moisés; Hassaïne, Mokhtar

    2015-03-01

    In arbitrary dimension, we consider the Einstein-Maxwell Lagrangian supplemented by the more general quadratic-curvature corrections. For this model, we derive four classes of charged Lifshitz black hole solutions for which the metric function is shown to depend on a unique integration constant. The masses of these solutions are computed using the quasilocal formalism based on the relation established between the off-shell Abbott-Deser-Tekin and Noether potentials. Among these four solutions, three of them are interpreted as extremal in the sense that their masses vanish identically. For the last family of solutions, both the quasilocal mass and the electric charge are shown to depend on the integration constant. Finally, we verify that the first law of thermodynamics holds for each solution and a Smarr formula is also established for the four solutions.

  5. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    PubMed Central

    Wang, L L W; Perles, L A; Archambault, L; Sahoo, N; Mirkovic, D; Beddar, S

    2013-01-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, PSDs undergo a quenching effect which significantly reduces the signal level when the detector is close to the Bragg peak, where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately. PMID:23128412

  6. Determination of the quenching correction factors for plastic scintillation detectors in therapeutic high-energy proton beams

    NASA Astrophysics Data System (ADS)

    Wang, L. L. W.; Perles, L. A.; Archambault, L.; Sahoo, N.; Mirkovic, D.; Beddar, S.

    2012-12-01

    Plastic scintillation detectors (PSDs) have many advantages over other detectors in small field dosimetry due to their high spatial resolution, excellent water equivalence and instantaneous readout. However, in proton beams, the PSDs undergo a quenching effect which makes the signal level reduced significantly when the detector is close to the Bragg peak where the linear energy transfer (LET) for protons is very high. This study measures the quenching correction factor (QCF) for a PSD in clinical passive-scattering proton beams and investigates the feasibility of using PSDs in depth-dose measurements in proton beams. A polystyrene-based PSD (BCF-12, ϕ0.5 mm × 4 mm) was used to measure the depth-dose curves in a water phantom for monoenergetic unmodulated proton beams of nominal energies 100, 180 and 250 MeV. A Markus plane-parallel ion chamber was also used to get the dose distributions for the same proton beams. From these results, the QCF as a function of depth was derived for these proton beams. Next, the LET depth distributions for these proton beams were calculated by using the MCNPX Monte Carlo code, based on the experimentally validated nozzle models for these passive-scattering proton beams. Then the relationship between the QCF and the proton LET could be derived as an empirical formula. Finally, the obtained empirical formula was applied to the PSD measurements to get the corrected depth-dose curves and they were compared to the ion chamber measurements. A linear relationship between the QCF and LET, i.e. Birks' formula, was obtained for the proton beams studied. The result is in agreement with the literature. The PSD measurements after the quenching corrections agree with ion chamber measurements within 5%. PSDs are good dosimeters for proton beam measurement if the quenching effect is corrected appropriately.
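    A sketch of applying a Birks-type linear QCF to PSD readings; the quenching parameter kB and all numbers below are illustrative, not the values fitted in this study:

```python
import numpy as np

# Birks-type quenching: the scintillator signal S relates to dose D roughly
# as S/D = 1/(1 + kB * LET), so the corrected dose is D = S * (1 + kB * LET).
kB = 0.01  # quenching parameter in (keV/um)^-1, assumed for illustration

def quenching_corrected_dose(signal, let_kev_per_um):
    """Correct PSD readings for ionization quenching given the local LET."""
    return signal * (1.0 + kB * np.asarray(let_kev_per_um))

# Toward the Bragg peak the LET rises sharply, so the correction grows with
# depth; hypothetical readings along a depth-dose curve:
let = np.array([0.5, 1.0, 5.0, 20.0])        # keV/um
signal = np.array([1.00, 1.05, 1.80, 3.00])  # raw PSD readings
print(quenching_corrected_dose(signal, let))
```

In practice the LET at each depth comes from a Monte Carlo model of the beamline, as in the paper, and kB from fitting PSD readings against ion chamber doses.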

  7. Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy.

    PubMed

    Bolst, David; Guatelli, Susanna; Tran, Linh T; Chartier, Lachlan; Lerch, Michael L F; Matsufuji, Naruhiro; Rosenfeld, Anatoly B

    2017-03-21

    Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV, with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length [Formula: see text] to calculate the lineal energy was introduced as an alternative to the mean chord length [Formula: see text] because it was found that adopting Cauchy's formula for the [Formula: see text] was not appropriate for the radiation field typical of HIT as it is very directional. [Formula: see text] can be determined based on the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as [Formula: see text]. The tissue equivalence conversion method and [Formula: see text] were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model, applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of [Formula: see text].

  8. Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy

    NASA Astrophysics Data System (ADS)

    Bolst, David; Guatelli, Susanna; Tran, Linh T.; Chartier, Lachlan; Lerch, Michael L. F.; Matsufuji, Naruhiro; Rosenfeld, Anatoly B.

    2017-03-01

    Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV, with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length ⟨l_Path⟩ to calculate the lineal energy was introduced as an alternative to the mean chord length ⟨l⟩ because it was found that adopting Cauchy’s formula for ⟨l⟩ was not appropriate for the radiation field typical of HIT as it is very directional. ⟨l_Path⟩ can be determined based on the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as ⟨l_Path⟩. The tissue equivalence conversion method and ⟨l_Path⟩ were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model, applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of ⟨l_Path⟩.

  9. Does the Hertz solution estimate pressures correctly in diamond indentor experiments?

    NASA Astrophysics Data System (ADS)

    Bruno, M. S.; Dunn, K. J.

    1986-05-01

    The Hertz solution has been widely used to estimate pressures in high-pressure experiments of the spherical-indentor-against-flat-matrix type. It is usually assumed that the pressure generated when compressing a sample between the indentor and substrate is the same as that generated when compressing an indentor against a flat surface with no sample present. A non-linear finite element analysis of this problem has shown that the situation is far more complex. The actual peak pressure in the sample is highly dependent on plastic deformation and the change in material properties due to hydrostatic pressure. An analysis with two material models is presented and compared with the Hertz solution.
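    For reference, the classical Hertz estimate that the abstract questions can be computed directly; the load and elastic constants below are illustrative, not from the paper:

```python
import math

def hertz_peak_pressure(force_n, radius_m, e1, nu1, e2, nu2):
    """Peak contact pressure for an elastic sphere on a flat (Hertz theory):
      1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
      a    = (3 F R / 4 E*)^(1/3)      (contact radius)
      p0   = 3 F / (2 pi a^2)          (peak pressure)
    Valid only while both bodies remain elastic, which is exactly the
    assumption the finite element analysis challenges."""
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)
    a = (3.0 * force_n * radius_m / (4.0 * e_star)) ** (1.0 / 3.0)
    return 3.0 * force_n / (2.0 * math.pi * a**2)

# A 1-mm-radius diamond sphere on a diamond flat under 100 N
# (illustrative elastic constants for diamond)
p0 = hertz_peak_pressure(force_n=100.0, radius_m=1e-3,
                         e1=1050e9, nu1=0.20, e2=1050e9, nu2=0.20)
print(f"{p0 / 1e9:.1f} GPa")  # → 18.0 GPa
```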

  10. Solution of Einstein's Equation for Deformation of a Magnetized Neutron Star

    NASA Astrophysics Data System (ADS)

    Rizaldy, R.; Sulaksono, A.

    2018-04-01

    We studied the effect of the very large and non-uniform magnetic field existing in a neutron star on the deformation of the star. In our analytical calculation we used a multipole expansion of the metric tensor and the energy-momentum tensor in Legendre polynomials up to quadrupole order. In this way we obtain the solutions of Einstein's equation with the correction factors due to the magnetic field taken into account. Our numerical calculation shows that the degree of deformation (ellipticity) increases as the mass decreases.

  11. Simplified solution for osculating Keplerian parameter corrections of GEO satellites for intersatellite optical link

    NASA Astrophysics Data System (ADS)

    Yılmaz, Umit C.; Cavdar, Ismail H.

    2015-04-01

    In intersatellite optical communication, the Pointing, Acquisition and Tracking (PAT) phase is one of the important phases that needs to be completed successfully before initiating communication. In this paper, we focused on correcting the possible errors on the Geostationary Earth Orbit (GEO) by using azimuth and elevation errors between Low Earth Orbit (LEO) to GEO optical link during the PAT phase. To minimise the PAT duration, a simplified correction of longitude and inclination errors of the GEO satellite's osculating Keplerian parameters has been suggested. A simulation has been done considering the beaconless tracking and spiral-scanning technique. As a result, starting from the second day, we are able to reduce the uncertainty cone of the GEO satellite by about 200 μrad whenever it is larger than that value. The first day of the LEO-GEO links has been used to determine the parameters. Thanks to the corrections, the locking time onto the GEO satellite has been reduced, and more data can be transmitted to the GEO satellite.

  12. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.

  13. Air density correction in ionization dosimetry.

    PubMed

    Christ, G; Dohm, O S; Schüle, E; Gaupp, S; Martin, M

    2004-05-21

    Air density must be taken into account when ionization dosimetry is performed with unsealed ionization chambers. The German dosimetry protocol DIN 6800-2 states an air density correction factor for which current barometric pressure and temperature and their reference values must be known. It also states that differences between air density and the attendant reference value, as well as changes in ionization chamber sensitivity, can be determined using a radioactive check source. Both methods have advantages and drawbacks which the paper discusses in detail. Barometric pressure at a given height above sea level can be determined by using a suitable barometer, or data downloaded from airport or weather service internet sites. The main focus of the paper is to show how barometric data from measurement or from the internet are correctly processed. Therefore the paper also provides all the requisite equations and terminological explanations. Computed and measured barometric pressure readings are compared, and long-term experience with air density correction factors obtained using both methods is described.
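    The air density correction the protocol describes is a short formula; the reference conditions below follow common practice (20 °C, 1013.25 hPa) but in a real clinic should be taken from the chamber's calibration certificate:

```python
def air_density_correction(pressure_hpa, temp_c,
                           p0_hpa=1013.25, t0_c=20.0):
    """Air density correction factor for an open (unsealed) ionization
    chamber: k = (p0 / p) * (273.15 + T) / (273.15 + T0).
    Lower pressure or higher temperature means less air in the cavity,
    fewer ionizations, and therefore a correction factor above 1."""
    return (p0_hpa / pressure_hpa) * (273.15 + temp_c) / (273.15 + t0_c)

# A clinic at moderate altitude on a warm day: 950 hPa and 22 degrees C
print(round(air_density_correction(950.0, 22.0), 4))  # → 1.0739
```

The check-source alternative discussed in the paper folds this same factor, plus any drift in chamber sensitivity, into a single measured ratio.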

  14. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
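    The patent's correction idea can be sketched in a few lines; the function name, intensities and sign convention below are hypothetical illustrations of the scheme, not the patented implementation:

```python
import math

def corrected_absorbance(i_ref, i_sample, i_sample_nonabs, i_ref_nonabs):
    """Sketch of the correction idea: the measured absorbance
    A = log10(I_ref / I_sample) includes non-spectral losses (scattering,
    window fouling).  A measurement at a wavelength where the sample fluid
    does not absorb isolates those losses as an offset to subtract."""
    a_measured = math.log10(i_ref / i_sample)
    a_offset = math.log10(i_ref_nonabs / i_sample_nonabs)  # non-absorbing λ
    return a_measured - a_offset

# Hypothetical intensities: true absorbance 0.30 plus 0.05 of stray loss
# that also shows up at the non-absorbing wavelength
print(round(corrected_absorbance(1.0, 10**-0.35, 10**-0.05, 1.0), 2))  # → 0.3
```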

  15. Sci-Fri PM: Planning-10: The replacement correction factors for cylindrical chambers in megavoltage beams.

    PubMed

    Wang, L; Rogers, Dwo

    2008-07-01

    The replacement correction factor (P repl ) in ion chamber dosimetry accounts for the effects of the medium being replaced by the air cavity of the chamber. In TG-21, P repl was conceptually separated into two components: fluence correction, P fl , and gradient correction, P gr . In TG-51, for electron beams, the calibration is at d ref where P gr is required for cylindrical chambers and P fl is unknown and assumed to be the same as that for a beam having the same mean electron energy at d max . For cylindrical chambers in high-energy photon beams, P repl also represents a major uncertainty in current dosimetry protocols. In this study, P repl is calculated with high precision (<0.1%) by the Monte Carlo method as the ratio of the dose in a phantom to the dose scored in water-walled cylindrical cavities of various radii (with the center of the cavity being the point of measurement) in both high energy photon and electron beams. It is found that, for electron beams, the mean electron energy at depth is a good beam quality specifier for P fl ; and TG-51's adoption of P fl at d max with the same mean electron energy for use at d ref is proven to be accurate. For Farmer chambers in photon beams, there is essentially no beam quality dependence for P repl values. In a Co photon beam, the calculated P repl is about 0.4-0.6% higher than the TG-21 value, indicating TG-21 (and TG-51) used incorrect values of P repl for cylindrical chambers. © 2008 American Association of Physicists in Medicine.

  16. Influence of electrolytes in the QCM response: discrimination and quantification of the interference to correct microgravimetric data.

    PubMed

    Encarnação, João M; Stallinga, Peter; Ferreira, Guilherme N M

    2007-02-15

    In this work we demonstrate that the presence of electrolytes in solution generates desorption-like transients when the resonance frequency is measured. Using impedance spectroscopy analysis and Butterworth-Van Dyke (BVD) equivalent electrical circuit modeling, we demonstrate that non-Kanazawa responses are obtained in the presence of electrolytes, mainly due to the formation of a diffuse electric double layer (DDL) at the sensor surface, which also causes a capacitor-like signal. We extend the BVD equivalent circuit by including additional parallel capacitances in order to account for this capacitor-like signal. Interfering signals from electrolytes and DDL perturbations were thus discriminated. We further quantified the influence of electrolytes on the sensor resonance frequency as 8.0 ± 0.5 Hz pF⁻¹, and we used this factor to correct the data obtained by frequency counting measurements. The applicability of this approach is demonstrated by the detection of oligonucleotide sequences. After applying the corrective factor to the frequency counting data, the mass contribution to the sensor signal yields identical values when estimated by impedance analysis and frequency counting.
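    Applying the paper's empirical factor to frequency-counting data can be sketched as follows; the sign convention and the example numbers are assumptions for illustration:

```python
def corrected_frequency_shift(df_measured_hz, dc_parallel_pf,
                              sensitivity_hz_per_pf=8.0):
    """Remove the electrolyte/double-layer contribution from a QCM frequency
    shift using the paper's empirical factor of 8.0 +/- 0.5 Hz per pF of
    added parallel capacitance (sign convention assumed here)."""
    return df_measured_hz + sensitivity_hz_per_pf * dc_parallel_pf

# Hypothetical run: the frequency counter reports -52 Hz while impedance
# analysis attributes 0.9 pF of extra parallel capacitance to the DDL,
# so only the remainder reflects adsorbed mass.
print(corrected_frequency_shift(-52.0, 0.9))  # → -44.8
```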

  17. Aerosol hygroscopic growth parameterization based on a solute specific coefficient

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.

    2011-09-01

    Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute-specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization does not depend only on a linear correction factor for the solute molality; νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single-solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs(sat). The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.
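    The pre-determination step can be illustrated generically. The stand-in water-activity relation below (a simple exponential in molality) is NOT the paper's EQSAM4 parameterization; it is only a one-parameter placeholder to show how bisection pins down a solute coefficient from an RHD measurement and the saturation molality.

```python
import math

MW_WATER = 0.018  # molar mass of water, kg/mol

def water_activity(molality, nu):
    # Placeholder single-parameter relation (assumption, not the paper's form)
    return math.exp(-nu * MW_WATER * molality)

def solve_nu(rhd, sat_molality, lo=0.01, hi=10.0, tol=1e-12):
    """Bisection: find nu such that a_w(sat_molality; nu) equals the RHD
    (expressed as a fraction). a_w decreases monotonically with nu here."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if water_activity(sat_molality, mid) > rhd:
            lo = mid  # a_w too high -> nu too small
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative NaCl-like inputs: RHD ~75.3% at saturation molality ~6.15 mol/kg
nu = solve_nu(0.753, 6.15)
print(round(water_activity(6.15, nu), 3))  # 0.753 (reproduces the input RHD)
```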

  18. Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.

    PubMed

    Bloom, David J; Lee, Soo-Yeun

    2016-09-01

    Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®

  19. Long-term correction of canine hemophilia B by gene transfer of blood coagulation factor IX mediated by adeno-associated viral vector.

    PubMed

    Herzog, R W; Yang, E Y; Couto, L B; Hagstrom, J N; Elwell, D; Fields, P A; Burton, M; Bellinger, D A; Read, M S; Brinkhous, K M; Podsakoff, G M; Nichols, T C; Kurtzman, G J; High, K A

    1999-01-01

    Hemophilia B is a severe X-linked bleeding diathesis caused by the absence of functional blood coagulation factor IX, and is an excellent candidate for treatment of a genetic disease by gene therapy. Using an adeno-associated viral vector, we demonstrate sustained expression (>17 months) of factor IX in a large-animal model at levels that would have a therapeutic effect in humans (up to 70 ng/ml, adequate to achieve phenotypic correction, in an animal injected with 8.5x10(12) vector particles/kg). The five hemophilia B dogs treated showed stable, vector dose-dependent partial correction of the whole blood clotting time and, at higher doses, of the activated partial thromboplastin time. In contrast to other viral gene delivery systems, this minimally invasive procedure, consisting of a series of percutaneous intramuscular injections at a single timepoint, was not associated with local or systemic toxicity. Efficient gene transfer to muscle was shown by immunofluorescence staining and DNA analysis of biopsied tissue. Immune responses against factor IX were either absent or transient. These data provide strong support for the feasibility of the approach for therapy of human subjects.

  20. A Comparison of Blood Factor XII Autoactivation in Buffer, Protein Cocktail, Serum, and Plasma Solutions

    PubMed Central

    Golas, Avantika; Yeh, Chyi-Huey Josh; Pitakjakpipop, Harit; Siedlecki, Christopher A.; Vogler, Erwin A.

    2012-01-01

    Activation of blood plasma coagulation in vitro by contact with material surfaces is demonstrably dependent on plasma-volume-to-activator-surface-area ratio. The only plausible explanation consistent with current understanding of coagulation-cascade biochemistry is that procoagulant stimulus arising from the activation complex of the intrinsic pathway is dependent on activator surface area. And yet, it is herein shown that activation of the blood zymogen factor XII (Hageman factor, FXII) dissolved in buffer, protein cocktail, heat-denatured serum, and FXI deficient plasma does not exhibit activator surface-area dependence. Instead, a highly-variable burst of procoagulant-enzyme yield is measured that exhibits no measurable kinetics, sensitivity to mixing, or solution-temperature dependence. Thus, FXII activation in both buffer and protein-containing solutions does not exhibit characteristics of a biochemical reaction but rather appears to be a “mechanochemical” reaction induced by FXII molecule interactions with hydrophilic activator particles that do not formally adsorb blood proteins from solution. Results of this study strongly suggest that activator surface-area dependence observed in contact activation of plasma coagulation does not solely arise at the FXII activation step of the intrinsic pathway. PMID:23117212

  1. Spectral dependence on the correction factor of erythemal UV for cloud, aerosol, total ozone, and surface properties: A modeling study

    NASA Astrophysics Data System (ADS)

    Park, Sang Seo; Jung, Yeonjin; Lee, Yun Gon

    2016-07-01

    Radiative transfer model simulations were used to investigate the erythemal ultraviolet (EUV) correction factors by separating the UV-A and UV-B spectral ranges. The correction factor was defined as the ratio of EUV caused by changing the amounts and characteristics of the extinction and scattering materials. The EUV correction factors (CFEUV) for UV-A [CFEUV(A)] and UV-B [CFEUV(B)] were affected by changes in the total ozone, optical depths of aerosol and cloud, and the solar zenith angle. The differences between CFEUV(A) and CFEUV(B) were also estimated as a function of solar zenith angle, the optical depths of aerosol and cloud, and total ozone. The differences between CFEUV(A) and CFEUV(B) ranged from -5.0% to 25.0% for aerosols, and from -9.5% to 2.0% for clouds in all simulations for different solar zenith angles and optical depths of aerosol and cloud. The rate of decline of CFEUV per unit optical depth between UV-A and UV-B differed by up to 20% for the same aerosol and cloud conditions. For total ozone, the variation in CFEUV(A) was negligible compared with that in CFEUV(B) because of the effective spectral range of the ozone absorption band. In addition, the sensitivity of the CFEUVs due to changes in surface conditions (i.e., surface albedo and surface altitude) was also estimated by using the model in this study. For changes in surface albedo, the sensitivity of the CFEUVs was 2.9%-4.1% per 0.1 albedo change, depending on the amount of aerosols or clouds. For changes in surface altitude, the sensitivity of CFEUV(B) was twice that of CFEUV(A), because the Rayleigh optical depth increased significantly at shorter wavelengths.

  2. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of the ambiguities require a properly defined mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of the corrections, especially during atmospheric disturbances, results in unaccounted biases, the so-called residual errors. Such errors can be taken into account in the Network RTK positioning model by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the results of the instantaneous solution, which utilizes only a single epoch of GPS observations, are analyzed. Due to its low number of degrees of freedom, this solution mode is very sensitive to an inappropriate mathematical model definition, so a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountain area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution, as well as the rover positioning accuracy, show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.

  3. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
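    The mechanics of such a table can be sketched as follows. An age correction is the difference between the polynomial-fit median threshold at the current age and at the baseline age. The quadratic coefficients below are invented placeholders, not NHANES-derived values.

```python
# Illustrative sketch of an OSHA-style age correction from a fitted polynomial.
# Coefficients are hypothetical (a*age^2 + b*age + c, in dB HL at one frequency).

COEFFS_4KHZ = (0.008, -0.2, 6.0)

def median_threshold(age, coeffs=COEFFS_4KHZ):
    a, b, c = coeffs
    return a * age * age + b * age + c

def age_correction(baseline_age, current_age, coeffs=COEFFS_4KHZ):
    """dB to subtract from the current audiogram before comparing against
    the baseline, attributing that much shift to ageing alone."""
    return median_threshold(current_age, coeffs) - median_threshold(baseline_age, coeffs)

# A worker tested at 30 (baseline) and retested at 65:
print(round(age_correction(30, 65), 1))  # 19.6
```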

  4. Study of different solutes for determination of neutron source strength based on the water bath

    NASA Astrophysics Data System (ADS)

    Khabaz, Rahim

    2018-09-01

    Time required for activation to saturation and background measurement is considered a limitation of source-strength determination for radionuclide neutron sources using the manganese bath system (MBS). The objective of this research was to evaluate other solutes, based on a water bath, as suitable replacements for the MBS. With the aid of Monte Carlo simulation, for three neutron sources with different neutron spectra immersed in six aqueous solutions, i.e., Na2SO4, VOSO4, MnSO4, Rh2(SO4)3, In2(SO4)3, and I2O5, the correction factors in all nuclei of the solutions for neutron losses through different processes were obtained. The calculation results indicate that Rh2(SO4)3 and VOSO4 are the best options for replacing MnSO4.

  5. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
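    The general idea of a propensity-score adjustment can be sketched in a few lines. This is a generic illustration in the spirit of the abstract, not the authors' GPS implementation: a propensity for subpopulation membership is estimated from covariates by plain logistic regression, and the fitted scores would then be used for stratification or weighting. Data, covariates, and hyperparameters are toy stand-ins.

```python
import math

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression; returns [bias, w1, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def propensity(xi, w):
    """Estimated probability of subpopulation membership given covariates."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: a single covariate (e.g., an allele dosage or a non-genetic factor)
# that differs between two subpopulations (y = membership indicator).
X = [[0.0], [0.1], [0.2], [0.9], [1.0], [1.1]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
print([round(propensity(xi, w), 2) for xi in X])
```

Subjects with similar propensity scores would be grouped into strata (or weighted) so that the exposure-outcome association is assessed with the stratification confounding balanced out.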

  6. Using the Karolinska Scales of Personality on male juvenile delinquents: relationships between scales and factor structure.

    PubMed

    Dåderman, Anna M; Hellström, Ake; Wennberg, Peter; Törestad, Bertil

    2005-01-01

    The aim of the present study was to investigate relationships between scales from the Karolinska Scales of Personality (KSP) and the factor structure of the KSP in a sample of male juvenile delinquents. The KSP was administered to a group of male juvenile delinquents (n=55, mean age 17 years; standard deviation=1.2) from four Swedish national correctional institutions for serious offenders. As expected, the KSP showed appropriate correlations between the scales. Factor analysis (maximum likelihood) arrived at a four-factor solution in this sample, which is in line with previous research performed in a non-clinical sample of Swedish males. More research is needed in a somewhat larger sample of juvenile delinquents in order to confirm the present results regarding the factor solution.

  7. Puzzler Solution: Just Making an Observation | Poster

    Cancer.gov

    Editor’s Note: It looks like we stumped you. None of the puzzler guesses were correct, but our winner was the closest to getting it right. He guessed it was a sanitary sewer clean-out pipe, and that’s what the photo looks like, according to our source at Facilities Maintenance and Engineering. Please continue reading for the correct puzzler solution. By Ashley DeVine, Staff Writer

  9. Incorporating convection into one-dimensional solute redistribution during crystal growth from the melt I. The steady-state solution

    NASA Astrophysics Data System (ADS)

    Yen, C. T.; Tiller, W. A.

    1992-03-01

    A one-dimensional mathematical analysis is made of the redistribution of solute which occurs during crystal growth from a convected melt. In this analysis, the important contribution of lateral melt convection to the one-dimensional solute redistribution is taken into consideration via an annihilation/creation term in the one-dimensional solute transport equation. Calculations of solute redistribution under steady-state conditions have been carried out analytically. It is found that this new solute redistribution model overcomes several weaknesses that arise when applying the Burton, Prim and Slichter (1953) solute segregation equation to real melt growth situations. It is also found that, with this correction, the diffusion coefficients for solutes in liquid silicon are now found to be in the same range as other liquid-metal diffusion coefficients.
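    For reference, the classic Burton-Prim-Slichter relation that the abstract's model refines gives the effective segregation coefficient as k_eff = k0 / (k0 + (1 - k0) exp(-Vδ/D)); a quick numerical check of its limits (growth velocity V, boundary-layer thickness δ, solute diffusivity D; the values below are illustrative only):

```python
import math

def k_eff_bps(k0, v, delta, d):
    """Burton-Prim-Slichter (1953) effective segregation coefficient."""
    return k0 / (k0 + (1.0 - k0) * math.exp(-v * delta / d))

# Slow growth (V*delta/D << 1): k_eff approaches the equilibrium value k0.
print(k_eff_bps(0.1, 1e-9, 1e-4, 1e-8))  # ~0.1
# Fast growth (V*delta/D >> 1): complete solute trapping, k_eff -> 1.
print(k_eff_bps(0.1, 1e-3, 1e-4, 1e-8))  # ~1.0
```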

  10. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in

  11. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial

  12. The structure of aqueous sodium hydroxide solutions: a combined solution x-ray diffraction and simulation study.

    PubMed

    Megyes, Tünde; Bálint, Szabolcs; Grósz, Tamás; Radnai, Tamás; Bakó, Imre; Sipos, Pál

    2008-01-28

    To determine the structure of aqueous sodium hydroxide solutions, results obtained from x-ray diffraction and computer simulation (molecular dynamics and Car-Parrinello) have been compared. The capabilities and limitations of the methods in describing the solution structure are discussed. For the solutions studied, diffraction methods were found to perform very well in describing the hydration spheres of the sodium ion and yield structural information on the anion's hydration structure. Classical molecular dynamics simulations were not able to correctly describe the bulk structure of these solutions. However, Car-Parrinello simulation proved to be a suitable tool in the detailed interpretation of the hydration sphere of ions and bulk structure of solutions. The results of Car-Parrinello simulations were compared with the findings of diffraction experiments.

  13. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    NASA Astrophysics Data System (ADS)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross sections for ions is the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross section similar to that from the more time-consuming Coulomb Born approximation. We compare this new approximation with other, often-employed correction procedures. We also examine some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximates convergent close coupling calculations.

  14. Aberration corrected STEM by means of diffraction gratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linck, Martin; Ercius, Peter A.; Pierce, Jordan S.

    In the past 15 years, the advent of aberration correction technology in electron microscopy has enabled materials analysis on the atomic scale. This is made possible by precise arrangements of multipole electrodes and magnetic solenoids to compensate the aberrations inherent to any focusing element of an electron microscope. In this paper, we describe an alternative method to correct for the spherical aberration of the objective lens in scanning transmission electron microscopy (STEM) using a passive, nanofabricated diffractive optical element. This holographic device is installed in the probe forming aperture of a conventional electron microscope and can be designed to remove arbitrarily complex aberrations from the electron's wave front. In this work, we show a proof-of-principle experiment that demonstrates successful correction of the spherical aberration in STEM by means of such a grating corrector (GCOR). Our GCOR enables us to record aberration-corrected high-resolution high-angle annular dark field (HAADF-) STEM images, although yet without advancement in probe current and resolution. Finally, improvements in this technology could provide an economical solution for aberration-corrected high-resolution STEM in certain use scenarios.

  15. Aberration corrected STEM by means of diffraction gratings

    DOE PAGES

    Linck, Martin; Ercius, Peter A.; Pierce, Jordan S.; ...

    2017-06-12

    In the past 15 years, the advent of aberration correction technology in electron microscopy has enabled materials analysis on the atomic scale. This is made possible by precise arrangements of multipole electrodes and magnetic solenoids to compensate the aberrations inherent to any focusing element of an electron microscope. In this paper, we describe an alternative method to correct for the spherical aberration of the objective lens in scanning transmission electron microscopy (STEM) using a passive, nanofabricated diffractive optical element. This holographic device is installed in the probe forming aperture of a conventional electron microscope and can be designed to remove arbitrarily complex aberrations from the electron's wave front. In this work, we show a proof-of-principle experiment that demonstrates successful correction of the spherical aberration in STEM by means of such a grating corrector (GCOR). Our GCOR enables us to record aberration-corrected high-resolution high-angle annular dark field (HAADF-) STEM images, although yet without advancement in probe current and resolution. Finally, improvements in this technology could provide an economical solution for aberration-corrected high-resolution STEM in certain use scenarios.

  16. The two sides of the C-factor.

    PubMed

    Fok, Alex S L; Aregawi, Wondwosen A

    2018-04-01

    The aim of this paper is to investigate the effects on shrinkage strain/stress development of the lateral constraints at the bonded surfaces of resin composite specimens used in laboratory measurement. Using three-dimensional (3D) Hooke's law, a recently developed shrinkage stress theory is extended to 3D to include the additional out-of-plane strain/stress induced by the lateral constraints at the bonded surfaces through the Poisson's ratio effect. The model contains a parameter that defines the relative thickness of the boundary layers, adjacent to the bonded surfaces, that are under such multiaxial stresses. The resulting differential equation is solved for the shrinkage stress under different boundary conditions. The accuracy of the model is assessed by comparing the numerical solutions with a wide range of experimental data, which include those from both shrinkage strain and shrinkage stress measurements. There is good agreement between theory and experiments. The model correctly predicts the different instrument-dependent effects that a specimen's configuration factor (C-factor) has on shrinkage stress. That is, for noncompliant stress-measuring instruments, shrinkage stress increases with the C-factor of the cylindrical specimen; while the opposite is true for compliant instruments. The model also provides a correction factor, which is a function of the C-factor, Poisson's ratio and boundary layer thickness of the specimen, for shrinkage strain measured using the bonded-disc method. For the resin composite examined, the boundary layers have a combined thickness that is ∼11.5% of the specimen's diameter. The theory provides a physical and mechanical basis for the C-factor using principles of engineering mechanics. The correction factor it provides allows the linear shrinkage strain of a resin composite to be obtained more accurately from the bonded-disc method. Published by Elsevier Ltd.

  17. Optical solutions for unbundled access network

    NASA Astrophysics Data System (ADS)

    Bacîş Vasile, Irina Bristena

    2015-02-01

    The unbundling technique requires solutions that guarantee the economic and technical performance imposed by the nature of the services to be offered. One possible solution is optical; choosing this solution is justified for the following reasons: it optimizes the use of the access network, which is the most expensive part of a network (about 50% of the total investment in telecommunications networks) while also being the least used (telephone traffic on the lines has a low cost); it increases the distance between the master station/central office and the subscriber's terminal; and the development of the services offered to subscribers is conditioned by the subscriber network. Broadband services require support for the introduction of high-speed transport. A proper identification of the factors that must be satisfied, together with a comprehensive financial evaluation of all resources involved, both newly purchased resources and extensions, are the main conditions for a correct choice. As there is no single optimal technology for all development scenarios that can take into account all access systems, a successful implementation is always achieved through individual, particularized scenarios. The method used today for selecting an optimal solution is based on statistics and analysis of the various solutions already implemented, and on the experience already gained; the main and most unbiased evaluation criterion is the ratio between the cost of the investment and the quality of service, while serving as large a number of customers as possible.

  18. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... changes. This correcting amendment is necessary to correct the statutory authority that is cited in one of... is necessary to correct the statutory authority that is cited in the authority citation for part 171...

  19. Author Correction: Biochemical phosphates observed using hyperpolarized 31P in physiological aqueous solutions.

    PubMed

    Nardi-Schreiber, Atara; Gamliel, Ayelet; Harris, Talia; Sapir, Gal; Sosna, Jacob; Gomori, J Moshe; Katz-Brull, Rachel

    2018-05-22

    The original version of the Supplementary Information associated with this Article contained an error in Supplementary Figure 2 and Supplementary Figure 5 in which the 31P NMR spectral lines were missing. The HTML has been updated to include a corrected version of the Supplementary Information.

  20. Experimental determination of field factors ($\Omega_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$) for small radiotherapy beams using the daisy chain correction method

    NASA Astrophysics Data System (ADS)

    Lárraga-Gutiérrez, José Manuel

    2015-08-01

    Recently, Alfonso et al proposed a new formalism for the dosimetry of small and non-standard fields. The proposed formalism relies strongly on detector-specific beam correction factors calculated by Monte Carlo simulation methods, which account for the difference in the response of the detector between the small field and the machine-specific reference field. The correct calculation of the detector-specific beam correction factors demands accurate knowledge of the linear accelerator, the detector geometry and the composition materials. The present work shows that the field factors in water may be determined experimentally using the daisy chain correction method down to a field size of 1 cm × 1 cm for a specific set of detectors. The detectors studied were: three mini-ionization chambers (PTW-31014, PTW-31006, IBA-CC01), three silicon-based diodes (PTW-60018, IBA-SFD and IBA-PFD) and one synthetic diamond detector (PTW-60019). Monte Carlo simulations and experimental measurements were performed for a 6 MV photon beam at 10 cm depth in water with a source-to-axis distance of 100 cm. The results show that the differences between the experimental and Monte Carlo calculated field factors are less than 0.5% (with the exception of the IBA-PFD) for field sizes between 1.5 cm × 1.5 cm and 5 cm × 5 cm. For the 1 cm × 1 cm field size, the differences are within 2%. By using the daisy chain correction method, it is possible to determine measured field factors in water. The results suggest that the daisy chain correction method is not suitable for measurements performed with the IBA-PFD detector, owing to the presence of tungsten powder in the detector encapsulation material. The use of Monte Carlo calculated $k_{Q_{\text{clin}},Q_{\text{msr}}}^{f_{\text{clin}},f_{\text{msr}}}$ is encouraged for field sizes less than or equal to 1 cm × 1 cm for the dosimeters used in this work.
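
    The daisy chain method referred to above normalizes the small-field detector reading to an intermediate field in which both the detector under test and a reference detector are considered reliable, then chains through the reference detector to the machine-specific reference (msr) field. A hedged sketch of that chaining (function and variable names are illustrative, not taken from the paper):

```python
def daisy_chain_field_factor(m_clin_det: float, m_int_det: float,
                             m_int_ref: float, m_msr_ref: float) -> float:
    """Daisy-chain normalisation of a measured field factor.

    m_clin_det: small-field detector reading in the clinical (small) field
    m_int_det:  same detector's reading in the intermediate field
    m_int_ref:  reference detector's reading in the intermediate field
    m_msr_ref:  reference detector's reading in the msr (reference) field
    """
    # Ratio to the intermediate field with the small-field detector,
    # then chain to the msr field with the reference detector.
    return (m_clin_det / m_int_det) * (m_int_ref / m_msr_ref)
```

The intermediate field (commonly a few cm wide) is chosen so that neither detector needs a correction there; the scheme cancels much of the detector-specific small-field perturbation.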

  1. HIV-risk characteristics in community corrections.

    PubMed

    Clark, C Brendan; McCullumsmith, Cheryl B; Waesche, Matthew C; Islam, M Aminul; Francis, Reginald; Cropsey, Karen L

    2013-01-01

    Individuals in the criminal justice system engage in behaviors that put them at high risk for HIV. This study sought to identify characteristics of individuals who are under community corrections supervision (eg, probation) and at risk for HIV. Approximately 25,000 individuals under community corrections supervision were assessed for HIV risk, and 5059 participants were deemed high-risk or no-risk. Of those, 1519 exhibited high sexual-risk (SR) behaviors, 203 exhibited injection drug risk (IVR), 957 exhibited both types of risk (SIVR), and 2380 exhibited no risk. Sociodemographic characteristics and drug of choice were then examined using univariate and binary logistic regression. Having a history of sexual abuse, not having insurance, and selecting any drug of choice were associated with all forms of HIV risk. However, the effect sizes associated with the various drugs of choice varied significantly by group. Aside from those common risk factors, very different patterns emerged. Female gender was a risk factor for the SR group but was less likely to be associated with IVR. Younger age was associated with SR, whereas older age was associated with IVR. Black race was a risk factor for SR but had a negative association with IVR and SIVR. Living in a shelter, living with relatives/friends, and being unemployed were all risk factors for IVR but were protective factors for SR. Distinct sociodemographic and substance use characteristics were associated with sexual versus injection drug use risk for individuals under community corrections supervision who were at risk for HIV. Information from this study could help identify high-risk individuals and allow tailoring of interventions.

  2. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By exploiting the rapid phase retrieval of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme for correcting Bessel-Gauss beams distorted by inhomogeneous media, such as a weakly turbulent atmosphere, with good performance. A probe Gaussian beam propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is needed; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
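
    The Gerchberg-Saxton algorithm used here iterates between two planes linked by a Fourier transform, enforcing the known amplitude in each plane while keeping the evolving phase. A minimal sketch under simplifying assumptions (a plain FFT as the propagation model; the paper's actual propagation and constraints are not specified here):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=50):
    """Retrieve a phase mask such that the far field (FFT) of
    source_amp * exp(i*phase) has amplitude close to target_amp."""
    phase = np.zeros_like(source_amp, dtype=float)
    for _ in range(iterations):
        near = source_amp * np.exp(1j * phase)          # impose source amplitude
        far = np.fft.fft2(near)
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))             # keep only the phase
    return phase
```

In the correction scheme described above, the retrieved phase (with sign inverted) would be written to a spatial light modulator to pre-compensate the distortion.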

  3. Doctors' confusion over ratios and percentages in drug solutions: the case for standard labelling

    PubMed Central

    Wheeler, Daniel Wren; Remoundos, Dionysios Dennis; Whittlestone, Kim David; Palmer, Michael Ian; Wheeler, Sarah Jane; Ringrose, Timothy Richard; Menon, David Krishna

    2004-01-01

    The different ways of expressing concentrations of drugs in solution, as ratios or percentages or mass per unit volume, are a potential cause of confusion that may contribute to dose errors. To assess doctors' understanding of what they signify, all active subscribers to doctors.net.uk, an online community exclusively for UK doctors, were invited to complete a brief web-based multiple-choice questionnaire that explored their familiarity with solutions of adrenaline (expressed as a ratio), lidocaine (expressed as a percentage) and atropine (expressed in mg per mL), and their ability to calculate the correct volume to administer in clinical scenarios relevant to all specialties. 2974 (24.6%) replied. The mean score achieved was 4.80 out of 6 (SD 1.38). Only 85.2% and 65.8% correctly identified the mass of drug in the adrenaline and lidocaine solutions, respectively, whilst 93.1% identified the correct concentration of atropine. More would have administered the correct volume of adrenaline and lidocaine in clinical scenarios (89.4% and 81.0%, respectively) but only 65.5% identified the correct volume of atropine. The labelling of drug solutions as ratios or percentages is antiquated and confusing. Labelling should be standardized to mass per unit volume. PMID:15286190
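
    The underlying arithmetic is fixed by convention: a ratio label gives grams of drug per millilitres of solution (1:1000 adrenaline is 1 g in 1000 mL, i.e. 1 mg/mL), and a percentage gives grams per 100 mL (1% lidocaine is 10 mg/mL). A small sketch of the mass-per-unit-volume standardization the authors advocate:

```python
def ratio_to_mg_per_ml(ratio: str) -> float:
    """Convert a ratio label like '1:1000' (grams per millilitres) to mg/mL."""
    grams, millilitres = (float(x) for x in ratio.split(":"))
    return grams * 1000.0 / millilitres  # g -> mg, divided by volume in mL

def percent_to_mg_per_ml(percent: float) -> float:
    """Convert a percentage label (grams per 100 mL) to mg/mL."""
    return percent * 10.0

# 1:1000 adrenaline -> 1 mg/mL; 1% lidocaine -> 10 mg/mL
```

The conversions make the source of confusion visible: the same 1 mg/mL concentration is written three different ways depending on the drug's labelling tradition.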

  4. Continuous correction of differential path length factor in near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Talukdar, Tanveer; Moore, Jason H.; Diamond, Solomon G.

    2013-05-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method.
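
    The linear system mentioned above comes from the modified Beer-Lambert law, ΔOD(λ) = d · DPF(λ) · Σᵢ εᵢ(λ) Δcᵢ. A sketch of the baseline (non-EKF) inversion; the extinction coefficients, path length and DPF values below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Illustrative (assumed) extinction coefficients [HbO2, HbR] in 1/(mM*cm)
E = np.array([[1.05, 0.37],   # wavelength 1 (values assumed)
              [0.69, 1.55]])  # wavelength 2 (values assumed)
d = 3.0                        # source-detector separation in cm (assumed)
dpf = np.array([6.0, 6.5])     # wavelength-dependent DPF (assumed)

def mbll_concentrations(delta_od):
    """Invert the modified Beer-Lambert law:
    delta_OD(lambda) = d * DPF(lambda) * sum_i eps_i(lambda) * delta_c_i."""
    b = np.asarray(delta_od, float) / (d * dpf)  # path-length-normalised ODs
    delta_c, *_ = np.linalg.lstsq(E, b, rcond=None)
    return delta_c  # [delta HbO2, delta HbR] in mM
```

If the assumed DPF values are wrong, errors mix between the two recovered concentrations; that cross-talk is exactly what the EKF with simultaneous DPF estimation is meant to reduce.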

  5. UXDs-Driven Transferring Method from TRIZ Solution to Domain Solution

    NASA Astrophysics Data System (ADS)

    Ma, Lihui; Cao, Guozhong; Chang, Yunxia; Wei, Zihui; Ma, Kai

    The translation process from TRIZ solutions to domain solutions is an analogy-based process. TRIZ solutions, such as the 40 inventive principles and their related cases, are intermediate solutions for domain problems. Unexpected discoveries (UXDs) are the key factors that trigger designers to generate new ideas for domain solutions. An algorithm for resolving UXDs based on Means-Ends Analysis (MEA) is studied, and a UXDs-driven method for transferring TRIZ solutions to domain solutions is formed. A case study shows the application of the process.

  6. Risk factors for postoperative intraretinal cystoid changes after peeling of idiopathic epiretinal membranes among patients randomized for balanced salt solution and air-tamponade.

    PubMed

    Leisser, Christoph; Hirnschall, Nino; Hackl, Christoph; Döller, Birgit; Varsits, Ralph; Ullrich, Marlies; Kefer, Katharina; Karl, Rigal; Findl, Oliver

    2018-02-20

    Epiretinal membranes (ERM) are macular disorders leading to loss of vision and metamorphopsia. Vitrectomy with membrane peeling represents the gold standard of care. The aim of this study was to assess risk factors for postoperative intraretinal cystoid changes in a study population randomized for balanced salt solution (BSS) and air-tamponade at the end of surgery. This was a prospective randomized study including 69 eyes with idiopathic ERM. Standard 23-gauge three-port pars plana vitrectomy (23G-ppv) with membrane peeling, using intraoperative optical coherence tomography (OCT), was performed. Randomization for BSS and air-tamponade was performed prior to surgery. Best-corrected visual acuity improved from 32.9 letters to 45.1 letters 3 months after surgery. Presence of preoperative intraretinal cystoid changes was found to be the only risk factor for the presence of postoperative intraretinal cystoid changes 3 months after surgery (p = 0.01; odds ratio: 8.0). Other possible risk factors, such as combined phacoemulsification with 23G-ppv and membrane peeling (p = 0.16; odds ratio: 2.4), intraoperative subfoveal hyporeflective zones (p = 0.23; odds ratio: 2.6), age over 70 years (p = 0.29; odds ratio: 0.5) and air-tamponade (p = 0.59; odds ratio: 1.5), were not found to be significant. There is strong evidence that preoperative intraretinal cystoid changes lead to smaller benefit from surgery. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. These tools must therefore provide fast corrections with a low number of interactions and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the task of lumbar muscle segmentation from magnetic resonance images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation with an average Dice coefficient of 0.92±0.03.

  8. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    NASA Astrophysics Data System (ADS)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic feature attributes. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  9. Assessment of ionization chamber correction factors in photon beams using a time saving strategy with PENELOPE code.

    PubMed

    Reis, C Q M; Nicolucci, P

    2016-02-01

    The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a saving time strategy with PENELOPE code. Simulations for calculating absorbed doses to water using full spectra of photon beams impinging the whole water phantom and those using a phase-space file previously stored around the point of interest were performed and compared. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate absorbed doses to the air cavity of the chamber. Absorbed doses to water at reference depth were also calculated for providing the perturbation and beam quality correction factors for that chamber in high energy photon beams. Results obtained in this study show that simulations with phase-space files appropriately stored can be up to ten times shorter than using a full spectrum of photon beams in the input-file. Values of kQ and its components for the NE2571 ionization chamber showed good agreement with published values in the literature and are provided with typical statistical uncertainties of 0.2%. Comparisons to kQ values published in current dosimetry protocols such as the AAPM TG-51 and IAEA TRS-398 showed maximum percentage differences of 0.1% and 0.6% respectively. The proposed strategy presented a significant efficiency gain and can be applied for a variety of ionization chambers and clinical photon beams. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion. PMID:25097873

  11. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  12. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, so as to make photos more pleasant for an observer, is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. The algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several versions of the approach are possible, trading off detection and correction quality, processing time and memory volume. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  13. WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Folkerts, M; Jiang, S

    Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders the convergence of the IR algorithm. This is a particular problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts to a large area. Our purpose is to develop a practical cone artifacts correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and measured projections. We overcome this problem by using FDK type reconstruction to estimate the missing data and design weighting factors to compensate the inconsistency caused by the missing data. We validate the proposed methods in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to the FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of projection data, the severe cone artifacts degrade overall image quality. For a head-neck case under full-fan mode, 13 out of 80 slices are contaminated. It is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with a mean intensity difference decreased from ∼497 HU to ∼39 HU for those contaminated slices. Conclusion: A practical and effective solution for cone artifacts correction is proposed and validated in a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01).

  14. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    ERIC Educational Resources Information Center

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  15. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  16. Misalignment corrections in optical interconnects

    NASA Astrophysics Data System (ADS)

    Song, Deqiang

    Optical interconnects are considered a promising solution for long-distance and high-bitrate data transmission, outperforming electrical interconnects in terms of loss and dispersion. Due to the bandwidth and distance advantages of optical interconnects, longer links have been implemented with optics. Recent studies show that optical interconnects have clear advantages even at very short distances: intra-system interconnects. The biggest challenge for such optical interconnects is the alignment tolerance. Many free-space optical components require very precise assembly and installation, which can increase the overall cost. This thesis studied the misalignment tolerance and possible alignment correction solutions for optical interconnects at the backplane or board level. First, the alignment tolerance for free-space couplers was simulated, and the result indicated that the most critical alignments occur between the VCSEL, waveguide and microlens arrays. An in-situ microlens array fabrication method was designed and experimentally demonstrated, with no observable misalignment with the waveguide array. At the receiver side, conical lens arrays were proposed to replace simple microlens arrays for a larger angular alignment tolerance. Multilayer simulation models in CodeV were built to optimize the refractive index and shape profiles of the conical lens arrays. Conical lenses fabricated with a micro injection molding machine and fiber etching were characterized. An active component, a VCSOA, was used to correct misalignment in optical connectors between the board and backplane. The alignment correction capability was characterized for both DC and AC (1 GHz) optical signals. The speed and bandwidth of the VCSOA were measured and compared with a VCSEL of the same structure. Based on the optical inverter being studied in our lab, an all-optical flip-flop was demonstrated using a pair of VCSOAs. This memory cell with random access ability can store one bit optical signal with set or

  17. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  18. Application of the Exradin W1 scintillator to determine Ediode 60017 and microDiamond 60019 correction factors for relative dosimetry within small MV and FFF fields.

    PubMed

    Underwood, T S A; Rowland, B C; Ferrand, R; Vieillevigne, L

    2015-09-07

    In this work we use EBT3 film measurements at 10 MV to demonstrate the suitability of the Exradin W1 (plastic scintillator) for relative dosimetry within small photon fields. We then use the Exradin W1 to measure the small field correction factors required by two other detectors: the PTW unshielded Ediode 60017 and the PTW microDiamond 60019. We consider on-axis correction-factors for small fields collimated using MLCs for four different TrueBeam energies: 6 FFF, 6 MV, 10 FFF and 10 MV. We also investigate percentage depth dose and lateral profile perturbations. In addition to high-density effects from its silicon sensitive region, the Ediode exhibited a dose-rate dependence and its known over-response to low energy scatter was found to be greater for 6 FFF than 6 MV. For clinical centres without access to a W1 scintillator, we recommend the microDiamond over the Ediode and suggest that 'limits of usability', field sizes below which a detector introduces unacceptable errors, can form a practical alternative to small-field correction factors. For a dosimetric tolerance of 2% on-axis, the microDiamond might be utilised down to 10 mm and 15 mm field sizes for 6 MV and 10 MV, respectively.
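
    The "limits of usability" idea can be made concrete: given correction factors per field size, the limit is the smallest field size whose correction still sits within the dosimetric tolerance. A hedged sketch with hypothetical numbers (the paper's actual tabulated factors are not reproduced here):

```python
def limit_of_usability(field_sizes_mm, correction_factors, tol=0.02):
    """Smallest field size (mm) whose correction factor lies within tol of
    unity; returns None if no field size qualifies."""
    usable = [f for f, k in zip(field_sizes_mm, correction_factors)
              if abs(k - 1.0) <= tol]
    return min(usable) if usable else None
```

With a 2% tolerance and a detector whose hypothetical factors drift from 1.000 at 30 mm to 1.045 at 5 mm, the limit would be 10 mm; tightening the tolerance pushes the limit to larger fields.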

  19. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study shows the efficiency of our bias correction approach. PMID:27314363
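
    The sequential-window bias factor described above can be sketched as the ratio of accumulated gauge rainfall to accumulated satellite rainfall over each non-overlapping window (7 days performing best in the study). The multiplicative ratio form and the variable names below are assumptions consistent with the text:

```python
import numpy as np

def sequential_window_bias_factors(gauge, sat, window=7):
    """One bias factor per sequential (non-overlapping) window:
    sum(gauge) / sum(satellite), expanded back to one value per day."""
    n = len(gauge) // window * window          # drop an incomplete final window
    g = np.asarray(gauge[:n], float).reshape(-1, window)
    s = np.asarray(sat[:n], float).reshape(-1, window)
    bf = g.sum(axis=1) / np.maximum(s.sum(axis=1), 1e-9)  # guard against dry windows
    return np.repeat(bf, window)

def bias_correct(sat, bf):
    """Multiply the raw satellite estimates by their daily bias factors."""
    return np.asarray(sat, float)[: len(bf)] * bf
```

In the study, station-level factors like these are then spatially interpolated into a bias map before multiplying the CMORPH images.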

  20. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study shows the efficiency of our bias correction approach.

  1. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    DOE PAGES

    Kilcrease, D. P.; Brookes, S.

    2013-08-19

The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross sections for ions is the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross section similar to that from the more time-consuming Coulomb-Born approximation. We compare this new approximation with other, often-employed correction procedures. We also look at some further modifications to our Born-Elwert procedure and its combination with Y. K. Kim's correction of the Coulomb-Born approximation for singly charged ions, which more accurately approximates convergent close-coupling calculations.

  2. High-throughput ab-initio dilute solute diffusion database.

    PubMed

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.

  3. Correction of xeroderma pigmentosum repair defect by basal transcription factor BTF2 (TFIIH).

    PubMed Central

    van Vuuren, A J; Vermeulen, W; Ma, L; Weeda, G; Appeldoorn, E; Jaspers, N G; van der Eb, A J; Bootsma, D; Hoeijmakers, J H; Humbert, S

    1994-01-01

ERCC3 was initially identified as a gene correcting the nucleotide excision repair (NER) defect of xeroderma pigmentosum complementation group B (XP-B). The recent finding that its gene product is identical to the p89 subunit of basal transcription factor BTF2(TFIIH), opened the possibility that it is not directly involved in NER but that it regulates the transcription of one or more NER genes. Using an in vivo microinjection repair assay and an in vitro NER system based on cell-free extracts we demonstrate that ERCC3 in BTF2 is directly implicated in excision repair. Antibody depletion experiments support the idea that the p62 BTF2 subunit and perhaps the entire transcription factor function in NER. Microinjection experiments suggest that exogenous ERCC3 can exchange with ERCC3 subunits in the complex. Expression of a dominant negative K436-->R ERCC3 mutant, expected to have lost all helicase activity, completely abrogates NER and transcription and concomitantly induces a dramatic chromatin collapse. These findings establish the role of ERCC3 and probably the entire BTF2 complex in transcription in vivo which was hitherto only demonstrated in vitro. The results strongly suggest that transcription itself is a critical component for maintenance of chromatin structure. The remarkable dual role of ERCC3 in NER and transcription provides a clue in understanding the complex clinical features of some inherited repair syndromes. PMID:8157004

  4. Continuous correction of differential path length factor in near-infrared spectroscopy

    PubMed Central

    Moore, Jason H.; Diamond, Solomon G.

    2013-01-01

In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method. PMID:23640027
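    As context for the cross-talk problem, the modified Beer-Lambert inversion with a fixed, wavelength-dependent DPF can be sketched as below. The extinction coefficients, separation, and DPF values are illustrative placeholders, and this is the static baseline (akin to the WLSQ comparison method), not the paper's EKF:

```python
import numpy as np

# Illustrative extinction coefficients [HbO2, HbR] at two wavelengths
# (placeholder numbers, not values from the paper)
E = np.array([[1.2, 2.9],    # ~690 nm
              [2.5, 1.8]])   # ~830 nm
d = 3.0                      # source-detector separation (cm), assumed
dpf = np.array([6.5, 5.9])   # assumed wavelength-dependent DPF

def mbll_concentrations(delta_od):
    """Solve the modified Beer-Lambert law
    dOD(lambda) = sum_i E(lambda, i) * dC_i * d * DPF(lambda)
    for the concentration changes dC by least squares."""
    A = E * (d * dpf)[:, None]
    sol, *_ = np.linalg.lstsq(A, delta_od, rcond=None)
    return sol

# Forward-simulate an optical-density change and recover it
true_dc = np.array([0.8, -0.3])
delta_od = (E * (d * dpf)[:, None]) @ true_dc
print(mbll_concentrations(delta_od))  # recovers [0.8, -0.3]
```

    If the assumed `dpf` vector is wrong, the recovered concentrations mix the two chromophores; that is exactly the cross-talk error the paper's EKF is designed to reduce by estimating DPF changes continuously.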

  5. Non-ideal Solution Thermodynamics of Cytoplasm

    PubMed Central

    Ross-Rodriguez, Lisa U.; McGann, Locksley E.

    2012-01-01

    Quantitative description of the non-ideal solution thermodynamics of the cytoplasm of a living mammalian cell is critically necessary in mathematical modeling of cryobiology and desiccation and other fields where the passive osmotic response of a cell plays a role. In the solution thermodynamics osmotic virial equation, the quadratic correction to the linear ideal, dilute solution theory is described by the second osmotic virial coefficient. Herein we report, for the first time, intracellular solution second osmotic virial coefficients for four cell types [TF-1 hematopoietic stem cells, human umbilical vein endothelial cells (HUVEC), porcine hepatocytes, and porcine chondrocytes] and further report second osmotic virial coefficients indistinguishable from zero (for the concentration range studied) for human hepatocytes and mouse oocytes. PMID:23840923
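    The quadratic correction mentioned above can be written down directly; the coefficient value and units below are illustrative assumptions, not the paper's measurements:

```python
def osmolality(m, B2=0.0):
    """Two-term osmotic virial equation, pi = m + B2 * m**2
    (osmolality in osmoles/kg as a function of molality m).
    B2 = 0 recovers the linear ideal, dilute-solution limit; a nonzero
    second osmotic virial coefficient B2 supplies the quadratic
    non-ideality correction described in the abstract."""
    return m + B2 * m ** 2

ideal = osmolality(0.3)              # 0.3: ideal, dilute limit
nonideal = osmolality(0.3, B2=0.5)   # 0.345: non-ideal correction applied
```

    A cell type with a second osmotic virial coefficient indistinguishable from zero, as reported here for human hepatocytes and mouse oocytes, is simply one for which the linear term alone suffices over the studied concentration range.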

  6. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
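    A minimal numpy sketch of the Rubin-Thayer EM updates illustrates the claim; the data dimensions, synthetic model, and starting values are arbitrary assumptions, chosen only so that the sample covariance and initial parameter matrices are proper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 6, 2
Lam0 = rng.normal(size=(p, k))        # true loadings (synthetic)
psi0 = rng.uniform(0.5, 1.0, p)       # true unique variances
X = rng.normal(size=(n, k)) @ Lam0.T + rng.normal(size=(n, p)) * np.sqrt(psi0)
S = np.cov(X, rowvar=False)           # proper (positive definite) sample covariance

L = 0.1 * rng.normal(size=(p, k))     # proper initial parameters
Psi = np.ones(p)
for _ in range(200):                  # Rubin-Thayer EM iterations
    Sigma = L @ L.T + np.diag(Psi)
    B = L.T @ np.linalg.inv(Sigma)            # regression of factors on data
    Eff = np.eye(k) - B @ L + B @ S @ B.T     # E[f f'] under current parameters
    L = S @ B.T @ np.linalg.inv(Eff)          # M-step: loadings
    Psi = np.diag(S - L @ B @ S)              # M-step: unique variances

# Consistent with the paper's result: no Heywood case (negative or zero
# unique variance) arises when S and the starting values are proper.
print(np.all(Psi > 0))
```

    Gradient-based maximizers can step outside the proper region; the point of Adachi's result is that these EM updates cannot, given proper inputs.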

  7. Habit Breaking Appliance for Multiple Corrections

    PubMed Central

    Abraham, Reji; Kamath, Geetha; Sodhi, Jasmeet Singh; Sodhi, Sonia; Rita, Chandki; Sai Kalyan, S.

    2013-01-01

    Tongue thrusting and thumb sucking are the most commonly seen oral habits which act as the major etiological factors in the development of dental malocclusion. This case report describes a fixed habit correcting appliance, Hybrid Habit Correcting Appliance (HHCA), designed to eliminate these habits. This hybrid appliance is effective in less compliant patients and if desired can be used along with the fixed orthodontic appliance. Its components can act as mechanical restrainers and muscle retraining devices. It is also effective in cases with mild posterior crossbites. PMID:24198976

  8. Issues in Correctional Training and Casework. Correctional Monograph.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; Lawrenz, Pam, Ed.

    The eight papers contained in this monograph were drawn from two national meetings on correctional training and casework. Titles and authors are: "The Challenge of Professionalism in Correctional Training" (Michael J. Gilbert); "A New Perspective in Correctional Training" (Jack Lewis); "Reasonable Expectations in Correctional Officer Training:…

  9. Honorary Authorship Practices in Environmental Science Teams: Structural and Cultural Factors and Solutions.

    PubMed

    Elliott, Kevin C; Settles, Isis H; Montgomery, Georgina M; Brassel, Sheila T; Cheruvelil, Kendra Spence; Soranno, Patricia A

    2017-01-01

    Overinclusive authorship practices such as honorary or guest authorship have been widely reported, and they appear to be exacerbated by the rise of large interdisciplinary collaborations that make authorship decisions particularly complex. Although many studies have reported on the frequency of honorary authorship and potential solutions to it, few have probed how the underlying dynamics of large interdisciplinary teams contribute to the problem. This article reports on a qualitative study of the authorship standards and practices of six National Science Foundation-funded interdisciplinary environmental science teams. Using interviews of the lead principal investigator and an early-career member on each team, our study explores the nature of honorary authorship practices as well as some of the motivating factors that may contribute to these practices. These factors include both structural elements (policies and procedures) and cultural elements (values and norms) that cross organizational boundaries. Therefore, we provide recommendations that address the intersection of these factors and that can be applied at multiple organizational levels.

  10. On non-exponential cosmological solutions with two factor spaces of dimensions m and 1 in the Einstein-Gauss-Bonnet model with a Λ-term

    NASA Astrophysics Data System (ADS)

    Ernazarov, K. K.

    2017-12-01

    We consider a (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with the cosmological Λ-term. We restrict the metrics to be diagonal ones and find for certain Λ = Λ(m) class of cosmological solutions with non-exponential time dependence of two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of m-dimensional subspace and tends asymptotically to isotropic solution with exponential dependence of scale factors.

  11. Investigation of the chamber correction factor (k(ch)) for the UK secondary standard ionization chamber (NE2561/NE2611) using medium-energy x-rays.

    PubMed

    Rosser, K E

    1998-11-01

    This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water for medium-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB code of practice for the UK secondary standard (NE2561/NE2611) ionization chamber are derived and their constituent factors examined. The comparison of the chambers' responses in air revealed that of the chambers tested only the NE2561, NE2571 and NE2505 exhibit a flat (within 5%) energy response in air. Under no circumstances should the NACP, Sanders electron chamber, or any chamber that has a wall made of high atomic number material, be used for medium-energy x-ray dosimetry. The measurements in water reveal that a chamber that has a substantial housing, such as the PTW Grenz chamber, should not be used to measure absorbed dose to water in this energy range. The value of k(ch) for an NE2561 chamber was determined by measuring the absorbed dose to water and comparing it with that for an NE2571 chamber, for which k(ch) data have been published. The chamber correction factor varies from 1.023 +/- 0.03 to 1.018 +/- 0.001 for x-ray beams with HVL between 0.15 and 4 mm Cu. The values agree with that for an NE2571 chamber within the experimental uncertainty. The corrections due to the stem, waterproof sleeve and replacement of the phantom material by the chamber for an NE2561 chamber are described.

  12. A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene Marie

    1992-01-01

A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. The Lagrangian correction technique does not require any a-priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.

  13. Corrected formula for the polarization of second harmonic plasma emission

    NASA Technical Reports Server (NTRS)

    Melrose, D. B.; Dulk, G. A.; Gary, D. E.

    1980-01-01

    Corrections for the theory of polarization of second harmonic plasma emission are proposed. The nontransversality of the magnetoionic waves was not taken into account correctly and is here corrected. The corrected and uncorrected results are compared for two simple cases of parallel and isotropic distributions of Langmuir waves. It is found that whereas with the uncorrected formula plausible values of the coronal magnetic fields were obtained from the observed polarization of the second harmonic, the present results imply fields which are stronger by a factor of three to four.

  14. Mean ionic activity coefficients in aqueous NaCl solutions from molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mester, Zoltan; Panagiotopoulos, Athanassios Z., E-mail: azp@princeton.edu

The mean ionic activity coefficients of aqueous NaCl solutions of varying concentrations at 298.15 K and 1 bar have been obtained from molecular dynamics simulations by gradually turning on the interactions of an ion pair inserted into the solution. Several common non-polarizable water and ion models have been used in the simulations. Gibbs-Duhem equation calculations of the thermodynamic activity of water are used to confirm the thermodynamic consistency of the mean ionic activity coefficients. While the majority of model combinations predict the correct trends in mean ionic activity coefficients, they overestimate their values at high salt concentrations. The solubility predictions also suffer from inaccuracies, with all models underpredicting the experimental values, some by large factors. These results point to the need for further ion and water model development.
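    For a low-concentration sanity check on computed mean ionic activity coefficients, the Debye-Hückel limiting law is the standard reference. The sketch below is generic (not the paper's analysis), and the constant is the commonly quoted value for water at 298.15 K in the natural-log convention:

```python
import math

def gamma_pm_limiting(m, A=1.17):
    """Debye-Hueckel limiting law for a 1:1 electrolyte such as NaCl:
    ln(gamma+-) = -A * sqrt(I), with ionic strength I = m (mol/kg).
    Valid only in the dilute limit; at high salt concentrations real
    solutions (and the simulations in the abstract) deviate strongly."""
    return math.exp(-A * math.sqrt(m))

print(gamma_pm_limiting(0.01))  # roughly 0.89 at 0.01 mol/kg
```

    Any force-field combination should reproduce this limit at low molality; the disagreements reported in the abstract appear at high concentrations, where the limiting law itself no longer applies.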

  15. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
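    The core correction is a one-line rescaling by the reliability ratio. The sketch below uses synthetic data and a replicate-based estimate of that ratio; all variable names and numbers are illustrative, not taken from the article's software tools:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_x = rng.normal(0, 1, n)                 # true risk factor (unobserved)
y = 2.0 * true_x + rng.normal(0, 1, n)       # outcome: true slope is 2.0

sigma_e = 1.0                                # measurement-error SD (assumed)
x1 = true_x + rng.normal(0, sigma_e, n)      # main-study measurement
x2 = true_x + rng.normal(0, sigma_e, n)      # repeat from a reliability study

# Naive slope on the error-prone x1 is attenuated toward zero
naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)

# Reliability ratio lambda = var(true) / var(observed),
# estimated here from the covariance of the two replicates
lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
corrected = naive / lam

print(naive, corrected)  # naive near 1.0 (diluted), corrected near 2.0
```

    With equal true and error variances the reliability ratio is 0.5, so the naive slope carries only half the true association; dividing by the estimated ratio undoes the dilution.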

  16. High-throughput ab-initio dilute solute diffusion database

    PubMed Central

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308

  17. Improving Planck calibration by including frequency-dependent relativistic corrections

    NASA Astrophysics Data System (ADS)

    Quartin, Miguel; Notari, Alessio

    2015-09-01

The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect also has relativistic time-dependent corrections of relative magnitude 10⁻³, due to coupling with the "solar dipole" (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since current Planck calibration errors are dominated by systematics, to the point that polarization data are currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  18. MIL-HDBK-338: Environmental Conversion Table Correction

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Novack, Steven

    2017-01-01

In reliability analysis, especially for launch vehicles, limited data is frequently a problem, and component data from other environments must be used. MIL-HDBK-338 has a matrix showing the conversion factors between environments. Due to round-off, the conversions are not commutative: converting from A to B does not equal converting from B to A. Agenda: Introduction to environment conversions; Original table; Original table with edits; How big is the problem?; First attempt at correction; Proposed solution.
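    The non-commutativity complaint is easy to make concrete. The factors below are hypothetical round-offs, not the actual MIL-HDBK-338 table values; the check is simply whether a round trip multiplies back to 1:

```python
# Hypothetical environment-to-environment conversion factors (NOT the
# MIL-HDBK-338 values), rounded to one decimal the way a published
# table might be. "GB"/"GF" stand in for two environment codes.
factors = {("GB", "GF"): 2.7, ("GF", "GB"): 0.4}

def is_commutative(table, a, b, tol=1e-9):
    """A conversion table is commutative for the pair (a, b) if
    converting a -> b and then b -> a returns the original value,
    i.e. table[(a, b)] * table[(b, a)] == 1."""
    return abs(table[(a, b)] * table[(b, a)] - 1.0) < tol

print(is_commutative(factors, "GB", "GF"))  # False: 2.7 * 0.4 = 1.08, not 1
```

    A "proposed solution" in this spirit is to publish one factor per unordered pair and define the reverse as its exact reciprocal, so round trips cancel by construction.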

  19. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2 - 5.0 microns) and longwave (LW; 5 - 50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  20. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

This paper considers defect-correction solvers for a second-order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in the defect-correction iterations have different approximation orders, then the initial convergence rates may be very slow; the number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second-order target operator and a first-order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver reaches the asymptotic convergence rate after at most three iterations. The same three iterations are required to converge the algebraic error below the truncation-error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new, very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
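    The defect-correction iteration itself is compact: repeatedly solve with the cheap low-order driver operator against the residual of the high-order target. The 1D sketch below (grid size, test function, and iteration count are arbitrary choices, and it is 1D rather than the paper's 2D setting) converges to the second-order upwind-biased solution:

```python
import numpy as np

def upwind_ops(n, h):
    """First-order (driver) and second-order upwind-biased (target)
    operators for du/dx on a uniform grid with Dirichlet inflow u[0]."""
    A1 = np.zeros((n, n)); A2 = np.zeros((n, n))
    for i in range(1, n):
        A1[i, i], A1[i, i - 1] = 1 / h, -1 / h
        if i >= 2:
            A2[i, i], A2[i, i - 1], A2[i, i - 2] = 1.5 / h, -2 / h, 0.5 / h
        else:  # first interior point: fall back to first order
            A2[i, i], A2[i, i - 1] = 1 / h, -1 / h
    A1[0, 0] = A2[0, 0] = 1.0      # inflow boundary row
    return A1, A2

n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
b = np.cos(x); b[0] = 0.0          # du/dx = cos(x), u(0) = 0, exact u = sin(x)

A1, A2 = upwind_ops(n, h)
u = np.zeros(n)
for _ in range(200):               # defect-correction iterations:
    u = u + np.linalg.solve(A1, b - A2 @ u)   # u <- u + A1^{-1}(b - A2 u)

print(np.max(np.abs(A2 @ u - b)))    # algebraic residual of the target scheme
print(np.max(np.abs(u - np.sin(x)))) # discretization error, O(h^2)
```

    Only the first-order operator is ever inverted; the second-order operator enters purely through residual evaluations, which is the practical appeal of defect correction.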

  1. Closure Report for Corrective Action Unit 340: NTS Pesticide Release Sites Nevada Test Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. M. Obi

The purpose of this report is to provide documentation of the completed corrective action and to provide data confirming the corrective action. The corrective action was performed in accordance with the approved Corrective Action Plan (CAP) (U.S. Department of Energy [DOE], 1999) and consisted of clean closure by excavation and disposal. The Area 15 Quonset Hut 15-11 was formerly used for storage of farm supplies including pesticides, herbicides, and fertilizers. The Area 23 Quonset Hut 800 was formerly used to clean pesticide and herbicide equipment. Steam-cleaning rinsate and sink drainage occasionally overflowed a sump into adjoining drainage ditches. One ditch flows south and is referred to as the quonset hut ditch. The other ditch flows southeast and is referred to as the inner drainage ditch. The Area 23 Skid Huts were formerly used for storing and mixing pesticide and herbicide solutions. Excess solutions were released directly to the ground near the skid huts. The skid huts were moved to a nearby location prior to the site characterization performed in 1998 and reported in the Corrective Action Decision Document (CADD) (DOE, 1998). The vicinity and site plans of the Area 23 sites are shown in Figures 2 and 3, respectively.

  2. Microinjection of human cell extracts corrects xeroderma pigmentosum defect.

    PubMed Central

    de Jonge, A J; Vermeulen, W; Klein, B; Hoeijmakers, J H

    1983-01-01

Cultured fibroblasts of patients with the DNA repair syndrome xeroderma pigmentosum (XP) were injected with crude cell extracts from various human cells. Injected fibroblasts were then assayed for unscheduled DNA synthesis (UDS) to see whether the injected extract could complement their deficiency in the removal of u.v.-induced thymidine dimers from their DNA. Microinjection of extracts from repair-proficient cells (such as HeLa, placenta) and from cells belonging to XP complementation group C resulted in a temporary correction of the DNA repair defect in XP-A cells but not in cells from complementation groups C, D or F. Extracts prepared from XP-A cells were unable to correct the XP-A repair defect. The UDS of phenotypically corrected XP-A cells is u.v.-specific and can reach the level of normal cells. The XP-A correcting factor was found to be sensitive to the action of proteinase K, suggesting that it is a protein. It is present in normal cells in high amounts, it is stable on storage and can still be detected in the injected cells 8 h after injection. The microinjection assay described in this paper provides a useful tool for the purification of the XP-A (and possibly other) factor(s) involved in DNA repair. PMID:6357782

  3. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

During Raman spectroscopy analysis, organic molecules and contaminations will obscure or swamp Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, which are acquired from the BWTek i-Raman spectrometer. The background is corrected by the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. Through analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked and the influence of fluorescence background on Raman spectral clustering analysis is discussed. It is concluded that it is important to correct the fluorescence background for further analysis, and an effective background-correction solution is provided for clustering and other analyses.

  4. Critical Factors in Mental Health Programming for Juveniles in Corrections Facilities

    ERIC Educational Resources Information Center

    Underwood, Lee A.; Phillips, Annie; von Dresner, Kara; Knight, Pamela D.

    2006-01-01

    Juveniles with mental health and other specialized needs are overrepresented in the juvenile justice system, and while juvenile corrections have not historically provided standardized and evidence-based mental health services for its incarcerated youth, the demand is evident. The reality is that juveniles with serious mental illness are committed…

  5. Black hole solution in the framework of arctan-electrodynamics

    NASA Astrophysics Data System (ADS)

    Kruglov, S. I.

An arctan-electrodynamics coupled with the gravitational field is investigated. We obtain a regular black hole solution that at r → ∞ gives corrections to the Reissner-Nordström solution. The corrections to Coulomb's law at r → ∞ are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. The magnetically charged black hole is investigated, and we obtain the magnetic mass of the black hole and the metric function at r → ∞. A regular black hole solution with a de Sitter core is obtained at r → 0. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirement of the absence of a superluminal sound speed and the requirement of classical stability.

  6. Brane Inflation, Solitons and Cosmological Solutions: I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, P.

    2005-01-25

In this paper we study various cosmological solutions for a D3/D7 system directly from M-theory with fluxes and M2-branes. In M-theory, these solutions exist only if we incorporate higher derivative corrections from the curvatures as well as G-fluxes. We take these corrections into account and study a number of toy cosmologies, including one with a novel background for the D3/D7 system whose supergravity solution can be completely determined. Our new background preserves all the good properties of the original model and opens up avenues to investigate cosmological effects from wrapped branes and brane-antibrane annihilation, to name a few. We also discuss in some detail semilocal defects with higher global symmetries, for example exceptional ones, that occur in a slightly different regime of our D3/D7 model. We show that the D3/D7 system does have the required ingredients to realize these configurations as non-topological solitons of the theory. These constructions also allow us to give a physical meaning to the existence of certain underlying homogeneous quaternionic Kähler manifolds.

  7. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    NASA Astrophysics Data System (ADS)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

    Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ∼20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.

  8. Factor solutions of the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) in a Swedish population.

    PubMed

    Mörtberg, Ewa; Reuterskiöld, Lena; Tillfors, Maria; Furmark, Tomas; Öst, Lars-Göran

    2017-06-01

    Culturally validated rating scales for social anxiety disorder (SAD) are of significant importance when screening for the disorder, as well as for evaluating treatment efficacy. This study examined construct validity and additional psychometric properties of two commonly used scales, the Social Phobia Scale and the Social Interaction Anxiety Scale, in a clinical SAD population (n = 180) and in a normal population (n = 614) in Sweden. Confirmatory factor analyses of previously reported factor solutions were tested but did not reveal acceptable fit. Exploratory factor analyses (EFA) of the joint structure of the scales in the total population yielded a two-factor model (performance anxiety and social interaction anxiety), whereas EFA in the clinical sample revealed a three-factor solution: a social interaction anxiety factor and two performance anxiety factors. The SPS and SIAS showed good to excellent internal consistency, and discriminated well between patients with SAD and a normal population sample. Both scales showed good convergent validity with an established measure of SAD, whereas the discriminant validity of symptoms of social anxiety and depression could not be confirmed. The optimal cut-off scores for the SPS and SIAS were 18 and 22 points, respectively. It is concluded that the factor structure and the additional psychometric properties of the SPS and SIAS support the use of the scales for assessment in a Swedish population.

  9. Systematic uncertainties in the Monte Carlo calculation of ion chamber replacement correction factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O.

    In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl or p_cav p_dis) for ion chambers by Monte Carlo calculation. By "direct" we meant that a stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a 60Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values upon the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes a cavity size in the Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty on the calculated P_repl values. For phantom materials of similar effective atomic number to air, such as water and graphite, this systematic uncertainty is at most 0.2% for most commonly used chambers in either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need to do a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.

  10. Miscibility as a factor for component crystallization in multisolute frozen solutions.

    PubMed

    Izutsu, Ken-Ichi; Shibata, Hiroko; Yoshida, Hiroyuki; Goda, Yukihiro

    2014-07-01

    The relationship between the miscibility of formulation ingredients and their crystallization during the freezing segment of the lyophilization process was studied. The thermal properties of frozen solutions containing myo-inositol and cosolutes were obtained by performing heating scans from -70 °C before and after heat treatment at -20 °C to -5 °C. Addition of dextran 40,000 reduced and prevented crystallization of myo-inositol. In the first scan, some frozen solutions containing an inositol-rich mixture with dextran showed single broad transitions (Tg's: transition temperatures of maximally freeze-concentrated solutes) that indicated incomplete mixing of the concentrated amorphous solutes. Heat treatment of these frozen solutions induced separation of the solutes into inositol-dominant and solute mixture phases (Tg' splitting) following crystallization of myo-inositol (Tg' shifting). The crystal growth involved myo-inositol molecules in the solute mixture phase. The amorphous-amorphous phase separation and resulting loss of the heteromolecular interaction in the freeze-concentrated inositol-dominant phase should allow ordered assembly of the solute molecules required for nucleation. Some dextran-rich and intermediate concentration ratio frozen solutions retained single Tg's of the amorphous solute mixture, both before and after heat treatments. The relevance of solute miscibility on the crystallization of myo-inositol was also indicated in the systems containing glucose or recombinant human albumin. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  11. Scientific author names: errors, corrections, and identity profiles.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Gerasimov, Alexey N; Kostyukova, Elena I; Kitas, George D

    2016-01-01

    Authorship problems are deep-rooted in the field of science communication. Some of these relate to lack of specific journal instructions. For decades, experts in journal editing and publishing have been exploring the authorship criteria and contributions deserving either co-authorship or acknowledgment. The issue of inconsistencies of listing and abbreviating author names has come to the fore lately. There are reports on the difficulties of figuring out Chinese surnames and given names of South Indians in scholarly articles. However, it seems that problems with correct listing and abbreviating author names are global. This article presents an example of swapping second (father's) name with surname in a 'predatory' journal, where numerous instances of incorrectly identifying and crediting authors passed unnoticed for the journal editors, and no correction has been published. Possible solutions are discussed in relation to identifying author profiles and adjusting editorial policies to the emerging problems. Correcting mistakes with author names post-publication and integrating with the Open Researcher and Contributor ID (ORCID) platform are among them.

  12. Scientific author names: errors, corrections, and identity profiles

    PubMed Central

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Gerasimov, Alexey N.; Kostyukova, Elena I.; Kitas, George D.

    2016-01-01

    Authorship problems are deep-rooted in the field of science communication. Some of these relate to lack of specific journal instructions. For decades, experts in journal editing and publishing have been exploring the authorship criteria and contributions deserving either co-authorship or acknowledgment. The issue of inconsistencies of listing and abbreviating author names has come to the fore lately. There are reports on the difficulties of figuring out Chinese surnames and given names of South Indians in scholarly articles. However, it seems that problems with correct listing and abbreviating author names are global. This article presents an example of swapping second (father’s) name with surname in a ‘predatory’ journal, where numerous instances of incorrectly identifying and crediting authors passed unnoticed for the journal editors, and no correction has been published. Possible solutions are discussed in relation to identifying author profiles and adjusting editorial policies to the emerging problems. Correcting mistakes with author names post-publication and integrating with the Open Researcher and Contributor ID (ORCID) platform are among them. PMID:27346960

  13. SU-E-T-98: An Analysis of TG-51 Electron Beam Calibration Correction Factor Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, P; Alvarez, P; Taylor, P

    Purpose: To analyze the uncertainty of the TG-51 electron beam calibration correction factors for Farmer-type ion chambers currently used by institutions visited by IROC Houston. Methods: TG-51 calibration data were collected from 181 institutions visited by IROC Houston physicists for 1174 and 197 distinct electron beams from modern Varian and Elekta accelerators, respectively. Data collected and analyzed included ion chamber make and model, nominal energy, N_D,w, I_50, R_50, k'_R50, d_ref, P_gr and pdd(d_ref). k'_R50 data for parallel-plate chambers were excluded from the analysis. Results: Unlike photon beams, electron nominal energy is a poor indicator of the actual energy, as evidenced by the range of R_50 values for each electron beam energy (6-22 MeV). The large range in R_50 values resulted in k'_R50 values with a small standard deviation but a large range between the maximum and minimum values used (0.001-0.029) for a specific Varian nominal energy. Varian data showed more variability in k'_R50 values than the Elekta data (0.001-0.014). Using the observed range of R_50 values, the maximum spread in k'_R50 values was determined by IROC Houston and compared to the spread of k'_R50 values used in the community. For Elekta linacs the spreads were equivalent, but for Varian energies of 6 to 16 MeV, the community spread was 2 to 6 times larger. Community P_gr values had a much larger range of values for 6 and 9 MeV than predicted. The range in Varian pdd(d_ref) values used by the community for low energies was large (1.4-4.9 percent), when it should have been very close to unity. Exradin, PTW Roos and PTW Farmer chambers' N_D,w values showed the largest spread, ≥11 percent. Conclusion: While the vast majority of electron beam calibration correction factors used are accurate, there is a surprising spread in some of the values used.

  14. Analyses of factors of crash avoidance maneuvers using the general estimates system.

    PubMed

    Yan, Xuedong; Harb, Rami; Radwan, Essam

    2008-06-01

    Taking an effective corrective action in a critical traffic situation gives drivers an opportunity to avoid crash occurrence and minimize crash severity. The objective of this study is to investigate the relationship between the probability of taking corrective actions and the characteristics of drivers, vehicles, and driving environments. Using the 2004 GES crash database, this study classified drivers who encountered critical traffic events (identified as P_CRASH3 in the GES database) into two pre-crash groups: a corrective avoidance actions group and a no corrective avoidance actions group. Single and multiple logistic regression analyses were performed to identify potential traffic factors associated with the probability of drivers taking corrective actions. The regression results showed that the driver/vehicle factors associated with the probability of taking corrective actions include: driver age, gender, alcohol use, drug use, physical impairments, distraction, sight obstruction, and vehicle type. In particular, older drivers, female drivers, drug/alcohol use, physical impairment, distraction, or poor visibility may increase the probability of failing to attempt to avoid crashes. Moreover, drivers of larger vehicles are 42.5% more likely to take corrective avoidance actions than passenger car drivers. On the other hand, the significant environmental factors correlated with drivers' crash avoidance maneuvers include: highway type, number of lanes, divided/undivided highway, speed limit, highway alignment, highway profile, weather condition, and surface condition. Some adverse highway environmental factors, such as horizontal curves, vertical curves, worse weather conditions, and slippery road surface conditions, are correlated with a higher probability of crash avoidance maneuvers. These results may seem counterintuitive, but they can be explained by the fact that motorists may be more likely to drive cautiously in those adverse driving environments.
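The single and multiple logistic regression analyses above model the probability of taking a corrective action as a function of driver, vehicle, and environment covariates. A minimal sketch of that kind of fit on synthetic data, with a single hypothetical large-vehicle indicator whose effect size is loosely inspired by the odds reported above (none of the numbers below come from the GES data):

```python
import numpy as np

# Minimal logistic-regression sketch (gradient ascent on the average
# log-likelihood) for a binary "took corrective action" outcome.
# The data are synthetic; the predictor is a hypothetical
# large-vehicle indicator with a true odds ratio of about 1.425.

rng = np.random.default_rng(0)
n = 2000
large_vehicle = rng.integers(0, 2, size=n)            # hypothetical covariate
true_logit = -0.5 + np.log(1.425) * large_vehicle
took_action = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

X = np.column_stack([np.ones(n), large_vehicle])      # intercept + predictor
beta = np.zeros(2)
for _ in range(2000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))              # predicted probabilities
    beta += X.T @ (took_action - mu) / n              # score (gradient) step

odds_ratio = np.exp(beta[1])                          # e.g. ~1.4 means "about 40% higher odds"
print(round(float(odds_ratio), 2))
```

The fitted coefficient is read off as an odds ratio, which is how effects such as the 42.5% figure above are typically reported.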

  15. Improving Planck calibration by including frequency-dependent relativistic corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quartin, Miguel; Notari, Alessio, E-mail: mquartin@if.ufrj.br, E-mail: notari@ffn.ub.es

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the 'orbital dipole', which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. This effect also has relativistic time-dependent corrections of relative magnitude 10^-3, due to coupling with the 'solar dipole' (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since Planck calibration errors are currently dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  16. Interactive surface correction for 3D shape based segmentation

    NASA Astrophysics Data System (ADS)

    Schwarz, Tobias; Heimann, Tobias; Tetzlaff, Ralf; Rau, Anne-Mareike; Wolf, Ivo; Meinzer, Hans-Peter

    2008-03-01

    Statistical shape models have become a fast and robust method for segmentation of anatomical structures in medical image volumes. In clinical practice, however, pathological cases and image artifacts can lead to local deviations of the detected contour from the true object boundary. These deviations have to be corrected manually. We present an intuitively applicable solution for surface interaction based on Gaussian deformation kernels. The method is evaluated by two radiological experts on segmentations of the liver in contrast-enhanced CT images and of the left heart ventricle (LV) in MRI data. For both applications, five datasets are segmented automatically using deformable shape models, and the resulting surfaces are corrected manually. The interactive correction step improves the average surface distance against ground truth from 2.43 mm to 2.17 mm for the liver, and from 2.71 mm to 1.34 mm for the LV. We expect this method to raise the acceptance of automatic segmentation methods in clinical application.

  17. Air kerma calibration factors and chamber correction values for PTW soft x-ray, NACP and Roos ionization chambers at very low x-ray energies.

    PubMed

    Ipe, N E; Rosser, K E; Moretti, C J; Manning, J W; Palmer, M J

    2001-08-01

    This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water using very low-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB 1996 code of practice for the UK secondary standard ionization chambers (PTW type M23342 and PTW type M23344), the Roos (PTW type 34001) and NACP electron chambers are derived. The responses in air of the small and large soft x-ray chambers (PTW type M23342 and PTW type M23344) and the NACP and Roos electron ionization chambers were compared. Besides the soft x-ray chambers, the NACP and Roos chambers can be used for very low-energy x-ray dosimetry provided that they are used in the restricted energy range for which their response does not change by more than 5%. The chamber correction factor was found by comparing the absorbed dose to water determined using the dosimetry protocol recommended for low-energy x-rays with that for very low-energy x-rays. The overlap energy range was extended using data from Grosswendt and Knight. Chamber correction factors given in this paper are chamber dependent, varying from 1.037 to 1.066 for a PTW type M23344 chamber, which is very different from a value of unity given in the IPEMB code. However, the values of k(ch) determined in this paper agree with those given in the DIN standard within experimental uncertainty. The authors recommend that the very low-energy section of the IPEMB code is amended to include the most up-to-date values of k(ch).

  18. Corrections to the thin wall approximation in general relativity

    NASA Technical Reports Server (NTRS)

    Garfinkle, David; Gregory, Ruth

    1989-01-01

    The question is considered whether the thin wall formalism of Israel applies to the gravitating domain walls of a λφ^4 theory. The coupled Einstein-scalar equations that describe the thick gravitating wall are expanded in powers of the thickness of the wall. The solutions of the zeroth order equations reproduce the results of the usual Israel thin wall approximation for domain walls. The solutions of the first order equations provide corrections to the expressions for the stress-energy of the wall and to the Israel thin wall equations. The modified thin wall equations are then used to treat the motion of spherical and planar domain walls.

  19. Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain

    ERIC Educational Resources Information Center

    Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.

    2012-01-01

    In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…

  20. The biology of distraction osteogenesis for correction of mandibular and craniomaxillofacial defects: A review

    PubMed Central

    Natu, Subodh Shankar; Ali, Iqbal; Alam, Sarwar; Giri, Kolli Yada; Agarwal, Anshita; Kulkarni, Vrishali Ajit

    2014-01-01

    Limb lengthening by distraction osteogenesis was first described in 1905. The technique did not gain wide acceptance until Gavril Ilizarov identified the physiologic and mechanical factors governing successful regeneration of bone formation. Distraction osteogenesis is a new variation of more traditional orthognathic surgical procedure for the correction of dentofacial deformities. It is most commonly used for the correction of more severe deformities and syndromes of both the maxilla and the mandible and can also be used in children at ages previously untreatable. The basic technique includes surgical fracture of deformed bone, insertion of device, 5-7 days rest, and gradual separation of bony segments by subsequent activation at the rate of 1 mm per day, followed by an 8-12 weeks consolidation phase. This allows surgeons, the lengthening and reshaping of deformed bone. The aim of this paper is to review the principle, technical considerations, applications and limitations of distraction osteogenesis. The application of osteodistraction offers novel solutions for surgical-orthodontic management of developmental anomalies of the craniofacial skeleton as bone may be molded into different shapes along with the soft tissue component gradually thereby resulting in less relapse. PMID:24688555

  1. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which was developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
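The tokenization, document-modelling and similarity-evaluation pipeline borrowed from information retrieval can be sketched for equation strings as follows; the tokenizer and the Jaccard-style multiset score below are illustrative stand-ins, not the paper's exact SCCS definitions:

```python
from collections import Counter
import re

# Sketch of multiset-based structural comparison of two equation strings:
# tokenize each equation, model it as a multiset (Counter) of tokens,
# then score overlap. Both the tokenizer and the scoring formula are
# illustrative, not the SCCS technique's exact definitions.

def tokenize(eq):
    return re.findall(r"[A-Za-z]+|\d+|[+\-*/=()]", eq.replace(" ", ""))

def multiset_similarity(a, b):
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    inter = sum((ca & cb).values())   # multiset intersection size
    union = sum((ca | cb).values())   # multiset union size
    return inter / union if union else 1.0

print(round(multiset_similarity("2x + 3 = 7", "2x = 7 - 3"), 3))  # 5/7, i.e. 0.714
```

Identical equations score 1.0, while structurally different but related steps score fractionally, which is what enables quantitative rather than purely pass/fail feedback per step.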

  2. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    NASA Astrophysics Data System (ADS)

    Collell, Julien; Galliero, Guillaume

    2014-05-01

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires to run several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. ["Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects," Mol. Phys. 110, 1069-1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on finite size effects of small systems on the density fluctuations. We present here an extension of their work for inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows to determine the thermodynamic factor in porous medium. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor compared to its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore width above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.
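For context, in a bulk binary mixture the thermodynamic correction factor that links the Fick and Maxwell-Stefan diffusivities (D_Fick = Γ · Đ_MS) reduces to Γ = 1 + x1 ∂ln γ1/∂x1. A minimal sketch for a one-parameter Margules activity model, ln γ1 = A·x2², for which Γ = 1 − 2A·x1·x2 follows analytically (A is a hypothetical parameter; the paper's contribution is extending such estimates to confined, inhomogeneous fluids):

```python
# Binary thermodynamic correction factor Gamma = 1 + x1 * dln(gamma1)/dx1
# for the one-parameter Margules model ln(gamma1) = A * x2**2, which gives
# Gamma = 1 - 2*A*x1*x2 analytically. A bulk-phase illustration of the
# quantity the study evaluates for confined fluids; A is hypothetical.

def thermodynamic_factor(x1, A):
    """Gamma for mole fraction x1 and Margules parameter A."""
    x2 = 1.0 - x1
    return 1.0 - 2.0 * A * x1 * x2

print(thermodynamic_factor(0.5, 0.0))  # ideal mixture: Gamma = 1.0
print(thermodynamic_factor(0.5, 1.0))  # non-ideality reduces Gamma to 0.5
```

Γ = 1 recovers the ideal case where Fick and Maxwell-Stefan coefficients coincide; deviations from unity are exactly what the density-fluctuation approach above extracts from a single simulation.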

  3. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collell, Julien; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr

    2014-05-21

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires to run several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. [“Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects,” Mol. Phys. 110, 1069–1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on finite size effects of small systems on the density fluctuations. We present here an extension of their work for inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows to determine the thermodynamic factor in porous medium. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor compared to its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore width above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.

  4. Second-order numerical methods for multi-term fractional differential equations: Smooth and non-smooth solutions

    NASA Astrophysics Data System (ADS)

    Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em

    2017-12-01

    Starting with the asymptotic expansion of the error equation of the shifted Grünwald–Letnikov formula, we derive a new modified weighted shifted Grünwald–Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
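The Grünwald–Letnikov formula that the WSGL scheme corrects is built from the weights w_k = (−1)^k C(α, k), which satisfy the recurrence w_0 = 1, w_k = w_{k−1}(1 − (α+1)/k). A short sketch of that standard recurrence (the WSGL correction terms themselves are not reproduced here):

```python
# Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), generated by the
# standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
# These weights underlie the shifted GL formula that the WSGL scheme
# modifies; the correction terms of the paper are not shown here.

def gl_weights(alpha, n):
    """Return [w_0, ..., w_n] for fractional order alpha."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

# alpha = 1 recovers the classical first-difference stencil 1, -1, 0, ...
print(gl_weights(1.0, 3))
```

For non-integer α the weights decay slowly, which is where the low-order behaviour near t = 0 (and hence the need for correction terms for non-smooth solutions) comes from.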

  5. Stable exponential cosmological solutions with 3- and l-dimensional factor spaces in the Einstein-Gauss-Bonnet model with a Λ-term

    NASA Astrophysics Data System (ADS)

    Ivashchuk, V. D.; Kobtsev, A. A.

    2018-02-01

    A D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ is studied. We assume the metrics to be diagonal cosmological ones. For certain fine-tuned Λ, we find a class of solutions with exponential time dependence of two scale factors, governed by two Hubble-like parameters H > 0 and h, corresponding to factor spaces of dimensions 3 and l > 2, respectively, and D = 1 + 3 + l. The fine-tuned Λ = Λ(x, l, α) depends upon the ratio h/H = x, the dimension l, and the ratio α = α_2/α_1 of two constants (α_2 and α_1) of the model. For fixed Λ, α and l > 2, the equation Λ(x, l, α) = Λ is equivalent to a polynomial equation of either fourth or third order and may be solved in radicals (the example l = 3 is presented). For certain restrictions on x we prove the stability of the solutions in a class of cosmological solutions with diagonal metrics. A subclass of solutions with small enough variation of the effective gravitational constant G is considered. It is shown that all solutions from this subclass are stable.

  6. Estimating changes in mean body temperature for humans during exercise using core and skin temperatures is inaccurate even with a correction factor.

    PubMed

    Jay, Ollie; Reardon, Francis D; Webb, Paul; Ducharme, Michel B; Ramsay, Tim; Nettlefold, Lindsay; Kenny, Glen P

    2007-08-01

    Changes in mean body temperature (ΔT_b) estimated by the traditional two-compartment model of "core" and "shell" temperatures and an adjusted two-compartment model incorporating a correction factor were compared with values derived by whole body calorimetry. Sixty participants (31 men, 29 women) cycled at 40% of peak O2 consumption for 60 or 90 min in the Snellen calorimeter at 24 or 30 °C. The core compartment was represented by esophageal, rectal (T_re), and aural canal temperature, and the shell compartment was represented by a 12-point mean skin temperature (T_sk). Using T_re and conventional core-to-shell weightings (X) of 0.66, 0.79, and 0.90, the mean ΔT_b estimation error (with 95% confidence interval limits in parentheses) for the traditional model was -95.2% (-83.0, -107.3) to -76.6% (-72.8, -80.5) after 10 min and -47.2% (-40.9, -53.5) to -22.6% (-14.5, -30.7) after 90 min. Using T_re, X = 0.80, and a correction factor (X_0) of 0.40, the mean ΔT_b estimation error for the adjusted model was +9.5% (+16.9, +2.1) to -0.3% (+11.9, -12.5) after 10 min and +15.0% (+27.2, +2.8) to -13.7% (-4.2, -23.3) after 90 min. Quadratic analyses of the calorimetry ΔT_b data were subsequently used to derive best-fitting values of X for both models and of X_0 for the adjusted model for each measure of core temperature. The most accurate model at any time point or condition only accounted for 20% of the variation observed in ΔT_b for the traditional model and 56% for the adjusted model. In conclusion, throughout exercise the estimation of ΔT_b using any measure of core temperature together with mean skin temperature, irrespective of weighting, is inaccurate even with a correction factor customized for the specific conditions.
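The traditional two-compartment estimator referenced above is the weighted sum ΔT_b = X·ΔT_core + (1 − X)·ΔT_sk. A minimal sketch of that baseline model (the adjusted model's correction factor X_0 enters in a form specified in the paper and is not reproduced here):

```python
# Traditional two-compartment estimate of the change in mean body
# temperature: dTb = X * dT_core + (1 - X) * dT_skin, with the
# core-to-shell weighting X (0.66, 0.79 or 0.90 above). Input values
# below are illustrative, not data from the study.

def delta_mean_body_temp(d_core, d_skin, weighting=0.79):
    """Weighted change in mean body temperature (degrees C)."""
    return weighting * d_core + (1.0 - weighting) * d_skin

print(delta_mean_body_temp(0.8, 1.2))  # 0.79*0.8 + 0.21*1.2 = 0.884
```

Because the weighting compresses the skin contribution, any heat stored without a proportional core temperature rise is missed, which is the mechanism behind the large early-exercise errors reported above.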

  7. Selecting the Correct Solution to a Physics Problem when Given Several Possibilities

    ERIC Educational Resources Information Center

    Richards, Evan Thomas

    2010-01-01

    Despite decades of research on what learning actions are associated with effective learners (Palincsar and Brown, 1984; Atkinson, et al., 2000), the literature has not fully addressed how to cue those actions (particularly within the realm of physics). Recent reforms that integrate incorrect solutions suggest a possible avenue to reach those…

  8. SU-C-304-06: Determination of Intermediate Correction Factors for Three Dosimeters in Small Composite Photon Fields Used in Robotic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, E; Belec, J; Vandervoort, E

    2015-06-15

    Purpose: To calculate using Monte-Carlo the intermediate and total correction factors (CFs) for two microchambers and a plastic scintillator for composite fields delivered by the CyberKnife system. Methods: A linac model was created in BEAMnrc by matching percentage depth dose (PDD) curves and output factors (OFs) measured using an A16 microchamber with Monte Carlo calculations performed in egs-chamber to explicitly model detector response. Intermediate CFs were determined for the A16 and A26 microchambers and the W1 plastic scintillator in fourteen different composite fields inside a solid water phantom. Seven of these fields used a 5 mm diameter collimator; the remaining fields employed a 7.5 mm collimator but were otherwise identical to the first seven. Intermediate CFs are reported relative to the respective CF for a 60 mm collimator (800 mm source to detector distance and 100 mm depth in water). Results: For microchambers in composite fields, the intermediate CFs that account for detector density and volume were the largest contributors to total CFs. The total CFs for the A26 were larger than those for the A16, especially for the 5 mm cone (1.227±0.003 to 1.144±0.004 versus 1.142±0.003 to 1.099±0.004), due to the A26’s larger active volume (0.015 cc) relative to the A16 (0.007 cc), despite the A26 using similar wall and electrode material. The W1 total and intermediate CFs are closer to unity, due to its smaller active volume and near water-equivalent composition, however, 3–4% detector volume corrections are required for 5 mm collimator fields. In fields using the 7.5 mm collimator, the correction is nearly eliminated for the W1 except for a non-isocentric field. Conclusion: Large and variable CFs are required for microchambers in small composite fields primarily due to density and volume effects. Corrections are reduced but not eliminated for a plastic scintillator in the same fields.

  9. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
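    The percentile interval discussed here can be illustrated with a minimal sketch (a generic case-resampling bootstrap for a correlation coefficient, not the article's OLS factor-analysis procedure; the data and settings are invented):

    ```python
    import numpy as np

    def percentile_bootstrap_ci(x, y, stat, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for a bivariate statistic (e.g. a correlation)."""
        rng = np.random.default_rng(seed)
        n = len(x)
        boots = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)   # resample cases with replacement
            boots[b] = stat(x[idx], y[idx])
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    # usage: 95% CI for a Pearson correlation on synthetic correlated data
    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = 0.6 * x + rng.normal(scale=0.8, size=200)
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    lo, hi = percentile_bootstrap_ci(x, y, r)
    print(round(lo, 2), round(hi, 2))  # interval brackets the sample correlation
    ```

    Bias-corrected and accelerated (BCa) intervals adjust these percentiles for bias and skew in the bootstrap distribution, which is where the coverage comparisons in the article come in.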

  10. Characterizing the Solid-Solution Coefficient and Plant Uptake Factor of As, Cd and Pb in California Croplands

    USDA-ARS?s Scientific Manuscript database

    In risk assessment models, the solid-solution partition coefficient (Kd), and plant uptake factor (PUF), are often employed to model the fate and transport of trace elements in soils. The trustworthiness of risk assessments depends on the reliability of the parameters used. In this study, we exami...
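    Both parameters are simple concentration ratios; a minimal sketch (variable names and numbers are illustrative, not from the study):

    ```python
    def kd(c_solid_mg_per_kg, c_solution_mg_per_l):
        """Solid-solution partition coefficient Kd (L/kg):
        trace-element concentration sorbed to the solid phase
        divided by its concentration in the soil solution."""
        return c_solid_mg_per_kg / c_solution_mg_per_l

    def puf(c_plant_mg_per_kg, c_soil_mg_per_kg):
        """Plant uptake factor (dimensionless, dry-weight basis):
        concentration in plant tissue divided by concentration in soil."""
        return c_plant_mg_per_kg / c_soil_mg_per_kg

    # illustrative numbers for a trace element in a cropland soil
    print(kd(0.35, 0.002))   # 175.0 L/kg
    print(puf(0.08, 0.35))   # ~0.23
    ```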

  11. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images, and both hardware and software solutions exist to overcome it. In those solutions the fill factor is assumed to be known; however, most image sensor manufacturers treat it as an industrial secret because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor from a single arbitrary image with notable stability and accuracy, as indicated by the low standard deviation of the estimated fill factors across images and cameras. PMID:28335459

  12. Control of Dual-Opposed Stirling Convertors with Active Power Factor Correction Controllers

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.; Schreiber, Jeffrey G.

    2007-01-01

    When using recently-developed active power factor correction (APFC) controllers in power systems comprised of dual-opposed free-piston Stirling convertors, a variety of configurations of the convertors and controller(s) can be considered, with configuration ultimately selected based on benefits of efficiency, reliability, and robust operation. The configuration must not only achieve stable control of the two convertors, but also synchronize and regulate motion of the pistons to minimize net dynamic forces. The NASA Glenn Research Center (GRC) System Dynamic Model (SDM) was used to study ten configurations of dual-opposed convertor systems. These configurations considered one controller with the alternators connected in series or in parallel, and two controllers with the alternators not connected (isolated). For the configurations where the alternators were not connected, several different approaches were evaluated to synchronize the two convertors. In addition, two thermodynamic configurations were considered: two convertors with isolated working spaces and convertors with a shared expansion space. Of the ten configurations studied, stable operating modes were found for four. Three of those four had a common expansion space. One stable configuration was found for the dual-opposed convertors with separate working spaces. That configuration required isochronous control of both convertors, and two APFC controllers were used to accomplish this. A frequency/phase control loop was necessary to allow each APFC controller to synchronize its associated convertor with a common frequency.

  14. Small refractive errors--their correction and practical importance.

    PubMed

    Skrbek, Matej; Petrová, Sylvie

    2013-04-01

    Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by increased accommodative effort and are not manifested as a loss of visual acuity. This paper addresses several questions about their correction, following from theoretical presumptions and expectations concerning this dilemma. The main goal of this research was to (dis)confirm the hypothesis about the convenience, efficiency and frequency of a correction that does not raise the visual acuity (or whose improvement isn't noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is an indispensable group of subjects with good visual acuity for whom the correction is applicable, although it does not improve the visual acuity much; its main importance is to eliminate asthenopia. The prime reason for acceptance of the correction typically changes during life as accommodation declines: young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus, while elderly people acquire the correction to improve visual acuity. Overall, the correction was found useful in more than 30% of cases if the gain in visual acuity was at least 0.3 on the decimal scale.

  15. Toward Best Practices For Assessing Near Surface Sensor Fouling: Potential Correction Approaches Using Underway Ferry Measurements

    NASA Astrophysics Data System (ADS)

    Sastri, A. R.; Dewey, R. K.; Pawlowicz, R.; Krogh, J.

    2016-02-01

    Data from long-term deployments of sensors on autonomous, mobile and cabled observation platforms suffer potential quality issues associated with bio-fouling. This issue is of particular concern for optical sensors, such as fluorescence and/or absorbance-based instruments, whose light emitting/receiving surfaces are prone to fouling due to constant contact with the marine environment. Here we examine signal quality for backscatter, chlorophyll and CDOM fluorescence from a single triplet instrument installed in a ferry box system (nominal depth of 3 m) operated by Ocean Networks Canada. The time series consists of 22 months of 8-10 daily transits across the productive waters of the Strait of Georgia, British Columbia, Canada (Nanaimo on Vancouver Island and Vancouver on mainland BC). The instruments were cleaned every 2 weeks, since all three experienced significant signal attenuation over that interval throughout the year. We experimented with a variety of pre- and post-cleaning measurements in an effort to develop `correction factors' with which to account for the effects of fouling. We found that CDOM fluorescence was especially sensitive to fouling and that correction factors derived from measurements of the fluorescence of standardized solutions successfully accounted for fouling. Similar results were found for chlorophyll fluorescence. Here we present results from our measurements and assess the efficacy of each of these approaches using comparisons against additional instruments less prone to signal attenuation over short periods.

  16. Prescribing in prison: minimizing psychotropic drug diversion in correctional practice.

    PubMed

    Pilkinton, Patricia D; Pilkinton, James C

    2014-04-01

    Correctional facilities are a major provider of mental health care throughout the United States. In spite of the numerous benefits of providing care in this setting, clinicians are sometimes concerned about entering into correctional care because of uncertainty in prescribing practices. This article provides an introduction to prescription drug use, abuse, and diversion in the correctional setting, including systems issues in prescribing, commonly abused prescription medications, motivation for and detection of prescription drug abuse, and the use of laboratory monitoring. By understanding the personal and systemic factors that affect prescribing habits, the clinician can develop a more rewarding correctional practice and improve care for inmates with mental illness.

  17. Comparison of observation level versus 24-hour average atmospheric loading corrections in VLBI analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; van Dam, T. M.

    2009-04-01

    Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates of the amplitude of the surface displacement indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, while in others the predicted atmospheric and observed geodetic surface displacements were averaged over 24 hours. A direct comparison of observation-level and 24-hour corrections has not been carried out for VLBI to determine whether one approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation-level corrections reduce the scatter of VLBI data more than a posteriori corrections, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects on the estimated TRF introduced when atmospheric loading is applied?

  18. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.

  19. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  20. On the Solution of the Continuity Equation for Precipitating Electrons in Solar Flares

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E.

    2014-01-01

    Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis & Zharkova claim to have found an "updated exact analytical solution" to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii & Shmeleva, and many others is invalid. We show that the solution of Dobranskis & Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the "new" analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis & Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii & Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.

  1. FMLRC: Hybrid long read error correction using an FM-index.

    PubMed

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read-only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency relative to existing methods will help make better economic use of emerging long read sequencing technologies.
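    The core primitive behind a BWT/FM-index corrector is backward search over the indexed short reads: counting how often a candidate sequence occurs tells the corrector which alternative spelling of a noisy long-read region is supported by the short-read data. A minimal single-string FM-index sketch (a toy illustration, not FMLRC's multi-string BWT implementation):

    ```python
    def bwt_index(text):
        """Build the BWT of text plus the C and Occ tables used by FM-index search."""
        text += "$"
        sa = sorted(range(len(text)), key=lambda i: text[i:])  # naive suffix array
        bwt = "".join(text[i - 1] for i in sa)                 # char preceding each suffix
        alphabet = sorted(set(text))
        # C[c] = number of characters in text strictly smaller than c
        C, total = {}, 0
        for c in alphabet:
            C[c] = total
            total += text.count(c)
        # Occ[c][i] = occurrences of c in bwt[:i]
        Occ = {c: [0] * (len(bwt) + 1) for c in alphabet}
        for i, ch in enumerate(bwt):
            for c in alphabet:
                Occ[c][i + 1] = Occ[c][i] + (1 if ch == c else 0)
        return C, Occ

    def fm_count(C, Occ, pattern):
        """Backward search: number of occurrences of pattern in the indexed text."""
        n = len(Occ[next(iter(Occ))]) - 1
        lo, hi = 0, n
        for c in reversed(pattern):          # consume the pattern right to left
            if c not in C:
                return 0
            lo = C[c] + Occ[c][lo]
            hi = C[c] + Occ[c][hi]
            if lo >= hi:
                return 0
        return hi - lo

    C, Occ = bwt_index("ACGTACGTACC")
    print(fm_count(C, Occ, "ACG"))  # 2
    print(fm_count(C, Occ, "ACC"))  # 1
    ```

    A production corrector replaces the naive suffix sort with a compressed multi-string BWT over millions of reads, but the backward-search query it issues is the same operation.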

  2. Automated MAD and MIR structure solution

    PubMed Central

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography has been developed. These criteria have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic-scale structure determinations. PMID:10089316

  3. Topographic Correction Module at Storm (TC@Storm)

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Cotar, K.; Veljanovski, T.; Pehani, P.; Ostir, K.

    2015-04-01

    Different solar positions in combination with terrain slope and aspect result in different illumination of inclined surfaces. Therefore, the retrieved satellite data cannot be accurately transformed to the spectral reflectance, which depends only on the land cover. Topographic correction should remove this effect and enable further automatic processing of higher-level products. The topographic correction module TC@STORM was developed within the SPACE-SI automatic near-real-time image processing chain STORM. It combines a physical approach with the standard Minnaert method. The total irradiance is modelled as a three-component irradiance: direct (dependent on incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on the sky-view factor), and diffuse reflected from the terrain (dependent on the sky-view factor and albedo). For the computation of diffuse irradiation from the sky we assume an anisotropic brightness of the sky, iteratively estimating a linear combination of 10 different models to provide the best results. Depending on the data resolution, we mask shadows based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. Final results of the correction were evaluated and statistically validated over various topography settings and land cover classes. The images show great improvements in shaded areas.

  4. Particle multiplicities in lead-lead collisions at the CERN large hadron collider from nonlinear evolution with running coupling corrections.

    PubMed

    Albacete, Javier L

    2007-12-31

    We present predictions for the pseudorapidity density of charged particles produced in central Pb-Pb collisions at the LHC. Particle production in such collisions is calculated in the framework of k_T factorization. The nuclear unintegrated gluon distributions at LHC energies are determined from numerical solutions of the Balitsky-Kovchegov equation including recently calculated running coupling corrections. The initial conditions for the evolution are fixed by fitting Relativistic Heavy Ion Collider data at collision energies √s_NN = 130 and 200 GeV per nucleon. We obtain dN_ch/dη|_(η=0) ≈ 1290-1480 for central Pb-Pb collisions at √s_NN = 5.5 TeV.

  5. Astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens.

    PubMed

    Fu, Xiao; Duan, Fajie; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Lv, Changrong

    2017-10-01

    As a special kind of spectrometer with the Czerny-Turner structure, the echelle spectrometer features two-dimensional dispersion, which leads to a complex astigmatic condition. In this work, we propose an optical design for an astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens. A mathematical model accounting for the astigmatism introduced by the off-axis mirrors, the echelle grating, and the prism is established. Our solution features simplified calculation and low-cost construction, and is capable of overall compensation of the astigmatism in a wide spectral range (200-600 nm). An optical simulation utilizing ZEMAX software, an astigmatism assessment based on Zernike polynomials, and an instrument experiment were implemented to validate the effect of the astigmatism correction. The results demonstrate that the astigmatism of the echelle spectrometer was corrected to a large extent, and a high spectral resolution, better than 0.1 nm, was achieved.

  6. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging, since the blue-weighted spectral characteristics of Cerenkov radiation are highly attenuated by mammalian tissue. Large organs such as the liver have also shown a higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity with a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model has the capability of correcting the CLI signal to provide measurements comparable with PET data.
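    A minimal sketch of this kind of depth correction (assuming a single-exponential attenuation model; the calibration depths, signals and coefficient are invented, not the study's fitted values):

    ```python
    import numpy as np

    # Phantom-style calibration: a source of known strength measured through
    # several known tissue depths (illustrative numbers with a little noise).
    depths_mm = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
    signal = 1e5 * np.exp(-0.22 * depths_mm) * (1 + 0.01 * np.array([0.3, -0.5, 0.2, 0.4, -0.1]))

    # Effective attenuation coefficient from a log-linear fit: ln(S) = ln(S0) - mu*d
    slope, intercept = np.polyfit(depths_mm, np.log(signal), 1)
    mu_eff = -slope  # per mm

    def correct_cli(measured, organ_depth_mm):
        """Scale a surface CLI measurement back to the source intensity at depth."""
        return measured * np.exp(mu_eff * organ_depth_mm)

    print(round(mu_eff, 3))  # ≈ 0.22 per mm, recovering the simulated coefficient
    ```

    A per-organ calibration factor (as in the abstract) would then map the depth-corrected signal to %ID/g.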

  7. SU-F-J-108: TMR Correction Factor Based Online Adaptive Radiotherapy for Stereotactic Radiosurgery (SRS) of L-Spine Tumors Using Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghaffar, I; Balik, S; Zhuang, T

    Purpose: To investigate the feasibility of using TMR ratio correction factors for a fast online adaptive plan to compensate for anatomical changes in stereotactic radiosurgery (SRS) of L-spine tumors. Methods: Three coplanar treatment plans were made for 11 patients: Uniform (9 IMRT beams equally distributed around the patient); Posterior (IMRT with 9 posterior beams every 20 degrees) and VMAT (2 360° arcs). For each patient, the external body and bowel gas were contoured on the planning CT and pre-treatment CBCT. After registering the CBCT and the planning CT by aligning to the tumor, the CBCT contours were transferred to the planning CT. To estimate the actual delivered dose while considering the patient's anatomy of the treatment day, a hybrid CT was created by overriding densities in the planning CT using the differences between CT and CBCT external and bowel gas contours. Correction factors (CF) were calculated using the effective depth information obtained from the planning system using the hybrid CT: CF = TMR (delivery)/TMR (planning). The adaptive plan was generated by multiplying the planned Monitor Units with the CFs. Results: The mean absolute difference (MAD) in V16Gy of the target between planned and estimated delivery with and without TMR correction was 0.8 ± 0.7% vs. 2.4 ± 1.3% for Uniform and 1.0 ± 0.9% vs. 2.6 ± 1.3% for VMAT plans (p<0.05), respectively. For V12Gy of cauda equina with and without TMR correction, MAD was 0.24 ± 0.19% vs. 1.2 ± 1.02% for Uniform and 0.23 ± 0.20% vs. 0.78 ± 0.79% for VMAT plans (p<0.05), respectively. The differences between adaptive and original plans were not significant for Posterior plans. Conclusion: The online adaptive strategy using TMR ratios and pre-treatment CBCT information was a feasible strategy to compensate for anatomical changes in patients treated for L-spine tumors, particularly for equally spaced IMRT and VMAT plans.
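    The MU scaling defined by the abstract's formula, CF = TMR(delivery)/TMR(planning), can be sketched directly (the TMR values and planned MU below are invented for illustration):

    ```python
    def tmr_correction_factor(tmr_delivery, tmr_planning):
        """CF = TMR at the treatment-day effective depth / TMR at the planning depth."""
        return tmr_delivery / tmr_planning

    def adapt_monitor_units(planned_mu, cf):
        """Online adaptation: scale a beam's planned Monitor Units by its CF."""
        return planned_mu * cf

    # illustrative values for one beam: the effective depth increased on the
    # treatment day, so the TMR dropped and the MU must be scaled up slightly less
    cf = tmr_correction_factor(0.742, 0.768)
    print(round(cf, 4))                             # 0.9661
    print(round(adapt_monitor_units(250.0, cf), 1)) # 241.5
    ```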

  8. A simple and effective solution to the constrained QM/MM simulations

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideaki; Kambe, Hiroyuki; Morita, Akihiro

    2018-04-01

    It is a promising extension of the quantum mechanical/molecular mechanical (QM/MM) approach to incorporate the solvent molecules surrounding the QM solute into the QM region, to ensure an adequate description of the electronic polarization of the solute. However, the solvent molecules in the QM region inevitably diffuse into the MM bulk during the QM/MM simulation. In this article, we developed a simple and efficient method, referred to as the "boundary constraint with correction (BCC)," to prevent the diffusion of the solvent water molecules by means of a constraint potential. The point of the BCC method is to compensate for the error in a statistical property due to the bias potential by adding a correction term obtained through a set of QM/MM simulations. The BCC method is designed so that the effect of the bias potential completely vanishes when the QM solvent is identical to the MM solvent. Furthermore, the desirable conditions, that is, the continuities of energy and force and the conservations of energy and momentum, are fulfilled in principle. We applied the QM/MM-BCC method to a hydronium ion (H3O+) in aqueous solution to construct the radial distribution function (RDF) of the solvent around the solute. It was demonstrated that the correction term fairly compensated for the error and brought the RDF into good agreement with the result given by an ab initio molecular dynamics simulation.

  9. 77 FR 50163 - Importer of Controlled Substances; Notice of Registration; Catalent Pharma Solutions, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... DEPARTMENT OF JUSTICE Drug Enforcement Administration Importer of Controlled Substances; Notice of Registration; Catalent Pharma Solutions, Inc. Correction In notice document 2012-19202 appearing on page 47114 in the issue of Tuesday, August 7, 2012, make the following correction: On page 47114, in the first...

  10. Alternate solution to generalized Bernoulli equations via an integrating factor: an exact differential equation approach

    NASA Astrophysics Data System (ADS)

    Tisdell, C. C.

    2017-08-01

    Solution methods for exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to that of Azevedo and Valentino, with the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
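    The exact-equation route can be sketched for the standard Bernoulli form (a textbook illustration of the idea, not Tisdell's full treatment):

    ```latex
    % Bernoulli equation and an integrating factor making it exact
    \begin{align*}
      y' + p(x)\,y &= q(x)\,y^{n}, \qquad n \neq 0,1.
    \intertext{Multiplying by $\mu(x,y) = (1-n)\,y^{-n}\,e^{(1-n)\int p(x)\,dx}$ turns the left side into the total derivative of}
      F(x,y) &= y^{1-n}\,e^{(1-n)\int p(x)\,dx},
    \intertext{so the equation is exact and integrates directly to the implicit general solution}
      y^{1-n}\,e^{(1-n)\int p(x)\,dx} &= (1-n)\int q(x)\,e^{(1-n)\int p(x)\,dx}\,dx + C.
    \end{align*}
    ```

    No substitution $v = y^{1-n}$ is needed; the integrating factor does the linearization implicitly.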

  11. Role Models without Guarantees: Corrective Representations and the Cultural Politics of a Latino Male Teacher in the Borderlands

    ERIC Educational Resources Information Center

    Singh, Michael V.

    2018-01-01

    In recent years mentorship has become a popular 'solution' for struggling boys of color and has led to the recruitment of more male of color teachers. While not arguing against the merits of mentorship, this article critiques what the author deems 'corrective representations.' Corrective representations are the imagined embodiment of proper and…

  12. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  13. Satellite clock corrections estimation to accomplish real time ppp: experiments for brazilian real time network

    NASA Astrophysics Data System (ADS)

    Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller

    2014-05-01

    The real time PPP method requires the availability of real time precise orbits and satellite clock corrections. Currently, it is possible to apply the clock and orbit solutions made available by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS real-time Pilot project, or to use the operational predicted IGU ephemerides. The accuracy of the satellite positions available in the IGU products is sufficient for several applications requiring good quality. However, the satellite clock corrections do not provide enough accuracy (3 ns ≈ 0.9 m) to accomplish real time PPP at the same level of accuracy. Therefore, for real time PPP applications it is necessary to research and develop appropriate methodologies for estimating the satellite clock corrections in real time with better accuracy. The BKG corrections are disseminated in a newly proposed format of RTCM 3.x and can be applied to the broadcast orbits and clocks. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double-difference level between satellites and epochs (MERVAT, DOUSA, 2007). Another possibility consists of applying a Kalman filter in network PPP mode (HAUSCHILD, 2010), and it is also possible to integrate both methods, using network PPP and double-difference observables in specific time intervals (ZHANG; LI; GUO, 2010). For this work the methodology adopted consists in the estimation of the satellite clock corrections based on data adjustment in PPP mode, but for a network of GNSS stations. The clock solution can be obtained using two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase.
In the former, we estimate receiver clock error; satellite clock correction and troposphere, considering
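The first observable mentioned above, code smoothed by carrier phase, is conventionally formed with a Hatch filter. The sketch below is a minimal single-frequency illustration with synthetic numbers; it is not the authors' implementation, and all values are invented:

```python
import numpy as np

def hatch_filter(code, phase, max_window=100):
    """Smooth a noisy pseudorange (code) series with the precise
    carrier phase: the phase delta carries the smoothed value
    forward, and the code re-anchors it with weight 1/n."""
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    for i in range(1, len(code)):
        n = min(i + 1, max_window)
        predicted = smoothed[i - 1] + (phase[i] - phase[i - 1])
        smoothed[i] = code[i] / n + predicted * (n - 1) / n
    return smoothed

# Synthetic example: true range plus noisy code and low-noise phase
rng = np.random.default_rng(0)
truth = 2.0e7 + 100.0 * np.arange(600)       # range in metres
code = truth + rng.normal(0.0, 3.0, 600)     # ~3 m code noise
phase = truth + rng.normal(0.0, 0.003, 600)  # ~3 mm phase noise
smoothed = hatch_filter(code, phase)
```

After the filter settles, the smoothed series tracks the truth far more tightly than the raw code, which is what makes the smoothed-code observable attractive for clock estimation.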

  14. Improving Performance in Quantum Mechanics with Explicit Incentives to Correct Mistakes

    ERIC Educational Resources Information Center

    Brown, Benjamin R.; Mason, Andrew; Singh, Chandralekha

    2016-01-01

    An earlier investigation found that the performance of advanced students in a quantum mechanics course did not automatically improve from midterm to final exam on identical problems even when they were provided the correct solutions and their own graded exams. Here, we describe a study, which extended over four years, in which upper-level…

  15. APFEL: A PDF evolution library with QED corrections

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Carrazza, Stefano; Rojo, Juan

    2014-06-01

Quantum electrodynamics and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that allows one, for the first time, to perform DGLAP evolution up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MSbar heavy-quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD ⊗ QED equations are solved in x-space by means of higher-order interpolation, followed by a Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes Deep-Inelastic Scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O(αs²). All the functionalities of APFEL can be accessed via a Graphical User Interface, supplemented with a variety of plotting tools for PDFs, parton luminosities and structure functions. Written in FORTRAN 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository.

  16. Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet

    PubMed Central

    Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo

    2016-01-01

    Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766

  17. Target Uncertainty Mediates Sensorimotor Error Correction.

    PubMed

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.

  18. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    NASA Astrophysics Data System (ADS)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic corrections based on the sun-canopy-sensor (SCS) model improve significantly on those based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine-resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variety of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model
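The underlying C-correction logic that the SCnS + C variant adapts has a standard textbook form; the sketch below uses that standard form with toy, perfectly linear pixel data (all symbols and values are illustrative, not the paper's):

```python
import numpy as np

def c_correction(radiance, cos_i, cos_sz, c):
    """Standard C-correction: rescale terrain radiance from the local
    illumination angle (cos_i) to a flat-surface reference at the
    solar zenith (cos_sz); the empirical C parameter damps the ratio
    to represent additive diffuse irradiance."""
    return radiance * (cos_sz + c) / (cos_i + c)

# C is derived from the per-class linear regression
# radiance = intercept + slope * cos_i, giving C = intercept / slope.
cos_i = np.array([0.3, 0.5, 0.7, 0.9])   # toy illumination cosines
radiance = 20.0 + 80.0 * cos_i           # perfectly linear toy pixels
slope, intercept = np.polyfit(cos_i, radiance, 1)
C = intercept / slope
cos_sz = np.cos(np.radians(30.0))        # solar zenith of 30 degrees
corrected = c_correction(radiance, cos_i, cos_sz, C)
```

On such idealized data, the correction removes the illumination dependence entirely, so all corrected pixels take the same value; real imagery only approaches this behavior.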

  19. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is affected only by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
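The two assumptions can be captured in a few lines. The sketch below is a hypothetical illustration of that logic; the function name, units, and values are ours, not the paper's:

```python
def compensated_dose(target_diameter, slope, intercept, neighbor_dose):
    """Assumption 1: in the working range, the printed diameter is
    linear in the dose factor: d = slope * dose + intercept.
    Assumption 2: only the nearest neighbors contribute extra
    exposure, summarized here as an equivalent dose 'neighbor_dose'."""
    # dose factor that would print target_diameter in isolation
    isolated = (target_diameter - intercept) / slope
    # subtract the share already deposited by the nearest neighbors
    return isolated - neighbor_dose

# Hypothetical numbers (nm and relative dose units, not from the paper):
dose = compensated_dose(target_diameter=200.0, slope=100.0,
                        intercept=50.0, neighbor_dose=0.2)
```

With these numbers the isolated dose is (200 − 50)/100 = 1.5, reduced to 1.3 after the nearest-neighbor compensation.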

  20. Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States

    NASA Astrophysics Data System (ADS)

    Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation in regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over limited/ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) calculating the CDR bias correction factor at the available gauge locations, based on a linear regression analysis of gauge and CDR annual maximum precipitation; and (2) extending the bias correction factor to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results are generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the Eastern and Western US. Cross-validation reveals that bias correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation at validation sites. It is suggested that CDR with bias adjustment has potential for frequency analysis of extreme events, especially in regions with limited gauge observations.
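Step (1) can be sketched as follows. The abstract does not state whether the regression is forced through the origin, so the zero-intercept form used here is an assumption, and the annual-maximum series are invented:

```python
import numpy as np

def bias_correction_factor(gauge_max, cdr_max):
    """Zero-intercept least-squares fit of gauge annual maxima on
    CDR annual maxima; the slope serves as a multiplicative bias
    correction factor for this gauge site."""
    return np.sum(gauge_max * cdr_max) / np.sum(cdr_max ** 2)

# Illustrative annual-maximum series (mm/day), not real data:
cdr = np.array([40., 55., 62., 48., 70.])
gauge = 1.25 * cdr + np.array([1., -2., 0.5, -1., 1.5])
f = bias_correction_factor(gauge, cdr)
corrected = f * cdr   # step (2) extends f to similar ungauged cells
```

Here the recovered factor is close to the 1.25 bias built into the toy data; in the study, factors fitted at gauged sites are transferred to ungauged sites with similar terrain and climate.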

  1. Development of attenuation and diffraction corrections for linear and nonlinear Rayleigh surface waves radiating from a uniform line source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr; Cho, Sungjong; Zhang, Shuzeng

    2016-04-15

In recent studies with nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and found to be sensitive to early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and frequently measured to quantify the level of damage. The accurate measurement of the nonlinearity parameter generally requires making corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves, and therefore not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of plane Rayleigh wave equations. To obtain closed-form expressions for diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.

  2. Experimental verification of Theodorsen's theoretical jet-boundary correction factors

    NASA Technical Reports Server (NTRS)

    Schliestett, George Van

    1934-01-01

    Prandtl's suggested use of a doubly infinite arrangement of airfoil images in the theoretical determination of wind-tunnel jet-boundary corrections was first adapted by Glauert to the case of closed rectangular jets. More recently, Theodorsen, using the same image arrangement but a different analytical treatment, has extended this work to include not only closed but also partly closed and open tunnels. This report presents the results of wind-tunnel tests conducted at the Georgia School of Technology for the purpose of verifying the five cases analyzed by Theodorsen. The tests were conducted in a square tunnel and the results constitute a satisfactory verification of his general method of analysis. During the preparation of the data two minor errors were discovered in the theory and these have been rectified.

  3. Factors determining outcome of corrective osteotomy for malunited paediatric forearm fractures: a systematic review and meta-analysis

    PubMed Central

    Roth, K. C.; Walenkamp, M. M. J.; van Geenen, R. C. I.; Reijman, M.; Verhaar, J. A. N.; Colaris, J. W.

    2017-01-01

    The aim of this study was to identify predictors of a superior functional outcome after corrective osteotomy for paediatric malunited radius and both-bone forearm fractures. We performed a systematic review and meta-analysis of individual participant data, searching databases up to 1 October 2016. Our primary outcome was the gain in pronosupination seen after corrective osteotomy. Individual participant data of 11 cohort studies were included, concerning 71 participants with a median age of 11 years at trauma. Corrective osteotomy was performed after a median of 12 months after trauma, leading to a mean gain of 77° in pronosupination after a median follow-up of 29 months. Analysis of variance and multiple regression analysis revealed that predictors of superior functional outcome after corrective osteotomy are: an interval between trauma and corrective osteotomy of less than 1 year, an angular deformity of greater than 20° and the use of three-dimensional computer-assisted techniques. Level of evidence: II PMID:28891765

  4. On the solution of the continuity equation for precipitating electrons in solar flares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E., E-mail: emslieg@wku.edu, E-mail: gordon.d.holman@nasa.gov

    2014-09-01

Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis and Zharkova claim to have found an 'updated exact analytical solution' to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii and Shmeleva, and many others is invalid. We show that the solution of Dobranskis and Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the 'new' analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis and Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii and Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.
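For orientation, the long-established solution that the authors defend can be stated in its textbook collisional thick-target form (with K the collisional stopping constant and N the column density); the following is reproduced from standard references, not from the paper under discussion:

```latex
% Continuity of the electron flux spectrum F(E,N) against column
% density N, with collisional energy losses dE/dN = -K/E:
\frac{\partial F(E,N)}{\partial N}
  + \frac{\partial}{\partial E}\!\left(\frac{dE}{dN}\,F(E,N)\right) = 0 .
% The characteristics conserve E_0^2 = E^2 + 2KN; conservation of
% electron flux, F(E,N)\,dE = F_0(E_0)\,dE_0, then gives
F(E,N) = F_0\!\left(\sqrt{E^2 + 2KN}\right)\frac{E}{\sqrt{E^2 + 2KN}} .
```

The disputed extra exponential term has no place in this derivation: along each characteristic the injected spectrum F_0 is simply remapped in energy, with the Jacobian factor E/E_0 accounting for the compression of the energy axis.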

  5. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing

    2015-09-01

    The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α2 ≃ 2α1.

  6. Experimental validation of beam quality correction factors for proton beams

    NASA Astrophysics Data System (ADS)

    Gomà, Carles; Hofstetter-Boillat, Bénédicte; Safai, Sairos; Vörös, Sándor

    2015-04-01

    This paper presents a method to experimentally validate the beam quality correction factors (kQ) tabulated in IAEA TRS-398 for proton beams and to determine the kQ of non-tabulated ionization chambers (based on the already tabulated values). The method is based exclusively on ionometry and it consists in comparing the reading of two ionization chambers under the same reference conditions in a proton beam quality Q and a reference beam quality 60Co. This allows one to experimentally determine the ratio between the kQ of the two ionization chambers. In this work, 7 different ionization chamber models were irradiated under the IAEA TRS-398 reference conditions for 60Co beams and proton beams. For the latter, the reference conditions for both modulated beams (spread-out Bragg peak field) and monoenergetic beams (pseudo-monoenergetic field) were studied. For monoenergetic beams, it was found that the experimental kQ values obtained for plane-parallel chambers are consistent with the values tabulated in IAEA TRS-398; whereas the kQ values obtained for cylindrical chambers are not consistent—being higher than the tabulated values. These results support the suggestion (of previous publications) that the IAEA TRS-398 reference conditions for monoenergetic proton beams should be revised so that the effective point of measurement of cylindrical ionization chambers is taken into account when positioning the reference point of the chamber at the reference depth. For modulated proton beams, the tabulated kQ values of all the ionization chambers studied in this work were found to be consistent with each other—except for the IBA FC65-G, whose experimental kQ value was found to be 0.6% lower than the tabulated one. The kQ of the PTW Advanced Markus chamber, which is not tabulated in IAEA TRS-398, was found to be 0.997 ± 0.042 (k = 2), based on the tabulated value of the PTW Markus chamber.
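The ionometric comparison described above reduces to simple reading ratios: since the dose D = M · N_D,w · kQ delivered to the two chambers is identical under the same reference conditions in each beam (and kQ = 1 in 60Co by definition), four readings fix the ratio of the two kQ values. A sketch with hypothetical charge readings (illustrative values only, not the paper's data):

```python
def kq_ratio(mA_Q, mA_Co, mB_Q, mB_Co):
    """Ratio kQ_A / kQ_B of two ionization chambers A and B from
    their readings M in the proton beam quality Q and in 60Co.
    Equating the dose M * N_Dw * kQ chamber-to-chamber in each beam
    and dividing eliminates the calibration coefficients N_Dw."""
    return (mB_Q / mB_Co) / (mA_Q / mA_Co)

# Hypothetical readings (nC); if kQ of chamber B is tabulated in
# IAEA TRS-398, then kQ_A = kq_ratio(...) * kQ_B.
ratio = kq_ratio(mA_Q=1.010, mA_Co=1.000, mB_Q=1.020, mB_Co=1.000)
```

This is how a non-tabulated chamber (such as the Advanced Markus in this work) can be assigned a kQ value traceable to a tabulated one.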

  7. Electroweak radiative corrections to the top quark decay

    NASA Astrophysics Data System (ADS)

    Kuruma, Toshiyuki

    1993-12-01

The top quark, once produced, should be an important window to the electroweak symmetry breaking sector. We compute electroweak radiative corrections to the decay process t → b + W+ in order to extract information on the Higgs sector and to fix the background in searches for a possible new physics contribution. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a discrepancy with the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for mH < 1 TeV.

  8. Optimal proximity correction: application for flash memory design

    NASA Astrophysics Data System (ADS)

    Chen, Y. O.; Huang, D. L.; Sung, K. T.; Chiang, J. J.; Yu, M.; Teng, F.; Chu, Lung; Rey, Juan C.; Bernard, Douglas A.; Li, Jiangwei; Li, Junling; Moroz, V.; Boksha, Victor V.

    1998-06-01

Proximity correction is a technology to which most IC manufacturers are already committed. The final intended result of correction is affected by many factors other than the optical characteristics of the mask-stepper system, such as photoresist exposure, post-exposure bake and development parameters, etch selectivity and anisotropy, and underlying topography. The most advanced industry and research groups have already reported an immediate need to consider wafer topography as one of the major components of a proximity correction procedure. In the present work we discuss the corner rounding effect (which can eventually cause electrical leakage) observed for the elements of the Poly2 layer in a flash memory design. It was found that the rounding originated from three-dimensional effects due to variation of photoresist thickness resulting from the non-planar substrate. Our major goal was to understand the causes of the corner rounding and to correct it. As a result of this work, a highly effective layout correction methodology was demonstrated and a manufacturable depth of focus was achieved. Another purpose of the work was to demonstrate a complete integration flow for a flash memory design based on photolithography; deposition/etch; ion implantation/oxidation/diffusion; and device simulators.

  9. Wall interference correction improvements for the ONERA main wind tunnels

    NASA Technical Reports Server (NTRS)

    Vaucheret, X.

    1982-01-01

    This paper describes improved methods of calculating wall interference corrections for the ONERA large windtunnels. The mathematical description of the model and its sting support have become more sophisticated. An increasing number of singularities is used until an agreement between theoretical and experimental signatures of the model and sting on the walls of the closed test section is obtained. The singularity decentering effects are calculated when the model reaches large angles of attack. The porosity factor cartography on the perforated walls deduced from the measured signatures now replaces the reference tests previously carried out in larger tunnels. The porosity factors obtained from the blockage terms (signatures at zero lift) and from the lift terms are in good agreement. In each case (model + sting + test section), wall corrections are now determined, before the tests, as a function of the fundamental parameters M, CS, CZ. During the windtunnel tests, the corrections are quickly computed from these functions.

  10. Biometrics encryption combining palmprint with two-layer error correction codes

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint features with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, the encoded keys and the features are fused together by an XOR operation, and the resulting information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
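The XOR binding and decode-after-XOR recovery described above can be sketched as follows. Here a simple repetition code stands in for the convolutional and cyclic layers, and all sizes and bit patterns are illustrative assumptions:

```python
import numpy as np

def encode(key_bits, rep=5):
    # repetition code as a stand-in for the paper's two-layer
    # convolutional + cyclic encoding
    return np.repeat(key_bits, rep)

def decode(code_bits, rep=5):
    # majority vote per group recovers each key bit despite errors
    return (code_bits.reshape(-1, rep).sum(axis=1) > rep // 2).astype(int)

rng = np.random.default_rng(1)
key = rng.integers(0, 2, 32)               # randomly generated original key
palm = rng.integers(0, 2, key.size * 5)    # enrolled palmprint feature bits
stored = encode(key) ^ palm                # helper data kept on the smart card

noisy = palm.copy()
noisy[np.arange(0, 50, 5)] ^= 1            # query features with 10 bit errors
recovered = decode(stored ^ noisy)         # XOR with query, then decode
```

Because the XOR of the stored data with the query features reproduces the codeword plus only the biometric error pattern, the error-correcting layer absorbs the fuzziness and returns the exact key.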

  11. Energy response corrections for profile measurements using a combination of different detector types.

    PubMed

    Wegener, Sonja; Sauer, Otto A

    2018-02-01

    Different detector properties will heavily affect the results of off-axis measurements outside of radiation fields, where a different energy spectrum is encountered. While a diode detector would show a high spatial resolution, it contains high atomic number elements, which lead to perturbations and energy-dependent response. An ionization chamber, on the other hand, has a much smaller energy dependence, but shows dose averaging over its larger active volume. We suggest a way to obtain spatial energy response corrections of a detector independent of its volume effect for profiles of arbitrary fields by using a combination of two detectors. Measurements were performed at an Elekta Versa HD accelerator equipped with an Agility MLC. Dose profiles of fields between 10 × 4 cm² and 0.6 × 0.6 cm² were recorded several times, first with different small-field detectors (unshielded diode 60012 and stereotactic field detector SFD, microDiamond, EDGE, and PinPoint 31006) and then with a larger volume ionization chamber Semiflex 31010 for different photon beam qualities of 6, 10, and 18 MV. Correction factors for the small-field detectors were obtained from the readings of the respective detector and the ionization chamber using a convolution method. Selected profiles were also recorded on film to enable a comparison. After applying the correction factors to the profiles measured with different detectors, agreement between the detectors and with profiles measured on EBT3 film was improved considerably. Differences in the full width half maximum obtained with the detectors and the film typically decreased by a factor of two. Off-axis correction factors outside of a 10 × 1 cm² field ranged from about 1.3 for the EDGE diode about 10 mm from the field edge to 0.7 for the PinPoint 31006 25 mm from the field edge. The microDiamond required corrections comparable in size to the Si-diodes and even exceeded the values in the tail region of the field. The SFD was found

  12. Extended Reissner-Nordström solutions sourced by dynamical torsion

    NASA Astrophysics Data System (ADS)

    Cembranos, Jose A. R.; Valcarcel, Jorge Gigante

    2018-04-01

    We find a new exact vacuum solution in the framework of the Poincaré Gauge field theory with massive torsion. In this model, torsion operates as an independent field and introduces corrections to the vacuum structure present in General Relativity. The new static and spherically symmetric configuration shows a Reissner-Nordström-like geometry characterized by a spin charge. It extends the known massless torsion solution to the massive case. The corresponding Reissner-Nordström-de Sitter solution is also compatible with a cosmological constant and additional U (1) gauge fields.

  13. The effect of suspending solution supplemented with marine cations on the oxidation of Biolog GN MicroPlate substrates by Vibrionaceae bacteria.

    PubMed

    Noble, L D; Gow, J A

    1998-03-01

    Bacteria belonging to the family Vibrionaceae were suspended using saline and a solution prepared from a marine-cations supplement. The effect of this on the profile of oxidized substrates obtained when using Biolog GN MicroPlates was investigated. Thirty-nine species belonging to the genera Aeromonas, Listonella, Photobacterium, and Vibrio were studied. Of the strains studied, species of Listonella, Photobacterium, and Vibrio could be expected to benefit from a marine-cations supplement that contained Na+, K+, and Mg2+. Bacteria that are not of marine origin are usually suspended in normal saline. Of the 39 species examined, 9 were not included in the Biolog data base and were not identified. Of the 30 remaining species, 50% were identified correctly using either of the suspending solutions. A further 20% were correctly identified only when suspended in saline. Three species, or 10%, were correctly identified only after suspension in the marine-cations supplemented solution. The remaining 20% of species were not correctly identified by either method. Generally, more substrates were oxidized when the bacteria had been suspended in the more complex salts solution. Usually, when identifications were incorrect, the use of the marine-cations supplemented suspending solution had resulted in many more substrates being oxidized. Based on these results, it would be preferable to use saline to suspend the cells when using Biolog for identification of species of Vibrionaceae. A salts solution containing a marine-cations supplement would be preferable for environmental studies where the objective is to determine profiles of substrates that the bacteria have the potential to oxidize. If identifications are done using marine-cations supplemented suspending solution, it would be advisable to include reference cultures to determine the effect of the supplement. 
Of the Vibrio and Listonella species associated with human clinical specimens, 8 out of the 11 studied were identified

  14. Feed-forward alignment correction for advanced overlay process control using a standalone alignment station "Litho Booster"

    NASA Astrophysics Data System (ADS)

    Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi

    2018-03-01

One of the main components of an On-Product Overlay (OPO) error budget is the process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution for improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool, and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.

  15. Ion recombination correction in carbon ion beams.

    PubMed

    Rossomme, S; Hopfgartner, J; Lee, N D; Delor, A; Thomas, R A S; Romano, F; Fukumura, A; Vynckier, S; Palmans, H

    2016-07-01

    In this work, ion recombination is studied as a function of energy and depth in carbon ion beams. Measurements were performed in three different passively scattered carbon ion beams with energies of 62 MeV/n, 135 MeV/n, and 290 MeV/n using various types of plane-parallel ionization chambers. Experimental results were compared with two analytical models for initial recombination. One model is generally used for photon beams and the other model, developed by Jaffé, takes into account the ionization density along the ion track. An investigation was carried out to ascertain the effect on the ion recombination correction with varying ionization chamber orientation with respect to the direction of the ion tracks. The variation of the ion recombination correction factors as a function of depth was studied for a Markus ionization chamber in the 62 MeV/n nonmodulated carbon ion beam. This variation can be related to the depth distribution of linear energy transfer. Results show that the theory for photon beams is not applicable to carbon ion beams. On the other hand, by optimizing the value of the ionization density and the initial mean-square radius, good agreement is found between Jaffé's theory and the experimental results. As predicted by Jaffé's theory, the results confirm that ion recombination corrections strongly decrease with an increasing angle between the ion tracks and the electric field lines. For the Markus ionization chamber, the variation of the ion recombination correction factor with depth was modeled adequately by a sigmoid function, which is approximately constant in the plateau and strongly increasing in the Bragg peak region to values of up to 1.06. Except in the distal edge region, all experimental results are accurately described by Jaffé's theory. Experimental results confirm that ion recombination in the investigated carbon ion beams is dominated by initial recombination. Ion recombination corrections are found to be significant and cannot be

  16. Involuntary eye motion correction in retinal optical coherence tomography: Hardware or software solution?

    PubMed

    Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M

    2017-04-01

    In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data to create an anatomically correct representation of the retina, either offline or online. In software-based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image, or by considering some prior assumptions about the nature of the eye motion. Careful investigation of the most common methods in the field provides invaluable insight into future directions of research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques, on the other hand, achieve moderate success, and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts contained in it. However, they are still relevant to the field, since they are the sole class of techniques that can be applied to legacy data acquired using systems that lack extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Direct perturbation theory for the dark soliton solution to the nonlinear Schrödinger equation with normal dispersion.

    PubMed

    Yu, Jia-Lu; Yang, Chun-Nuan; Cai, Hao; Huang, Nian-Ning

    2007-04-01

    After finding the basic solutions of the linearized nonlinear Schrödinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  18. Correction factor in temperature measurements by optoelectronic systems

    NASA Astrophysics Data System (ADS)

    Bikberdina, N.; Yunusov, R.; Boronenko, M.; Gulyaev, P.

    2017-11-01

    It is often necessary to investigate high-temperature, fast-moving microobjects. To measure their temperature, optoelectronic measuring systems are used. Optoelectronic systems are always calibrated against a stationary blackbody. One of the problems of pyrometry is that this calibration cannot be used to measure the temperature of moving objects. Two solutions are proposed in [1]. This article outlines the first results of validation [2]: an experimentally justified coefficient that takes into account the influence of an object's motion on the decrease in the video signal of the photosensor in the charge-accumulation regime. The study was partially supported by RFBR in the framework of research project № 15-42-00106.

  19. CRISPR/Cas9-mediated somatic correction of a novel coagulator factor IX gene mutation ameliorates hemophilia in mouse.

    PubMed

    Guan, Yuting; Ma, Yanlin; Li, Qi; Sun, Zhenliang; Ma, Lie; Wu, Lijuan; Wang, Liren; Zeng, Li; Shao, Yanjiao; Chen, Yuting; Ma, Ning; Lu, Wenqing; Hu, Kewen; Han, Honghui; Yu, Yanhong; Huang, Yuanhua; Liu, Mingyao; Li, Dali

    2016-05-01

    The X-linked genetic bleeding disorder caused by deficiency of coagulator factor IX, hemophilia B, is a disease ideally suited for gene therapy with genome editing technology. Here, we identify a family with hemophilia B carrying a novel mutation, Y371D, in the human F9 gene. The CRISPR/Cas9 system was used to generate distinct genetically modified mouse models and confirmed that the novel Y371D mutation resulted in a more severe hemophilia B phenotype than the previously identified Y371S mutation. To develop therapeutic strategies targeting this mutation, we subsequently compared naked DNA constructs versus adenoviral vectors to deliver Cas9 components targeting the F9 Y371D mutation in adult mice. After treatment, hemophilia B mice receiving naked DNA constructs exhibited correction of over 0.56% of F9 alleles in hepatocytes, which was sufficient to restore hemostasis. In contrast, the adenoviral delivery system resulted in a higher corrective efficiency but no therapeutic effects due to severe hepatic toxicity. Our studies suggest that CRISPR/Cas-mediated in situ genome editing could be a feasible therapeutic strategy for human hereditary diseases, although an efficient and clinically relevant delivery system is required for further clinical studies. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  20. Exact Closed-form Solutions for Lamb's Problem

    NASA Astrophysics Data System (ADS)

    Feng, Xi; Zhang, Haiming

    2018-04-01

    In this article, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem, for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expressions as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25, starting from Johnson's (1974) integral solutions, which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson (1974), which strongly confirms the correctness of our explicit formulas. It is hoped that in due time, these formulas may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.

  1. Exact closed-form solutions for Lamb's problem

    NASA Astrophysics Data System (ADS)

    Feng, Xi; Zhang, Haiming

    2018-07-01

    In this paper, we report on an exact closed-form solution for the displacement at the surface of an elastic half-space elicited by a buried point source that acts at some point underneath that surface. This is commonly referred to as the 3-D Lamb's problem, for which previous solutions were restricted to sources and receivers placed at the free surface. By means of the reciprocity theorem, our solution should also be valid as a means to obtain the displacements at interior points when the source is placed at the free surface. We manage to obtain explicit results by expressing the solution in terms of elementary algebraic expressions as well as elliptic integrals. We anchor our developments on Poisson's ratio 0.25, starting from Johnson's integral solutions, which must be computed numerically. In the end, our closed-form results agree perfectly with the numerical results of Johnson, which strongly confirms the correctness of our explicit formulae. It is hoped that in due time, these formulae may constitute a valuable canonical solution that will serve as a yardstick against which other numerical solutions can be compared and measured.

  2. Target Uncertainty Mediates Sensorimotor Error Correction

    PubMed Central

    Vijayakumar, Sethu; Wolpert, Daniel M.

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
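    The partial correction that the abstract attributes to an effort term in the loss function can be illustrated with a minimal quadratic model; the weighting scheme below is a hypothetical parameterization for illustration, not the authors' fitted model.

```python
def optimal_correction_fraction(error_weight, effort_weight):
    """Fraction a* of a feedback error that minimizes the quadratic loss
        error_weight * ((1 - a) * e)**2 + effort_weight * (a * e)**2.
    Setting the derivative to zero gives a* = w_err / (w_err + w_eff):
    correction is full only when adjusting the response costs nothing,
    and partial otherwise."""
    return error_weight / (error_weight + effort_weight)

# If target uncertainty lowers the weight placed on residual error
# (errors matter less when the target itself is uncertain),
# the predicted correction shrinks accordingly:
low_uncertainty = optimal_correction_fraction(4.0, 1.0)   # mostly corrected
high_uncertainty = optimal_correction_fraction(1.0, 1.0)  # partially corrected
```

    The design choice here is simply that both residual error and adjustment effort carry quadratic costs; partial correction then falls out of the trade-off, consistent with the minimal intervention principle described above.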

  3. Hepatitis C and the correctional population.

    PubMed

    Reindollar, R W

    1999-12-27

    The hepatitis C epidemic has extended well into the correctional population, where individuals predominantly originate from high-risk environments and have high-risk behaviors. Epidemiologic data estimate that 30% to 40% of the 1.8 million inmates in the United States are infected with the hepatitis C virus (HCV), the majority of whom were infected before incarceration. As in the general population, injection drug use accounts for the majority of HCV infections in this group: one to two thirds of inmates have a history of injection drug use before incarceration and continue to inject while in prison. Although correctional facilities also represent a high-risk environment for HCV infection because of a continued high incidence of drug use and high-risk sexual activities, available data indicate a low HCV seroconversion rate of 1.1 per 100 person-years in prison. Moreover, a high annual turnover rate means that many inmates return to their previous high-risk environments and behaviors that are conducive to either acquiring or spreading HCV. Despite a very high prevalence of HCV infection within the US correctional system, identification and treatment of at-risk individuals is inconsistent, at best. Variable access to correctional health-care resources, limited funding, high inmate turnover rates, and deficient follow-up care after release represent a few of the factors that confound HCV control and prevention in this group. Future efforts must focus on establishing an accurate knowledge base and implementing education, policies, and procedures for the prevention and treatment of hepatitis C in correctional populations.

  4. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. Methods such as local intensity scaling, modified power transformation, and distribution mapping, which adjusted the wet day frequencies, performed superior compared to the other methods, which did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE value above 0.81 over most parts of India. Hydrological
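    The core idea of replacing monthly statistics with a day-of-year sliding window can be sketched as follows. This is a simplified linear-scaling variant under assumed array shapes, not the paper's exact procedure, which applies the window to five different correction methods.

```python
import numpy as np

def sliding_window_factors(obs, mod, half_window=15):
    """Daily multiplicative correction factors (simple linear scaling)
    derived from a +/- half_window-day window over the day of year,
    instead of one factor per calendar month. `obs` and `mod` are
    2-D arrays of shape (n_years, 365): observed and modeled rainfall."""
    n_days = obs.shape[1]
    factors = np.empty(n_days)
    for d in range(n_days):
        # pool all years within the window, wrapping the year boundary
        idx = np.arange(d - half_window, d + half_window + 1) % n_days
        o, m = obs[:, idx].mean(), mod[:, idx].mean()
        factors[d] = o / m if m > 0 else 1.0
    return factors
```

    Corrected model rainfall is then simply `mod * factors[day_of_year]`; because each factor is pooled over a window rather than a whole month, abrupt month-boundary jumps in the corrected series are avoided.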

  5. 77 FR 20291 - Energy Conservation Program: Test Procedures for Residential Clothes Washers; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-04

    ... Conservation Program: Test Procedures for Residential Clothes Washers; Correction AGENCY: Office of Energy.... Department of Energy (DOE) is correcting a final rule establishing revised test procedures for residential... factor calculation section of the currently applicable test procedure. DATES: Effective: April 6, 2012...

  6. Differential dissolution of Lake Baikal diatoms: correction factors and implications for palaeoclimatic reconstruction

    NASA Astrophysics Data System (ADS)

    Battarbee, Richard W.; Mackay, A. W.; Jewson, D. H.; Ryves, D. B.; Sturm, M.

    2005-04-01

    In order to assess how faithfully the composition of diatom assemblages in the recent sediments of Lake Baikal represents the composition of the planktonic diatom populations in the lake, we have compared the flux of diatoms from the water column (i.e., "expected" in the sediment) with the accumulation rates of the same diatom taxa (i.e., "observed" in the sediment) from BAIK 38, a sediment core collected in the south basin of the lake. Whilst there are many uncertainties, the results indicate that only approximately 1% of the phytoplankton crop is preserved in the sediment and some species are more affected by dissolution than others. These findings are comparable to similar studies undertaken in the marine environment. In terms of differential dissolution, our studies suggest that the endemic taxa (e.g., Cyclotella minuta and Aulacoseira baicalensis) are the most resilient, whereas cosmopolitan taxa such as Nitzschia acicularis and Synedra acus are the least resilient. N. acicularis dissolves in the water column, but for other taxa, most dissolution takes place at the surface sediment-water interface. We use the data to develop a series of species-specific correction factors that allow the composition of the source populations to be reconstituted, and we argue that failure to take these processes into account can undermine the use of the diatom and biogenic silica record in Lake Baikal for palaeo-productivity and palaeoclimate reconstruction.
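    The species-specific correction factors described above are, in essence, ratios of the flux expected from the water column to the accumulation observed in the core. A toy sketch with invented numbers:

```python
def correction_factors(expected_flux, observed_acc):
    """Species-specific dissolution correction factors: the ratio of the
    diatom flux leaving the water column ('expected' in the sediment) to
    the accumulation rate observed in the core. Multiplying sediment
    counts by these factors reconstitutes the source assemblage.
    All numbers below are invented for illustration."""
    return {sp: expected_flux[sp] / observed_acc[sp] for sp in expected_flux}

cf = correction_factors(
    {"C. minuta": 100.0, "S. acus": 100.0},  # flux from the water column
    {"C. minuta": 5.0, "S. acus": 0.5},      # accumulation in the core
)
# The poorly preserved cosmopolitan taxon needs a far larger factor
# than the resilient endemic one.
```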

  7. The self-absorption correction factors for 210Pb concentration in mining waste and influence on environmental radiation risk assessment.

    PubMed

    Bonczyk, Michal; Michalik, Boguslaw; Chmielewska, Izabela

    2017-03-01

    The radioactive lead isotope 210Pb occurs in waste originating from the metal smelting and refining industry, gas and oil extraction, and sometimes from underground coal mines, and such waste is very often deposited in the natural environment. Radiation risk assessment requires accurate knowledge of the concentration of 210Pb in such materials. Laboratory measurement seems to be the only reliable method applicable in environmental 210Pb monitoring. One of the methods is gamma-ray spectrometry, a fast and cost-effective way to determine 210Pb concentration. On the other hand, the self-attenuation of the 46.5 keV gamma ray from 210Pb in a sample is significant, as it depends not only on sample density but also on sample chemical composition (sample matrix). This phenomenon is often responsible for under-estimation of the 210Pb activity concentration when gamma spectrometry is applied with no regard to the relevant corrections. Consequently, the corresponding radiation risk can also be improperly evaluated. Sixty samples of coal mining solid tailings (sediments created from underground mining water) were analysed. A transmission method, slightly modified and adapted to the existing laboratory conditions, was applied for accurate measurement of the 210Pb concentration. The observed concentrations of 210Pb range between 42.2 and 11,700 Bq·kg-1 of dry mass. Experimentally obtained correction factors related to sample density and elemental composition range between 1.11 and 6.97. Neglecting this factor can cause significant errors or underestimation in radiological risk assessment. The obtained results have been used for environmental radiation risk assessment performed with the ERICA tool, assuming exposure conditions typical for the final destination of this kind of waste.
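    The transmission method used in the paper is a modified laboratory procedure; the underlying relation, however, is the classic Cutshall-type self-absorption factor computed from a measured transmittance. The sketch below gives the textbook formula, not the authors' modified version.

```python
import math

def cutshall_factor(transmittance):
    """Self-absorption correction factor for a low-energy gamma line
    (e.g. the 46.5 keV line of 210Pb) from a transmission measurement:
    T = I_sample / I_blank, counted with an external source through the
    filled and the empty container. With T = exp(-mu*t), the factor
    mu*t / (1 - exp(-mu*t)) reduces to ln(T) / (T - 1), and tends to 1
    as the sample becomes transparent."""
    if transmittance >= 1.0:
        return 1.0
    return math.log(transmittance) / (transmittance - 1.0)

# A denser or higher-Z matrix transmits less and needs a larger correction:
assert cutshall_factor(0.5) > cutshall_factor(0.9)
```

    Factors of up to about 7, as reported in the abstract, correspond to samples that are nearly opaque at 46.5 keV.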

  8. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 547: Miscellaneous Contaminated Waste Sites, Nevada National Security Site, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark Krauss

    2011-09-01

    knowledge of the CASs were sufficient to meet the DQOs and evaluate CAAs without additional investigation. As a result, further investigation of the CAU 547 CASs was not required. The following CAAs were identified for the gas sampling assemblies: (1) clean closure, (2) closure in place, (3) modified closure in place, (4) no further action (with administrative controls), and (5) no further action. Based on the CAAs evaluation, the recommended corrective action for the three CASs in CAU 547 is closure in place. This corrective action will involve construction of a soil cover on top of the gas sampling assembly components and establishment of use restrictions at each site. The closure in place alternative was selected as the best and most appropriate corrective action for the CASs at CAU 547 based on the following factors: (1) Provides long-term protection of human health and the environment; (2) Minimizes short-term risk to site workers in implementing corrective action; (3) Is easily implemented using existing technology; (4) Complies with regulatory requirements; (5) Fulfills FFACO requirements for site closure; (6) Does not generate transuranic waste requiring offsite disposal; (7) Is consistent with anticipated future land use of the areas (i.e., testing and support activities); and (8) Is consistent with other NNSS site closures where contamination was left in place.

  9. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
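    The matrix-squaring integration of the coupled relaxation equations mentioned above can be sketched on a toy relaxation matrix; all rates here are invented for illustration, and this is a conceptual sketch rather than ARIA's actual implementation.

```python
import numpy as np

def noe_volumes(R, tau_mix, n_squarings=20):
    """Approximate the NOE volume matrix V = exp(-R * tau_mix) by a
    scaling-and-squaring scheme in the spirit of the matrix squaring
    mentioned in the abstract: take one first-order Euler step over
    tau_mix / 2**n_squarings, then square the result n_squarings times."""
    n = R.shape[0]
    V = np.eye(n) - R * (tau_mix / 2.0**n_squarings)
    for _ in range(n_squarings):
        V = V @ V
    return V

# Toy symmetric relaxation matrix for three protons (rates in s^-1,
# purely illustrative): off-diagonal entries are cross-relaxation rates.
R = np.array([[2.0, -0.5, -0.1],
              [-0.5, 2.0, -0.5],
              [-0.1, -0.5, 2.0]])
V = noe_volumes(R, tau_mix=0.1)
# The 1-3 cross peak V[0, 2] now contains magnetization relayed via
# spin 2 (spin diffusion); comparing it with an isolated-spin-pair
# estimate yields the kind of distance correction factor described.
```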

  10. Optical proximity correction for anamorphic extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.

  12. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.

  13. Anamorphic quasiperiodic universes in modified and Einstein gravity with loop quantum gravity corrections

    NASA Astrophysics Data System (ADS)

    Amaral, Marcelo M.; Aschheim, Raymond; Bubuianu, Laurenţiu; Irwin, Klee; Vacaru, Sergiu I.; Woolridge, Daniel

    2017-09-01

    The goal of this work is to elaborate on new geometric methods of constructing exact and parametric quasiperiodic solutions for anamorphic cosmology models in modified gravity theories, MGTs, and general relativity, GR. There exist previously studied generic off-diagonal and diagonalizable cosmological metrics encoding gravitational and matter fields with quasicrystal like structures, QC, and holonomy corrections from loop quantum gravity, LQG. We apply the anholonomic frame deformation method, AFDM, in order to decouple the (modified) gravitational and matter field equations in general form. This allows us to find integral varieties of cosmological solutions determined by generating functions, effective sources, integration functions and constants. The coefficients of metrics and connections for such cosmological configurations depend, in general, on all spacetime coordinates and can be chosen to generate observable (quasi)-periodic/aperiodic/fractal/stochastic/(super) cluster/filament/polymer like (continuous, stochastic, fractal and/or discrete structures) in MGTs and/or GR. In this work, we study new classes of solutions for anamorphic cosmology with LQG holonomy corrections. Such solutions are characterized by nonlinear symmetries of generating functions for generic off-diagonal cosmological metrics and generalized connections, with possible nonholonomic constraints to Levi-Civita configurations and diagonalizable metrics depending only on a time like coordinate. We argue that anamorphic quasiperiodic cosmological models integrate the concept of a quantum discrete spacetime, with certain gravitational QC-like vacuum and nonvacuum structures, and that of a contracting universe that homogenizes, isotropizes, and flattens without introducing initial-condition or multiverse problems.

  14. An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents

    ERIC Educational Resources Information Center

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2013-01-01

    Content authenticity and correctness are among the important challenges in eLearning, as there can be many solutions to one specific problem in cyberspace. Therefore, the authors feel it is necessary to map problems to solutions using graph partitioning and weighted bipartite matching. This article proposes an efficient algorithm to partition…
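    Mapping problems to their best-scoring solutions is a maximum-weight bipartite matching problem. A brute-force toy version is shown below (the article's algorithm targets the same matching problem efficiently at scale; the weights here are invented):

```python
from itertools import permutations

def best_matching(weights):
    """Maximum-weight bipartite matching between problems (rows) and
    candidate solutions (columns) of a square weight matrix, by brute
    force over permutations. Fine for this tiny illustration; real
    solvers use polynomial-time algorithms such as the Hungarian method."""
    n = len(weights)
    best = max(permutations(range(n)),
               key=lambda p: sum(weights[i][p[i]] for i in range(n)))
    return list(best)

# weights[i][j]: how well candidate solution j answers problem i.
w = [[0.9, 0.2, 0.1],
     [0.4, 0.8, 0.3],
     [0.2, 0.3, 0.7]]
assignment = best_matching(w)  # one solution per problem
```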

  15. Derivation and correction of the Tsu-Esaki tunneling current formula

    NASA Astrophysics Data System (ADS)

    Bandara, K. M. S. V.; Coon, D. D.

    1989-07-01

    The theoretical basis of the Tsu-Esaki tunneling current formula [Appl. Phys. Lett. 22, 562 (1973)] is examined in detail and corrections are found. The starting point is an independent particle picture with fully antisymmetrized N-electron wave functions. Unitarity is used to resolve an orthonormality issue raised in earlier work. A new set of mutually consistent equations is derived for bias voltage, tunneling current, and electron densities in the emitter and collector. Corrections include a previously noted kinematic factor and a modification of emitter and collector Fermi levels. The magnitude of the corrections is illustrated numerically for the case of a resonant tunneling current-voltage characteristic.
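    For reference, the uncorrected Tsu-Esaki formula in its commonly quoted form (standard notation; the corrections derived in the paper modify the kinematic factor and the treatment of the emitter and collector Fermi levels):

```latex
J = \frac{e\, m^{*} k_{B} T}{2\pi^{2}\hbar^{3}}
    \int_{0}^{\infty} \mathcal{T}(E_{z})\,
    \ln\!\left[
      \frac{1+\exp\!\big((E_{F}-E_{z})/k_{B}T\big)}
           {1+\exp\!\big((E_{F}-E_{z}-eV)/k_{B}T\big)}
    \right] dE_{z}
```

    Here \(\mathcal{T}(E_{z})\) is the transmission coefficient at longitudinal energy \(E_{z}\), \(m^{*}\) the effective mass, and \(V\) the applied bias; the logarithm results from integrating the Fermi occupation factors over the transverse momenta.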

  16. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE (rRMSE) and relative bias (rBIAS) statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723

  17. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE (rRMSE) and relative bias (rBIAS) statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
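    The Quantile Mapping step in isolation can be sketched as an empirical quantile transfer between the satellite and pooled gauge distributions of one hydroclimatic area; the area classification and wet-day bookkeeping of the full method are omitted here.

```python
import numpy as np

def quantile_map(sat, gauge, values):
    """Empirical quantile mapping: replace each satellite value by the
    gauge value at the same empirical quantile, with gauge data pooled
    over a hydroclimatic area (the corrected pixel's own gauge is not
    required). A minimal sketch of the correction step only."""
    sat_sorted = np.sort(sat)
    q = np.searchsorted(sat_sorted, values, side="right") / len(sat_sorted)
    return np.quantile(gauge, np.clip(q, 0.0, 1.0))

# If the satellite systematically reports half the gauge rainfall,
# mapping through the quantiles recovers the gauge-scale value:
sat = np.linspace(0.0, 1.0, 101)
gauge = 2.0 * np.linspace(0.0, 1.0, 101)
corrected = quantile_map(sat, gauge, np.array([0.5]))
```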

  18. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
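The two correction strategies can be contrasted in a small sketch. This is a one-covariate illustration of the general idea only (the models discussed above also include age, gender, and race/ethnicity as predictors); the function names are hypothetical.

```python
from statistics import mean

def ratio_correct(analyte, creatinine):
    # ratio-based: analyte mass per unit creatinine (hydration assumed to be
    # the only driver of UCR)
    return [a / c for a, c in zip(analyte, creatinine)]

def model_correct(analyte, creatinine):
    # model-based: regress analyte on creatinine and report values adjusted
    # to the mean creatinine level (ordinary least squares, one covariate)
    x_bar, y_bar = mean(creatinine), mean(analyte)
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(creatinine, analyte))
         / sum((x - x_bar) ** 2 for x in creatinine))
    return [y - b * (x - x_bar) for x, y in zip(creatinine, analyte)]

# Toy data where the analyte tracks creatinine exactly (pure hydration effect)
analyte = [2.0, 4.0, 6.0]
creat = [1.0, 2.0, 3.0]
print(ratio_correct(analyte, creat))   # [2.0, 2.0, 2.0]
print(model_correct(analyte, creat))   # [4.0, 4.0, 4.0]
```

The model-based version generalizes naturally: additional covariates simply enter the regression, which is the point made in the abstract.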

  19. Supersaturated calcium carbonate solutions are classical.

    PubMed

    Henzler, Katja; Fetisov, Evgenii O; Galib, Mirza; Baer, Marcel D; Legg, Benjamin A; Borca, Camelia; Xto, Jacinta M; Pin, Sonia; Fulton, John L; Schenter, Gregory K; Govind, Niranjan; Siepmann, J Ilja; Mundy, Christopher J; Huthwelker, Thomas; De Yoreo, James J

    2018-01-01

    Mechanisms of CaCO3 nucleation from solutions that depend on multistage pathways and the existence of species far more complex than simple ions or ion pairs have recently been proposed. Herein, we provide a tightly coupled theoretical and experimental study on the pathways that precede the initial stages of CaCO3 nucleation. Starting from molecular simulations, we succeed in correctly predicting bulk thermodynamic quantities and experimental data, including equilibrium constants, titration curves, and detailed X-ray absorption spectra taken from the supersaturated CaCO3 solutions. The picture that emerges is in complete agreement with classical views of cluster populations in which ions and ion pairs dominate, with the concomitant free energy landscapes following classical nucleation theory.

  20. Correction of Microplate Data from High-Throughput Screening.

    PubMed

    Wang, Yuhong; Huang, Ruili

    2016-01-01

    High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noises in the microplate-formatted data from HTS have unique characteristics, and they can generally be grouped into three categories: run-wise (temporal, multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noises, based mainly on pattern recognition and digital signal processing technologies.
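For the plate-wise (background pattern) category, a common illustration of pattern-based correction is to iteratively remove row and column medians, the idea behind B-score-style normalization. The sketch below shows that generic technique, not the chapter's specific pipeline; the function name is hypothetical.

```python
from statistics import median

def correct_plate(plate, n_pass=2):
    """Remove additive row/column background patterns from a microplate
    (list of rows) by repeatedly subtracting row and column medians."""
    p = [row[:] for row in plate]
    for _ in range(n_pass):
        for row in p:                      # remove per-row background
            m = median(row)
            for j in range(len(row)):
                row[j] -= m
        for j in range(len(p[0])):         # remove per-column background
            m = median(row[j] for row in p)
            for row in p:
                row[j] -= m
    return p
```

For a purely additive row-plus-column pattern, one pass already flattens the plate; real HTS plates combine this with the run-wise and well-wise corrections described above.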

  1. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to retrieve the surface reflectance by decoupling the atmosphere and the surface, which consumes a long computational time. Parallel computing is one solution for temporal acceleration. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction of a multispectral remote sensing image. The parallel framework's flow and the main parallel body of the atmospheric correction are described. A multispectral remote sensing image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed also increases, with a maximum speedup of 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
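The tile-level parallel strategy can be sketched with Python's standard library. The per-tile correction below is a toy linear atmosphere with invented coefficients standing in for the radiative transfer model, and a thread pool stands in for the multi-CPU scheme described in the abstract (CPU-bound work would normally use processes instead).

```python
from concurrent.futures import ThreadPoolExecutor

def correct_tile(tile):
    """Placeholder atmospheric correction of one image tile: invert a toy
    linear atmosphere TOA = a * rho_surface + b (coefficients invented)."""
    a, b = 0.8, 0.05
    return [[(v - b) / a for v in row] for row in tile]

def parallel_correction(tiles, workers=8):
    # distribute tiles over a pool of workers; map preserves tile order
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(correct_tile, tiles))
```

Splitting the image into independent tiles (or bands) is what makes the speedup scale with the number of workers, up to the scheduling overhead implied by the measured 6.5x at 8 CPUs.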

  2. Publisher Correction: Natural variation in the parameters of innate immune cells is preferentially driven by genetic factors.

    PubMed

    Patin, Etienne; Hasan, Milena; Bergstedt, Jacob; Rouilly, Vincent; Libri, Valentina; Urrutia, Alejandra; Alanio, Cécile; Scepanovic, Petar; Hammer, Christian; Jönsson, Friederike; Beitz, Benoît; Quach, Hélène; Lim, Yoong Wearn; Hunkapiller, Julie; Zepeda, Magge; Green, Cherie; Piasecka, Barbara; Leloup, Claire; Rogge, Lars; Huetz, François; Peguillet, Isabelle; Lantz, Olivier; Fontes, Magnus; Di Santo, James P; Thomas, Stéphanie; Fellay, Jacques; Duffy, Darragh; Quintana-Murci, Lluís; Albert, Matthew L

    2018-06-01

    In the version of this article initially published, the name of one author was incorrect (James P. Santo). The correct name is James P. Di Santo. The error has been corrected in the HTML and PDF versions of the article.

  3. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein S-100B based on concomitant creatine kinase (CK) levels. The CK-S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.

  4. How to tackle protein structural data from solution and solid state: An integrated approach.

    PubMed

    Carlon, Azzurra; Ravera, Enrico; Andrałojć, Witold; Parigi, Giacomo; Murshudov, Garib N; Luchinat, Claudio

    2016-02-01

    Long-range NMR restraints, such as diamagnetic residual dipolar couplings and paramagnetic data, can be used to determine 3D structures of macromolecules. They are also used to monitor, and potentially to improve, the accuracy of a macromolecular structure in solution by validating or "correcting" a crystal model. Since crystal structures suffer from crystal packing forces they may not be accurate models for the macromolecular structures in solution. However, the presence of real differences should be tested for by simultaneous refinement of the structure using both crystal and solution NMR data. To achieve this, the program REFMAC5 from CCP4 was modified to allow the simultaneous use of X-ray crystallographic and paramagnetic NMR data and/or diamagnetic residual dipolar couplings. Inconsistencies between crystal structures and solution NMR data, if any, may be due either to structural rearrangements occurring on passing from the solution to solid state, or to a greater degree of conformational heterogeneity in solution with respect to the crystal. In the case of multidomain proteins, paramagnetic restraints can provide the correct mutual orientations and positions of domains in solution, as well as information on the conformational variability experienced by the macromolecule. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometric deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles with the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretically designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed through the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  6. OPTICAL COHERENCE TOMOGRAPHY BASELINE PREDICTORS FOR INITIAL BEST-CORRECTED VISUAL ACUITY RESPONSE TO INTRAVITREAL ANTI-VASCULAR ENDOTHELIAL GROWTH FACTOR TREATMENT IN EYES WITH DIABETIC MACULAR EDEMA: The CHARTRES Study.

    PubMed

    Santos, Ana R; Costa, Miguel Â; Schwartz, Christian; Alves, Dalila; Figueira, João; Silva, Rufino; Cunha-Vaz, Jose G

    2018-06-01

    To identify baseline optical coherence tomography morphologic characteristics predicting the visual response to anti-vascular endothelial growth factor therapy in diabetic macular edema. Sixty-seven patients with diabetic macular edema completed a prospective, observational study (NCT01947881-CHARTRES). All patients received monthly intravitreal injections of Lucentis for 3 months followed by PRN treatment, and underwent best-corrected visual acuity measurements and spectral domain optical coherence tomography at baseline and Months 1, 2, 3, and 6. Visual treatment response was characterized as good (≥10 letters), moderate (5-10 letters), or poor (<5 letters gained, or letters lost). Spectral domain optical coherence tomography images were graded before and after treatment by a certified Reading Center. One month after the loading dose, 26 patients (38.80%) were identified as good responders, 19 (28.35%) as moderate, and 22 (32.83%) as poor responders. There were no significant best-corrected visual acuity and central retinal thickness differences at baseline (P = 0.176; P = 0.573, respectively). Ellipsoid zone disruption and disorganization of retinal inner layers were good predictors of treatment response, representing a significant risk for poor visual recovery with anti-vascular endothelial growth factor therapy (odds ratio = 10.96; P < 0.001 for ellipsoid zone disruption and odds ratio = 7.05; P = 0.034 for disorganization of retinal inner layers). Damage to the ellipsoid zone, higher values of disorganization of retinal inner layers, and central retinal thickness decrease are good predictors of best-corrected visual acuity response to anti-vascular endothelial growth factor therapy.

  7. SU-G-TeP1-03: Beam Quality Correction Factors for Linear Accelerator with and Without Flattening Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, D; Voigts-Rhetz, P von; Zink, K

    2016-06-15

    Purpose: The impact of removing the flattening filter on absolute dosimetry based on IAEA's TRS-398 and AAPM's TG-51 was investigated in this study using Monte Carlo simulations. Methods: The EGSnrc software package was used for all Monte Carlo simulations performed in this work. Five different ionization chambers and nine linear accelerator heads were modeled according to technical drawings. To generate a flattening filter free radiation field, the flattening filter was replaced by a 2 mm thick aluminum layer. Dose calculations in a water phantom were performed to calculate the beam quality correction factor k_Q as a function of the beam quality specifiers %dd(10)_x, TPR_20,10 and the mean photon and electron energies at the point of measurement in photon fields with (WFF) and without flattening filter (FFF). Results: The beam quality correction factor as a function of %dd(10)_x differs systematically between FFF and WFF beams for all investigated ionization chambers. The largest difference of 1.8% was observed for the largest investigated Farmer-type ionization chamber with a sensitive volume of 0.69 cm^3. For ionization chambers with a smaller nominal sensitive volume (0.015-0.3 cm^3) the deviation was less than 0.4% between WFF and FFF beams for %dd(10)_x > 62%. The specifier TPR_20,10 revealed a good correlation between WFF and FFF beams (<0.3%) only for low energies. Conclusion: The results confirm that %dd(10)_x is a suitable beam quality specifier for FFF beams with an acceptable bias. The deviation depends on the volume of the ionization chamber. Using %dd(10)_x to predict k_Q for a large-volume chamber in a FFF photon field may lead to unacceptable errors according to the results of this study. This bias may be caused by the volume effect due to the inhomogeneous photon fields of FFF linear accelerators.

  8. WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Niu, T

    2015-06-15

    Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the CT number of that specific tissue. By subtracting the ideal template from the CT image, the residuals from various error sources are generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous, low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the images without correction, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction.
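A one-dimensional toy version of the iterative loop (segment, build a per-class template, low-pass the residual as the shading estimate, subtract, repeat) can illustrate the idea. A plain moving average stands in for the forward projection and Savitzky-Golay filtering of the actual method, and all parameters are illustrative.

```python
def shading_correct(image, n_iter=3, win=11):
    """1-D toy of iterative template-based shading correction: threshold
    segmentation -> per-class-mean template -> low-pass filtered residual
    as the shading estimate -> subtract, then iterate."""
    img = list(image)
    for _ in range(n_iter):
        thr = sum(img) / len(img)                       # crude 2-class segmentation
        lo = [v for v in img if v < thr] or [0.0]
        hi = [v for v in img if v >= thr] or [0.0]
        m_lo, m_hi = sum(lo) / len(lo), sum(hi) / len(hi)
        residual = [v - (m_lo if v < thr else m_hi) for v in img]
        half = win // 2
        shading = []
        for i in range(len(residual)):                  # moving-average low-pass
            seg = residual[max(0, i - half):i + half + 1]
            shading.append(sum(seg) / len(seg))
        img = [v - s for v, s in zip(img, shading)]     # apply compensation
    return img
```

On a two-tissue signal with a slow additive ramp, each iteration refines the segmentation and shrinks the low-frequency error, which is the convergence behavior the abstract describes.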

  9. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountainous terrain induces retrieval distortion within identical surface types and their spectral signatures. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects based on Landsat TM images, whose 30 meter spatial resolution data can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as Quickbird and Ikonos, but there is little related research on the topographic correction of CBERS-02B images. In this study, mountainous terrain in Liaoning was taken as the objective. The original 15 meter digital elevation model data was interpolated to 2.36 meter resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were executed to correct the topographic effect, and the corrected results were compared. The scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were plotted, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were statistically calculated. The analysis shows that shadows are weakened in the corrected images compared with the original images, and the three-dimensional effect is removed. The absolute slope of the fitting lines in the scatter diagrams is diminished. The Minnaert correction method gives the most effective result. These results demonstrate that the above correction methods can be successfully adapted to CBERS-02B images. The DEM data can be

  10. Electrovacuum solutions in nonlocal gravity

    NASA Astrophysics Data System (ADS)

    Fernandes, Karan; Mitra, Arpita

    2018-05-01

    We consider the coupling of the electromagnetic field to a nonlocal gravity theory comprising the Einstein-Hilbert action together with a nonlocal R □^{-2} R term associated with a mass scale m. We demonstrate that in the case of the minimally coupled electromagnetic field, real corrections about the Reissner-Nordström background only exist between the inner Cauchy horizon and the event horizon of the black hole. This motivates us to consider the modified coupling of electromagnetism to this theory via the Kaluza ansatz. The Kaluza reduction introduces nonlocal terms involving the electromagnetic field into the purely gravitational nonlocal theory. An iterative approach is provided to perturbatively solve the equations of motion to arbitrary order in m^2 about any known solution of general relativity. We derive the first-order corrections and demonstrate that the higher order corrections are real and perturbative about the external background of a Reissner-Nordström black hole. We also discuss how the Kaluza-reduced action, through the inclusion of nonlocal electromagnetic fields, could also be relevant to quantum effects on curved backgrounds with horizons.

  11. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu

    2010-03-01

    Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method, based on apex-seeking, for ambiguous FADS solutions. Due to the partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts seeking apexes from the one-dimensional space onto which the original high-dimensional data is mapped. By finding two stable apexes in one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two blood perfusion phantoms and compared to two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in region-of-interest measurements on the phantom data. It can be applied to the estimation of TICs derived from CEUS images and the separation of different physiological regions in hepatic perfusion.

  12. The inner filter effects and their correction in fluorescence spectra of salt marsh humic matter.

    PubMed

    Mendonça, Ana; Rocha, Ana C; Duarte, Armando C; Santos, Eduarda B H

    2013-07-25

    The inner filter effects in synchronous fluorescence spectra (Δλ=60 nm) of sedimentary humic substances from a salt marsh were studied. According to their type and the influence of plant colonization, these humic substances have different spectral features and the inner filter effects act in a different manner. The fluorescence spectra of the humic substances from sediments with colonizing plants have a protein-like band (λexc=280 nm) which is strongly affected by primary and secondary inner filter effects. These effects were also observed for the bands situated at longer wavelengths, i.e., at λexc=350 nm and λexc=454 nm for the fulvic acids (FA) and humic acids (HA), respectively. However, they are more important for the band at 280 nm, causing spectral distortions which can be clearly seen when the spectra of 40 mg L(-1) solutions of different samples (dissolved organic carbon, DOC ~ 20 mg L(-1)) are compared with and without correction of the inner filter effects. The importance of the spectral distortions caused by inner filter effects was demonstrated in solutions containing a mixture of model compounds representing the fluorophores detected in the spectra of the sedimentary humic samples. The effectiveness of the mathematical correction of the inner filter effects in the spectra of those solutions and of solutions of sedimentary humic substances was studied. It was observed that inner filter effects in the spectra of sedimentary humic substances can be mathematically corrected, allowing a linear relationship between fluorescence intensity and humic substance concentration to be obtained and preventing distortions at concentrations as high as 50 mg L(-1) which would otherwise obscure the protein-like band. Copyright © 2013 Elsevier B.V. All rights reserved.
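The standard absorbance-based correction for primary (excitation) and secondary (emission) inner filter effects in a 1 cm right-angle cuvette is F_corr = F_obs · 10^((A_ex + A_em)/2). The abstract does not spell out the exact formula the authors applied, so the sketch below assumes this common form.

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    """Correct an observed fluorescence intensity for primary and secondary
    inner filter effects, given the absorbances at the excitation (a_ex)
    and emission (a_em) wavelengths, assuming a 1 cm right-angle cuvette:
    F_corr = F_obs * 10 ** ((A_ex + A_em) / 2)."""
    return f_obs * 10 ** ((a_ex + a_em) / 2)

# With A_ex = 0.2 and A_em = 0.1 the observed signal is scaled up by
# 10**0.15, i.e. roughly 41%
corrected = inner_filter_correct(100.0, 0.2, 0.1)
```

This is also why the correction matters most for concentrated samples: at 50 mg L(-1) the combined absorbance, and hence the multiplicative correction, becomes large enough to distort band shapes if left uncorrected.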

  13. Beam quality corrections for parallel-plate ion chambers in electron reference dosimetry

    NASA Astrophysics Data System (ADS)

    Zink, K.; Wulff, J.

    2012-04-01

    Current dosimetry protocols (AAPM, IAEA, IPEM, DIN) recommend parallel-plate ionization chambers for dose measurements in clinical electron beams. This study presents detailed Monte Carlo simulations of beam quality correction factors for four different types of parallel-plate chambers: NACP-02, Markus, Advanced Markus and Roos. These chambers differ in constructive details which should have a notable impact on the resulting perturbation corrections, and hence on the beam quality corrections. The results reveal deviations from the recommended beam quality corrections given in the IAEA TRS-398 protocol in the range of 0%-2%, depending on energy and chamber type. For well-guarded chambers, these deviations could be traced back to a non-unity and energy-dependent wall perturbation correction. In the case of the guardless Markus chamber, a nearly energy-independent beam quality correction results, as the effects of wall and cavity perturbation compensate each other. For this chamber, the deviations from the recommended values are the largest and may exceed 2%. From calculations of type-B uncertainties, including effects due to uncertainties in the underlying cross-section data as well as uncertainties in the chamber material composition and chamber geometry, the overall uncertainty of the calculated beam quality correction factors was estimated to be <0.7%. Due to the different chamber positioning recommendations given in the national and international dosimetry protocols, an additional uncertainty in the range of 0.2%-0.6% is present. According to the IAEA TRS-398 protocol, the uncertainty in clinical electron dosimetry using parallel-plate ion chambers is 1.7%. This study may help to reduce this uncertainty significantly.

  14. Corrective Action Glossary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-07-01

    The glossary of technical terms was prepared to facilitate the use of the Corrective Action Plan (CAP) issued by OSWER on November 14, 1986. The CAP presents model scopes of work for all phases of a corrective action program, including the RCRA Facility Investigation (RFI), Corrective Measures Study (CMS), Corrective Measures Implementation (CMI), and interim measures. The Corrective Action Glossary includes brief definitions of the technical terms used in the CAP and explains how they are used. In addition, expected ranges (where applicable) are provided. Parameters or terms not discussed in the CAP, but commonly associated with site investigations or remediations, are also included.

  15. Correction of Hydrostatic Cluster Masses through Power Ratios and Weak Lensing

    NASA Astrophysics Data System (ADS)

    Mahdavi, Andisheh

    2009-09-01

    The evolution of rich, X-ray emitting clusters of galaxies has given us precise measurements of the cosmological parameters, with dramatic constraints on the dark energy equation of state. Built into these measurements are wholesale corrections for the infamous "X-ray mass underestimate"---the fact that X-ray masses are systematically low due to the incomplete thermalization of the intracluster plasma. We seek to refine the mass correction for cosmological use through morphological power ratios. Power ratios deliver more accurate correction factors because they take into account variations in substructure from cluster to cluster. We will test their ability to correct X-ray masses by comparing hydrostatic and weak lensing mass profiles for a sample of 44 rich clusters of galaxies.

  16. An approximate JKR solution for a general contact, including rough contacts

    NASA Astrophysics Data System (ADS)

    Ciavarella, M.

    2018-05-01

    In the present note, we suggest a simple closed-form approximate solution to the adhesive contact problem under the so-called JKR regime. The derivation is based on generalizing the original JKR energetic derivation, assuming calculation of the strain energy in adhesiveless contact and unloading at constant contact area. The underlying assumption is that the contact area distributions are the same as under adhesiveless conditions (for an appropriately increased normal load), so that in general the stress intensity factors will not be exactly equal at all contact edges. The solution is simply that the indentation is δ = δ1 - √(2wA'/P''), where w is the surface energy, δ1 is the adhesiveless indentation, A' is the first derivative of the contact area and P'' is the second derivative of the load with respect to δ1. The solution only requires macroscopic quantities, not very elaborate local distributions, and is exact in many configurations like axisymmetric contacts, but also sinusoidal wave contacts, and correctly predicts some features of an ideal asperity model used as a test case and not as a real description of a rough contact problem. The solution therefore permits an estimate of the full solution for elastic rough solids with Gaussian multiple scales of roughness, which so far was lacking, using known adhesiveless simple results. The result turns out to depend only on the rms amplitude and slopes of the surface and, since in the fractal limit slopes would grow without limit, it tends to the adhesiveless result - although in this limit the JKR model is inappropriate. The solution would also go to the adhesiveless result for large rms amplitude of roughness hrms, irrespective of the small-scale details, in agreement with common sense, well known experiments and previous models by the author.
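The closed form δ = δ1 - √(2wA'/P'') can be checked numerically for the Hertzian sphere, where the adhesiveless contact area is A = πRδ1 and the load is P = (4/3)E*√R δ1^(3/2); in this axisymmetric case the formula reproduces the exact JKR relation, consistent with the exactness noted above. A minimal sketch under those Hertzian assumptions:

```python
import math

def jkr_indentation(delta1, w, E_star, R):
    """Approximate JKR indentation delta = delta1 - sqrt(2 w A' / P''),
    evaluated for a Hertzian sphere of radius R with effective modulus
    E_star, surface energy w, and adhesiveless indentation delta1."""
    A_prime = math.pi * R                      # dA/d(delta1) for A = pi*R*delta1
    P_dd = E_star * math.sqrt(R / delta1)      # d2P/d(delta1)^2 for Hertz load
    return delta1 - math.sqrt(2 * w * A_prime / P_dd)

# For the sphere this coincides with the exact JKR relation
# delta = a**2 / R - sqrt(2*pi*w*a / E_star), with a = sqrt(R * delta1).
```

Only the macroscopic functions A(δ1) and P(δ1) enter, which is the practical appeal of the result for rough-contact estimates.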

  17. Centroid-moment tensor solutions for July-September 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2001-06-01

    Centroid-moment tensor (CMT) solutions are presented for 308 earthquakes that occurred during the third quarter of 2000. The solutions are obtained using corrections for aspherical earth structure represented by a whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [Acoustical Imaging, Vol. 19, Plenum Press, New York, 1992, p. 785]. A model of anelastic attenuation of Durek and Ekström [Bull. Seism. Soc. Am. 86 (1996) 144] is used to predict the decay of the wave forms.

  18. Linear optics measurements and corrections using an AC dipole in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G.; Bai, M.; Yang, L.

    2010-05-23

    We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure β* and the chromatic β function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic β function in the yellow ring.

  19. S-NPP VIIRS thermal emissive band gain correction during the blackbody warm-up-cool-down cycle

    NASA Astrophysics Data System (ADS)

    Choi, Taeyoung J.; Cao, Changyong; Weng, Fuzhong

    2016-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has onboard calibrators, a blackbody (BB) and a Space View (SV), for Thermal Emissive Band (TEB) radiometric calibration. In normal operation the BB temperature is set to 292.5 K, providing one radiance level. The TEB calibration factors (F-factors) trended by NOAA's Integrated Calibration and Validation System (ICVS) show very stable responses; the BB Warm-Up-Cool-Down (WUCD) cycles, however, provide temperature-dependent measurements of detector gain. Since the launch of S-NPP, the NOAA Sea Surface Temperature (SST) group has noticed unexpected global SST anomalies during the WUCD cycles. In this study, the TEB F-factors are calculated during the WUCD cycle of June 17, 2015, and analyzed by identifying the VIIRS On-Board Calibrator Intermediate Product (OBCIP) files as Warm-Up or Cool-Down granules. To correct the SST anomaly, an F-factor correction parameter is calculated from modified C1 (or b1) values derived from the linear portion of the C1 coefficient during the WUCD. Applying the correction back to the original VIIRS SST bands significantly reduces the F-factor changes. Obvious improvements are observed in M12, M14, and M16, but correction effects are hardly seen in M16. Further investigation is needed to find the source of the F-factor oscillations during the WUCD.

  20. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, the new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without a complicated phase unwrapping procedure. The effectiveness of the technique is illustrated with EPI studies on phantoms and animal subjects. Implementation in different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  1. One-loop corrections from higher dimensional tree amplitudes

    DOE PAGES

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    2016-08-01

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be “detected” and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  3. Correcting a Metacognitive Error: Feedback Increases Retention of Low-Confidence Correct Responses

    ERIC Educational Resources Information Center

    Butler, Andrew C.; Karpicke, Jeffrey D.; Roediger, Henry L., III

    2008-01-01

    Previous studies investigating posttest feedback have generally conceptualized feedback as a method for correcting erroneous responses, giving virtually no consideration to how feedback might promote learning of correct responses. Here, the authors show that when correct responses are made with low confidence, feedback serves to correct this…

  4. Work characteristics as predictors of correctional supervisors’ health outcomes

    PubMed Central

    Buden, Jennifer C.; Dugan, Alicia G.; Namazi, Sara; Huedo-Medina, Tania B.; Cherniack, Martin G.; Faghri, Pouran D.

    2016-01-01

    Objective: This study examined associations among health behaviors, psychosocial work factors, and health status. Methods: Correctional supervisors (n=157) completed a survey that assessed interpersonal and organizational views on health. Chi-square tests and logistic regressions were used to examine relationships among variables. Results: Respondents had a higher prevalence of obesity and comorbidities compared to the general U.S. adult population. Burnout was significantly associated with nutrition, physical activity, sleep duration, sleep quality, diabetes, and anxiety/depression. Job meaning, job satisfaction, and workplace social support may predict health behaviors and outcomes. Conclusions: Correctional supervisors are understudied and have poor overall health status. Improving the health behaviors of middle-management employees may have a beneficial effect on the health of the entire workforce. This paper demonstrates the importance of psychosocial work factors that may contribute to health behaviors and outcomes. PMID:27483335

  5. Method and apparatus for providing pulse pile-up correction in charge quantizing radiation detection systems

    DOEpatents

    Britton, Jr., Charles L.; Wintenberg, Alan L.

    1993-01-01

    A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a preceding pulse's influence on the following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge quantity measurement.
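A digital analogue of the correction described above can be sketched as follows; the pulse shape, time constant, and event list are assumptions for illustration, not the patented circuit:

```python
import math

def pulse_shape(t, tau=2.0):
    """Unit-amplitude pulse tail vs time since its peak (assumed exponential)."""
    return math.exp(-t / tau) if t >= 0 else 0.0

def correct_pileup(events, tau=2.0):
    """events: list of (time, measured_amplitude); returns corrected amplitudes.

    The tail of the previous (already corrected) pulse is evaluated at the
    next pulse's peak time and subtracted from its measured amplitude.
    """
    corrected = []
    for i, (t, amp) in enumerate(events):
        if i == 0:
            corrected.append(amp)
        else:
            t_prev, _ = events[i - 1]
            tail = corrected[-1] * pulse_shape(t - t_prev, tau)
            corrected.append(amp - tail)
    return corrected

# Two true unit-amplitude pulses 1.0 time unit apart: the second piles up
# on the tail of the first.
events = [(0.0, 1.0), (1.0, 1.0 + pulse_shape(1.0))]
print(correct_pileup(events))  # tail removed from the second amplitude
```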

  6. Correction factors for the ISO rod phantom, a cylinder phantom, and the ICRU sphere for reference beta radiation fields of the BSS 2

    NASA Astrophysics Data System (ADS)

    Behrens, R.

    2015-03-01

    The International Organization for Standardization (ISO) requires in its standard ISO 6980 that beta reference radiation fields for radiation protection be calibrated in terms of absorbed dose to tissue at a depth of 0.07 mm in a slab phantom (30 cm x 30 cm x 15 cm). However, many beta dosemeters are ring dosemeters and are therefore irradiated on a rod phantom (1.9 cm in diameter and 30 cm long); or they are eye dosemeters, possibly irradiated on a cylinder phantom (20 cm in diameter and 20 cm high); or area dosemeters, irradiated free in air with the conventional quantity value (true value) defined in a sphere (30 cm in diameter, made of ICRU (International Commission on Radiation Units and Measurements) tissue). Therefore, the correction factors for the conventional quantity value in the rod, the cylinder, and the sphere instead of the slab (all made of ICRU tissue) were calculated for the radiation fields of the 147Pm, 85Kr, 90Sr/90Y, and 106Ru/106Rh sources of the beta secondary standard BSS 2 developed at PTB. All correction factors were calculated for radiation incidence from 0° up to 75°, in steps of 15°. The results are ready for implementation in ISO 6980-3 and have recently been (partly) implemented in the software of the BSS 2.

  7. On special training for correct deposition of semen.

    PubMed

    Dyrendahl, I

    1980-11-01

    The semen volume used in AI has been reduced during recent years from 1.0-1.2 ml with fluid semen to 0.5 ml with medium straws and to 0.25 ml with ministraws. Accordingly, correct deposition has become more important. The low fertility results attained by some technicians are often due to a lack of precision in deposition. A special insemination syringe, "Romeo", has been constructed in order to observe and correct this factor in field work. The syringe can be fixed in the cervix after it is placed in the supposedly correct position. An instructor can then check the position, and instructor and technician can discuss any mistakes. The instrument can also be used in the same way for training, on slaughter animals, technicians who have repeatedly placed the brand mark of the ordinary searing syringe wrongly.

  8. Evaluation of thermal network correction program using test temperature data

    NASA Technical Reports Server (NTRS)

    Ishimoto, T.; Fink, L. C.

    1972-01-01

    An evaluation process to determine the accuracy of a computer program for thermal network correction is discussed. The evaluation is required since factors such as inaccuracies of temperatures, insufficient number of temperature points over a specified time period, lack of one-to-one correlation between temperature sensor and nodal locations, and incomplete temperature measurements are not present in the computer-generated information. The mathematical models used in the evaluation are those that describe a physical system composed of both a conventional and a heat pipe platform. A description of the models used, the results of the evaluation of the thermal network correction, and input instructions for the thermal network correction program are presented.

  9. Atmospheric scattering corrections to solar radiometry

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
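The size of the effect can be illustrated with a small sketch (all numbers assumed): a ~1% diffuse contribution to the measured signal biases the optical depth retrieved from Bouguer's law low, and subtracting the estimated diffuse fraction before taking the logarithm restores the direct-beam value:

```python
import math

V0 = 1.0         # extraterrestrial instrument signal (assumed calibrated)
m = 1.5          # relative air mass (assumed)
tau_true = 0.3   # true direct-beam optical depth (assumed)

V_direct = V0 * math.exp(-tau_true * m)
V_meas = V_direct * 1.01   # ~1% diffuse contamination, per the abstract

# Bouguer's law applied naively vs with the diffuse fraction removed
tau_apparent = -math.log(V_meas / V0) / m
tau_corrected = -math.log((V_meas / 1.01) / V0) / m

print(tau_apparent, tau_corrected)  # the corrected value recovers tau_true
```

In practice the diffuse fraction is not a fixed 1% but depends on the field of view, optical depth, and aerosol phase function, which is what the paper's correction factors quantify.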

  10. Determination of recombination and polarity correction factors, kS and kP, for small cylindrical ionization chambers PTW 31021 and PTW 31022 in pulsed filtered and unfiltered beams.

    PubMed

    Bruggmoser, Gregor; Saum, Rainer; Kranzer, Rafael

    2018-01-12

    The aim of this technical communication is to provide correction factors for recombination and polarity effect for two new ionization chambers PTW PinPoint 3D (type 31022) and PTW Semiflex 3D (type 31021). The correction factors provided are for the (based on the) German DIN 6800-2 dosimetry protocol and the AAPM TG51 protocol. The measurements were made in filtered and unfiltered high-energy photon beams in a water equivalent phantom at maximum depth of the PDD and a field size on the surface of 10cm×10cm. The design of the new chamber types leads to an ion collection efficiency and a polarity effect that are well within the specifications requested by pertinent dosimetry protocols including the addendum of TG-51. It was confirmed that the recombination effect of both chambers mainly depends on dose per pulse and is independent of the filtration of the photon beam. Copyright © 2018. Published by Elsevier GmbH.

  11. Conductivity Cell Thermal Inertia Correction Revisited

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2012-12-01

    Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of the water temperature within the conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and the water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to flow speeds spanning well over an order of magnitude, both within and outside the conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as on tethered platforms. The length scale formed as the product of the cell's encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but with an order-of-magnitude energy saving. Flow conditions around a cell's exterior are found to be as important to the thermal inertia response as the flushing speed. Simplification of the cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises both from neglect of higher modes and from numerical discretization of the correction scheme, both of which can be easily quantified.
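The two-parameter (α, τ) correction of Lueck and Picklo can be sketched in the discrete recursive form commonly used in CTD processing; the parameter values and the step-change test series below are assumptions for illustration:

```python
def thermal_inertia_correction(T, dt, alpha, tau):
    """Return the temperature-correction series for measured temperatures T.

    Discrete recursive form of the two-parameter (alpha, tau) filter:
    a = 4*fn*alpha*tau / (1 + 4*fn*tau), b = 1 - 2*a/alpha, fn = Nyquist.
    """
    fn = 1.0 / (2.0 * dt)
    a = 4.0 * fn * alpha * tau / (1.0 + 4.0 * fn * tau)
    b = 1.0 - 2.0 * a / alpha
    corr = [0.0]
    for n in range(1, len(T)):
        corr.append(-b * corr[-1] + a * (T[n] - T[n - 1]))
    return corr

# A step change in temperature: the correction jumps by roughly alpha times
# the step and then decays on a timescale ~tau, mimicking the cell's stored heat.
dt, alpha, tau = 1.0, 0.04, 8.0      # assumed sampling interval [s] and parameters
T = [10.0] * 5 + [12.0] * 45         # 2 K step after 5 samples
corr = thermal_inertia_correction(T, dt, alpha, tau)
print(max(corr))                     # on the order of alpha * step
```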

  12. SiC MOSFET Based Single Phase Active Boost Rectifier with Power Factor Correction for Wireless Power Transfer Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onar, Omer C; Tang, Lixin; Chinthavali, Madhu Sudhan

    2014-01-01

    Wireless Power Transfer (WPT) technology is a novel research area in charging technology that bridges the utility and automotive industries. Various solutions are currently being evaluated by several research teams to find the most efficient way to manage the power flow from the grid to the vehicle energy storage system. Different control parameters can be utilized to compensate for the change in impedance due to variable parameters such as battery state of charge, coupling factor, and coil misalignment. This paper presents the implementation of an active front-end rectifier on the grid side for power factor control and voltage boost capability for load power regulation. The proposed SiC MOSFET based single-phase active front-end rectifier with PFC resulted in >97% efficiency at a 137 mm air gap and >95% efficiency at a 160 mm air gap.

  13. Implementation of Four-Phase Interleaved Balance Charger for Series-Connected Batteries with Power Factor Correction

    NASA Astrophysics Data System (ADS)

    Juan, Y. L.; Lee, Y. T.; Lee, Y. L.; Chen, L. L.; Huang, M. L.

    2017-11-01

    A four-phase interleaved balance charger for series-connected batteries with power factor correction is proposed in this paper. In two of the phases, two buck-boost converters first convert the rectified AC power to a DC-link capacitor. In the other two phases, two flyback converters convert the rectified AC power directly to charge the corresponding batteries; the energy in the leakage inductance of each flyback converter is bypassed to the DC-link capacitor. A dual-output balance charging circuit connected to the DC link then delivers the DC-link power to charge two batteries in the series-connected battery module. The constant-current/constant-voltage (CC/CV) charging strategy is adopted. Finally, a prototype of the proposed charger with a rated power of 500 W is constructed. The experimental results verify the performance and validity of the proposed topology. Compared to a conventional topology with a passive RCD snubber, the efficiency of the proposed topology is improved by about 3% and the voltage spike on the active switch is also reduced. The efficiency of the proposed charger is at least 83.6% over the CC/CV charging process.
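The CC/CV strategy mentioned above can be sketched on an idealized battery model (internal resistance plus a linear open-circuit voltage); every numerical value below is an assumption for illustration, not taken from the prototype:

```python
def cc_cv_charge(v_max=4.2, i_cc=2.0, i_cutoff=0.1, r_int=0.05,
                 capacity_ah=2.0, dt_h=0.01):
    """Simulate CC/CV charging of an idealized cell; returns final SOC and a log."""
    soc, ocv0, ocv_span = 0.2, 3.0, 1.2   # assumed cell parameters
    log = []
    while True:
        ocv = ocv0 + ocv_span * soc       # linear open-circuit voltage model
        i = i_cc                          # constant-current phase by default
        if ocv + i * r_int > v_max:       # CV phase: hold the terminal voltage
            i = (v_max - ocv) / r_int
        if i < i_cutoff:                  # taper current below cutoff: done
            break
        soc = min(1.0, soc + i * dt_h / capacity_ah)
        log.append((i, ocv + i * r_int))  # (current, terminal voltage)
    return soc, log

soc, log = cc_cv_charge()
currents = [i for i, _ in log]
voltages = [v for _, v in log]
print(round(soc, 3))  # charge terminates near full SOC, voltage capped at v_max
```

The design choice is the same as in the paper: charge at constant current until the terminal voltage reaches its limit, then hold the voltage while the current tapers to the cutoff.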

  14. The photon fluence non-uniformity correction for air kerma near Cs-137 brachytherapy sources.

    PubMed

    Rodríguez, M L; deAlmeida, C E

    2004-05-07

    The use of brachytherapy sources in radiation oncology requires their proper calibration to guarantee the correctness of the dose delivered to the treatment volume of a patient. One of the elements to take into account in the dose calculation formalism is the non-uniformity of the photon fluence due to the beam divergence that causes a steep dose gradient near the source. The correction factors for this phenomenon have been usually evaluated by the two theories available, both of which were conceived only for point sources. This work presents the Monte Carlo assessment of the non-uniformity correction factors for a Cs-137 linear source and a Farmer-type ionization chamber. The results have clearly demonstrated that for linear sources there are some important differences among the values obtained from different calculation models, especially at short distances from the source. The use of experimental values for each specific source geometry is recommended in order to assess the non-uniformity factors for linear sources in clinical situations that require special dose calculations or when the correctness of treatment planning software is verified during the acceptance tests.
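The effect the paper quantifies, fluence non-uniformity from beam divergence, can be illustrated with a toy 1/r² average over a finite cavity; the one-dimensional geometry below is assumed and is not the Farmer-chamber/Cs-137 configuration of the study:

```python
def mean_rel_fluence(d, cav_half, src_half=0.0, n=501, m=101):
    """Average of d^2/r^2 over a 1-D cavity of half-length cav_half placed
    transverse to the source direction at distance d; a finite line source
    is modeled as a superposition of point sources along its length."""
    total = 0.0
    for i in range(n):
        x = -cav_half + 2.0 * cav_half * i / (n - 1)
        if src_half == 0.0:
            total += d * d / (d * d + x * x)
        else:
            s = 0.0
            for j in range(m):
                y = -src_half + 2.0 * src_half * j / (m - 1)
                s += d * d / (d * d + (x - y) ** 2)
            total += s / m
    return total / n

d, cav = 10.0, 1.0                                # assumed distances [cm]
k_point = mean_rel_fluence(d, cav)                # point source
k_line = mean_rel_fluence(d, cav, src_half=2.0)   # 4 cm line source
print(k_point, k_line)  # both below 1; the line source deviates more
```

Even this toy model shows why point-source non-uniformity theories cannot simply be carried over to linear sources, which is the paper's main point.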

  15. Profile of hepatitis B and C virus infection in prisoners in Lubuk Pakam correctional facilities

    NASA Astrophysics Data System (ADS)

    Rey, I.; Saragih, R. H.; Effendi-YS, R.; Sembiring, J.; Siregar, G. A.; Zain, L. H.

    2018-03-01

    Prisoners in correctional facilities are predisposed to chronic viral infections because of their high-risk behaviors and unsafe lifestyles. The economic and public health burden of chronic hepatitis B and C and their sequelae needs to be addressed, for example by identifying risk factors and thereby reducing the spread of HCV and HBV infection in prisons. This study aimed to describe the profile of hepatitis B and C virus infection in prisoners in the Lubuk Pakam correctional facilities. This cross-sectional study was conducted in the Lubuk Pakam correctional facilities in 2016. Of the 1114 prisoners in the Lubuk Pakam correctional facility, we randomly examined 120 for HBV and HCV serology markers. Of these 120 prisoners, six were HBV positive, 21 were HCV positive, and one was positive for both HCV and HBV infection. The most common risk factors for HBV infection in prisoners were tattoos and free sex (36.4% each); the most common risk factors for HCV infection were tattoos and free sex (40% and 35%, respectively).

  16. Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers

    PubMed Central

    Michel, Burlyn E.

    1979-01-01

    Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685

  17. Supersaturated calcium carbonate solutions are classical

    PubMed Central

    Henzler, Katja; Fetisov, Evgenii O.; Galib, Mirza; Baer, Marcel D.; Legg, Benjamin A.; Borca, Camelia; Xto, Jacinta M.; Pin, Sonia; Fulton, John L.; Schenter, Gregory K.; Govind, Niranjan; Siepmann, J. Ilja; Mundy, Christopher J.; Huthwelker, Thomas; De Yoreo, James J.

    2018-01-01

    Mechanisms of CaCO3 nucleation from solutions that depend on multistage pathways and the existence of species far more complex than simple ions or ion pairs have recently been proposed. Herein, we provide a tightly coupled theoretical and experimental study on the pathways that precede the initial stages of CaCO3 nucleation. Starting from molecular simulations, we succeed in correctly predicting bulk thermodynamic quantities and experimental data, including equilibrium constants, titration curves, and detailed x-ray absorption spectra taken from the supersaturated CaCO3 solutions. The picture that emerges is in complete agreement with classical views of cluster populations in which ions and ion pairs dominate, with the concomitant free energy landscapes following classical nucleation theory. PMID:29387793

  18. Atomic-level simulation of ferroelectricity in perovskite solid solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepliarsky, M. (Instituto de Fisica Rosario, CONICET-UNR); Phillpot, S. R.

    2000-06-26

    Building on the insights gained from electronic-structure calculations and from experience obtained with an earlier atomic-level method, we developed an atomic-level simulation approach based on the traditional Buckingham potential with shell model which correctly reproduces the ferroelectric phase behavior and dielectric and piezoelectric properties of KNbO{sub 3}. This approach now enables the simulation of solid solutions and defected systems; we illustrate this capability by elucidating the ferroelectric properties of a KTa{sub 0.5}Nb{sub 0.5}O{sub 3} random solid solution. (c) 2000 American Institute of Physics.

  19. An asymptotic solution to a passive biped walker model

    NASA Astrophysics Data System (ADS)

    Yudaev, Sergey A.; Rachinskii, Dmitrii; Sobolev, Vladimir A.

    2017-02-01

    We consider a simple model of a passive dynamic biped robot walker with point feet and knee-less legs. The model is a switched system that includes an inverted double pendulum. The robot's gait and its stability depend on parameters such as the slope of the ramp, the length of the robot's legs, and the mass distribution along the legs. We present an asymptotic solution of the model. The first correction to the zero-order approximation is shown to agree with the numerical solution over a limited parameter range.

  20. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions are derived for predicting the received radiant power, a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  1. Modeling methylene blue aggregation in acidic solution to the limits of factor analysis.

    PubMed

    Golz, Emily K; Vander Griend, Douglas A

    2013-01-15

    Methylene blue (MB+), a common cationic thiazine dye, aggregates in acidic solutions. Absorbance data for equilibrated solutions of the chloride salt were analyzed over a concentration range of 1.0 × 10^-3 to 2.6 × 10^-5 M, in both 0.1 M HCl and 0.1 M HNO3. Factor analyses of the raw absorbance data sets (categorically a better choice than effective absorbance) definitively show that there are at least three distinct molecular absorbers regardless of acid type. A model with monomer, dimer, and trimer works well, but extensive testing has produced several other good models, some with higher-order aggregates and some with chloride anions. Good models were frequently indistinguishable from each other by quality of fit or by the reasonability of their molar absorptivity curves. The modeling of simulated data sets demonstrates the cases in which, and degrees to which, signal noise in the original data obscures the true model. In particular, the more mathematically similar (less orthogonal) the molar absorptivity curves of the chemical species in a model are, the less signal noise it takes to obscure the true model from other potentially good models. Unfortunately, the molar absorptivity curves in dye aggregation systems like that of methylene blue tend to be sufficiently similar as to obscure models even at the noise levels (0.0001 ABS) of typical benchtop spectrophotometers.
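The first step of such a factor analysis, estimating the number of distinct absorbers as the effective rank of the absorbance matrix, can be sketched with synthetic data at the stated 0.0001 ABS noise level; the spectra and concentrations below are invented, not the methylene blue data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_solutions, n_wavelengths, n_species = 12, 200, 3

C = rng.uniform(0.0, 1.0, size=(n_solutions, n_species))    # concentrations
E = rng.uniform(0.0, 1.0, size=(n_species, n_wavelengths))  # molar spectra
noise = 1e-4 * rng.normal(size=(n_solutions, n_wavelengths))
A = C @ E + noise   # Beer-Lambert absorbance matrix plus 0.0001 ABS noise

s = np.linalg.svd(A, compute_uv=False)
# Count singular values well above the noise floor (~1e-3 for this matrix size)
n_factors = int(np.sum(s > 0.01))
print(s[:5])
print(n_factors)  # -> 3
```

As the abstract notes, rank estimation only bounds the number of absorbers from below; choosing *which* speciation model (dimer, trimer, chloride adducts) generated the data is the harder, noise-limited step.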

  2. Liquid-Liquid Phase Separation in a Dual Variable Domain Immunoglobulin Protein Solution: Effect of Formulation Factors and Protein-Protein Interactions.

    PubMed

    Raut, Ashlesha S; Kalonia, Devendra S

    2015-09-08

    Dual variable domain immunoglobulin proteins (DVD-Ig proteins) are large molecules (MW ∼ 200 kDa) with increased asymmetry because of their extended Y-like shape, which results in increased formulation challenges. Liquid-liquid phase separation (LLPS) of protein solutions into protein-rich and protein-poor phases reduces solution stability at intermediate concentrations and lower temperatures, and is a serious concern in formulation development as therapeutic proteins are generally stored at refrigerated conditions. In the current work, LLPS was studied for a DVD-Ig protein molecule as a function of solution conditions by measuring solution opalescence. LLPS of the protein was confirmed by equilibrium studies and by visually observing under microscope. The protein does not undergo any structural change after phase separation. Protein-protein interactions were measured by light scattering (kD) and Tcloud (temperature that marks the onset of phase separation). There is a good agreement between kD measured in dilute solution with Tcloud measured in the critical concentration range. Results indicate that the increased complexity of the molecule (with respect to size, shape, and charge distribution on the molecule) increases contribution of specific and nonspecific interactions in solution, which are affected by formulation factors, resulting in LLPS for DVD-Ig protein.

  3. Hip Arthroscopy: Common Problems and Solutions.

    PubMed

    Casp, Aaron; Gwathmey, Frank Winston

    2018-04-01

    The use of hip arthroscopy continues to expand. Understanding potential pitfalls and complications associated with hip arthroscopy is paramount to optimizing clinical outcomes and minimizing unfavorable results. Potential pitfalls and complications are associated with preoperative factors such as patient selection, intraoperative factors such as iatrogenic damage, traction-related complications, inadequate correction of deformity, and nerve injury, or postoperative factors such as poor rehabilitation. This article outlines common factors that contribute to less-than-favorable outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall, which corrects for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement, has traditionally been determined by a procedure based on linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions, and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made for comparison with the measurements. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these two independent methods, together with the appreciable deviation (up to about 1%) between their results and those of the traditional extrapolation procedure, supports the conclusion that the two independent methods are correct and that the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.

  5. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.

  6. Consistency analysis and correction of ground-based radar observations using space-borne radar

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Zhu, Yiqing; Wang, Zhenhui; Wang, Yadong

    2018-04-01

    The lack of an accurate determination of the radar constant can introduce biases in ground-based radar (GR) reflectivity factor data and lead to poor consistency of radar observations. The geometry-matching method was applied to carry out spatial matching of radar data from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite to observations from a GR deployed at Nanjing, China, in their effective sampling volume, with 250 match-up cases obtained from January 2008 to October 2013. The consistency of the GR was evaluated with reference to the TRMM PR, whose stability is well established. The results show that the below-bright-band-height data of the Nanjing radar can be split into three periods: Period I from January 2008 to March 2010, Period II from March 2010 to May 2013, and Period III from May 2013 to October 2013. There are distinct differences in overall reflectivity factor between the three periods, and the overall reflectivity factor in Period II is lower by more than 3 dB than in Periods I and III, although the overall reflectivity within each period remains relatively stable. Further investigation shows that in Period II the difference between the GR and PR observations changed with echo intensity. A best-fit relation between the two radar reflectivity factors provides a linear correction that is applied to the reflectivity of the Nanjing radar, and which is effective in improving its consistency. Rain-gauge data were used to verify the correction, and the estimated precipitation based on the corrected GR reflectivity data was closer to the rain-gauge observations than that without correction.
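A best-fit linear correction of the kind described can be sketched as a simple least-squares fit of matched GR/PR samples; this is an illustrative reconstruction, not the authors' code, and the function name is hypothetical:

```python
import numpy as np

def linear_reflectivity_correction(z_gr, z_pr):
    """Fit Z_PR ~ a * Z_GR + b on matched reflectivity samples (dBZ)
    and return a function mapping raw GR values onto the PR reference."""
    a, b = np.polyfit(z_gr, z_pr, 1)  # least-squares slope and intercept
    return lambda z: a * np.asarray(z) + b
```

Applied to each Period II scan, such a function would shift the biased GR reflectivities toward the PR-consistent scale before rainfall estimation.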

  7. Computer simulations of alkali-acetate solutions: Accuracy of the forcefields in different concentrations

    NASA Astrophysics Data System (ADS)

    Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran

    2017-11-01

    When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable forcefields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied over a range of concentrations (0.03, 0.1, and 1 M). In addition, quantum-mechanics (QM)-based energy decomposition analysis was performed in order to estimate the contributions of polarization, electrostatics, dispersion, and QM (non-classical) effects to the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ and acetates. At lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the forcefields. Energy decomposition analysis suggested that improvement of the forcefield parameters to enable accurate simulations of Li-acetate solutions can be achieved but may require the use of a polarizable forcefield. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.

  8. A complete solution for GP-B's gyroscopic precession by retarded gravitational theory

    NASA Astrophysics Data System (ADS)

    Tang, Keyun

    Mainstream physicists generally believe that Mercury's perihelion precession and GP-B's gyroscopic precession are two of the strongest pieces of evidence supporting Einstein's curved spacetime and general relativity. However, most of the classical literature and textbooks (e.g. Ohanian: Gravitation and Spacetime) paint an incorrect picture of Mercury's orbit anomaly, namely that Mercury's perihelion precessed 43 arc-seconds per century; the correct picture is that Mercury rotated 43 arc-seconds per century more than the Newtonian theoretical orbit predicts. The essence of Le Verrier's and Newcomb's observation and analysis is that the angular speed of Mercury is slightly faster than the Newtonian theoretical value. The complete explanation of Mercury's orbit anomaly should include two factors: perihelion precession is one; in addition, the change of orbital radius will also cause a change of angular speed, which is the other component of Mercury's orbital anomaly. If the Schwarzschild metric is correct, then the solution of the Schwarzschild orbit equation must contain three non-negligible terms. The first corresponds to the Newtonian ellipse; the second is a nonlinear perturbation with increasing amplitude, which causes the precession of the orbit perihelion and is just one part of the angular speed anomaly of Mercury; the third is a linear perturbation, corresponding to a figure similar to Newton's ellipse but with a minimally smaller radius; this makes no contribution to the perihelion precession of the Schwarzschild orbit, but makes the Schwarzschild orbital radius slightly smaller, leading to a slight increase in Mercury's angular speed. The classical literature of general relativity has ignored this last factor, a gross oversight. If all three factors are correctly taken into account, the final result is that the difference between the angle rotated along Schwarzschild's orbit and the angle rotated along Newton's orbit for one hundred years should

  9. Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Rieke, William J.; Blankenship, Kurt S.

    2002-01-01

    The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentrations in the stratosphere. The present correction procedure applies a 1 percent increase to the measured I_SC values. High band-gap cells are more sensitive to the ozone-absorbed wavelengths (0.4 to 0.8 microns), so it becomes important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + O3 x Fo, where O3 is the total ozone along the optical path and Fo is 29.8 x 10^-6/DU for a silicon solar cell, 42.6 x 10^-6/DU for a GaAs cell, and 57.2 x 10^-6/DU for an InGaP cell. These correction factors work best to correct data points obtained during the flight rather than as a correction to the final result.
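As a rough illustration, the reported correction 1 + O3 x Fo can be applied as follows; the function and dictionary names are ours, while the Fo coefficients are those quoted in the abstract:

```python
# Fo slopes per cell technology, in 1/DU (Dobson units), as reported.
FO_PER_DU = {
    "Si":    29.8e-6,
    "GaAs":  42.6e-6,
    "InGaP": 57.2e-6,
}

def ozone_corrected_isc(isc_measured, ozone_du, cell_type):
    """Scale a measured short-circuit current by the ozone factor
    1 + O3 * Fo, where O3 is the total ozone along the optical path."""
    return isc_measured * (1.0 + ozone_du * FO_PER_DU[cell_type])
```

For 300 DU of ozone, a silicon cell's measured current is raised by about 0.9%, close to the flat 1% correction used previously, while an InGaP cell would be raised by about 1.7%.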

  10. Wellness and illness self-management skills in community corrections.

    PubMed

    Kelly, Patricia J; Ramaswamy, Megha; Chen, Hsiang-Feng; Denny, Donald

    2015-02-01

    Community corrections provide a readjustment venue for re-entry between incarceration and home for inmates in the US corrections system. Our goal was to determine how self-management skills, an important predictor of re-entry success, varied by demographic and risk factors. In this cross-sectional study, we analyzed responses of 675 clients from 57 community corrections programs run by the regional division of the Federal Bureau of Prisons. A self-administered survey collected data on self-management skills, demographics, and risk factors; significant associations were assessed in four regression models: the overall self-management score and three self-management subscales: coping skills, goals, and drug use. Over one-quarter (27.2%; n = 146) of participants had a mental health history. White race, no mental health history, and high school education were associated with better overall self-management scores; mental health history and drug use in the past year were associated with lower coping scores; female gender and high school education were associated with better self-management goals; female gender was associated with better self-management drug use scores. Self-management programs may need to be individualized for different groups of clients. Lower scores for those with less education suggest an area for targeted, nurse-led interventions.

  11. Determination of a correction factor for the interaction potential of He + ions backscattered from a Cu(1 0 0) surface

    NASA Astrophysics Data System (ADS)

    Draxler, M.; Walker, M.; McConville, C. F.

    2006-08-01

    We have used coaxial impact collision ion scattering spectroscopy (CAICISS) data collected from 3 keV He+ ions backscattered from a Cu(1 0 0) surface in different azimuthal orientations to investigate the influence of the screening length on CAICISS polar angle scans. We have compared the experimental data to computer simulations generated with the FAN code and found that for our experimental conditions an exceptionally low value of 0.53 was required for the correction factor to the Firsov screening length used with the Thomas-Fermi-Moliere potential. In addition we found that the Ziegler-Biersack-Littmark potential is not applicable, resulting in incorrect peak positions in the CAICISS polar angle plots.

  12. Boundary conditions for the solution of compressible Navier-Stokes equations by an implicit factored method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.

    1983-01-01

    A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.

  13. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSA) and especially Grover’s QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable, as is the Gaussian noise of classical circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels’ deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting qubit flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover’s QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane’s quantum error correction code is employed. Finally, apart from Steane’s code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865

  14. Intra-field on-product overlay improvement by application of RegC and TWINSCAN corrections

    NASA Astrophysics Data System (ADS)

    Sharoni, Ofir; Dmitriev, Vladimir; Graitzer, Erez; Perets, Yuval; Gorhad, Kujan; van Haren, Richard; Cekli, Hakki E.; Mulkens, Jan

    2015-03-01

    reticle generally results in global distortion of the reticle. This is not a problem as long as these global distortions can be corrected by the TWINSCAN™ system (currently up to the third order). It is anticipated that the combination of the RegC® and the TWINSCAN™ corrections acts as a complementary solution. These solutions fit naturally into the ASML Litho InSight (LIS) product, in which feedforward and feedback corrections based on YieldStar overlay measurements are used to improve on-product overlay.

  15. Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles

    NASA Astrophysics Data System (ADS)

    Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.

    2013-08-01

    The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.

  16. Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Schmid, R.; Rothacher, M.

    2004-12-01

    The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies of several groups have shown that the offsets are not homogeneous within a satellite block. Manufacturer specifications seem to confirm this assumption. In order to get the best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, and with block- and satellite-specific satellite antenna corrections, demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the

  17. Aspheric Solute Ions Modulate Gold Nanoparticle Interactions in an Aqueous Solution: An Optimal Way to Reversibly Concentrate Functionalized Nanoparticles

    PubMed Central

    Villarreal, Oscar D; Chen, Liao Y; Whetten, Robert L; Demeler, Borries

    2015-01-01

    Nanometer-sized gold particles (AuNPs) are of particular interest because their behaviors in an aqueous solution are sensitive to changes in environmental factors including the size and shape of the solute ions. In order to determine these important characteristics, we performed all-atom molecular dynamics simulations on icosahedral Au144 nanoparticles, each coated with a homogeneous set of 60 thiolates (4-mercapto-benzoate, pMBA), in eight aqueous solutions having ions of varying sizes and shapes (Na+, K+, tetramethylammonium cation TMA+, tris-ammonium cation TRS+, Cl−, and OH−). For each solution, we computed the reversible work (potential of mean force) to bring two nanoparticles together as a function of their separation distance. We found that the behavior of pMBA-protected Au144 nanoparticles can be readily modulated by tuning their aqueous environmental factors (pH and solute ion combinations). We examined the atomistic details of how the sizes and shapes of solute ions quantitatively factor into the definitive characteristics of nanoparticle-environment and nanoparticle-nanoparticle interactions. We predict that tuning the concentrations of non-spherical composite ions such as TRS+ in an aqueous solution of AuNPs will be an effective means to modulate the aggregation propensity desired in biomedical and other applications of small charged nanoparticles. PMID:26581232

  18. Aspheric Solute Ions Modulate Gold Nanoparticle Interactions in an Aqueous Solution: An Optimal Way To Reversibly Concentrate Functionalized Nanoparticles.

    PubMed

    Villarreal, Oscar D; Chen, Liao Y; Whetten, Robert L; Demeler, Borries

    2015-12-17

    Nanometer-sized gold particles (AuNPs) are of particular interest because their behaviors in an aqueous solution are sensitive to changes in environmental factors including the size and shape of the solute ions. In order to determine these important characteristics, we performed all-atom molecular dynamics simulations on icosahedral Au144 nanoparticles, each coated with a homogeneous set of 60 thiolates (4-mercaptobenzoate, pMBA), in eight aqueous solutions having ions of varying sizes and shapes (Na(+), K(+), tetramethylammonium cation TMA(+), tris-ammonium cation TRS(+), Cl(-), and OH(-)). For each solution, we computed the reversible work (potential of mean force) to bring two nanoparticles together as a function of their separation distance. We found that the behavior of pMBA-protected Au144 nanoparticles can be readily modulated by tuning their aqueous environmental factors (pH and solute ion combinations). We examined the atomistic details of how the sizes and shapes of solute ions quantitatively factor into the definitive characteristics of nanoparticle-environment and nanoparticle-nanoparticle interactions. We predict that tuning the concentrations of nonspherical composite ions such as TRS(+) in an aqueous solution of AuNPs will be an effective means to modulate the aggregation propensity desired in biomedical and other applications of small charged nanoparticles.

  19. Corrections Officer Physical Abilities Report. Standards and Training for Corrections Program.

    ERIC Educational Resources Information Center

    California State Board of Corrections, Sacramento.

    A study examined the physical ability requirements for entry-level corrections officers in California. The study, which was undertaken at the request of the California Board of Corrections, had the following objectives: statewide job analysis of the requirements of three entry-level positions in county agencies--corrections officer, probation…

  20. Alternate corrections for estimating actual wetland evapotranspiration from potential evapotranspiration

    USGS Publications Warehouse

    Shoemaker, W. Barclay; Sumner, D.M.

    2006-01-01

    Corrections can be used to estimate actual wetland evapotranspiration (AET) from potential evapotranspiration (PET) as a means to define the hydrology of wetland areas. Many alternate parameterizations for correction coefficients for three PET equations are presented, covering a wide range of possible data-availability scenarios. At nine sites in the wetland Everglades of south Florida, USA, the relatively complex PET Penman equation was corrected to daily total AET with smaller standard errors than the PET simple and Priestley-Taylor equations. The simpler equations, however, required less data (and thus less funding for instrumentation), with the possibility of being corrected to AET with slightly larger, comparable, or even smaller standard errors. Air temperature generally corrected PET simple most effectively to wetland AET, while wetland stage and humidity generally corrected PET Priestley-Taylor and Penman most effectively to wetland AET. Stage was identified for PET Priestley-Taylor and Penman as the data type with the most correction ability at sites that are dry part of each year or dry part of some years. Finally, although surface water generally was readily available at each monitoring site, AET was not occurring at potential rates, as conceptually expected under well-watered conditions. Apparently, factors other than water availability, such as atmospheric and stomata resistances to vapor transport, also were limiting the PET rate. © 2006, The Society of Wetland Scientists.

  1. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  2. A vibration correction method for free-fall absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s^-2, plays an important role in metrology, geophysics, and geodesy. Absolute gravimetry has experienced rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden-section search algorithm is used to find the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter were performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
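A golden-section search of the kind mentioned can be sketched as follows; the two-dimensional version here uses simple coordinate alternation and is only an assumed stand-in for the authors' algorithm, with a generic cost function in place of the transfer-function residual:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def golden_section_min_2d(f, xa, xb, ya, yb, sweeps=20):
    """Search two parameters by alternating 1-D golden-section searches
    (coordinate descent); adequate when f is unimodal in each variable."""
    x, y = (xa + xb) / 2, (ya + yb) / 2
    for _ in range(sweeps):
        x = golden_section_min(lambda u: f(u, y), xa, xb)
        y = golden_section_min(lambda v: f(x, v), ya, yb)
    return x, y
```

In the gravimeter setting, f(x, y) would be the residual of the corrected drop trajectory as a function of the two transfer-function parameters being fitted.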

  3. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
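The cross-correlation measurement can be illustrated with a minimal phase-correlation routine; this sketch is not the NASA code and stops at integer-pixel precision, whereas Jitter_Correct.m fits the relative phase to a plane for sub-pixel accuracy:

```python
import numpy as np

def measure_shift(ref, frame):
    """Estimate the (row, col) translation of `frame` relative to `ref` by
    phase correlation: the cross-power spectrum is normalized to unit
    magnitude and inverse-transformed; its peak sits at the relative shift.
    Returns t such that np.roll(frame, t.astype(int), axis=(0, 1))
    realigns frame with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    # wrap shifts beyond half the image size around to negative values
    shift[shift > shape / 2] -= shape[shift > shape / 2]
    return shift
```

Applying the measured shift to each frame (here with np.roll; real code would use sub-pixel interpolation) removes the inter-frame jitter.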

  4. Learners' Perception of Corrective Feedback in Pair Work

    ERIC Educational Resources Information Center

    Yoshida, Reiko

    2008-01-01

    The present study examines Japanese language learners' perception of corrective feedback (CF) in pair work in relation to their noticing and understanding of their partners' CF and the factors that influence it. This study focuses on three learners, who worked together in pair work. The data collection methods consist of classroom observation,…

  5. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
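The orthogonal-projection idea behind the background correction can be illustrated as follows; this is a generic sketch, assuming background spectra are available from blank (analyte-free) regions of the chromatogram, and is not the published algorithm:

```python
import numpy as np

def remove_background(X, B):
    """Project each row of X onto the orthogonal complement of the
    spectral space spanned by the columns of B, removing background drift.
    X: (n_times, n_wavelengths) chromatographic-spectral data matrix.
    B: (n_wavelengths, n_bg) background spectra from blank regions."""
    Q, _ = np.linalg.qr(B)             # orthonormal basis of background space
    P = np.eye(B.shape[0]) - Q @ Q.T   # projector onto orthogonal complement
    return X @ P
```

After this projection any spectral component lying in the background space is removed exactly, so subsequent trilinear decomposition such as PARAFAC no longer needs extra factors to model the drift.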

  6. Coping with Misinformation: Corrections, Backfire Effects, and Choice Architectures

    NASA Astrophysics Data System (ADS)

    Lewandowsky, S.; Cook, J.; Ecker, U. K.

    2012-12-01

    The widespread prevalence and persistence of misinformation about many important scientific issues, from climate change to vaccinations or the link between HIV and AIDS, must give rise to concern. We first review the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. We then survey and explain the cognitive factors that often render misinformation resistant to correction. We answer the question why retractions of misinformation can be so ineffective and why they can even backfire and ironically increase misbelief. We discuss the overriding role of ideology and personal worldviews in the resistance of misinformation to correction and show how their role can be attenuated. We discuss the risks associated with repeating misinformation while seeking to correct it and we point to the design of "choice architectures" as an alternative to the attempt to retract misinformation.

  7. Development of a Pressure Sensitive Paint System with Correction for Temperature Variation

    NASA Technical Reports Server (NTRS)

    Simmons, Kantis A.

    1995-01-01

    Pressure Sensitive Paint (PSP) is known to provide a global image of pressure over a model surface. However, improvements in its accuracy and reliability are needed. Several factors contribute to the inaccuracy of PSP. One major factor is that luminescence is temperature dependent. To correct the luminescence of the pressure-sensing component for changes in temperature, a temperature-sensitive luminophore is incorporated in the paint, allowing the user to measure both pressure and temperature simultaneously on the surface of a model. Magnesium Octaethylporphine (MgOEP) was used as the temperature-sensing luminophore, together with the pressure-sensing luminophore Platinum Octaethylporphine (PtOEP), to correct for temperature variations in model surface pressure measurements.

  8. Rigorous asymptotics of traveling-wave solutions to the thin-film equation and Tanner’s law

    NASA Astrophysics Data System (ADS)

    Giacomelli, Lorenzo; Gnann, Manuel V.; Otto, Felix

    2016-09-01

We are interested in traveling-wave solutions to the thin-film equation with zero microscopic contact angle (in the sense of complete wetting without precursor) and inhomogeneous mobility h³ + λ^(3−n) h^n, where h, λ, and n ∈ (3/2, 7/3) denote film height, slip parameter, and mobility exponent, respectively. Existence and uniqueness of these solutions have been established by Maria Chiricotto and the first of the authors in previous work under the assumption of sub-quadratic growth as h → ∞. In the present work we investigate the asymptotics of solutions as h ↘ 0 (the contact-line region) and as h → ∞. As h ↘ 0 we observe, to leading order, the same asymptotics as for traveling waves or source-type self-similar solutions to the thin-film equation with homogeneous mobility h^n, and we additionally characterize corrections to this law. Moreover, as h → ∞ we identify, to leading order, the logarithmic Tanner profile, i.e. the solution to the corresponding unperturbed problem with λ = 0 that determines the apparent macroscopic contact angle. Besides higher-order terms, corrections turn out to affect the asymptotic law as h → ∞ only by setting the length scale in the logarithmic Tanner profile. Moreover, we prove that both the correction and the length scale depend smoothly on n. Hence, in line with the common philosophy, the precise modeling of liquid-solid interactions (within our model, the mobility exponent) does not affect the qualitative macroscopic properties of the film.

  9. TH-CD-BRA-05: First Water Calorimetric Dw Measurement and Direct Measurement of Magnetic Field Correction Factors, KQ,B, in a 1.5 T B-Field of An MRI Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prez, L de; Pooter, J de; Jansen, B

    2016-06-15

Purpose: Reference dosimetry in MR-guided radiotherapy is performed in the presence of a B-field. As a consequence, the response of ionization chambers changes considerably and depends on parameters not considered in traditional reference dosimetry. Therefore, future Codes of Practice need ionization chamber correction factors to correct for both the change in beam quality and the presence of a B-field. The objective was to study the feasibility of water calorimetric absorbed-dose measurements in the 1.5 T B-field of an MRLinac and the direct measurement of kQ,B correction factors for ionization chambers. Methods: Calorimetric absorbed dose to water Dw was measured with a new water calorimeter in the bore of an MRLinac (TPR20,10 of 0.702). Two waterproof ionization chambers (PTW 30013, IBA FC-65G) were calibrated inside the calorimeter phantom (ND,w,Q,B). Both measurements were normalized to a monitor ionization chamber, and the ionization chamber measurements were corrected for conventional influence parameters. Based on the chambers' Co-60 calibrations (ND,w,Q0), measured directly against the calorimeter, the correction factor kQ,B was determined in this study as the ratio of the calibration coefficients in the MRLinac and in Co-60. Additionally, kB was determined based on kQ values obtained with the IAEA TRS-398 Code of Practice. Results: The kQ,B factors of the ionization chambers mentioned above were respectively 0.9488(8) and 0.9445(8), with resulting kB factors of 0.961(13) and 0.952(13), with standard uncertainties on the least significant digit(s) given in brackets. Conclusion: Calorimetric Dw measurements and calibration of waterproof ionization chambers were successfully carried out in the 1.5 T B-field of an MRLinac with a standard uncertainty of 0.7%. Preliminary kQ,B and kB factors were determined with standard uncertainties of respectively 0.8% and 1.3%. The kQ,B agrees with an alternative method within 0.4%. The feasibility of water calorimetry in the presence
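The ratio relations described in this record (kQ,B as the ratio of the MRLinac calibration coefficient to the Co-60 one, and kB obtained by dividing out a TRS-398 beam-quality factor kQ) can be sketched numerically. The kQ value below is a hypothetical placeholder for illustration, not a number from the record.

```python
# Sketch of the ratio definitions used in the record:
#   k_QB = N_D,w(Q,B) / N_D,w(Q0)   (MRLinac calibration over Co-60 calibration)
#   k_B  = k_QB / k_Q               (separating out the pure B-field part)

def k_qb(n_dw_mrlinac, n_dw_co60):
    """Combined beam-quality and magnetic-field correction factor."""
    return n_dw_mrlinac / n_dw_co60

def k_b(k_qb_value, k_q):
    """Isolate the B-field correction using a TRS-398 beam-quality factor k_Q."""
    return k_qb_value / k_q

# Reported k_QB for the PTW 30013 chamber was 0.9488; the k_Q of 0.987 below
# is a hypothetical placeholder, chosen only to illustrate the division.
print(round(k_b(0.9488, 0.987), 3))
```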

  10. Wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1989-01-01

    Wind tunnel wall interference assessment and correction (WIAC) concepts, applications, and typical results are discussed in terms of several nonlinear transonic codes and one panel method code developed for and being implemented at NASA-Langley. Contrasts between 2-D and 3-D transonic testing factors which affect WIAC procedures are illustrated using airfoil data from the 0.3 m Transonic Cryogenic Tunnel and Pathfinder 1 data from the National Transonic Facility. Initial results from the 3-D WIAC codes are encouraging; research on and implementation of WIAC concepts continue.

  11. Conversion and correction factors for historical measurements of iodine-131 in Hanford-area vegetation, 1945--1947. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mart, E.I.; Denham, D.H.; Thiede, M.E.

    1993-12-01

This report is a product of the Hanford Environmental Dose Reconstruction (HEDR) Project, whose goal is to estimate the radiation dose that individuals could have received from emissions since 1944 at the U.S. Department of Energy's (DOE) Hanford Site near Richland, Washington. The HEDR Project is conducted by Battelle, Pacific Northwest Laboratories (BNW). One of the emitted radionuclides that would affect the radiation dose was iodine-131. This report describes in detail the reconstructed conversion and correction factors for historical measurements of iodine-131 in Hanford-area vegetation collected from the beginning of October 1945 through the end of December 1947.

  12. Self-similar solutions to isothermal shock problems

    NASA Astrophysics Data System (ADS)

    Deschner, Stephan C.; Illenseer, Tobias F.; Duschl, Wolfgang J.

We investigate exact solutions for isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions where possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms. We use similarity methods to derive a system of ordinary differential equations (ODEs) yielding exact solutions for power-law density distributions as initial conditions. Further, the system of ODEs accounts for implosion problems (IPs) as well as explosion problems (EPs) by changing the initial or boundary conditions, respectively. Taking genuinely isothermal approximations into account leads to additional insights into EPs compared with earlier models. We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that due to this parameter a constant initial density is not allowed for isothermal EPs, and reasonable restrictions for this parameter are given. Both the genuinely isothermal implosion problem and the explosion problem are solved for the first time.

  13. Corrective Action Decision Document for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    U.S. Department of Energy, Nevada Operations Office

    2000-02-08

This Corrective Action Decision Document identifies and rationalizes the US Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 428, Septic Waste Systems 1 and 5, under the Federal Facility Agreement and Consent Order. Located in Area 3 at the Tonopah Test Range (TTR) in Nevada, CAU 428 comprises two Corrective Action Sites (CASs): (1) CAS 03-05-002-SW01, Septic Waste System 1 and (2) CAS 03-05-002-SW05, Septic Waste System 5. A corrective action investigation performed in 1999 detected analyte concentrations that exceeded preliminary action levels; specifically, contaminants of concern (COCs) included benzo(a)pyrene in a septic tank integrity sample associated with Septic Tank 33-1A of Septic Waste System 1, and arsenic in a soil sample associated with Septic Waste System 5. During this investigation, three Corrective Action Objectives (CAOs) were identified to prevent or mitigate exposure to contents of the septic tanks and distribution box, to subsurface soil containing COCs, and the spread of COCs beyond the CAU. Based on these CAOs, a review of existing data, future use, and current operations in Area 3 of the TTR, three CAAs were developed for consideration: Alternative 1 - No Further Action; Alternative 2 - Closure in Place with Administrative Controls; and Alternative 3 - Clean Closure by Excavation and Disposal. These alternatives were evaluated based on four general corrective action standards and five remedy selection decision factors. Based on the results of the evaluation, the preferred CAA was Alternative 3. This alternative meets all applicable state and federal regulations for closure of the site and will eliminate potential future exposure pathways to the contaminated soils at the Area 3 Septic Waste Systems 1 and 5.

  14. [Correction of indigestion in chronic biliary pancreatitis].

    PubMed

    Trukhan, D I; Tarasova, L V

    2013-01-01

Chronic pancreatitis (CP) is one of the most urgent and intensively investigated problems in gastroenterology. Despite the broad spectrum of etiologic, pathogenetic and provoking factors for CP, one of the leading causes of the disease is pathology of the biliary tract. A key element in the treatment of CP is correction of digestive insufficiency; the feature of biliary pancreatitis that distinguishes it from other forms of pancreatitis is the combination of exocrine pancreatic insufficiency with chronic biliary insufficiency. The variety of biochemical and immunological effects of ursodeoxycholic acid (UDCA) allows it to be regarded in biliary pancreatitis as a drug of etiological, pathogenetic and substitution therapy. UDCA (Ursosan) in combination with modern mini-microspheroidal polyenzyme preparations significantly improves the clinical efficacy of correction of digestive insufficiency in biliary pancreatitis.

  15. Radiative corrections to double-Dalitz decays revisited

    NASA Astrophysics Data System (ADS)

    Kampf, Karol; Novotný, Jiři; Sanchez-Puertas, Pablo

    2018-03-01

    In this study, we revisit and complete the full next-to-leading order corrections to pseudoscalar double-Dalitz decays within the soft-photon approximation. Comparing to the previous study, we find small differences, which are nevertheless relevant for extracting information about the pseudoscalar transition form factors. Concerning the latter, these processes could offer the opportunity to test them—for the first time—in their double-virtual regime.

  16. Air slab-correction for Γ-ray attenuation measurements

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder Singh

    2017-12-01

Gamma (γ)-ray shielding behaviour (GSB) of a material can be ascertained from its linear attenuation coefficient (μ, cm⁻¹). Narrow-beam transmission geometry is required for μ-measurement. In such measurements, a thin slab of the material has to be inserted between the point-isotropic γ-ray source and the detector assembly. Accurate measurement requires that the sample's optical thickness (OT) remain below 0.5 mean free path (mfp). Sometimes it is very difficult to produce a thin slab of the sample (absorber); on the other hand, for a thick absorber, i.e. OT > 0.5 mfp, the influence of the air displaced by it cannot be ignored during μ-measurements. Thus, for a thick sample, a correction factor has been suggested which compensates for the air present in the transmission geometry. This correction factor has been named the air slab-correction (ASC). Six samples of low-Z engineering materials (cement-black, clay, red-mud, lime-stone, cement-white and plaster-of-paris) have been selected for investigating the effect of the ASC on μ-measurements at three γ-ray energies (661.66, 1173.24, 1332.50 keV). The measurements have been made using point-isotropic γ-ray sources (Cs-137 and Co-60), a NaI(Tl) detector and a multi-channel analyser coupled with a personal computer. Theoretical values of μ have been computed using the GRIC2 toolkit (a standardized computer programme). Elemental compositions of the samples were measured with a Wavelength Dispersive X-ray Fluorescence (WDXRF) analyser. Inter-comparison of measured and computed μ-values suggested that application of the ASC helps in precise μ-measurement for thick samples of low-Z materials. Thus, this hitherto widely ignored ASC factor is recommended for use in similar γ-ray measurements.
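A minimal sketch of a narrow-beam transmission measurement with an additive air-displacement correction. The record does not give the exact ASC formula; the form below, which simply adds back the attenuation of the air column displaced by a thick sample, is an illustrative assumption, not the paper's expression.

```python
import math

# Narrow-beam transmission: I = I0 * exp(-mu * t), so mu = ln(I0/I) / t.
# Illustrative air slab-correction (an assumption, not the paper's formula):
# a thick sample of thickness t displaces an equal column of air, so the
# displaced air's attenuation is added back:
#   mu_corrected = ln(I0/I)/t + mu_air

def mu_measured(i0, i, thickness_cm):
    """Uncorrected linear attenuation coefficient (cm^-1) from transmission."""
    return math.log(i0 / i) / thickness_cm

def mu_air_slab_corrected(i0, i, thickness_cm, mu_air):
    """Apply the illustrative additive air slab-correction for a thick sample."""
    return mu_measured(i0, i, thickness_cm) + mu_air

# mu_air at 661.66 keV is tiny (~1e-4 cm^-1), so the ASC matters mainly when
# high precision is sought for thick samples of low-Z materials.
```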

  17. Correcting for diffusion in carbon-14 dating of ground water

    USGS Publications Warehouse

    Sanford, W.E.

    1997-01-01

It has generally been recognized that molecular diffusion can be a significant process affecting the transport of carbon-14 in the subsurface when occurring either from a permeable aquifer into a confining layer or from a fracture into a rock matrix. An analytical solution that is valid for steady-state radionuclide transport through fractured rock is shown to be applicable to many multilayered aquifer systems. By plotting the ratio of the rate of diffusion to the rate of decay of carbon-14 over the length scales representative of several common hydrogeologic settings, it is demonstrated that diffusion of carbon-14 should often be not only a significant process, but a dominant one relative to decay. An age-correction formula is developed and applied to the Bangkok Basin of Thailand, where a mean carbon-14-based age of 21,000 years was adjusted to 11,000 years to account for diffusion. This formula and its graphical representation should prove useful for many studies, for they can be used first to estimate the potential role of diffusion and then to make a simple first-order age correction if necessary.
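The decay-only age calculation underlying carbon-14 dating, and the kind of downward adjustment described for the Bangkok Basin, can be sketched as follows. The paper's actual diffusion correction formula is not reproduced in the record, so the adjustment term below is purely illustrative: it removes an apparent age attributed to diffusive loss.

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def decay_age(activity_ratio):
    """Apparent age from the measured-to-initial 14C activity ratio,
    assuming radioactive decay is the only loss process."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(activity_ratio)

def diffusion_adjusted_age(apparent_age, diffusion_apparent_age):
    """Illustrative adjustment: subtract the apparent aging attributed to
    diffusion of 14C into confining layers or rock matrix (hypothetical term,
    standing in for the paper's age-correction formula)."""
    return apparent_age - diffusion_apparent_age

# The Bangkok Basin example: a 21,000-year apparent age adjusted to 11,000
# years once roughly 10,000 years of apparent aging is attributed to diffusion.
print(diffusion_adjusted_age(21000.0, 10000.0))
```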

  18. Slow-roll corrections in multi-field inflation: a separate universes approach

    NASA Astrophysics Data System (ADS)

    Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan

    2018-05-01

    In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum Pζ, its spectral tilt ns, the tensor-to-scalar ratio r and the non-Gaussianity parameter fNL. To this end we use the separate universes approach and δ N formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δ N expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.

  19. Development of a primary standard for absorbed dose from unsealed radionuclide solutions

    NASA Astrophysics Data System (ADS)

    Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.

    2016-12-01

    Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve the required traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of the internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying a MC calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of an 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of an 90YCl solution with the extrapolation chamber was found to be 1.6% (k  =  1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. Internal radiation

  20. Turboprop+: enhanced Turboprop diffusion-weighted imaging with a new phase correction.

    PubMed

    Lee, Chu-Yu; Li, Zhiqiang; Pipe, James G; Debbins, Josef P

    2013-08-01

    Faster periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) diffusion-weighted imaging acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor (number of gradient echoes between each pair of radiofrequency refocusing pulses) and, thus, scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. This technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that with a small increase in scan time for collecting calibration blades, Turboprop+ had a superior immunity to the off-resonance-related artifacts when compared to standard Turboprop and recently proposed X-prop with the high turbo factor (turbo factor = 7). Thus, low specific absorption rate and short scan time can be achieved in Turboprop+ using a high turbo factor, whereas off-resonance related artifacts are minimized. © 2012 Wiley Periodicals, Inc.

  1. Correction of self-reported BMI based on objective measurements: a Belgian experience.

    PubMed

    Drieskens, S; Demarest, S; Bel, S; De Ridder, K; Tafforeau, J

    2018-01-01

Based on successive Health Interview Surveys (HIS), it has been demonstrated that, in Belgium too, obesity, measured by means of a self-reported body mass index (BMI, in kg/m²), is a growing public health problem that needs to be monitored as accurately as possible. Studies have shown that self-reported BMI can be biased. Consequently, if the aim is to rely on self-reported BMI, adjustment is recommended. Data on measured and self-reported BMI derived from the Belgian Food Consumption Survey (FCS) 2014 offer the opportunity to do so. The HIS and FCS are cross-sectional surveys based on representative population samples. This study focused on adults aged 18-64 years (sample sizes: HIS = 6545 and FCS = 1213). Measured and self-reported BMI collected in the FCS were used to assess possible misreporting. Using FCS data, correction factors (measured BMI/self-reported BMI) were calculated as a function of a combination of background variables (region, gender, educational level and age group). Individual self-reported BMI values from the HIS 2013 were then multiplied by the corresponding correction factors to produce a corrected BMI classification. When compared with measured BMI, self-reported BMI in the FCS was underestimated (by a mean of 0.97 kg/m²), and 28% of obese people underestimated their BMI. After applying the correction factors, the prevalence of obesity based on HIS data significantly increased (from 13% based on the original HIS data to 17% based on the corrected HIS data) and approximated the measured prevalence derived from the FCS data. Since self-reported BMI is underestimated, it is recommended to adjust it to obtain accurate estimates, which are important for decision making.
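The stratified correction-factor scheme described above (measured BMI divided by self-reported BMI per combination of background variables, applied multiplicatively to HIS self-reports) can be sketched as follows. The stratum labels and all numbers are illustrative, not survey data.

```python
# Correction factors per stratum, as in the record: factor = measured / self-reported,
# estimated from FCS data, then applied to self-reported HIS values.

def correction_factors(fcs_records):
    """fcs_records: iterable of (stratum, measured_bmi, self_reported_bmi).
    Returns {stratum: sum(measured) / sum(self_reported)}."""
    sums = {}
    for stratum, measured, reported in fcs_records:
        m, r = sums.get(stratum, (0.0, 0.0))
        sums[stratum] = (m + measured, r + reported)
    return {s: m / r for s, (m, r) in sums.items()}

def corrected_bmi(his_self_reported, stratum, factors):
    """Multiply an individual self-reported HIS BMI by its stratum's factor."""
    return his_self_reported * factors[stratum]

# Illustrative FCS-style records (stratum, measured, self-reported):
fcs = [("F/18-34", 26.1, 25.2), ("F/18-34", 24.8, 24.1), ("M/35-64", 28.3, 27.5)]
factors = correction_factors(fcs)
# A self-reported HIS BMI of 29.5 in the female 18-34 stratum is scaled up:
print(round(corrected_bmi(29.5, "F/18-34", factors), 2))
```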

  2. Correction to Account for the Isomer of 87Y in the 87Y Radiochemical Diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes-Sterbenz, Anna Catherine; Jungman, Gerard

Here we summarize the need to correct inventories of 87Y reported by the Los Alamos weapons radiochemistry team. The need for a correction arises from the fact that a 13.37-hour isomer of 87Y, which is strongly populated through (n, 2n) reactions on 88Y and isomers of 88Y, has not been included in the experimental analyses of NTS data. Inventories of 87Y reported by LANL's weapons radiochemistry team should be multiplied by a correction factor that is numerically close to 0.9. Alternatively, the user could increase simulated values of 87Y by 1.1 for comparison with the original method for reporting NTS values. If the inventories in question were directly reported by LLNL's radiochemistry team, care must be taken to determine whether or not the correction factor has already been applied.
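The two equivalent bookkeeping options quoted in the record (multiply reported inventories by a factor close to 0.9, or scale simulated values by 1.1 for comparison) amount to:

```python
# Approximate factors quoted in the record; the precise value depends on the
# (n, 2n) population of the 13.37-hour 87Y isomer in each analysis.
ISOMER_CORRECTION = 0.9   # applied to reported LANL 87Y inventories
SIMULATION_SCALE = 1.1    # or applied to simulated 87Y values instead

def corrected_inventory(reported_y87):
    """Correct a reported 87Y inventory for the unaccounted isomer."""
    return reported_y87 * ISOMER_CORRECTION

def scaled_simulation(simulated_y87):
    """Equivalent alternative: scale the simulation up for comparison
    with inventories reported by the original method."""
    return simulated_y87 * SIMULATION_SCALE
```

Note the two routes are only approximately inverse (0.9 × 1.1 = 0.99), which is consistent with the record quoting both as rounded, numerically close factors.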

  3. Centroid-moment tensor solutions for October-December 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2003-04-01

Centroid-moment tensor solutions are presented for 263 earthquakes that occurred during the fourth quarter of 2000. The solutions are obtained using corrections for a spherical earth structure represented by the whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [A.M. Dziewonski, R.L. Woodward, Acoustic imaging at the planetary scale, in: H. Ermert, H.-P. Harjes (Eds.), Acoustical Imaging, Plenum Press, New York, vol. 19, 1992, pp. 785-797]. The anelastic attenuation model of Durek and Ekström [Bull. Seism. Soc. Am. 86 (1996) 144] is used to predict the decay of the waveforms.

  4. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from a high-quality satellite image such as Landsat 8 OLI. However, as more and more image data become available from various sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that a physics-based topographic correction model requires. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; the model was tested and verified with image data from the Chinese satellites HJ and GF. The results show that, after correction, the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.

  5. Statistical simplex approach to primary and secondary color correction in thick lens assemblies

    NASA Astrophysics Data System (ADS)

    Ament, Shelby D. V.; Pfisterer, Richard

    2017-11-01

A glass selection optimization algorithm is developed for primary and secondary color correction in thick lens systems. The approach is based on the downhill simplex method, and requires manipulation of the surface color equations to obtain a single glass-dependent parameter for each lens element. Linear correlation is used to relate this parameter to all other glass-dependent variables. The algorithm provides a statistical distribution of Abbe numbers for each element in the system. Examples with several lenses, from 2-element to 6-element systems, are analyzed to verify this approach. The proposed optimization algorithm is capable of finding glass solutions with high color correction without requiring an exhaustive search of the glass catalog.

  6. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

An operational algorithm has been developed for computing the ellipsoidal terrain correction based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection surface of a reference ellipsoid. As the first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area map projection of a cylinder tangent to a point on the surface of the reference ellipsoid was closely studied and the map projection formulas were derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid were considered, and the gravitational potential and the gravitational intensity vector of these mass elements were computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements were transferred into the cylindrical equal-area map projection, and based on the transformed area elements, Cartesian mass elements with the same height as the ellipsoidal mass elements were constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements was computed and compared with the corresponding results based on application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10⁻⁸ m²/s² for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element of the same height, this difference is less than 1.5 × 10⁻⁴ m²/s². The results of the numerical computations indicate that a new method for computing the terrain correction based on the closed form solution of the Newton integral in

  7. De-confusing the THOG problem: the Pythagorean solution.

    PubMed

    Griggs, R A; Koenig, C S; Alea, N L

    2001-08-01

    Sources of facilitation for Needham and Amado's (1995) Pythagoras version of Wason's THOG problem were systematically examined in three experiments with 174 participants. Although both the narrative structure and figural notation used in the Pythagoras problem independently led to significant facilitation (40-50% correct), pairing hypothesis generation with either factor or pairing the two factors together was found to be necessary to obtain substantial facilitation (> 50% correct). Needham and Amado's original finding for the complete Pythagoras problem was also replicated. These results are discussed in terms of the "confusion theory" explanation for performance on the standard THOG problem. The possible role of labelling as a de-confusing factor in other versions of the THOG problem and the implications of the present findings for human reasoning are also considered.

  8. Corrected Implicit Monte Carlo

    DOE PAGES

    Cleveland, Mathew Allen; Wollaber, Allan Benton

    2018-01-02

Here in this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency-dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. Finally, we present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.

  9. Corrected implicit Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cleveland, M. A.; Wollaber, A. B.

    2018-04-01

In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency-dependence, but do not prevent maximum principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. We present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.

  10. Myopia Stabilization and Associated Factors Among Participants in the Correction of Myopia Evaluation Trial (COMET)

    PubMed Central

    2013-01-01

    Purpose. To use the Gompertz function to estimate the age and the amount of myopia at stabilization and to evaluate associated factors in the Correction of Myopia Evaluation Trial (COMET) cohort, a large ethnically diverse group of myopic children. Methods. The COMET enrolled 469 ethnically diverse children aged 6 to younger than 12 years with spherical equivalent refraction between −1.25 and −4.50 diopters (D). Noncycloplegic refraction was measured semiannually for 4 years and annually thereafter. Right eye data were fit to individual Gompertz functions in participants with at least 6 years of follow-up and at least seven refraction measurements over 11 years. Function parameters were estimated using a nonlinear least squares procedure. Associated factors were evaluated using linear regression. Results. In total, 426 participants (91%) had valid Gompertz curve fits. The mean (SD) age at myopia stabilization was 15.61 (4.17) years, and the mean (SD) amount of myopia at stabilization was −4.87 (2.01) D. Ethnicity (P < 0.0001) but not sex or the number of myopic parents was associated with the age at stabilization. Ethnicity (P = 0.02) and the number of myopic parents (P = 0.01) but not sex were associated with myopia magnitude at stabilization. At stabilization, African Americans were youngest (mean age, 13.82 years) and had the least myopia (mean, −4.36 D). Participants with two versus no myopic parents had approximately 1.00 D more myopia at stabilization. The age and the amount of myopia at stabilization were correlated (r = −0.60, P < 0.0001). Conclusions. The Gompertz function provides estimates of the age and the amount of myopia at stabilization in an ethnically diverse cohort. These findings should provide guidance on the time course of myopia and on decisions regarding the type and timing of interventions. PMID:24159085
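
The Gompertz fitting step described above can be sketched with nonlinear least squares; the parameterization, synthetic data, and starting values below are illustrative assumptions, not the COMET protocol:

```python
# Minimal sketch of fitting a Gompertz curve to refraction-versus-age data
# with nonlinear least squares. The four-parameter form and the synthetic
# data are illustrative; COMET's exact parameterization may differ.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(age, asymptote, scale, rate, onset):
    """Refraction that decays toward a stable asymptote (myopia at stabilization)."""
    return asymptote + scale * np.exp(-np.exp(rate * (age - onset)))

ages = np.linspace(7, 18, 12)                  # semiannual/annual visits
true_params = (-4.9, 3.5, 0.5, 11.0)           # D, D, 1/yr, yr (made up)
rng = np.random.default_rng(0)
refraction = gompertz(ages, *true_params) + rng.normal(0, 0.05, ages.size)

popt, _ = curve_fit(gompertz, ages, refraction, p0=(-5.0, 3.0, 0.4, 10.0))
# popt[0] estimates the amount of myopia at stabilization (the asymptote)
```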

  11. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  12. Counterion adsorption theory of dilute polyelectrolyte solutions: Apparent molecular weight, second virial coefficient, and intermolecular structure factor

    PubMed Central

    Muthukumar, M.

    2012-01-01

    Polyelectrolyte chains are well known to be strongly correlated even in extremely dilute solutions in the absence of additional strong electrolytes. Such correlations result in severe difficulties in interpreting light scattering measurements in the determination of the molecular weight, radius of gyration, and the second virial coefficient of charged macromolecules at lower ionic strengths from added strong electrolytes. By accounting for charge-regularization of the polyelectrolyte by the counterions, we present a theory of the apparent molecular weight, second virial coefficient, and the intermolecular structure factor in dilute polyelectrolyte solutions in terms of concentrations of the polymer and the added strong electrolyte. The counterion adsorption of the polyelectrolyte chains to differing levels at different concentrations of the strong electrolyte can lead to even an order of magnitude discrepancy in the molecular weight inferred from light scattering measurements. Based on counterion-mediated charge regularization, the second virial coefficient of the polyelectrolyte and the interchain structure factor are derived self-consistently. The effect of the interchain correlations, dominating at lower salt concentrations, on the inference of the radius of gyration and on molecular weight is derived. Conditions for the onset of nonmonotonic scattering wave vector dependence of scattered intensity upon lowering the electrolyte concentration and interpretation of the apparent radius of gyration are derived in terms of the counterion adsorption mechanism. PMID:22830728
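
The notion of an apparent molecular weight from light scattering can be illustrated with the textbook zero-angle Zimm relation Kc/R = 1/M + 2*A2*c. The numbers below are synthetic, and this linear-in-concentration bookkeeping is standard scattering practice, not the charge-regularization theory of this paper:

```python
# Zero-angle Zimm analysis: a linear fit of Kc/R versus concentration gives
# the (apparent) molecular weight from the intercept and the second virial
# coefficient from the slope. All values are illustrative.
import numpy as np

M_true, A2_true = 1.0e5, 2.0e-4                    # g/mol, mol*mL/g^2
c = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e-3     # g/mL
Kc_over_R = 1.0 / M_true + 2.0 * A2_true * c       # ideal, noise-free data

slope, intercept = np.polyfit(c, Kc_over_R, 1)     # highest power first
M_app = 1.0 / intercept                            # apparent molecular weight
A2 = slope / 2.0                                   # second virial coefficient
```

At low added salt, interchain correlations distort Kc/R so that this M_app can deviate strongly from the true molecular weight, which is the discrepancy the paper's theory addresses.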

  13. Bohm-criterion approximation versus optimal matched solution for a cylindrical probe in radial-motion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Din, Alif

    2016-08-15

    The theory of positive-ion collection by a probe immersed in a low-pressure plasma was reviewed and extended by Allen et al. [Proc. Phys. Soc. 70, 297 (1957)]. Numerical computations for cylindrical and spherical probes in the sheath region were presented by F. F. Chen [J. Nucl. Energy C 7, 41 (1965)]. Here, the sheath and presheath solutions for a cylindrical probe are matched through a numerical matching procedure to yield a “matched” potential profile, or “M solution.” The solution based on the Bohm-criterion approach, the “B solution,” is discussed for this particular problem. The cylindrical probe characteristics obtained from the correct potential profile (the M solution) differ from those of the approximate Bohm-criterion approach. This raises questions about the correctness of cylindrical probe theories that rely only on the Bohm criterion. The comparison between theoretical and experimental ion-current characteristics also shows that in an argon plasma the ion motion toward the probe is almost radial.

  14. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  15. Correcting coils in end magnets of accelerators

    NASA Astrophysics Data System (ADS)

    Kassab, L. R.; Gouffon, P.

    1998-05-01

    We present an empirical investigation of the behavior of the correcting coils used to homogenize the field distribution of the end magnets of a race-track microtron accelerator. These end magnets belong to the second stage of the 30.0 MeV cw electron accelerator under construction at IFUSP, the race-track microtron booster, in which the beam energy is raised from 1.97 to 5.1 MeV. The correcting coils are attached to the pole faces, and their layout is based on the measured inhomogeneities of the magnetic field. We present the performance of these coils when the end magnets are operated with currents that differ by ±10% from the one used in the mappings that defined the coils' copper leads. For one of the magnets, conveniently adjusting the current of the correcting coils makes it possible to homogenize field distributions of different intensities, since their shapes are practically identical to those that originated the coils. For the other, the shapes change and the coils are less efficient; this is related to intrinsic factors that determine the inhomogeneities. Nevertheless, we obtained uniformity of 0.001% in both cases.

  16. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modelling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid is derived and described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected, and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ, ϕ, h}. The four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements with the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection, is less than 1.6×10⁻⁸ m²/s² for a mass element with a cross-section area of 10×10 m and a height of 10,000 m. For a mass element with a cross-section area of 1×1 km and a height of 10,000 m the difference is less than 1.5×10⁻⁴ m²/s². Since 1.5×10⁻⁴ m²/s² is equivalent to 1.5×10⁻⁵ m in the vertical
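
A numerical solution of the Newton integral of the kind compared above can be sketched by midpoint quadrature over a small homogeneous Cartesian mass element; the dimensions, density, and grid size below are illustrative assumptions, far smaller in scale than the paper's ellipsoidal setup:

```python
# Midpoint-rule evaluation of the Newton integral V(p) = G * rho * ∫ dV / r
# for a homogeneous rectangular mass element, checked against the point-mass
# value at a distant observation point. Dimensions and density are illustrative.
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
rho = 2670.0        # kg/m^3, a typical crustal density

def prism_potential(p, x0, x1, y0, y1, z0, z1, n=40):
    """Gravitational potential of the prism at point p via an n^3 midpoint grid."""
    xs = np.linspace(x0, x1, n, endpoint=False) + (x1 - x0) / (2 * n)
    ys = np.linspace(y0, y1, n, endpoint=False) + (y1 - y0) / (2 * n)
    zs = np.linspace(z0, z1, n, endpoint=False) + (z1 - z0) / (2 * n)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    r = np.sqrt((X - p[0])**2 + (Y - p[1])**2 + (Z - p[2])**2)
    dV = (x1 - x0) * (y1 - y0) * (z1 - z0) / n**3
    return G * rho * np.sum(dV / r)

# 10 m x 10 m x 10 m element; observation point 105 m from its centre
V = prism_potential((5.0, 5.0, 110.0), 0, 10, 0, 10, 0, 10)
```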

  17. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.
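
The bookkeeping in which the optical factor enters can be sketched from the standard initial-slope relation of the turbidity method: early in aggregation, dτ/dt depends on the difference between the doublet extinction cross section and twice the singlet one. All numbers below are illustrative, and computing the cross sections properly is precisely the T-matrix calculation's job:

```python
# Initial turbidity slope during doublet formation (illustrative values):
#   dtau/dt|_0 = 0.5 * k11 * N0**2 * (C2 - 2*C1)
# so measuring dtau/dt|_0 and knowing the optical factor C2/(2*C1) yields
# the absolute coagulation rate constant k11.
k11 = 6.1e-18          # m^3/s, rapid-coagulation rate constant (illustrative)
N0 = 1.0e14            # initial particle number density, 1/m^3
C1 = 2.0e-14           # singlet extinction cross section, m^2 (illustrative)
optical_factor = 1.8   # C2 / (2*C1), e.g. from a T-matrix calculation
C2 = optical_factor * 2.0 * C1

dtau_dt0 = 0.5 * k11 * N0**2 * (C2 - 2.0 * C1)
# inverting the relation recovers k11 from a measured initial slope:
k11_from_slope = 2.0 * dtau_dt0 / (N0**2 * (C2 - 2.0 * C1))
```

An error in the optical factor propagates directly into the inferred rate constant, which is why the choice of scattering theory matters.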

  18. Solution to the spectral filter problem of residual terrain modelling (RTM)

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon

    2018-06-01

    In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This inequivalence, denoted the spectral filter problem of the RTM technique, gives rise to two imperfections (errors): unwanted low-frequency (LF) gravity signals, and missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem, based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral-domain gravity forward modelling, which delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ~44 mGal (0.5 mGal RMS) for the HF correction and ~33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, a ~26% improvement. Over a second test area (Canada), the corrections reduced the RMS of the residuals from 5.65 to 5.30 mGal (~6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. with
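
The core of the spectral filter problem, that filtering topography is not the same as filtering gravity, can be caricatured in one dimension with a toy nonlinear forward model. Everything below is illustrative; the paper works with spherical-harmonic filters and real gravity forward modelling:

```python
# 1D toy of the RTM spectral filter problem: because the "gravity of
# topography" operator is nonlinear (caricatured here by a quadratic term),
# high-pass filtering the topography and then forward modelling differs
# from forward modelling and then filtering the gravity.
import numpy as np

n = 256
k = np.fft.rfftfreq(n)                                  # cycles per sample
rng = np.random.default_rng(1)
topo = 100.0 * np.fft.irfft(rng.normal(size=k.size) / (1.0 + 50.0 * k), n)

def highpass(sig, kc=0.1):
    """Sharp spectral high-pass filter, zeroing frequencies below kc."""
    s = np.fft.rfft(sig)
    s[k < kc] = 0.0
    return np.fft.irfft(s, n)

def forward(h):
    return h + 0.5 * h**2       # toy nonlinear forward model (illustrative)

mismatch = np.abs(forward(highpass(topo)) - highpass(forward(topo))).max()
# mismatch > 0: filter and forward model do not commute, which is the
# inequivalence the HF/LF corrections are designed to repair
```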

  19. An Analysis of Offset, Gain, and Phase Corrections in Analog to Digital Converters

    NASA Astrophysics Data System (ADS)

    Cody, Devin; Ford, John

    2015-01-01

    Many high-speed analog-to-digital converters (ADCs) interleave several slower ADCs to greatly boost their sample rate. This interleaved architecture introduces problems if the constituent low-speed ADCs do not have identical outputs: the errors manifest as phantom frequencies that appear in the digitized signal although they never existed in the analog domain. Applying offset, gain, and phase (OGP) corrections to the ADC reduces this problem. Here we report on an implementation of such a correction in a high-speed ADC chip used for radio astronomy. While the corrections could not be implemented in the ADCs themselves, a partial solution was devised and implemented digitally inside a signal-processing field-programmable gate array (FPGA). Positive results are shown for contrived test cases, and null results are presented for an implementation on an ADC083000 card with minimal intrinsic error. Lastly, we discuss the implications of this method as well as its mathematical basis.
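
The mechanism described, mismatched lanes creating phantom frequencies that per-lane offset/gain corrections remove, can be sketched for a two-way interleaved converter. The mismatch values, and the assumption that they are known exactly, are illustrative; phase correction is omitted for brevity:

```python
# Toy two-way time-interleaved ADC: offset and gain mismatch on one lane
# produces an image spur at fs/2 - f_in; applying the per-lane corrections
# collapses it. Mismatch values are illustrative and assumed known.
import numpy as np

fs, f_in, n = 1000.0, 101.0, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_in * t)            # ideal sampled tone

y = x.copy()
y[1::2] = 1.05 * x[1::2] + 0.02             # lane 1 (odd samples): gain + offset error

def spur_level(sig, f_spur):
    """Peak spectral magnitude near f_spur (Hann-windowed FFT)."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
    b = int(round(f_spur * sig.size / fs))
    return spec[max(b - 2, 0):b + 3].max()

y_corr = y.copy()
y_corr[1::2] = (y[1::2] - 0.02) / 1.05      # undo offset, then gain, per lane

before = spur_level(y, fs / 2 - f_in)       # interleaving image at 399 Hz
after = spur_level(y_corr, fs / 2 - f_in)   # residual after OGP-style correction
```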

  20. Dissociation of glucocerebrosidase dimer in solution by its co-factor, saposin C

    DOE PAGES

    Gruschus, James M.; Jiang, Zhiping; Yap, Thai Leong; ...

    2015-01-16

    Mutations in the gene for the lysosomal enzyme glucocerebrosidase (GCase) cause Gaucher disease and are the most common risk factor for Parkinson disease (PD). Analytical ultracentrifugation of 8 μM GCase shows an equilibrium between monomer and dimer forms. However, in the presence of its co-factor saposin C (Sap C), only monomer GCase is seen. Isothermal calorimetry confirms that Sap C associates with GCase in solution in a 1:1 complex (Kd = 2.1 ± 1.1 μM). Saturation cross-transfer NMR determined that the region of Sap C contacting GCase includes residues 63–66 and 74–76, which is distinct from the region known to enhance GCase activity. Because α-synuclein (α-syn), a protein closely associated with PD etiology, competes with Sap C for GCase binding, its interaction with GCase was also measured by ultracentrifugation and saturation cross-transfer. Unlike Sap C, binding of α-syn to GCase does not affect multimerization. However, adding α-syn reduces saturation cross-transfer from Sap C to GCase, confirming displacement. To explore where Sap C might disrupt multimeric GCase, GCase x-ray structures were analyzed using the program PISA, which predicted stable dimer and tetramer forms. For the most frequently predicted multimer interface, the GCase active sites are partially buried, suggesting that Sap C might disrupt the multimer by binding near the active site.
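
For context on the reported affinity, the fraction of GCase bound follows from the standard quadratic solution for a 1:1 complex; pairing the reported Kd with 8 μM of each partner is an illustrative assumption, not a condition taken from the paper:

```python
# Standard 1:1 binding arithmetic: exact equilibrium complex concentration
#   [PL] = ((P + L + Kd) - sqrt((P + L + Kd)^2 - 4*P*L)) / 2
# evaluated with the reported Kd ≈ 2.1 uM and an assumed 8 uM of each partner.
import math

def complex_conc(P_tot, L_tot, Kd):
    """Equilibrium [PL] for 1:1 binding; all concentrations in the same units."""
    b = P_tot + L_tot + Kd
    return (b - math.sqrt(b * b - 4.0 * P_tot * L_tot)) / 2.0

PL = complex_conc(8.0, 8.0, 2.1)     # uM
fraction_bound = PL / 8.0            # roughly 60% of GCase in the 1:1 complex
```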