Sample records for factor correction solutions

  1. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, e.g. single-phase systems (polymer melts and solutions) and multiphase systems (a polymer blend, a nanocomposite, and a suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincided with each other, but the C/P and P/P measurements disagreed in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.
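
    The FT-rheology intensities I3/1 and I5/1 above are the magnitudes of the third and fifth stress harmonics relative to the fundamental. A minimal sketch of how such intensities can be extracted from a LAOS stress record (hypothetical helper; assumes an integer number of excitation cycles and is not the authors' analysis code):

    ```python
    import numpy as np

    def laos_harmonics(stress, t, omega):
        """Relative harmonic intensities I3/1 and I5/1 of a LAOS stress signal."""
        I = {}
        for n in (1, 3, 5):
            # Fourier coefficient of the n-th harmonic by direct projection
            c = np.trapz(stress * np.exp(-1j * n * omega * t), t)
            I[n] = abs(c)
        return I[3] / I[1], I[5] / I[1]

    # Example: weakly nonlinear stress with 1% third-harmonic content
    t = np.linspace(0.0, 2 * np.pi * 10, 20000)   # 10 cycles at omega = 1
    stress = np.sin(t) + 0.01 * np.sin(3 * t)
    print(laos_harmonics(stress, t, omega=1.0))    # ~ (0.01, ~0.0)
    ```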

  2. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. An appropriate discretization and a fast sweeping strategy are designed for the equation governing the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
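
    For context, a minimal fast sweeping solver for the plain 2D eikonal equation |grad T| = s is sketched below; the paper applies the same causality-enforcing Gauss-Seidel sweeps to the correction factor in the factored form T = T0*tau rather than to T directly. A sketch, not the authors' implementation:

    ```python
    import numpy as np

    def fast_sweep_eikonal(s, h, src, passes=4):
        """Gauss-Seidel fast sweeping for |grad T| = s on a 2D grid.
        s: slowness array, h: grid spacing, src: (i, j) point-source index."""
        ny, nx = s.shape
        T = np.full((ny, nx), np.inf)
        T[src] = 0.0
        orders = [(range(ny), range(nx)),
                  (range(ny), range(nx - 1, -1, -1)),
                  (range(ny - 1, -1, -1), range(nx)),
                  (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
        for _ in range(passes):
            for rows, cols in orders:            # four sweep orderings per pass
                for i in rows:
                    for j in cols:
                        if (i, j) == src:
                            continue
                        a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                        b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                        if np.isinf(a) and np.isinf(b):
                            continue
                        f = s[i, j] * h
                        if abs(a - b) >= f:      # wave arrives from one side only
                            t_new = min(a, b) + f
                        else:                    # standard two-sided upwind update
                            t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t_new)   # enforce causality
        return T
    ```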

  3. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p, d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.

  4. Three-Dimensional Thermal Boundary Layer Corrections for Circular Heat Flux Gauges Mounted in a Flat Plate with a Surface Temperature Discontinuity

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Haddad, G. F.; Chen, R.-H.

    2006-01-01

    Three-dimensional Navier-Stokes computational fluid dynamics (CFD) analysis has been performed in an effort to determine thermal boundary layer correction factors for circular convective heat flux gauges (such as Schmidt-Boelter and plug type) mounted flush in a flat plate subjected to a stepwise surface temperature discontinuity. Turbulent flow solutions with temperature-dependent properties are obtained for a freestream Reynolds number of 10^6 and freestream Mach numbers of 2 and 4. The effects of gauge diameter and plate surface temperature have been investigated. The 3-D CFD results for the heat flux correction factors are compared to quasi-2D results deduced from constant-property integral solutions and also to 2-D CFD analysis with both constant and variable properties. The role of three-dimensionality and of property variations on the heat flux correction factors has been demonstrated.

  5. Efficiency for unretained solutes in packed column supercritical fluid chromatography. I. Theory for isothermal conditions and correction factors for carbon dioxide.

    PubMed

    Poe, Donald P

    2005-06-17

    A general theory for efficiency of nonuniform columns with compressible mobile phase fluids is applied to the elution of an unretained solute in packed-column supercritical fluid chromatography (pSFC). The theoretical apparent plate height under isothermal conditions is given by the Knox equation multiplied by a compressibility correction factor f1, which is equal to the ratio of the temporal-to-spatial average densities of the mobile phase. If isothermal conditions are maintained, large pressure drops in pSFC should not result in excessive efficiency losses for elution of unretained solutes.
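
    In code form, the quoted result is a single multiplication; the Knox coefficients below are illustrative defaults, not values from the paper:

    ```python
    def apparent_plate_height(nu, rho_time_avg, rho_space_avg, A=1.0, B=2.0, C=0.05):
        """Apparent reduced plate height for an unretained solute in pSFC:
        the Knox equation times the compressibility correction factor
        f1 = (temporal average density) / (spatial average density)."""
        h_knox = A * nu ** (1.0 / 3.0) + B / nu + C * nu
        f1 = rho_time_avg / rho_space_avg
        return h_knox * f1
    ```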

  6. Resolution of the COBE Earth sensor anomaly

    NASA Technical Reports Server (NTRS)

    Sedler, J.

    1993-01-01

    Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than the plus or minus 0.10 deg scanner specification (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. In addition, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decreases by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decreases by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.

  7. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  8. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
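
    For orientation, the single-non-ideality baseline that these tabulated factors generalize can be written down directly. A sketch for an in-line probe with equal spacing s on a laterally infinite film of thickness t (standard finite-thickness correction; helper name hypothetical):

    ```python
    import math

    def film_resistivity_4pp(V, I, t, s):
        """Resistivity from an in-line four-point probe measurement,
        corrected for finite film thickness t (probe spacing s)."""
        x = t / s
        # correction relative to the semi-infinite bulk formula rho = 2*pi*s*V/I
        f_thickness = x / (2.0 * math.log(math.sinh(x) / math.sinh(x / 2.0)))
        return 2.0 * math.pi * s * (V / I) * f_thickness

    # Thin-film limit (t << s) recovers rho = (pi * t / ln 2) * (V / I)
    ```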

  9. Radiation boundary condition and anisotropy correction for finite difference solutions of the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1994-01-01

    In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. These excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition, which was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should instead be constructed from the asymptotic solution of the finite difference equation. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow; in all these cases no reflected waves could be detected. The present method of developing a radiation boundary condition is also applicable to higher order finite difference schemes. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. The effectiveness of the correction factor in improving the computed solution is demonstrated in this paper.
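
    To see the anisotropy in question, one can compare the modified wavenumber of the second-order 5-point Laplacian with the exact wavenumber as a function of propagation angle; the angle dependence below is what the paper's correction factor compensates. An illustrative diagnostic, not the paper's correction:

    ```python
    import numpy as np

    def modified_wavenumber_ratio(points_per_wavelength, angle):
        """Ratio of discrete to exact wavenumber for a plane wave on the
        standard 5-point Laplacian, for a given propagation angle."""
        h = 1.0 / points_per_wavelength      # grid spacing in wavelengths
        k = 2.0 * np.pi                      # exact wavenumber
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        k_h = np.sqrt((2 / h) ** 2 * (np.sin(kx * h / 2) ** 2 +
                                      np.sin(ky * h / 2) ** 2))
        return k_h / k

    # At 5 points per wavelength the phase error differs by ~3% between
    # axis-aligned and diagonal propagation:
    # modified_wavenumber_ratio(5, 0.0) ~ 0.94; modified_wavenumber_ratio(5, np.pi/4) ~ 0.97
    ```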

  10. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.
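
    A toy version of the iterative baseline that the network is trained to replace is sketched below; the multiplicative dose update and the normalized point-spread function are assumptions for illustration, not the authors' algorithm:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def iterative_dose_correction(pattern, psf, n_iter=30):
        """Fixed-point proximity-effect correction: adjust the dose map until
        the blurred exposure (dose convolved with the forward+backscatter
        point-spread function psf) is ~1 inside the target pattern."""
        dose = pattern.astype(float).copy()
        inside = pattern > 0
        for _ in range(n_iter):
            exposure = fftconvolve(dose, psf, mode="same")
            dose[inside] /= np.clip(exposure[inside], 1e-6, None)
        return dose
    ```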

  11. A Higher-Order Bending Theory for Laminated Composite and Sandwich Beams

    NASA Technical Reports Server (NTRS)

    Cook, Geoffrey M.

    1997-01-01

    A higher-order bending theory is derived for laminated composite and sandwich beams. This is accomplished by assuming a special form for the axial and transverse displacement expansions. An independent expansion is also assumed for the transverse normal stress. Appropriate shear correction factors based on energy considerations are used to adjust the shear stiffness. A set of transverse normal correction factors is introduced, leading to significant improvements in the transverse normal strain and stress for laminated composite and sandwich beams. A closed-form solution to the cylindrical bending problem is compared with exact elasticity solutions for a wide range of beam aspect ratios and commonly used material systems. Accurate shear stresses for a wide range of laminates, including the challenging unsymmetric composite and sandwich laminates, are obtained using an original corrected integration scheme. For application of the theory to a wider range of problems, guidelines for finite element approximations are presented.

  12. Asymptotic, multigroup flux reconstruction and consistent discontinuity factors

    DOE PAGES

    Trahan, Travis J.; Larsen, Edward W.

    2015-05-12

    Recent theoretical work has led to an asymptotically derived expression for reconstructing the neutron flux from lattice functions and multigroup diffusion solutions. The leading-order asymptotic term is the standard expression for flux reconstruction, i.e., it is the product of a shape function, obtained through a lattice calculation, and the multigroup diffusion solution. The first-order asymptotic correction term is significant only where the gradient of the diffusion solution is not small. Inclusion of this first-order correction term can significantly improve the accuracy of the reconstructed flux. One may define discontinuity factors (DFs) to make certain angular moments of the reconstructed flux continuous across interfaces between assemblies in 1-D. Indeed, the standard assembly discontinuity factors make the zeroth moment (scalar flux) of the reconstructed flux continuous. The inclusion of the correction term in the flux reconstruction provides an additional degree of freedom that can be used to make two angular moments of the reconstructed flux continuous across interfaces by using current DFs in addition to flux DFs. Thus, numerical results demonstrate that using flux and current DFs together can be more accurate than using only flux DFs, and that making the second angular moment continuous can be more accurate than making the zeroth moment continuous.

  13. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).

  14. Stiffness of frictional contact of dissimilar elastic solids

    DOE PAGES

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; ...

    2017-12-22

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations: adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  15. Stiffness of frictional contact of dissimilar elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations: adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  16. Stiffness of frictional contact of dissimilar elastic solids

    NASA Astrophysics Data System (ADS)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; Xu, Haitao; Pharr, George M.

    2018-03-01

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This paper gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations - adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. The correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  17. Multicultural Content and Class Participation: Do Students Self-Censor?

    ERIC Educational Resources Information Center

    Hyde, Cheryl A.; Ruth, Betty J.

    2002-01-01

    Through survey and focus group data, examined student discomfort in social work courses, reasons for self-censorship, and solutions to self-censorship. Found that general classroom factors (being too shy or being unprepared), not political correctness, were more likely to be reasons for self-censorship. Solutions focused on the faculty's role in…

  18. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried

    We present a semi-analytical correction to the seminal solution of Heppenheimer for the secular motion of a planet's orbit under the gravitational influence of an external perturber. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.

  19. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.

    PubMed

    Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi

    2018-01-01

    Demonstrability, the extent to which group members can recognize a correct solution to a problem, has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors, the difficulty of solving a problem and the difficulty of verifying the correctness of a solution, on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.

  20. Intercomparison of methods for coincidence summing corrections in gamma-ray spectrometry--part II (volume sources).

    PubMed

    Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J

    2012-09-01

    The second part of an intercomparison of the coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing corrective factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed.

  21. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.

  22. Corridor of existence of thermodynamically consistent solution of the Ornstein-Zernike equation.

    PubMed

    Vorob'ev, V S; Martynov, G A

    2007-07-14

    We obtain the exact equation for a correction to the Ornstein-Zernike (OZ) equation based on the assumption of the uniqueness of thermodynamic functions. We show that this equation reduces to a differential equation with one arbitrary parameter for the hard sphere model. The compressibility factor within narrow limits of this parameter's variation can either coincide with one of the formulas obtained on the basis of analytical solutions of the OZ equation or assume all intermediate values lying in a corridor between these solutions. In particular, we find the value of this parameter for which the thermodynamically consistent compressibility factor corresponds to the Carnahan-Starling formula.
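
    The reference expression named above is standard and easy to state; a sketch:

    ```python
    def carnahan_starling_Z(eta):
        """Carnahan-Starling compressibility factor for hard spheres,
        eta = packing fraction. The thermodynamically consistent OZ solution
        discussed above reproduces this value for a particular choice of the
        free parameter."""
        return (1.0 + eta + eta ** 2 - eta ** 3) / (1.0 - eta) ** 3
    ```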

  23. Intermediate boundary conditions for LOD, ADI and approximate factorization methods

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.

    1985-01-01

    A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.

  24. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  25. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
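
    A rough sketch of a correction factor of this form follows; the abstract specifies the cosine construction but not the normalization or the aggregation over the subregion, so both are assumptions here:

    ```python
    import numpy as np

    def cosine_correction_factor(gx, gy, ux, uy):
        """Cosine of the angle between an assumed true motion (ux, uy) and the
        image gradients (gx, gy), aggregated in an integral (here RMS) sense
        over a subregion of the image."""
        cos = (gx * ux + gy * uy) / (np.hypot(gx, gy) * np.hypot(ux, uy) + 1e-12)
        return np.sqrt(np.mean(cos ** 2))
    ```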

  26. Angular spectral framework to test full corrections of paraxial solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J

    2015-07-01

    Different correction methods for paraxial solutions have been used when such solutions extend out of the paraxial regime, with authors guided either by experience or by educated hypotheses pertinent to the particular problem being tackled. This article provides a framework for classifying full-wave correction schemes, so that for a given solution of the paraxial wave equation the best available correction scheme can be selected. Some common correction methods are considered and evaluated under the proposed scope. Another notable contribution is a set of necessary conditions that two solutions of the Helmholtz equation must satisfy in order to admit a common solution of the parabolic wave equation as a paraxial approximation of both.

  27. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  28. Senior Cross-Functional Support -- Essential for Implementing Corrective Actions at C3RS Sites

    DOT National Transportation Integrated Search

    2012-08-01

    The Federal Railroad Administration's (FRA) Office of Railroad Policy and Development believes that, in addition to process and technology innovations, human factors-based solutions can make a significant contribution to improving safety in the rai...

  29. An exact solution of a simplified two-phase plume model. [for solid propellant rocket]

    NASA Technical Reports Server (NTRS)

    Wang, S.-Y.; Roberts, B. B.

    1974-01-01

    An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make an upper-bound estimate of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.

  30. Using the Karolinska Scales of Personality on male juvenile delinquents: relationships between scales and factor structure.

    PubMed

    Dåderman, Anna M; Hellström, Ake; Wennberg, Peter; Törestad, Bertil

    2005-01-01

    The aim of the present study was to investigate relationships between scales from the Karolinska Scales of Personality (KSP) and the factor structure of the KSP in a sample of male juvenile delinquents. The KSP was administered to a group of male juvenile delinquents (n=55, mean age 17 years; standard deviation=1.2) from four Swedish national correctional institutions for serious offenders. As expected, the KSP showed appropriate correlations between the scales. Factor analysis (maximum likelihood) arrived at a four-factor solution in this sample, which is in line with previous research performed in a non-clinical sample of Swedish males. More research is needed in a somewhat larger sample of juvenile delinquents in order to confirm the present results regarding the factor solution.

  31. Thin wing corrections for phase-change heat-transfer data.

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Pitts, J. I.

    1971-01-01

    Since no methods are available for determining the magnitude of the errors incurred when the semi-infinite slab assumption is violated, a computer program was developed to calculate the heat-transfer coefficients to both sides of a finite, one-dimensional slab subject to the boundary conditions ascribed to the phase-change coating technique. The results have been correlated in the form of correction factors to the semi-infinite slab solutions in terms of parameters normally used with the technique.

  32. What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately

    PubMed Central

    Danek, Amory H.; Wiley, Jennifer

    2017-01-01

    The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions (“false insights”). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks), and rated each of their solutions for Aha! as well as with regard to Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors for the Aha! experience. This study reports three main findings: First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently with regard to magnitude and quality, with correct solutions emerging faster, leading to stronger Aha! experiences, and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness proffered a slightly different emotional coloring to the Aha! experience, with the additional perception of Relief for correct solutions, and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha! experiences are clearly, but not exclusively linked to correct solutions. PMID:28163687

  33. What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately.

    PubMed

    Danek, Amory H; Wiley, Jennifer

    2016-01-01

    The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions ("false insights"). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks), and rated each of their solutions for Aha! as well as with regard to Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors for the Aha! experience. This study reports three main findings: First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently with regard to magnitude and quality, with correct solutions emerging faster, leading to stronger Aha! experiences, and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness proffered a slightly different emotional coloring to the Aha! experience, with the additional perception of Relief for correct solutions, and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha! experiences are clearly, but not exclusively, linked to correct solutions.

  34. Hip Arthroscopy: Common Problems and Solutions.

    PubMed

    Casp, Aaron; Gwathmey, Frank Winston

    2018-04-01

    The use of hip arthroscopy continues to expand. Understanding potential pitfalls and complications associated with hip arthroscopy is paramount to optimizing clinical outcomes and minimizing unfavorable results. Potential pitfalls and complications are associated with preoperative factors such as patient selection, intraoperative factors such as iatrogenic damage, traction-related complications, inadequate correction of deformity, and nerve injury, or postoperative factors such as poor rehabilitation. This article outlines common factors that contribute to less-than-favorable outcomes.

  35. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  36. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  37. Color correction strategies in optical design

    NASA Astrophysics Data System (ADS)

    Pfisterer, Richard N.; Vorndran, Shelby D.

    2014-12-01

    An overview of color correction strategies is presented. Starting with basic first-order aberration theory, we identify known color-corrected solutions for doublets and triplets. Reviewing the modern approaches of Robb-Mercado, Rayces-Aguilar, and C. de Albuquerque et al., we find that they confirm the existence of glass combinations for doublets and triplets that yield the color-corrected solutions already known to exist. Finally, we explore the use of the y, ȳ diagram in conjunction with aberration theory to identify the solution space of glasses capable of leading to color-corrected solutions in arbitrary optical systems.

  38. Orbit-product representation and correction of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir

    We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures backtracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph, with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.

  39. Emerging technology for transonic wind-tunnel-wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1988-01-01

    Several nonlinear transonic codes and a panel method code for wind tunnel/wall interference assessment and correction (WIAC) studies are reviewed. Contrasts between two- and three-dimensional transonic testing factors which affect WIAC procedures are illustrated with airfoil data from the NASA/Langley 0.3-meter transonic cryogenic tunnel and Pathfinder I data. Also, three-dimensional transonic WIAC results for Mach number and angle-of-attack corrections to data from a relatively large 20 deg swept semispan wing in the solid-wall NASA/Ames high Reynolds number Channel I are verified by three-dimensional thin-layer Navier-Stokes free-air solutions.

  40. De-confusing the THOG problem: the Pythagorean solution.

    PubMed

    Griggs, R A; Koenig, C S; Alea, N L

    2001-08-01

    Sources of facilitation for Needham and Amado's (1995) Pythagoras version of Wason's THOG problem were systematically examined in three experiments with 174 participants. Although both the narrative structure and figural notation used in the Pythagoras problem independently led to significant facilitation (40-50% correct), pairing hypothesis generation with either factor or pairing the two factors together was found to be necessary to obtain substantial facilitation (> 50% correct). Needham and Amado's original finding for the complete Pythagoras problem was also replicated. These results are discussed in terms of the "confusion theory" explanation for performance on the standard THOG problem. The possible role of labelling as a de-confusing factor in other versions of the THOG problem and the implications of the present findings for human reasoning are also considered.

  41. Molecular Volumes and the Stokes-Einstein Equation

    ERIC Educational Resources Information Center

    Edward, John T.

    1970-01-01

    Examines the limitations of the Stokes-Einstein equation as it applies to small solute molecules. Discusses molecular volume determinations by atomic increments, molecular models, molar volumes of solids and liquids, and molal volumes. Presents an empirical correction factor for the equation which applies to molecular radii as small as 2 angstrom…
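
    The relation being corrected is the standard Stokes-Einstein equation; in the sketch below the article's empirical correction is represented only schematically, as a reduced numerical constant c:

    ```python
    import math

    def stokes_einstein_radius(D, T, eta, c=6.0):
        """Hydrodynamic radius from D = k_B * T / (c * pi * eta * r);
        c = 6 for stick boundary conditions, and the empirical correction
        for small solutes amounts to an effectively smaller c (illustrative)."""
        k_B = 1.380649e-23  # Boltzmann constant, J/K
        return k_B * T / (c * math.pi * eta * D)
    ```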

  42. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  43. Space-charge-limited currents for cathodes with electric field enhanced geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu

    This paper presents approximate analytic solutions for the current density of annulus and circle cathodes. The current densities of annulus and circle cathodes are derived approximately from first principles and agree with simulation results. The resulting scaling laws can predict the current densities of high-current vacuum diodes with annulus and circle cathodes in practical applications. To relate the current density to the electric field on the cathode surface, the existing analytical solutions for concentric-cylinder and concentric-sphere diodes are refitted in terms of electric field enhancement factors. It is found that the space-charge-limited current density for a cathode with field-enhanced geometry can be written in the general form J = g(β_E)^2 J_0, where J_0 is the classical (1D) Child-Langmuir current density, β_E is the electric field enhancement factor, and g is a geometrical correction factor that depends on the cathode geometry.
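
    The classical factor J_0 in this scaling is the standard 1D Child-Langmuir density, so the quoted form can be evaluated directly once the geometry supplies beta_E and g (which the abstract does not give; helper names hypothetical):

    ```python
    import math

    EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
    E_CHARGE = 1.602176634e-19  # elementary charge, C
    M_E = 9.1093837015e-31      # electron mass, kg

    def child_langmuir_J0(V, d):
        """Classical 1D Child-Langmuir current density (A/m^2):
        J0 = (4/9) * eps0 * sqrt(2 e / m_e) * V^(3/2) / d^2."""
        return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_E) * V ** 1.5 / d ** 2

    def enhanced_current_density(V, d, beta_E, g):
        """Geometry-enhanced form J = g * beta_E^2 * J0 quoted above."""
        return g * beta_E ** 2 * child_langmuir_J0(V, d)
    ```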

  4. Spatial homogenization methods for pin-by-pin neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Kozlowski, Tomasz

    For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations at considerably less computational expense than the discrete ordinates or full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treating pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well-established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants, and they are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitation: in principle, the method is applicable only for the boundary conditions at which it was created, i.e. the boundary conditions considered during the homogenization process (normally zero current). Therefore, there exists a need to improve this method, making it more general and environment independent. The goal of the proposed general homogenization technique is to construct a function that correctly predicts the appropriate correction factor with only homogeneous information available, i.e. a function, based on the heterogeneous solution, that approximates PDFs from the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution, as sketched below. This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
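
    The sketch below illustrates the fitting idea in Python under stated assumptions: the "reference" PDFs and the non-dimensional solution quantity are synthetic placeholders standing in for actual transport results.

      import numpy as np

      rng = np.random.default_rng(0)
      flux_ratio = np.linspace(0.8, 1.2, 25)           # non-dimensional solution quantity
      pdf_reference = 1.0 + 0.3 * (flux_ratio - 1.0)   # pretend reference PDFs
      pdf_reference += rng.normal(0.0, 0.002, flux_ratio.size)

      coeffs = np.polyfit(flux_ratio, pdf_reference, deg=2)  # least-squares polynomial fit

      def predict_pdf(homogeneous_flux_ratio):
          """Predict a pin-cell discontinuity factor from a homogeneous-solution quantity."""
          return float(np.polyval(coeffs, homogeneous_flux_ratio))

      print(predict_pdf(1.05))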

  5. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    PubMed

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  6. Empirical Correction for Differences in Chemical Exchange Rates in Hydrogen Exchange-Mass Spectrometry Measurements.

    PubMed

    Toth, Ronald T; Mills, Brittney J; Joshi, Sangeeta B; Esfandiary, Reza; Bishop, Steven M; Middaugh, C Russell; Volkin, David B; Weis, David D

    2017-09-05

    A barrier to the use of hydrogen exchange-mass spectrometry (HX-MS) in many contexts, especially analytical characterization of various protein therapeutic candidates, is that differences in temperature, pH, ionic strength, buffering agent, or other additives can alter chemical exchange rates, making HX data gathered under differing solution conditions difficult to compare. Here, we present data demonstrating that HX chemical exchange rates can be substantially altered not only by the well-established variables of temperature and pH but also by additives including arginine, guanidine, methionine, and thiocyanate. To compensate for these additive effects, we have developed an empirical method to correct the hydrogen-exchange data for these differences. First, differences in chemical exchange rates are measured by use of an unstructured reporter peptide, YPI. An empirical chemical exchange correction factor, determined by use of the HX data from the reporter peptide, is then applied to the HX measurements obtained from a protein of interest under different solution conditions. We demonstrate that the correction is experimentally sound through simulation and in a proof-of-concept experiment using unstructured peptides under slow-exchange conditions (pD 4.5 at ambient temperature). To illustrate its utility, we applied the correction to HX-MS excipient screening data collected for a pharmaceutically relevant IgG4 mAb being characterized to determine the effects of different formulations on backbone dynamics.
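
    A minimal sketch of how such a reporter-based correction could be applied, with illustrative rates in place of measured YPI data:

      # Reporter (YPI) exchange rates measured under two solution conditions.
      k_reporter_reference = 0.12  # 1/min, reference condition
      k_reporter_test = 0.18       # 1/min, test condition

      # Empirical chemical-exchange correction factor.
      correction = k_reporter_test / k_reporter_reference

      # Rescale the HX time axis of protein data taken under the test
      # condition so it is comparable to the reference condition.
      protein_times_test = [0.5, 1.0, 5.0, 30.0]  # minutes
      corrected_times = [t * correction for t in protein_times_test]
      print(corrected_times)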

  7. Catatonia in inpatients with psychiatric disorders: A comparison of schizophrenia and mood disorders.

    PubMed

    Grover, Sandeep; Chakrabarti, Subho; Ghormode, Deepak; Agarwal, Munish; Sharma, Akhilesh; Avasthi, Ajit

    2015-10-30

    This study aimed to evaluate the symptom threshold for making the diagnosis of catatonia. Further objectives were (1) to study the factor solution of the Bush-Francis Catatonia Rating Scale (BFCRS) and (2) to compare the prevalence and symptom profile of catatonia between patients with psychotic and mood disorders admitted to the psychiatry inpatient unit of a general hospital. 201 patients were screened for the presence of catatonia using the BFCRS. Cluster analysis, discriminant analysis, ROC curves, and sensitivity and specificity analysis suggested that a threshold of 3 symptoms was able to correctly categorize 89.4% of patients with catatonia and 100% of patients without catatonia. The prevalence of catatonia was 9.45%. There was no difference in prevalence rate or symptom profile of catatonia between those with schizophrenia and those with mood disorders (i.e., unipolar depression and bipolar affective disorder). Factor analysis of the data yielded a 2-factor solution, i.e., retarded and excited catatonia. To conclude, this study suggests that requiring the presence of 3 symptoms for the diagnosis of catatonia can correctly distinguish patients with and without catatonia, which is compatible with the recommendations of DSM-5. The prevalence of catatonia is almost equal in patients with schizophrenia and mood disorders.

  8. The Harrison Diffusion Kinetics Regimes in Solute Grain Boundary Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belova, Irina; Fiedler, T; Kulkarni, Nagraj S

    2012-01-01

    Knowledge of the limits of the principal Harrison kinetics regimes (Type-A, B and C) for grain boundary diffusion is very important for the correct analysis of the depth profiles in a tracer diffusion experiment. These regimes for self-diffusion have been extensively studied in the past by making use of the phenomenological Lattice Monte Carlo (LMC) method with the result that the limits are now well established. The relationship of those self-diffusion limits to the corresponding ones for solute diffusion in the presence of solute segregation to the grain boundaries remains unclear. In the present study, the influence of solute segregation on the limits is investigated with the LMC method for the well-known parallel grain boundary slab model by showing the equivalence of two diffusion models. It is shown which diffusion parameters are useful for identifying the limits of the Harrison kinetics regimes for solute grain boundary diffusion. It is also shown how the measured segregation factor from the diffusion experiment in the Harrison Type-B kinetics regime may differ from the global segregation factor.

  9. Calibration of the NPL secondary standard radionuclide calibrator for 32P, 89Sr and 90Y

    NASA Astrophysics Data System (ADS)

    Woods, M. J.; Munster, A. S.; Sephton, J. P.; Lucas, S. E. M.; Walsh, C. Paton

    1996-02-01

    Pure beta particle emitting radionuclides have many therapeutic applications in nuclear medicine. The response of the NPL secondary standard radionuclide calibrator to 32P, 89Sr and 90Y has been measured using accurately calibrated solutions. For this purpose, high efficiency solid sources were prepared gravimetrically from dilute solutions of each radionuclide and assayed in a 4π proportional counter; the source activities were determined using known detection efficiency factors. Measurements were made of the current response (pA/MBq) of the NPL secondary standard radionuclide calibrator using the original concentrated solutions. Calibration figures have been derived for 2 and 5 ml British Standard glass ampoules and Amersham International plc P6 vials. Volume correction factors have also been determined. Gamma-ray emitting contaminants can have a disproportionate effect on the calibrator response and particular attention has been paid to this.

  10. About one counterexample of applying method of splitting in modeling of plating processes

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.

    2018-05-01

    The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so adequate simulation of plating processes is highly relevant. Finite-difference approximations using seven-point and five-point templates, in combination with the splitting method, are considered as solution methods for the equations of the model. To study the correctness of the solution of the model equations by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies showed that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.

  11. Effect of formulated glyphosate and adjuvant tank mixes on atomization from aerial application flat fan nozzles

    USDA-ARS?s Scientific Manuscript database

    This study was designed to determine if the present USDA ARS Spray Nozzle models based on water plus non-ionic surfactant spray solutions could be used to estimate spray droplet size data for different spray formulations through use of experimentally determined correction factors or if full spray fo...

  12. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

    The analytical form of quantum corrections to the quasi-periodic solution of the Sine-Gordon model and the periodic solution of the phi4 model is obtained through zeta-function regularisation, taking into account all remaining variables of a d-dimensional theory. The qualitative dependence of the quantum corrections on the parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b²f(bx) + C, with b and C arbitrary real constants.

  13. On the flow of a compressible fluid by the hodograph method I: unification and extension of present-day results

    NASA Technical Reports Server (NTRS)

    Garrick, I E; Kaplan, Carl

    1944-01-01

    Elementary basic solutions of the equations of motion of a compressible fluid in the hodograph variables are developed and used to provide a basis for comparison, in the form of velocity correction formulas, of corresponding compressible and incompressible flows. The known approximate results of Chaplygin, Von Karman and Tsien, Temple and Yarwood, and Prandtl and Glauert are unified by means of the analysis of the present paper. Two new types of approximations, obtained from the basic solutions, are introduced; they possess certain desirable features of the other approximations and appear preferable as a basis for extrapolation into the range of high stream Mach numbers and large disturbances to the main stream. Tables and figures giving velocity and pressure-coefficient correction factors are included in order to facilitate the practical application of the results.
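
    Two of the classical corrections unified in the report have simple closed forms; the sketch below evaluates the Prandtl-Glauert and Karman-Tsien pressure-coefficient corrections for an incompressible value Cp0 at free-stream Mach number M:

      import math

      def prandtl_glauert(cp0, mach):
          return cp0 / math.sqrt(1.0 - mach**2)

      def karman_tsien(cp0, mach):
          beta = math.sqrt(1.0 - mach**2)
          return cp0 / (beta + (mach**2 / (1.0 + beta)) * cp0 / 2.0)

      cp0, mach = -0.5, 0.6
      print(f"Prandtl-Glauert: {prandtl_glauert(cp0, mach):+.4f}")
      print(f"Karman-Tsien:    {karman_tsien(cp0, mach):+.4f}")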

  14. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been accurately addressed by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA, in comparison with the DEW method, results in significant improvements in image accuracy for both planar and tomographic data sets, and can be used as a user-independent approach for scatter correction in nuclear medicine.
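
    A loose sketch of the decomposition idea, using non-negative matrix factorization as a stand-in for the paper's factor analysis and synthetic counts in place of acquired sub-window data:

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      n_pixels, n_windows = 1000, 6
      photo_spectrum = np.array([0.02, 0.05, 0.15, 0.35, 0.30, 0.13])
      scatter_spectrum = np.array([0.30, 0.28, 0.20, 0.12, 0.07, 0.03])
      counts = (np.outer(rng.random(n_pixels), photo_spectrum)
                + np.outer(rng.random(n_pixels), scatter_spectrum))

      model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
      factor_images = model.fit_transform(counts)  # photo-peak and scatter images
      factor_curves = model.components_            # corresponding energy spectra
      print(factor_curves.round(3))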

  15. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.

  16. Correctional officers' perceptions of a solution-focused training program: potential implications for working with offenders.

    PubMed

    Pan, Peter Jen Der; Deng, Liang-Yu F; Chang, Shona Shih Hua; Jiang, Karen Jye-Ru

    2011-09-01

    The purpose of this exploratory study was to explore correctional officers' perceptions and experiences during a solution-focused training program and to initiate development of a modified pattern for correctional officers to use in jails. The study uses grounded theory procedures combined with a follow-up survey. The findings identified six emergent themes: obstacles to doing counseling work in prisons, offenders' amenability to change, correctional officers' self-image, advantages of a solution-focused approach (SFA), potential advantages of applying SFA to offenders, and the need for the consolidation of learning and transformation. Participants perceived the use of solution-focused techniques as appropriate, important, functional, and of only moderate difficulty in interacting with offenders. Finally, a modified pattern was developed for officers to use when working with offenders in jails. Suggestions and recommendations are made for correctional interventions and future studies.

  17. Solution of Einstein's Equation for Deformation of a Magnetized Neutron Star

    NASA Astrophysics Data System (ADS)

    Rizaldy, R.; Sulaksono, A.

    2018-04-01

    We studied the effect of the very large and non-uniform magnetic field existing in a neutron star on the deformation of the star. In our analytical calculation, we used a multipole expansion of the metric tensor and the energy-momentum tensor in Legendre polynomials up to quadrupole order. In this way we obtain solutions of Einstein's equation with correction factors due to the magnetic field taken into account. Our numerical calculation shows that the degree of deformation (ellipticity) increases as the mass decreases.

  18. Study of different solutes for determination of neutron source strength based on the water bath

    NASA Astrophysics Data System (ADS)

    Khabaz, Rahim

    2018-09-01

    The time required for activation to saturation and background measurement is considered a limitation of source-strength determination of radionuclide neutron sources using the manganese bath system (MBS). The objective of this research was to evaluate other solutes based on a water bath in order to identify a suitable replacement for the MBS. With the aid of Monte Carlo simulation, the correction factors for neutron losses through different processes were obtained in all nuclei of six aqueous solutions, i.e., Na2SO4, VOSO4, MnSO4, Rh2(SO4)3, In2(SO4)3, and I2O5, for three neutron sources having different neutron spectra immersed in each bath. The calculation results indicate that Rh2(SO4)3 and VOSO4 are the best options to replace MnSO4.

  19. A Full-Core Resonance Self-Shielding Method Using a Continuous-Energy Quasi-One-Dimensional Slowing-Down Solution that Accounts for Temperature-Dependent Fuel Subregions and Resonance Interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Martin, William; Williams, Mark

    In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi-one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and the subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method also proves effective in treating assembly heterogeneity and complex material compositions such as mixed oxide fuel, where resonance interference is much more intense.

  1. Influence of electrolytes in the QCM response: discrimination and quantification of the interference to correct microgravimetric data.

    PubMed

    Encarnação, João M; Stallinga, Peter; Ferreira, Guilherme N M

    2007-02-15

    In this work we demonstrate that the presence of electrolytes in solution generates desorption-like transients when the resonance frequency is measured. Using impedance spectroscopy analysis and Butterworth-Van Dyke (BVD) equivalent-circuit modeling, we demonstrate that non-Kanazawa responses are obtained in the presence of electrolytes mainly due to the formation of a diffuse electric double layer (DDL) at the sensor surface, which also produces a capacitor-like signal. We extend the BVD equivalent circuit by including additional parallel capacitances to account for this capacitor-like signal. Interfering signals from electrolytes and DDL perturbations were discriminated in this way. We further quantified the influence of electrolytes on the sensor resonance frequency as 8.0 ± 0.5 Hz pF⁻¹ and used this factor to correct data obtained by frequency-counting measurements. The applicability of this approach is demonstrated by the detection of oligonucleotide sequences. After applying the corrective factor to the frequency-counting data, the mass contribution to the sensor signal yields identical values when estimated by impedance analysis and by frequency counting.
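
    A minimal sketch of applying the reported corrective factor to frequency-counting data; the numbers are illustrative and the sign convention is assumed:

      HZ_PER_PF = 8.0  # reported electrolyte sensitivity, Hz/pF

      def corrected_frequency_shift(measured_shift_hz, capacitance_change_pf):
          """Remove the capacitor-like electrolyte/DDL signal from a QCM shift."""
          return measured_shift_hz - HZ_PER_PF * capacitance_change_pf

      # Example: -45 Hz apparent shift with a -2.5 pF parallel-capacitance change.
      print(corrected_frequency_shift(-45.0, -2.5))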

  2. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretations, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied, with difficulty, to the study of densely packed particulate media like planetary regoliths and snow, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute the scattering properties of clusters of particles and capture the near-field effects important for dense packing. The scattering parameters from the T-matrix computations are then modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's law) is computed with the invariant imbedding solution of the radiative transfer equation. For this work, we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing common mineralogical and particle-size components of regoliths, at mid-infrared wavelengths (5-50 µm). The spectrum modeled with the T-matrix method and the static structure factor correction at moderate packing densities (filling factors of 0.1-0.2) fit the corresponding laboratory measurement better than the spectrum modeled by the equivalent method without the static structure factor correction. Future work will test the superposition T-matrix plus static structure factor approach for larger particle sizes and polydispersed clusters in search of the most effective modeling of spectra of densely packed particulate media.

  3. α′-corrected black holes in String Theory

    NASA Astrophysics Data System (ADS)

    Cano, Pablo A.; Meessen, Patrick; Ortín, Tomás; Ramírez, Pedro F.

    2018-05-01

    We consider the well-known solution of the Heterotic Superstring effective action to zeroth order in α′ that describes the intersection of a fundamental string with momentum and a solitonic 5-brane, and which gives a 3-charge, static, extremal, supersymmetric black hole in 5 dimensions upon dimensional reduction on T5. We compute explicitly the first-order in α′ corrections to this solution, including SU(2) Yang-Mills fields which can be used to cancel some of these corrections, and we study the main properties of this α′-corrected solution: supersymmetry, values of the near-horizon and asymptotic charges, behavior under α′-corrected T-duality, value of the entropy (using the Wald formula directly in 10 dimensions), the existence of small black holes, etc. The value obtained for the entropy agrees, within the limits of the approximation, with that obtained by microscopic methods. The α′ corrections coming from Wald's formula prove crucial for this result.

  4. Closing the gap: connecting sudden representational change to the subjective Aha! experience in insightful problem solving.

    PubMed

    Danek, Amory H; Williams, Joshua; Wiley, Jennifer

    2018-01-18

    Two hallmarks of insightful problem solving are thought to be suddenness in the emergence of solution due to changes in problem representation, and the subjective Aha! Although a number of studies have explored the Aha! experience, few studies have attempted to measure representational change. Following the lead of Durso et al. (Psychol Sci 5(2):94-97, 1994) and Cushen and Wiley (Conscious Cognit 21(3):1166-1175, 2012), in this study, participants made importance-to-solution ratings throughout their solution attempts as a way to assess representational change. Participants viewed a set of magic trick videos with the task of finding out how each trick worked, and rated six action verbs for each trick (including one that implied the correct solution) multiple times during solution. They were also asked to indicate the extent to which they experienced an Aha! moment. Patterns of ratings that showed a sudden change towards a correct solution led to stronger Aha! experiences than patterns that showed a more incremental change towards a correct solution, or a change towards incorrect solutions. The results show a connection between sudden changes in problem representations (leading to correct solutions) and the subjective appraisal of solutions as an Aha! This offers the first empirical support for a close relationship between two theoretical constructs that have traditionally been assumed to be related to insightful problem solving.

  5. Concentration of stresses and strains in a notched cylinder of a viscoplastic material under harmonic loading

    NASA Astrophysics Data System (ADS)

    Zhuk, Ya A.; Senchenkov, I. K.

    1999-02-01

    Certain aspects of the correct definitions of stress and strain concentration factors for elastic-viscoplastic solids under cyclic loading are discussed. Problems concerning the harmonic kinematic excitation of cylindrical specimens with a lateral V-notch are examined. The behavior of the material of a cylinder is modeled using generalized flow theory. An approximate model based on the concept of complex moduli is used for comparison. Invariant characteristics such as stress and strain intensities and maximum principal stress and strain are chosen as constitutive quantities for concentration-factor definitions. The behavior of time-varying factors is investigated. Concentration factors calculated in terms of the amplitudes of the constitutive quantities are used as representative characteristics over the cycle of vibration. The dependences of the concentration factors on the loads are also studied. The accuracy of Neuber's and Birger's formulas is evaluated. The solution of the problem in the approximate formulation agrees with its solution in the exact formulation. The possibilities of the approximate model for estimating low-cycle fatigue are evaluated.

  6. Thermal properties of composite materials : effective conductivity tensor and edge effects

    NASA Astrophysics Data System (ADS)

    Matine, A.; Boyard, N.; Cartraud, P.; Legrain, G.; Jarny, Y.

    2012-11-01

    The homogenization theory is a powerful approach to determine the effective thermal conductivity tensor of heterogeneous materials such as composites, including thermoset matrix and fibres. Once the effective properties are calculated, they can be used to solve a heat conduction problem on the composite structure at the macroscopic scale. This approach leads to good approximations of both the heat flux and temperature in the interior zone of the structure; however, edge effects occur in the vicinity of the domain boundaries. In this paper, following the approach proposed in [10] for elasticity, it is shown how these edge effects can be corrected. Thus an additional asymptotic expansion is introduced, which plays the role of an edge-effect term. This expansion tends to zero far from the boundary and is assumed to decrease exponentially. Moreover, the length of the edge-effect region can be determined from the solution of an eigenvalue problem. Numerical examples are considered for a standard multilayered material. The homogenized solutions, computed with a finite element software and corrected with the edge-effect terms, are compared to a heterogeneous finite element solution at the microscopic scale. The influences of the thermal contrast and scale factor are illustrated for different kinds of boundary conditions.

  7. Martin Gardner's Mistake

    ERIC Educational Resources Information Center

    Khovanova, Tanya

    2012-01-01

    When Martin Gardner first presented the Two-Children problem, he made a mistake in its solution. Later he corrected the error, but unfortunately the incorrect solution is more widely known than his correction. In fact, a Tuesday-Child variation of this problem went viral in 2010, and the same flaw keeps reappearing in proposed solutions of that…

  8. Feedback That Corrects and Contrasts Students' Erroneous Solutions with Expert Ones Improves Expository Instruction for Conceptual Change

    ERIC Educational Resources Information Center

    Asterhan, Christa S. C.; Dotan, Aviv

    2018-01-01

    In the present study, we examined the effects of feedback that corrects and contrasts a student's own erroneous solutions with the canonical, correct one (CEC&C feedback) on learning in a conceptual change task. Sixty undergraduate students received expository instruction about natural selection, which presented the canonical, scientifically…

  9. Developing a quality assurance program for online services.

    PubMed Central

    Humphries, A W; Naisawald, G V

    1991-01-01

    A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas. PMID:1909197

  10. Modifying a Risk Assessment Instrument for Youthful Offenders.

    PubMed

    Shapiro, Cheri J; Malone, Patrick S; Gavazzi, Stephen M

    2018-02-01

    High rates of incarceration in the United States are compounded by high rates of recidivism and prison return. One solution is more accurate identification of individual prisoner risks and needs to promote offender rehabilitation and successful community re-entry; this is particularly important for youthful offenders who developmentally are in late adolescence or early adulthood, and who struggle to reengage in education and/or employment after release. Thus, this study examined the feasibility of administration and initial psychometric properties of a risk and needs assessment instrument originally created for a juvenile justice population (the Global Risk Assessment Device or GRAD) with 895 male youthful offenders in one adult correctional system. Initial feasibility of implementation within the correctional system was demonstrated; confirmatory factor analyses support the invariance of the modified GRAD factor structure across age and race. Future studies are needed to examine the predictive validity and the sensitivity of the instrument.

  11. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they require the fill factor to be known; most image sensor manufacturers keep it an industrial secret because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images for each camera. PMID:28335459

  12. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
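
    A simplified sketch of an area-averaging correction factor, assuming (purely for illustration) a Gaussian radial beam profile rather than the KZK-computed field used in the paper:

      import math

      def area_averaging_factor(element_radius_mm, beam_width_mm, n_rings=2000):
          """Ratio of on-axis pressure to the pressure averaged over the element."""
          dr = element_radius_mm / n_rings
          total, area = 0.0, 0.0
          for i in range(n_rings):
              r = (i + 0.5) * dr
              ring_area = 2.0 * math.pi * r * dr
              total += math.exp(-(r / beam_width_mm) ** 2) * ring_area
              area += ring_area
          return area / total  # > 1: averaging under-reads the on-axis peak

      print(area_averaging_factor(0.25, 1.0))  # 0.5 mm element in a ~2 mm beam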

  13. Analyses of factors of crash avoidance maneuvers using the general estimates system.

    PubMed

    Yan, Xuedong; Harb, Rami; Radwan, Essam

    2008-06-01

    Taking an effective corrective action to a critical traffic situation provides drivers an opportunity to avoid crash occurrence and minimize crash severity. The objective of this study is to investigate the relationship between the probability of taking corrective actions and the characteristics of drivers, vehicles, and driving environments. Using the 2004 GES crash database, this study classified drivers who encountered critical traffic events (identified as P_CRASH3 in the GES database) into two pre-crash groups: corrective avoidance actions group and no corrective avoidance actions group. Single and multiple logistic regression analyses were performed to identify potential traffic factors associated with the probability of drivers taking corrective actions. The regression results showed that the driver/vehicle factors associated with the probability of taking corrective actions include: driver age, gender, alcohol use, drug use, physical impairments, distraction, sight obstruction, and vehicle type. In particular, older drivers, female drivers, drug/alcohol use, physical impairment, distraction, or poor visibility may increase the probability of failing to attempt to avoid crashes. Moreover, drivers of larger size vehicles are 42.5% more likely to take corrective avoidance actions than passenger car drivers. On the other hand, the significant environmental factors correlated with the drivers' crash avoidance maneuver include: highway type, number of lanes, divided/undivided highway, speed limit, highway alignment, highway profile, weather condition, and surface condition. Some adverse highway environmental factors, such as horizontal curves, vertical curves, worse weather conditions, and slippery road surface conditions are correlated with a higher probability of crash avoidance maneuvers. These results may seem counterintuitive but they can be explained by the fact that motorists may be more likely to drive cautiously in those adverse driving environments. The analyses revealed that drivers' distraction could be the highest risk factor leading to the failure of attempting to avoid crashes. Further analyses entailing distraction causes (e.g., cellular phone use) and their possible countermeasures need to be conducted. The age and gender factors are overrepresented in the "no avoidance maneuver." A possible solution could involve the integration of a new function in the current ITS technologies. A personalized system, which could be related to the expected type of maneuver for a driver with certain characteristics, would assist different drivers with different characteristics to avoid crashes. Further crash database studies are recommended to investigate the association of drivers' emergency maneuvers such as braking, steering, or their combination with crash severity.
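
    A minimal sketch of the kind of logistic-regression analysis described above, using synthetic placeholder data rather than GES records:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 500
      age = rng.integers(16, 90, n)
      distracted = rng.integers(0, 2, n)
      # Synthetic outcome: distraction and older age lower the odds of action.
      logit = 1.5 - 0.02 * (age - 40) - 1.2 * distracted
      took_action = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      X = np.column_stack([age, distracted])
      model = LogisticRegression().fit(X, took_action)
      print(dict(zip(["age", "distracted"], model.coef_[0].round(3))))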

  14. Revision of the NIST Standard for (223)Ra: New Measurements and Review of 2008 Data.

    PubMed

    Zimmerman, B E; Bergeron, D E; Cessna, J T; Fitzgerald, R; Pibida, L

    2015-01-01

    After discovering a discrepancy in the transfer standard currently being disseminated by the National Institute of Standards and Technology (NIST), we have performed a new primary standardization of the alpha-emitter (223)Ra using Live-timed Anticoincidence Counting (LTAC) and the Triple-to-Double Coincidence Ratio Method (TDCR). Additional confirmatory measurements were made with the CIEMAT-NIST efficiency tracing method (CNET) of liquid scintillation counting, integral γ-ray counting using a NaI(Tl) well counter, and several High Purity Germanium (HPGe) detectors in an attempt to understand the origin of the discrepancy and to provide a correction. The results indicate that a -9.5 % difference exists between activity values obtained using the former transfer standard relative to the new primary standardization. During one of the experiments, a 2 % difference in activity was observed between dilutions of the (223)Ra master solution prepared using the composition used in the original standardization and those prepared using 1 mol·L(-1) HCl. This effect appeared to be dependent on the number of dilutions or the total dilution factor to the master solution, but the magnitude was not reproducible. A new calibration factor ("K-value") has been determined for the NIST Secondary Standard Ionization Chamber (IC "A"), thereby correcting the discrepancy between the primary and secondary standards.

  15. Affine generalization of the Komar complex of general relativity

    NASA Astrophysics Data System (ADS)

    Mielke, Eckehard W.

    2001-02-01

    On the basis of the "on shell" Noether identities of the metric-affine gauge approach to gravity, an affine superpotential is derived which comprises the energy- and angular-momentum content of exact solutions. In the special case of general relativity (GR) or its teleparallel equivalent, the Komar or Freud complex, respectively, is recovered. Applying this to the spontaneously broken anti-de Sitter gauge model of MacDowell and Mansouri with an induced Euler term automatically yields the correct mass and spin of the Kerr-AdS solution of GR with an (induced) cosmological constant, without the factor-of-two discrepancy of the Komar formula.

  16. Particle multiplicities in lead-lead collisions at the CERN large hadron collider from nonlinear evolution with running coupling corrections.

    PubMed

    Albacete, Javier L

    2007-12-31

    We present predictions for the pseudorapidity density of charged particles produced in central Pb-Pb collisions at the LHC. Particle production in such collisions is calculated in the framework of k_t factorization. The nuclear unintegrated gluon distributions at LHC energies are determined from numerical solutions of the Balitsky-Kovchegov equation including recently calculated running coupling corrections. The initial conditions for the evolution are fixed by fitting Relativistic Heavy Ion Collider data at collision energies √(s_NN) = 130 and 200 GeV per nucleon pair. We obtain dN_ch(Pb-Pb)/dη at η = 0 of approximately 1290-1480 for √(s_NN) = 5.5 TeV.

  17. The two sides of the C-factor.

    PubMed

    Fok, Alex S L; Aregawi, Wondwosen A

    2018-04-01

    The aim of this paper is to investigate the effects of the lateral constraints at the bonded surfaces of resin composite specimens, as used in laboratory measurement, on shrinkage strain/stress development. Using three-dimensional (3D) Hooke's law, a recently developed shrinkage stress theory is extended to 3D to include the additional out-of-plane strain/stress induced by the lateral constraints at the bonded surfaces through the Poisson's ratio effect. The model contains a parameter that defines the relative thickness of the boundary layers, adjacent to the bonded surfaces, that are under such multiaxial stresses. The resulting differential equation is solved for the shrinkage stress under different boundary conditions. The accuracy of the model is assessed by comparing the numerical solutions with a wide range of experimental data, which include those from both shrinkage strain and shrinkage stress measurements. There is good agreement between theory and experiments. The model correctly predicts the different instrument-dependent effects that a specimen's configuration factor (C-factor) has on shrinkage stress: for noncompliant stress-measuring instruments, shrinkage stress increases with the C-factor of the cylindrical specimen, while the opposite is true for compliant instruments. The model also provides a correction factor, which is a function of the C-factor, Poisson's ratio, and boundary-layer thickness of the specimen, for shrinkage strain measured using the bonded-disc method. For the resin composite examined, the boundary layers have a combined thickness that is ∼11.5% of the specimen's diameter. The theory provides a physical and mechanical basis for the C-factor using principles of engineering mechanics, and the correction factor it provides allows the linear shrinkage strain of a resin composite to be obtained more accurately from the bonded-disc method.
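
    For the common stress-measurement geometry of a cylinder bonded on its two flat ends, the C-factor reduces to a simple ratio, as the sketch below shows:

      import math

      def c_factor_cylinder(radius_mm, height_mm):
          """C = bonded area / free area for a cylinder bonded on both flat ends."""
          bonded = 2.0 * math.pi * radius_mm**2          # the two flat ends
          free = 2.0 * math.pi * radius_mm * height_mm   # lateral surface
          return bonded / free                           # simplifies to r / h

      print(c_factor_cylinder(radius_mm=2.0, height_mm=1.0))  # C = 2.0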

  18. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    ERIC Educational Resources Information Center

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  19. Method for Correcting Control Surface Angle Measurements in Single Viewpoint Photogrammetry

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W. (Inventor); Barrows, Danny A. (Inventor)

    2006-01-01

    A method of determining a corrected control surface angle for use in single viewpoint photogrammetry to correct control surface angle measurements affected by wing bending. First and second visual targets are spaced apart from one another on a control surface of an aircraft wing. The targets are positioned at a semispan distance along the aircraft wing. A reference target separation distance is determined using single viewpoint photogrammetry for a "wind off" condition. An apparent target separation distance is then computed for "wind on." The difference between the reference and apparent target separation distances is minimized by recomputing the single viewpoint photogrammetric solution for incrementally changed values of target semispan distances. A final single viewpoint photogrammetric solution is then generated that uses the corrected semispan distance that produced the minimized difference between the reference and apparent target separation distances. The final single viewpoint photogrammetric solution set is used to determine the corrected control surface angle.
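
    A loose sketch of the minimization step, with a hypothetical stand-in for the full photogrammetric recomputation (the linear apparent_separation model and all numbers below are invented for illustration):

      from scipy.optimize import minimize_scalar

      REFERENCE_SEPARATION = 100.0  # mm, from the "wind off" solution

      def apparent_separation(semispan_mm):
          # Placeholder: in practice this recomputes the single viewpoint
          # photogrammetric solution with the trial semispan distance.
          return 100.0 + 0.04 * (semispan_mm - 2512.0)

      def mismatch(semispan_mm):
          return (apparent_separation(semispan_mm) - REFERENCE_SEPARATION) ** 2

      result = minimize_scalar(mismatch, bracket=(2400.0, 2600.0))
      print(f"corrected semispan ~ {result.x:.1f} mm")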

  1. Characterization of the nanoDot OSLD dosimeter in CT.

    PubMed

    Scarboro, Sarah B; Cody, Dianna; Alvarez, Paola; Followill, David; Court, Laurence; Stingo, Francesco C; Zhang, Di; McNitt-Gray, Michael; Kry, Stephen F

    2015-04-01

    The extensive use of computed tomography (CT) in diagnostic procedures is accompanied by a growing need for more accurate and patient-specific dosimetry techniques. Optically stimulated luminescent dosimeters (OSLDs) offer a potential solution for patient-specific CT point-based surface dosimetry by measuring air kerma. The purpose of this work was to characterize the OSLD nanoDot for CT dosimetry, quantifying necessary correction factors, and evaluating the uncertainty of these factors. A characterization of the Landauer OSL nanoDot (Landauer, Inc., Greenwood, IL) was conducted using both measurements and theoretical approaches in a CT environment. The effects of signal depletion, signal fading, dose linearity, and angular dependence were characterized through direct measurement for CT energies (80-140 kV) and delivered doses ranging from ∼5 to >1000 mGy. Energy dependence as a function of scan parameters was evaluated using two independent approaches: direct measurement and a theoretical approach based on Burlin cavity theory and Monte Carlo simulated spectra. This beam-quality dependence was evaluated for a range of CT scanning parameters. Correction factors for the dosimeter response in terms of signal fading, dose linearity, and angular dependence were found to be small for most measurement conditions (<3%). The relative uncertainty was determined for each factor and reported at the two-sigma level. Differences in irradiation geometry (rotational versus static) resulted in a difference in dosimeter signal of 3% on average. Beam quality varied with scan parameters and necessitated the largest correction factor, ranging from 0.80 to 1.15 relative to a calibration performed in air using a 120 kV beam. Good agreement was found between the theoretical and measurement approaches. Correction factors for the measurement of air kerma were generally small for CT dosimetry, although angular effects, and particularly effects due to changes in beam quality, could be more substantial. In particular, it would likely be necessary to account for variations in CT scan parameters and measurement location when performing CT dosimetry using OSLD.
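
    A minimal sketch of chaining such multiplicative corrections to a raw reading; the factor values below are hypothetical placeholders (the paper reports beam-quality factors of roughly 0.80-1.15):

      def corrected_air_kerma(raw_mGy, k_fade, k_linearity, k_angle, k_beam_quality):
          """Apply multiplicative OSLD correction factors to a raw reading."""
          return raw_mGy * k_fade * k_linearity * k_angle * k_beam_quality

      print(corrected_air_kerma(12.0, k_fade=1.01, k_linearity=1.00,
                                k_angle=1.03, k_beam_quality=0.92))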

  2. Doctors' confusion over ratios and percentages in drug solutions: the case for standard labelling

    PubMed Central

    Wheeler, Daniel Wren; Remoundos, Dionysios Dennis; Whittlestone, Kim David; Palmer, Michael Ian; Wheeler, Sarah Jane; Ringrose, Timothy Richard; Menon, David Krishna

    2004-01-01

    The different ways of expressing concentrations of drugs in solution, as ratios or percentages or mass per unit volume, are a potential cause of confusion that may contribute to dose errors. To assess doctors' understanding of what they signify, all active subscribers to doctors.net.uk, an online community exclusively for UK doctors, were invited to complete a brief web-based multiple-choice questionnaire that explored their familiarity with solutions of adrenaline (expressed as a ratio), lidocaine (expressed as a percentage) and atropine (expressed in mg per mL), and their ability to calculate the correct volume to administer in clinical scenarios relevant to all specialties. 2974 (24.6%) replied. The mean score achieved was 4.80 out of 6 (SD 1.38). Only 85.2% and 65.8% correctly identified the mass of drug in the adrenaline and lidocaine solutions, respectively, whilst 93.1% identified the correct concentration of atropine. More would have administered the correct volume of adrenaline and lidocaine in clinical scenarios (89.4% and 81.0%, respectively) but only 65.5% identified the correct volume of atropine. The labelling of drug solutions as ratios or percentages is antiquated and confusing. Labelling should be standardized to mass per unit volume. PMID:15286190
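
    The arithmetic behind the standardization argument is simple, as the worked example below shows: a "1:N" ratio label means 1 g of drug per N mL of solution, and an "X%" label means X g per 100 mL, so both reduce to mass per unit volume.

      def ratio_to_mg_per_ml(n):
          return 1000.0 / n      # 1 g per N mL -> mg/mL

      def percent_to_mg_per_ml(pct):
          return pct * 10.0      # X g per 100 mL -> mg/mL

      print(ratio_to_mg_per_ml(1000))   # adrenaline 1:1000  -> 1.0 mg/mL
      print(ratio_to_mg_per_ml(10000))  # adrenaline 1:10000 -> 0.1 mg/mL
      print(percent_to_mg_per_ml(2.0))  # lidocaine 2%       -> 20.0 mg/mL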

  3. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of this empirical model with the theoretical matrix-doubling method and with radar measurements indicate good agreement in polarization and angular trends for ka up to 4, where k is the wavenumber and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.

  4. 78 FR 33698 - New Animal Drugs; Dexmedetomidine; Lasalocid; Melengestrol; Monensin; and Tylosin; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... 2013 that appeared in the Federal Register of April 30, 2013. FDA is correcting the approved strengths... correcting the approved strengths of dexmedetomidine hydrochloride injectable solution. This correction is...

  6. Analysis of phases in the structure determination of an icosahedral virus.

    PubMed

    Plevka, Pavel; Kaufmann, Bärbel; Rossmann, Michael G

    2011-06-01

    The constraints imposed on structure-factor phases by noncrystallographic symmetry (NCS) allow phase improvement, phase extension to higher resolution and hence ab initio phase determination. The more numerous the NCS redundancy and the greater the volume used for solvent flattening, the greater the power for phase determination. In a case analyzed here the icosahedral NCS phasing appeared to have broken down, although later successful phase extension was possible when the envelope around the NCS region was tightened. The phases from the failed phase-determination attempt fell into four classes, all of which satisfied the NCS constraints. These four classes corresponded to the correct solution, opposite enantiomorph, Babinet inversion and opposite enantiomorph with Babinet inversion. These incorrect solutions can be seeded from structure factors belonging to reciprocal-space volumes that lie close to icosahedral NCS axes where the structure amplitudes tend to be large and the phases tend to be 0 or π. Furthermore, the false solutions can spread more easily if there are large errors in defining the envelope designating the region in which NCS averaging is performed. © 2011 International Union of Crystallography

  7. Analysis of phases in the structure determination of an icosahedral virus

    PubMed Central

    Plevka, Pavel; Kaufmann, Bärbel; Rossmann, Michael G.

    2011-01-01

    The constraints imposed on structure-factor phases by noncrystallographic symmetry (NCS) allow phase improvement, phase extension to higher resolution and hence ab initio phase determination. The more numerous the NCS redundancy and the greater the volume used for solvent flattening, the greater the power for phase determination. In a case analyzed here the icosahedral NCS phasing appeared to have broken down, although later successful phase extension was possible when the envelope around the NCS region was tightened. The phases from the failed phase-determination attempt fell into four classes, all of which satisfied the NCS constraints. These four classes corresponded to the correct solution, opposite enantiomorph, Babinet inversion and opposite enantiomorph with Babinet inversion. These incorrect solutions can be seeded from structure factors belonging to reciprocal-space volumes that lie close to icosahedral NCS axes where the structure amplitudes tend to be large and the phases tend to be 0 or π. Furthermore, the false solutions can spread more easily if there are large errors in defining the envelope designating the region in which NCS averaging is performed. PMID:21636897

  8. A wide angle and high Mach number parabolic equation.

    PubMed

    Lingevitch, Joseph F; Collins, Michael D; Dacol, Dalcio K; Drob, Douglas P; Rogers, Joel C W; Siegmann, William L

    2002-02-01

    Various parabolic equations for advected acoustic waves have been derived based on the assumptions of small Mach number and narrow propagation angles, which are of limited validity in atmospheric acoustics. A parabolic equation solution that does not require these assumptions is derived in the weak shear limit, which is appropriate for frequencies of about 0.1 Hz and above for atmospheric acoustics. When the variables are scaled appropriately in this limit, terms involving derivatives of the sound speed, density, and wind speed are small but can have significant cumulative effects. To obtain a solution that is valid at large distances from the source, it is necessary to account for linear terms in the first derivatives of these quantities [A. D. Pierce, J. Acoust. Soc. Am. 87, 2292-2299 (1990)]. This approach is used to obtain a scalar wave equation for advected waves. Since this equation contains two depth operators that do not commute with each other, it does not readily factor into outgoing and incoming solutions. An approximate factorization is obtained that is correct to first order in the commutator of the depth operators.

  9. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-04-01

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  10. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  11. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of ambiguities require a properly defined mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of corrections, especially during atmospheric disturbances, results in the occurrence of unaccounted biases, the so-called residual errors. Such errors can be taken into account in the Network RTK positioning model by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the results of an instantaneous solution, which utilizes only a single epoch of GPS observations, are analyzed. Such a solution mode, due to its low number of degrees of freedom, is very sensitive to an inappropriate mathematical model definition, so a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountain area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.
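
    A minimal sketch of the core idea, assuming a diagonal covariance and illustrative names (none taken from the paper): the variances of the network correction terms are added to the observation variances before a weighted least-squares solution.

    ```python
    import numpy as np

    # Sketch: widen the observation covariance by the correction-term variances
    # before a weighted least-squares position solution (illustrative only).
    def wls_solution(A, y, sigma_obs2, sigma_corr2):
        """A: design matrix, y: observed-minus-computed vector,
        sigma_obs2 / sigma_corr2: per-observation variances."""
        W = np.diag(1.0 / (sigma_obs2 + sigma_corr2))  # stochastic model incl. corrections
        N = A.T @ W @ A
        x = np.linalg.solve(N, A.T @ W @ y)            # estimated parameters
        Qx = np.linalg.inv(N)                          # covariance of the estimate
        return x, Qx

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 3))
    y = A @ np.array([1.0, -0.5, 2.0]) + 0.01 * rng.standard_normal(8)
    x, Qx = wls_solution(A, y, np.full(8, 1e-4), np.full(8, 4e-4))
    print(np.round(x, 3))
    ```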

  12. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  13. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  14. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

    The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently only accessible numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form that preserves accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed, with special emphasis on the role of the level correlation hole for both inelastic and elastic correlations.

  15. [Overlay prosthetic solution in subtotal edentation treatment].

    PubMed

    Tatarciuc, M; Ursache, M; Grădinaru, I

    2001-01-01

    The preservation of natural dental roots is a major advantage for overdenture prosthetic appliances. Fabricating an overdenture requires careful coordination of all clinical and technological factors at every prosthetic step. Another important aspect is the correct treatment of the remaining roots, which must receive proper endodontic treatment. The root face is prepared with two inclined planes, located buccally and orally, and covered with a metallic cup. On the inner surface of the overdenture the dental technician makes specific preparations corresponding to these cups. This kind of prosthetic treatment is indicated only for patients with very good oral hygiene and good general health.

  16. Mean ionic activity coefficients in aqueous NaCl solutions from molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mester, Zoltan; Panagiotopoulos, Athanassios Z., E-mail: azp@princeton.edu

    The mean ionic activity coefficients of aqueous NaCl solutions of varying concentrations at 298.15 K and 1 bar have been obtained from molecular dynamics simulations by gradually turning on the interactions of an ion pair inserted into the solution. Several common non-polarizable water and ion models have been used in the simulations. Gibbs-Duhem equation calculations of the thermodynamic activity of water are used to confirm the thermodynamic consistency of the mean ionic activity coefficients. While the majority of model combinations predict the correct trends in mean ionic activity coefficients, they overestimate their values at high salt concentrations. The solubility predictions also suffer from inaccuracies, with all models underpredicting the experimental values, some by large factors. These results point to the need for further ion and water model development.

  17. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here applies not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.

    Program summary
    Program title: ITER-REF
    Catalogue identifier: AECO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 41 862
    Distribution format: tar.gz
    Programming language: FORTRAN 77
    Computer: desktop, server
    Operating system: Unix/Linux
    RAM: 512 Mbytes
    Classification: 4.8
    External routines: BLAS (optional)
    Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
    Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied which produces a correction to the computed solution at each iteration, yielding the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
    Running time: seconds/minutes
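
    As a concrete illustration of the solution method described in the summary, here is a hedged NumPy/SciPy sketch (an analogue, not the ITER-REF Fortran code): the matrix is factored once in single precision, and double-precision residuals drive the refinement.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # Mixed-precision iterative refinement: O(n^3) factorization in float32,
    # cheap O(n^2) refinement steps with float64 residuals.
    def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
        lu, piv = lu_factor(A.astype(np.float32))          # single-precision PA = LU
        x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
        for _ in range(max_iter):
            r = b - A @ x                                   # residual in double precision
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            d = lu_solve((lu, piv), r.astype(np.float32))   # correction from the float32 factors
            x += d.astype(np.float64)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    b = rng.standard_normal(200)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))   # close to double-precision accuracy
    ```

    Provided the matrix is not too ill-conditioned for float32, the refinement converges to working (float64) precision while the expensive factorization runs entirely at the faster precision.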

  18. The effect of suspending solution supplemented with marine cations on the oxidation of Biolog GN MicroPlate substrates by Vibrionaceae bacteria.

    PubMed

    Noble, L D; Gow, J A

    1998-03-01

    Bacteria belonging to the family Vibrionaceae were suspended using saline and a solution prepared from a marine-cations supplement. The effect of this on the profile of oxidized substrates obtained when using Biolog GN MicroPlates was investigated. Thirty-nine species belonging to the genera Aeromonas, Listonella, Photobacterium, and Vibrio were studied. Of the strains studied, species of Listonella, Photobacterium, and Vibrio could be expected to benefit from a marine-cations supplement that contained Na+, K+, and Mg2+. Bacteria that are not of marine origin are usually suspended in normal saline. Of the 39 species examined, 9 were not included in the Biolog data base and were not identified. Of the 30 remaining species, 50% were identified correctly using either of the suspending solutions. A further 20% were correctly identified only when suspended in saline. Three species, or 10%, were correctly identified only after suspension in the marine-cations supplemented solution. The remaining 20% of species were not correctly identified by either method. Generally, more substrates were oxidized when the bacteria had been suspended in the more complex salts solution. Usually, when identifications were incorrect, the use of the marine-cations supplemented suspending solution had resulted in many more substrates being oxidized. Based on these results, it would be preferable to use saline to suspend the cells when using Biolog for identification of species of Vibrionaceae. A salts solution containing a marine-cations supplement would be preferable for environmental studies where the objective is to determine profiles of substrates that the bacteria have the potential to oxidize. If identifications are done using marine-cations supplemented suspending solution, it would be advisable to include reference cultures to determine the effect of the supplement. Of the Vibrio and Listonella species associated with human clinical specimens, 8 out of the 11 studied were identified correctly when either of the suspending solutions was used.

  19. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    NASA Astrophysics Data System (ADS)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that obtained from the more time-consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximates convergent close coupling calculations.
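
    A hedged sketch of one common form of the Elwert-Sommerfeld factor, drawn from the bremsstrahlung literature as an assumption (the authors' exact procedure may differ): with Sommerfeld parameters η = Z_eff·α/β for the incident (i) and scattered (f) electron, the plane-wave Born cross section is rescaled so it no longer vanishes at threshold.

    ```python
    import numpy as np

    # Assumed standard Elwert-Sommerfeld factor (not the paper's exact recipe).
    def elwert_factor(eta_i, eta_f):
        return (eta_f / eta_i) * (1.0 - np.exp(-2.0 * np.pi * eta_i)) / (
                1.0 - np.exp(-2.0 * np.pi * eta_f))

    def corrected_cross_section(sigma_born, eta_i, eta_f):
        # Multiplicative correction of the plane-wave Born result.
        return elwert_factor(eta_i, eta_f) * sigma_born

    # Near threshold the outgoing electron is slow (eta_f >> eta_i); the factor
    # grows there, compensating the Born cross section's unphysical drop to zero.
    print(elwert_factor(0.1, 5.0))
    ```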

  20. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area and power consumption. Using special SRAM cells and error-correcting codes is often too expensive relative to the performance needed. We discuss the possibility of finding optimal solutions for a modern onboard computer for scientific apparatus, focusing on multi-level cache memory design.

  1. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  2. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  3. 77 FR 50163 - Importer of Controlled Substances; Notice of Registration; Catalent Pharma Solutions, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... DEPARTMENT OF JUSTICE Drug Enforcement Administration Importer of Controlled Substances; Notice of Registration; Catalent Pharma Solutions, Inc. Correction In notice document 2012-19202 appearing on page 47114 in the issue of Tuesday, August 7, 2012, make the following correction: On page 47114, in the first...

  4. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
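
    An illustrative sketch only of the modeling idea (the basis functions, edge radius, and all names below are assumptions, not the production correction model): a regular linear overlay model is extended with one radial edge term and fitted by ordinary least squares, so a sparse edge sampling suffices.

    ```python
    import numpy as np

    # Regular (offset + linear) overlay model plus a single edge-specific term.
    def fit_edge_model(x, y, dx, r_edge=140.0, r_wafer=150.0):
        """x, y: mark positions [mm]; dx: measured overlay errors [nm]."""
        r = np.hypot(x, y)
        edge = np.maximum(0.0, r - r_edge) / (r_wafer - r_edge)  # 0 inside, ramps at edge
        B = np.column_stack([np.ones_like(x), x, y, edge])       # offset + linear + edge
        coef, *_ = np.linalg.lstsq(B, dx, rcond=None)
        return coef, dx - B @ coef                               # fit and residuals

    rng = np.random.default_rng(2)
    x, y = rng.uniform(-150, 150, 500), rng.uniform(-150, 150, 500)
    x, y = x[np.hypot(x, y) < 150], y[np.hypot(x, y) < 150]      # keep on-wafer points
    r = np.hypot(x, y)
    dx = 2.0 + 0.01 * x + 0.5 * np.maximum(0.0, r - 140.0)       # synthetic edge fingerprint
    coef, res = fit_edge_model(x, y, dx)
    print(np.round(coef, 2), float(res.std()))
    ```

    Because the edge component adds only a few parameters rather than per-field corrections, it can be estimated from far fewer edge measurements, which is the metrology saving described above.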

  5. Exact-solution for cone-plate viscometry

    NASA Astrophysics Data System (ADS)

    Giacomin, A. J.; Gilbert, P. H.

    2017-11-01

    The viscosity of a Newtonian fluid is often measured by confining the fluid to the gap between a rotating cone that is perpendicular to a fixed disk. We call this experiment cone-plate viscometry. When the cone angle approaches π/2, the viscometer gap is called narrow. The shear stress in the fluid, throughout a narrow gap, hardly departs from the shear stress exerted on the plate, and we thus call cone-plate flow nearly homogeneous. In this paper, we derive an exact solution for this slight heterogeneity and, from it, the correction factors for the shear rate on the cone and plate, for the torque, and thus for the measured Newtonian viscosity. These factors allow the cone-plate viscometer to be used more accurately, and with cone angles well below π/2. We find cone-plate flow field heterogeneity to be far slighter than previously thought. We next use our exact solution for the velocity to arrive at the exact solution for the temperature rise, due to viscous dissipation, in cone-plate flow subject to isothermal boundaries. Since Newtonian viscosity is a strong function of temperature, we expect our new exact solution for the temperature rise to be useful to those measuring Newtonian viscosity, and especially so to those using wide gaps. We include two worked examples to teach practitioners how to use our main results.
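
    For orientation, these are the textbook narrow-gap relations that the paper's exact solution corrects (a sketch under that assumption, not the paper's result): for gap angle θ0 between cone and plate, the shear rate is nearly uniform and the Newtonian viscosity follows from the torque M.

    ```python
    import numpy as np

    # Zeroth-order (narrow-gap) cone-plate relations; the paper derives the
    # correction factors to exactly these quantities.
    def cone_plate_viscosity(M, omega, R, theta0):
        """M: torque [N m], omega: rotation rate [rad/s],
        R: cone radius [m], theta0: gap angle [rad]."""
        shear_rate = omega / theta0                        # nearly homogeneous in a narrow gap
        eta = 3.0 * M * theta0 / (2.0 * np.pi * R**3 * omega)
        return eta, shear_rate

    eta, gdot = cone_plate_viscosity(M=1.2e-4, omega=10.0, R=0.025,
                                     theta0=np.deg2rad(2.0))
    print(f"eta = {eta:.4f} Pa s at shear rate {gdot:.1f} 1/s")
    ```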

  6. Joint pricing and production management: a geometric programming approach with consideration of cubic production cost function

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Hamidi Hesarsorkh, Aghil; Mohammadi, Mehdi; Bonyadi Naeini, Ali

    2015-06-01

    Coordination and harmony between different departments of a company can be an important factor in achieving competitive advantage if the company correctly aligns the strategies of those departments. This paper presents an integrated decision model based on recent advances in geometric programming. The demand for a product is modeled as a power function of factors such as the product's price, marketing expenditures, and consumer service expenditures, while the production cost is modeled as a cubic function of output. The model is solved using recent advances in convex optimization tools, and the solution procedure is illustrated by a numerical example.
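
    An illustrative sketch of the model family described above (all exponents and coefficients are made-up placeholders, and a general-purpose optimizer stands in for the paper's geometric-programming solution):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Demand as a power function of price p, marketing m, service s;
    # production cost cubic in output. Placeholder parameters throughout.
    def profit(v, k=1e4, a=1.8, b=0.12, c=0.08, c1=2.0, c2=1e-3, c3=1e-7):
        p, m, s = v
        d = k * p**(-a) * m**b * s**c                 # demand (power function)
        cost = c1 * d + c2 * d**2 + c3 * d**3         # cubic production cost
        return -(p * d - cost - m - s)                # negative profit to minimize

    res = minimize(profit, x0=[10.0, 100.0, 100.0], bounds=[(1.0, 1e4)] * 3)
    print(np.round(res.x, 2), -res.fun)               # (price, marketing, service), profit
    ```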

  7. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    The report tabulates dB(A) octave band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, and overall in-helmet dB(A) levels.

  8. Selecting the correct solution to a physics problem when given several possibilities

    NASA Astrophysics Data System (ADS)

    Richards, Evan Thomas

    Despite decades of research on the learning actions associated with effective learners (Palincsar and Brown, 1984; Atkinson, et al., 2000), the literature has not fully addressed how to cue those actions (particularly within the realm of physics). Recent reforms that integrate incorrect solutions suggest a possible avenue to reach those actions. However, there is only a limited understanding of what actions are invoked with such reforms (Grosse and Renkl, 2007). This paper reports on a study that tasked participants with selecting the correct solution to a physics problem when given three possible solutions, where only one of the solutions was correct and the other two contained errors. Think-aloud protocol data (Ericsson and Simon, 1993) were analyzed using a framework adapted from Palincsar and Brown (1984). The cued actions were indeed connected to those identified in the worked-example literature. Particularly satisfying is the presence of internal consistency checks (i.e., are the solutions self-consistent?), a behavior predicted by the Palincsar and Brown (1984) framework but not explored in the worked-example realm. Participant discussions were also found to be associated with those physics-related solution features that were varied across solutions (such as fundamental principle selection or system and surroundings selections).

  9. Molecular density functional theory of water describing hydrophobicity at short and long length scales

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2013-10-01

    We present an extension of our recently introduced molecular density functional theory of water [G. Jeanmairet et al., J. Phys. Chem. Lett. 4, 619 (2013)] to the solvation of hydrophobic solutes of various sizes, going from angstroms to nanometers. The theory is based on the quadratic expansion of the excess free energy in terms of two classical density fields: the particle density and the multipolar polarization density. Its implementation requires as input a molecular model of water and three measurable bulk properties, namely, the structure factor and the k-dependent longitudinal and transverse dielectric susceptibilities. The fine three-dimensional water structure around small hydrophobic molecules is found to be well reproduced. In contrast, the computed solvation free-energies appear overestimated and do not exhibit the correct qualitative behavior when the hydrophobic solute is grown in size. These shortcomings are corrected, in the spirit of the Lum-Chandler-Weeks theory, by complementing the functional with a truncated hard-sphere functional acting beyond quadratic order in density, and making the resulting functional compatible with the Van-der-Waals theory of liquid-vapor coexistence at long range. Compared to available molecular simulations, the approach yields reasonable solvation structure and free energy of hard or soft spheres of increasing size, with a correct qualitative transition from a volume-driven to a surface-driven regime at the nanometer scale.

  10. The new view of hydrophobic free energy.

    PubMed

    Baldwin, Robert L

    2013-04-17

    In the new view, hydrophobic free energy is measured by the work of solute transfer of hydrocarbon gases from vapor to aqueous solution. Reasons are given for believing that older values, measured by solute transfer from a reference solvent to water, are not quantitatively correct. The hydrophobic free energy from gas-liquid transfer is the sum of two opposing quantities, the cavity work (unfavorable) and the solute-solvent interaction energy (favorable). Values of the interaction energy have been found by simulation for linear alkanes and are used here to find the cavity work, which scales linearly with molar volume, not accessible surface area. The hydrophobic free energy is the dominant factor driving folding as judged by the heat capacity change for transfer, which agrees with values for solvating hydrocarbon gases. There is an apparent conflict with earlier values of hydrophobic free energy from studies of large-to-small mutations and an explanation is given. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  11. Tidal Amplitude for a Self-gravitating, Compressible Sphere

    NASA Astrophysics Data System (ADS)

    Hurford, T. A.; Greenberg, R.

    2001-11-01

    Most modern evaluations of tidal amplitude derive from the approach presented by Love [1]. Love's analysis for a homogeneous sphere assumed an incompressible material, which required the introduction of a non-rigorously justified pressure term. We solve the more general case of arbitrary compressibility, which allows for a more straightforward derivation. We find the h_2 Love number of a body of radius R, density ρ, surface gravity g, rigidity μ, and Lamé constant λ to be

    h_2 = \left[ \frac{5/2}{1 + \frac{19\mu}{2\rho g R}} \right] \left\{ \frac{2\rho g R\,(35 + 28\mu/\lambda) + 19\mu\,(35 + 28\mu/\lambda)}{2\rho g R\,(35 + 31\mu/\lambda) + 19\mu\,(35 + \frac{490}{19}\mu/\lambda)} \right\}

    This h_2 is the product of Love's expression for h_2 (in square brackets) and a "compressibility-correction" factor (in curly brackets). Unlike Love's expression, this result is valid for any degree of compressibility (i.e., any λ). For the incompressible case (λ → ∞) the correction factor approaches 1, so that h_2 matches the classical form given by Love. In reality, of course, materials are not incompressible, and the difference between our solution and Love's is significant. Assuming that the elastic terms dominate over the gravitational contribution (i.e., 19μ/(2ρgR) >> 1), our solution can be ~7% larger than Love's for large μ/λ. If gravity dominates (i.e., 19μ/(2ρgR) << 1), our solution is ~10% smaller than Love's for large μ/λ. For a rocky (μ/λ ~ 1 [2]), Earth-sized (19μ/(2ρgR) ~ 1) body, h_2 would be reduced by about 1% from the classical formula. Similarly, under some circumstances the l_2 Love number for a uniform sphere could be 22% smaller than Love's evaluation. [1] Love, A.E.H., A Treatise on the Mathematical Theory of Elasticity, New York: Dover Publications, 1944. [2] Kaula, W.M., An Introduction to Planetary Physics: The Terrestrial Planets, John Wiley & Sons, Inc., 1968.
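
    Since the full expression is quoted above, it can be evaluated directly; a small sketch (the parameter values are round illustrative numbers, not values from the abstract):

    ```python
    # Direct evaluation of the h2 expression quoted above.
    def h2(mu, lam, rho, g, R):
        e = 19.0 * mu / (2.0 * rho * g * R)         # elastic/gravity ratio
        love = 2.5 / (1.0 + e)                      # Love's incompressible h2
        r = mu / lam
        num = 2*rho*g*R * (35 + 28*r) + 19*mu * (35 + 28*r)
        den = 2*rho*g*R * (35 + 31*r) + 19*mu * (35 + (490.0/19.0)*r)
        return love * (num / den)                   # compressibility-corrected h2

    # lam -> infinity recovers the classical (incompressible) value:
    print(h2(mu=50e9, lam=1e30, rho=5500.0, g=9.8, R=6.4e6))
    print(h2(mu=50e9, lam=50e9, rho=5500.0, g=9.8, R=6.4e6))  # mu/lam ~ 1
    ```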

  12. II. Comment on “Critique and correction of the currently accepted solution of the infinite spherical well in quantum mechanics” by Huang Young-Sea and Thomann Hans-Rudolph

    NASA Astrophysics Data System (ADS)

    Prados, Antonio; Plata, Carlos A.

    2016-12-01

    We comment on the paper "Critique and correction of the currently accepted solution of the infinite spherical well in quantum mechanics" by Huang Young-Sea and Thomann Hans-Rudolph, EPL 115, 60001 (2016) .

  13. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

    Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it increases the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs, based on a digital signal processor and field-programmable gate arrays, is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging procedures with a 128 x 128 pixel IRFPA camera prototype.
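
    A hedged sketch of the two-step scheme just described, assuming a generic logistic S-curve for the linearization (the paper's response model may differ): first invert the S-curve per pixel, then apply the usual two-point gain/offset correction.

    ```python
    import numpy as np

    def linearize(x, x_max):
        # Inverse of an assumed logistic S-curve response (illustrative).
        x = np.clip(x / x_max, 1e-6, 1 - 1e-6)
        return np.log(x / (1.0 - x))

    def two_point_nuc(frame, low, high, x_max=1.0):
        """frame: raw image; low/high: per-pixel responses to two uniform sources."""
        f, lo, hi = (linearize(a, x_max) for a in (frame, low, high))
        gain = (hi.mean() - lo.mean()) / (hi - lo)      # per-pixel gain
        offset = lo.mean() - gain * lo                  # per-pixel offset
        return gain * f + offset                        # corrected, on the linearized scale

    rng = np.random.default_rng(1)
    lo = rng.uniform(0.2, 0.3, (128, 128))
    hi = rng.uniform(0.6, 0.8, (128, 128))
    print(two_point_nuc(0.5 * (lo + hi), lo, hi).std())  # residual nonuniformity is small
    ```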

  14. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  15. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
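
    For orientation only, here is the classical Terzaghi-style weighting that the authors' numerical solutions improve upon (a sketch of the standard approach, not the method proposed in the paper): each observed plane is up-weighted by the reciprocal of the cosine of the angle between its normal and the scanline, with a cap to avoid blow-up.

    ```python
    import numpy as np

    # Classical Terzaghi weighting for scanline orientation bias (not the
    # cumulative-probability correction proposed in the paper).
    def terzaghi_weights(normals, scanline, max_w=10.0):
        """normals: (n, 3) unit normals of observed discontinuities,
        scanline: unit vector along the scanline."""
        cos_a = np.abs(normals @ scanline)        # |cos| of normal-scanline angle
        w = 1.0 / np.maximum(cos_a, 1.0 / max_w)  # each plane stands in for 1/|cos| planes
        return w / w.sum()                        # normalized frequency weights

    normals = np.array([[0.0, 0.0, 1.0],
                        [0.0, 0.7071, 0.7071],
                        [1.0, 0.0, 0.0]])          # last one is nearly parallel to the line
    print(terzaghi_weights(normals, np.array([0.0, 0.0, 1.0])))
    ```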

  16. COMPARISON OF EXPERIMENTS TO CFD MODELS FOR MIXING USING DUAL OPPOSING JETS IN TANKS WITH AND WITHOUT INTERNAL OBSTRUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leishear, R.; Poirier, M.; Lee, S.

    2012-06-26

    This paper documents testing methods, statistical data analysis, and a comparison of experimental results to CFD models for blending of fluids, which were blended using a single pump designed with dual opposing nozzles in an eight foot diameter tank. Overall, this research presents new findings in the field of mixing research. Specifically, blending processes were clearly shown to have random, chaotic effects, where possible causal factors such as turbulence, pump fluctuations, and eddies required future evaluation. CFD models were shown to provide reasonable estimates for the average blending times, but large variations -- or scatter -- occurred for blending times during similar tests. Using this experimental blending time data, the chaotic nature of blending was demonstrated, and the variability of blending times with respect to average blending times was shown to increase with system complexity. Prior to this research, the variation in blending times caused discrepancies between CFD models and experiments. This research addressed this discrepancy and determined statistical correction factors that can be applied to CFD models, thereby quantifying techniques that permit the application of CFD models to complex systems, such as blending. These blending time correction factors for CFD models are comparable to safety factors used in structural design, and compensate for variability that cannot be theoretically calculated. To determine these correction factors, research was performed to investigate blending using a pump with dual opposing jets, which re-circulate fluids in the tank to promote blending when fluids are added to the tank. In all, eighty-five tests were performed, both in a tank without internal obstructions and in a tank with vertical obstructions similar to a tube bank in a heat exchanger. These obstructions provided scale models of vertical cooling coils below the liquid surface for a full scale, liquid radioactive waste storage tank. Also, different jet diameters and different horizontal orientations of the jets were investigated with respect to blending. Two types of blending tests were performed. The first set of eighty-one tests blended small quantities of tracer fluids into solution. Data from these tests were statistically evaluated to determine blending times for the addition of tracer solution to tanks, and blending times were successfully compared to Computational Fluid Dynamics (CFD) models. The second set of four tests blended bulk quantities of solutions of different density and viscosity. For example, in one test a quarter tank of water was added to three quarters of a tank of a more viscous salt solution. In this case, the blending process was noted to change significantly due to stratification of fluids, and blending times increased substantially. However, CFD models for stratification and the variability of blending times for different density fluids were not pursued, and further research is recommended in the area of blending bulk quantities of fluids. All in all, testing showed that CFD models can be effectively applied if statistically validated through experimental testing, but in the absence of experimental validation CFD models can be extremely misleading as a basis for design and operation decisions.

  17. Differential pricing of new pharmaceuticals in lower income European countries.

    PubMed

    Kaló, Zoltán; Annemans, Lieven; Garrison, Louis P

    2013-12-01

    Pharmaceutical companies adjust the pricing strategy of innovative medicines to the imperatives of their major markets. The ability of payers to influence the ex-factory price of new drugs depends on country population size and income per capita, among other factors. Differential pricing based on Ramsey principles is a 'second-best' solution to correct the imperfections of the global market for innovative pharmaceuticals, and it is also consistent with standard norms of equity. This analysis summarizes the boundaries of differential pharmaceutical pricing for policymakers, payers and other stakeholders in lower-income countries, with special focus on Central-Eastern Europe, and describes the feasibility and implications of potential solutions to ensure lower pharmaceutical prices as compared to higher-income countries. European stakeholders, especially in Central-Eastern Europe and at the EU level, should understand the implications of increased transparency of pricing and should develop solutions to prevent the limited accessibility of new medicines in lower-income countries.

  18. Second-order numerical methods for multi-term fractional differential equations: Smooth and non-smooth solutions

    NASA Astrophysics Data System (ADS)

    Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em

    2017-12-01

    Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
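
    A hedged sketch of the correction-term idea in generic form (Lubich-style starting weights; the modified WSGL weights in the paper differ in detail):

    ```latex
    \[
      D^{\alpha} u(t_n) \;\approx\; h^{-\alpha} \sum_{k=0}^{n} g^{(\alpha)}_{n-k}\, u(t_k)
      \;+\; h^{-\alpha} \sum_{k=1}^{m} w_{n,k}\, u(t_k)
    \]
    ```

    The m starting weights w_{n,k} are chosen so that the formula is exact for the leading low-regularity terms t^{σ_1}, ..., t^{σ_m} of the solution, which is how second-order accuracy is restored for non-smooth solutions without explicitly knowing the analytical solution.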

  19. Direct perturbation theory for the dark soliton solution to the nonlinear Schrödinger equation with normal dispersion.

    PubMed

    Yu, Jia-Lu; Yang, Chun-Nuan; Cai, Hao; Huang, Nian-Ning

    2007-04-01

    After finding the basic solutions of the linearized nonlinear Schrödinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  20. Aerosol hygroscopic growth parameterization based on a solute specific coefficient

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.

    2011-09-01

    Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality; νi also appears in the exponent, in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μssat. The computed aerosol HGF and supersaturation (Köhler theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4, which are relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.

  1. Characterization of the nanoDot OSLD dosimeter in CT

    PubMed Central

    Scarboro, Sarah B.; Cody, Dianna; Alvarez, Paola; Followill, David; Court, Laurence; Stingo, Francesco C.; Zhang, Di; Kry, Stephen F.

    2015-01-01

    Purpose: The extensive use of computed tomography (CT) in diagnostic procedures is accompanied by a growing need for more accurate and patient-specific dosimetry techniques. Optically stimulated luminescent dosimeters (OSLDs) offer a potential solution for patient-specific CT point-based surface dosimetry by measuring air kerma. The purpose of this work was to characterize the OSLD nanoDot for CT dosimetry, quantifying necessary correction factors, and evaluating the uncertainty of these factors. Methods: A characterization of the Landauer OSL nanoDot (Landauer, Inc., Greenwood, IL) was conducted using both measurements and theoretical approaches in a CT environment. The effects of signal depletion, signal fading, dose linearity, and angular dependence were characterized through direct measurement for CT energies (80–140 kV) and delivered doses ranging from ∼5 to >1000 mGy. Energy dependence as a function of scan parameters was evaluated using two independent approaches: direct measurement and a theoretical approach based on Burlin cavity theory and Monte Carlo simulated spectra. This beam-quality dependence was evaluated for a range of CT scanning parameters. Results: Correction factors for the dosimeter response in terms of signal fading, dose linearity, and angular dependence were found to be small for most measurement conditions (<3%). The relative uncertainty was determined for each factor and reported at the two-sigma level. Differences in irradiation geometry (rotational versus static) resulted in a difference in dosimeter signal of 3% on average. Beam quality varied with scan parameters and necessitated the largest correction factor, ranging from 0.80 to 1.15 relative to a calibration performed in air using a 120 kV beam. Good agreement was found between the theoretical and measurement approaches. Conclusions: Correction factors for the measurement of air kerma were generally small for CT dosimetry, although angular effects, and particularly effects due to changes in beam quality, could be more substantial. In particular, it would likely be necessary to account for variations in CT scan parameters and measurement location when performing CT dosimetry using OSLD. PMID:25832070
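
    The measurement model implied above is a multiplicative chain of correction factors applied to the calibrated reading; a minimal sketch (the factor names and values are illustrative placeholders, not the paper's tabulated data):

    ```python
    # Air kerma from an OSLD reading via multiplicative correction factors.
    def air_kerma(raw_counts, n_cal, k_fade=1.0, k_lin=1.0, k_ang=1.0, k_q=1.0):
        """raw_counts: OSL reader signal; n_cal: calibration [mGy per count]
        at the in-air 120 kV reference; k_q: beam-quality factor (abstract
        reports a 0.80-1.15 range); remaining factors are typically near 1."""
        return raw_counts * n_cal * k_fade * k_lin * k_ang * k_q

    print(air_kerma(raw_counts=1.5e5, n_cal=1.0e-4, k_q=0.92))  # mGy
    ```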

  2. Characterization of the nanoDot OSLD dosimeter in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarboro, Sarah B.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030; The Methodist Hospital, Houston, Texas 77030

    Purpose: The extensive use of computed tomography (CT) in diagnostic procedures is accompanied by a growing need for more accurate and patient-specific dosimetry techniques. Optically stimulated luminescent dosimeters (OSLDs) offer a potential solution for patient-specific CT point-based surface dosimetry by measuring air kerma. The purpose of this work was to characterize the OSLD nanoDot for CT dosimetry, quantifying necessary correction factors, and evaluating the uncertainty of these factors. Methods: A characterization of the Landauer OSL nanoDot (Landauer, Inc., Greenwood, IL) was conducted using both measurements and theoretical approaches in a CT environment. The effects of signal depletion, signal fading, dose linearity, and angular dependence were characterized through direct measurement for CT energies (80–140 kV) and delivered doses ranging from ∼5 to >1000 mGy. Energy dependence as a function of scan parameters was evaluated using two independent approaches: direct measurement and a theoretical approach based on Burlin cavity theory and Monte Carlo simulated spectra. This beam-quality dependence was evaluated for a range of CT scanning parameters. Results: Correction factors for the dosimeter response in terms of signal fading, dose linearity, and angular dependence were found to be small for most measurement conditions (<3%). The relative uncertainty was determined for each factor and reported at the two-sigma level. Differences in irradiation geometry (rotational versus static) resulted in a difference in dosimeter signal of 3% on average. Beam quality varied with scan parameters and necessitated the largest correction factor, ranging from 0.80 to 1.15 relative to a calibration performed in air using a 120 kV beam. Good agreement was found between the theoretical and measurement approaches. Conclusions: Correction factors for the measurement of air kerma were generally small for CT dosimetry, although angular effects, and particularly effects due to changes in beam quality, could be more substantial. In particular, it would likely be necessary to account for variations in CT scan parameters and measurement location when performing CT dosimetry using OSLD.

  3. Nonlinear Local Bending Response and Bulging Factors for Longitudinal and Circumferential Cracks in Pressurized Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Young, Richard D.; Rose, Cheryl A.; Starnes, James H., Jr.

    2000-01-01

    Results of a geometrically nonlinear finite element parametric study to determine curvature correction factors or bulging factors that account for increased stresses due to curvature for longitudinal and circumferential cracks in unstiffened pressurized cylindrical shells are presented. Geometric parameters varied in the study include the shell radius, the shell wall thickness, and the crack length. The major results are presented in the form of contour plots of the bulging factor as a function of two nondimensional parameters: the shell curvature parameter, lambda, which is a function of the shell geometry, Poisson's ratio, and the crack length; and a loading parameter, eta, which is a function of the shell geometry, material properties, and the applied internal pressure. These plots identify the ranges of the shell curvature and loading parameters for which the effects of geometric nonlinearity are significant. Simple empirical expressions for the bulging factor are then derived from the numerical results and shown to predict accurately the nonlinear response of shells with longitudinal and circumferential cracks. The numerical results are also compared with analytical solutions based on linear shallow shell theory for thin shells, and with some other semi-empirical solutions from the literature, and limitations on the use of these other expressions are suggested.
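
    For context, a hedged sketch of the linear-theory quantities the paper compares against (the parameter definition and the Folias-type coefficient below are the commonly quoted shallow-shell values, an assumption rather than this paper's nonlinear results):

    ```python
    import numpy as np

    # Commonly quoted shallow-shell curvature parameter and the classical
    # Folias-type bulging factor it feeds (linear theory; assumptions, not the
    # nonlinear finite element results of the paper).
    def curvature_parameter(a, R, t, nu=0.3):
        """a: half crack length, R: shell radius, t: wall thickness."""
        return (12.0 * (1.0 - nu**2))**0.25 * a / np.sqrt(R * t)

    def folias_bulging_factor(lam):
        return np.sqrt(1.0 + 0.317 * lam**2)   # linear shallow-shell approximation

    lam = curvature_parameter(a=0.05, R=1.0, t=0.002)
    print(lam, folias_bulging_factor(lam))
    ```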

  4. COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov; Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637; Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.

  5. A simple and accurate method for calculation of the structure factor of interacting charged spheres.

    PubMed

    Wu, Chu; Chan, Derek Y C; Tabor, Rico F

    2014-07-15

    Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons between experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q) and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    DOE PAGES

    Kilcrease, D. P.; Brookes, S.

    2013-08-19

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross section similar to that from the more time-consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. Furthermore, we also look at some further modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximates convergent close coupling calculations.
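
    A hedged sketch of the kind of threshold correction described, assuming the common form of the Elwert-Sommerfeld factor (a ratio of Coulomb penetration factors for the initial and final electron, with Sommerfeld parameters eta = Z/k in atomic units); the precise form used by the authors may differ.

    ```python
    import math

    def elwert_factor(Z, k_i, k_f):
        """Elwert-Sommerfeld factor in its common form,
        f = (eta_f / eta_i) * (1 - exp(-2*pi*eta_i)) / (1 - exp(-2*pi*eta_f)),
        eta = Z / k (atomic units). Multiplying the plane-wave Born cross
        section by f lifts its unphysical zero at threshold: as k_f -> 0,
        eta_f -> infinity and f grows like 1/k_f, compensating the vanishing
        Born result. Treat this exact form as an assumption."""
        eta_i, eta_f = Z / k_i, Z / k_f
        return (eta_f / eta_i) * (1.0 - math.exp(-2.0 * math.pi * eta_i)) \
                               / (1.0 - math.exp(-2.0 * math.pi * eta_f))
    ```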

  7. WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Folkerts, M; Jiang, S

    Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders the convergence of the IR algorithm. This is particularly a problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts to a large area. Our purpose is to develop a practical cone artifacts correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and the measured projections. We overcome this problem by using FDK-type reconstruction to estimate the missing data, and design weighting factors to compensate for the inconsistency caused by the missing data. We validate the proposed methods in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to the FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of projection data, the severe cone artifacts degrade overall image quality. For a head-and-neck case under full-fan mode, 13 out of 80 slices are contaminated. It is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with the mean intensity difference decreased from ∼497 HU to ∼39 HU for the contaminated slices. Conclusion: A practical and effective solution for cone artifacts correction is proposed and validated within a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01).

  8. Integrability in AdS/CFT correspondence: quasi-classical analysis

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay

    2009-06-01

    In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure—the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method perfectly reproduces the earlier results obtained by expanding the string action around some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields, and as a result is free from ambiguities. On the other hand, the finite size corrections in a particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum of all physical fluctuations, and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations, with exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector are derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.

  9. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real-time solution of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12-Foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, which necessitates exploring further improvements in solution time. This paper therefore presents some of the associated theory of the solution of linear systems, then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.

  10. Extension of relativistic dissipative hydrodynamics to third order

    NASA Astrophysics Data System (ADS)

    El, Andrej; Xu, Zhe; Greiner, Carsten

    2010-04-01

    Following the procedure introduced by Israel and Stewart, we expand the entropy current up to third order in the shear stress tensor παβ and derive a novel third-order evolution equation for παβ. This equation is solved for the one-dimensional Bjorken boost-invariant expansion. The scaling solutions for various values of the shear viscosity to entropy density ratio η/s are shown to be in very good agreement with those obtained from kinetic transport calculations. With the pressure isotropy starting at 1 at τ0 = 0.4 fm/c, the third-order corrections to Israel-Stewart theory are approximately 10% for η/s = 0.2 and more than a factor of 2 for η/s = 3. We also estimate all higher-order corrections to Israel-Stewart theory and demonstrate their importance in describing highly viscous matter.

  11. Diffusion of Small Solute Particles in Viscous Liquids: Cage Diffusion, a Result of Decoupling of Solute-Solvent Dynamics, Leads to Amplification of Solute Diffusion.

    PubMed

    Acharya, Sayantan; Nandi, Manoj K; Mandal, Arkajit; Sarkar, Sucharita; Bhattacharyya, Sarika Maitra

    2015-08-27

    We study the diffusion of small solute particles through a solvent by keeping the solute-solvent interaction repulsive and varying the solvent properties. The study involves computer simulations, development of a new model to describe the diffusion of small solutes in a solvent, and mode coupling theory (MCT) calculations. In a viscous solvent, a small solute diffuses via coupling to the solvent hydrodynamic modes and also through the transient cages formed by the solvent. The model developed can estimate the independent contributions from these two different channels of diffusion. Although the solute diffusion in all the systems shows an amplification, its degree increases with solvent viscosity. The model correctly predicts that when the solvent viscosity is high, the solute primarily diffuses by exploiting the solvent cages. In such a scenario, the MCT diffusion computed for a static solvent provides a correct estimation of the cage diffusion.
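
    The amplification discussed above is conventionally quoted relative to the hydrodynamic (Stokes-Einstein) prediction; the toy functions below compute that baseline and the amplification ratio. This is an illustrative diagnostic, not the authors' two-channel model.

    ```python
    import math

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def stokes_einstein_D(T, eta, r, slip=True):
        """Hydrodynamic estimate D = kB*T / (c*pi*eta*r),
        with c = 4 for slip and c = 6 for stick boundary conditions."""
        c = 4.0 if slip else 6.0
        return KB * T / (c * math.pi * eta * r)

    def amplification(D_measured, T, eta, r):
        """Ratio of measured diffusivity to the hydrodynamic prediction;
        values > 1 signal the cage-diffusion channel discussed above."""
        return D_measured / stokes_einstein_D(T, eta, r)
    ```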

  12. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors, such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc), are needed. The ke factor corrects for the charge lost from the collecting volume and the ksc factor corrects for the scattering of photons into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
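
    Schematically, such factors enter the free-air-chamber air-kerma evaluation as a multiplicative chain. The sketch below shows that chain with only the two factors reported here; a real primary standard applies several further factors (attenuation, humidity, recombination, ...), so treat this as a deliberately simplified assumption.

    ```python
    def air_kerma(Q, m_air, W_over_e=33.97, corrections=(1.0704, 0.9982)):
        """Schematic free-air-chamber evaluation:
        K = (Q / m_air) * (W/e) * product(k_i),
        Q = collected charge [C], m_air = mass of air in the collecting
        volume [kg], W/e = 33.97 J/C for dry air. The default corrections
        are the paper's ke and ksc; everything else is omitted."""
        K = (Q / m_air) * W_over_e
        for k in corrections:
            K *= k
        return K
    ```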

  13. Thermodynamics of charged Lifshitz black holes with quadratic corrections

    NASA Astrophysics Data System (ADS)

    Bravo-Gaete, Moisés; Hassaïne, Mokhtar

    2015-03-01

    In arbitrary dimension, we consider the Einstein-Maxwell Lagrangian supplemented by the more general quadratic-curvature corrections. For this model, we derive four classes of charged Lifshitz black hole solutions for which the metric function is shown to depend on a unique integration constant. The masses of these solutions are computed using the quasilocal formalism based on the relation established between the off-shell Abbott-Deser-Tekin and Noether potentials. Among these four solutions, three of them are interpreted as extremal in the sense that their masses vanish identically. For the last family of solutions, both the quasilocal mass and the electric charge are shown to depend on the integration constant. Finally, we verify that the first law of thermodynamics holds for each solution and a Smarr formula is also established for the four solutions.

  14. Puzzler Solution: Just Making an Observation | Poster

    Cancer.gov

    Editor’s Note: It looks like we stumped you. None of the puzzler guesses were correct, but our winner was the closest to getting it right. He guessed it was a sanitary sewer clean-out pipe, and that’s what the photo looks like, according to our source at Facilities Maintenance and Engineering. Please continue reading for the correct puzzler solution. By Ashley DeVine, Staff Writer

  15. A numerical study of the steady scalar convective diffusion equation for small viscosity

    NASA Technical Reports Server (NTRS)

    Giles, M. B.; Rose, M. E.

    1983-01-01

    A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.

  16. Effective Boundary Conditions for Continuum Method of Investigation of Rarefied Gas Flow over Blunt Body

    NASA Astrophysics Data System (ADS)

    Brykina, I. G.; Rogov, B. V.; Semenov, I. L.; Tirskiy, G. A.

    2011-05-01

    Super- and hypersonic rarefied gas flow over blunt bodies is investigated by using an asymptotically correct viscous shock layer (VSL) model with effective boundary conditions, together with a thin viscous shock layer model. Correct shock and wall conditions for the VSL are proposed, taking into account curvature terms that are significant at low Reynolds numbers. These conditions improve the original VSL model of Davis [1]. Numerical calculation of the Krook equation [2] is carried out to verify the continuum results. Continuum numerical and asymptotic solutions are compared with the kinetic solution, the free-molecule flow solution, and DSMC solutions [3, 4, 5] over a wide range of free-stream Knudsen number Kn∞. It is shown that taking into account shock and surface curvature terms has a pronounced effect on skin friction and heat transfer in the transitional flow regime. Using the asymptotically correct VSL model with effective boundary conditions significantly extends the range of its applicability to higher Kn∞ numbers.

  17. One-loop quantum gravity repulsion in the early Universe.

    PubMed

    Broda, Bogusław

    2011-03-11

    Perturbative quantum gravity formalism is applied to compute the lowest order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for radiation). The presented approach is analogous to the approach applied to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to the approach applied to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), have (UV-cutoff-free) repulsive properties, opposite to the classical behavior, which are not negligible in the very early Universe. The repulsive "quantum forces" resemble those known from loop quantum cosmology.

  18. Vibrations of a thin cylindrical shell stiffened by rings with various stiffness

    NASA Astrophysics Data System (ADS)

    Nesterchuk, G. A.

    2018-05-01

    The problem of vibrations of a thin-walled elastic cylindrical shell reinforced by frames of different rigidity is investigated. The solution for the case of clamped shell edges was obtained by asymptotic methods and refined by the finite element method. Rings with zero eccentricity and stiffness varying along the generatrix of the shell cylinder are considered. Varying the coefficients of the distribution functions describing the rigidity of the frames, and refining these parameters, makes it possible to find correction factors for approximate analytical formulas.

  1. A de Sitter tachyon thick braneworld

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germán, Gabriel; Herrera-Aguilar, Alfredo; Malagón-Morejón, Dagoberto

    2013-02-01

    Among the multiple 5D thick braneworld models that have been proposed in recent years in order to address several open problems in modern physics, there is a specific one involving a tachyonic bulk scalar field. Delving into this framework, a thick braneworld with a cosmological background induced on the brane is here investigated. The respective field equations — derived from the model with a warped 5D geometry — are highly non-linear equations, admitting a non-trivial solution for the warp factor and the tachyon scalar field as well, in a de Sitter 4D cosmological background. Moreover, the non-linear tachyonic scalar field, which generates the brane together with warped gravity, has the form of a kink-like configuration. Notwithstanding, the restrictive character of the non-linear field equations does not allow one to easily find thick brane solutions with a decaying warp factor, which leads to the localization of 4D gravity and other matter fields. We derive such a thick brane configuration altogether in this tachyon-gravity setup. When analyzing the spectrum of gravity fluctuations in the transverse traceless sector, the 4D gravity is shown to be localized due to the presence of a single zero-mode bound state, separated from a continuum of massive Kaluza-Klein (KK) modes by a mass gap. This contrasts with previous results, where there is a massive KK bound excitation with no clear physical interpretation. The mass gap is determined by the scale of the metric parameter H. Finally, the corrections to Newton's law in this model are computed and shown to decay exponentially. This is in full compliance with corrections reported previously (up to a constant factor) within similar braneworlds with induced 4D de Sitter metric, despite the fact that the warp factor and the massive modes have a different form.

  2. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    NASA Technical Reports Server (NTRS)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived as a mixed system of differential and algebraic equations (DAEs). The DAEs are kept in implicit form to save arithmetic and preserve the sparsity of the system, and are solved by a robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within a given tolerance using Newton's iterative method. For each iteration, a linear system of the form J delta X = E has to be solved. The computational cost of solving this linear system directly by LU factorization is O(n^3), and it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system, the multiplicative and additive computational costs for solving J delta X = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
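
    The corrector step described above can be sketched generically as follows; a general sparse LU factorization stands in for the paper's O(n) recursive elimination, which exploits the pendulum-specific structure of J.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    def newton_correct(x, residual, jacobian, tol=1e-10, max_iter=20):
        """Correct a predicted solution x of the implicit system F(x) = 0
        by Newton's method: solve J * dx = -F and update. `residual` and
        `jacobian` are user-supplied callables; sketch only."""
        for _ in range(max_iter):
            F = residual(x)
            if np.linalg.norm(F) < tol:
                break
            J = csc_matrix(jacobian(x))
            x = x + splu(J).solve(-F)
        return x
    ```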

  3. Evaluation of the telegrapher's equation and multiple-flux theories for calculating the transmittance and reflectance of a diffuse absorbing slab.

    PubMed

    Kong, Steven H; Shore, Joel D

    2007-03-01

    We study the propagation of light through a medium containing isotropic scattering and absorption centers. With a Monte Carlo simulation serving as the benchmark solution to the radiative transfer problem of light propagating through a turbid slab, we compare the transmission and reflection density computed from the telegrapher's equation, the diffusion equation, and multiple-flux theories such as the Kubelka-Munk and four-flux theories. Results are presented for both normally incident light and diffusely incident light. We find that we can always obtain very good results from the telegrapher's equation provided that two parameters that appear in the solution are set appropriately. We also find an interesting connection between certain solutions of the telegrapher's equation and solutions of the Kubelka-Munk and four-flux theories with a small modification to how the phenomenological parameters in those theories are traditionally related to the optical scattering and absorption coefficients of the slab. Finally, we briefly explore how well the theories can be extended to the case of anisotropic scattering by multiplying the scattering coefficient by a simple correction factor.

  4. Importance of the effective strong ion difference of an intravenous solution in the treatment of diarrheic calves with naturally acquired acidemia and strong ion (metabolic) acidosis.

    PubMed

    Müller, K R; Gentile, A; Klee, W; Constable, P D

    2012-01-01

    The effect of sodium bicarbonate on acid-base balance in metabolic acidosis is interpreted differently by Henderson-Hasselbalch and strong ion acid-base approaches. Application of the traditional bicarbonate-centric approach indicates that bicarbonate administration corrects the metabolic acidosis by buffering hydrogen ions, whereas strong ion difference theory indicates that the co-administration of the strong cation sodium with a volatile buffer (bicarbonate) corrects the strong ion acidosis by increasing the strong ion difference (SID) in plasma. To investigate the relative importance of the effective SID of IV solutions in correcting acidemia in calves with diarrhea. Twenty-two Holstein-Friesian calves (4-21 days old) with naturally acquired diarrhea and strong ion (metabolic) acidosis. Calves were randomly assigned to IV treatment with a solution of sodium bicarbonate (1.4%) or sodium gluconate (3.26%). Fluids were administered over 4 hours and the effect on acid-base balance was determined. Calves suffered from acidemia owing to moderate to strong ion acidosis arising from hyponatremia and hyper-D-lactatemia. Sodium bicarbonate infusion was effective in correcting the strong ion acidosis. In contrast, sodium gluconate infusion did not change blood pH, presumably because the strong anion gluconate was minimally metabolized. A solution containing a high effective SID (sodium bicarbonate) is much more effective in alkalinizing diarrheic calves with strong ion acidosis than a solution with a low effective SID (sodium gluconate). Sodium gluconate is ineffective in correcting acidemia, which can be explained using traditional acid-base theory but requires a new parameter, effective SID, to be understood using the strong ion approach.

  5. Does RAIM with Correct Exclusion Produce Unbiased Positions?

    PubMed Central

    Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.

    2017-01-01

    As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion-process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown, although statistical testing is intended to remove biases from the data, that biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely. PMID:28672862

  6. Carrier-phase multipath corrections for GPS-based satellite attitude determination

    NASA Technical Reports Server (NTRS)

    Axelrad, A.; Reichert, P.

    2001-01-01

    This paper demonstrates the high degree of spatial repeatability of carrier-phase multipath errors in a spacecraft environment and describes a correction technique, termed the sky map method, which exploits this spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.

  7. Systematic evaluation of three different commercial software solutions for automatic segmentation for adaptive therapy in head-and-neck, prostate and pleural cancer.

    PubMed

    La Macchia, Mariangela; Fellin, Francesco; Amichetti, Maurizio; Cianchetti, Marco; Gianolini, Stefano; Paola, Vitali; Lomax, Antony J; Widesott, Lamberto

    2012-09-18

    To validate, in the context of adaptive radiotherapy, three commercial software solutions for atlas-based segmentation. Fifteen patients, five for each group, with cancer of the Head&Neck, pleura, and prostate were enrolled in the study. In addition to the treatment planning CT (pCT) images, one replanning CT (rCT) image set was acquired for each patient during the RT course. Three experienced physicians outlined on the pCT and rCT all the volumes of interest (VOIs). We used three software solutions (VelocityAI 2.6.2 (V), MIM 5.1.1 (M) by MIMVista and ABAS 2.0 (A) by CMS-Elekta) to generate the automatic contouring on the repeated CT. All the VOIs obtained with automatic contouring (AC) were subsequently corrected manually. We recorded the time needed for: 1) ex novo ROI definition on the rCT; 2) generation of AC by the three software solutions; 3) manual correction of the AC. To compare the quality of the volumes obtained automatically by the software and manually corrected with those drawn from scratch on the rCT, we used the following indexes: overlap coefficient (DICE), sensitivity, inclusiveness index, difference in volume, and displacement differences on three axes (x, y, z) from the isocenter. The time saved by the three software solutions for all the sites, compared to manual contouring from scratch, is statistically significant and similar for all three software solutions. The time saved for each site is as follows: about an hour for Head&Neck, about 40 minutes for prostate, and about 20 minutes for mesothelioma. The best DICE similarity coefficient index was obtained with the manual correction for: A (contours for prostate), A and M (contours for H&N), and M (contours for mesothelioma). From a clinical point of view, the automated contouring workflow was shown to be significantly shorter than the manual contouring process, even though manual correction of the VOIs is always needed.

  8. A novel scale for measuring mixed states in bipolar disorder.

    PubMed

    Cavanagh, Jonathan; Schwannauer, Matthias; Power, Mick; Goodwin, Guy M

    2009-01-01

    Conventional descriptions of bipolar disorder tend to treat the mixed state as something of an afterthought. There is no scale that specifically measures the phenomena of the mixed state. This study aimed to test a novel scale for mixed state in a clinical and community population of bipolar patients. The scale included clinically relevant symptoms of both mania and depression in a bivariate scale. Recovered respondents were asked to recall their last manic episode. The scale allowed endorsement of one or more of the manic and depressive symptoms. Internal consistency analyses were carried out using Cronbach alpha. Factor analysis was carried out using a standard Principal Components Analysis followed by Varimax Rotation. A confirmatory factor analytic method was used to validate the scale structure in a representative clinical sample. The reliability analysis gave a Cronbach alpha value of 0.950, with a range of corrected-item-total-scale correlations from 0.546 (weight change) to 0.830 (mood). The factor analysis revealed a two-factor solution for the manic and depressed items which accounted for 61.2% of the variance in the data. Factor 1 represented physical activity, verbal activity, thought processes and mood. Factor 2 represented eating habits, weight change, passage of time and pain sensitivity. This novel scale appears to capture the key features of mixed states. The two-factor solution fits well with previous models of bipolar disorder and concurs with the view that mixed states may be more than the sum of their parts.
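
    As an illustration of the analysis pipeline (not the authors' code), the snippet below extracts a varimax-rotated two-factor solution and computes Cronbach's alpha; the assumed data layout is a respondents-by-items score matrix X.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def two_factor_solution(X):
        """Varimax-rotated two-factor loadings (shape: 2 x n_items).
        sklearn's FactorAnalysis is used as a stand-in for the paper's
        PCA-plus-varimax procedure."""
        return FactorAnalysis(n_components=2, rotation="varimax").fit(X).components_

    def cronbach_alpha(X):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = X.shape[1]
        item_var = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)
    ```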

  9. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.
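
    For orientation, four-probe resistivity measurement reduces to scaling the measured V/I by a geometric factor; the sketch below uses the textbook infinite-sheet value pi/ln 2 for an equally spaced collinear array. The disk-specific factor F of the paper must be taken from its derivation and is not reproduced here.

    ```python
    import math

    def sheet_resistance(V, I, F):
        """Four-probe sheet resistance R_s = F * (V / I); F is the
        geometric correction factor for the sample shape."""
        return F * V / I

    # Infinite thin sheet, equally spaced collinear probes: F = pi/ln 2 ~= 4.532
    R_s = sheet_resistance(V=1.0e-3, I=1.0e-3, F=math.pi / math.log(2))
    ```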

  10. Toward Best Practices For Assessing Near Surface Sensor Fouling: Potential Correction Approaches Using Underway Ferry Measurements

    NASA Astrophysics Data System (ADS)

    Sastri, A. R.; Dewey, R. K.; Pawlowicz, R.; Krogh, J.

    2016-02-01

    Data from long-term deployments of sensors on autonomous, mobile and cabled observation platforms suffer potential quality issues associated with bio-fouling. This issue is of particular concern for optical sensors, such as fluorescence and/or absorbance-based instruments, for which light emitting/receiving surfaces are prone to fouling due to constant contact with the marine environment. Here we examine signal quality for backscatter, chlorophyll and CDOM fluorescence from a single triplet instrument installed in a ferry box system (nominal depth of 3 m) operated by Ocean Networks Canada. The time series consists of 22 months of 8-10 daily transits across the productive waters of the Strait of Georgia, British Columbia, Canada (between Nanaimo on Vancouver Island and Vancouver on mainland BC). Instruments were cleaned every two weeks, since all three instruments experienced significant signal attenuation over such periods throughout the year. We experimented with a variety of pre- and post-cleaning measurements in an effort to develop 'correction factors' with which to account for the effects of fouling. We found that CDOM fluorescence was especially sensitive to fouling and that correction factors derived from measurements of the fluorescence of standardized solutions successfully accounted for fouling. Similar results were found for chlorophyll fluorescence. Here we present results from our measurements and assess the efficacy of each of these approaches using comparisons against additional instruments less prone to signal attenuation over short periods.

  11. Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance

    NASA Astrophysics Data System (ADS)

    Hugtenburg, Richard P.; Bradley, David A.

    2004-01-01

    The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data are useful for analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to ASF are given, and new total cross-section data, including that of the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several 100 eV of the edge. Measurements confirm the presence of below-edge bound-bound structure, and variations in the structure due to the ionic state, that are not currently included in tabulations.

  12. Curves from Motion, Motion from Curves

    DTIC Science & Technology

    2000-01-01

    De linearum curvarum cum lineis rectis comparatione dissertatio geometrica - an appendix to a treatise by de Lalouvère (this was the only publication... correct solution to the problem of motion in the gravity of a permeable rotating Earth, considered by Torricelli (see §3). If the Earth is a homogeneous...in 1686, which contains the correct solution as part of a remarkably comprehensive theory of orbital motions under centripetal forces. It is a

  13. A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene Marie

    1992-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. The Lagrangian correction technique does not require any a-priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.

  14. An evaluation of a manganese bath system having a new geometry through MCNP modelling.

    PubMed

    Khabaz, Rahim

    2012-12-01

    In this study, an approximately symmetric cylindrical manganese bath system with equal diameter and height was appraised using a Monte Carlo simulation. For nine sizes of the tank filled with MnSO4·H2O solution at three different concentrations, the necessary correction factors involved in the absolute measurement of neutron emission rate were determined by detailed modelling with the MCNP4C code and the ENDF/B-VII.0 neutron cross section data library. The results obtained were also used to determine the optimum dimensions of the bath for each concentration of solution in the calibration of (241)Am-Be and (252)Cf sources. Also, the amount of gamma radiation that is produced by the (n,γ) reaction with the nuclei of the manganese sulphate solution and escapes from the boundary of each tank was evaluated. This gamma component can be important for the background in NaI(Tl) detectors and for issues concerned with radiation protection.

  15. Notch Sensitivity of Woven Ceramic Matrix Composites Under Tensile Loading: An Experimental, Analytical, and Finite Element Study

    NASA Technical Reports Server (NTRS)

    Haque, A.; Ahmed, L.; Ware, T.; Jeelani, S.; Verrilli, Michael J. (Technical Monitor)

    2001-01-01

    The stress concentrations associated with circular notches in woven ceramic matrix composites (CMCs) subjected to uniform tensile loading have been investigated for high-efficiency turbine engine applications. The CMCs were composed of Nicalon silicon carbide woven fabric in a SiNC matrix manufactured through a polymer impregnation process (PIP). Several combinations of hole diameter/plate width ratios and ply orientations were considered in this study. In the first part, the stress concentrations were calculated by measuring strain distributions surrounding the hole, using strain gages at different locations on the specimens during the initial portion of the stress-strain curve, before any microdamage developed. The stress concentration was also calculated analytically using Lekhnitskii's solution for orthotropic plates. A finite-width correction factor for anisotropic and orthotropic composite plates was considered. The stress distributions surrounding the circular hole of a CMC plate were further studied using finite element analysis. Both solid and shell elements were considered. The experimental results were compared with both the analytical and finite element solutions. Extensive optical and scanning electron microscopic examinations were carried out to identify the fracture behavior and failure mechanisms of both the notched and unnotched specimens. The stress concentration factors (SCF) determined by the analytical method overpredicted the experimental results, while the numerical solution underpredicted the experimental SCF. Stress concentration factors are shown to increase with hole size, and the effects of ply orientations on stress concentration factors are observed to be negligible. In all cases, the crack initiated at the notch edge and propagated along the width towards the edge of the specimens.
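
    The finite-width correction mentioned above has a well-known closed form in the isotropic case; the sketch below evaluates this Heywood-type expression. The orthotropic correction actually appropriate for CMC laminates (e.g., Tan's) carries additional material-dependent terms and is not reproduced here.

    ```python
    def finite_width_correction(d_over_W):
        """Isotropic finite-width correction for a circular hole:
        K_t(finite) / K_t(infinite) = (2 + (1 - d/W)**3) / (3 * (1 - d/W)),
        d = hole diameter, W = plate width."""
        x = 1.0 - d_over_W
        return (2.0 + x**3) / (3.0 * x)

    print(finite_width_correction(0.2))  # ~1.05 for a hole 20% of the width
    ```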

  16. Kinect-Based Virtual Game for the Elderly that Detects Incorrect Body Postures in Real Time

    PubMed Central

    Saenz-de-Urturi, Zelai; Garcia-Zapirain Soto, Begonya

    2016-01-01

    Poor posture can result in loss of physical function, which is necessary to preserving independence in later life. Its decline is often the determining factor for loss of independence in the elderly. To avoid this, a system to correct poor posture in the elderly, designed for Kinect-based indoor applications, is proposed in this paper. Due to the importance of maintaining a healthy life style in senior citizens, the system has been integrated into a game which focuses on their physical stimulation. The game encourages users to perform physical activities while the posture correction system helps them to adopt proper posture. The system captures limb node data received from the Kinect sensor in order to detect posture variations in real time. The DTW algorithm compares the original posture with the current one to detect any deviation from the original correct position. The system was tested and achieved a successful detection percentage of 95.20%. Experimental tests performed in a nursing home with different users show the effectiveness of the proposed solution. PMID:27196903
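
    The posture comparison rests on dynamic time warping; below is a minimal textbook DTW over joint-position sequences. Thresholds and the Kinect skeleton preprocessing are application-specific and deliberately left out.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """O(len(a)*len(b)) dynamic time warping between two sequences of
        joint-position vectors; a small cumulative distance means the
        current posture tracks the reference one."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```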

  17. Virtual k-Space Modulation Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.

    2016-07-01

    We report a novel superresolution microscopy approach for imaging fluorescence samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correction.

  18. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for the axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated from continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. An instantaneous friction factor equation was then fitted by mathematical analysis. After verification, by comparing single-pass flow stress correction against the traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and were validated by multistage relative softening calculations. This research provides broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and for friction optimization in finite element analysis.

  19. Bistatic scattering from a cone frustum

    NASA Technical Reports Server (NTRS)

    Ebihara, W.; Marhefka, R. J.

    1986-01-01

    The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.

  20. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

    The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate its performance at two levels: (a) the Z'-factor, to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.
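
    A common retrospective form of such a correction, sketched under the assumption of a smooth multiplicative illumination field: average the image stack, smooth heavily, and divide. The authors' pipeline is distributed as open-source image-analysis workflows; this standalone version is only a conceptual stand-in.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def illumination_correction(stack, sigma=50):
        """stack: array of shape (n_images, H, W) from one channel.
        Estimate the illumination function by averaging and smoothing,
        normalize it, and divide it out. sigma is illustrative."""
        illum = gaussian_filter(stack.mean(axis=0), sigma=sigma)
        illum /= illum.mean()                    # keep overall intensity scale
        return stack / np.maximum(illum, 1e-6)   # avoid division by zero
    ```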

  1. PDR with a Foot-Mounted IMU and Ramp Detection

    PubMed Central

    Jiménez, Antonio R.; Seco, Fernando; Zampella, Francisco; Prieto, José C.; Guevara, Jorge

    2011-01-01

    The localization of persons in indoor environments is nowadays an open problem. There are partial solutions based on the deployment of a network of sensors (Local Positioning Systems or LPS). Other solutions only require the installation of an inertial sensor on the person’s body (Pedestrian Dead-Reckoning or PDR). PDR solutions integrate the signals coming from an Inertial Measurement Unit (IMU), which usually contains 3 accelerometers and 3 gyroscopes. The main problem of PDR is the accumulation of positioning errors due to the drift caused by the noise in the sensors. This paper presents a PDR solution that incorporates a drift correction method based on detecting the access ramps usually found in buildings. The ramp correction method is implemented over a PDR framework that uses an Inertial Navigation algorithm (INS) and an IMU attached to the person’s foot. Unlike other approaches that use external sensors to correct the drift error, we only use one IMU on the foot. To detect a ramp, the slope of the terrain on which the user is walking, and the change in height sensed when moving forward, are estimated from the IMU. After detection, the ramp is checked for association with one of those stored in a database. For each associated ramp, a position correction is fed into the Kalman Filter in order to refine the INS-PDR solution. Drift-free localization is achieved with positioning errors below 2 meters for 1,000-meter-long routes in a building with a few ramps. PMID:22163701

  2. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    NASA Astrophysics Data System (ADS)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

    Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ∼20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.

  3. Performance Enhancement of a USV INS/CNS/DVL Integration Navigation System Based on an Adaptive Information Sharing Factor Federated Filter

    PubMed Central

    Wang, Qiuying; Cui, Xufei; Li, Yibing; Ye, Fang

    2017-01-01

    To improve the ability of autonomous navigation for Unmanned Surface Vehicles (USVs), multi-sensor integrated navigation based on an Inertial Navigation System (INS), a Celestial Navigation System (CNS) and a Doppler Velocity Log (DVL) is proposed. The CNS position and the DVL velocity are introduced as the reference information to correct the INS divergence error. The autonomy of the integrated system based on INS/CNS/DVL is much better compared with integration based on INS/GNSS alone. However, the accuracy of the DVL velocity and the CNS position are decreased by the measurement noise of the DVL and bad weather, respectively. Hence, the INS divergence error cannot be estimated and corrected using the reference information. To resolve the problem, the Adaptive Information Sharing Factor Federated Filter (AISFF) is introduced to fuse data. The information sharing factor of the Federated Filter is adaptively adjusted to maintain multiple component solutions usable as back-ups, which can improve the reliability of the overall system. The effectiveness of this approach is demonstrated by simulation and experiment; the results show that when the DVL velocity accuracy is decreased and the CNS cannot work under bad weather conditions, the INS/CNS/DVL integrated system can operate stably based on the AISFF method. PMID:28165369

  4. Performance Enhancement of a USV INS/CNS/DVL Integration Navigation System Based on an Adaptive Information Sharing Factor Federated Filter.

    PubMed

    Wang, Qiuying; Cui, Xufei; Li, Yibing; Ye, Fang

    2017-02-03

    To improve the ability of autonomous navigation for Unmanned Surface Vehicles (USVs), multi-sensor integrated navigation based on an Inertial Navigation System (INS), a Celestial Navigation System (CNS) and a Doppler Velocity Log (DVL) is proposed. The CNS position and the DVL velocity are introduced as the reference information to correct the INS divergence error. The autonomy of the integrated system based on INS/CNS/DVL is much better compared with integration based on INS/GNSS alone. However, the accuracy of the DVL velocity and the CNS position are decreased by the measurement noise of the DVL and bad weather, respectively. Hence, the INS divergence error cannot be estimated and corrected using the reference information. To resolve the problem, the Adaptive Information Sharing Factor Federated Filter (AISFF) is introduced to fuse data. The information sharing factor of the Federated Filter is adaptively adjusted to maintain multiple component solutions usable as back-ups, which can improve the reliability of the overall system. The effectiveness of this approach is demonstrated by simulation and experiment; the results show that when the DVL velocity accuracy is decreased and the CNS cannot work under bad weather conditions, the INS/CNS/DVL integrated system can operate stably based on the AISFF method.

  5. A mass-balanced definition of corrected retention volume in gas chromatography.

    PubMed

    Kurganov, A

    2007-05-25

    The mass balance equation of a chromatographic system using a compressible mobile phase has been compiled for the mass flow of the mobile phase instead of the traditional volumetric flow, allowing solution of the equation in analytical form. The relation obtained correlates the retention volume measured under ambient conditions with the partition coefficient of the solute. Compared to the relation for the ideal chromatographic system, the derived equation contains an additional correction term accounting for the compressibility of the mobile phase. When the retention volume is measured at the mean column pressure and column temperature, the correction term reduces to unity and the relation simplifies to that known for the ideal system. This volume is, according to the International Union of Pure and Applied Chemistry (IUPAC), called the corrected retention volume.
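
    For comparison, the textbook compressibility correction that this mass-balance treatment generalizes is the James-Martin factor, evaluated below; the paper's own correction term is not reproduced in the abstract and is therefore not shown.

    ```python
    def james_martin_j(p_ratio):
        """James-Martin gas compressibility factor,
        j = (3/2) * (P**2 - 1) / (P**3 - 1) with P = p_inlet / p_outlet;
        the corrected retention volume is V_R0 = j * V_R (outlet conditions)."""
        return 1.5 * (p_ratio**2 - 1.0) / (p_ratio**3 - 1.0)

    V_R0 = james_martin_j(p_ratio=2.0) * 10.0  # e.g. V_R = 10 mL measured
    ```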

  6. Constructing Acceptable RWM Approaches: The Politics of Participation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laes, E.; Bombaerts, G.

    2006-07-01

    Public participation in a complex technological issue such as the management of radioactive waste needs to be based on a simultaneous construction of scientific, ethical and socio-political foundations. Confronting this challenge is in no way straightforward. The problem is not only that the 'hard' technocrats downplay the importance of socio-political and ethical factors; also, our 'soft' ethical vocabularies (e.g. Habermasian 'discourse ethics') seem to be ill-equipped for tackling such complex questions (in terms of finding concrete solutions). On the other hand, professionals in the field, confronted with a (sometimes urgent) need for finding workable solutions, cannot wait for armchair philosophers to formulate the correct academic answers to their questions. Different public participation and communication models have been developed and tested in real-world conditions, for instance in the Belgian 'partnership approach' to the siting of a low-level waste management facility. Starting from the confrontation of theoretical outlooks and pragmatic solutions, this paper identifies a number of 'dilemmas of participation' that can only be resolved by inherently political choices. Successfully negotiating these dilemmas is of course difficult and conditional on many contextual factors, but nevertheless at the end of the paper an attempt is made to sketch the contours of three possible future scenarios (each with their own limits and possibilities). (authors)

  7. General relativistic electromagnetic fields of a slowly rotating magnetized neutron star - I. Formulation of the equations

    NASA Astrophysics Data System (ADS)

    Rezzolla, L.; Ahmedov, B. J.; Miller, J. C.

    2001-04-01

    We present analytic solutions of Maxwell equations in the internal and external background space-time of a slowly rotating magnetized neutron star. The star is considered isolated and in vacuum, with a dipolar magnetic field not aligned with the axis of rotation. With respect to a flat space-time solution, general relativity introduces corrections related both to the monopolar and the dipolar parts of the gravitational field. In particular, we show that in the case of infinite electrical conductivity general relativistic corrections resulting from the dragging of reference frames are present, but only in the expression for the electric field. In the case of finite electrical conductivity, however, corrections resulting from both the space-time curvature and the dragging of reference frames are shown to be present in the induction equation. These corrections could be relevant for the evolution of the magnetic fields of pulsars and magnetars. The solutions found, while obtained through some simplifying assumption, reflect a rather general physical configuration and could therefore be used in a variety of astrophysical situations.

  8. Comninou contact zones for a crack parallel to an interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, P.F.; Gadi, K.S.; Erdogen, F.

    One of the interesting features in studying the state of stress in elastic solids near singular points is the so-called complex singularity that gives rise to an apparent local oscillatory behavior in the stress and displacement fields. The region in which this occurs is very small, much smaller than any plastic zone would be, and therefore the oscillations can be ignored in practical applications. Nevertheless, it is a matter of interesting theoretical investigation. The Comninou model of a small contact zone near the crack tip appears to correct for this anomaly within the framework of the linear theory. This model seems to make sense out of a "solution" that violates the boundary conditions. Erdogan and Joseph showed (to themselves anyway) that the Comninou model actually has a physical basis. They considered a crack parallel to an interface where the order of the singularity is always real. With great care in solving the singular integral equations, it was shown that as the crack approaches the interface, a pinching effect is observed at the crack tip. This pinching effect proves that in the limit as the crack approaches the interface, the correct way to handle the problem is to consider crack surface contact. In this way, the issue of "oscillations" is never encountered for the interface crack problem. In the present study, the value of h/a that corresponds to crack closure (zero value of the stress intensity factor) will be determined for a given material pair under tensile loading. An asymptotic numerical method for the solution of singular integral equations is used to obtain this result. Results for the crack opening displacement near the tip of the crack and the behavior of the stress intensity factor for cracks very close to the interface are presented. Among other interesting issues to be discussed, this solution shows that the semi-infinite crack parallel to an interface is closed.

  9. Evaluation of a Pair-Wise Conflict Detection and Resolution Algorithm in a Multiple Aircraft Scenario

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2002-01-01

    The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft which has been predicted to be in an airspace minima violation within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft, KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters KB3D solutions which are procedurally undesirable or physically not executable, and uses a predetermined criterion for selection.
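    A minimal sketch of what such a post-hoc selection filter can look like, assuming hypothetical maneuver fields and thresholds; KB3D's actual data structures and selection criteria are not given in this record, so everything named below is an illustration:

```python
# Hypothetical filter-then-select step for pairwise CD&R resolutions.
from dataclasses import dataclass

@dataclass
class Maneuver:
    heading_change_deg: float   # vectoring turn relative to current track
    speed_change_kts: float     # commanded ground-speed change

def select_solution(candidates, max_turn_deg=45.0, max_speed_delta_kts=30.0):
    """Drop physically non-executable or procedurally undesirable maneuvers,
    then pick the remaining one with the smallest total deviation."""
    feasible = [m for m in candidates
                if abs(m.heading_change_deg) <= max_turn_deg
                and abs(m.speed_change_kts) <= max_speed_delta_kts]
    if not feasible:
        return None  # fall back to another resolution strategy
    # Assumed predetermined criterion: minimize combined normalized deviation.
    return min(feasible, key=lambda m: abs(m.heading_change_deg) / max_turn_deg
               + abs(m.speed_change_kts) / max_speed_delta_kts)

print(select_solution([Maneuver(60.0, 0.0), Maneuver(20.0, -10.0)]))
```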

  10. Improving ESL Writing Using an Online Formulaic Sequence Word-Combination Checker

    ERIC Educational Resources Information Center

    Grami, G. M. A.; Alkazemi, B. Y.

    2016-01-01

    Writing correct English sentences can be challenging. Furthermore, writing correct formulaic sequences can be especially difficult because accepted combinations do not follow clear rules governing which words appear together in a sequence. One solution is to provide examples of correct usage accompanied by statistical feedback from web-based…

  11. Strategies for Meeting Correctional Training and Manpower Needs, Four Developmental Projects.

    ERIC Educational Resources Information Center

    Law Enforcement Assistance Administration (Dept. of Justice), Washington, DC.

    The Law Enforcement Education Act of 1965 has placed special emphasis on projects involving training and utilization of correctional manpower. The four representative projects reported here give a comprehensive view of the problems of upgrading correctional staff and possible solutions to those problems: (1) "The Developmental Laboratory for…

  12. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
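    As an illustration of this workflow, the sketch below applies a field-size-dependent correction curve to daisy-chained output factors. The fit form and all coefficients are placeholders standing in for the published Monte Carlo derived factors, not the authors' values:

```python
import numpy as np

def mc_correction(cone_mm, a, b, c):
    # Assumed fit form for k(field size); the paper derives its own equation
    # from published Monte Carlo data for each detector.
    return a + b * np.exp(-cone_mm / c)

cone_sizes = np.array([5.0, 7.5, 10.0, 12.5, 15.0])     # cone diameters (mm)
of_edge_raw = np.array([0.67, 0.78, 0.84, 0.87, 0.89])  # daisy-chained to 10.4 cm

# Placeholder coefficients for an over-responding diode (k < 1 at small fields).
k_edge = mc_correction(cone_sizes, a=1.0, b=-0.06, c=6.0)
of_edge_corrected = of_edge_raw * k_edge
print(np.round(of_edge_corrected, 3))
```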

  13. Extension of relativistic dissipative hydrodynamics to third order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El, Andrej; Xu Zhe; Greiner, Carsten

    2010-04-15

    Following the procedure introduced by Israel and Stewart, we expand the entropy current up to third order in the shear stress tensor π^{αβ} and derive a novel third-order evolution equation for π^{αβ}. This equation is solved for the one-dimensional Bjorken boost-invariant expansion. The scaling solutions for various values of the shear viscosity to entropy density ratio η/s are shown to be in very good agreement with those obtained from kinetic transport calculations. For a pressure isotropy starting at 1 at τ₀ = 0.4 fm/c, the third-order corrections to Israel-Stewart theory are approximately 10% for η/s = 0.2 and more than a factor of 2 for η/s = 3. We also estimate all higher-order corrections to Israel-Stewart theory and demonstrate their importance in describing highly viscous matter.

  14. Kerr-Newman black holes with string corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Anthony M.; Larsen, Finn

    We study N = 2 supergravity with higher-derivative corrections that preserve the N = 2 supersymmetry and show that Kerr-Newman black holes are solutions to these theories. Modifications of the black hole entropy due to the higher derivatives are universal and apply even in the BPS and Schwarzschild limits. Our solutions and their entropy are greatly simplified by supersymmetry of the theory even though the black holes generally do not preserve any of the supersymmetry.

  15. Kerr-Newman black holes with string corrections

    DOE PAGES

    Charles, Anthony M.; Larsen, Finn

    2016-10-26

    We study N = 2 supergravity with higher-derivative corrections that preserve the N = 2 supersymmetry and show that Kerr-Newman black holes are solutions to these theories. Modifications of the black hole entropy due to the higher derivatives are universal and apply even in the BPS and Schwarzschild limits. Our solutions and their entropy are greatly simplified by supersymmetry of the theory even though the black holes generally do not preserve any of the supersymmetry.

  16. Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2015-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of the Reynolds-averaged Navier-Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.

  17. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

    Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain ³He filled proportional counters to detect time-correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980s there has been a steady improvement in fuel performance. The trend has been to higher burn-up. This requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships to correct for varying fuel composition are consequently being challenged, because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately, current modeling methods enable relative response functions to be calculated with high accuracy. Hence modeling provides a more convenient and cost effective means to derive correction factors which are fit for purpose with confidence. In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence of Gd₂O₃ burnable poison on the measurement of fresh pressurized water reactor fuel. To empirically determine the response function over the range of historical and future use, we have considered enrichments up to 5 wt% ²³⁵U/U(total) and Gd weight fractions of up to 10% Gd/UO₂. Parameterized correction factors are presented.

  18. EGSIEM combination service: combination of GRACE monthly K-band solutions on normal equation level

    NASA Astrophysics Data System (ADS)

    Meyer, Ulrich; Jean, Yoomin; Arnold, Daniel; Jäggi, Adrian

    2017-04-01

    The European Gravity Service for Improved Emergency Management (EGSIEM) project offers a scientific combination service, combining for the first time monthly GRACE gravity fields of different analysis centers (ACs) on normal equation (NEQ) level and thus taking all correlations between the gravity field coefficients and pre-eliminated orbit and instrument parameters correctly into account. Optimal weights for the individual NEQs are commonly derived by variance component estimation (VCE), as is the case for the products of the International VLBI Service (IVS) or the DTRF2008 reference frame realisation that are also derived by combination on NEQ-level. But variance factors are based on post-fit residuals and strongly depend on observation sampling and noise modeling, which both are very diverse in case of the individual EGSIEM ACs. These variance factors do not necessarily represent the true error levels of the estimated gravity field parameters that are still governed by analysis noise. We present a combination approach where weights are derived on solution level, thereby taking the analysis noise into account.
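    The core of a combination on NEQ level can be sketched in a few lines: each AC contributes a normal equation matrix and right-hand side, and the combined solution comes from their weighted sum. The example below uses synthetic matrices and inverse-variance weights purely for illustration; the EGSIEM weights are determined on solution level rather than from post-fit residuals:

```python
import numpy as np

def combine_neq(neqs, rhs, weights):
    """Stack AC contributions: N = sum(w_i N_i), b = sum(w_i b_i); solve N x = b."""
    N = sum(w * Ni for w, Ni in zip(weights, neqs))
    b = sum(w * bi for w, bi in zip(weights, rhs))
    return np.linalg.solve(N, b)

rng = np.random.default_rng(0)
x_true = rng.normal(size=4)            # stand-in for gravity field corrections
neqs, rhs = [], []
for noise in (0.5, 1.0, 2.0):          # three ACs with different noise levels
    A = rng.normal(size=(50, 4))
    y = A @ x_true + rng.normal(scale=noise, size=50)
    neqs.append(A.T @ A)
    rhs.append(A.T @ y)

# Illustrative weights from empirical solution-level error estimates.
weights = [1 / 0.5**2, 1 / 1.0**2, 1 / 2.0**2]
print(combine_neq(neqs, rhs, weights))
```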

  19. Crack Turning and Arrest Mechanisms for Integral Structure

    NASA Technical Reports Server (NTRS)

    Pettit, Richard; Ingraffea, Anthony

    1999-01-01

    In the course of several years of research efforts to predict crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the 2nd-order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, and also a computation of the stress intensity factors and the T-stress, or second-order term in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r(sub c), is proposed for ductile materials pertaining to the onset of plastic instability, as opposed to the void spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r(sub c) is obtained from the asymptotic field for strain hardening materials given by Hutchinson, Rice and Rosengren (HRR). A previous study using high-order finite element methods to calculate T-stresses by contour integrals obtained extremely high accuracy values for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic and can be corrected after the initial solution if the finite element implementation utilizes a similar crack tip discretization scheme for all problems. This scheme is applied for two-dimensional problems to a p-version finite element code, showing that sufficiently accurate values of both K(sub I) and T can be obtained with fairly low-order elements if correction is used. T-stress correction coefficients are also developed for the singular crack tip rosette utilized in the adaptive-mesh finite element code FRANC2D and are shown to reduce the error in the computed T-stress significantly. Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.

  20. A complete solution for GP-B's gyroscopic precession by retarded gravitational theory

    NASA Astrophysics Data System (ADS)

    Tang, Keyun

    Mainstream physicists generally believe that Mercury's perihelion precession and GP-B's gyroscopic precession are two of the strongest pieces of evidence supporting Einstein's curved spacetime and general relativity. However, most classical literature and textbooks (e.g. Ohanian: Gravitation and Spacetime) paint an incorrect picture of Mercury's orbit anomaly, namely that Mercury's perihelion precessed 43 arc-seconds per century; a correct picture should be that Mercury rotated 43 arc-seconds per century more than along the Newtonian theoretical orbit. The essence of Le Verrier's and Newcomb's observation and analysis is that the angular speed of Mercury is slightly faster than the Newtonian theoretical value. The complete explanation of Mercury's orbit anomaly should include two factors: perihelion precession is one of them; in addition, the change of orbital radius will also cause a change of angular speed, which is the other component of Mercury's orbital anomaly. If the Schwarzschild metric is correct, then the solution of the Schwarzschild orbit equation must contain three non-ignorable terms. The first corresponds to the Newtonian ellipse; the second is a nonlinear perturbation with increasing amplitude, which causes the precession of the orbit perihelion; this is just one part of the angular speed anomaly of Mercury. The third is a linear perturbation, corresponding to a figure similar to Newton's ellipse but with a slightly smaller radius; this makes no contribution to the perihelion precession of the Schwarzschild orbit, but makes the Schwarzschild orbital radius slightly smaller, leading to a slight increase in Mercury's angular speed. All classical literature on general relativity ignored this last factor, which is a gross oversight. If one correctly takes all three factors into consideration, the final result is that the difference between the angle rotated along Schwarzschild's orbit and the angle rotated along Newton's orbit over one hundred years should be more than 130.5 arc-seconds; this means that Le Verrier's observation of Mercury's orbital anomaly cannot be explained correctly by the Schwarzschild metric. In contrast, Mercury's angular speed anomaly can be explained satisfactorily by the radial induction component and angular component of retarded gravitation. From the perspective of energy, the additional radial component of retarded gravitation makes the radius of Mercury's orbit slightly smaller, i.e. some potential energy is lost, and the angular component of retarded gravitation changes Mercury's angular momentum; this indicates that the changes of Mercury's orbit and angular speed are the results of gravitational radiation. I have found that there are similar errors in the explanation of the gyroscopic precession of GP-B, i.e. physicists only consider the contribution of the nonlinear perturbation terms and never consider the contribution of the linear perturbation terms. For the precession of GP-B, the complete Schwarzschild solution should be about 19.8 arc-seconds per year, far more than the experimental result of 6.602 arc-seconds per year. I have calculated the gyroscopic precession of GP-B due to retarded gravitation; the result is 6.607 arc-seconds per year, which matches the experimental result well. These explanations of both the anomaly of Mercury's orbit and the gyroscopic precession of GP-B suggest that retarded gravitation is indeed a sound gravitational theory, that spacetime is in fact flat, and that gravity travels at the speed of light. Both Mercury's angular speed anomaly and the GP-B gyroscopic precession are results of gravitational radiation.

  1. Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.

    PubMed

    Bloom, David J; Lee, Soo-Yeun

    2016-09-01

    Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
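    For readers unfamiliar with the Thurstonian link between d′ and proportion correct in these two protocols, the following Monte Carlo sketch reproduces it under the standard equal-variance decision rules (choose the largest sample in 3-AFC; choose the most distant sample in the triangle test). This is illustrative only, not the authors' analysis code:

```python
import numpy as np

def pc_3afc(d, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, 2))
    signal = rng.normal(loc=d, size=n)
    return np.mean(signal > noise.max(axis=1))   # correct if signal is largest

def pc_triangle(d, n=200_000, seed=2):
    rng = np.random.default_rng(seed)
    a1, a2 = rng.normal(size=(2, n))
    b = rng.normal(loc=d, size=n)
    # Choose the sample farthest from the other two; correct if it is b.
    da1 = np.abs(a1 - a2) + np.abs(a1 - b)
    da2 = np.abs(a2 - a1) + np.abs(a2 - b)
    db = np.abs(b - a1) + np.abs(b - a2)
    return np.mean((db > da1) & (db > da2))

# For the same d', 3-AFC yields a higher proportion correct than triangle.
print(pc_3afc(1.0), pc_triangle(1.0))
```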

  2. Model based high NA anamorphic EUV RET

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Wiaux, Vincent; Fenger, Germain; Clifford, Chris; Liubich, Vlad; Hendrickx, Eric

    2018-03-01

    With the announcement of the extension of the Extreme Ultraviolet (EUV) roadmap to a high-NA lithography tool that utilizes an anamorphic optics design, an investigation of design tradeoffs unique to the imaging of an anamorphic lithography tool is shown. An anamorphic optical proximity correction (OPC) solution has been developed that fully models the EUV near-field electromagnetic effects and the anamorphic imaging using the Domain Decomposition Method (DDM). Imec clips representative of the N3 logic node were used to demonstrate the OPC solutions on critical layers that will benefit from the increased contrast at high NA using anamorphic imaging. However, unlike the isomorphic case, from the wafer perspective OPC needs to treat x and y differently. In the paper, we show a design trade-off unique to anamorphic EUV, namely that using a mask rule of 48 nm (mask scale), approaching the current state of the art, limitations are observed in the available correction that can be applied to the mask. The metal pattern has a pitch of 24 nm and a CD of 12 nm. During OPC, the correction of the vertically oriented metal lines is limited by the mask rule of 12 nm 1X. The horizontally oriented lines do not suffer from this mask rule limitation, as the correction is allowed to go to 6 nm 1X. For this example, the mask rules will need to be more aggressive to allow complete correction, or design rules and wafer processes (wafer rotation) would need to be created that utilize the orientation that can image more aggressive features. When considering via or block level correction, aggressive polygon corner-to-corner designs can be handled with various solutions, including applying a 45-degree chop. Multiple solutions are discussed with the metrics of edge placement error (EPE) and process variation bands (PVBands), together with all the mask constraints. Note that in anamorphic OPC the 45-degree chop is maintained at the mask level to meet mask manufacturing constraints, but it results in a skewed-angle edge in wafer-level correction. In this paper, we used both contact (via/block) patterns and metal patterns for OPC practice. By comparing the EPE of horizontal and vertical patterns with a fixed mask rule check (MRC), and the PVBand, we focus on the challenges and the solutions of OPC with an anamorphic high-NA lens.

  3. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  4. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  5. Multiple use of aspheres in cine lenses

    NASA Astrophysics Data System (ADS)

    Beder, Christian; Gängler, Dietmar

    2008-09-01

    Today's high performance cine lenses rely more and more on the use of aspheres. These are as powerful in correcting aberrations as they are expensive if it is not possible to use high-volume manufacturing processes. One possible solution to meet the increasing demands of design to cost is the use of identical parts in several lenses. The biggest gain is possible with the most expensive parts: the aspheres. In this presentation a successful as well as an ineffective way of incorporating the same asphere in three lenses which differ by a factor of 1.5 in focal length will be shown.

  6. Fluid therapy in calves.

    PubMed

    Smith, Geof W; Berchtold, Joachim

    2014-07-01

    Early and aggressive fluid therapy is critical in correcting the metabolic complications associated with calf diarrhea. Oral electrolyte therapy can be used with success in calves, but careful consideration should be given to the type of oral electrolyte used. Electrolyte solutions with high osmolalities can significantly slow abomasal emptying and can be a risk factor for abomasal bloat in calves. Milk should not be withheld from calves with diarrhea for more than 12 to 24 hours. Hypertonic saline and hypertonic sodium bicarbonate can be used effectively for intravenous fluid therapy on farms when intravenous catheterization is not possible. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 12: Suitability of plan class specific reference fields for estimating dosimeter correction factors for small clinical CyberKnife fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandervoort, Eric; Christiansen, Eric; Belec, Jaso

    Purpose: The purpose of this work is to investigate the utility of plan class specific reference (PCSR) fields for predicting dosimeter response within isocentric and non-isocentric composite clinical fields using the smallest fields employed by the CyberKnife radiosurgery system. Methods: Monte Carlo dosimeter response correction factors (CFs) were calculated for a plastic scintillator and microchamber dosimeter in 21 clinical fields and 9 candidate plan-class PCSR fields which employ the 5, 7.5 and 10 mm diameter collimators. Measurements were performed in 5 PCSR fields to confirm the predicted relative response of detectors in the same field. Results: Ratios of corrected measured dose in the PCSR fields agree to within 1% of unity. Calculated CFs for isocentric fields agree within 1.5% of those for PCSR fields. Large and variable microchamber CFs are required for non-isocentric fields, with differences as high as 5% between different clinical fields in the same plan class and 4% within the same field depending on the point of measurement. Non-isocentric PCSR fields constructed to have relatively homogeneous dose over a region larger than the detector have very different ion chamber CFs from clinical fields. The plastic scintillator detector has a much more consistent response within each plan class but still requires 3–4% corrections in some fields. Conclusions: While the PCSR field concept is useful for small isocentric fields, this approach may not be appropriate for non-isocentric clinical fields, which exhibit large and variable ion chamber CFs that differ significantly from CFs for homogeneous-field PCSRs.

  8. Neoclassical transport including collisional nonlinearity.

    PubMed

    Candy, J; Belli, E A

    2011-06-10

    In the standard δf theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction δf is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlüter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.

  9. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

        88 dB(A)   Uncorrected average of readings
        −3 dB(A)   Distance correction factor
        +2 dB(A)   Ground surface correction factor
        _________
        87 dB(A)   Corrected reading
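    The regulation's arithmetic is purely additive, as the worked example shows; a one-line helper makes this explicit (values from the example above):

```python
def corrected_reading(uncorrected_dba, *corrections_dba):
    """49 CFR 325.79-style arithmetic: add the signed correction factors."""
    return uncorrected_dba + sum(corrections_dba)

# 88 dB(A) average, -3 dB(A) distance correction, +2 dB(A) ground surface.
print(corrected_reading(88, -3, +2))   # -> 87 dB(A)
```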

  10. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

        88 dB(A)   Uncorrected average of readings
        −3 dB(A)   Distance correction factor
        +2 dB(A)   Ground surface correction factor
        _________
        87 dB(A)   Corrected reading

  11. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

        88 dB(A)   Uncorrected average of readings
        −3 dB(A)   Distance correction factor
        +2 dB(A)   Ground surface correction factor
        _________
        87 dB(A)   Corrected reading

  12. Urinary tract infections in women: etiology and treatment options

    PubMed Central

    Minardi, Daniele; d’Anzeo, Gianluca; Cantoro, Daniele; Conti, Alessandro; Muzzonigro, Giovanni

    2011-01-01

    Urinary tract infections (UTI) are common among the female population. It has been calculated that about one-third of adult women have experienced an episode of symptomatic cystitis at least once. It is also common for these episodes to recur. If predisposing factors are not identified and removed, UTI can lead to more serious consequences, in particular kidney damage and renal failure. The aim of this review was to analyze the factors more commonly correlated with UTI in women, and to see what possible solutions are currently used in general practice and specialized areas, as well as those still under investigation. A good understanding of the possible pathogenic factors contributing to the development of UTI and its recurrence will help the general practitioner to interview the patient, search for causes that would otherwise remain undiscovered, and to identify the correct therapeutic strategy. PMID:21674026

  13. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
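    The semiempirical construction described here reduces to a ratio of ratios: the Monte Carlo output factor divided by the measured reading ratio. The sketch below shows that definition with illustrative numbers only:

```python
def k_qclin_qmsr(dose_clin_mc, dose_msr_mc, reading_clin, reading_msr):
    """Detector correction factor k = [D_w(clin)/D_w(msr)] / [M(clin)/M(msr)]."""
    return (dose_clin_mc / dose_msr_mc) / (reading_clin / reading_msr)

# An over-responding diode reads high in the small field, so k < 1.
print(k_qclin_qmsr(dose_clin_mc=0.65, dose_msr_mc=1.00,
                   reading_clin=0.68, reading_msr=1.00))
```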

  14. Erratum: Nonlinear Dirac equation solitary waves in external fields [Phys. Rev. E 86, 046602 (2012)]

    DOE PAGES

    Mertens, Franz G.; Quintero, Niurka R.; Cooper, Fred; ...

    2016-05-10

    In Sec. IV of our original paper, we assumed a particular conservation law, Eq. (4.6), which was true in the absence of external potentials, to derive some particular potentials for which we obtained solutions to the nonlinear Dirac equation (NLDE). Because the conservation law of Eq. (4.6) for the component T11 of the energy-momentum tensor is not true in the presence of these external potentials, the solutions we found do not satisfy the NLDEs in the presence of these potentials. Thus all the equations from Eq. (4.6) through Eq. (4.44) are not correct, since the exact solutions that followed in that section presumed Eq. (4.6) was true. Also Eqs. (A3)–(A5) are a restatement of Eq. (4.6) and also are not correct. These latter equations are also not used in Sec. V and beyond. The rest of our original paper (starting with Sec. V) was not concerned with exact solutions; rather, it was concerned with how the exact solitary-wave solutions to the NLDE in the absence of an external potential responded to being in the presence of various external potentials. This Erratum corrects this mistake.

  15. On the Flow of a Compressible Fluid by the Hodograph Method. II - Fundamental Set of Particular Flow Solutions of the Chaplygin Differential Equation

    NASA Technical Reports Server (NTRS)

    Garrick, I. E.; Kaplan, Carl

    1944-01-01

    The differential equation of Chaplygin's jet problem is utilized to give a systematic development of particular solutions of the hodograph flow equations, which extends the treatment of Chaplygin into the supersonic range and completes the set of particular solutions. The particular solutions serve to place on a reasonable basis the use of velocity correction formulas for the comparison of incompressible and compressible flows. It is shown that the geometric-mean type of velocity correction formula introduced in part I has significance as an over-all type of approximation in the subsonic range. A brief review of general conditions limiting the potential flow of an adiabatic compressible fluid is given and application is made to the particular solutions, yielding conditions for the existence of singular loci in the supersonic range. The combining of particular solutions in accordance with prescribed boundary flow conditions is not treated in the present paper.

  16. The Charge Transfer Efficiency and Calibration of WFPC2

    NASA Technical Reports Server (NTRS)

    Dolphin, Andrew E.

    2000-01-01

    A new determination of WFPC2 photometric corrections is presented, using HSTphot reduction of the WFPC2 Omega Centauri and NGC 2419 observations from January 1994 through March 2000 and a comparison with ground-based photometry. No evidence is seen for any position-independent photometric offsets (the "long-short anomaly"); all systematic errors appear to be corrected with the CTE and zero point solution. The CTE loss time dependence is determined to be very significant in the Y direction, causing time-independent CTE solutions to be valid only for a small range of times. On average, the present solution produces corrections similar to Whitmore, Heyer, & Casertano, although with an improved functional form that produces less scatter in the residuals and determined with roughly a year of additional data. In addition to the CTE loss characterization, zero point corrections are also determined as functions of chip, gain, filter, and temperature. Of interest, there are chip-to-chip differences of order 0.01 - 0.02 magnitudes relative to the Holtzman et al. calibrations, and the present study provides empirical zero point determinations for the non-standard filters such as the frequently-used F450W, F606W, and F702W.

  17. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  18. Sulfate and sulfide sulfur isotopes (δ34S and δ33S) measured by solution and laser ablation MC-ICP-MS: An enhanced approach using external correction

    USGS Publications Warehouse

    Pribil, Michael; Ridley, William I.; Emsbo, Poul

    2015-01-01

    Isotope ratio measurements using a multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) commonly use standard-sample bracketing with a single isotope standard for mass bias correction for elements with narrow-range isotope systems, e.g. Cu, Fe, Zn, and Hg. However, the sulfur (S) isotopic composition (δ34S) in nature can range from at least −40 to +40‰, potentially exceeding the ability of standard-sample bracketing with a single sulfur isotope standard to accurately correct for mass bias. Isotopic fractionation via solution and laser ablation introduction was determined during sulfate sulfur (S_sulfate) isotope measurements. An external isotope calibration curve was constructed using in-house and National Institute of Standards and Technology (NIST) S_sulfate isotope reference materials (RM) in an attempt to correct for the difference. The ability of external isotope correction for S_sulfate isotope measurements was evaluated by analyzing NIST and United States Geological Survey (USGS) S_sulfate isotope reference materials as unknowns. Differences in δ34S_sulfate between standard-sample bracketing and standard-sample bracketing with external isotope correction for sulfate samples ranged from 0.72‰ to 2.35‰ over a δ34S range of 1.40‰ to 21.17‰. No isotopic differences were observed when analyzing S_sulfide reference materials over a δ34S_sulfide range of −32.1‰ to 17.3‰ and a δ33S range of −16.5‰ to 8.9‰ via laser ablation (LA)-MC-ICP-MS. Here, we identify a possible plasma-induced fractionation for S_sulfate and describe a new method using external isotope calibration corrections with solution and LA-MC-ICP-MS.
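    As a sketch of the measurement chain discussed here, the snippet below computes a bracketed delta value and then applies a linear external calibration; the ratios, slope, and intercept are placeholders, not the published values:

```python
def delta_permil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def bracketed_delta(r_sample, r_std_before, r_std_after):
    # Standard-sample bracketing: interpolate the drifting standard ratio
    # to the sample's run time (simple midpoint here).
    return delta_permil(r_sample, 0.5 * (r_std_before + r_std_after))

def external_correction(delta_measured, slope=1.03, intercept=-0.4):
    # Assumed external calibration fitted from reference materials that
    # span the expected delta range, per the approach described above.
    return slope * delta_measured + intercept

# Illustrative 34S/32S ratios near the VCDT value of ~0.0442.
print(external_correction(bracketed_delta(0.04430, 0.04416, 0.04420)))
```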

  19. Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheleva, N.; Ivanov, P.; Todorova, G.

    This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)

  20. Collective induction without cooperation? Learning and knowledge transfer in cooperative groups and competitive auctions.

    PubMed

    Maciejovsky, Boris; Budescu, David V

    2007-05-01

    There is strong evidence that groups perform better than individuals do on intellective tasks with demonstrably correct solutions. Typically, these studies assume that group members share common goals. The authors extend this line of research by replacing standard face-to-face group interactions with competitive auctions, allowing for conflicting individual incentives. In a series of studies involving the well-known Wason selection task, they demonstrate that competitive auctions induce learning effects equally impressive as those of standard group interactions, and they uncover specific and general knowledge transfers from these institutions to new reasoning problems. The authors identify payoff feedback and information pooling as the driving factors underlying these findings, and they explain these factors within the theoretical framework of collective induction. (© 2007 APA, all rights reserved.)

  1. a Cell Vertex Algorithm for the Incompressible Navier-Stokes Equations on Non-Orthogonal Grids

    NASA Astrophysics Data System (ADS)

    Jessee, J. P.; Fiveland, W. A.

    1996-08-01

    The steady, incompressible Navier-Stokes (N-S) equations are discretized using a cell vertex, finite volume method. Quadrilateral and hexahedral meshes are used to represent two- and three-dimensional geometries, respectively. The dependent variables include the Cartesian components of velocity and pressure. Advective fluxes are calculated using bounded, high-resolution schemes with a deferred correction procedure to maintain a compact stencil. This treatment ensures bounded, non-oscillatory solutions while maintaining low numerical diffusion. The mass and momentum equations are solved with the projection method on a non-staggered grid. The coupling of the pressure and velocity fields is achieved using the Rhie and Chow interpolation scheme, modified to provide solutions independent of time steps or relaxation factors. An algebraic multigrid solver is used for the solution of the implicit, linearized equations. A number of test cases are analysed and presented. The standard benchmark cases include a lid-driven cavity, flow through a gradual expansion and laminar flow in a three-dimensional curved duct. Predictions are compared with data, with results of other workers and with predictions from a structured, cell-centred, control volume algorithm whenever applicable. Sensitivity of the results to the advection differencing scheme is investigated by applying a number of higher-order flux limiters: the MINMOD, MUSCL, OSHER, CLAM and SMART schemes. As expected, the studies indicate that higher-order schemes largely mitigate the diffusion effects of first-order schemes but also show no clear preference among the higher-order schemes themselves with respect to accuracy. The effect of the deferred correction procedure on global convergence is discussed.
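    The deferred correction treatment mentioned above keeps the compact low-order stencil implicit and lags the high-order correction; a minimal one-dimensional sketch (uniform grid, positive velocity assumed, central differencing as the high-order scheme) is given below:

```python
import numpy as np

def advective_flux(phi, u, beta=1.0):
    """Face flux F = F_upwind + beta * (F_high_order - F_upwind).

    In a solver, F_upwind enters the implicit matrix while the bracketed
    correction is evaluated with previous-iterate values and moved to the
    right-hand side, preserving the compact stencil."""
    f_up = u * phi[:-1]                   # first-order upwind (u > 0 assumed)
    f_ho = u * 0.5 * (phi[:-1] + phi[1:]) # second-order central face value
    return f_up + beta * (f_ho - f_up)

phi = np.array([1.0, 0.9, 0.7, 0.4, 0.1])  # cell values along the line
print(advective_flux(phi, u=1.0))           # blended fluxes at interior faces
```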

  2. Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement

    DTIC Science & Technology

    2013-11-08

    audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system... problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit... Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in

  3. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
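    A common concrete realization of this approach is Tikhonov-regularized deconvolution in the Fourier domain; the sketch below uses a two-parameter stabilizer (a weight alpha and an order p) as an illustration of the idea, not the authors' exact stabilizing functions:

```python
import numpy as np

def tikhonov_deconvolve(y, h, alpha=1e-2, p=1):
    """Solve y = h * x (circular convolution) with the regularized filter
    X = conj(H) Y / (|H|^2 + alpha * |w|^(2p)); (alpha, p) parameterize
    the stabilizer, an assumed stand-in for the paper's functionals."""
    H = np.fft.fft(h, len(y))
    Y = np.fft.fft(y)
    w = np.fft.fftfreq(len(y))
    X = np.conj(H) * Y / (np.abs(H) ** 2 + alpha * np.abs(w) ** (2 * p))
    return np.real(np.fft.ifft(X))

x = np.zeros(128); x[40:50] = 1.0                  # unknown rectangular signal
h = np.exp(-np.arange(128) / 4.0); h /= h.sum()    # known smoothing kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y += np.random.default_rng(3).normal(scale=1e-3, size=y.size)  # noisy data
print(np.round(tikhonov_deconvolve(y, h)[38:52], 2))
```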

  4. Remarks on Heisenberg-Euler-type electrodynamics

    NASA Astrophysics Data System (ADS)

    Kruglov, S. I.

    2017-05-01

    We consider a Heisenberg-Euler-type model of nonlinear electrodynamics with two parameters. Heisenberg-Euler electrodynamics is a particular case of this model. Corrections to Coulomb's law at r → ∞ are obtained and energy conditions are studied. The total electrostatic energy of charged particles is finite. The charged black hole solution in the framework of nonlinear electrodynamics is investigated. We find the asymptotics of the metric and mass functions at r → ∞. Corrections to the Reissner-Nordström solution are obtained.

  5. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for the determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian TrueBeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian TrueBeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.

  6. An Evaluation of Unit and ½ Mass Correction Approaches as a ...

    EPA Pesticide Factsheets

    Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (¹⁵⁰Sm, ¹⁵⁰Nd, ¹⁵⁶Gd and ¹⁵⁶Dy) produce false positives on ⁷⁵As and ⁷⁸Se, and ¹³²Ba can produce a false positive on ⁶⁶Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. A preliminary evaluation indicates that instrumental tuning conditions can impact the observed M+2/M+1 ratio and in turn the false positives generated on Zn, As and Se. Both unit and ½ mass approaches will be evaluated to correct for these false positives relative to benchmark concentration estimates from a triple quadrupole ICP-MS using standard solutions. The impact of matrix on these M+2 corrections will be evaluated over multiple analysis days, with a focus on evaluating internal standards that mirror the matrix-induced shifts in the M+2 ion transmission. The goal of this evaluation is to move away from fixed M+2 corrective approaches and towards sample-specific approaches that mimic the sample-matrix-induced variability, while attempting to address intra-day variability of the M+2 correction factors through the use of internal standards. Oral presentation via webinar for EPA Laboratory Technical Informati
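    A minimal sketch of a sample-specific M+2 correction of the kind being evaluated, with all signals and the ratio illustrative rather than method-prescribed values:

```python
def correct_m2(signal_interfered, parent_signal, m2_ratio):
    """Subtract the doubly charged contribution: e.g. corrected As-75 =
    measured m/z 75 signal - (M+2/M+1 ratio) x singly charged REE signal."""
    return signal_interfered - m2_ratio * parent_signal

# Ratio determined from single-element standard solutions on the same day,
# ideally tracked with an internal standard to follow matrix-induced drift.
print(correct_m2(signal_interfered=1500.0, parent_signal=20000.0, m2_ratio=0.02))
```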

  7. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for ²³⁸U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and a fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  8. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  9. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  10. i4OilSpill, an operational marine oil spill forecasting model for Bohai Sea

    NASA Astrophysics Data System (ADS)

    Yu, Fangjie; Yao, Fuxin; Zhao, Yang; Wang, Guansuo; Chen, Ge

    2016-10-01

    Oil spill models can effectively simulate the trajectories and fate of oil slicks, which is an essential element in contingency planning and in preparing effective response strategies for oil spill accidents. However, when applied to offshore areas such as the Bohai Sea, the trajectories and fate of oil slicks are affected by time-varying factors on a regional scale, factors that are assumed to be constant in most present models. In fact, these factors show much more variation over time in offshore regions than in the deep sea, owing to offshore bathymetric and climatic characteristics. In this paper, the challenge of parameterizing these offshore factors is tackled. Remote sensing data of the region are used to analyze the modification of wind-induced drift factors, and a well-suited parameter correction mechanism for oil spill models is established. The novelty of the algorithm is the self-adaptive modification of the drift factors, derived from remote sensing data for the targeted sea region, in place of the empirical constants used in present models. On this basis, a new regional oil spill model (i4OilSpill) for the Bohai Sea is developed, which simulates oil transformation and fate processes by an Eulerian-Lagrangian methodology. The forecasting accuracy of the proposed model is demonstrated by comparing model simulations with subsequent satellite observations of the Penglai 19-3 oil spill accident. The performance of the parameter correction mechanism is evaluated by comparison with the real spilled oil positions extracted from ASAR images.
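
    The core of such a model is a Lagrangian drift step in which the wind contribution is weighted by a drift factor; the self-adaptive idea is to re-estimate that factor from observed slick displacements instead of fixing it at an empirical constant (a commonly cited value is around 3%). A minimal sketch with illustrative names, not the i4OilSpill code:

        import numpy as np

        def advect_particles(x, y, u_cur, v_cur, u_wind, v_wind, alpha, dt):
            """One Euler step of Lagrangian oil-particle drift:
            ocean current plus a wind-induced drift factor alpha."""
            x = x + (u_cur + alpha * u_wind) * dt
            y = y + (v_cur + alpha * v_wind) * dt
            return x, y

        def update_drift_factor(observed_disp, current_disp, wind_disp):
            """Self-adaptive re-estimate of alpha by least squares from an
            observed slick displacement (e.g. between two satellite scenes)."""
            residual = observed_disp - current_disp
            return float(np.dot(residual, wind_disp) / np.dot(wind_disp, wind_disp))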

  11. Determination of plutonium in nitric acid solutions using energy dispersive L X-ray fluorescence with a low power X-ray generator

    NASA Astrophysics Data System (ADS)

    Py, J.; Groetz, J.-E.; Hubinois, J.-C.; Cardona, D.

    2015-04-01

    This work presents the development of an in-line energy dispersive L X-ray fluorescence spectrometer set-up, with a low power X-ray generator and a secondary target, for the determination of plutonium concentration in nitric acid solutions. The intensity of the L X-rays from the internal conversion and gamma rays emitted by the daughter nuclei from plutonium is minimized and corrected, in order to eliminate the interferences with the L X-ray fluorescence spectrum. The matrix effects are then corrected by the Compton peak method. A calibration plot for plutonium solutions within the range 0.1-20 g L⁻¹ is given.
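
    The Compton peak method amounts to using the analyte-line to Compton-peak intensity ratio as the calibrated quantity, since both respond similarly to matrix (acid, density) effects. A minimal sketch with hypothetical count values, not the study's data:

        import numpy as np

        # Hypothetical calibration data: net Pu L-line counts, Compton peak
        # counts, and known Pu concentrations (g/L) of the standards.
        i_pu = np.array([1200., 5600., 11800., 23500.])
        i_compton = np.array([9.8e4, 9.5e4, 9.1e4, 8.7e4])
        c_pu = np.array([1.0, 5.0, 10.0, 20.0])

        # Normalizing the analyte line by the Compton scatter peak
        # compensates for matrix effects before the linear calibration.
        ratio = i_pu / i_compton
        slope, intercept = np.polyfit(ratio, c_pu, 1)

        def pu_concentration(i_line, i_comp):
            """Estimate Pu concentration (g/L) from measured intensities."""
            return slope * (i_line / i_comp) + intercept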

  12. General Relativistic Theory of the VLBI Time Delay in the Gravitational Field of Moving Bodies

    NASA Technical Reports Server (NTRS)

    Kopeikin, Sergei

    2003-01-01

    The general relativistic theory of the gravitational VLBI experiment conducted on September 8, 2002 by Fomalont and Kopeikin is explained. Equations of radio waves (light) propagating from the quasar to the observer are integrated in the time-dependent gravitational field of the solar system by making use of either retarded or advanced solutions of the Einstein field equations. This mathematical technique separates explicitly the effects associated with the propagation of gravity from those associated with the propagation of light in the integral expression for the relativistic VLBI time delay. We prove that the relativistic correction to the Shapiro time delay, discovered by Kopeikin (ApJ, 556, L1, 2001), changes sign if one retains the direction of light propagation but replaces the retarded solution of the Einstein equations with the advanced one. Hence, this correction is associated with the propagation of gravity. The VLBI observation measured its speed and showed that the retarded solution is the correct one.

  13. Expanding wave solutions of the Einstein equations that induce an anomalous acceleration into the Standard Model of Cosmology.

    PubMed

    Temple, Blake; Smoller, Joel

    2009-08-25

    We derive a system of three coupled equations that implicitly defines a continuous one-parameter family of expanding wave solutions of the Einstein equations, such that the Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology is embedded as a single point in this family. By approximating solutions near the center to leading order in the Hubble length, the family reduces to an explicit one-parameter family of expanding spacetimes, given in closed form, that represents a perturbation of the Standard Model. By introducing a comoving coordinate system, we calculate the correction to the Hubble constant as well as the exact leading order quadratic correction to the redshift vs. luminosity relation for an observer at the center. The correction to redshift vs. luminosity entails an adjustable free parameter that introduces an anomalous acceleration. We conclude (by continuity) that corrections to the redshift vs. luminosity relation observed after the radiation phase of the Big Bang can be accounted for, at the leading order quadratic level, by adjustment of this free parameter. The next order correction is then a prediction. Since nonlinearities alone could actuate dissipation and decay in the conservation laws associated with the highly nonlinear radiation phase and since noninteracting expanding waves represent possible time-asymptotic wave patterns that could result, we propose to further investigate the possibility that these corrections to the Standard Model might be the source of the anomalous acceleration of the galaxies, an explanation not requiring the cosmological constant or dark energy.

  14. Basic FGF or VEGF gene therapy corrects insufficiency in the intrinsic healing capacity of tendons

    PubMed Central

    Tang, Jin Bo; Wu, Ya Fang; Cao, Yi; Chen, Chuan Hao; Zhou, You Lang; Avanessian, Bella; Shimada, Masaru; Wang, Xiao Tian; Liu, Paul Y.

    2016-01-01

    Tendon injury during limb motion is common. Damaged tendons heal poorly and frequently undergo unpredictable ruptures or impaired motion because of insufficient innate healing capacity. By basic fibroblast growth factor (bFGF) or vascular endothelial growth factor (VEGF) gene therapy via an adeno-associated viral type-2 (AAV2) vector, producing supernormal amounts of bFGF or VEGF intrinsically in the tendon, we effectively corrected the insufficiency of the tendon healing capacity. This therapeutic approach (1) resulted in substantial amelioration of the low growth factor activity, with significant increases in bFGF or VEGF from weeks 4 to 6 in the treated tendons (p < 0.05 or p < 0.01), (2) significantly promoted production of type I collagen and other extracellular molecules (p < 0.01) and accelerated cellular proliferation, and (3) significantly increased tendon strength by 68–91% from week 2 after AAV2-bFGF treatment and by 82–210% from week 3 after AAV2-VEGF treatment compared with controls (p < 0.05 or p < 0.01). Moreover, the transgene expression dissipated after healing was complete. These findings show that the gene transfers provide an optimistic solution to the insufficiencies of the intrinsic healing capacity of the tendon and offer an effective therapeutic possibility for patients with tendon disunion. PMID:26865366

  15. The biology of distraction osteogenesis for correction of mandibular and craniomaxillofacial defects: A review

    PubMed Central

    Natu, Subodh Shankar; Ali, Iqbal; Alam, Sarwar; Giri, Kolli Yada; Agarwal, Anshita; Kulkarni, Vrishali Ajit

    2014-01-01

    Limb lengthening by distraction osteogenesis was first described in 1905. The technique did not gain wide acceptance until Gavril Ilizarov identified the physiologic and mechanical factors governing successful bone regeneration. Distraction osteogenesis is a newer variation of the more traditional orthognathic surgical procedures for the correction of dentofacial deformities. It is most commonly used for the correction of more severe deformities and syndromes of both the maxilla and the mandible, and it can also be used in children at ages previously untreatable. The basic technique includes surgical fracture of the deformed bone, insertion of a device, a 5-7 day rest (latency) period, and gradual separation of the bony segments by subsequent activation at a rate of 1 mm per day, followed by an 8-12 week consolidation phase (a worked timeline is sketched below). This allows surgeons to lengthen and reshape deformed bone. The aim of this paper is to review the principles, technical considerations, applications, and limitations of distraction osteogenesis. The application of osteodistraction offers novel solutions for the surgical-orthodontic management of developmental anomalies of the craniofacial skeleton, as bone may be gradually molded into different shapes along with the soft tissue component, thereby resulting in less relapse. PMID:24688555
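
    The quoted protocol implies a simple timeline arithmetic; a back-of-the-envelope sketch using the numbers above (purely illustrative, not clinical guidance):

        def distraction_schedule(lengthening_mm, rate_mm_per_day=1.0,
                                 latency_days=7, consolidation_weeks=12):
            """Rough treatment timeline: latency rest, active distraction
            at ~1 mm/day, then the consolidation phase."""
            distraction_days = lengthening_mm / rate_mm_per_day
            total_days = latency_days + distraction_days + consolidation_weeks * 7
            return {"latency_d": latency_days,
                    "distraction_d": distraction_days,
                    "consolidation_d": consolidation_weeks * 7,
                    "total_d": total_days}

        # e.g. a 15 mm mandibular lengthening: 7 + 15 + 84 = 106 days overall
        print(distraction_schedule(15))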

  16. On the Solution of the Continuity Equation for Precipitating Electrons in Solar Flares

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E.

    2014-01-01

    Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis & Zharkova claim to have found an "updated exact analytical solution" to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii & Shmeleva, and many others is invalid. We show that the solution of Dobranskis & Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the "new" analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis & Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii & Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.
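
    For reference, a sketch (in our notation, not the paper's) of the long-established characteristics solution being defended, assuming purely collisional losses dE/dN = -K/E, so that E₀² = E² + 2KN is conserved along a characteristic:

        \frac{\partial F}{\partial N} \;-\; \frac{\partial}{\partial E}\!\left[\frac{K}{E}\,F(E,N)\right] = 0
        \quad\Longrightarrow\quad
        F(E,N) \;=\; F_0(E_0)\,\frac{E}{E_0}, \qquad E_0 = \sqrt{E^2 + 2KN}.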

  17. Employing UMLS for generating hints in a tutoring system for medical problem-based learning.

    PubMed

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2012-06-01

    While problem-based learning (PBL) has become widely popular for imparting clinical reasoning skills, the dynamics of medical PBL require close attention to a small group of students, placing a burden on medical faculty, whose time is overtaxed. Intelligent tutoring systems (ITSs) offer an attractive means to increase the amount of facilitated PBL training the students receive. But typical intelligent tutoring system architectures make use of a domain model that provides a limited set of approved solutions to the problems presented to students. Student solutions that do not match the approved ones, but are otherwise partially correct, receive little acknowledgement as feedback, stifling broader reasoning. Allowing students to creatively explore the space of possible solutions is exactly one of the attractive features of PBL. This paper provides an alternative to the traditional ITS architecture by using a hint generation strategy that leverages a domain ontology to provide effective feedback. The concept hierarchy and the co-occurrence between concepts in the domain ontology are drawn upon to ascertain the partial correctness of a solution and guide student reasoning towards a correct solution. We describe the strategy incorporated in METEOR, a tutoring system for medical PBL, wherein the widely available UMLS is deployed and represented as the domain ontology. Evaluation of expert agreement with system-generated hints on a 5-point Likert scale resulted in an average score of 4.44 (Spearman's ρ = 0.80, p < 0.01). Hints containing partial correctness feedback scored significantly higher than those without it (Mann-Whitney, p < 0.001). Hints produced by a human expert received an average score of 4.2 (Spearman's ρ = 0.80, p < 0.01). Copyright © 2012 Elsevier Inc. All rights reserved.
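
    A toy sketch of ontology-based partial credit (illustrative only; METEOR's actual scoring over UMLS is richer): a student concept earns credit for proximity to an expert concept in the hierarchy, or, failing that, for co-occurring with it.

        # Toy concept hierarchy and co-occurrence table (hypothetical data).
        parents = {"bacterial pneumonia": "pneumonia", "pneumonia": "lung disease"}
        cooccur = {("cough", "pneumonia"): 0.6}

        def hierarchy_distance(a, b):
            """Hops from concept a up to ancestor b (None if unrelated)."""
            hops, node = 0, a
            while node is not None:
                if node == b:
                    return hops
                node = parents.get(node)
                hops += 1
            return None

        def partial_credit(student, expert):
            """1.0 for an exact match, 0.5 for a parent, etc.;
            otherwise fall back on a co-occurrence score."""
            d = hierarchy_distance(student, expert)
            if d is not None:
                return 1.0 / (1 + d)
            return cooccur.get((student, expert), 0.0)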

  18. Do doctors understand the test characteristics of lung cancer screening?

    PubMed

    Schmidt, Richard; Breyer, Marie; Breyer-Kohansal, Robab; Urban, Matthias; Funk, Georg-Christian

    2018-04-01

    Screening for lung cancer with a low-dose computed tomography (CT) scan is estimated to prevent 3 deaths per 1000 individuals at high risk; however, false positive results and radiation exposure are relevant harms and deserve careful consideration. Screening candidates can only make an autonomous decision if doctors correctly inform them of the pros and cons of the method; therefore, this study aimed to evaluate whether doctors understand the test characteristics of lung cancer screening. In a randomized trial, 556 doctors (members of the Austrian Respiratory Society) were invited to answer questions regarding lung cancer screening based on online case vignettes. Half of the participants were randomized to the group 'solutions provided' and received the correct solutions in advance. The group 'solutions withheld' had to rely on prior knowledge or estimates. The primary endpoint was the between-group difference in the estimated number of deaths preventable by screening. Secondary endpoints were the between-group differences in the estimated prevalence of lung cancer, prevalence of a positive screening result, sensitivity, specificity, positive predictive value, and false negative rate. Estimations were also compared with current data from the literature. The response rate was 29% in both groups. The reduction in the number of deaths due to screening was overestimated six-fold (95% confidence interval, CI: 4-8) compared with the actual data, and there was no effect of group allocation. Providing the correct solutions to doctors had no systematic effect on their answers. Doctors poorly understand the test characteristics of lung cancer screening. Providing the correct solutions in advance did not improve the answers. Continuing education regarding lung cancer screening and the interpretation of test characteristics may be a simple remedy. Clinical trial registered with www.clinicaltrials.gov (NCT02542332).
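
    The arithmetic being probed is ordinary Bayes reasoning; a worked example with illustrative numbers (roughly the ballpark reported for low-dose CT trials, not the study's exact figures):

        # Bayes arithmetic behind screening test characteristics.
        # Illustrative values: prevalence 1%, sensitivity 93%, specificity 73%.
        prev, sens, spec = 0.01, 0.93, 0.73

        tp = prev * sens                   # true positives per screened person
        fp = (1 - prev) * (1 - spec)       # false positives per screened person
        ppv = tp / (tp + fp)               # positive predictive value
        fnr = 1 - sens                     # false negative rate

        print(f"P(positive) = {tp + fp:.3f}, PPV = {ppv:.3f}, FNR = {fnr:.2f}")
        # PPV ~= 0.034: at low prevalence, most positive screens are false alarms.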

  19. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors at in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum ratios, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data at in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated for simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work lead to important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of the correction factors for total scatter factors has an important impact on the monitor unit calculation. On the contrary, the use of the correction factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
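
    Operationally, applying such factors is a simple multiplication of the measured ratios during commissioning; a minimal sketch in which the k values are placeholders (real factors come from Monte Carlo calculations or published tables for the specific detector, beam, and collimator):

        # Placeholder correction factors k per circular collimator (mm).
        k_total_scatter = {4.0: 0.95, 10.0: 0.99, 20.0: 1.00}

        def corrected_total_scatter(measured_ratio, cone_mm):
            """Corrected output factor = diode reading ratio times k."""
            return measured_ratio * k_total_scatter[cone_mm]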

  20. Phase correction for ALMA. Investigating water vapour radiometer scaling: The long-baseline science verification data case study

    NASA Astrophysics Data System (ADS)

    Maud, L. T.; Tilanus, R. P. J.; van Kempen, T. A.; Hogerheijde, M. R.; Schmalzl, M.; Yoon, I.; Contreras, Y.; Toribio, M. C.; Asaki, Y.; Dent, W. R. F.; Fomalont, E.; Matsushita, S.

    2017-09-01

    The Atacama Large millimetre/submillimetre Array (ALMA) makes use of water vapour radiometers (WVR), which monitor the atmospheric water vapour line at 183 GHz along the line of sight above each antenna to correct for phase delays introduced by the wet component of the troposphere. The application of WVR-derived phase corrections improves the image quality and facilitates successful observations in weather conditions that were classically marginal or poor. We present work indicating that a scaling factor applied to the WVR solutions can act to further improve the phase stability and image quality of ALMA data. We find reduced phase noise statistics for 62 out of 75 datasets from the long-baseline science verification campaign after a WVR scaling factor is applied. The improvement in phase noise translates to an expected coherence improvement in 39 datasets. When imaging the bandpass source, we find that 33 of the 39 datasets show an improvement in the signal-to-noise ratio (S/N) of between a few and 30 percent. There are 23 datasets where the S/N of the science image is improved: 6 by <1%, 11 between 1 and 5%, and 6 above 5%. The higher frequencies studied (band 6 and band 7) are those most improved, specifically datasets with low precipitable water vapour (PWV), <1 mm, where the dominance of the wet component is reduced. Although these improvements are not profound, phase stability improvements via the WVR scaling factor come into play for the higher frequency (>450 GHz) and long-baseline (>5 km) observations. These inherently have poorer phase stability and are taken in low PWV (<1 mm) conditions, for which we find the scaling to be most effective. A promising explanation for the scaling factor is the mixing of dry and wet air components, although other origins are discussed. We have produced a python code to allow ALMA users to undertake WVR scaling tests and make improvements to their data.
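
    The scaling test itself reduces to a one-parameter search for the factor that minimizes residual phase noise on a calibrator; a minimal sketch of the idea (conceptually mirroring what such a tool does, not the released code):

        import numpy as np

        def phase_rms(phases):
            """RMS phase noise of a calibrator scan (radians)."""
            return float(np.std(phases))

        def best_wvr_scaling(raw_phase, wvr_phase,
                             scales=np.arange(0.8, 1.4, 0.01)):
            """Grid-search the WVR scaling factor that minimizes the
            residual phase noise after correction."""
            resid = [phase_rms(raw_phase - s * wvr_phase) for s in scales]
            return scales[int(np.argmin(resid))]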

  1. Role Models without Guarantees: Corrective Representations and the Cultural Politics of a Latino Male Teacher in the Borderlands

    ERIC Educational Resources Information Center

    Singh, Michael V.

    2018-01-01

    In recent years mentorship has become a popular 'solution' for struggling boys of color and has led to the recruitment of more male of color teachers. While not arguing against the merits of mentorship, this article critiques what the author deems 'corrective representations.' Corrective representations are the imagined embodiment of proper and…

  2. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
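
    The stopping rule described above (residual small relative to the first sweep and changing slowly between sweeps) can be written in a few lines; a sketch of the driver, assuming a user-supplied `sweep` that performs one deferred-correction sweep and returns the updated solution and its residual norm:

        def sdc_sweeps_with_resilience(sweep, u, r_tol=1e-8, stall=0.1,
                                       max_sweeps=50):
            """Continue SDC sweeps until the residual is small relative to
            the first sweep's residual AND changes slowly between sweeps;
            the extra sweeps absorb soft (transient) faults."""
            u, r_first = sweep(u)
            r_prev = r_first
            for _ in range(max_sweeps):
                u, r = sweep(u)
                small = r < r_tol * r_first
                slow = abs(r - r_prev) < stall * r_prev
                if small and slow:
                    break
                r_prev = r
            return u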

  3. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  5. Spatially unresolved SED fitting can underestimate galaxy masses: a solution to the missing mass problem

    NASA Astrophysics Data System (ADS)

    Sorba, Robert; Sawicki, Marcin

    2018-05-01

    We perform spatially resolved, pixel-by-pixel Spectral Energy Distribution (SED) fitting on galaxies up to z ~ 2.5 in the Hubble eXtreme Deep Field (XDF). Comparing stellar mass estimates from spatially resolved and spatially unresolved photometry, we find that unresolved masses can be systematically underestimated by factors of up to 5. The ratio of the unresolved to resolved mass measurement depends on the galaxy's specific star formation rate (sSFR): at low sSFRs the bias is small, but above sSFR ~ 10^-9.5 yr^-1 the discrepancy increases rapidly, such that galaxies with sSFRs ~ 10^-8 yr^-1 have unresolved mass estimates of only one-half to one-fifth of the resolved value. This result indicates that stellar masses estimated from spatially unresolved data sets need to be systematically corrected, in some cases by large amounts, and we provide an analytic prescription for applying this correction. We show that correcting stellar mass measurements for this bias changes the normalization and slope of the star-forming main sequence and reduces its intrinsic width; most dramatically, correcting for the mass bias increases the stellar mass density of the Universe at high redshift and can resolve the long-standing discrepancy between the directly measured cosmic SFR density at z ≳ 1 and that inferred from stellar mass densities ('the missing mass problem').
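
    As an illustration only, the anchor points quoted in this abstract (bias ≈ 1 below sSFR ~ 10^-9.5 yr^-1, a factor of roughly 2-5 by sSFR ~ 10^-8 yr^-1) already suggest the shape of the correction; the interpolation below is hypothetical and should be replaced by the paper's actual analytic prescription:

        import numpy as np

        # Anchor points read off the abstract, not the paper's fitted formula;
        # the 3.5 is simply the midpoint of the quoted factor-of-2-to-5 range.
        log_ssfr_anchor = np.array([-10.5, -9.5, -8.0])
        mass_ratio_anchor = np.array([1.0, 1.0, 3.5])   # resolved / unresolved

        def resolved_mass_estimate(m_unresolved, log_ssfr):
            """Correct an unresolved SED-fit stellar mass for outshining bias.
            np.interp clamps to the endpoint values outside the anchor range."""
            factor = np.interp(log_ssfr, log_ssfr_anchor, mass_ratio_anchor)
            return m_unresolved * factor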

  6. A simple and effective solution to the constrained QM/MM simulations

    NASA Astrophysics Data System (ADS)

    Takahashi, Hideaki; Kambe, Hiroyuki; Morita, Akihiro

    2018-04-01

    It is a promising extension of the quantum mechanical/molecular mechanical (QM/MM) approach to incorporate the solvent molecules surrounding the QM solute into the QM region to ensure an adequate description of the electronic polarization of the solute. However, the solvent molecules in the QM region inevitably diffuse into the MM bulk during the QM/MM simulation. In this article, we develop a simple and efficient method, referred to as the "boundary constraint with correction" (BCC), to prevent the diffusion of the solvent water molecules by means of a constraint potential. The point of the BCC method is to compensate for the error in a statistical property due to the bias potential by adding a correction term obtained through a set of QM/MM simulations. The BCC method is designed so that the effect of the bias potential completely vanishes when the QM solvent is identical to the MM solvent. Furthermore, the desirable conditions, that is, the continuity of energy and force and the conservation of energy and momentum, are fulfilled in principle. We applied the QM/MM-BCC method to a hydronium ion (H3O+) in aqueous solution to construct the radial distribution function (RDF) of the solvent around the solute. It was demonstrated that the correction term fairly compensated for the error and brought the RDF into good agreement with the result given by an ab initio molecular dynamics simulation.

  7. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects the experimental data and leads to reasonable resistivity and sheet resistance values.
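
    In practice the factor enters as a multiplier on the ideal-geometry formula; a minimal sketch, assuming the collinear thin-sheet prefactor π/ln 2, with F supplied from the tabulated or derived value for the actual sample-probe geometry:

        import math

        def sheet_resistance(v, i, F):
            """Four-probe sheet resistance with geometric correction factor F.
            For an infinite thin sheet and collinear probes, F -> 1 and the
            prefactor is pi/ln(2) ~= 4.532; finite samples and square arrays
            need the appropriate F."""
            return (math.pi / math.log(2.0)) * (v / i) * F

        def resistivity(v, i, thickness_m, F):
            """Bulk resistivity (ohm*m) of a thin sample of known thickness."""
            return sheet_resistance(v, i, F) * thickness_m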

  8. Wavefront-guided correction of ocular aberrations: Are phase plate and refractive surgery solutions equal?

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Munger, Rejean; Priest, David

    2005-08-01

    Wavefront-guided laser eye surgery has been recently introduced and holds the promise of correcting not only defocus and astigmatism in patients but also higher-order aberrations. Research is just beginning on the implementation of wavefront-guided methods in optical solutions, such as phase-plate-based spectacles, as alternatives to surgery. We investigate the theoretical differences between the implementation of wavefront-guided surgical and phase plate corrections. The residual aberrations of 43 model eyes are calculated after simulated refractive surgery and also after a phase plate is placed in front of the untreated eye. In each case, the current wavefront-guided paradigm that applies a direct map of the ocular aberrations to the correction zone is used. The simulation results demonstrate that an ablation map that is a Zernike fit of a direct transform of the ocular wavefront phase error is not as efficient in correcting refractive errors of sphere, cylinder, spherical aberration, and coma as when the same Zernike coefficients are applied to a phase plate, with statistically significant improvements from 2% to 6%.
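
    With Noll-normalized Zernike coefficients, comparing the two correction strategies reduces to comparing residual RMS wavefront errors; a minimal helper (the normalization assumption and the example coefficients are ours):

        import numpy as np

        def residual_rms(zernike_coeffs):
            """RMS wavefront error from Noll-normalized Zernike coefficients:
            with this normalization the RMS is the root-sum-square of the
            coefficients (same units as the coefficients)."""
            c = np.asarray(zernike_coeffs, dtype=float)
            return float(np.sqrt(np.sum(c ** 2)))

        # e.g. residual defocus/astigmatism/coma/spherical terms in microns:
        print(residual_rms([0.12, -0.05, 0.03, 0.08]))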

  9. Correction of the equilibrium temperature caused by slight evaporation of water in protein crystal growth cells during long-term space experiments at International Space Station.

    PubMed

    Fujiwara, Takahisa; Suzuki, Yoshihisa; Yoshizaki, Izumi; Tsukamoto, Katsuo; Murayama, Kenta; Fukuyama, Seijiro; Hosokawa, Kouhei; Oshi, Kentaro; Ito, Daisuke; Yamazaki, Tomoya; Tachibana, Masaru; Miura, Hitoshi

    2015-08-01

    The normal growth rates R of the {110} faces of tetragonal hen egg-white lysozyme crystals were measured as a function of the supersaturation σ using a reflection-type interferometer under microgravity at the International Space Station (NanoStep Project). Since water slightly evaporated from the in situ observation cells during the long-term space station experiment lasting several months, the equilibrium temperature T(e) changed, and the actual σ significantly increased, mainly due to the increase in salt concentration C(s). To correct σ, the actual C(s) and protein concentration C(p), which correctly reproduce the measured T(e) value in space, were first calculated. Second, a new solubility curve with the corrected C(s) was plotted. Finally, the revised σ was obtained from the new solubility curve. This correction method revealed that 2.8% of the water had evaporated from the solution, leading to a 2.8% increase in the C(s) and C(p) of the solution.
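
    The concentration correction itself is elementary; a worked sketch of the 2.8% evaporation case (the starting concentrations are illustrative):

        # If a fraction of the water evaporates, solute concentrations rise
        # by 1/(1 - fraction); for 2.8% evaporation that is ~1.029.
        evap_frac = 0.028

        def corrected_concentration(c_initial):
            return c_initial / (1.0 - evap_frac)

        c_s0, c_p0 = 25.0, 40.0          # mg/mL, illustrative starting values
        print(corrected_concentration(c_s0), corrected_concentration(c_p0))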

  10. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions, including beams of different qualities. A mathematical relation for diode over-response as a function of the field size was developed using previously published experimental data in which diodes were compared to an air-core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found to be not strongly dependent on the type of linac, the method of collimation, or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
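
    A relation of this kind maps a field width to a multiplicative correction; the sketch below uses a hypothetical exponential form with placeholder coefficients purely to show the mechanics, not the fitted relation from the study:

        import numpy as np

        def over_response(w_mm, a=0.05, w0=5.0):
            """Hypothetical diode over-response vs. field width w (mm):
            grows as the field shrinks; a and w0 are placeholders."""
            return 1.0 + a * np.exp(-w_mm / w0)

        def diode_correction_factor(w_mm):
            """k = 1 / over-response: multiplies the diode reading ratio."""
            return 1.0 / over_response(w_mm)

        print(diode_correction_factor(5.0), diode_correction_factor(30.0))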

  11. An analytic, approximate method for modeling steady, three-dimensional flow to partially penetrating wells

    NASA Astrophysics Data System (ADS)

    Bakker, Mark

    2001-05-01

    An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, into a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over only a small area around the well; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision into three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.

  12. Testing the Dependence of Airborne Gravity Results on Three Variables in Kinematic GPS Processing

    NASA Astrophysics Data System (ADS)

    Weil, C.; Diehl, T. M.

    2011-12-01

    The National Geodetic Survey's Gravity for the Redefinition of the American Vertical Datum (GRAV-D) program plans to collect airborne gravity data across the entire U.S. and its holdings over the next decade. The goal is to build a geoid accurate to 1-2 cm, for which the airborne gravity data are key. The first phase is underway, with >13% of data collection completed, covering parts of Alaska, parts of California, most of the Gulf Coast, Puerto Rico, and the Virgin Islands. Obtaining accurate airborne gravity survey results depends on the quality of the GPS/IMU position solution used in the processing. There are many factors that could influence the positioning results. First, we will investigate how an increased data sampling rate for the GPS/IMU affects the position solution and the accelerations derived from those positions. Second, we will test the hypothesis that, for differential kinematic processing, a better solution is obtained using both a base and a rover GPS unit containing an additional rubidium clock, which is reported to sync better with GPS time. Finally, we will look at a few different GPS+IMU processing methods available in commercial software. This includes comparing GPS-only solutions with loosely coupled GPS/IMU solutions from the Applanix POSAV-510 system and tightly coupled solutions with our newly-acquired NovAtel SPAN system (micro-IRS IMU). Differential solutions are compared with PPP (Precise Point Positioning) solutions, along with multi-pass and advanced tropospheric corrections available in the NovAtel Inertial Explorer software. Based on preliminary research, we expect that the tightly coupled solutions with either better troposphere and/or multi-pass solutions will provide superior position (and gravity) results.

  13. Course Corrections. Experts Offer Solutions to the College Cost Crisis

    ERIC Educational Resources Information Center

    Lumina Foundation for Education, 2005

    2005-01-01

    This paper discusses outsourcing as one solution to the college cost crisis. It is not presented as the solution; rather, it is put forth as an attractive strategy characterized by minimal financial and programmatic risk. To explore the basic policy considerations associated with outsourcing, this paper briefly reviews why institutions consider…

  14. Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain

    ERIC Educational Resources Information Center

    Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.

    2012-01-01

    In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…

  15. Analytical guidance law development for aerocapture at Mars

    NASA Technical Reports Server (NTRS)

    Calise, A. J.

    1992-01-01

    During the first part of this reporting period, research concentrated on performing a detailed evaluation, to zero order, of the guidance algorithm developed in the first period, taking the numerical approach developed in the third period. A zero order matched asymptotic expansion (MAE) solution that closely satisfies a set of 6 implicit equations in 6 unknowns to an accuracy of 10^-10 was evaluated. Guidance law implementation entails treating the current state as a new initial state and repetitively solving the MAE problem to obtain the feedback controls. A zero order guided solution was evaluated and compared with the optimal solution obtained by numerical methods. Numerical experience shows that the zero order guided solution is close to the optimal solution, and that the zero order MAE outer solution plays a critical role in accounting for the variations in Loh's term near the exit phase of the maneuver. However, the deficiency that remains in several of the critical variables indicates the need for a first order correction. During the second part of this period, methods for computing a first order correction were explored.

  16. Essentially nonoscillatory postprocessing filtering methods

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1992-01-01

    High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods, denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained using our filters on nonstandard central difference schemes, which exactly preserve entropy and have recently been shown generally not to be weakly convergent to a solution of the conservation law.

  17. Interfacial ion solvation: Obtaining the thermodynamic limit from molecular simulations

    NASA Astrophysics Data System (ADS)

    Cox, Stephen J.; Geissler, Phillip L.

    2018-06-01

    Inferring properties of macroscopic solutions from molecular simulations is complicated by the limited size of systems that can be feasibly examined with a computer. When long-ranged electrostatic interactions are involved, the resulting finite size effects can be substantial and may attenuate very slowly with increasing system size, as shown by previous work on dilute ions in bulk aqueous solution. Here we examine corrections for such effects, with an emphasis on solvation near interfaces. Our central assumption follows the perspective of Hünenberger and McCammon [J. Chem. Phys. 110, 1856 (1999)]: Long-wavelength solvent response underlying finite size effects should be well described by reduced models like dielectric continuum theory, whose size dependence can be calculated straightforwardly. Applied to an ion in a periodic slab of liquid coexisting with vapor, this approach yields a finite size correction for solvation free energies that differs in important ways from results previously derived for bulk solution. For a model polar solvent, we show that this new correction quantitatively accounts for the variation of solvation free energy with volume and aspect ratio of the simulation cell. Correcting periodic slab results for an aqueous system requires an additional accounting for the solvent's intrinsic charge asymmetry, which shifts electric potentials in a size-dependent manner. The accuracy of these finite size corrections establishes a simple method for a posteriori extrapolation to the thermodynamic limit and also underscores the realism of dielectric continuum theory down to the nanometer scale.

  18. Geometrical E-beam proximity correction for raster scan systems

    NASA Astrophysics Data System (ADS)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-04-01

    High pattern fidelity is a basic requirement for the generation of masks containing sub-micron structures and for direct writing. Increasing needs, mainly emerging from OPC at mask level and from x-ray lithography, require a correction of the e-beam proximity effect. Most e-beam writers are raster scan systems. This paper describes a new method for geometrical pattern correction, intended to provide a correction solution for e-beam systems that are not able to apply variable doses.

  19. A fast iterative convolution weighting approach for gridding-based direct Fourier three-dimensional reconstruction with correction for the contrast transfer function.

    PubMed

    Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S

    2015-10-01

    We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is one solution to this inverse problem. Unfortunately, in the single-particle field the set of projections provides a non-equidistantly sampled version of the macromolecule's Fourier transform, and therefore a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high-resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both across threads and across multiple CPUs, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents lower computational complexity, both in terms of time and of memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.
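
    The iterative convolution weighting at the heart of such gridding methods can be written compactly (in the style of Pipe-and-Menon density compensation; a conceptual sketch, not the Xmipp implementation):

        import numpy as np

        def iterative_weights(n_samples, kernel_conv, n_iter=10):
            """Density-compensation weights for non-uniform Fourier samples.
            `kernel_conv(w)` convolves the weights w (attached to the sample
            positions) with the gridding kernel; the fixed-point iteration
            w <- w / (w * C) drives the convolved density towards 1."""
            w = np.ones(n_samples)
            for _ in range(n_iter):
                w = w / kernel_conv(w)
            return w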

  20. Magnetometer bias determination and attitude determination for near-earth spacecraft

    NASA Technical Reports Server (NTRS)

    Lerner, G. M.; Shuster, M. D.

    1979-01-01

    A simple linear-regression algorithm is used to determine simultaneously magnetometer biases, misalignments, and scale factor corrections, as well as the dependence of the measured magnetic field on magnetic control systems. This algorithm has been applied to data from the Seasat-1 and the Atmosphere Explorer Mission-1/Heat Capacity Mapping Mission (AEM-1/HCMM) spacecraft. Results show that complete inflight calibration as described here can improve significantly the accuracy of attitude solutions obtained from magnetometer measurements. This report discusses the difficulties involved in obtaining attitude information from three-axis magnetometers, briefly derives the calibration algorithm, and presents numerical results for the Seasat-1 and AEM-1/HCMM spacecraft.
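
    A minimal sketch of this kind of linear-regression calibration, assuming the model B_meas = (I + M) B_ref + b, with the reference field B_ref taken from a geomagnetic field model; the 12 parameters (9 misalignment/scale terms and 3 biases) follow from ordinary least squares. Names and model form are ours, not the report's:

        import numpy as np

        def calibrate(B_meas, B_ref):
            """Estimate misalignment/scale matrix M and bias b from n samples
            of measured and model field vectors (shape (n, 3) each), solving
            B_meas - B_ref = M @ B_ref + b in the least-squares sense."""
            n = B_meas.shape[0]
            A = np.zeros((3 * n, 12))
            y = (B_meas - B_ref).reshape(-1)
            for k in range(n):
                for i in range(3):
                    A[3 * k + i, 4 * i:4 * i + 3] = B_ref[k]   # row i of M
                    A[3 * k + i, 4 * i + 3] = 1.0              # bias b[i]
            p, *_ = np.linalg.lstsq(A, y, rcond=None)
            M = p.reshape(3, 4)[:, :3]    # misalignment + scale corrections
            b = p.reshape(3, 4)[:, 3]     # biases
            return M, b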

  1. Dielectric constants of soils at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Geiger, F. E.; Williams, D.

    1972-01-01

    A knowledge of the complex dielectric constant of soils is essential in the interpretation of microwave airborne radiometer data of the earth's surface. Measurements were made at 37 GHz on various soils from the Phoenix, Ariz., area. Extensive data have been obtained for dry soil and for soil with water content in the range from 0.6 to 35 percent by dry weight. Measurements were made in a two-arm microwave bridge, and the results were corrected for reflections at the sample interfaces by solution of the parallel dielectric plate problem. The maximum dielectric constants are about a factor of 3 lower than those reported for similar soils at X-band frequencies.

  2. Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations

    NASA Astrophysics Data System (ADS)

    Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.

    2018-03-01

    We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.

  3. The potential for short rotation energy forestry on restored landfill caps.

    PubMed

    Nixon, D J; Stephens, W; Tyrrel, S F; Brierley, E D

    2001-05-01

    This review examines the potential for producing biomass on restored landfills using willow and poplar species in short rotation energy forestry. In southern England, the potential production may be about 20 t ha(-1) of dry stem wood annually. However, actual yields are likely to be constrained by detrimental soil conditions, including shallow depth, compaction, low water holding capacity and poor nutritional status. These factors will affect plant growth by causing drought, waterlogging, poor soil aeration and nutritional deficiencies. Practical solutions to these problems include the correct placement and handling of the agricultural cap material, soil amelioration using tillage and the addition of organic matter (such as sewage sludge), irrigation (possibly using landfill leachate), the installation of drainage and the application of inorganic fertilizers. The correct choice of species and clone, along with good site management are also essential if economically viable yields are to be obtained. Further investigations are required to determine the actual yields that can be obtained on landfill sites using a range of management inputs.

  4. Thermodynamic instability of topological black holes in Gauss-Bonnet gravity with a generalized electrodynamics

    NASA Astrophysics Data System (ADS)

    Hendi, S. H.; Panahiyan, S.

    2014-12-01

    Motivated by string corrections on the gravity and electrodynamics sides, we consider a quadratic Maxwell invariant term as a correction to the Maxwell Lagrangian and obtain exact solutions for higher-dimensional topological black holes in Gauss-Bonnet gravity. We first investigate the asymptotically flat solutions and obtain conserved and thermodynamic quantities, which satisfy the first law of thermodynamics. We also analyze the thermodynamic stability of the solutions by calculating the heat capacity and the Hessian matrix. Then, we focus on horizon-flat solutions with an anti-de Sitter (AdS) asymptote and produce a rotating spacetime with a suitable transformation. In addition, we calculate the conserved and thermodynamic quantities for asymptotically AdS black branes, which satisfy the first law of thermodynamics. Finally, we apply a thermodynamic instability criterion to investigate the effects of the nonlinear electrodynamics in the canonical and grand canonical ensembles.

  5. An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers describe the convection properties accurately and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed, and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced when using the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.

  6. Black hole solution in the framework of arctan-electrodynamics

    NASA Astrophysics Data System (ADS)

    Kruglov, S. I.

    Arctan-electrodynamics coupled with the gravitational field is investigated. We obtain a regular black hole solution that at r → ∞ gives corrections to the Reissner-Nordström solution. The corrections to Coulomb's law at r → ∞ are found. We evaluate the mass of the black hole, which is a function of the dimensional parameter β introduced in the model. The magnetically charged black hole is also investigated, and we obtain the magnetic mass of the black hole and the metric function at r → ∞. A regular black hole solution with a de Sitter core is obtained at r → 0. We show that there is no singularity of the Ricci scalar for electrically and magnetically charged black holes. Restrictions on the electric and magnetic fields are found that follow from the requirement of the absence of a superluminal sound speed and the requirement of classical stability.

  7. Dispersive analysis of the scalar form factor of the nucleon

    NASA Astrophysics Data System (ADS)

    Hoferichter, M.; Ditsche, C.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Based on the recently proposed Roy-Steiner equations for pion-nucleon (πN) scattering [1], we derive a system of coupled integral equations for the ππ → N̄N and K̄K → N̄N S-waves. These equations take the form of a two-channel Muskhelishvili-Omnès problem, whose solution in the presence of a finite matching point is discussed. We use these results to update the dispersive analysis of the scalar form factor of the nucleon, fully including K̄K intermediate states. In particular, we determine the correction Δ_σ = σ(2M_π²) − σ_πN, which is needed for the extraction of the pion-nucleon σ term from πN scattering, as a function of the pion-nucleon subthreshold parameters and the πN coupling constant.

  8. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    The profile count method for estimating cell number in sectioned tissue applies a correction factor for the double counting (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address an apparent confusion between published correction factors for nucleus and nucleolus count units, which are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of a human cochlea to empirically derive correction factors for the two count units, using 3-dimensional reconstruction software to identify double counts. The study was conducted at the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of the cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that the double-count correction factors for the nucleus count unit (0.91) and the nucleolus count unit (0.92) matched the published factors. We discovered that nuclei, and therefore spiral ganglion cells, were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for the undercounting of spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
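
    These empirical factors sit on top of the classical double-count correction (multiply raw profile counts by t/(t+h) for section thickness t and count-unit height h); the sketch below reproduces the abstract's recommended values, with the count-unit height being our illustrative assumption:

        def double_count_correction(section_um=20.0, unit_height_um=2.0):
            """Abercrombie-style correction: a count unit of height h in
            sections of thickness t is counted (t+h)/t times too often,
            so raw profile counts are multiplied by t/(t+h)."""
            t, h = section_um, unit_height_um
            return t / (t + h)

        # The study's empirical factors fold in an undercount term as well:
        NUCLEUS_FACTOR = 0.91     # per the abstract (20-um sections)
        NUCLEOLUS_FACTOR = 0.98   # ~ double count 0.92 x 1.063 undercount

        def corrected_count(raw_profiles, factor=NUCLEUS_FACTOR):
            return raw_profiles * factor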

  9. A family of heavenly metrics

    NASA Astrophysics Data System (ADS)

    Nutku, Y.; Sheftel, M. B.

    2014-02-01

    This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which asymptotically locally look like a flat space and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.

  10. Feed-forward alignment correction for advanced overlay process control using a standalone alignment station "Litho Booster"

    NASA Astrophysics Data System (ADS)

    Yahiro, Takehisa; Sawamura, Junpei; Dosho, Tomonori; Shiba, Yuji; Ando, Satoshi; Ishikawa, Jun; Morita, Masahiro; Shibazaki, Yuichi

    2018-03-01

    One of the main components of an On-Product Overlay (OPO) error budget is process-induced wafer error. This necessitates wafer-to-wafer correction in order to optimize overlay accuracy. This paper introduces the Litho Booster (LB), a standalone alignment station, as a solution to improving OPO. LB can execute high-speed alignment measurements without throughput (THP) loss. LB can be installed in any lithography process control loop as a metrology tool and is then able to provide feed-forward (FF) corrections to the scanners. In this paper, the detailed LB design is described, and basic LB performance and OPO improvement are demonstrated. Litho Booster's extendibility and applicability as a solution for next-generation manufacturing accuracy and productivity challenges are also outlined.
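
    As a sketch of the feed-forward idea (the actual LB correction models and scanner interfaces are not described in this record), wafer alignment measurements can be fit to a linear distortion model whose negated prediction is passed to the exposure tool as a per-wafer correctable; all names and numbers below are illustrative.

      # Minimal feed-forward alignment sketch: fit a linear model (translation,
      # magnification, rotation) to measured mark displacements and emit the
      # negated prediction as a scanner correctable.
      import numpy as np

      def fit_linear_model(xy, d):
          A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])  # [1, x, y]
          cx, *_ = np.linalg.lstsq(A, d[:, 0], rcond=None)
          cy, *_ = np.linalg.lstsq(A, d[:, 1], rcond=None)
          return cx, cy

      def feed_forward_correction(xy, cx, cy):
          A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
          return -np.column_stack([A @ cx, A @ cy])  # scanner applies the negative

      rng = np.random.default_rng(0)
      marks = rng.uniform(-100, 100, size=(40, 2))        # mark positions (mm)
      dist = np.column_stack([5e-4 + 2e-6 * marks[:, 0],  # synthetic wafer distortion
                              -3e-4 + 1e-6 * marks[:, 1]])
      cx, cy = fit_linear_model(marks, dist + 1e-6 * rng.standard_normal((40, 2)))
      print(feed_forward_correction(marks[:2], cx, cy))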

  11. Rigorous asymptotics of traveling-wave solutions to the thin-film equation and Tanner’s law

    NASA Astrophysics Data System (ADS)

    Giacomelli, Lorenzo; Gnann, Manuel V.; Otto, Felix

    2016-09-01

    We are interested in traveling-wave solutions to the thin-film equation with zero microscopic contact angle (in the sense of complete wetting without precursor) and inhomogeneous mobility $h^3 + \lambda^{3-n} h^n$, where $h$, $\lambda$, and $n \in \left(\frac{3}{2}, \frac{7}{3}\right)$ denote film height, slip parameter, and mobility exponent, respectively. Existence and uniqueness of these solutions have been established by Maria Chiricotto and the first of the authors in previous work under the assumption of sub-quadratic growth as $h \to \infty$. In the present work we investigate the asymptotics of solutions as $h \searrow 0$ (the contact-line region) and as $h \to \infty$. As $h \searrow 0$ we observe, to leading order, the same asymptotics as for traveling waves or source-type self-similar solutions to the thin-film equation with homogeneous mobility $h^n$, and we additionally characterize corrections to this law. Moreover, as $h \to \infty$ we identify, to leading order, the logarithmic Tanner profile, i.e. the solution to the corresponding unperturbed problem with $\lambda = 0$ that determines the apparent macroscopic contact angle. Besides higher-order terms, corrections turn out to affect the asymptotic law as $h \to \infty$ only by setting the length scale in the logarithmic Tanner profile. Moreover, we prove that both the correction and the length scale depend smoothly on $n$. Hence, in line with the common philosophy, the precise modeling of liquid-solid interactions (within our model, the mobility exponent) does not affect the qualitative macroscopic properties of the film.

  12. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (the magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated; this allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of the field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of the refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, the MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including topography in the inversion, and we test different regularization schemes using a weighted second norm of the model gradient as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mount St. Helens.
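
    The memory advantage of the data-space formulation comes from solving an n_data × n_data system instead of an n_model × n_model one. A minimal dense sketch of one such step follows (a simplified, regularized form; the paper's implementation adds model references, line searches, and parallel tiled linear algebra):

      # One data-space Gauss-Newton step for a generic inverse problem.
      # Cm, Cd are model/data covariances; lam is a damping parameter.
      import numpy as np

      def data_space_gn_step(m, fwd, J, d_obs, Cm, Cd, lam):
          r = d_obs - fwd(m)                 # data residual
          A = J @ Cm @ J.T + lam * Cd        # n_data x n_data system
          beta = np.linalg.solve(A, r)       # solve in data space
          return m + Cm @ (J.T @ beta)       # back-project update to model space

      G = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])   # toy linear forward operator
      d_obs = G @ np.array([1.0, -1.0, 2.0])
      m = np.zeros(3)
      for _ in range(20):
          m = data_space_gn_step(m, lambda x: G @ x, G, d_obs, np.eye(3), np.eye(2), 1e-3)
      print(m)   # converges to a regularized model fitting d_obs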

  13. On the solution of the continuity equation for precipitating electrons in solar flares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E., E-mail: emslieg@wku.edu, E-mail: gordon.d.holman@nasa.gov

    2014-09-01

    Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis and Zharkova claim to have found an 'updated exact analytical solution' to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii and Shmeleva, and many others is invalid. We show that the solution of Dobranskis and Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the 'new' analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis and Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii and Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.

  14. Acanthamoeba keratitis in patients wearing scleral contact lenses.

    PubMed

    Sticca, Matheus Porto; Carrijo-Carvalho, Linda C; Silva, Isa M B; Vieira, Luiz A; Souza, Luciene B; Junior, Rubens Belfort; Carvalho, Fábio Ramos S; Freitas, Denise

    2018-06-01

    To report a series of cases of Acanthamoeba keratitis (AK) in scleral lens wearers with keratoconus to determine whether this type of contact lens presents a greater risk for development of infection. This study reports three patients who wore scleral contact lenses to correct keratoconus and developed AK. The diagnoses of AK were established based on cultures of the cornea, scleral contact lenses, and contact lens paraphernalia. This study investigated the risk factors for infections. The possible risks for AK in scleral contact lens wearers are hypoxic changes in the corneal epithelium because of the large diameter and minimal tear exchange, use of large amounts of saline solution necessary for scleral lens fitting, storing the scleral lens overnight in saline solution rather than contact lens multipurpose solutions, not rubbing the contact lens during cleaning, and the space between the cornea and the back surface of the scleral lens that might serve as a fluid reservoir and environment for Acanthamoeba multiplication. Two patients responded well to medical treatment of AK; one is still being treated. The recommendations for use and care of scleral contact lenses should be emphasized, especially regarding use of sterile saline (preferably single use), attention to rubbing the lens during cleaning, cleaning of the plunger, and overnight storage in fresh contact lens multipurpose solutions without topping off the lens solution in the case. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  15. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    NASA Astrophysics Data System (ADS)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best-performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
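
    A toy illustration of the effect (not the article's pipeline): scanning the true spot centre across one pixel and estimating it with a centre of gravity on a small window produces a systematic, antisymmetric error in the sub-pixel position.

      # Demonstrate centroid bias versus true sub-pixel position for a
      # Gaussian spot truncated by a small sub-aperture window.
      import numpy as np

      def cog(img):
          y, x = np.indices(img.shape)
          s = img.sum()
          return (x * img).sum() / s, (y * img).sum() / s

      def gaussian_spot(n, cx, cy, sigma=1.8):
          y, x = np.indices((n, n))
          return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

      n = 8
      for sub in (-0.4, -0.2, 0.0, 0.2, 0.4):
          true_x = n / 2 - 0.5 + sub
          est_x, _ = cog(gaussian_spot(n, true_x, n / 2 - 0.5))
          print(f"true offset {sub:+.1f} px -> bias {est_x - true_x:+.4f} px")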

  16. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
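
    The prediction-correction template can be seen on a toy time-varying quadratic, f(x;t) = ½‖x − a(t)‖², whose Hessian is the identity: the prediction step reduces to drifting with the (here assumed known) target velocity, and the correction is a single gradient step at the new time. This is only a sketch; the paper's methods handle constraints and avoid explicit Hessian inversion.

      import numpy as np

      def a(t):       # trajectory of the time-varying optimum
          return np.array([np.cos(t), np.sin(t)])

      def a_dot(t):   # its drift (in practice this would be estimated)
          return np.array([-np.sin(t), np.cos(t)])

      dt, alpha = 0.1, 0.8
      x = a(0.0)
      for k in range(200):
          t = k * dt
          x = x + a_dot(t) * dt               # prediction: follow the drift
          x = x - alpha * (x - a(t + dt))     # correction: gradient step at t+dt
      print("tracking error:", np.linalg.norm(x - a(200 * dt)))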

  17. The wave equation in Friedmann-Robertson-Walker space-times and asymptotics of the intensity and distance relationship of a localised source

    NASA Astrophysics Data System (ADS)

    Starko, Darij; Craig, Walter

    2018-04-01

    Variations in redshift measurements of Type Ia supernovae and intensity observations from large sky surveys are an indicator of a component of acceleration in the rate of expansion of space-time. A key factor in the measurements is the intensity-distance relation for Maxwell's equations in Friedmann-Robertson-Walker (FRW) space-times. In view of future measurements of the decay of other fields on astronomical time and spatial scales, we determine the asymptotic behavior of the intensity-distance relationship for the solution of the wave equation in space-times with an FRW metric. This builds on previous work on initial value problems for the wave equation in FRW space-time [Abbasi, B. and Craig, W., Proc. R. Soc. London, Ser. A 470, 20140361 (2014)]. In this paper, we focus on the precise intensity decay rates for the special cases of curvature k = 0 and k = -1, as well as giving a general derivation of the wave solution for -∞ < k < 0. We choose a Cauchy surface $\{(t, x) : t = t_0 > 0\}$, where $t_0$ represents the time of an initial emission source relative to the Big Bang singularity at t = 0. The initial data $[g(x), h(x)]$ are assumed to be compactly supported, $\mathrm{supp}(g, h) \subseteq B_R(0)$, and the terms in the expression for the fundamental solution of the wave equation with the slowest decay rate are retained. The intensities calculated for coordinate time $\{t : t > 0\}$ contain correction terms proportional to the ratio of $t_0$ and the time difference $\rho = t - t_0$. For general curvature k, these expressions for the intensity reduce by scaling to the same form as for k = -1, from which we deduce the general formula. We note that for typical astronomical events such as Type Ia supernovae, the first-order correction term for all curvatures -∞ < k < 0 is on the order of $10^{-4}$ smaller than the zeroth-order term. These correction terms are small but may be significant in applications to alternative observations of cosmological space-time expansion rates.

  18. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging, since the blue-weighted spectral characteristics of Cerenkov radiation are highly attenuated by mammalian tissue. Large organs such as the liver have also shown higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity with a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions, Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct the CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
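
    The core of such a correction is single-exponential: given an effective attenuation coefficient and an a priori organ depth, the surface signal is scaled back to the source. A minimal sketch, with placeholder numbers rather than the paper's phantom-derived values:

      import numpy as np

      def correct_cli(signal, depth_cm, mu_eff_per_cm):
          """Attenuation-corrected Cerenkov signal for a source at a known depth,
          assuming broadband light attenuation collapses to one exponential."""
          return signal * np.exp(mu_eff_per_cm * depth_cm)

      # hypothetical: 1e4 counts measured from a source 0.5 cm deep
      print(correct_cli(1.0e4, depth_cm=0.5, mu_eff_per_cm=5.0))

    A calibration factor would then convert the corrected signal to %ID/g, as described above.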

  19. Development of attenuation and diffraction corrections for linear and nonlinear Rayleigh surface waves radiating from a uniform line source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr; Cho, Sungjong; Zhang, Shuzeng

    2016-04-15

    In recent studies of nonlinear Rayleigh surface waves, harmonic generation measurements have been successfully employed to characterize material damage and microstructural changes, and have been found to be sensitive to the early stages of the damage process. A nonlinearity parameter of Rayleigh surface waves was derived and is frequently measured to quantify the level of damage. Accurate measurement of the nonlinearity parameter generally requires corrections for beam diffraction and medium attenuation. These effects are not generally known for nonlinear Rayleigh waves and therefore were not properly considered in most previous studies. In this paper, the nonlinearity parameter for a Rayleigh surface wave is defined from the plane-wave displacement solutions. We explicitly define the attenuation and diffraction corrections for fundamental and second-harmonic Rayleigh wave beams radiated from a uniform line source. Attenuation corrections are obtained from the quasilinear theory of the plane Rayleigh wave equations. To obtain closed-form expressions for the diffraction corrections, multi-Gaussian beam (MGB) models are employed to represent the integral solutions derived from the quasilinear theory of the full two-dimensional wave equation without the parabolic approximation. Diffraction corrections are presented for a couple of transmitter-receiver geometries, and the effects of making attenuation and diffraction corrections are examined through the simulation of nonlinearity parameter determination in a solid sample.

  20. People--things and data--ideas: bipolar dimensions?

    PubMed

    Tay, Louis; Su, Rong; Rounds, James

    2011-07-01

    We examined a longstanding assumption in vocational psychology that people-things and data-ideas are bipolar dimensions. Two minimal criteria for bipolarity were proposed and examined across 3 studies: (a) The correlation between opposite interest types should be negative; (b) after correcting for systematic responding, the correlation should be greater than -.40. In Study 1, a meta-analysis using 26 interest inventories with a sample size of 1,008,253 participants showed that meta-analytic correlations between opposite RIASEC (realistic, investigative, artistic, social, enterprising, conventional) types ranged from -.03 to .18 (corrected meta-analytic correlations ranged from -.23 to -.06). In Study 2, structural equation models (SEMs) were fit to the Interest Finder (IF; Wall, Wise, & Baker, 1996) and the Interest Profiler (IP; Rounds, Smith, Hubert, Lewis, & Rivkin, 1999) with sample sizes of 13,939 and 1,061, respectively. The correlations of opposite RIASEC types were positive, ranging from .17 to .53. No corrected correlation met the criterion of -.40 except for investigative-enterprising (r = -.67). Nevertheless, a direct estimate of the correlation between data-ideas end poles using targeted factor rotation did not reveal bipolarity. Furthermore, bipolar SEMs fit substantially worse than a multiple-factor representation of vocational interests. In Study 3, a two-way clustering solution on IF and IP respondents and items revealed a substantial number of individuals with interests in both people and things. We discuss key theoretical, methodological, and practical implications such as the structure of vocational interests, interpretation and scoring of interest measures for career counseling, and expert RIASEC ratings of occupations.

  1. Immediate Truth--Temporal Contiguity between a Cognitive Problem and Its Solution Determines Experienced Veracity of the Solution

    ERIC Educational Resources Information Center

    Topolinski, Sascha; Reber, Rolf

    2010-01-01

    A temporal contiguity hypothesis for the experience of veracity is tested which states that a solution candidate to a cognitive problem is more likely to be experienced as correct the faster it succeeds the problem. Experiment 1 varied the onset time of the appearance of proposed solutions to anagrams (50 ms vs. 150 ms) and found for both correct…

  2. How to tackle protein structural data from solution and solid state: An integrated approach.

    PubMed

    Carlon, Azzurra; Ravera, Enrico; Andrałojć, Witold; Parigi, Giacomo; Murshudov, Garib N; Luchinat, Claudio

    2016-02-01

    Long-range NMR restraints, such as diamagnetic residual dipolar couplings and paramagnetic data, can be used to determine 3D structures of macromolecules. They are also used to monitor, and potentially to improve, the accuracy of a macromolecular structure in solution by validating or "correcting" a crystal model. Since crystal structures suffer from crystal packing forces they may not be accurate models for the macromolecular structures in solution. However, the presence of real differences should be tested for by simultaneous refinement of the structure using both crystal and solution NMR data. To achieve this, the program REFMAC5 from CCP4 was modified to allow the simultaneous use of X-ray crystallographic and paramagnetic NMR data and/or diamagnetic residual dipolar couplings. Inconsistencies between crystal structures and solution NMR data, if any, may be due either to structural rearrangements occurring on passing from the solution to solid state, or to a greater degree of conformational heterogeneity in solution with respect to the crystal. In the case of multidomain proteins, paramagnetic restraints can provide the correct mutual orientations and positions of domains in solution, as well as information on the conformational variability experienced by the macromolecule. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Evaporation rate of nucleating clusters.

    PubMed

    Zapadinsky, Evgeni

    2011-11-21

    The Becker-Döring kinetic scheme is the most frequently used approach to vapor-liquid nucleation. In the present study it has been extended so that master equations for all cluster configurations are taken into consideration. In the Becker-Döring kinetic scheme the nucleation rate is calculated through comparison of the balanced steady state and unbalanced steady state solutions of the set of kinetic equations. It is usually assumed that the balanced steady state produces the equilibrium cluster distribution, and that the evaporation rates are identical in the balanced and unbalanced steady state cases. In the present study we have shown that the evaporation rates are not identical in the equilibrium and unbalanced steady state cases. The evaporation rate depends on the number of clusters at the limit of the cluster definition. We have shown that the ratio of the number of n-clusters at the limit of the cluster definition to the total number of n-clusters is different in the equilibrium and unbalanced steady state cases. This causes a difference in evaporation rates for the two cases and results in a correction factor to the nucleation rate. A rough estimate puts the correction factor at the order of $10^{-1}$; it can be lower if the carrier gas effectively equilibrates the clusters. The developed approach allows one to refine the correction factor with Monte Carlo and molecular dynamics simulations.

  4. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Department of Physics, Tomsk State University, Lenin Ave. 36, Tomsk 634050; Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1

    2016-01-20

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ϵ and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and $R^2$-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  6. An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents

    ERIC Educational Resources Information Center

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2013-01-01

    Content authenticity and correctness are among the important challenges in eLearning, as there can be many solutions to one specific problem in cyberspace. Therefore, the authors feel it is necessary to map problems to solutions using graph partitioning and weighted bipartite matching. This article proposes an efficient algorithm to partition…
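
    A minimal sketch of the matching half of that idea, assuming problem-solution similarity scores are already available (the scores and sizes here are invented):

      # Map problems to candidate solutions by maximum-weight bipartite matching.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      score = np.array([          # score[i][j]: how well solution j fits problem i
          [0.9, 0.1, 0.3],
          [0.2, 0.8, 0.4],
          [0.3, 0.5, 0.7],
      ])
      rows, cols = linear_sum_assignment(score, maximize=True)
      for i, j in zip(rows, cols):
          print(f"problem {i} -> solution {j} (score {score[i, j]:.1f})")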

  7. Development of the Metacognitive Skills of Prediction and Evaluation in Children With or Without Math Disability

    PubMed Central

    Garrett, Adia J.; Mazzocco, Michèle M. M.; Baker, Linda

    2009-01-01

    Metacognition refers to knowledge about one’s own cognition. The present study was designed to assess metacognitive skills that either precede or follow task engagement, rather than the processes that occur during a task. Specifically, we examined prediction and evaluation skills among children with (n = 17) or without (n = 179) mathematics learning disability (MLD), from grades 2 to 4. Children were asked to predict which of several math problems they could solve correctly; later, they were asked to solve those problems. They were asked to evaluate whether their solution to each of another set of problems was correct. Children’s ability to evaluate their answers to math problems improved from grade 2 to grade 3, whereas there was no change over time in the children’s ability to predict which problems they could solve correctly. Children with MLD were less accurate than children without MLD in evaluating both their correct and incorrect solutions, and they were less accurate at predicting which problems they could solve correctly. However, children with MLD were as accurate as their peers in correctly predicting that they could not solve specific math problems. The findings have implications for the usefulness of children’s self-review during mathematics problem solving. PMID:20084181

  8. High-precision GNSS ocean positioning with BeiDou short-message communication

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhiteng; Zang, Nan; Wang, Siyao

    2018-04-01

    The current popular GNSS RTK technique is not applicable on the ocean due to the limited communication access for transmitting differential corrections. A new technique is proposed for high-precision ocean RTK, referred to as ORTK, in which the corrections are transmitted via BeiDou satellite short-message communication (SMC). To overcome the narrow bandwidth of BeiDou SMC, a new strategy of simplifying and encoding the corrections is proposed in place of standard differential corrections, reducing the single-epoch corrections from more than 1000 bytes to fewer than 300 bytes. To solve the problems of correction delays, cycle slips, blunders and abnormal epochs over ultra-long baseline ORTK, a series of powerful algorithms was designed in the user-end software to achieve stable and precise kinematic solutions in far-ocean applications. The results from two long baselines of 240 and 420 km and from real ocean experiments reveal that kinematic solutions with a horizontal accuracy of 5 cm and a vertical accuracy better than 15 cm are achievable with a convergence time of 3-10 min. Compared to commercial ocean PPP with satellite telecommunication, ORTK is much cheaper, more accurate, and faster to converge. It is very promising for many location-based ocean services.
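
    The record does not spell out the message layout, but the size budget is easy to illustrate: quantizing each per-satellite correction to a signed 16-bit integer (millimetres) with a one-byte satellite id keeps a 30-satellite epoch at 90 bytes, well under the 300-byte figure quoted above. The format below is hypothetical, not the real ORTK encoding.

      # Hypothetical compact encoding of per-satellite corrections for a
      # narrow-bandwidth link such as BeiDou SMC.
      import struct

      corrections = [(prn, 0.001 * prn) for prn in range(1, 31)]   # (sat id, metres)
      payload = b"".join(struct.pack("<Bh", prn, round(v * 1000))  # id + mm value
                         for prn, v in corrections)
      print(len(payload), "bytes")   # 30 satellites x 3 bytes = 90 bytes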

  9. Closure Report for Corrective Action Unit 340: NTS Pesticide Release Sites Nevada Test Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. M. Obi

    The purpose of this report is to provide documentation of the completed corrective action and to provide data confirming the corrective action. The corrective action was performed in accordance with the approved Corrective Action Plan (CAP) (U.S. Department of Energy [DOE], 1999) and consisted of clean closure by excavation and disposal. The Area 15 Quonset Hut 15-11 was formerly used for storage of farm supplies including pesticides, herbicides, and fertilizers. The Area 23 Quonset Hut 800 was formerly used to clean pesticide and herbicide equipment. Steam-cleaning rinsate and sink drainage occasionally overflowed a sump into adjoining drainage ditches. One ditch flows south and is referred to as the quonset hut ditch. The other ditch flows southeast and is referred to as the inner drainage ditch. The Area 23 Skid Huts were formerly used for storing and mixing pesticide and herbicide solutions. Excess solutions were released directly to the ground near the skid huts. The skid huts were moved to a nearby location prior to the site characterization performed in 1998 and reported in the Corrective Action Decision Document (CADD) (DOE, 1998). The vicinity and site plans of the Area 23 sites are shown in Figures 2 and 3, respectively.

  10. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free, unconstrained accelerations of the system; (b) using the tree-topology solution to compute correction forces that enforce the closure constraints; and (c) correcting the unconstrained accelerations with the correction accelerations resulting from the correction forces (a sketch of this three-step solve follows below). This constraint-embedding technique shows how to use direct embedding to eliminate local closure loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra to formulate the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus, in essence, the new technique allows conversion of a system with closure constraints into an equivalent tree-topology system, and thus allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure constraints are those associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once eliminated, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain systems.
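
    A dense-matrix sketch of the three-step solve, with the O(N) spatial-operator machinery replaced by plain linear algebra for clarity (G is the closure-constraint Jacobian enforcing G a = b; all matrices are toy values):

      import numpy as np

      def closed_chain_accel(M, tau, G, b):
          a_free = np.linalg.solve(M, tau)               # (a) unconstrained accelerations
          S = G @ np.linalg.solve(M, G.T)                # constraint-space operator
          lam = np.linalg.solve(S, b - G @ a_free)       # (b) closure correction forces
          return a_free + np.linalg.solve(M, G.T @ lam)  # (c) corrected accelerations

      M = np.diag([2.0, 1.0, 1.5])        # toy mass matrix
      tau = np.array([1.0, 0.0, -0.5])    # applied generalized forces
      G = np.array([[1.0, -1.0, 0.0]])    # e.g. equal acceleration of bodies 1 and 2
      print(closed_chain_accel(M, tau, G, np.zeros(1)))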

  11. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta's Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw readings of the source calibration setup (Rcal,raw) with readings taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) to CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3, respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB × Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
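
    The arithmetic of the corrected correspondence factor is a one-liner; the readings below are invented, with only the Leipzig H1 factor (1.209) taken from the text.

      CF_IVB = 1.209          # SI IVB 1000 correction factor, Leipzig H1 (from text)
      R_cal_raw = 2.50e-9     # hypothetical source-calibration reading (A)
      R_app_raw = 0.95e-9     # hypothetical reading with applicator mounted (A)

      CF_rev = R_app_raw / (CF_IVB * R_cal_raw)
      print(f"corrected correspondence factor: {CF_rev:.4f}")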

  12. Development of kinks in car-following models

    NASA Astrophysics Data System (ADS)

    Kurtze, Douglas A.

    2017-03-01

    Many car-following models of traffic flow admit the possibility of absolute stability, a situation in which uniform traffic flow at any spacing is linearly stable. Near the threshold of absolute stability, these models can often be reduced to a modified Korteweg-de Vries (mKdV) equation plus small corrections. The hyperbolic-tangent "kink" solutions of the mKdV equation are usually of particular interest, as they represent transition zones between regions of different traffic spacings. Solvability analysis is believed to show that only a single member of the one-parameter family of kink solutions is preserved by the correction terms, and this is interpreted as a kind of selection. We show, however, that the usual solvability calculation rests on an unstated, unjustified assumption, and that without this assumption it merely gives a first-order correction to the relation between the traffic spacings far behind and far ahead of the kink, rather than any kind of "selection" criterion for the family of kink solutions. On the other hand, we display a two-parameter family of traveling wave solutions of the mKdV equation, which describe regions of one traffic spacing embedded in traffic of a different spacing; this family includes the kink solutions as a limiting case. We carry out a multiple-time-scales calculation and find conditions under which the inclusions decay, conditions that lead to a selected inclusion, and conditions for which the inclusion evolves into a pair of kinks.
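
    For reference, one common normalization of the (defocusing) mKdV equation and its one-parameter family of hyperbolic-tangent kinks reads (sign conventions vary between authors):

      \[
        u_t - 6\,u^2 u_x + u_{xxx} = 0, \qquad
        u_k(x,t) = k \tanh\bigl(k\,(x + 2k^2 t)\bigr), \quad k > 0.
      \]

    The selection question discussed above is which member of this k-family, i.e. which pair of asymptotic spacings u → ±k, survives the correction terms.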

  13. Determination of Focal Mechanisms of Non-Volcanic Tremors Based on S-Wave Polarization Data Corrected for the Effects of Anisotropy

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Uchide, T.; Takeda, N.

    2014-12-01

    We propose a method to determine focal mechanisms of non-volcanic tremors (NVTs) based on S-wave polarization angles. The successful retrieval of polarization angles from low S/N tremor signals owes much to the observation that NVTs propagate slowly and therefore do not change their location immediately. This feature of NVTs enables us to use a longer window to compute a polarization angle (e.g., one minute or longer), resulting in a stack of particle motions. Following Zhang and Schwartz (1994), we first correct for the splitting effect to recover the source polarization angle (the anisotropy-corrected angle). This is a key step, because shear-wave splitting distorts the particle motion excited by a seismic source. We then determine the best double-couple solution using anisotropy-corrected angles from multiple stations. The method was applied to a tremor sequence at Kii Peninsula, southwest Japan, which occurred at the beginning of April 2013. A standard splitting and polarization analysis was applied in a one-minute moving window to determine the splitting parameters as well as the anisotropy-corrected angles. A grid search was performed at each hour to determine the best double-couple solution satisfying the one-hour average polarization angles. Most solutions show NW-dipping low-angle planes consistent with the plate boundary, or SE-dipping high-angle planes. Because of the 180-degree ambiguity in polarization angles, the present method alone cannot distinguish the compressional quadrant from the dilatational one. Together with the observation of very low-frequency earthquakes near the present study area (Ito et al., 2007), it is reasonable to consider that they represent shear slip on low-angle thrust faults. It is also noted that some of the solutions contain a strike-slip component. Acknowledgements: Seismograph stations used in this study include permanent stations operated by NIED (Hi-net), JMA, and the Earthquake Research Institute, together with the Geological Survey of Japan, AIST. This work was supported by JSPS KAKENHI Grant Number 24540463.

  14. Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.

    PubMed

    Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven

    2009-01-01

    The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Lawrence R.; Chaudhari, Mangesh I.; Rempe, Susan B.

    This review focuses on the striking recent progress in solving for hydrophobic interactions between small inert molecules. We discuss several new understandings. First, the inverse temperature phenomenology of hydrophobic interactions, i.e., the strengthening of hydrophobic bonds with increasing temperature, is decisively exhibited by hydrophobic interactions between atomic-scale hard-sphere solutes in water. Second, inclusion of attractive interactions associated with atomic-size hydrophobic reference cases leads to substantial, nontrivial corrections to reference results for purely repulsive solutes. Hydrophobic bonds are weakened by adding solute dispersion forces to the treatment of reference cases. The classic statistical mechanical theory for those corrections is not accurate in this application, but molecular quasi-chemical theory shows promise. Lastly, because of the masking roles of excluded volume and attractive interactions, comparisons that do not discriminate among the different possibilities face an interpretive danger.

  16. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  17. SiC MOSFET Based Single Phase Active Boost Rectifier with Power Factor Correction for Wireless Power Transfer Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onar, Omer C; Tang, Lixin; Chinthavali, Madhu Sudhan

    2014-01-01

    Wireless Power Transfer (WPT) technology is a novel research area in charging technology that bridges the utility and automotive industries. There are various solutions currently being evaluated by several research teams to find the most efficient way to manage the power flow from the grid to the vehicle energy storage system. There are different control parameters that can be utilized to compensate for the change in impedance due to variable parameters such as battery state-of-charge, coupling factor, and coil misalignment. This paper presents the implementation of an active front-end rectifier on the grid side for power factor control and voltage boost capability for load power regulation. The proposed SiC MOSFET based single-phase active front-end rectifier with PFC resulted in >97% efficiency at a 137 mm air-gap and >95% efficiency at a 160 mm air-gap.

  18. Self-assessing target with automatic feedback

    DOEpatents

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

    A self-assessing target with four quadrants and a method of use thereof. Each quadrant contains possible causes for why shots are going into that particular quadrant rather than the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing the marksman to shoot in that particular area. In addition, the self-assessing target contains possible causes for general shooting errors and solutions to the causes of the general shooting errors. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.

  19. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends earlier one-dimensional work. Also, by developing a new algorithm, the application to bound-state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  20. Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories

    NASA Astrophysics Data System (ADS)

    Afonin, S. S.; Solomko, T. D.

    2018-04-01

    The large-Nc masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. This form of the corrections was proposed some time ago for consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. We revisited that original analysis and found a second solution of the proposed sum rules. The given solution better describes the spectrum of vector and axial mesons.
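
    Schematically (not the paper's exact parametrization; the coefficients are channel-dependent and fixed by the sum rules), an ansatz of this type reads:

      \[
        M^2(n) = M_0^2 + a\,n + A\,e^{-Bn}, \qquad
        F^2(n) = F_0^2 \bigl( 1 + C\,e^{-Dn} \bigr), \qquad B,\,D > 0,
      \]

    where n labels the radial excitation and F(n) are the residues; the exponential terms die off for high excitations, restoring a linear trajectory.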

  1. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  2. Downward continuation of the free-air gravity anomalies to the ellipsoid using the gradient solution and terrain correction: An attempt of global numerical computations

    NASA Technical Reports Server (NTRS)

    Wang, Y. M.

    1989-01-01

    The formulas for the determination of the coefficients of the spherical harmonic expansion of the disturbing potential of the earth are defined for data given on a sphere. In order to determine the spherical harmonic coefficients, the gravity anomalies have to be analytically continued downward from the earth's surface to a sphere, or at least to the ellipsoid. The goal is to continue the gravity anomalies downward from the earth's surface to the ellipsoid using recent elevation models. The basic method for the downward continuation is the gradient solution (the $g_1$ term). The terrain correction was also computed because of the role it can play as a correction term when calculating harmonic coefficients from surface gravity data. The fast Fourier transform was applied in the computations.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Friedberg, Germany) for providing the PTW microDiamond detector for this research.
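
    The Alfonso-formalism factor itself is a ratio of dose-to-reading ratios between the clinical field and the machine-specific reference field; a one-line sketch with illustrative numbers, chosen only to reproduce the 4 mm result quoted above:

      def k_alfonso(D_clin, M_clin, D_msr, M_msr):
          """Output correction factor k for a clinical small field."""
          return (D_clin / M_clin) / (D_msr / M_msr)

      # a ~3.2% over-response in the 4 mm field gives k ~ 0.969
      print(round(k_alfonso(D_clin=1.0, M_clin=1.032, D_msr=1.0, M_msr=1.0), 3))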

  4. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections that are not simulated in the calculational model; and (3) quantities and distributions of material impurities that are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. It investigates the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: high (90%) plus medium (36%), medium (36%), or low (21%) enriched uranium fuel.

  5. When 95% Accurate Isn't: Exploring Bayes's Theorem

    ERIC Educational Resources Information Center

    CadwalladerOlsker, Todd D.

    2011-01-01

    Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…

  6. Author Correction: Biochemical phosphates observed using hyperpolarized 31P in physiological aqueous solutions.

    PubMed

    Nardi-Schreiber, Atara; Gamliel, Ayelet; Harris, Talia; Sapir, Gal; Sosna, Jacob; Gomori, J Moshe; Katz-Brull, Rachel

    2018-05-22

    The original version of the Supplementary Information associated with this Article contained an error in Supplementary Figure 2 and Supplementary Figure 5, in which the 31P NMR spectral lines were missing. The HTML has been updated to include a corrected version of the Supplementary Information.

  7. Use of Inappropriate and Inaccurate Conceptual Knowledge to Solve an Osmosis Problem.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    1995-01-01

    Presents correct solutions to an osmosis problem of two high school science students who relied on inaccurate and inappropriate conceptual knowledge. Identifies characteristics of the problem solvers, salient properties of the problem that could contribute to the problem misrepresentation, and spurious correct answers. (27 references) (Author/MKR)

  8. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system because the virtual inclinometer is incompatible with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without taking magnetic field effects into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and to test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analyses were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors, utilizing beam information from the ViewRay TPS, was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate for correcting measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

Frictional sliding and crack growth are the two main dissipation processes in quasi-brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main modeling difficulty is the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, against which to validate such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi-brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are examined. The first involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient and guarantees systematic numerical convergence.

  10. Bohm-criterion approximation versus optimal matched solution for a cylindrical probe in radial-motion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Din, Alif

    2016-08-15

The theory of positive-ion collection by a probe immersed in a low-pressure plasma was reviewed and extended by Allen et al. [Proc. Phys. Soc. 70, 297 (1957)]. The numerical computations for cylindrical and spherical probes in a sheath region were presented by F. F. Chen [J. Nucl. Energy C 7, 41 (1965)]. Here, in this paper, the sheath and presheath solutions for a cylindrical probe are matched through a numerical matching procedure to yield a “matched” potential profile, or “M solution.” The solution based on the Bohm-criterion approach, the “B solution,” is discussed for this particular problem. The cylindrical-probe characteristics obtained from the correct potential profile (M solution) and from the approximate Bohm-criterion approach differ, which raises questions about the correctness of cylindrical probe theories relying only on the Bohm-criterion approach. A comparison between theoretical and experimental ion current characteristics also shows that in an argon plasma the ion motion towards the probe is almost radial.

  11. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P_cav, the displacement correction factor, P_dis, the wall correction factor, P_wall, the stem correction factor, P_stem, the central electrode correction factor, P_cel, and the overall perturbation correction factor, P_Q. The calculated P_dis values for PTW30010/30013 chambers were 0.9967 ± 0.0017, 0.9983 ± 0.0019, and 0.9980 ± 0.0019 for 60Co, 4 MV, and 10 MV photon beams, respectively. The value for a 60Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P_dis values showed a substantial discrepancy compared with those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P_wall values ranged from 0.9994 ± 0.0020 to 1.0031 ± 0.0020 for PTW30010 and from 0.9961 ± 0.0018 to 0.9991 ± 0.0017 for PTW30011/30012 over the range 60Co-10 MV. The P_wall values for PTW30011/30012 were around 0.3% lower than those of IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P_stem values ranged from 0.9945 ± 0.0014 to 0.9965 ± 0.0014, but they are not considered in current dosimetry protocols; the values showed no significant dependence on beam quality. P_cel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those of IAEA TRS-398.
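
    As a minimal sketch of how such per-effect factors combine, the following assumes the overall factor is, to first order, the product of the individual factors, with relative uncertainties added in quadrature. P_dis, P_wall and P_stem use 60Co values quoted above; the P_cav and P_cel entries and their uncertainties are placeholders for illustration:

```python
import math

# Per-effect perturbation factors as (value, absolute uncertainty).
factors = {
    "P_cav":  (1.0000, 0.0015),   # placeholder value/uncertainty
    "P_dis":  (0.9967, 0.0017),   # 60Co value from the abstract
    "P_wall": (0.9994, 0.0020),   # PTW30010, lower end of quoted range
    "P_stem": (0.9945, 0.0014),   # lower end of quoted range
    "P_cel":  (1.0000, 0.0010),   # placeholder value/uncertainty
}

# Overall factor as the product; relative uncertainties in quadrature.
p_q = math.prod(value for value, _ in factors.values())
rel_u = math.sqrt(sum((u / v) ** 2 for v, u in factors.values()))
print(f"P_Q = {p_q:.4f} +/- {p_q * rel_u:.4f}")
```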

  12. Solution to the spectral filter problem of residual terrain modelling (RTM)

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon

    2018-06-01

    In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ˜ 44 mGal (0.5 mGal RMS) for the HF correction and ˜ 33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ˜ 26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (˜ 6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF correction are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g. in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain and geophysical interpretations of RTM-reduced gravity measurements.

  13. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
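
    The record is truncated, but the percentile interval it names is simple to illustrate. A generic case-resampling sketch (applied here to an ordinary correlation rather than to rotated factor loadings) follows:

```python
import numpy as np

# Bootstrap percentile confidence interval for a correlation:
# resample cases with replacement, recompute the statistic, and take
# the 2.5th and 97.5th percentiles of the bootstrap distribution.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)

boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, x.size, x.size)   # case resampling
    boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile interval: [{lo:.3f}, {hi:.3f}]")
```

    The bias-corrected and bias-corrected accelerated variants compared in the article adjust these percentile endpoints for median bias and skew of the bootstrap distribution.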

  14. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field formalism of Alfonso et al. were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin/M_Qmsr^fmsr with Monte Carlo (MC)-based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. PTW microDiamond corrections for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators, respectively, of the LGK Perfexion. Finally, the PTW microDiamond M_Qclin^fclin/M_Qmsr^fmsr for the linear accelerator varied from MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the small resulting correction factors, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
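
    In the Alfonso formalism, the detector correction factor is the reference output factor divided by the detector-measured reading ratio; a minimal sketch with hypothetical numbers:

```python
# Alfonso small-field formalism:
# k_{Qclin,Qmsr}^{fclin,fmsr} = Omega / (M_clin / M_msr),
# where Omega is the true (MC-based) field output factor and
# M_clin/M_msr is the detector's measured reading ratio.
def small_field_correction(omega: float, m_clin: float, m_msr: float) -> float:
    return omega / (m_clin / m_msr)

# Hypothetical example: a detector that over-responds by ~2% in the
# small field yields a correction factor of ~0.98.
print(small_field_correction(omega=0.700, m_clin=0.714, m_msr=1.000))
```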

  15. An experimental phantom study of the effect of gadolinium-based MR contrast agents on PET attenuation coefficients and PET quantification in PET-MR imaging: application to cardiac studies.

    PubMed

O'Doherty, Jim; Schleyer, Paul

    2017-12-01

Simultaneous cardiac perfusion studies are an increasing trend in PET-MR imaging. During dynamic PET imaging, the introduction of gadolinium-based MR contrast agents (GBCA) at high concentrations during a dual injection of GBCA and PET radiotracer may cause increased attenuation of the PET signal, and thus errors in the quantification of PET images. We thus aimed to calculate the change in linear attenuation coefficient (LAC) of a mixture of PET radiotracer and increasing concentrations of GBCA in solution and, furthermore, to investigate whether this change in LAC produced a measurable effect on the image-based PET activity concentration when attenuation corrected by three different AC strategies. We performed simultaneous PET-MR imaging of a phantom in a static scenario using a fixed activity of 40 MBq [18F]-NaF, water, and an increasing GBCA concentration from 0 to 66 mM (based on an assumed maximum possible concentration of GBCA in the left ventricle in a clinical study). This simulated a range of clinical concentrations of GBCA. We investigated two methods to calculate the LAC of the solution mixture at 511 keV: (1) a mathematical mixture rule and (2) CT imaging of each concentration step and subsequent conversion to LAC at 511 keV. This comparison showed that the ranges of LAC produced by both methods are equivalent, with an increase in the LAC of the mixed solution of approximately 2% over the range of 0-66 mM. We then applied three different attenuation correction methods to the PET data: (1) each PET scan at a specific millimolar concentration of GBCA corrected by its corresponding CT scan, (2) each PET scan corrected by a CT scan with no GBCA present (i.e., at 0 mM GBCA), and (3) a manually generated attenuation map, whereby all CT voxels in the phantom at 0 mM were replaced by LAC = 0.1 cm⁻¹. All attenuation correction methods (1-3) were accurate to the true measured activity concentration within 5%, and there were no trends in image-based activity concentrations upon increasing the GBCA concentration of the solution. The presence of a high GBCA concentration (representing a worst-case scenario in dynamic cardiac studies) in solution with PET radiotracer produces a minimal effect on attenuation-corrected PET quantification.
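
    The abstract does not spell out the mixture rule used; one common formulation is a volume-fraction-weighted sum of the component LACs, sketched below with illustrative numbers (water at 511 keV is ~0.096 /cm; the GBCA stock value is a hypothetical placeholder):

```python
# Volume-fraction mixture rule for the linear attenuation coefficient
# (LAC) of a two-component solution at 511 keV.
def lac_mixture(volume_fractions, lacs):
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(f * mu for f, mu in zip(volume_fractions, lacs))

mu_water = 0.096        # /cm, water at 511 keV
mu_gbca_stock = 0.150   # /cm, hypothetical stock-solution value

for frac in (0.00, 0.05, 0.13):   # GBCA volume fraction in the mixture
    mu = lac_mixture([1 - frac, frac], [mu_water, mu_gbca_stock])
    print(f"GBCA fraction {frac:4.2f}: mu = {mu:.4f} /cm")
```

    Even with the placeholder stock value, the weighted sum changes by only a few percent over clinically plausible fractions, consistent with the ~2% change reported above.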

  16. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for stereotactic body radiation therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung-substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a field size of 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison with the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factors obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the compositions of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies, where perturbation may be more pronounced.

  17. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  18. [Impact of non-adherence to pharmacological treatment on the quality and sustainability of healthcare. Focus on cardiovascular diseases].

    PubMed

    Pelliccia, Francesco; Romeo, Francesco

    2016-01-01

    Adherence to drug treatment is key to successful therapeutic intervention, especially in chronic conditions. This holds particularly true in the setting of cardiovascular diseases, because poor adherence may have serious adverse effects in terms of morbidity and mortality. Many factors may contribute to poor adherence, which can be either patient-related or dependent on the healthcare system, the physician and the environment. The identification and appropriate correction of these factors may result in both clinical and economic benefits. In this setting it is also important to assess the implications of the increasing use of generic or equivalent drugs on adherence to pharmacological therapy. This topic has recently been addressed by an important Expert Consensus Document, endorsed by the Italian Societies of Cardiovascular Disease and Prevention, which was published in the Giornale Italiano di Cardiologia. The document addressed the relevance of the problem, potential determinants and possible solutions.

  19. Stress intensity factors for long, deep surface flaws in plates under extensional fields

    NASA Technical Reports Server (NTRS)

    Harms, A. E.; Smith, C. W.

    1973-01-01

Using a singular solution for a part-circular crack, a Taylor Series Correction Method (TSCM) was verified for extracting stress intensity factors from photoelastic data. Photoelastic experiments were then conducted on plates with part-circular and flat-bottomed cracks for flaw depth to thickness ratios of 0.25, 0.50 and 0.75 and for equivalent flaw depth to equivalent ellipse length values ranging from 0.066 to 0.319. Experimental results agreed well with the Smith theory but indicated that the use of the "equivalent" semi-elliptical flaw results was not valid for a/2c less than 0.20. Best overall agreement for the moderate (a/t approximately 0.5) to deep flaws (a/t approximately 0.75) and a/2c greater than 0.15 was found with a semi-empirical theory, when compared on the basis of equivalent flaw depth and area.

  20. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is, errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected, and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error, but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution, but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and are unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods, and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  1. The inner filter effects and their correction in fluorescence spectra of salt marsh humic matter.

    PubMed

    Mendonça, Ana; Rocha, Ana C; Duarte, Armando C; Santos, Eduarda B H

    2013-07-25

The inner filter effects in synchronous fluorescence spectra (Δλ = 60 nm) of sedimentary humic substances from a salt marsh were studied. According to their type and the influence of plant colonization, these humic substances have different spectral features, and the inner filter effects act in a different manner. The fluorescence spectra of the humic substances from sediments with colonizing plants have a protein-like band (λexc = 280 nm) which is strongly affected by primary and secondary inner filter effects. These effects were also observed for the bands situated at longer wavelengths, i.e., at λexc = 350 nm and λexc = 454 nm for the fulvic acids (FA) and humic acids (HA), respectively. However, they are more important for the band at 280 nm, causing spectral distortions which can be clearly seen when the spectra of 40 mg L⁻¹ solutions of different samples (dissolved organic carbon, DOC ~ 20 mg L⁻¹) are compared with and without correction of the inner filter effects. The importance of the spectral distortions caused by inner filter effects was demonstrated in solutions containing a mixture of model compounds which represent the fluorophores detected in the spectra of sedimentary humic samples. The effectiveness of the mathematical correction of the inner filter effects in the spectra of those solutions and of solutions of sedimentary humic substances was studied. It was observed that inner filter effects in the sedimentary humic substances spectra can be mathematically corrected, allowing a linear relationship between fluorescence intensity and humic substances concentration to be obtained and preventing distortions at concentrations as high as 50 mg L⁻¹ which would otherwise obscure the protein-like band. Copyright © 2013 Elsevier B.V. All rights reserved.
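
    The abstract does not reproduce the correction formula; the standard absorbance-based inner filter correction often used for such data is sketched below (1 cm path and centered cell geometry assumed; the paper's exact procedure may differ in detail):

```python
import numpy as np

# Standard inner filter correction:
# F_corr = F_obs * 10**((A_ex + A_em) / 2), where A_ex and A_em are the
# solution absorbances at the excitation and emission wavelengths.
def correct_inner_filter(f_obs, a_ex, a_em):
    return f_obs * 10.0 ** ((a_ex + a_em) / 2.0)

f_obs = np.array([120.0, 95.0, 60.0])   # observed intensities (a.u.)
a_ex = np.array([0.20, 0.35, 0.60])     # absorbance at lambda_exc
a_em = np.array([0.10, 0.18, 0.30])     # absorbance at lambda_em
print(correct_inner_filter(f_obs, a_ex, a_em))
```

    The correction grows exponentially with absorbance, which is why it matters most for the strongly absorbed 280 nm band and at the higher concentrations mentioned above.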

  2. From bead to rod: Comparison of theories by measuring translational drag coefficients of micron-sized magnetic bead-chains in Stokes flow

    PubMed Central

    Lu, Chen; Zhao, Xiaodan; Kawamura, Ryo

    2017-01-01

Frictional drag force on an object in Stokes flow follows a linear relationship with the velocity of translation and a translational drag coefficient. This drag coefficient is related to the size, shape, and orientation of the object. For rod-like objects, analytical solutions for the drag coefficients have been proposed based on three rough approximations of the rod geometry, namely the bead model, the ellipsoid model, and the cylinder model. These theories all agree that the translational drag coefficients of rod-like objects are functions of the rod length and aspect ratio, but they differ from one another in the correction factor terms in the equations. By tracking the displacement of the particles through stationary fluids of calibrated viscosity in a magnetic tweezers setup, we experimentally measured the drag coefficients of micron-sized beads and their bead-chain formations with chain lengths of 2 to 27. We verified our methodology against analytical solutions for dimers of two touching beads and compared our measured drag coefficient values of rod-like objects with theoretical calculations. Our comparison reveals the analytical solutions that use more appropriate approximations and yield formulae that agree better with our measurements. PMID:29145447
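
    As an illustration of the cylinder-model form of such coefficients, the sketch below uses the Tirado-García de la Torre end-correction terms, one commonly cited parameterization (the paper compares several such models; the specific coefficients here are not taken from it):

```python
import math

# Cylinder-model translational drag coefficients for a rod of length L
# and diameter d in a fluid of viscosity eta (SI units):
#   xi_par  = 2*pi*eta*L / (ln p + gamma_par),
#   xi_perp = 4*pi*eta*L / (ln p + gamma_perp),  p = L/d.
def cylinder_drag(eta, length, diameter):
    p = length / diameter                       # aspect ratio
    gamma_par = -0.207 + 0.980 / p - 0.133 / p**2
    gamma_perp = 0.839 + 0.185 / p + 0.233 / p**2
    xi_par = 2.0 * math.pi * eta * length / (math.log(p) + gamma_par)
    xi_perp = 4.0 * math.pi * eta * length / (math.log(p) + gamma_perp)
    return xi_par, xi_perp

# A chain of ten 1-um beads in water (eta ~ 1 mPa s), modeled as a
# cylinder of length 10 um and diameter 1 um.
print(cylinder_drag(eta=1.0e-3, length=10e-6, diameter=1e-6))
```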

  3. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-01-01

A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.

  4. An improved error assessment for the GEM-T1 gravitational model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.

    1988-01-01

    Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.

  5. Constructions with Obstructions Involving Arcs.

    ERIC Educational Resources Information Center

    Wood, Dick A.

    1993-01-01

    Presents six construction problems in which key parts of the figure are made inaccessible, that is, a lake or an obstruction is inserted. Encourages creative thinking while improving problem-solving skills. Students are to show the construction, describe the solution, and verify correctness of the solution. (LDR)

  6. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. Methods that adjusted the wet-day frequencies, such as local intensity scaling, modified power transformation and distribution mapping, performed better than the methods that did not. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and, consequently, the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
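
    A minimal sketch of the windowing idea, using simple multiplicative scaling: for each calendar day, a factor is derived from data pooled over a window of neighboring days across all years, rather than one factor per month. The ±15-day half-width and the synthetic gamma-distributed rainfall are assumptions for illustration; the paper applies the window within five different correction methods:

```python
import numpy as np

def daily_factors(obs, mod, doy, half_window=15):
    """Per-calendar-day scaling factors from a sliding +/-15-day window.

    obs, mod: daily rainfall series; doy: day of year (1..365) per sample.
    """
    factors = np.ones(366)
    for d in range(1, 366):
        # Circular distance in days so the window wraps around new year.
        dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
        sel = dist <= half_window
        if mod[sel].sum() > 0:
            factors[d] = obs[sel].sum() / mod[sel].sum()
    return factors

rng = np.random.default_rng(1)
doy = np.tile(np.arange(1, 366), 20)          # 20 years of daily data
obs = rng.gamma(0.6, 4.0, doy.size)           # synthetic "observed" rainfall
mod = 1.3 * rng.gamma(0.6, 4.0, doy.size)     # wet-biased "model" rainfall

f = daily_factors(obs, mod, doy)
corrected = mod * f[doy]                      # bias-corrected daily series
```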

  7. Correction factor in temperature measurements by optoelectronic systems

    NASA Astrophysics Data System (ADS)

    Bikberdina, N.; Yunusov, R.; Boronenko, M.; Gulyaev, P.

    2017-11-01

High-temperature, fast-moving microobjects are often investigated, and measuring their temperature requires optoelectronic measuring systems. Optoelectronic systems are always calibrated against a stationary blackbody. One of the problems of pyrometry is that this calibration cannot be used to measure the temperature of moving objects. Two solutions were proposed in [1]; this article outlines the first results of their validation [2]. An experimentally justified correction coefficient is introduced that accounts for the influence of an object's motion on the reduction of the photosensor video signal in the charge-accumulation regime. The study was partially supported by RFBR in the framework of research project № 15-42-00106.

  8. Pathway Evidence of How Musical Perception Predicts Word-Level Reading Ability in Children with Reading Difficulties

    PubMed Central

    Cogo-Moreira, Hugo; Brandão de Ávila, Clara Regina; Ploubidis, George B.; de Jesus Mari, Jair

    2013-01-01

    Objective To investigate whether specific domains of musical perception (temporal and melodic domains) predict the word-level reading skills of eight- to ten-year-old children (n = 235) with reading difficulties, normal quotient of intelligence, and no previous exposure to music education classes. Method A general-specific solution of the Montreal Battery of Evaluation of Amusia (MBEA), which underlies a musical perception construct and is constituted by three latent factors (the general, temporal, and the melodic domain), was regressed on word-level reading skills (rate of correct isolated words/non-words read per minute). Results General and melodic latent domains predicted word-level reading skills. PMID:24358358

  9. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard-field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple, fast method to determine an upper limit on the contribution of composite-field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  10. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  11. Phaser.MRage: automated molecular replacement

    PubMed Central

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.

    2013-01-01

    Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement. PMID:24189240

  12. Black holes thermodynamics in a new kind of noncommutative geometry

    NASA Astrophysics Data System (ADS)

    Faizal, Mir; Amorim, R. G. G.; Ulhoa, S. C.

Motivated by the energy-dependent metric in gravity’s rainbow, we will propose a new kind of energy-dependent noncommutative geometry. It will be demonstrated that, like gravity’s rainbow, this new noncommutative geometry is described by an energy-dependent metric. We will analyze the effect of this noncommutative deformation on Schwarzschild black holes and Kerr black holes. We will perform our analysis by relating the commutative and this new energy-dependent noncommutative metrics using an energy-dependent Moyal star product. We will also analyze the thermodynamics of these new noncommutative black hole solutions. We will explicitly derive expressions for the corrected entropy and temperature for these black hole solutions. It will be demonstrated that, for these deformed solutions, black hole remnants cannot form. This is because these corrections increase rather than reduce the temperature of the black holes.

  13. Comment on: Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonis, Antonios; Zhang, Xiaoguang

    2012-01-01

This is a comment on the paper by Aftab Alam, Brian G. Wilson, and D. D. Johnson [1], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling, charge densities. We point out that the problem considered by the authors can be simply avoided by performing certain integrals in a particular order, while their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple scattering theory. We also point out a flaw in their line of reasoning leading to the expression for the potential inside the bounding sphere of a cell that makes it inapplicable to certain geometries.

  14. Phaser.MRage: automated molecular replacement.

    PubMed

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J; Oeffner, Robert D; Adams, Paul D; Read, Randy J

    2013-11-01

    Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.

  15. Statistical Analyses of Hydrophobic Interactions: A Mini-Review

    DOE PAGES

    Pratt, Lawrence R.; Chaudhari, Mangesh I.; Rempe, Susan B.

    2016-07-14

This review focuses on the striking recent progress in solving for hydrophobic interactions between small inert molecules. We discuss several new understandings. First, the inverse temperature phenomenology of hydrophobic interactions, i.e., strengthening of hydrophobic bonds with increasing temperature, is decisively exhibited by hydrophobic interactions between atomic-scale hard sphere solutes in water. Second, inclusion of attractive interactions associated with atomic-size hydrophobic reference cases leads to substantial, nontrivial corrections to reference results for purely repulsive solutes. Hydrophobic bonds are weakened by adding solute dispersion forces to the treatment of reference cases. The classic statistical mechanical theory for those corrections is not accurate in this application, but molecular quasi-chemical theory shows promise. Lastly, because of the masking roles of excluded volume and attractive interactions, comparisons that do not discriminate the different possibilities face an interpretive danger.

  16. Comment on ``Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited''

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Zhang, X.-G.

    2012-09-01

This is a Comment on the paper by Alam, Wilson, and Johnson [Phys. Rev. B 84, 205106 (2011)], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling charge densities. We point out that the problem considered by the authors can be simply avoided by means of performing certain integrals in a particular order, whereas their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple-scattering theory. We also point out a flaw in their line of reasoning, leading to the expression for the potential inside the bounding sphere of a cell that makes it inapplicable for certain geometries.

  17. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation, which combines a radiative transfer calculation with a priori information about aerosols and about NO2 profiles (shape factors), usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude between boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when the cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (the shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape-factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
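
    A minimal sketch of the layer-wise AMF calculation: the AMF is the scattering-weight-weighted mean of the a priori partial columns (the shape factor), so concentrating NO2 near the surface, where sensitivity is low, lowers the AMF. The profiles and weights below are idealized stand-ins, not GEOS-Chem or LIDORT output:

```python
import numpy as np

# AMF = sum_l(w_l * x_l) / sum_l(x_l), with scattering weights w_l and
# a priori NO2 partial columns x_l per vertical layer.
def air_mass_factor(scattering_weights, partial_columns):
    w = np.asarray(scattering_weights)
    x = np.asarray(partial_columns)
    return np.sum(w * x) / np.sum(x)

z = np.linspace(0.0, 10.0, 11)           # layer altitudes (km)
weights = 0.3 + 0.7 * z / 10.0           # reduced sensitivity near surface
background = np.exp(-z / 4.0)            # broad NO2 profile
fire = np.exp(-z / 1.0)                  # fire NO2 concentrated near surface

print(air_mass_factor(weights, background))  # larger AMF
print(air_mass_factor(weights, fire))        # smaller AMF
```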

  18. A strategy for recovery: Report of the HST Strategy Panel

    NASA Technical Reports Server (NTRS)

    Brown, R. A. (Editor); Ford, H. C. (Editor)

    1991-01-01

The panel met to identify and assess strategies for recovering the Hubble Space Telescope (HST) capabilities degraded by spherical aberration. The panel's findings and recommendations to correct the problem with HST are given. The optical solution is a pair of mirrors for each science instrument field of view. The Corrective Optics Space Telescope Axial Replacement (COSTAR) is the proposed device to carry and deploy the corrective optics. A 1993 servicing mission is planned.

  19. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

These functions include peak shaving, conducting power factor correction, matching critical load to the most efficient distributed resource, and islanding a system during... photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  20. Model-based MPC enables curvilinear ILT using either VSB or multi-beam mask writers

    NASA Astrophysics Data System (ADS)

    Pang, Linyong; Takatsukasa, Yutetsu; Hara, Daisuke; Pomerantsev, Michael; Su, Bo; Fujimura, Aki

    2017-07-01

Inverse Lithography Technology (ILT) is becoming the choice for Optical Proximity Correction (OPC) of advanced technology nodes in IC design and production. Multi-beam mask writers promise significant mask writing time reduction for complex ILT-style masks. Before multi-beam mask writers become the mainstream working tools in mask production, VSB writers will continue to be the tool of choice to write both curvilinear ILT and Manhattanized ILT masks. To enable VSB mask writers for complex ILT-style masks, model-based mask process correction (MB-MPC) is required to do the following: (1) make reasonable corrections for complex edges for those features that exhibit relatively large deviations from both curvilinear ILT and Manhattanized ILT designs; (2) control and manage both edge placement errors (EPE) and shot count; and (3) assist in easing the migration to future multi-beam mask writers and serve as an effective backup solution during the transition. In this paper, a solution meeting all those requirements, MB-MPC with GPU acceleration, will be presented. One model calibration per process allows accurate correction regardless of the target mask writer.

  1. International comparison of activity measurements of a solution of 75Se

    NASA Astrophysics Data System (ADS)

    Ratel, Guy

    2002-04-01

    Activity measurements of a solution of 75Se, supplied by the BIPM, have been carried out by 21 laboratories within the framework of an international comparison. Seven different methods were used. Details on source preparation, experimental facilities and counting data are reported. The measured activity-concentration values show a total spread of 6.62% before correction and 6.02% after correction for delayed events, with standard deviations of the unweighted means of 0.45% and 0.36%, respectively. The correction for delayed events was measured directly by four laboratories. Unfortunately no consensus on the activity value could be deduced from their results. The results of the comparison have been entered in the tables of the International Reference System (SIR) for γ-ray emitting radionuclides. The half-life of the metastable state was also determined by two laboratories and found to be in good agreement with the values found in the literature.

  2. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2014-02-01

In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of the correction model to the gradient components locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with the optics. The estimated bias field is subtracted from the raw infrared imagery to compensate for the intensity variations caused by the optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.

  3. Computation of a spectrum from a single-beam fourier-transform infrared interferogram.

    PubMed

    Ben-David, Avishai; Ifarraguerri, Agustin

    2002-02-20

A new high-accuracy method has been developed to transform asymmetric single-sided interferograms into spectra. We used a fraction (a short, double-sided portion) of the recorded interferogram and applied an iterative correction to the complete recorded interferogram for the linear part of the phase induced by the various optical elements. Iterative phase correction enhanced the symmetry of the recorded interferogram. We constructed a symmetric double-sided interferogram and followed the Mertz procedure [Infrared Phys. 7, 17 (1967)] but with symmetric apodization windows and with a nonlinear phase correction deduced from this double-sided interferogram. In comparing the solution spectrum with the source spectrum, we applied the Rayleigh resolution criterion with a Gaussian instrument line shape. The accuracy of the solution is excellent, ranging from better than 0.1% for a blackbody spectrum to a few percent for a complicated atmospheric radiance spectrum.
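
    A heavily simplified numpy sketch of Mertz-style phase correction: a short double-sided region around the centerburst yields a low-resolution phase estimate, which is then used to rotate the full spectrum onto the real axis. The apodization and interpolation details here are assumptions, much simplified relative to the iterative procedure described above:

```python
import numpy as np

def mertz_correct(interferogram, zpd, half_width=256):
    # 1. Low-resolution phase from the short double-sided portion.
    short = interferogram[zpd - half_width: zpd + half_width]
    short = short * np.hanning(short.size)        # symmetric apodization
    phase = np.angle(np.fft.rfft(np.fft.fftshift(short)))
    # 2. FFT of the full interferogram, rotated so ZPD sits at index 0.
    full = np.fft.rfft(np.roll(interferogram, -zpd))
    # 3. Interpolate the phase onto the full grid and rotate it out.
    grid_lo = np.linspace(0.0, 1.0, phase.size)
    grid_hi = np.linspace(0.0, 1.0, full.size)
    phase_hi = np.interp(grid_hi, grid_lo, np.unwrap(phase))
    return np.real(full * np.exp(-1j * phase_hi))

# Synthetic example: a two-band interferogram with a constant phase error.
x = np.arange(4096)
ifg = (np.cos(2 * np.pi * 0.05 * (x - 2048) + 0.3)
       + 0.5 * np.cos(2 * np.pi * 0.11 * (x - 2048) + 0.3))
spectrum = mertz_correct(ifg, zpd=2048)
```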

  4. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE PAGES

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...

    2017-09-07

Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  5. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.

Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  6. Determination of the unsulfonated color concentration from D&C Yellow No. 10 by the derivative spectrophotometry

    NASA Astrophysics Data System (ADS)

    Berdie, A. D.; Jitian, S.

    2018-01-01

The method we used is based on measuring the first derivative of the mixture of the two colorants at the wavelength at which one of them has a first derivative equal to zero. The Code of Federal Regulations (21 CFR 74.1710) specifies for D&C Yellow No. 10 the maximum permitted levels of an unsulfonated subsidiary color and of diethyl-ether-soluble matter other than that specified. In the proposed method, a color additive sample is dissolved in water and the unsulfonated subsidiary colors are extracted from this solution with dichloromethane. The analytes in the dichloromethane solution are determined by spectrophotometry. The unsulfonated subsidiary colors determined are D&C Yellow No. 11 [2-(2-Quinolinyl)-1H-indene-1,3(2H)-dione] (Y11), from which D&C Yellow No. 10 is manufactured by sulfonation, and 1,5-naphthyridinequinophthalone (1,5-NQ). Another compound soluble in water and dichloromethane (here called S) is present in the dichloromethane solution after extraction, together with the other two colors, and can affect the correct determination of the concentrations. The dichloromethane-soluble matter other than that specified is a mixture consisting mostly of chlorinated derivatives of the unsulfonated subsidiary color. Because the S color is present both in the aqueous and in the dichloromethane solutions, the spectra of the calibration solutions should be corrected. The applied correction does not affect the determination of the unsulfonated subsidiary color concentrations. D&C Yellow No. 11 and 1,5-NQ are used as standards for the unsulfonated subsidiary colors.
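
    A minimal sketch of the zero-crossing idea: the mixture's first derivative is read at the wavelength where the interfering component's derivative is zero, so the reading there is proportional to the other component alone. The Gaussian bands and all numbers below are illustrative stand-ins, not the actual calibration data for these colorants:

```python
import numpy as np

wl = np.linspace(350.0, 500.0, 1501)          # wavelength grid (nm)

def band(c, mu, s):
    """Gaussian absorption band of height c centered at mu (nm)."""
    return c * np.exp(-0.5 * ((wl - mu) / s) ** 2)

a_y11 = band(1.0, 420.0, 15.0)                # stand-in for Y11
a_nq = band(0.6, 445.0, 18.0)                 # stand-in for 1,5-NQ
d_y11 = np.gradient(a_y11, wl)
d_nq = np.gradient(a_nq, wl)

# The interferent's first derivative crosses zero at its band maximum.
i0 = 800 + np.argmin(np.abs(d_nq[800:1000]))  # search 430-450 nm

d_mix = np.gradient(a_y11 + a_nq, wl)
# At wl[i0] the mixture derivative equals the Y11 derivative alone,
# so a calibration line built at this wavelength is interferent-free.
print(wl[i0], d_mix[i0], d_y11[i0])
```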

  7. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

Purpose: The National Institute of Standards and Technology (NIST) uses three free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  8. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during follow-up, or on the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. Follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during follow-up. Thoracic curves in group B deteriorated after spontaneous correction, with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately, a spontaneous correction of 48.5%. At final follow-up it was 14°, a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while 22 cases in group B showed progression of the thoracic curve. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during follow-up, and close attention must be paid to these patients to guard against decompensation. Level of Evidence: 4 PMID:27831989

  9. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons of correction factors are made between constant-property and variable-property cases. It is shown that the variable-property effects on the heat flux correction factors become significant.

  10. Nonergodicity of microfine binary systems

    NASA Astrophysics Data System (ADS)

    Son, L. D.; Sidorov, V. E.; Popel', P. S.; Shul'gin, D. B.

    2016-02-01

    The correction to the equation of state that is related to the nonergodicity of diffusion dynamics is discussed for a binary solid solution with a limited solubility. It is asserted that, apart from standard thermodynamic variables (temperature, volume, concentration), this correction should be taken into account in the form of the average local chemical potential fluctuations associated with microheterogeneity in order to plot a phase diagram. It is shown that a low value of this correction lowers the miscibility gap and that this gap splits when this correction increases. This situation is discussed for eutectic systems and Ga-Pb, Fe-Cu, and Cu-Zr alloys.

  11. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity that are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are only weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and of relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.
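
    The published factors are polynomials in local RH; as a minimal sketch of how such a correction could be applied in a model pre-processor, the snippet below rescales diameters referenced to the size at formation. The polynomial coefficients, function names and demo sizes are placeholders, not the values derived in the paper.

        import numpy as np

        # Hypothetical polynomial correction factor in local relative humidity,
        # applied to particle diameters referenced to the size at formation.
        # Coefficients c0..c3 are placeholders, NOT the paper's fitted values.
        POLY_COEFFS = [0.52, 1.37, -1.91, 1.02]

        def size_correction_factor(rh):
            rh = np.asarray(rh, dtype=float)
            if np.any((rh < 0.45) | (rh > 0.99)):
                raise ValueError("stated validity range is 0.45 <= RH <= 0.99")
            return sum(c * rh**i for i, c in enumerate(POLY_COEFFS))

        d_formation = np.array([0.5, 1.0, 5.0])   # um, sizes at formation (demo values)
        d_ambient = d_formation * size_correction_factor(0.80)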

  12. Corrections to the thin wall approximation in general relativity

    NASA Technical Reports Server (NTRS)

    Garfinkle, David; Gregory, Ruth

    1989-01-01

    The question is considered whether the thin wall formalism of Israel applies to the gravitating domain walls of a λφ⁴ theory. The coupled Einstein-scalar equations that describe the thick gravitating wall are expanded in powers of the thickness of the wall. The solutions of the zeroth order equations reproduce the results of the usual Israel thin wall approximation for domain walls. The solutions of the first order equations provide corrections to the expressions for the stress-energy of the wall and to the Israel thin wall equations. The modified thin wall equations are then used to treat the motion of spherical and planar domain walls.

  13. Risk factors for postoperative intraretinal cystoid changes after peeling of idiopathic epiretinal membranes among patients randomized for balanced salt solution and air-tamponade.

    PubMed

    Leisser, Christoph; Hirnschall, Nino; Hackl, Christoph; Döller, Birgit; Varsits, Ralph; Ullrich, Marlies; Kefer, Katharina; Karl, Rigal; Findl, Oliver

    2018-02-20

    Epiretinal membranes (ERM) are macular disorders leading to loss of vision and metamorphopsia. Vitrectomy with membrane peeling remains the gold standard of care. The aim of this study was to assess risk factors for postoperative intraretinal cystoid changes in a study population randomized to balanced salt solution (BSS) or air-tamponade at the end of surgery. A prospective randomized study including 69 eyes with idiopathic ERM was conducted. Standard 23-gauge three-port pars plana vitrectomy with membrane peeling, using intraoperative optical coherence tomography (OCT), was performed. Randomization to BSS or air-tamponade was performed prior to surgery. Best-corrected visual acuity improved from 32.9 letters to 45.1 letters 3 months after surgery. Presence of preoperative intraretinal cystoid changes was found to be the only risk factor for presence of postoperative intraretinal cystoid changes 3 months after surgery (p = 0.01; odds ratio: 8.0). Other possible risk factors such as combined phacoemulsification with 23G-ppv and membrane peeling (p = 0.16; odds ratio: 2.4), intraoperative subfoveal hyporeflective zones (p = 0.23; odds ratio: 2.6), age over 70 years (p = 0.29; odds ratio: 0.5) and air-tamponade (p = 0.59; odds ratio: 1.5) were not found to be significant. There is strong evidence that preoperative intraretinal cystoid changes lead to a smaller benefit from surgery. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  14. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm has been developed for computing the ellipsoidal terrain correction based on the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection of the surface of a reference ellipsoid. As a first step, the mapping of points on the surface of the reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on that surface is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the gravitational intensity vector of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area projection, and from the transformed area elements Cartesian mass elements with the same height as the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of an ellipsoidal mass element and of the corresponding Cartesian mass element in the cylindrical equal-area projection is of the order of 1.6 × 10⁻⁸ m²/s² for a mass element with a cross-section of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, the difference is less than 1.5 × 10⁻⁴ m²/s². The numerical results indicate that a new method for computing the terrain correction has been achieved, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates yet with the accuracy of the ellipsoidal terrain correction. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and at the same time the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.
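
    For reference, the Newton integral underlying both computations is the standard volume integral for the gravitational potential, written here in its generic Cartesian form rather than in the paper's notation:

        V(P) = G \int_{\Omega} \frac{\rho(Q)}{\ell(P,Q)} \, \mathrm{d}\Omega_Q ,
        \qquad
        \ell(P,Q) = \sqrt{(x_P - x_Q)^2 + (y_P - y_Q)^2 + (z_P - z_Q)^2} ,

    where G is the gravitational constant, ρ the mass density, and ℓ the Euclidean distance between the computation point P and the running integration point Q. The closed-form solutions discussed above evaluate this integral analytically over prism-shaped (Cartesian) or ellipsoidal mass elements.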

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Monroy, J.A., E-mail: antosan@gmail.com; Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co; Centro Internacional de Fisica, Bogota D.C.

    In the context of a semiclassical approach where vectorial gauge fields can be considered as classical fields, we obtain exact static solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time, for the cases n=1,2,3. As an application of the results obtained for the case n=3, we consider the solutions for the anti-de Sitter and Schwarzschild metrics. We show that these solutions have a confining behavior and can be considered as a first step in the study of the corrections to the spectra of quarkonia in a curved background. Since the solutions that we find in this work are valid also for the group U(1), the case n=2 is a description of (2+1)-dimensional electrodynamics in the presence of a point charge. For this case, the solution has a confining behavior and can be considered as an application of planar electrodynamics in a curved space-time. Finally we find that the solution for the case n=1 is invariant under a parity transformation and has the form of a linear confining solution. Highlights: • We study exact static confining solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time. • The solutions found are a first step in the study of the corrections to the spectra of quarkonia in a curved background. • An expression for the confinement potential in low dimensionality is found.

  16. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  17. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P_TP, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically used seeds (IsoAid ADVANTAGE™ 103Pd and Nucletron selectSeed 125I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pressurized or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P_TP, were acquired for each seed at these pressures and normalized to the reading at 'standard' pressure (1013.25 mbar). Results: Measurements in this study have shown that use of P_TP can overcompensate in the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seed, respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ 103Pd and Nucletron selectSeed 125I) with the HDR 1000 Plus well chamber.
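
    For context, the temperature-pressure correction at issue has the familiar form used for vented ionization chambers (stated here from general dosimetry practice, with reference conditions of 22.0 °C and 101.33 kPa; the abstract itself does not restate it):

        P_{TP} = \frac{273.2 + T}{273.2 + 22.0} \cdot \frac{101.33}{P} ,

    with the temperature T in °C and the pressure P in kPa. The factor rescales the chamber reading to reference air density, which is exactly the step that over-corrects for low-energy photon emitters at reduced pressure.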

  18. A national physician survey of diagnostic error in paediatrics.

    PubMed

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions of the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  19. Optical solutions for unbundled access network

    NASA Astrophysics Data System (ADS)

    Bacîş Vasile, Irina Bristena

    2015-02-01

    The unbundling technique requires finding solutions that guarantee the economic and technical performance demanded by the nature of the services to be offered. One possible solution is the optical one; choosing this solution is justified for the following reasons: it optimizes the use of the access network, which is the most expensive part of a network (about 50% of the total investment in telecommunications networks) while also being the least used (telephone traffic on the lines has a low cost); it increases the distance between the master station/central office and the subscriber's terminal; and the development of the services offered to subscribers is conditioned by the subscriber network. Broadband services require support for the introduction of high-speed transport. A proper identification of the factors that must be satisfied and a comprehensive financial evaluation of all resources involved, both those being purchased and extensions, are the main conditions leading to a correct choice. As there is no single optimal technology for all development scenarios that can take into account all access systems, a successful implementation is always achieved through individual, particularized scenarios. The method used today for selecting an optimal solution is based on statistics and analysis of the various solutions already implemented and on the experience already gained; the main and most unbiased evaluation criterion is the ratio between the cost of the investment and the quality of service, while serving as large a number of customers as possible.

  20. Data Quality Control: Challenges, Methods, and Solutions from an Eco-Hydrologic Instrumentation Network

    NASA Astrophysics Data System (ADS)

    Eiriksson, D.; Jones, A. S.; Horsburgh, J. S.; Cox, C.; Dastrup, D.

    2017-12-01

    Over the past few decades, advances in electronic dataloggers and in situ sensor technology have revolutionized our ability to monitor air, soil, and water to address questions in the environmental sciences. The increased spatial and temporal resolution of in situ data is alluring. However, an often overlooked aspect of these advances is the challenge data managers and technicians face in performing quality control on millions of data points collected every year. While there is general agreement that high quantities of data offer little value unless the data are of high quality, it is commonly understood that despite efforts toward quality assurance, environmental data collection occasionally goes wrong. After identifying erroneous data, data managers and technicians must determine whether to flag, delete, leave unaltered, or retroactively correct suspect data. While individual instrumentation networks often develop their own QA/QC procedures, there is a scarcity of consensus and literature regarding specific solutions and methods for correcting data. This may be because back-correction efforts are time consuming, so suspect data are often simply abandoned. Correction techniques are also rarely reported in the literature, likely because corrections are often performed by technicians rather than the researchers who write the scientific papers. Details of correction procedures are often glossed over as a minor component of data collection and processing. To help address this disconnect, we present case studies of quality control challenges, solutions, and lessons learned from a large-scale, multi-watershed environmental observatory in Northern Utah that monitors Gradients Along Mountain to Urban Transitions (GAMUT). The GAMUT network consists of over 40 individual climate, water quality, and storm drain monitoring stations that have collected more than 200 million unique data points in four years of operation. In all of our examples, we emphasize that scientists should remain skeptical and seek independent verification of sensor data, even for sensors purchased from trusted manufacturers.
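
    Automated screening typically precedes the flag/delete/correct decision described above. The sketch below shows two generic checks of that kind, a range check and a spike check; the variable, thresholds, and demo series are hypothetical, not GAMUT's operational settings.

        import numpy as np

        # Generic automated QC: flag readings outside a plausible physical range,
        # then flag abrupt jumps between consecutive readings. Thresholds and the
        # demo series below are hypothetical, not GAMUT's operational settings.
        def qc_flags(values, lo=-5.0, hi=40.0, max_step=5.0):
            values = np.asarray(values, dtype=float)
            flags = np.full(values.shape, "ok", dtype=object)
            flags[(values < lo) | (values > hi)] = "range"
            step = np.abs(np.diff(values, prepend=values[0]))
            flags[(step > max_step) & (flags == "ok")] = "spike"
            return flags

        water_temp_c = [12.1, 12.3, 12.2, 31.9, 12.4, -9.8]   # hypothetical demo series
        print(qc_flags(water_temp_c))   # ['ok' 'ok' 'ok' 'spike' 'ok' 'range']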

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

    Here we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods, applied specifically to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform a three-dimensional factorization of the three-dimensional TAG mass spectral data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single-compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
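
    As a minimal sketch of the binning step under assumed array shapes (spectra on a regular scan grid, one three-dimensional array per study), the snippet below sums mass spectra within evenly spaced retention-time bins and flattens the (bin, m/z) axes into the column dimension of a PMF-ready matrix. All shapes and the synthetic input are placeholders, not TAG data formats.

        import numpy as np

        # Sum mass spectra within evenly spaced retention-time bins and flatten
        # (bin, m/z) into the column dimension of a PMF input matrix.
        def bin_chromatograms(scans, n_bins):
            # scans: (n_samples, n_scans, n_mz) spectra on a regular scan grid
            n_samples, n_scans, n_mz = scans.shape
            edges = np.linspace(0, n_scans, n_bins + 1).astype(int)
            binned = np.stack(
                [scans[:, edges[i]:edges[i + 1], :].sum(axis=1) for i in range(n_bins)],
                axis=1,
            )                                    # (n_samples, n_bins, n_mz)
            return binned.reshape(n_samples, n_bins * n_mz)

        X = bin_chromatograms(np.random.rand(50, 3000, 400), n_bins=60)  # synthetic demo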

  2. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading-order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections lead to a large correction to the form factors, which makes the branching ratios B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  3. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  4. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  5. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  6. A new class of relativistic stellar models

    NASA Astrophysics Data System (ADS)

    Haggag, Salah

    1995-03-01

    Einstein field equations for a static and spherically symmetric perfect fluid are considered. A formulation given by Patino and Rago is used to obtain a class of nine solutions; two of them are the Tolman solutions I and IV, and the remaining seven are new. The solutions are the correct ones corresponding to expressions derived by Patino and Rago, which have been shown by Knutsen to be incorrect. Like Tolman solution IV, each of the new solutions satisfies the energy conditions inside a sphere in some range of two independent parameters. Moreover, each solution can be matched to the exterior Schwarzschild solution at a boundary where the pressure vanishes, and thus the solutions constitute a class of new physically reasonable stellar models.

  7. An approximate JKR solution for a general contact, including rough contacts

    NASA Astrophysics Data System (ADS)

    Ciavarella, M.

    2018-05-01

    In the present note, we suggest a simple closed-form approximate solution to the adhesive contact problem in the so-called JKR regime. The derivation generalizes the original JKR energetic derivation, computing the strain energy in adhesiveless contact and unloading at constant contact area. The underlying assumption is that the contact area distributions are the same as under adhesiveless conditions (for an appropriately increased normal load), so that in general the stress intensity factors will not be exactly equal at all contact edges. The solution is simply that the indentation is δ = δ₁ − √(2wA′/P″), where w is the surface energy, δ₁ is the adhesiveless indentation, A′ is the first derivative of the contact area, and P″ is the second derivative of the load with respect to δ₁. The solution requires only macroscopic quantities, not elaborate local distributions; it is exact in many configurations such as axisymmetric contacts, but also sinusoidal wave contacts, and it correctly predicts some features of an ideal asperity model used as a test case rather than as a real description of a rough contact problem. The solution therefore permits an estimate of the full solution for elastic rough solids with Gaussian multiple scales of roughness, which was so far lacking, using known adhesiveless results. The result turns out to depend only on the rms amplitude and slopes of the surface; since in the fractal limit the slopes grow without bound, it tends to the adhesiveless result, although in this limit the JKR model is inappropriate. The solution also approaches the adhesiveless result for large rms amplitude of roughness h_rms, irrespective of the small-scale details, in agreement with common sense, well-known experiments and previous models by the author.

  8. Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet

    PubMed Central

    Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo

    2016-01-01

    Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766
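
    LS-SVM regression is closely related to kernel ridge regression; as a minimal sketch of the paper's idea of learning the mapping between footholds and joint angles, the snippet below uses scikit-learn's KernelRidge as a stand-in for an LS-SVM. The data, dimensions and hyperparameters are placeholders, not the robot's actual kinematics.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        # Learn an inverse-kinematics-style mapping from ideal foothold positions
        # to joint angle vectors. Kernel ridge regression stands in for the LS-SVM;
        # training pairs here are synthetic placeholders, not measured robot data.
        footholds = np.random.uniform(-0.1, 0.1, size=(200, 3))     # (x, y, z) in m
        joint_angles = np.random.uniform(-0.5, 0.5, size=(200, 3))  # joint vector in rad

        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=50.0)
        model.fit(footholds, joint_angles)
        theta = model.predict([[0.02, -0.01, 0.0]])  # joint estimate for one foothold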

  9. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend; and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
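
    The structure of such a spectral correction factor can be sketched as a ratio of spectrum-folded quantities: the reference dose from fluence-to-dose conversion coefficients over the dose the calibrated instrument would indicate. All arrays below are placeholders; real work uses tabulated h*(10)(E), the measured response function, and the IAEA-TRS-403 spectra.

        import numpy as np

        # Fold a workplace neutron spectrum with (a) fluence-to-ambient-dose
        # conversion coefficients and (b) the dose meter's Cf-calibrated response,
        # then take the ratio. Every array here is a placeholder shape/value.
        E = np.logspace(-8, 2, 200)                 # energy grid (MeV), placeholder
        phi = np.exp(-(np.log(E)) ** 2)             # placeholder spectrum per unit ln(E)
        h_star = np.interp(np.log(E), [-19, 5], [10.0, 500.0])    # placeholder h*(10)(E)
        response = np.interp(np.log(E), [-19, 5], [8.0, 120.0])   # placeholder response

        reference_dose = np.trapz(phi * h_star, np.log(E))
        indicated_dose = np.trapz(phi * response, np.log(E))
        correction_factor = reference_dose / indicated_dose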

  10. Pogo summary report main propulsion test static firings 1-7 for shuttle development flight instrumentation

    NASA Technical Reports Server (NTRS)

    Haddick, C. M., Jr.

    1980-01-01

    Problems concerning the shuttle main propulsion system Polar Orbit Geophysical Observatory (POGO) instrumentation and the actions taken to correct them are summarized. Investigations and analyses appear to be providing solutions to correct the majority of questionable measurements. Corrective action in the handling of cables and connectors should increase the POGO measurement quality. Unacceptable levels of very low frequency noise and data level shifts may be related to test stand grounding configuration, but further investigation is required.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bena, Iosif; Kraus, Per; Warner, Nicholas P.

    We construct the most generic three-charge, three-dipole-charge, BPS black-ring solutions in a Taub-NUT background. These solutions depend on seven charges and six moduli, and interpolate between a four-dimensional black hole and a five-dimensional black ring. They are also instrumental in determining the correct microscopic description of the five-dimensional BPS black rings.

  12. Aperiodicity Correction for Rotor Tip Vortex Measurements

    DTIC Science & Technology

    2011-05-01

    where α = 1.25643. The Iversen and the transitional models are not closed-form solutions but are formulated as solutions to an ordinary differential equation.

  13. Survey of Technical Preventative Measures to Reduce Whole-Body Vibration Effects when Designing Mobile Machinery

    NASA Astrophysics Data System (ADS)

    DONATI, P.

    2002-05-01

    Engineering solutions to minimize the effects of vibrating mobile machinery on operators can be conveniently grouped into three areas: (1) reduction of vibration at source, by improvement of the quality of terrain, careful selection of vehicle or machine, correct loading, proper maintenance, etc.; (2) reduction of vibration transmission, by incorporating suspension systems (tyres, vehicle suspensions, suspension cab and seat) between the operator and the source of vibration; and (3) improvement of cab ergonomics and seat profiles to optimize operator posture. This paper reviews the different techniques and problems linked to categories (2) and (3). According to epidemiological studies, the main health risk from whole-body vibration exposure appears to be lower back pain. When designing new mobile machinery, all factors which may contribute to back injury should be considered in order to reduce risk. For example, an optimized seat suspension is useless if the suspension seat cannot be correctly and easily adjusted to the driver's weight, or if the driver is forced to drive in a bent position to avoid his head striking the ceiling due to the spatial requirement of the suspension seat.

  14. On Study of Air/Space-borne Dual-Wavelength Radar for Estimates of Rain Profiles

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2004-01-01

    In this study, a framework is discussed for applying air/space-borne dual-wavelength radar to the estimation of characteristic parameters of hydrometeors. The focus of our study is the Global Precipitation Measurement (GPM) precipitation radar, a dual-wavelength radar that operates at Ku (13.8 GHz) and Ka (35 GHz) bands. With the droplet size distribution (DSD) of rain expressed as a Gamma function, a procedure is described to derive the median volume diameter (D0) and particle number concentration (NT) of rain. The correspondence of an important dual-wavelength radar quantity, the differential frequency ratio (DFR), to D0 in the melting region is given as a function of the distance from the 0°C isotherm. A self-consistent iterative algorithm, which shows promise in accounting for rain attenuation of the radar and inferring the DSD without use of the surface reference technique (SRT), is examined by applying it to apparent radar reflectivity profiles simulated from the DSD model and then comparing the estimates with the model (true) results. For light to moderate rain the self-consistent rain profiling approach converges to unique and correct solutions only if the same shape factors of the Gamma functions are used both to generate and to retrieve the rain profiles, but it does not converge to the true solutions if the DSD form is not chosen correctly. To further examine the dual-wavelength techniques, the self-consistent algorithm, along with forward and backward rain profiling algorithms, is then applied to measurements taken by the 2nd-generation Precipitation Radar (PR-2) built by the Jet Propulsion Laboratory. It is found that rain profiles estimated from the forward and backward approaches are not sensitive to the shape factor of the DSD Gamma distribution, but the self-consistent method is.
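
    The DFR invoked above is conventionally defined from the radar reflectivities at the two frequencies (a standard definition, stated here for the reader rather than quoted from the abstract):

        \mathrm{DFR} = 10 \log_{10} \frac{Z_{Ku}}{Z_{Ka}} = \mathrm{dBZ}_{Ku} - \mathrm{dBZ}_{Ka} ,

    which grows with the median volume diameter once particles are large enough for non-Rayleigh scattering at Ka band; this is what makes a DFR–D0 relation usable in and below the melting region.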

  15. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  16. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  17. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  18. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water-to-graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, k_fl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths, predominantly due to the contributions from alpha particles, and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by k_fl = 0.9964 + 0.0024·z_w-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by k_fl = 0.9947 + 0.0024·z_w-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons, but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions, the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms, as well as for soft tissues.
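
    The two linear fits quoted above drop straight into a second-check script; the only assumption added here is that z_w-eq is the water-equivalent depth in cm, a unit the abstract does not restate.

        # Depth-dependent fluence correction factors from the abstract's fits.
        # Assumption: z_w_eq is water-equivalent depth in cm (unit not restated above).
        def kfl_from_fluence(z_w_eq):
            # fit from fluence distributions differential in energy (std. unc. 0.2%)
            return 0.9964 + 0.0024 * z_w_eq

        def kfl_from_dose_ratio(z_w_eq):
            # fit from ratios of calculated doses at equivalent depths (std. unc. 0.3%)
            return 0.9947 + 0.0024 * z_w_eq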

  19. Chameleonic dilaton, nonequivalent frames, and the cosmological constant problem in quantum string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zanzi, Andrea

    2010-08-15

    The chameleonic behavior of the string theory dilaton is suggested. Some of the possible consequences of the chameleonic string dilaton are analyzed in detail. In particular, (1) we suggest a new stringy solution to the cosmological constant problem and (2) we point out the nonequivalence of different conformal frames at the quantum level. In order to obtain these results, we start by taking into account the (strong coupling) string loop expansion in the string frame (S-frame); therefore the so-called form factors are present in the effective action. The correct dark energy scale is recovered in the Einstein frame (E-frame) without unnatural fine-tunings, and this result is robust against all quantum corrections, granted that we assume a proper structure of the S-frame form factors in the strong coupling regime. At this stage, the possibility still exists that a certain amount of fine-tuning may be required to satisfy some phenomenological constraints. Moreover in the E-frame, in our proposal, all the interactions are switched off on cosmological length scales (i.e., the theory is IR-free), while higher derivative gravitational terms might be present locally (on short distances), and it remains to be seen whether these facts clash with phenomenology. A detailed phenomenological analysis is definitely necessary to clarify these points.

  20. Nonlinear thermal dynamic analysis of graphite/aluminum composite plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tenneti, R.; Chandrashekhara, K.

    1994-09-01

    Because of the increased application of composite materials in high-temperature environments, the thermoelastic analysis of laminated composite structures is important. Many researchers have applied the classical lamination theory, which neglects shear deformation effects, to analyze laminated plates under thermomechanical loading. The transverse shear deformation effects are not negligible, as the ratios of in-plane elastic modulus to transverse shear modulus are relatively large for fiber-reinforced composite laminates. The application of first-order shear deformation theory to the thermoelastic analysis of laminated plates has been reported by only a few investigators. Reddy and Hsu have considered the thermal bending of laminated plates. The analytical and finite element solutions for the thermal buckling of laminated plates have been reported by Tauchert and by Chandrashekhara, respectively. However, the first-order shear deformation theory, based on the assumption of a constant distribution of transverse shear through the thickness, requires a shear correction factor to account for the parabolic shear strain distribution. Higher-order theories have been proposed which eliminate the need for a shear correction factor. In the present work, the nonlinear dynamic analysis of laminated plates subjected to rapid heating is investigated using a higher-order shear deformation theory. A C⁰ finite element model with seven degrees of freedom per node is implemented, and numerical results are presented for laminated graphite/aluminum plates.

  1. Residual diplopia in treated orbital bone fractures

    PubMed Central

    Balaji, S. M.

    2013-01-01

    Background: Residual diplopia (RD) is the main post-treatment complication of orbital bone fracture (OBF) reduction. The cause of RD is varied and often related to the degree of inflammation, surgical timing, graft requirement, and trauma to the orbital musculature, fat, and nerves. The exact prevalence of these causes and their influence on RD are not widely reported in the literature. Materials and Methods: This retrospective study was conducted from January 1, 2000 through December 31, 2011. Sixty-nine patients fulfilling the inclusion and exclusion criteria were enrolled in this study. The nature of the defect causing RD was identified. Demographics, nature of the initial OBF, extent and type of treatment, and grafts were noted. Corrective surgeries were performed. Data entry and analysis were performed using SPSS. Descriptive statistics and Chi-square tests were employed. A P value ≤ 0.05 was taken as significant. Results: The inferior rectus muscle (71%) and other periorbital musculature (56.5%) were entrapped, leading to RD. Globe position abnormalities were observed in 52.1% of cases. Degree of inflammation and type of graft (P = 0.000) were significantly related. Discussion: Preoperative swelling, musculature inflammation, and graft placement significantly influenced the surgical outcome of OBF, and RD is related to these factors. Adequate control of OBF healing and remodeling needs to be considered when timing OBF repair. The author's modification with mesh and cartilage in secondary corrective surgery for RD provided an effective solution for immediate intervention. PMID:23662258

  2. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I have calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. D_max dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns with the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and by TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ~(0.8–1.0) and ~(0.9–1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ~(3–14)% to ~(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels (< 5%). While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
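
    A plausible way such tabulated factors enter a hand calculation, sketched under the assumption that both factors are dose ratios ≤ 1 (sphere/flat and tissue/water), so that dividing raises the hand-calculated MU toward the TPS value. The function and numbers are placeholders, not values from the abstract's tables.

        # Hypothetical application of geometry and heterogeneity correction factors
        # to a TG-71 hand calculation. Factors are assumed to be dose ratios <= 1,
        # so dividing increases MU toward the treatment planning system value.
        def corrected_mu(mu_tg71, f_geometry, f_heterogeneity):
            return mu_tg71 / (f_geometry * f_heterogeneity)

        mu = corrected_mu(mu_tg71=150.0, f_geometry=0.93, f_heterogeneity=0.98)  # demo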

  3. Improving Performance in Quantum Mechanics with Explicit Incentives to Correct Mistakes

    ERIC Educational Resources Information Center

    Brown, Benjamin R.; Mason, Andrew; Singh, Chandralekha

    2016-01-01

    An earlier investigation found that the performance of advanced students in a quantum mechanics course did not automatically improve from midterm to final exam on identical problems even when they were provided the correct solutions and their own graded exams. Here, we describe a study, which extended over four years, in which upper-level…

  4. How To Proofread and Edit Your Writing: A Guide for Student Writers.

    ERIC Educational Resources Information Center

    Morgan, M.C.

    Proofreading can be tedious and boring, especially if it is approached as correcting errors. But proofreading is not correcting errors so much as reviewing the paper for ideas and for readability. Sometimes classmates can help a student proofread--they can help assess the draft, propose some alternative solutions, and make some choices. This paper…

  5. Random function theory revisited - Exact solutions versus the first order smoothing conjecture

    NASA Technical Reports Server (NTRS)

    Lerche, I.; Parker, E. N.

    1975-01-01

    We remark again that the mathematical conjecture known as first order smoothing or the quasi-linear approximation does not give the correct dependence on correlation length (time) in many cases, although it gives the correct limit as the correlation length (time) goes to zero. In this sense, then, the method is unreliable.

  6. The complete proof on the optimal ordering policy under cash discount and trade credit

    NASA Astrophysics Data System (ADS)

    Chung, Kun-Jen

    2010-04-01

    Huang ((2005), 'Buyer's Optimal Ordering Policy and Payment Policy under Supplier Credit', International Journal of Systems Science, 36, 801-807) investigates the buyer's optimal ordering policy and payment policy under supplier credit. His inventory model is correct and interesting. Basically, however, he uses an algebraic method to locate the optimal solution of the annual total relevant cost TRC(T) and ignores the role of the functional behaviour of TRC(T) in locating its optimum. As argued in this article, Huang needs to explore the functional behaviour of TRC(T) to justify his solution. From the viewpoint of logic, therefore, the proof of Theorem 1 in Huang has shortcomings that make the theorem's validity questionable. The main purpose of this article is to remove and correct those shortcomings and to present the complete proofs.
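
    In concrete terms, an algebraic location of a minimizer is complete only when accompanied by the standard first- and second-order conditions on the cost function (generic calculus conditions, stated here for the reader, not drawn from either paper):

        \frac{\mathrm{d}\,TRC(T)}{\mathrm{d}T}\bigg|_{T=T^{*}} = 0 ,
        \qquad
        \frac{\mathrm{d}^{2}\,TRC(T)}{\mathrm{d}T^{2}}\bigg|_{T=T^{*}} > 0 ,

    together with a check of the behaviour of TRC(T) at the boundaries of each case's feasible interval, which is exactly the functional-behaviour analysis the article argues is missing.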

  7. Atmospheric Delta14C Record from Wellington (1954-1993)

    DOE Data Explorer

    Manning, M R. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand; Melhuish, W. H. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand

    1994-09-01

    Trays containing ~2 L of 5 N carbonate-free NaOH solution are typically exposed for intervals of 1-2 weeks, and the atmospheric CO2 absorbed during that time is recovered by acid evolution. Considerable fractionation occurs during absorption into the NaOH solution, and the standard fractionation correction (Stuiver and Polach 1977) is used to determine a δ14C value corrected to δ13C = -25 per mil. Some samples reported here were taken using Ba(OH)2 solution or with extended tray exposure times. These variations in procedure do not appear to affect the results (Manning et al. 1990). A few early measurements were made by bubbling air through columns of NaOH for several hours; these samples have higher δ13C values. Greater detail on the sampling methods is provided in Manning et al. (1990) and Rafter and Fergusson (1959).
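
    The fractionation correction referred to normalizes the measured activity to δ13C = -25 per mil; in the convention of Stuiver and Polach (1977) it is commonly written as

        A_{SN} = A_{S}\left[ 1 - \frac{2\,(25 + \delta^{13}\mathrm{C})}{1000} \right] ,

    where A_S is the measured sample activity, δ13C is in per mil, and A_SN is the normalized activity from which Δ14C is computed relative to the absolute standard activity. (Stated from general radiocarbon practice; the data description above does not restate the formula.)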

  8. Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Bronski, Jared C.; DeVille, Lee; Jip Park, Moon

    2012-09-01

    We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
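
    For reference, the finite-N Kuramoto model analyzed here is the standard system

        \dot{\theta}_{i} = \omega_{i} + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_{j} - \theta_{i}),
        \qquad i = 1, \dots, N ,

    where θ_i are the oscillator phases, ω_i the intrinsic frequencies, and K the coupling strength; phase-locked solutions are those for which all phase differences θ_j - θ_i become stationary in a co-rotating frame.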

  9. The plasticity of extracellular fluid homeostasis in insects.

    PubMed

    Beyenbach, Klaus W

    2016-09-01

    In chemistry, the ratio of all dissolved solutes to the solution's volume yields the osmotic concentration. The present Review uses this chemical perspective to examine how insects deal with challenges to extracellular fluid (ECF) volume, solute content and osmotic concentration (pressure). Solute/volume plots of the ECF (hemolymph) reveal that insects tolerate large changes in all three of these ECF variables. Challenges beyond those tolerances may be 'corrected' or 'compensated'. While a correction simply reverses the challenge, compensation accommodates the challenge with changes in the other two variables. Most insects osmoregulate by keeping ECF volume and osmotic concentration within a wide range of tolerance. Other insects osmoconform, allowing the ECF osmotic concentration to match the ambient osmotic concentration. Aphids are unique in handling solute and volume loads largely outside the ECF, in the lumen of the gut. This strategy may be related to the apparent absence of Malpighian tubules in aphids. Other insects can suspend ECF homeostasis altogether in order to survive extreme temperatures. Thus, ECF homeostasis in insects is highly dynamic and plastic, which may partly explain why insects remain the most successful class of animals in terms of both species number and biomass. © 2016. Published by The Company of Biologists Ltd.

  10. Asymptotic theory of two-dimensional trailing-edge flows

    NASA Technical Reports Server (NTRS)

    Melnik, R. E.; Chow, R.

    1975-01-01

    Problems of laminar and turbulent viscous interaction near trailing edges of streamlined bodies are considered. Asymptotic expansions of the Navier-Stokes equations in the limit of large Reynolds numbers are used to describe the local solution near the trailing edge of cusped or nearly cusped airfoils at small angles of attack in compressible flow. A complicated inverse iterative procedure, involving finite-difference solutions of the triple-deck equations coupled with asymptotic solutions of the boundary values, is used to accurately solve the viscous interaction problem. Results are given for the correction to the boundary-layer solution for drag of a finite flat plate at zero angle of attack and for the viscous correction to the lift of an airfoil at incidence. A rational asymptotic theory is developed for treating turbulent interactions near trailing edges and is shown to lead to a multilayer structure of turbulent boundary layers. The flow over most of the boundary layer is described by a Lighthill model of inviscid rotational flow. The main features of the model are discussed and a sample solution for the skin friction is obtained and compared with the data of Schubauer and Klebanoff for a turbulent flow in a moderately large adverse pressure gradient.

  11. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(') → ℓ^+ ℓ^- γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of the radiative photon. The corrections inevitably depend on the η(') → γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η-η' mixing.

  12. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein S-100B, based on concomitant creatine kinase (CK) levels. The CK–S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT findings, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of three-month headache (area under the curve [AUC] 0.46 vs 0.52, p=0.02), but not of PCS or initial head CT. Using a cutoff that maximizes sensitivity (≥90%), corrected S-100B improved the prediction of the initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
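
    The abstract does not give the functional form of the CK-based correction, so the sketch below assumes a simple linear regression fitted on non-head-injured controls whose CK-predicted component is subtracted from each patient's S-100B; data are synthetic and the whole pipeline is illustrative only (scikit-learn's LinearRegression and roc_auc_score).

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    # Synthetic controls (e.g. runners): S-100B rises with CK, no TBI.
    ck_ctrl = rng.uniform(100, 2000, 200)
    s100b_ctrl = 2e-4 * ck_ctrl + rng.normal(0, 0.05, 200)
    model = LinearRegression().fit(ck_ctrl.reshape(-1, 1), s100b_ctrl)

    # Synthetic TBI cohort: outcome depends only on the brain-derived
    # component, which the CK correction is meant to isolate.
    ck_tbi = rng.uniform(100, 2000, 96)
    brain = rng.normal(0.1, 0.1, 96)
    s100b_tbi = 2e-4 * ck_tbi + brain
    outcome = (brain > 0.1).astype(int)

    corrected = s100b_tbi - model.predict(ck_tbi.reshape(-1, 1))
    print("AUC uncorrected:", roc_auc_score(outcome, s100b_tbi))
    print("AUC corrected:  ", roc_auc_score(outcome, corrected))
    ```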

  13. Satellite clock corrections estimation to accomplish real time ppp: experiments for brazilian real time network

    NASA Astrophysics Data System (ADS)

    Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller

    2014-05-01

    The real time PPP method requires the availability of real time precise orbits and satellite clock corrections. Currently, it is possible to apply the clock and orbit solutions made available by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS Real Time Pilot Project, or to use the operational predicted IGU ephemeris. The accuracy of the satellite positions available in the IGU ephemeris is sufficient for several applications requiring good quality. However, the satellite clock corrections do not provide enough accuracy (3 ns ~ 0.9 m) to accomplish real time PPP at the same level of accuracy. Therefore, for real time PPP applications it is necessary to research and develop appropriate methodologies for estimating the satellite clock corrections in real time with better accuracy. The BKG corrections are disseminated in a newly proposed RTCM 3.x format and can be applied to the broadcast orbits and clocks. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double-difference level between satellites and epochs (MERVAT, DOUSA, 2007). Another possibility is to apply a Kalman filter in the network PPP mode (HAUSCHILD, 2010), and it is also possible to integrate both methods, using network PPP and observables at the double-difference level in specific time intervals (ZHANG; LI; GUO, 2010). In this work, the methodology adopted consists of estimating the satellite clock corrections based on data adjustment in the PPP mode, but for a network of GNSS stations. The clock solution can be obtained from two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase. In the former, we estimate the receiver clock error, satellite clock correction and troposphere, considering that the phase ambiguities are eliminated when differencing between consecutive epochs. When using undifferenced code and phase, however, the ambiguities must be estimated together with the receiver clock errors, satellite clock corrections and troposphere parameters. In both strategies it is also possible to correct the troposphere delay with a numerical weather forecast model instead of estimating it. The prediction of the satellite clock correction can be performed by fitting a straight line or a second-degree polynomial to the time series of estimated satellite clocks. To estimate the satellite clock corrections and to accomplish real time PPP, two pieces of software were developed: "RT_PPP" and "RT_SAT_CLOCK". RT_PPP processes GNSS code and phase data using precise ephemerides and precise satellite clock corrections, together with the several corrections required for PPP. RT_SAT_CLOCK applies a Kalman filter to estimate the satellite clock corrections in the network PPP mode; in this case, all PPP corrections must be applied at each station. The experiments were run in real time and in post-processed mode (simulating real time) with data from the Brazilian continuous GPS network and from the IGS network in a global satellite clock solution. We used the IGU ephemeris for the satellite positions and estimated the satellite clock corrections, performing updates as soon as new ephemeris files became available. Experiments were carried out to assess the accuracy of the estimated clocks when using the Brazilian Numerical Weather Forecast Model (BNWFM) from CPTEC/INPE, when using the ZTD from the European Centre for Medium-Range Weather Forecasts (ECMWF) together with the Vienna Mapping Function (VMF), and when estimating the troposphere together with the clocks and ambiguities in the Kalman filter. The daily precision of the estimated satellite clock corrections reached the order of 0.15 ns. The clocks were applied in real time PPP for Brazilian network stations and in flight tests of Brazilian airplanes, and the results show that it is possible to accomplish real time PPP in static and kinematic modes with accuracies of the order of 10 and 20 cm, respectively.
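
    The prediction step described above, fitting a straight line or a second-degree polynomial to the time series of estimated satellite clocks, can be sketched in a few lines (all values below are illustrative, not from the study):

    ```python
    import numpy as np

    # Time series of estimated satellite clock corrections: epochs in
    # seconds, clock values in nanoseconds (illustrative numbers).
    t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
    clk_ns = np.array([12.10, 12.14, 12.19, 12.23, 12.28])

    for degree in (1, 2):  # straight line and second-degree polynomial
        coeffs = np.polyfit(t, clk_ns, degree)
        predicted = np.polyval(coeffs, 150.0)  # extrapolate 30 s ahead
        print(f"degree {degree}: predicted clock = {predicted:.3f} ns")
    ```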

  14. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  15. Ω-slow Solutions and Be Star Disks

    NASA Astrophysics Data System (ADS)

    Araya, I.; Jones, C. E.; Curé, M.; Silaj, J.; Cidale, L.; Granada, A.; Jiménez, A.

    2017-09-01

    As the disk formation mechanism(s) in Be stars is (are) as yet unknown, we investigate the role of rapidly rotating radiation-driven winds in this process. We implemented the effects of high stellar rotation on m-CAK models, accounting for the shape of the star, the oblate finite disk correction factor, and gravity darkening. For a fast-rotating star, we obtain a two-component wind model, i.e., a fast, thin wind at polar latitudes and an Ω-slow, dense wind in the equatorial regions. We use the equatorial mass densities to explore Hα emission profiles for the following scenarios: (1) a spherically symmetric star, (2) an oblate star with constant temperature, and (3) an oblate star with gravity darkening. One result of this work is that we have developed a novel method for solving the gravity-darkened, oblate m-CAK equation of motion. Furthermore, from our modeling we find that (a) the oblate finite disk correction factor, in the scenario considering gravity darkening, can vary by at least a factor of two between the equatorial and polar directions, influencing the velocity profile and mass-loss rate accordingly, (b) the Hα profiles predicted by our model are in agreement with those predicted by a standard power-law model for the following values of the line-force parameters: 1.5 ≲ k ≲ 3, α ~ 0.6, and δ ≳ 0.1, and (c) the contribution of the fast wind component to the Hα emission line profile is negligible; therefore, the line profiles arise mainly from the equatorial disks of Be stars.

  16. Computer simulations of alkali-acetate solutions: Accuracy of the forcefields in different concentrations

    NASA Astrophysics Data System (ADS)

    Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran

    2017-11-01

    When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable forcefields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied in a range of concentrations (0.03, 0.1, and 1 M). In addition, quantum-mechanics (QM) based energy decomposition analysis was performed in order to estimate the contributions of polarization, electrostatics, dispersion, and QM (non-classical) effects to the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ and acetates. At lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the forcefields. Energy decomposition analysis suggested that improvement of the forcefield parameters to enable accurate simulations of Li-acetate solutions can be achieved but may require the use of a polarizable forcefield. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.

  17. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in determining the speed of sound as applied by freshmen in an undergraduate physics laboratory, and compares their results with the speed of sound determined by a senior student. Both used the same instrument, namely a resonance tube with apparatus. The speed of sound obtained by the senior student was 333.38 m s^-1, deviating from theory by about 3.98%. The freshmen's measurements fell into three categories: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on this analysis, several correction factors are suggested: human error in determining the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
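
    The end-correction factor named above can be made concrete. For a tube closed at one end, resonances occur near L_n + e = (2n - 1)λ/4, where e is the end correction, commonly approximated as 0.3 times the tube diameter; using two consecutive resonance lengths cancels e entirely. A sketch under those textbook assumptions:

    ```python
    def speed_from_two_resonances(f_hz, l1_m, l2_m):
        """v = 2 f (L2 - L1); the end correction drops out."""
        return 2.0 * f_hz * (l2_m - l1_m)

    def speed_from_one_resonance(f_hz, l1_m, diameter_m):
        """First resonance only: v = 4 f (L1 + 0.3 d), keeping e = 0.3 d."""
        return 4.0 * f_hz * (l1_m + 0.3 * diameter_m)

    f = 512.0  # tuning-fork frequency, Hz (illustrative)
    print(speed_from_two_resonances(f, 0.1595, 0.4964))  # ~345 m/s
    print(speed_from_one_resonance(f, 0.1595, 0.030))    # ~345 m/s
    ```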

  18. High-throughput ab-initio dilute solute diffusion database.

    PubMed

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
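
    The host-based correction is only named in the abstract; the sketch below assumes, purely for illustration, that the DFT solute diffusivity is rescaled by the ratio of the experimental to the calculated host self-diffusivity at the temperature of interest (all Arrhenius parameters are made up).

    ```python
    import math

    def arrhenius(d0_m2s, q_ev, t_k):
        """D = D0 * exp(-Q / (kB * T)), with kB in eV/K."""
        return d0_m2s * math.exp(-q_ev / (8.617333e-5 * t_k))

    T = 800.0  # K
    d_host_dft = arrhenius(1.0e-5, 1.30, T)    # calculated host self-diffusivity
    d_host_exp = arrhenius(1.5e-5, 1.25, T)    # experimental host self-diffusivity
    d_solute_dft = arrhenius(2.0e-5, 1.10, T)  # calculated dilute solute diffusivity

    # Assumed ratio-based correction (for illustration only):
    d_solute_corrected = d_solute_dft * (d_host_exp / d_host_dft)
    print(f"{d_solute_corrected:.3e} m^2/s")
    ```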

  19. Anamorphic quasiperiodic universes in modified and Einstein gravity with loop quantum gravity corrections

    NASA Astrophysics Data System (ADS)

    Amaral, Marcelo M.; Aschheim, Raymond; Bubuianu, Laurenţiu; Irwin, Klee; Vacaru, Sergiu I.; Woolridge, Daniel

    2017-09-01

    The goal of this work is to elaborate on new geometric methods of constructing exact and parametric quasiperiodic solutions for anamorphic cosmology models in modified gravity theories, MGTs, and general relativity, GR. There exist previously studied generic off-diagonal and diagonalizable cosmological metrics encoding gravitational and matter fields with quasicrystal like structures, QC, and holonomy corrections from loop quantum gravity, LQG. We apply the anholonomic frame deformation method, AFDM, in order to decouple the (modified) gravitational and matter field equations in general form. This allows us to find integral varieties of cosmological solutions determined by generating functions, effective sources, integration functions and constants. The coefficients of metrics and connections for such cosmological configurations depend, in general, on all spacetime coordinates and can be chosen to generate observable (quasi)-periodic/aperiodic/fractal/stochastic/(super) cluster/filament/polymer like (continuous, stochastic, fractal and/or discrete) structures in MGTs and/or GR. In this work, we study new classes of solutions for anamorphic cosmology with LQG holonomy corrections. Such solutions are characterized by nonlinear symmetries of generating functions for generic off-diagonal cosmological metrics and generalized connections, with possible nonholonomic constraints to Levi-Civita configurations and diagonalizable metrics depending only on a time-like coordinate. We argue that anamorphic quasiperiodic cosmological models integrate the concept of quantum discrete spacetime, with certain gravitational QC-like vacuum and nonvacuum structures, together with that of a contracting universe that homogenizes, isotropizes and flattens without introducing initial-condition or multiverse problems.

  20. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
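
    As a worked illustration of where an RCF enters four-probe data reduction: for an infinite thin sheet the sheet resistance is (π/ln 2)(V/I); finite sample size and probe placement are absorbed into a multiplicative correction factor (the RCF value below is hypothetical).

    ```python
    import math

    def sheet_resistance(v_volts, i_amps, rcf=1.0):
        """Sheet resistance in ohms/square, corrected by the RCF."""
        return rcf * (math.pi / math.log(2.0)) * (v_volts / i_amps)

    def resistivity(v_volts, i_amps, thickness_m, rcf=1.0):
        """Bulk resistivity in ohm*m for a film of known thickness."""
        return sheet_resistance(v_volts, i_amps, rcf) * thickness_m

    # Example: 1 mA through a 150 nm film, 2.3 mV measured, with a
    # hypothetical RCF of 0.95 for the finite sample geometry.
    print(resistivity(2.3e-3, 1.0e-3, 150e-9, rcf=0.95))
    ```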

  1. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    NASA Astrophysics Data System (ADS)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census are unclear; the current solution to this problem has been manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is reasonable and efficient.

  2. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

    During Raman spectroscopy analysis, organic molecules and contaminants can obscure or swamp the Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet, and principal component analysis and random forests are then used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of the background-correction algorithm are checked, and the influence of the fluorescence background on Raman spectral clustering analysis is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background-correction solution is provided for clustering and other analyses.
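
    The paper relies on the wavelet-based baselineWavelet package; as a simpler stand-in that conveys the idea, the sketch below applies an iterative ("modified polyfit") polynomial baseline to a synthetic fluorescence-contaminated spectrum.

    ```python
    import numpy as np

    def polynomial_baseline(x, y, degree=5, iterations=50):
        """Iteratively fit a polynomial that sinks to the background:
        points above the current fit are clipped down each round, so the
        fit follows the broad fluorescence rather than the Raman peaks."""
        xs = (x - x.mean()) / x.std()  # normalize for numerical stability
        baseline = y.copy()
        for _ in range(iterations):
            fit = np.polyval(np.polyfit(xs, baseline, degree), xs)
            baseline = np.minimum(baseline, fit)
        return np.polyval(np.polyfit(xs, baseline, degree), xs)

    # Synthetic spectrum: broad fluorescence plus two Raman bands.
    x = np.linspace(200, 1800, 1600)
    y = (1e-6 * (x - 1000) ** 2 + 0.5
         + np.exp(-((x - 800) / 8) ** 2)
         + 0.7 * np.exp(-((x - 1350) / 10) ** 2))

    corrected = y - polynomial_baseline(x, y)
    print(f"max corrected peak: {corrected.max():.2f}")  # ~1, background gone
    ```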

  3. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω(f_clin, f_msr; Q_clin, Q_msr) with high precision. Both quantities require knowledge of the field-size- and detector-dependent correction factor k(f_clin, f_msr; Q_clin, Q_msr). The aim of this study is the determination of the correction factor k(f_clin, f_msr; Q_clin, Q_msr) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. In addition, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f_clin, f_msr; Q_clin, Q_msr); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f_clin, f_msr; Q_clin, Q_msr) is of the order of 1.2 at a field size of 1 × 1 cm^2 for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates by no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm^2, but below this field size there is a steep decrease of k(f_clin, f_msr; Q_clin, Q_msr) below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm^2, i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small-field dosimetry, since its correction factor k(f_clin, f_msr; Q_clin, Q_msr) is closest to unity in small fields and mainly independent of the electron beam spot size.
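
    Once the Monte Carlo doses are in hand, the correction factor itself is a simple double ratio: dose-to-water over dose-to-detector in the clinical field, divided by the same ratio in the machine-specific reference (msr) field. A minimal sketch with illustrative numbers only:

    ```python
    def k_correction(dw_clin, ddet_clin, dw_msr, ddet_msr):
        """k(f_clin, f_msr; Q_clin, Q_msr) as a double dose ratio."""
        return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

    # Illustrative: a chamber under-responds in a 1 x 1 cm^2 field due
    # to volume averaging, so k rises above unity.
    print(k_correction(dw_clin=0.82, ddet_clin=0.70,
                       dw_msr=1.00, ddet_msr=1.00))  # ~1.17
    ```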

  4. A new method for the prediction of combustion instability

    NASA Astrophysics Data System (ADS)

    Flanagan, Steven Meville

    This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of the first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. Also, the corrected linear stability theory based upon a rotational unsteady flow field extended to first order in mean flow Mach number has been implemented in two computer programs developed for the Macintosh platform. A quasi one-dimensional version of the program has been developed which is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD) to the solution for the acoustic mode shapes and frequency. GFD is a recently developed numerical method for finding fully three-dimensional solutions for this class of problems. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool which is capable of producing more accurate and meaningful predictions than have been possible in the past.

  5. Omni-focal refractive focus correction technology as a substitute for bi/multi-focal intraocular lenses, contact lenses, and spectacles

    NASA Astrophysics Data System (ADS)

    Ben Yaish, Shai; Zlotnik, Alex; Raveh, Ido; Yehezkel, Oren; Belkin, Michael; Lahav, Karen; Zalevsky, Zeev

    2009-02-01

    We present a novel technology for extending the depth of focus of imaging lenses, for use in ophthalmic lenses correcting myopia, hyperopia with regular/irregular astigmatism, and presbyopia. This technology produces continuous focus without appreciable loss of energy. It is incorporated as a coating or engraving on the surface of spectacles, contact lenses or intraocular lenses. It was fabricated and tested in simulations and in clinical trials. From this testing, the technology appears to provide a satisfactory single-lens solution. The obtained performance is apparently better than that of existing multi/bifocal lenses, and the approach is modular enough to provide solutions for various ophthalmic applications.

  6. Quantization of the Szekeres system

    NASA Astrophysics Data System (ADS)

    Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.

    2018-06-01

    We study the quantum corrections to the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum version on the wave function results in a solution which is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections; thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function which shows that a stationary surface of the probability corresponds to a classical exact solution.

  7. Elasticity of short DNA molecules: theory and experiment for contour lengths of 0.6-7 μm.

    PubMed

    Seol, Yeonee; Li, Jinyu; Nelson, Philip C; Perkins, Thomas T; Betterton, M D

    2007-12-15

    The wormlike chain (WLC) model currently provides the best description of double-stranded DNA elasticity for micron-sized molecules. This theory requires two intrinsic material parameters: the contour length L and the persistence length p. We measured and then analyzed the elasticity of double-stranded DNA as a function of L (632 nm-7.03 μm) using the classic solution to the WLC model. When the elasticity data were analyzed using this solution, the resulting fitted value for the persistence length p_wlc depended on L; even for moderately long DNA molecules (L = 1300 nm), this apparent persistence length was 10% smaller than its limiting value for long DNA. Because p is a material parameter, and cannot depend on length, we sought a new solution to the WLC model, which we call the "finite wormlike chain (FWLC)," to account for effects not considered in the classic solution. Specifically we accounted for the finite chain length, the chain-end boundary conditions, and the bead rotational fluctuations inherent in optical trapping assays where beads are used to apply the force. After incorporating these corrections, we used our FWLC solution to generate force-extension curves, and then fit those curves with the classic WLC solution, as done in the standard experimental analysis. These results qualitatively reproduced the apparent dependence of p_wlc on L seen in experimental data when analyzed with the classic WLC solution. Directly fitting experimental data to the FWLC solution reduces the apparent dependence of p_fwlc on L by a factor of 3. Thus, the FWLC solution provides a significantly improved theoretical framework in which to analyze single-molecule experiments over a broad range of experimentally accessible DNA lengths, including both short (a few hundred nanometers in contour length) and very long (microns in contour length) molecules.
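
    The classic (infinite-length) WLC solution referred to above is commonly evaluated through the Marko-Siggia interpolation formula, F(x) = (kB T / p)[1/(4(1 - x)^2) - 1/4 + x] with x = z/L; the FWLC corrections modify this for finite length, boundary conditions and bead fluctuations. A sketch of the classic formula only (room-temperature kB T assumed):

    ```python
    KBT_PN_NM = 4.11  # thermal energy at room temperature, pN*nm

    def wlc_force(rel_ext: float, p_nm: float) -> float:
        """Marko-Siggia WLC force (pN) at relative extension z / L."""
        return (KBT_PN_NM / p_nm) * (
            0.25 / (1.0 - rel_ext) ** 2 - 0.25 + rel_ext)

    # DNA with p = 50 nm stretched to 90% of its contour length:
    print(f"{wlc_force(0.90, 50.0):.2f} pN")  # ~2 pN
    ```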

  8. Selecting the Correct Solution to a Physics Problem when Given Several Possibilities

    ERIC Educational Resources Information Center

    Richards, Evan Thomas

    2010-01-01

    Despite decades of research on what learning actions are associated with effective learners (Palincsar and Brown, 1984; Atkinson, et al., 2000), the literature has not fully addressed how to cue those actions (particularly within the realm of physics). Recent reforms that integrate incorrect solutions suggest a possible avenue to reach those…

  9. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    Numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. It has been implemented in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators can be used. Although some combinations of circuit elements make the numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models to make the solutions converge.
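
    A minimal example of the technique for an RC low-pass stage, including the divergence the record warns about: replacing dv/dt with (v[n+1] - v[n])/dt gives v[n+1] = v[n] + (dt/RC)(v_in - v[n]), which diverges whenever dt > 2RC (component values below are arbitrary).

    ```python
    def rc_step_response(r_ohm, c_farad, v_in, dt, steps):
        """Explicit difference-equation model of an RC low-pass circuit."""
        v, out = 0.0, []
        for _ in range(steps):
            v += (dt / (r_ohm * c_farad)) * (v_in - v)
            out.append(round(v, 3))
        return out

    # R*C = 1 ms; stable for dt < 2 ms, divergent beyond that.
    print(rc_step_response(1e3, 1e-6, 5.0, dt=0.1e-3, steps=5))  # converges
    print(rc_step_response(1e3, 1e-6, 5.0, dt=2.5e-3, steps=5))  # diverges
    ```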

  10. Development of a two-equation turbulence model for hypersonic flows. Volume 1; Evaluation of a low Reynolds number correction to the Kappa - epsilon two equation compressible turbulence model

    NASA Technical Reports Server (NTRS)

    Knight, Doyle D.; Becht, Robert J.

    1995-01-01

    The objective of the current research is the development of an improved k-epsilon two-equation compressible turbulence model for turbulent boundary layer flows experiencing strong viscous-inviscid interactions. The development of an improved model is important in the design of hypersonic vehicles such as the National Aerospace Plane (NASP) and the High Speed Civil Transport (HSCT). Improvements have been made to the low Reynolds number functions in the eddy viscosity and in the equation for the solenoidal dissipation of the k-epsilon turbulence model. These corrections offer easily applicable modifications that may be utilized for more complex geometries. The low Reynolds number corrections are functions of the turbulent Reynolds number and are therefore independent of the coordinate system. The proposed model offers advantages over some current models which are based upon the physical distance from the wall, that modify the constants of the standard model, or that make more corrections than are necessary to the governing equations. The code has been developed to solve the Favre-averaged boundary layer equations for mass, momentum, energy, turbulence kinetic energy, and solenoidal dissipation using Keller's box scheme and the Newton spatial marching method. The code has been validated by removing the turbulent terms and comparing the solution with the Blasius solution, and by comparing the turbulent solution with an existing k-epsilon model code using wall function boundary conditions. Excellent agreement is seen between the computed solution and the Blasius solution, and between the two codes. The model has been tested for both subsonic and supersonic flat-plate turbulent boundary layer flow by comparing the computed skin friction with the Van Driest II theory and the experimental data of Weighardt; by comparing the transformed velocity profile with the data of Weighardt, and the Law of the Wall and the Law of the Wake; and by comparing the computed results for an adverse pressure gradient with the experimental data of Fernando and Smits. Good agreement is obtained with the experimental correlations for all flow conditions.

  11. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.

  12. Brane Inflation, Solitons and Cosmological Solutions: I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, P.

    2005-01-25

    In this paper we study various cosmological solutions for a D3/D7 system directly from M-theory with fluxes and M2-branes. In M-theory, these solutions exist only if we incorporate higher derivative corrections from the curvatures as well as G-fluxes. We take these corrections into account and study a number of toy cosmologies, including one with a novel background for the D3/D7 system whose supergravity solution can be completely determined. Our new background preserves all the good properties of the original model and opens up avenues to investigate cosmological effects from wrapped branes and brane-antibrane annihilation, to name a few. We also discuss in some detail semilocal defects with higher global symmetries, for example exceptional ones, that occur in a slightly different regime of our D3/D7 model. We show that the D3/D7 system does have the required ingredients to realize these configurations as non-topological solitons of the theory. These constructions also allow us to give a physical meaning to the existence of certain underlying homogeneous quaternionic Kähler manifolds.

  13. Deriving analytic solutions for compact binary inspirals without recourse to adiabatic approximations

    NASA Astrophysics Data System (ADS)

    Galley, Chad R.; Rothstein, Ira Z.

    2017-05-01

    We utilize the dynamical renormalization group formalism to calculate the real space trajectory of a compact binary inspiral for long times via a systematic resummation of secularly growing terms. This method generates closed form solutions without orbit averaging, and the accuracy can be systematically improved. The expansion parameter is v^5 ν Ω (t - t_0), where t_0 is the initial time, t is the time elapsed, Ω and v are the angular orbital frequency and initial speed, respectively, and ν is the binary's symmetric mass ratio. We demonstrate how to apply the renormalization group method to resum solutions beyond leading order in two ways. First, we calculate the second-order corrections of the leading radiation reaction force, which involves highly nontrivial checks of the formalism (i.e., its renormalizability). Second, we show how to systematically include post-Newtonian corrections to the radiation reaction force. By avoiding orbit averaging, we gain predictive power and eliminate ambiguities in the initial conditions. Finally, we discuss how this methodology can be used to find analytic solutions to the spin equations of motion that are valid over long times.

  14. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian beam profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  15. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50% after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  17. VISCOPLASTIC FLUID MODEL FOR DEBRIS FLOW ROUTING.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1986-01-01

    This paper describes how a generalized viscoplastic fluid model, which was developed based on non-Newtonian fluid mechanics, can be successfully applied to routing a debris flow down a channel. The one-dimensional dynamic equations developed for unsteady clear-water flow can be used for debris flow routing if the flow parameters, such as the momentum (or energy) correction factor and the resistance coefficient, can be accurately evaluated. The writer's generalized viscoplastic fluid model can be used to express such flow parameters in terms of the rheological parameters for debris flow in wide channels. A preliminary analysis of the theoretical solutions reveals the importance of the flow behavior index and the so-called modified Froude number for uniformly progressive flow in snout profile modeling.

  18. System optimization on coded aperture spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei

    2017-10-01

    The aim is to find a simple multiple-configuration solution that achieves higher refractive efficiency and reduces the disturbance caused by changes in the field of view (FOV), especially in a two-dimensional spatial expansion. The coded aperture system is designed with a special structure that includes an objective, a coded component, a prism reflex system, a compensatory plate and an imaging lens. Correlative algorithms and perfect imaging methods are available to ensure that this system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS and other related criteria. Compared with a conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration.

  19. Rapid and accurate assessment of the activity measurements in Brazilian hospitals and clinics.

    PubMed

    de Oliveira, A E; Iwahara, A; da Cruz, P A L; da Silva, C J; de Araújo, E B; Mengatti, J; da Silva, R L; Trindade, O L

    2018-04-01

    Traceability of Nuclear Medicine Service (NMS) measurements was checked by the Institute of Radioprotection and Dosimetry (IRD) through the Institute of Energy and Nuclear Research (IPEN). In 2016, IRD ran an intercomparison program and invited the Brazilian NMSs authorized to administer 131I to patients. Sources of 131I were distributed to 33 NMSs. Three other sources from the same solution were sent to IRD after measurement at IPEN. These sources were calibrated in the IRD reference system, and a correction factor of 1.013 was obtained. Ninety percent of the NMS comparison results are within ±10% of the National Laboratory of Metrology of Ionizing Radiation (LNMRI) value, the Brazilian legal requirement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x^2 should be selected if, over the entire concentration range, σ is a constant, σ^2 is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x^2 should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors as the concentrations were always reported with the passing curves which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x^2 and 1/y^2 was discussed. All of the findings can be generalized and applied into other quantitative analysis techniques using calibration curves with weighted least-squares regression algorithm.
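
    The recommended 1/x^2 weighting drops straight into any weighted least-squares routine. One subtlety, shown below with numpy.polyfit: its w argument multiplies the residuals, so w = 1/x is what implements 1/x^2 weighting of the squared residuals (data are synthetic).

    ```python
    import numpy as np

    x = np.array([1.0, 5.0, 25.0, 125.0, 625.0])   # concentrations
    y = np.array([0.012, 0.053, 0.24, 1.31, 6.1])  # instrument responses

    unweighted = np.polyfit(x, y, 1)
    weighted = np.polyfit(x, y, 1, w=1.0 / x)      # 1/x^2 weighting

    print("unweighted slope, intercept:    ", unweighted)
    print("1/x^2-weighted slope, intercept:", weighted)
    # The weighted fit tracks the low-concentration standards far more
    # closely, which is the practical argument for 1/x^2 in bioanalysis.
    ```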

  1. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for the computation of terrain correction (or local gravity field modeling), based on the application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid, is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are selected, and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ, ϕ, h}. The four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements with the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section of 10 m × 10 m and a height of 10,000 m, and less than 1.5×10^-4 m^2/s^2 for a mass element with a cross-section of 1 km × 1 km and a height of 10,000 m. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed, with the same accuracy as terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.

  2. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed in a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution, combined with the spatial responsivity data of the sphere, is used to calculate the spatial non-uniformity correction for the lamp. The method was validated against a traditional goniophotometric approach by determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to 0.15%, with a mean magnitude of 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) of the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
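
    The abstract does not spell out how the two ingredients are combined; the sketch below assumes an axially symmetric form in which the correction factor is the ratio of the solid-angle-weighted mean sphere responsivity for an isotropic source to that for the lamp under test (a simplification; the lamp distribution and responsivity map are hypothetical).

    ```python
    import numpy as np

    theta = np.linspace(0.0, np.pi, 181)       # polar angle grid
    d_omega = 2 * np.pi * np.sin(theta)        # solid-angle weight per step

    intensity = np.cos(theta / 2) ** 4         # hypothetical LED-like lamp
    responsivity = 1.0 + 0.02 * np.cos(theta)  # hypothetical sphere map

    iso_mean = np.sum(responsivity * d_omega) / np.sum(d_omega)
    lamp_mean = (np.sum(responsivity * intensity * d_omega)
                 / np.sum(intensity * d_omega))

    print(f"spatial correction factor ~ {iso_mean / lamp_mean:.4f}")
    ```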

  3. Awareness of disease in dementia: factor structure of the assessment scale of psychosocial impact of the diagnosis of dementia.

    PubMed

    Dourado, Marcia C N; Mograbi, Daniel C; Santos, Raquel L; Sousa, Maria Fernanda B; Nogueira, Marcela L; Belfort, Tatiana; Landeira-Fernandez, Jesus; Laks, Jerson

    2014-01-01

    Despite the growing understanding of the conceptual complexity of awareness, there currently exists no instrument for assessing different domains of awareness in dementia. In the current study, the psychometric properties of a multidimensional awareness scale, the Assessment Scale of Psychosocial Impact of the Diagnosis of Dementia (ASPIDD), are explored in a sample of 201 people with dementia and their family caregivers. Cronbach's alpha was high (α = 0.87), indicating excellent internal consistency. The mean of corrected item-total correlation coefficients was moderate. ASPIDD presented a four-factor solution with a well-defined structure: awareness of activities of daily living, cognitive functioning and health condition, emotional state, and social functioning and relationships. Functional disability was positively correlated with total ASPIDD, unawareness of activities of daily living, cognitive functioning, and with emotional state. Caregiver burden was correlated with total ASPIDD scores and unawareness of cognitive functioning. The results suggest that ASPIDD is indeed a multidimensional scale, providing a reliable measure of awareness of disease in dementia. Further studies should explore the risk factors associated with different dimensions of awareness in dementia.

  4. Effects of Spell Checkers on English as a Second Language Students' Incidental Spelling Learning: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    Lin, Po-Han; Liu, Tzu-Chien; Paas, Fred

    2017-01-01

    Computer-based spell checkers help to correct misspellings instantly. Almost all word processing devices are now equipped with a spell-check function that either automatically corrects errors or provides a list of intended words. However, it is not clear how reliance on this convenient technological solution affects spelling learning.…

  5. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measurement of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).

  6. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.

    1990-01-01

    A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.

  7. Dilatation-dissipation corrections for advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve both the k-omega and k-epsilon model predictions of the effect of Mach number on spreading rate. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock-separated flow with the same values for all closure coefficients.

  8. Extended Reissner-Nordström solutions sourced by dynamical torsion

    NASA Astrophysics Data System (ADS)

    Cembranos, Jose A. R.; Valcarcel, Jorge Gigante

    2018-04-01

    We find a new exact vacuum solution in the framework of the Poincaré gauge field theory with massive torsion. In this model, torsion operates as an independent field and introduces corrections to the vacuum structure present in General Relativity. The new static and spherically symmetric configuration shows a Reissner-Nordström-like geometry characterized by a spin charge. It extends the known massless torsion solution to the massive case. The corresponding Reissner-Nordström-de Sitter solution is also compatible with a cosmological constant and additional U(1) gauge fields.

  9. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  10. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  11. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor within the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved particularly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
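
    As a toy illustration of the ISC idea (not the authors' code: the random transmission-matrix model, the GA settings, and all parameters below are invented for the sketch), the phase segments can be split into interleaved groups and a small genetic algorithm run on each group in turn:

    ```python
    # Toy interleaved segment correction (ISC): optimize a phase mask group by
    # group; scattering is modeled by a random complex transmission matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n_seg, n_groups, pop_size, n_gen = 64, 4, 16, 30
    tm = rng.normal(size=n_seg) + 1j * rng.normal(size=n_seg)  # toy medium

    def focus_intensity(phase):
        """Intensity at the target focus for a given SLM phase mask."""
        return np.abs(np.sum(tm * np.exp(1j * phase))) ** 2

    phase = np.zeros(n_seg)
    groups = [np.arange(n_seg)[g::n_groups] for g in range(n_groups)]  # interleaved

    for idx in groups:  # optimize one interleaved group at a time
        pop = rng.uniform(0, 2 * np.pi, size=(pop_size, idx.size))
        for _ in range(n_gen):
            fit = []
            for cand in pop:
                trial = phase.copy()
                trial[idx] = cand
                fit.append(focus_intensity(trial))
            pop = pop[np.argsort(fit)[::-1]]       # rank by fitness
            parents = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(0, len(parents), size=2)]
                mask = rng.random(idx.size) < 0.5   # uniform crossover
                child = np.where(mask, a, b) + rng.normal(0, 0.1, idx.size)
                children.append(np.mod(child, 2 * np.pi))
            pop = np.vstack([parents, children])
        phase[idx] = pop[0]  # keep the best phases found for this group

    print("enhanced intensity:", focus_intensity(phase))
    ```

    The grouping step is the essential difference from all-segment correction: each GA run searches a much smaller space, which is what lets the improvement factor keep climbing instead of plateauing.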

  12. Molding of strength testing samples using modern PDCPD material for purpose of automotive industry

    NASA Astrophysics Data System (ADS)

    Grabowski, L.; Baier, A.; Sobek, M.

    2017-08-01

    The casting of metal materials is widely known, but the molding of composite polymer materials is still not a well-known method. The initial choice of method for producing composite bodies was the casting of PDCPD material. To cast the polymer composite material, a special mold was made. First, a mold 3D-printed from PLA material was used; after several attempts at casting PDCPD, many problems were encountered. The second step was to use a mold milled from a firm, dense isocyanate foam. Trials showed that this solution is more resistant to high-temperature peaks, but the material is too fragile to be reused several times. This solution also prevents the mold from being heated externally, which can be necessary for a correct molding process. The final approach was to use an aluminum mold, which is well suited to the PDCPD polymer composite because of its low adhesiveness. This solution led to a correct injection of the PDCPD polymer composite material. After the casting operation, all PDCPD test samples were tested and the results compared. The outcome of this work was to achieve correct injection properties for the composite material. The research and results are described in detail in this paper.

  13. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    NASA Astrophysics Data System (ADS)

    Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing

    2015-09-01

    The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α2 ≃ 2α1.
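
    For orientation, the underlying quasilinear relation can be sketched as follows (generic notation, not necessarily the paper's): for a plane fundamental wave of amplitude $A_1$ and wavenumber $k$, the second harmonic grows as $A_2 = \beta k^2 z A_1^2 / 8$ over propagation distance $z$, so the measured parameter is estimated as

    $$\beta = \frac{8 A_2}{k^2 z A_1^2}\,\frac{D_1^2(z)}{D_2(z)}\,M(z),$$

    where $D_1$ and $D_2$ are the diffraction corrections of the fundamental and second harmonic and $M(z)$ is the attenuation correction, which becomes negligible when $\alpha_2 \simeq 2\alpha_1$ holds, as described above.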

  14. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids described by the power-law constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and frictional energy loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.
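
    For orientation, the role of the correction factor can be shown in a few lines: v = C·sqrt(2Δp/ρ) is the standard Pitot relation, and the sample value of C below is invented, standing in for the paper's neural-network correlation in the power-law index, Reynolds number, and tube spacing:

    ```python
    # Hypothetical sketch: applying a Pitot tube correction factor C instead of
    # the ideal Bernoulli result (C = 1). The value C = 0.92 is illustrative.
    import math

    def pitot_velocity(delta_p, rho, C=1.0):
        """Velocity from the impact/static pressure difference; C corrects Bernoulli."""
        return C * math.sqrt(2.0 * delta_p / rho)

    v_ideal = pitot_velocity(250.0, 1050.0)          # Bernoulli, C = 1
    v_corr  = pitot_velocity(250.0, 1050.0, C=0.92)  # low-Re, shear-thinning case
    print(f"ideal {v_ideal:.3f} m/s vs corrected {v_corr:.3f} m/s")
    ```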

  15. Rapid and high-precision measurement of sulfur isotope and sulfur concentration in sediment pore water by multi-collector inductively coupled plasma mass spectrometry.

    PubMed

    Bian, Xiao-Peng; Yang, Tao; Lin, An-Jun; Jiang, Shao-Yong

    2015-01-01

    We have developed a technique for the rapid, precise and accurate determination of sulfur isotopes (δ(34)S) by MC-ICP-MS, applicable to a range of sulfur-bearing solutions of different sulfur content. The 10 ppm Alfa-S solution (an ammonium sulfate solution, the working standard of the authors' laboratory) was used to bracket other Alfa-S solutions of different concentrations, and the measured δ(34)SV-CDT values of the Alfa-S solutions deviate from the reference value to varying degrees (concentration effect). The stability of the concentration effect was verified, and a correction curve was constructed based on the Alfa-S solutions to correct measured δ(34)SV-CDT values. The curve was applied successfully to AS solutions (dissolved ammonium sulfate from the authors' laboratory) and pore water samples, validating the reliability of our analytical method. This method also enables us to measure the sulfur concentration simultaneously when analyzing the sulfur isotope composition. There is a strong linear correlation (R(2)>0.999) between the sulfur concentrations and the intensity ratios of samples and the standard. We constructed a regression curve based on the Alfa-S solutions, and this curve was successfully used to determine the sulfur concentrations of AS solutions and pore water samples. The analytical technique presented here enables rapid, precise and accurate S isotope measurement for a wide range of sulfur-bearing solutions - in particular for pore water samples with complex matrices and varying sulfur concentrations - and allows simultaneous measurement of sulfur concentrations. Copyright © 2014 Elsevier B.V. All rights reserved.
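
    A minimal sketch of the bracketing arithmetic described above; the correction-curve coefficients are placeholders, not the paper's fit:

    ```python
    # Sample-standard bracketing with a concentration-effect correction.
    # The linear correction in the sample/standard intensity ratio mimics the
    # paper's scheme; a and b below are invented coefficients.
    def delta34S(r_sample, r_std_before, r_std_after):
        """delta 34S (per mil) by bracketing the sample between two standards."""
        r_std = 0.5 * (r_std_before + r_std_after)
        return (r_sample / r_std - 1.0) * 1000.0

    def concentration_correction(delta_meas, intensity_ratio, a=-0.8, b=0.8):
        """Remove the concentration effect; a and b are illustrative only."""
        return delta_meas - (a * intensity_ratio + b)

    d_raw = delta34S(0.04432, 0.04410, 0.04414)
    d_cor = concentration_correction(d_raw, intensity_ratio=0.6)
    print(f"raw {d_raw:.2f} per mil, corrected {d_cor:.2f} per mil")
    ```

    The same intensity ratio that drives the correction also serves, via the regression curve, as the simultaneous concentration measurement mentioned in the abstract.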

  16. Recovery correction technique for NMR spectroscopy of perchloric acid extracts using DL-valine-2,3-d2: validation and application to 5-fluorouracil-induced brain damage.

    PubMed

    Nakagami, Ryutaro; Yamaguchi, Masayuki; Ezawa, Kenji; Kimura, Sadaaki; Hamamichi, Shusei; Sekine, Norio; Furukawa, Akira; Niitsu, Mamoru; Fujii, Hirofumi

    2014-01-01

    We explored a recovery correction technique that can correct for metabolite loss during perchloric acid (PCA) extraction and minimize inter-assay variance in quantitative (1)H nuclear magnetic resonance (NMR) spectroscopy of the brain, and evaluated its efficacy in 5-fluorouracil (5-FU)- and saline-administered rats. We measured the recovery of creatine and dl-valine-2,3-d2 from PCA extract containing both compounds (0.5 to 8 mM). We intravenously administered either 5-FU for 4 days (total, 100 mg/kg body weight) or saline to 2 groups of 11 rats each. We subsequently performed PCA extraction of the whole brain on Day 9, externally adding 7 µmol of dl-valine-2,3-d2. We estimated metabolite concentrations by NMR spectroscopy with recovery correction, adjusting each concentration according to the recovery factor of dl-valine-2,3-d2. For each metabolite concentration, we calculated the coefficient of variation (CEV) and compared differences between the 2 groups using an unpaired t-test. Equivalent recoveries of dl-valine-2,3-d2 (89.4 ± 3.9%) and creatine (89.7 ± 3.9%) in the PCA extract of the mixed solution indicated the suitability of dl-valine-2,3-d2 as an internal reference. In the rat study, recovery of dl-valine-2,3-d2 was 90.6 ± 9.2%. Nine major metabolite concentrations adjusted by the recovery of dl-valine-2,3-d2 in saline-administered rats were comparable to data in the literature. CEVs of these metabolites were reduced from 10-17% before correction to 7-16% after correction. The significance of differences in alanine and taurine between the 5-FU- and saline-administered groups was detected only after recovery correction (0.75 ± 0.12 versus 0.86 ± 0.07 for alanine; 5.17 ± 0.59 versus 5.66 ± 0.42 for taurine [µmol/g brain tissue]; P < 0.05). The new recovery correction technique corrected metabolite loss during PCA extraction, minimized inter-assay variance in quantitative (1)H NMR spectroscopy of brain tissue, and effectively detected inter-group differences in concentrations of brain metabolites between 5-FU- and saline-administered rats.
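
    The correction itself is a one-line rescaling. A sketch with illustrative numbers (the 90.6% recovery is taken from the abstract; the metabolite values are invented):

    ```python
    # Recovery correction: divide each measured concentration by the fractional
    # recovery of the internal reference (dl-valine-2,3-d2).
    recovery = 0.906  # fraction recovered, e.g. 90.6 % as in the rat study
    measured = {"alanine": 0.68, "taurine": 4.69}  # micromol/g, invented values
    corrected = {m: c / recovery for m, c in measured.items()}
    print(corrected)
    ```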

  17. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in-vivo dosimetry. Despite their popularity, an appropriate implementation of an in-vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose, such as dmax for the 10×10 cm2 field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field size, target-to-surface distance (TSD), and accessories (such as wedges and compensating filters). In some clinical practices, however, no correction factor is used. In this case, a diode-based in-vivo dosimetry program may not serve its purpose effectively, that is, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula relating the diode readout to the entrance dose. The correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n-type) and the newly available QED (p-type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage in silicon diodes; radiation damage has been shown to be more serious for n-type than for p-type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of the corrections for QED (p-type) diodes are smaller than for ISORAD detectors. PACS number(s): 87.66.–a, 87.52.–g PMID:11674824
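
    A hedged sketch of the general form such a formula takes: the entrance dose is the diode reading times a calibration factor and a product of correction factors. The factor names and numbers below are illustrative, not the paper's calibration data:

    ```python
    # Schematic entrance-dose formula for diode in-vivo dosimetry:
    # reading x calibration factor x CF(TSD) x CF(field size) x CF(wedge).
    def entrance_dose(reading, f_cal, cf_tsd=1.0, cf_fs=1.0, cf_wedge=1.0):
        return reading * f_cal * cf_tsd * cf_fs * cf_wedge

    dose = entrance_dose(reading=102.4, f_cal=0.978,
                         cf_tsd=1.012, cf_fs=0.994, cf_wedge=1.031)
    print(f"entrance dose: {dose:.1f} cGy")  # all numbers invented
    ```

    Skipping the CF terms (the "no correction factor" practice criticized above) is equivalent to silently setting them all to 1, which is exactly what makes the check unreliable for wedged or extended-TSD fields.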

  18. Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations

    NASA Astrophysics Data System (ADS)

    Hilden, Michael

    2005-03-01

    The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas caused by overloaded sewer systems in the event of heavy rain. The reliable assessment of the flood risk for the connected houses requires accurate simulations of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free-surface flow of water accurately, but since their numerical solution requires high CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are applied as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only a fraction (about 1/100) of the NSEs' computational effort. They assume a hydrostatic pressure distribution and depth-averaged horizontal velocities, and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street. Accordingly, calculations show differences between NSE and SWE solutions. The SWEs are therefore extended in order to assess the flood risks in urban areas reliably within applicable computational efforts. Separating vortex regions from the main flow and approximating vertical velocities to include their contributions in a pressure correction yield suitable results.

  19. Revisiting Hansen Solubility Parameters by Including Thermodynamics.

    PubMed

    Louwerse, Manuel J; Maldonado, Ana; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi

    2017-11-03

    The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy and enthalpy, several corrections are suggested that make the methodology thermodynamically sound without losing its ease of use. The most important corrections include accounting for the solvent molecules' size, the destruction of the solid's crystal structure, and the specificity of hydrogen-bonding interactions, as well as opportunities to predict the solubility at extrapolated temperatures. Testing the original and the improved methods on a large industrial dataset including solvent blends, fit qualities improved from 0.89 to 0.97 and the percentage of correct predictions rose from 54 % to 78 %. Full Matlab scripts are included in the Supporting Information, allowing readers to implement these improvements on their own datasets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
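
    For context, the classic Hansen distance that these corrections build on can be written in a few lines (a sketch of the original method, not the improved thermodynamic one; the parameter values below are illustrative):

    ```python
    # Classic Hansen solubility distance Ra in (dispersion, polar, H-bonding)
    # parameter space; RED = Ra/R0 < 1 conventionally flags a good solvent.
    import math

    def hansen_distance(s1, s2):
        """s1, s2 = (delta_d, delta_p, delta_h) in MPa**0.5."""
        dd, dp, dh = (a - b for a, b in zip(s1, s2))
        return math.sqrt(4.0 * dd**2 + dp**2 + dh**2)  # factor 4 on dispersion

    ra = hansen_distance((18.0, 1.4, 2.0), (16.8, 5.7, 8.0))  # invented values
    red = ra / 8.0  # R0 = 8 MPa**0.5, also invented
    print("good solvent" if red < 1.0 else "poor solvent", f"(RED = {red:.2f})")
    ```

    The corrections proposed in the paper (solvent size, crystal-lattice destruction, hydrogen-bond specificity, temperature extrapolation) modify how this distance is scored, not the underlying three-parameter space.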

  20. Platform control for space-based imaging: the TOPSAT mission

    NASA Astrophysics Data System (ADS)

    Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.

    2004-11-01

    This paper describes the imaging-mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700 km sun-synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5 m and a nominal image size of 15x15 km. The ADCS design solution for the imaging mode uses a moving-demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.

  1. The electric double layer at a metal electrode in pure water

    NASA Astrophysics Data System (ADS)

    Brüesch, Peter; Christen, Thomas

    2004-03-01

    Pure water is a weak electrolyte that dissociates into hydronium ions and hydroxide ions. In contact with a charged electrode a double layer forms for which neither experimental nor theoretical studies exist, in contrast to electrolytes containing extrinsic ions like acids, bases, and solute salts. Starting from a self-consistent solution of the one-dimensional modified Poisson-Boltzmann equation, which takes into account activity coefficients of point-like ions, we explore the properties of the electric double layer by successive incorporation of various correction terms like finite ion size, polarization, image charge, and field dissociation. We also discuss the effect of the usual approximation of an average potential as required for the one-dimensional Poisson-Boltzmann equation, and conclude that the one-dimensional approximation underestimates the ion density. We calculate the electric potential, the ion distributions, the pH-values, the ion-size corrected activity coefficients, and the dissociation constants close to the electric double layer and compare the results for the various model corrections.
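
    As a reminder of the starting point (a standard textbook form, before the activity, finite-size, image-charge, and field-dissociation corrections discussed in the paper), the one-dimensional Poisson-Boltzmann equation for the symmetric hydronium/hydroxide pair of bulk density $n_0$ reads

    $$\frac{d^2\phi}{dx^2} = \frac{2 e n_0}{\varepsilon}\,\sinh\!\left(\frac{e\phi}{k_B T}\right),$$

    with $\phi$ the mean electrostatic potential; in pure water $n_0 \approx 10^{-7}$ mol/L follows from the autodissociation equilibrium (pH 7), which is what makes this double layer so much more dilute than those of ordinary electrolytes.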

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
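
    A schematic prediction-correction step under these assumptions (first-order, projection-based, no Hessian inverse; the objective, constraint set, and step sizes below are invented for illustration):

    ```python
    # Toy prediction-correction tracking of min_x f(x; t) over a box constraint.
    # The prediction accounts for the time drift of the gradient; the correction
    # applies a few projected gradient steps at the new time instant.
    import numpy as np

    def project(x, lo=-1.0, hi=1.0):
        return np.clip(x, lo, hi)  # box constraint as a simple stand-in

    def grad_f(x, t):
        target = np.array([np.sin(t), np.cos(t)])  # moving optimum
        return x - target                           # f(x;t) = 0.5*||x - target||^2

    def track(x, t, dt, alpha=0.5, n_corr=3):
        drift = (grad_f(x, t + dt) - grad_f(x, t)) / dt  # finite-difference in t
        x = project(x - alpha * dt * drift)              # prediction
        for _ in range(n_corr):                          # correction
            x = project(x - alpha * grad_f(x, t + dt))
        return x

    x, t, dt = np.zeros(2), 0.0, 0.1
    for _ in range(50):
        x = track(x, t, dt)
        t += dt
    print("tracked point:", x)
    ```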

  3. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement (statistical) dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ωm, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction, with diphenylthiocarbazone (Cu, Pb) and dimethylglyoxime (Ni), was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ±3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, and not by two linearly correlated quantities, which is the usual interpretation of such graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric-curve effect.

  4. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method, to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  5. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
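
    One plausible reading of the scheme, as a sketch (treating the non-absorbing-wavelength measurement as a subtractive baseline; the function and its arguments are hypothetical, not the patent's exact procedure):

    ```python
    # Schematic absorbance correction: a measurement at a wavelength where the
    # sample does not absorb captures scattering/turbidity losses, which are
    # then removed from the apparent absorbance at the analytical wavelength.
    import math

    def true_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
        a_measured = math.log10(i_ref / i_sample)        # apparent absorbance
        a_baseline = math.log10(i_ref_na / i_sample_na)  # non-absorbing wavelength
        return a_measured - a_baseline                   # corrected absorbance

    print(true_absorbance(1.00, 0.40, 1.00, 0.92))  # invented intensities
    ```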

  6. Iterative CT shading correction with no prior information

    NASA Astrophysics Data System (ADS)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effects and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image where each structure is filled with the single CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, a residual image capturing the various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes the shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
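
    A toy version of the iterative loop may help fix the idea. Here the projection-domain low-pass of the real method is replaced by an image-domain Gaussian filter, and segmentation by snapping to a few invented "tissue" levels, so this only illustrates the flow of the algorithm, not the FDK-based implementation:

    ```python
    # Toy iterative shading correction: segment -> template -> residual ->
    # low-pass the residual (stand-in for line-integral filtering + FDK) ->
    # compensate -> iterate until the low-frequency error shrinks.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def segment_template(img, levels=(0.0, 0.2, 1.0)):
        """Snap each pixel to the nearest assumed 'tissue' CT number."""
        lv = np.asarray(levels)
        return lv[np.argmin(np.abs(img[..., None] - lv), axis=-1)]

    img = gaussian_filter((np.random.rand(64, 64) > 0.7).astype(float), 1.0)
    shaded = img + 0.15 * np.linspace(0, 1, 64)[None, :]  # synthetic shading

    corrected = shaded.copy()
    for _ in range(5):
        residual = corrected - segment_template(corrected)
        smooth = gaussian_filter(residual, sigma=8)  # keep low-frequency error
        corrected = corrected - smooth               # compensate and iterate
    ```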

  7. New Correction Factors Based on Seasonal Variability of Outdoor Temperature for Estimating Annual Radon Concentrations in UK.

    PubMed

    Daraktchieva, Z

    2017-06-01

    Indoor radon concentrations generally vary with season. Radon gas enters buildings from beneath owing to a small air pressure difference between the inside of a house and outdoors. This underpressure, which draws soil gas (including radon) into the house, depends on the difference between the indoor and outdoor temperatures. In a typical UK house, the mean indoor radon concentration reaches a maximum in January and a minimum in July. Sine functions were used to model the indoor radon data and monthly average outdoor temperatures covering the period between 2005 and 2014. The analysis showed a strong negative correlation between the modelled indoor radon data and outdoor temperature. This correlation was used to calculate new correction factors for estimating the annual radon concentration in UK homes. A comparison between the results obtained with the new correction factors and the previously published correction factors showed that the new factors perform consistently better on the selected data sets. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
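
    A sketch of the seasonal-correction idea: fit a sine to monthly indoor radon, then scale a short-term measurement made in month m toward the annual mean. The radon numbers below are invented; the real factors come from the paper's temperature-correlated model:

    ```python
    # Fit a sinusoidal seasonal model to monthly radon data and derive
    # per-month correction factors to the annual mean.
    import numpy as np
    from scipy.optimize import curve_fit

    months = np.arange(12)
    radon = 50 + 15 * np.cos(2 * np.pi * months / 12) \
            + np.random.normal(0, 2, 12)  # synthetic Bq/m^3 values

    def seasonal(m, mean, amp, phase):
        return mean + amp * np.cos(2 * np.pi * m / 12 + phase)

    (mean, amp, phase), _ = curve_fit(seasonal, months, radon, p0=(50, 10, 0))
    correction = mean / seasonal(months, mean, amp, phase)  # per-month factors
    annual_estimate = radon * correction  # short-term values scaled to annual
    ```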

  8. Analysis of diffuse radiation data for Beer Sheva: Measured (shadow ring) versus calculated (global-horizontal beam) values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudish, A.I.; Ianetz, A.

    1993-12-01

    The authors have utilized concurrently measured global, normal incidence beam, and diffuse radiation data, the latter measured by means of a shadow ring pyranometer, to study the relative magnitude of the anisotropic contribution (circumsolar region and nonuniform sky conditions) to the diffuse radiation. In the case of Beer Sheva, the monthly average hourly anisotropic correction factor varies from 2.9 to 20.9%, whereas the "standard" geometric correction factor varies from 5.6 to 14.0%. The monthly average hourly overall correction factor (combined anisotropic and geometric factors) varies from 8.9 to 37.7%. The data have also been analyzed using a simple model of sky radiance developed by Steven in 1984. His anisotropic correction factor is a function of the relative strength and angular width of the circumsolar radiation region. The results of this analysis are in agreement with those previously reported for Quidron on the Dead Sea, viz. the anisotropy and relative strength of the circumsolar radiation are significantly greater than at any of the sites analyzed by Steven. In addition, the data have been utilized to validate a model developed by LeBaron et al. in 1990 for correcting shadow ring diffuse radiation data. The monthly average deviation between the corrected and true diffuse radiation values varies from 4.55 to 7.92%.

  9. S-NPP VIIRS thermal emissive band gain correction during the blackbody warm-up-cool-down cycle

    NASA Astrophysics Data System (ADS)

    Choi, Taeyoung J.; Cao, Changyong; Weng, Fuzhong

    2016-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has onboard calibrators, the blackbody (BB) and the Space View (SV), for Thermal Emissive Band (TEB) radiometric calibration. In normal operation, the BB temperature is set to 292.5 K, providing one radiance level. In NOAA's Integrated Calibration and Validation System (ICVS) monitoring system, the TEB calibration factors (F-factors) have been trended and show very stable responses; however, the BB Warm-Up-Cool-Down (WUCD) cycles provide measurements of the detectors' gain and temperature-dependent sensitivity. Since the launch of S-NPP, the NOAA Sea Surface Temperature (SST) group has noticed unexpected global SST anomalies during the WUCD cycles. In this study, the TEB F-factors are calculated during the WUCD cycle of June 17, 2015. The TEB F-factors are analyzed by identifying the VIIRS On-Board Calibrator Intermediate Product (OBCIP) files as Warm-Up or Cool-Down granules. To correct the SST anomaly, an F-factor correction parameter is calculated from modified C1 (or b1) values derived from the linear portion of the C1 coefficient during the WUCD. The correction is applied back to the original VIIRS SST bands, significantly reducing the F-factor changes. Obvious improvements are observed in M12, M14 and M15, but correction effects are hardly seen in M16. Further investigation is needed to find the source of the F-factor oscillations during the WUCD.

  10. High-throughput ab-initio dilute solute diffusion database

    PubMed Central

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308

  11. Physics of heat pipe rewetting

    NASA Technical Reports Server (NTRS)

    Chan, S. H.

    1992-01-01

    Although several studies have been made to determine the rewetting characteristics of liquid films on heated rods, tubes, and flat plates, no solutions are yet available to describe the rewetting process of a hot plate subjected to a uniform heating. A model is presented to analyze the rewetting process of such plates with and without grooves. Approximate analytical solutions are presented for the prediction of the rewetting velocity and the transient temperature profiles of the plates. It is shown that the present rewetting velocity solution reduces correctly to the existing solution for the rewetting of an initially hot isothermal plate without heating from beneath the plate. Numerical solutions have also been obtained to validate the analytical solutions.

  12. Experimental setup for the determination of the correction factors of the neutron doseratemeters in fast neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Elena; Bercea, Sorin; Dudu, Dorin

    2013-12-16

    The use of the U-120 Cyclotron of IFIN-HH made it possible to set up a testing bench with fast neutrons in order to determine the correction factors of doseratemeters dedicated to neutron measurement. This paper deals with the research performed to develop the irradiation facility for testing with the fast-neutron flux generated at the Cyclotron. The facility is presented, together with the results obtained in determining the correction factor for a doseratemeter dedicated to neutron dose equivalent rate measurement.

  13. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas-phase concentrations were multiplied by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants in improved agreement with those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.

  14. Slip Correction Measurements of Certified PSL Nanoparticles Using a Nanometer Differential Mobility Analyzer (Nano-DMA) for Knudsen Number From 0.5 to 83

    PubMed Central

    Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.

    2005-01-01

    The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation: C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95 % confidence interval) in measuring slip correction factor was about 2 % for the 100.7 nm SRM particles, about 3 % for the 19.90 nm PSL particles, and about 2.5 % for the 269 nm SRM particles. The major sources of uncertainty are the diameter of particles, the geometric constant associated with NDMA, and the voltage. PMID:27308102
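
    The fitted expression quoted above drops straight into code (the coefficients are those reported in the abstract; only the sample Kn values are arbitrary):

    ```python
    # Slip correction factor C = 1 + Kn*(alpha + beta*exp(-gamma/Kn)) with the
    # fit parameters from the abstract: alpha=1.165, beta=0.483, gamma=0.997.
    import math

    def slip_correction(kn, alpha=1.165, beta=0.483, gamma=0.997):
        return 1.0 + kn * (alpha + beta * math.exp(-gamma / kn))

    for kn in (0.5, 5.0, 83.0):  # spanning the measured Knudsen range
        print(f"Kn = {kn:5.1f}  ->  C = {slip_correction(kn):8.2f}")
    ```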

  15. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  16. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  17. Pedigrees, Prizes, and Prisoners: The Misuse of Conditional Probability

    ERIC Educational Resources Information Center

    Carlton, Matthew A.

    2005-01-01

    We present and discuss three examples of misapplication of the notion of conditional probability. In each example, we present the problem along with a published and/or well-known incorrect--but seemingly plausible--solution. We then give a careful treatment of the correct solution, in large part to show how careful application of basic probability…

  18. Incubation Provides Relief from Artificial Fixation in Problem Solving

    ERIC Educational Resources Information Center

    Penaloza, Alan A.; Calvillo, Dustin P.

    2012-01-01

    An incubation effect occurs when taking a break from a problem helps solvers arrive at the correct solution more often than working on it continuously. The forgetting-fixation account, a popular explanation of how incubation works, posits that a break from a problem allows the solver to forget the incorrect path to the solution and finally access…

  19. Little C Creativity: A Case for Our Science Classroom--An Indian Perspective

    ERIC Educational Resources Information Center

    Chander, Subhash

    2012-01-01

    The number of day-to-day challenges has increased at every stage of life, particularly in developing countries, and therefore there is a crying need for a search for solutions. Education plays an important role in providing correct direction, and science education can prove crucial in achieving this goal. Solutions to individual as well as…

  20. 77 FR 28765 - Homeless Emergency Assistance and Rapid Transition to Housing: Emergency Solutions Grants Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... program. The heading for this rule displayed a RIN number of 2506-AC29, which was incorrect. RIN number.... ACTION: Interim rule; correction. SUMMARY: The document advises that the interim rule for the Emergency Solutions Grants program, published on December 5, 2011, displayed an incorrect RIN number. This document...

  1. Evaluation of a regional real-time precise positioning system based on GPS/BeiDou observations in Australia

    NASA Astrophysics Data System (ADS)

    Ding, Wenwu; Tan, Bingfeng; Chen, Yongchang; Teferle, Felix Norman; Yuan, Yunbin

    2018-02-01

    The performance of real-time (RT) precise positioning can be improved by utilizing observations from multiple Global Navigation Satellite Systems (GNSS) instead of one particular system. Since the end of 2012, BeiDou, independently established by China, has provided operational services for users in the Asia-Pacific region. In this study, a regional RT precise positioning system is developed to evaluate the performance of GPS/BeiDou observations in Australia in providing high-precision positioning services for users. Fixing satellite orbits that are updated every three hours, RT correction messages are generated and broadcast at the server side by processing RT observation/navigation data streams from the national network of GNSS Continuously Operating Reference Stations in Australia (AUSCORS). At the user side, RT PPP is realized by processing the RT data streams together with the received RT correction messages. RT clock offsets, whose accuracy reached 0.07 and 0.28 ns for GPS and BeiDou, respectively, can be determined. Based on these corrections, an accuracy of 12.2, 30.0 and 45.6 cm in the North, East and Up directions was achieved for the BeiDou-only solution after 30 min, while the GPS-only solution reached 5.1, 15.3 and 15.5 cm for the same components at the same time. A further improvement of 43.7, 36.9 and 45.0 percent in the three directions, respectively, was achieved for the combined GPS/BeiDou solution. After the initialization process, the North, East and Up positioning accuracies were 5.2, 8.1 and 17.8 cm, respectively, for the BeiDou-only solution, and 1.5, 3.0 and 4.7 cm for the GPS-only solution. However, only a 20.9% improvement in the East direction was obtained for the GPS/BeiDou solution, while no improvement in the other directions was detected. It is expected that these improvements may grow as the accuracy of the BeiDou-only solution increases.

  2. Application of Near Infrared Spectroscopy Coupled with Fluidized Bed Enrichment and Chemometrics to Detect Low Concentration of β-Naphthalenesulfonic Acid.

    PubMed

    Li, Wei; Zhang, Xuan; Zheng, Kaiyi; Du, Yiping; Cap, Peng; Sui, Tao; Geng, Jinpei

    2015-01-01

    A fluidized bed enrichment technique was developed to improve the sensitivity of near infrared (NIR) spectroscopy, offering rapid operation and handling of large solution volumes. D301 resin was used as an adsorption material to preconcentrate β-naphthalenesulfonic acid from solutions in a concentration range of 2.0-100.0 μg/mL, and NIR spectra were measured directly on the β-naphthalenesulfonic acid adsorbed on the material. An improved partial least squares (PLS) model was attained with the aid of multiplicative scatter correction pretreatment and the stability competitive adaptive reweighted sampling wavenumber selection method. The root mean square error of cross validation was 1.87 μg/mL with 7 PLS factors. An independent test set was used to assess the model, with relative errors (RE) in an acceptable range of 0.46 to 10.03% and a mean RE of 3.72%. This study confirmed the viability of the proposed method for the measurement of low concentrations of β-naphthalenesulfonic acid in water.

  3. ANMCO/AIIC/SIT Consensus Information Document: definition, precision, and suitability of electrocardiographic signals of electrocardiographs, ergometry, Holter electrocardiogram, telemetry, and bedside monitoring systems.

    PubMed

    Gulizia, Michele Massimo; Casolo, Giancarlo; Zuin, Guerrino; Morichelli, Loredana; Calcagnini, Giovanni; Ventimiglia, Vincenzo; Censi, Federica; Caldarola, Pasquale; Russo, Giancarmine; Leogrande, Lorenzo; Franco Gensini, Gian

    2017-05-01

    The electrocardiogram (ECG) signal can be derived from different sources. These include systems for surface ECG, Holter monitoring, ergometric stress tests, and telemetry systems and bedside monitoring of vital parameters, which are useful for rhythm and ST-segment analysis and ECG screening of electrical sudden cardiac death predictors. A precise ECG diagnosis is based upon correct recording, elaboration, and presentation of the signal. Several sources of artefacts and potential external causes may influence the quality of the original ECG waveforms. Other factors that may affect the quality of the information presented depend upon the technical solutions employed to improve the signal. The choice of the instrumentations and solutions used to offer a high-quality ECG signal are, therefore, of paramount importance. Some requirements are reported in detail in scientific statements and recommendations. The aim of this consensus document is to give scientific reference for the choice of systems able to offer high quality ECG signal acquisition, processing, and presentation suitable for clinical use.

  4. Improved Accuracy of Low Affinity Protein-Ligand Equilibrium Dissociation Constants Directly Determined by Electrospray Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Jaquillard, Lucie; Saab, Fabienne; Schoentgen, Françoise; Cadene, Martine

    2012-05-01

    There is continued interest in the determination by ESI-MS of equilibrium dissociation constants (KD) that accurately reflect the affinity of a protein-ligand complex in solution. Issues in the measurement of KD are compounded in the case of low affinity complexes. Here we present a KD measurement method and corresponding mathematical model dealing with both gas-phase dissociation (GPD) and aggregation. To this end, a rational mathematical correction of GPD (fsat) is combined with the development of an experimental protocol to deal with gas-phase aggregation. A guide to apply the method to noncovalent protein-ligand systems according to their kinetic behavior is provided. The approach is validated by comparing the KD values determined by this method with in-solution KD literature values. The influence of the type of molecular interactions and instrumental setup on fsat is examined as a first step towards a fine dissection of factors affecting GPD. The method can be reliably applied to a wide array of low affinity systems without the need for a reference ligand or protein.

  5. ANMCO/AIIC/SIT Consensus Information Document: definition, precision, and suitability of electrocardiographic signals of electrocardiographs, ergometry, Holter electrocardiogram, telemetry, and bedside monitoring systems

    PubMed Central

    Casolo, Giancarlo; Zuin, Guerrino; Morichelli, Loredana; Calcagnini, Giovanni; Ventimiglia, Vincenzo; Censi, Federica; Caldarola, Pasquale; Russo, Giancarmine; Leogrande, Lorenzo; Franco Gensini, Gian

    2017-01-01

    The electrocardiogram (ECG) signal can be derived from different sources. These include systems for surface ECG, Holter monitoring, ergometric stress tests, and telemetry systems and bedside monitoring of vital parameters, which are useful for rhythm and ST-segment analysis and ECG screening of electrical sudden cardiac death predictors. A precise ECG diagnosis is based upon correct recording, elaboration, and presentation of the signal. Several sources of artefacts and potential external causes may influence the quality of the original ECG waveforms. Other factors that may affect the quality of the information presented depend upon the technical solutions employed to improve the signal. The choice of the instrumentations and solutions used to offer a high-quality ECG signal are, therefore, of paramount importance. Some requirements are reported in detail in scientific statements and recommendations. The aim of this consensus document is to give scientific reference for the choice of systems able to offer high quality ECG signal acquisition, processing, and presentation suitable for clinical use. PMID:28751842

  6. Constraining Viewing Geometries of Pulsars with Single-Peaked Gamma-ray Profiles Using a Multiwavelength Approach

    NASA Technical Reports Server (NTRS)

    Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.

    2012-01-01

    Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of the 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were previously derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those obtained previously [1], although we do find small differences in some cases. We indicate how the gamma-ray and radio pulse shapes, as well as their relative phase lags, lead to constraints in the solution space. Values for the flux correction factor f_Ω corresponding to the fits are also derived (with errors).

  7. Absorption spectrum of neat liquid benzene and its concentrated solutions in n-hexane from 220 to 170 nm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saik, V.O.; Lipsky, S.

    The electronic absorption spectrum of benzene has been obtained by phototransmission measurements over a concentration range from 0.005 M in n-hexane to the neat liquid at 11.2 M, and over a spectral range that extends down to 170 nm. Good agreement is obtained with previously reported measurements on the neat liquid. The oscillator strength of the strongly allowed A1g → E1u transition is maintained at ca. 1.0 as the benzene concentration increases, but is accompanied by extensive redistribution of the intensity such that the optical cross section at the position of the absorption maximum (which shifts from 184.2 nm in dilute solution to 189.5 nm in the neat liquid) reduces by a factor of 2.7. An explanation for these changes in terms of Lorentz field corrections to the complex dielectric constant is developed, and its implication for the assignment of the neat-liquid absorption as a collective excitation is considered. 43 refs., 5 figs., 1 tab.

  8. A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Mahadir, R.

    1994-07-01

    Reconciling temperature data obtained from measurements of single BHTs, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Data quality varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells have DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrating against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows were calculated using (1) uncorrected RFT data, (2) multiple-BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10%. All three of these heat-flow populations had standard deviations identical to that of the DST data, but with significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data; Horner plot data by a factor of 1.18, and single-BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures based on RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend that significant corrections be routinely made until local calibration factors are established.

  9. Mass Evolution of Mediterranean, Black, Red, and Caspian Seas from GRACE and Altimetry: Accuracy Assessment and Solution Calibration

    NASA Technical Reports Server (NTRS)

    Loomis, B. D.; Luthcke, S. B.

    2016-01-01

    We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range acceleration residuals providing a useful metric for analyzing solution quality.

  10. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By exploiting the rapid phase retrieval of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme that corrects, with good performance, Bessel-Gauss beams distorted by inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor but only a matrix detector is used to capture the probe Gaussian beam, and the correction phase mask is then computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
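
    For reference, a textbook Gerchberg-Saxton loop looks as follows (a generic sketch, not the authors' implementation; the probe-beam specifics of the paper are omitted):

    ```python
    # Generic Gerchberg-Saxton phase retrieval: alternate between enforcing the
    # known source amplitude and the measured far-field amplitude via FFTs.
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, n_iter=100):
        field = source_amp * np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
        for _ in range(n_iter):
            far = np.fft.fft2(field)
            far = target_amp * np.exp(1j * np.angle(far))      # measured amplitude
            field = np.fft.ifft2(far)
            field = source_amp * np.exp(1j * np.angle(field))  # source amplitude
        return np.angle(field)  # estimated correction phase

    src = np.ones((64, 64))
    tgt = np.abs(np.fft.fft2(src * np.exp(1j * np.random.rand(64, 64))))
    phase_mask = gerchberg_saxton(src, tgt)
    ```

    The speed of this loop, a pair of FFTs per iteration with no search, is what the abstract refers to as rapid phase solution and what makes the scheme attractive for real-time correction.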

  11. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: The NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber's response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum. Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose is accurately and consistently delivered to patients.

  12. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  13. Heavy quark form factors at two loops

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; Falcioni, G.; De Freitas, A.; Marquard, P.; Rana, N.; Schneider, C.

    2018-05-01

    We compute the two-loop QCD corrections to the heavy quark form factors in the case of the vector, axial-vector, scalar and pseudoscalar currents up to second order in the dimensional parameter ε = (4 − D)/2. These terms are required in the renormalization of the higher-order corrections to these form factors.

  14. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for ⁹⁰Sr/⁹⁰Y, ⁸⁵Kr and ¹⁴⁷Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO were 0.25%, 0.6% and 1% for ⁹⁰Sr/⁹⁰Y, ⁸⁵Kr and ¹⁴⁷Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. MATHEMATICAL ANALYSIS OF STEADY-STATE SOLUTIONS IN COMPARTMENT AND CONTINUUM MODELS OF CELL POLARIZATION

    PubMed Central

    ZHENG, ZHENZHEN; CHOU, CHING-SHAN; YI, TAU-MU; NIE, QING

    2013-01-01

    Cell polarization, in which substances previously uniformly distributed become asymmetric due to external and/or internal stimulation, is a fundamental process underlying cell mobility, cell division, and other polarized functions. The yeast cell S. cerevisiae has been a model system to study cell polarization. During mating, yeast cells sense shallow external spatial gradients and respond by creating steeper internal gradients of protein aligned with the external cue. The complex spatial dynamics during yeast mating polarization consist of positive feedback, degradation, global negative feedback control, and cooperative effects in protein synthesis. Understanding such complex regulations and interactions is critical to studying many important characteristics in cell polarization, including signal amplification, tracking dynamic signals, and the potential trade-off between achieving both objectives in a robust fashion. In this paper, we study some of these questions by analyzing several models with different spatial complexity: two compartments, three compartments, and continuum in space. The step-wise approach allows detailed characterization of properties of the steady state of the system, providing more insight into biological regulation during cell polarization. For cases without membrane diffusion, our study reveals that increasing the number of spatial compartments results in an increase in the number of steady-state solutions, in particular the number of stable steady-state solutions, with the continuum models possessing infinitely many steady-state solutions. Through both analysis and simulations, we find that stronger positive feedback, reduced diffusion, and a shallower ligand gradient all result in more steady-state solutions, although most of these are not optimally aligned with the gradient. We explore in the different settings the relationship between the number of steady-state solutions and the extent and accuracy of the polarization. Taken together, these results furnish a detailed description of the factors that influence the trade-off between a single correctly aligned but poorly polarized stable steady-state solution and multiple more highly polarized stable steady-state solutions that may be incorrectly aligned with the external gradient. PMID:21936604

  16. Factors affecting the periapical healing process of endodontically treated teeth.

    PubMed

    Holland, Roberto; Gomes, João Eduardo; Cintra, Luciano Tavares Angelo; Queiroz, Índia Olinta de Azevedo; Estrela, Carlos

    2017-01-01

    Tissue repair is an essential process that reestablishes tissue integrity and regular function. Nevertheless, different therapeutic factors and clinical conditions may interfere with this process of periapical healing. This review aims to discuss the important therapeutic factors associated with the clinical protocol used during root canal treatment and to highlight the systemic conditions associated with the periapical healing process of endodontically treated teeth. The antibacterial strategies indicated in the conventional treatment of an inflamed and infected pulp and the modulation of the host's immune response may assist in tissue repair, if wound healing has been hindered by infection. Systemic conditions, such as diabetes mellitus and hypertension, can also inhibit wound healing. The success of root canal treatment is affected by the correct choice of clinical protocol. These factors are dependent on the sanitization process (instrumentation, irrigant solution, irrigating strategies, and intracanal dressing), the apical limit of the root canal preparation and obturation, and the quality of the sealer. The challenges affecting the healing process of endodontically treated teeth include control of the inflammation of pulp or infectious processes and simultaneous neutralization of unpredictable provocations to the periapical tissue. Along with these factors, one must understand the local and general clinical conditions (systemic health of the patient) that affect the predicted outcome of root canal treatment.

  17. Factors affecting the periapical healing process of endodontically treated teeth

    PubMed Central

    Holland, Roberto; Gomes, João Eduardo; Cintra, Luciano Tavares Angelo; Queiroz, Índia Olinta de Azevedo; Estrela, Carlos

    2017-01-01

    Tissue repair is an essential process that reestablishes tissue integrity and regular function. Nevertheless, different therapeutic factors and clinical conditions may interfere with this process of periapical healing. This review aims to discuss the important therapeutic factors associated with the clinical protocol used during root canal treatment and to highlight the systemic conditions associated with the periapical healing process of endodontically treated teeth. The antibacterial strategies indicated in the conventional treatment of an inflamed and infected pulp and the modulation of the host's immune response may assist in tissue repair, if wound healing has been hindered by infection. Systemic conditions, such as diabetes mellitus and hypertension, can also inhibit wound healing. The success of root canal treatment is affected by the correct choice of clinical protocol. These factors are dependent on the sanitization process (instrumentation, irrigant solution, irrigating strategies, and intracanal dressing), the apical limit of the root canal preparation and obturation, and the quality of the sealer. The challenges affecting the healing process of endodontically treated teeth include control of the inflammation of pulp or infectious processes and simultaneous neutralization of unpredictable provocations to the periapical tissue. Along with these factors, one must understand the local and general clinical conditions (systemic health of the patient) that affect the predicted outcome of root canal treatment. PMID:29069143

  18. Diagnosing the decline in pharmaceutical R&D efficiency.

    PubMed

    Scannell, Jack W; Blanckley, Alex; Boldon, Helen; Warrington, Brian

    2012-03-01

    The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
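    A quick consistency check on the figures quoted above (assuming a 1950-2010 span, which the abstract does not state explicitly):

    ```python
    years = 2010 - 1950      # assumed span; the abstract says "since 1950"
    halving_time = 9.0       # "halved roughly every 9 years"
    print(f"{2 ** (years / halving_time):.0f}-fold decline")  # ~102-fold
    # "~80-fold" corresponds to a halving time nearer 9.5 years, consistent
    # with the abstract's "roughly".
    ```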

  19. Combining the ensemble and Franck-Condon approaches for calculating spectral shapes of molecules in solution

    NASA Astrophysics Data System (ADS)

    Zuehlsdorff, T. J.; Isborn, C. M.

    2018-01-01

    The correct treatment of vibronic effects is vital for the modeling of absorption spectra of many solvated dyes. Vibronic spectra for small dyes in solution can be easily computed within the Franck-Condon approximation using an implicit solvent model. However, implicit solvent models neglect specific solute-solvent interactions on the electronic excited state. On the other hand, a straightforward way to account for solute-solvent interactions and temperature-dependent broadening is by computing vertical excitation energies obtained from an ensemble of solute-solvent conformations. Ensemble approaches usually do not account for vibronic transitions and thus often produce spectral shapes in poor agreement with experiment. We address these shortcomings by combining zero-temperature vibronic fine structure with vertical excitations computed for a room-temperature ensemble of solute-solvent configurations. In this combined approach, all temperature-dependent broadening is treated classically through the sampling of configurations and quantum mechanical vibronic contributions are included as a zero-temperature correction to each vertical transition. In our calculation of the vertical excitations, significant regions of the solvent environment are treated fully quantum mechanically to account for solute-solvent polarization and charge-transfer. For the Franck-Condon calculations, a small amount of frozen explicit solvent is considered in order to capture solvent effects on the vibronic shape function. We test the proposed method by comparing calculated and experimental absorption spectra of Nile red and the green fluorescent protein chromophore in polar and non-polar solvents. For systems with strong solute-solvent interactions, the combined approach yields significant improvements over the ensemble approach. For systems with weak to moderate solute-solvent interactions, both the high-energy vibronic tail and the width of the spectra are in excellent agreement with experiments.
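    The combination rule can be sketched as attaching one zero-temperature vibronic shape function to every ensemble vertical excitation; classical broadening then emerges from the spread of the vertical energies. The Gaussian stick broadening and parameter names below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def combined_spectrum(omega, vertical_energies, vib_shifts, vib_weights,
                          width=0.02):
        """Dress every ensemble vertical excitation (eV) with the same
        zero-temperature vibronic stick spectrum (shifts/weights), broadening
        each stick with a narrow Gaussian of standard deviation `width` (eV)."""
        spec = np.zeros_like(omega)
        for e_vert in vertical_energies:                  # classical broadening
            for de, w in zip(vib_shifts, vib_weights):    # vibronic fine structure
                spec += w * np.exp(-0.5 * ((omega - (e_vert + de)) / width) ** 2)
        return spec / len(vertical_energies)
    ```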

  20. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and one devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction: deploying the drag brake early in the trajectory produces a large range correction, and the deployment time can be predefined according to the required range correction. On the other hand, the canard-based correction fuze is found to have a greater effect on projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  1. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and one devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction: deploying the drag brake early in the trajectory produces a large range correction, and the deployment time can be predefined according to the required range correction. On the other hand, the canard-based correction fuze is found to have a greater effect on projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.
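    A toy point-mass trajectory shows the drag-ring mechanism: switching to a larger drag coefficient at a chosen deployment time shortens the range, and earlier deployment yields a larger correction. All values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def range_with_brake(v0=800.0, elev_deg=45.0, t_deploy=np.inf,
                         cd=0.25, cd_brake=0.45, m=45.0, d=0.155, dt=0.01):
        """Point-mass trajectory sketch for a drag-ring range correction:
        after t_deploy the drag coefficient jumps from cd to cd_brake."""
        rho, g = 1.225, 9.81
        area = np.pi * (d / 2) ** 2
        vx = v0 * np.cos(np.radians(elev_deg))
        vy = v0 * np.sin(np.radians(elev_deg))
        x = y = t = 0.0
        while y >= 0.0:
            c = cd_brake if t >= t_deploy else cd
            v = np.hypot(vx, vy)
            f = 0.5 * rho * c * area * v / m     # drag deceleration per unit velocity
            vx -= f * vx * dt
            vy -= (f * vy + g) * dt
            x += vx * dt; y += vy * dt; t += dt
        return x

    print(range_with_brake())               # ballistic range, brake never deployed
    print(range_with_brake(t_deploy=10.0))  # earlier deployment -> larger range cut
    ```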

  2. Incorporating convection into one-dimensional solute redistribution during crystal growth from the melt I. The steady-state solution

    NASA Astrophysics Data System (ADS)

    Yen, C. T.; Tiller, W. A.

    1992-03-01

    A one-dimensional mathematical analysis is made of the redistribution of solute which occurs during crystal growth from a convected melt. In this analysis, the important contribution from lateral melt convection to one-dimensional solute redistribution analysis is taken into consideration via an annihilation/creation term in the one-dimensional solute transport equation. Calculations of solute redistribution under steady-state conditions have been carried out analytically. It is found that this new solute redistribution model overcomes several weaknesses that occur when applying the Burton, Prim and Slichter solute segregation equation (1953) in real melt growth situations. It is also found that, with this correction, the diffusion coefficients for solutes in liquid silicon are found to be in the same range as other liquid metal diffusion coefficients.
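    For reference, the Burton-Prim-Slichter (BPS) expression that the new model refines relates the effective segregation coefficient to a solute boundary-layer parameter; a one-line implementation makes the dependences explicit (example values are illustrative):

    ```python
    import numpy as np

    def k_eff_bps(k0, v_growth, delta, diff):
        """BPS effective segregation coefficient:
        k_eff = k0 / (k0 + (1 - k0) * exp(-V * delta / D)),
        with growth rate V, boundary-layer thickness delta, melt diffusivity D."""
        return k0 / (k0 + (1.0 - k0) * np.exp(-v_growth * delta / diff))

    # Example: k0 = 0.1, V = 5 um/s, delta = 100 um, D = 1e-8 m^2/s
    print(k_eff_bps(0.1, 5e-6, 100e-6, 1e-8))   # ~0.105
    ```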

  3. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

    In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include Intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical, and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach, however, in this situation not all required data are available. External data or further assumptions may be required for estimation. © The Author(s) 2015.

  4. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2 - 5.0 microns) and longwave (LW; 5 - 50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  5. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.

  6. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
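    To make the underlying flux-corrected transport mechanism concrete, below is a minimal one-dimensional sketch in the classic Boris-Book form: a donor-cell low-order step plus a limited antidiffusive flux. This is not the authors' scheme (they use fourth-order method-of-lines fluxes, RK4, corner-transport upwind, and improved extremum bounds); it only illustrates the limiting idea, on a periodic grid with unit spacing and positive advection speed:

    ```python
    import numpy as np

    def fct_advect(u, c, nsteps):
        """Boris-Book flux-corrected transport for du/dt + a du/dx = 0,
        with Courant number c = a*dt/dx, 0 < c < 1, periodic boundaries."""
        for _ in range(nsteps):
            f_low = c * u                        # donor-cell flux at face i+1/2
            f_high = (0.5 * c * (u + np.roll(u, -1))
                      - 0.5 * c**2 * (np.roll(u, -1) - u))   # Lax-Wendroff flux
            u_td = u - (f_low - np.roll(f_low, 1))           # low-order update
            a = f_high - f_low                   # antidiffusive flux
            s = np.sign(a)
            d_up = np.roll(u_td, -2) - np.roll(u_td, -1)     # u[i+2] - u[i+1]
            d_dn = u_td - np.roll(u_td, 1)                   # u[i]   - u[i-1]
            # limit so the corrected step creates no new extrema
            a_c = s * np.maximum(0.0,
                                 np.minimum.reduce([np.abs(a), s * d_up, s * d_dn]))
            u = u_td - (a_c - np.roll(a_c, 1))
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)  # square wave
    u1 = fct_advect(u0, c=0.5, nsteps=200)          # advected half the domain
    ```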

  7. [Treatment of children with acute diarrheal disease. Knowledge and attitudes of the health personnel].

    PubMed

    Mota-Hernández, F; Zamora-Escudero, G

    1992-10-01

    Diarrheal diseases are still one of the most frequent causes of death due to dehydration in children; lack of information regarding the adequate treatment of diarrhea is the main cause. The results of an inquiry sent to 620 physicians and nurses were analyzed to determine the knowledge and attitudes of health care workers residing in areas of Mexico with different diarrheal mortality. Less professional experience was correlated with greater knowledge of the etiology of diarrhea. More physicians than nurses answered correctly regarding the place of diarrheal diseases in child mortality and the correct use of antimicrobials, other drugs, and liquids to prevent and treat dehydration. Most workers did not know the disadvantages of hypertonic solutions for preventing dehydration or the importance of the oral solution's flavor. These results suggest that nurses should be included in clinical training by means of seminars on oral hydration therapy. Furthermore, it seems advisable to increase access to oral hydration solutions as well as the diffusion of their advantages.

  8. Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2012-06-01

    A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions of the Snyder-Mitchell model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulations of the nonlocal nonlinear Schrödinger equation.

  9. Centroid-moment tensor solutions for July-September 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2001-06-01

    Centroid-moment tensor (CMT) solutions are presented for 308 earthquakes that occurred during the third quarter of 2000. The solutions are obtained using corrections for aspherical earth structure represented by the whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [Acoustical Imaging, Vol. 19, Plenum Press, New York, 1992, p. 785]. A model of anelastic attenuation of Durek and Ekström [Bull. Seism. Soc. Am. 86 (1996) 144] is used to predict the decay of the waveforms.

  10. On the loop approximation in nucleon QCD sum rules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drukarev, E. G., E-mail: drukarev@thd.pnpi.spb.ru; Ryskin, M. G.; Sadovnikova, V. A.

    There was a general belief that the nucleon QCD sum rules which include only the quark loops and thus contain only the condensates of dimension d = 3 and d = 4 have only a trivial solution. We demonstrate that there is also a nontrivial solution. We show that it can be treated as the lowest order approximation to the solution which includes the higher terms of the Operator Product Expansion. Inclusion of the radiative corrections improves the convergence of the series.

  11. Comparison of observation level versus 24-hour average atmospheric loading corrections in VLBI analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; van Dam, T. M.

    2009-04-01

    Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates of the amplitude of the surface displacement indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, while in other studies the predicted atmospheric and observed geodetic surface displacements were averaged over 24 hours. A direct comparison of observation-level and 24-hour corrections has not been carried out for VLBI to determine whether one approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation-level corrections reduce the scatter of VLBI data more than a posteriori corrections, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects on the estimated TRF introduced when atmospheric loading is applied?

  12. Drug exposure in register-based research—An expert-opinion based evaluation of methods

    PubMed Central

    Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari

    2017-01-01

    Background In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of drug use periods produced by different methods. Methods Drug use periods were calculated with three fixed methods (time windows, assumption of one Defined Daily Dose (DDD) per day, and one tablet per day) and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. Expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer's disease). Two experts reviewed purchase histories and judged which methods had joined correct purchases and gave the correct duration for each of 1000 drug exposure periods. Results The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rate of evaluated correct solutions for each method class was observed for one tablet per day with a 180-day grace period (TAB_1_180, 43–73%) and one DDD per day with a 180-day grace period (1–41%). Time window methods produced at maximum only 11% correct solutions. The best performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
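    As a concrete illustration of the best-performing fixed method (one tablet per day with a 180-day grace period, TAB_1_180), here is a sketch that joins a sorted purchase history into exposure periods; the study's exact joining and stockpiling rules may differ:

    ```python
    from datetime import date, timedelta

    def tablet_exposure_periods(purchases, grace_days=180):
        """Fixed 'one tablet per day + grace period' sketch: each purchase of
        n tablets covers n days; purchases are joined into one use period when
        the gap to the next purchase does not exceed the grace period.
        `purchases` is a list of (purchase_date, n_tablets), sorted by date."""
        periods, start, end = [], None, None
        for day, n_tablets in purchases:
            if start is None:
                start, end = day, day + timedelta(days=n_tablets)
            elif day <= end + timedelta(days=grace_days):
                # within grace: extend cover (stockpile leftover tablets)
                end = max(end, day) + timedelta(days=n_tablets)
            else:
                periods.append((start, end))
                start, end = day, day + timedelta(days=n_tablets)
        if start is not None:
            periods.append((start, end))
        return periods

    # Hypothetical purchase history: two 100-tablet purchases, joined by grace
    print(tablet_exposure_periods([(date(2024, 1, 1), 100),
                                   (date(2024, 5, 1), 100)]))
    ```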

  13. Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.

    PubMed

    Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang

    2013-06-05

    Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples), with average degrees of substitution (DS) ranging from DS=0.45 to DS=1.55, were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as a vaporizable eluent system. The use of vaporizable NH4OAc allows future application of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent a_s of the relation [Formula: see text] was approximately 0.61, showing that NaCMCs exhibit an expanded coil conformation in solution. No systematic dependence of a_s on DS was observed. The dependence of molar mass on SEC elution volume for samples of different DS can be well described by a common calibration curve, which is advantageous, as it allows determining the molar masses of unknown samples with the same calibration curve irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined that allow converting a pullulan-based calibration curve into a NaCMC calibration using the broad calibration approach. The weight-average molar masses derived using the calibration curve so established agree closely with those determined by light scattering, confirming the accuracy of the correction factors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Measurement of microchannel fluidic resistance with a standard voltage meter.

    PubMed

    Godwin, Leah A; Deal, Kennon S; Hoepfner, Lauren D; Jackson, Louis A; Easley, Christopher J

    2013-01-03

    A simplified method for measuring the fluidic resistance (R(fluidic)) of microfluidic channels is presented, in which the electrical resistance (R(elec)) of a channel filled with a conductivity standard solution can be measured and directly correlated to R(fluidic) using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine R(fluidic) to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ∼40 to 600 kPa s mm⁻³) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic R(fluidic) measurements were possible in more complex microfluidic designs. Microchannel R(elec) was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. Copyright © 2012 Elsevier B.V. All rights reserved.
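    The electrical-to-hydraulic conversion can be sketched under a wide, shallow (parallel-plate) channel approximation; the paper's own equation and geometric correction factor may differ, and the example values are illustrative:

    ```python
    def r_fluidic_from_r_elec(r_elec, sigma, mu, h):
        """Estimate hydraulic resistance from the measured electrical resistance
        of a channel filled with a conductivity standard (sketch).
        For a wide, shallow rectangular channel (w >> h):
            R_elec    = L / (sigma * w * h)
            R_fluidic = 12 * mu * L / (w * h**3)
        so  R_fluidic = 12 * mu * sigma * R_elec / h**2.
        Units: r_elec [ohm], sigma [S/m], mu [Pa s], h [m] -> [Pa s m^-3]."""
        return 12.0 * mu * sigma * r_elec / h**2

    # Example: 1 MOhm channel, 1.0 S/m standard, water (1 mPa s), 20 um depth
    print(r_fluidic_from_r_elec(1e6, 1.0, 1e-3, 20e-6))
    # 3.0e13 Pa s m^-3 (= 30 kPa s mm^-3, inside the paper's validated range)
    ```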

  15. Measurement of Microchannel Fluidic Resistance with a Standard Voltage Meter

    PubMed Central

    Godwin, Leah A.; Deal, Kennon S.; Hoepfner, Lauren D.; Jackson, Louis A.; Easley, Christopher J.

    2012-01-01

    A simplified method for measuring the fluidic resistance (Rfluidic) of microfluidic channels is presented, in which the electrical resistance (Relec) of a channel filled with a conductivity standard solution can be measured and directly correlated to Rfluidic using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine Rfluidic to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ~40 – 600 kPa s mm−3) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic Rfluidic measurements were possible in more complex microfluidic designs. Microchannel Relec was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. PMID:23245901

  16. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

    An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying the dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD due to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements using a Victoreen White Water-RW3 tissue equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source to surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSD from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 x 5 and 40 x 40 cm2. The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be functions of collimator size; over the range of measurement, these factors varied by a maximum of 1% and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. Diode sensitivity increased with temperature. A maximum variation of 2.5% in the directional dependence of diode response was observed for angles of ±60°. In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve high accuracy of the dose delivered to the patient.
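    Operationally, the characterized factors enter as a product scaling the diode reading. A sketch with hypothetical names and values (not the factors measured in the paper):

    ```python
    def entrance_dose(reading, f_cal, c_ssd=1.0, c_field=1.0, c_wedge=1.0,
                      c_plate=1.0, c_temp=1.0):
        """Diode-based entrance dose sketch: the raw diode reading is scaled
        by a calibration factor and by the multiplicative correction factors
        characterized in the paper (SSD, collimator size, wedge, plate/tray,
        temperature)."""
        return reading * f_cal * c_ssd * c_field * c_wedge * c_plate * c_temp

    # Example: 1% SSD correction and 0.5% temperature correction together
    print(entrance_dose(100.0, 1.02, c_ssd=1.01, c_temp=1.005))  # ~103.5
    ```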

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartas-Fuentevilla, Roberto; Escalante, Alberto; Germán, Gabriel

    Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovering of the Coulomb law in the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized on the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims, we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields MF(T)Ψ̄Ψ in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb's law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb's law in the thin brane limit have the same form (up to a numerical factor) as far as the left-chiral massless fermion field is localized on the brane. Finally we compute the corrections to Coulomb's law for an arbitrarily thick brane scenario, which can be interpreted as 5D corrections to the photon mass. By performing consistent estimations with brane phenomenology, we found that the predicted corrections to the photon mass are far below the experimentally observed or astrophysically inferred upper bound, positively testing the viability of our tachyonic braneworld. Moreover, the 5D parameters that define these corrections possess the same order, providing naturalness to our model; however, a fine-tuning between them is needed in order to fit the corresponding upper bound on the photon mass.

  18. Control circuit maintains unity power factor of reactive load

    NASA Technical Reports Server (NTRS)

    Kramer, M.; Martinage, L. H.

    1966-01-01

    A circuit including feedback control elements automatically corrects the power factor of a reactive load. It maintains power supply efficiency as the load reactance changes and varies, by providing corrective error signals to the control windings of a power supply transformer.
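    The steady-state arithmetic such a feedback circuit performs continuously is the textbook reactive-power balance; a static sketch:

    ```python
    import math

    def correction_kvar(p_kw, pf_actual, pf_target=1.0):
        """Reactive power (kvar) a compensator must supply to raise the load
        power factor from pf_actual to pf_target:
            Q_c = P * (tan(phi1) - tan(phi2)),  phi = arccos(pf)."""
        phi1 = math.acos(pf_actual)
        phi2 = math.acos(pf_target)
        return p_kw * (math.tan(phi1) - math.tan(phi2))

    print(correction_kvar(100.0, 0.7))   # ~102 kvar to reach unity power factor
    ```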

  19. Experimental Verification of the Theory of Wind-Tunnel Boundary Interference

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Silverstein, Abe

    1935-01-01

    The results of an experimental investigation on the boundary-correction factor are presented in this report. The values of the boundary-correction factor from the theory, which at the present time is virtually completed, are given in the report for all conventional types of tunnels. With the isolation of certain disturbing effects, the experimental boundary-correction factor was found to be in satisfactory agreement with the theoretically predicted values, thus verifying the soundness and sufficiency of the theoretical analysis. The establishment of a considerable velocity distortion, in the nature of a unique blocking effect, constitutes a principal result of the investigation.

  20. Pupil size stability of the cubic phase mask solution for presbyopia

    NASA Astrophysics Data System (ADS)

    Almaguer, Citlalli; Acosta, Eva; Arines, Justo

    2018-01-01

    Presbyopia correction involves several types of study: lens design, clinical evaluation, and the development of objective metrics such as the visual Strehl ratio. Different contact lens designs have been proposed for presbyopia correction, but their performance depends on pupil diameter. We analyze the potential use of a nonsymmetrical element, a cubic phase mask (CPM), to develop a contact or intraocular lens whose performance is nearly insensitive to changes in pupil diameter. We show the through-focus optical transfer function of the proposed element for pupil diameters ranging from 3 to 7 mm. Additionally, we show images obtained through computation and experiment for a group of eye charts with different visual acuities. Our results show that a CPM shaped as 7.07 μm·(Z₃³ − Z₃⁻³) − 0.9 μm·Z₂⁰ is a good solution for a range of clear vision with a visual acuity of at least 0.1 logMAR from 0.4 to 6 m for pupil diameters in the 3- to 7-mm range. These results appear to be a good starting point for further development and study of this kind of CPM solution for presbyopia.

  1. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Marini, J.

    1979-01-01

    The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack, so it is important to develop a procedure that chooses the proper solution in a high percentage of cases. A test for ambiguity resolution is derived which is most powerful in the sense that it maximizes the probability of a correct decision. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution, the test reduces to choosing the solution that gives the smaller value of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison, with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
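    In the optimal case described, the decision rule reduces to comparing the weighted least-squares loss at the two conjugate solutions; a sketch with illustrative names (numpy residual vectors r and weight matrix W):

    ```python
    import numpy as np

    def resolve_ambiguity(res_a, res_b, w):
        """Most-powerful test in the optimal case: evaluate the weighted
        least-squares loss r^T W r for the two conjugate (mirror) solutions
        and keep the one with the smaller value."""
        loss_a = float(res_a @ w @ res_a)
        loss_b = float(res_b @ w @ res_b)
        return ("A", loss_a) if loss_a < loss_b else ("B", loss_b)
    ```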

  2. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report presents two numerical methods for computing fuel-optimal, low-thrust orbit transfers with large numbers of burns. The methods originate from observations of the extremal solutions of transfers with small numbers of burns: the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are of interest because they require only a low-thrust motor; however, we also find that the longer the time allowed, the more burns are required to satisfy optimality, which usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions with large numbers of burns. One is a homotopy method that corrects for the problems that arise when a solution requires a new burn or coast arc for optimality. The other simply patches long transfers together from smaller ones; an orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.

  3. Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis.

    PubMed

    Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary

    2015-06-01

    "Probable active syphilis," is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type that were likely reactive in the second, was applied to reported syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. To identify more accurate correction factors based on test type reported. Medline search using: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]. Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required a correction factor of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.

  4. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated region change the frequency content of the backscattered power spectrum due to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths than conventional windowing functions, yielding estimates within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor improved the accuracy of estimates at smaller gate lengths, it did not improve their precision relative to conventional windowing functions.

  5. Air-braked cycle ergometers: validity of the correction factor for barometric pressure.

    PubMed

    Finn, J P; Maxwell, B F; Withers, R T

    2000-10-01

    Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output corrections for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level were applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% of propagated error associated with errors in the measurement of (a) temperature, (b) relative humidity, (c) barometric pressure, and (d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the ±2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
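    The correction itself amounts to scaling the displayed power by the ratio of ambient to standard air density, with density computed from the three environmental factors. A sketch, assuming the fan law P = k·rho·omega³ and a Magnus-form vapour pressure; the constants and standard density are illustrative:

    ```python
    import math

    def air_density(p_pa, t_c, rh):
        """Moist-air density (kg/m^3) from barometric pressure (Pa),
        temperature (C) and relative humidity (0-1), using the Magnus form
        for the saturation vapour pressure."""
        t_k = t_c + 273.15
        p_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))  # Pa
        p_v = rh * p_sat
        return (p_pa - p_v) / (287.05 * t_k) + p_v / (461.5 * t_k)

    def corrected_power(p_displayed, p_pa, t_c, rh, rho_std=1.2041):
        """Air-braked ergometers dissipate P = k*rho*omega^3, but the display
        assumes standard density rho_std, so the true power is
        P = P_displayed * rho / rho_std (thinner air lowers the real load)."""
        return p_displayed * air_density(p_pa, t_c, rh) / rho_std

    # Example: ~1800 m altitude (81.5 kPa), 20 C, 50% relative humidity
    print(corrected_power(300.0, 81500.0, 20.0, 0.5))   # ~240 W
    ```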

  6. Study of the atmospheric effects on the radiation detected by the sensor aboard orbiting platforms (ERTS/LANDSAT). M.S. Thesis - October 1978; [Ribeirao Preto and Brasilia, Brazil]

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.

    1980-01-01

    The author has identified the following significant results. Multispectral scanner data for Brasilia was corrected for atmospheric interference using the LOWTRAN-3 computer program and the analytical solution of the radiative transfer equation. This improved the contrast between two natural targets and the corrected images of two different dates were more similar than the original ones. Corrected images of MSS data for Ribeirao Preto gave a classification accuracy for sugar cane about 10% higher as compared to the original images.

  7. The structure of aqueous sodium hydroxide solutions: a combined solution x-ray diffraction and simulation study.

    PubMed

    Megyes, Tünde; Bálint, Szabolcs; Grósz, Tamás; Radnai, Tamás; Bakó, Imre; Sipos, Pál

    2008-01-28

    To determine the structure of aqueous sodium hydroxide solutions, results obtained from x-ray diffraction and computer simulation (molecular dynamics and Car-Parrinello) have been compared. The capabilities and limitations of the methods in describing the solution structure are discussed. For the solutions studied, diffraction methods were found to perform very well in describing the hydration spheres of the sodium ion and yield structural information on the anion's hydration structure. Classical molecular dynamics simulations were not able to correctly describe the bulk structure of these solutions. However, Car-Parrinello simulation proved to be a suitable tool in the detailed interpretation of the hydration sphere of ions and bulk structure of solutions. The results of Car-Parrinello simulations were compared with the findings of diffraction experiments.

  8. Benchmarking of software tools for optical proximity correction

    NASA Astrophysics Data System (ADS)

    Jungmann, Angelika; Thiele, Joerg; Friedrich, Christoph M.; Pforr, Rainer; Maurer, Wilhelm

    1998-06-01

    The point when optical proximity correction (OPC) will become a routine procedure for every design is not far away. For such daily use, the requirements for an OPC tool go far beyond the principal functionality of OPC, which has been proven by a number of approaches and is well documented in the literature. In this paper we first discuss the requirements for a production OPC tool. Against these requirements, a benchmark was performed with three different OPC tools available on the market (OPRX from TVT, OPTISSIMO from aiss and PROTEUS from TMA). Each of these tools uses a different approach to perform the correction (rules, simulation or model). To assess the accuracy of the correction, a test chip was fabricated which contains corrections done by each software tool. The advantages and weaknesses of the several solutions are discussed.

  9. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result for a typical dicot root is also provided to demonstrate the method's promising potential for applications in biological imaging.

  10. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    PubMed

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power, a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law, are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  11. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
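    A minimal sketch of how such a correction is applied in practice: compute the local contrast K = sigma/mean over small windows, then rescale with the measured system factor. The value of beta below is illustrative; in practice it is calibrated on a static, fully developed speckle pattern:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast_map(img, win=7, beta=0.6):
        """Local speckle contrast K = sigma/mean over win x win windows,
        with a linear system-factor correction for spatial averaging:
        K_corrected^2 = K_measured^2 / beta. Assumes positive intensities."""
        img = img.astype(float)
        mean = uniform_filter(img, win)
        mean_sq = uniform_filter(img**2, win)
        k2 = np.clip(mean_sq - mean**2, 0.0, None) / mean**2
        return np.sqrt(k2 / beta)
    ```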

  12. Casimir force in a Lorentz violating theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Mariana; Turan, Ismail

    2006-08-01

    We study the effects of the minimal extension of the standard model including Lorentz violation on the Casimir force between two parallel conducting plates in the vacuum. We provide explicit solutions for the electromagnetic field using the scalar field analogy, for both the cases in which the Lorentz violating terms come from the CPT-even or CPT-odd terms. We also calculate the effects of the Lorentz violating terms for a fermion field between two parallel conducting plates and analyze the modifications of the Casimir force due to the modifications of the Dirac equation. In all cases under consideration, the standard formulas for the Casimir force are modified by either multiplicative or additive correction factors, the latter case exhibiting a different dependence on the distance between the plates.
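
    For orientation, the unmodified result being corrected is the ideal-plate Casimir pressure; the Lorentz-violating cases described above then enter schematically as either a multiplicative or an additive modification. Here kappa and Delta P are placeholders standing in for the paper's model-specific coefficients, not its actual expressions:

      \[
      P_0(d) = -\frac{\pi^2 \hbar c}{240\, d^4}, \qquad
      P(d) = (1 + \kappa)\, P_0(d) \quad \text{or} \quad P(d) = P_0(d) + \Delta P(d).
      \]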

  13. Hansen solubility parameters for polyethylene glycols by inverse gas chromatography.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2006-11-03

    Inverse gas chromatography (IGC) has been applied to determine the solubility parameter and its components for nonionic surfactants, polyethylene glycols (PEG) of different molecular weights. The Flory-Huggins interaction parameter (chi) and the solubility parameter (delta(2)) were calculated according to the DiPaola-Baranyi and Guillet method from experimentally collected retention data for a series of carefully selected test solutes. Hansen's three-dimensional solubility parameter concept was applied to determine the components (delta(d), delta(p), delta(h)) of the corrected solubility parameter (delta(T)). Both the molecular weight and the measurement temperature influence the solubility parameters estimated from the slope and the intercept, as well as the total solubility parameter. The solubility parameters calculated from the intercept are lower than those calculated from the slope. Temperature and structural dependences of the entropic factor (chi(S)) are presented and discussed.
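
    A minimal sketch of the two relations the abstract relies on: the Hansen decomposition of the total solubility parameter, and a common empirical Flory-Huggins estimate that includes an entropic term. The numerical values are illustrative, not the PEG data of the paper.

      import math

      def hansen_total(dd, dp, dh):
          # Hansen decomposition: delta_T^2 = delta_d^2 + delta_p^2 + delta_h^2 (MPa^0.5).
          return math.sqrt(dd**2 + dp**2 + dh**2)

      def flory_huggins_chi(d1, d2, V1, T, chi_s=0.34):
          # Common empirical form: chi = V1 (d1 - d2)^2 / (R T) + chi_S, with V1 in
          # cm^3/mol and deltas in MPa^0.5 (so (d1-d2)^2 is in J/cm^3); chi_S is the
          # entropic contribution discussed above (0.34 is a conventional default).
          R = 8.314  # J / (mol K)
          return V1 * (d1 - d2)**2 / (R * T) + chi_s

      print(hansen_total(17.0, 10.0, 9.0))                 # ~21.7 MPa^0.5, illustrative
      print(flory_huggins_chi(22.0, 20.0, 100.0, 298.15))  # ~0.50, illustrative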

  14. Determination of ocean tides from the first year of TOPEX/POSEIDON altimeter measurements

    NASA Technical Reports Server (NTRS)

    Ma, X. C.; Shum, C. K.; Eanes, R. J.; Tapley, B. D.

    1994-01-01

    An improved geocentric global ocean tide model has been determined using 1 year of TOPEX/POSEIDON altimeter measurements to provide corrections to the Cartwright and Ray (1991) model (CR91). The corrections were determined on a 3 deg x 3 deg grid using both the harmonic analysis method and the response method. The two approaches produce similar solutions. The effect on the tide solution of simultaneously adjusting radial orbit correction parameters using altimeter measurements was examined. Four semidiurnal (N(sub 2), M(sub 2), S(sub 2), and K(sub 2)), four diurnal (Q(sub 1), O(sub 1), P(sub 1), and K(sub 1)), and three long-period (S(sub sa), M(sub m), and M(sub f)) constituents, along with the variations at the annual frequency, were included in the harmonic analysis solution. The observed annual variations represent the first global measurement describing accurate seasonal changes of the ocean during an El Nino year. The corrections to the M(sub 2) constituent have a root mean square (RMS) of 3.6 cm and display a clear banding pattern with regional highs and lows reaching 8 cm. The improved tide model reduces the weighted altimeter crossover residual from 9.8 cm RMS, when the CR91 tide model is used, to 8.2 cm RMS. Comparison of the improved model to pelagic tidal constants determined from 80 tide gauges gives RMS differences of 2.7 cm for M(sub 2) and 1.7 cm for K(sub 1). The comparable values when the CR91 model is used are 3.9 cm and 2.0 cm, respectively. Examination of TOPEX/POSEIDON sea level anomaly variations using the new tide model further confirms that the tide model has been improved.
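
    A minimal sketch of the harmonic-analysis step described above: fit in-phase and quadrature amplitudes at known constituent frequencies by least squares, then convert to amplitude and phase. Only M2 and K1 are included, the sea-level series is synthetic, and hourly sampling is an assumption; the paper's grid-wise processing and response-method machinery are not reproduced.

      import numpy as np

      freqs = {"M2": 1 / 12.4206, "K1": 1 / 23.9345}   # cycles per hour

      def harmonic_fit(t, h, freqs):
          # Least-squares fit h(t) ~ mean + sum_k [a_k cos(w_k t) + b_k sin(w_k t)].
          cols = [np.ones_like(t)]
          for f in freqs.values():
              w = 2 * np.pi * f * t
              cols += [np.cos(w), np.sin(w)]
          A = np.column_stack(cols)
          x, *_ = np.linalg.lstsq(A, h, rcond=None)
          out = {}
          for i, name in enumerate(freqs):
              a, b = x[1 + 2 * i], x[2 + 2 * i]
              out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))  # amplitude, phase
          return out

      t = np.arange(0.0, 24 * 90, 1.0)                 # hourly samples over 90 days
      h = 0.8 * np.cos(2 * np.pi * t / 12.4206 - 0.5) + 0.05 * np.random.randn(t.size)
      print(harmonic_fit(t, h, freqs))                 # recovers ~0.8 m, ~28.6 deg for M2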

  15. Predicting membrane flux decline from complex mixtures using flow-field flow fractionation measurements and semi-empirical theory.

    PubMed

    Pellegrino, J; Wright, S; Ranvill, J; Amy, G

    2005-01-01

    Flow field-flow fractionation (Fl-FFF) is an idealization of the crossflow membrane filtration process in that (1) the filtration flux and crossflow velocity are constant from beginning to end of the device, (2) the process operates under a relatively well-defined laminar-flow hydrodynamic condition, and (3) the solutes are introduced as a pulse input that spreads due to interactions with each other and the membrane in the dilute-solution limit. We have investigated the potential for relating Fl-FFF measurements to membrane fouling. An advection-dispersion transport model was used to provide 'ideal' (defined as spherical, non-interacting solutes) solute residence time distributions (RTDs) for comparison with 'real' RTDs obtained experimentally at different cross-field velocities and solution ionic strengths. An RTD moment analysis based on a particle diameter probability density function was used to extract "effective" characteristic properties, rather than uniquely defined characteristics, of the standard solute mixture. A semi-empirical, unsteady-state flux decline model was developed that uses these solute property parameters. Three modes of flux decline are included: (1) concentration polarization, (2) cake buildup, and (3) adsorption on/in pores. We have used this model to test the hypothesis that an analysis of a residence time distribution using Fl-FFF can describe 'effective' solute properties or indices that can be related to membrane flux decline in crossflow membrane filtration. Constant-flux filtration studies included changes of transport hydrodynamics (ratios of solvent flux to solute back diffusion, J/k), solution ionic strength, and feed water composition for filtration using a regenerated cellulose ultrafiltration membrane. Tests of the modeling hypothesis were compared with experimental results from the filtration measurements using several correction parameters based on the mean and variance of the solute RTDs. The corrections used to modify the boundary layer mass transfer coefficient and the specific resistance of the cake or adsorption layers demonstrated that RTD analysis is a potentially useful technique for describing colloid properties, but it requires improvements.
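
    The moment analysis referred to above reduces, at its core, to the first moment (mean residence time) and second central moment (variance) of the normalized elution curve. A minimal sketch with a synthetic peak; the integration scheme and test data are illustrative, not the paper's protocol.

      import numpy as np

      def rtd_moments(t, c):
          # Normalize c(t) to a density E(t) on a uniform grid, then return (mean, variance).
          dt = t[1] - t[0]
          E = c / (np.sum(c) * dt)
          mean = np.sum(t * E) * dt
          var = np.sum((t - mean)**2 * E) * dt
          return mean, var

      t = np.linspace(0, 60, 601)              # minutes
      c = np.exp(-0.5 * ((t - 20) / 4.0)**2)   # synthetic Gaussian elution peak
      print(rtd_moments(t, c))                 # ~ (20, 16)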

  16. The thermochemistry of london dispersion-driven transition metal reactions: getting the 'right answer for the right reason'.

    PubMed

    Hansen, Andreas; Bannwarth, Christoph; Grimme, Stefan; Petrović, Predrag; Werlé, Christophe; Djukic, Jean-Pierre

    2014-10-01

    Reliable thermochemical measurements and theoretical predictions for reactions involving large transition metal complexes, in which long-range intramolecular London dispersion interactions contribute significantly to their stabilization, are still a challenge, particularly for reactions in solution. As an illustrative and chemically important example, two reactions are investigated in which a large dipalladium complex is quenched by bulky phosphane ligands (triphenylphosphane and tricyclohexylphosphane). Reaction enthalpies and Gibbs free energies were measured by isothermal titration calorimetry (ITC) and theoretically 'back-corrected' to yield 0 K gas-phase reaction energies (ΔE). It is shown that the Gibbs free energy of solvation calculated with continuum models represents the largest source of error in theoretical thermochemistry protocols. The ('back-corrected') experimental reaction energies were used to benchmark (dispersion-corrected) density functional and wave function theory methods. In particular, we investigated whether the atom-pairwise D3 dispersion correction is also accurate for transition metal chemistry, and how accurately recently developed local coupled-cluster methods describe the important long-range electron correlation contributions. Both modern dispersion-corrected density functionals (e.g., PW6B95-D3(BJ) or B3LYP-NL) and the now-feasible DLPNO-CCSD(T) calculations agree with the 'experimental' gas-phase reference value. The remaining uncertainties of 2-3 kcal mol(-1) can essentially be attributed to the solvation models. Hence, the future for accurate theoretical thermochemistry of large transition metal reactions in solution is very promising.
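
    The 'back-correction' has a simple schematic structure: the measured solution-phase Gibbs free energy of reaction is stripped of the thermostatistical (rigid-rotor/harmonic-oscillator) and solvation contributions to recover a 0 K gas-phase electronic reaction energy. The partitioning below is a generic form of this thermodynamic cycle, not necessarily the paper's exact protocol:

      \[
      \Delta E \;=\; \Delta G_{\mathrm{soln}}(T) \;-\; \Delta G^{T}_{\mathrm{RRHO}} \;-\; \Delta\delta G_{\mathrm{solv}}(T).
      \]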

  17. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    PubMed

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert

    2012-01-01

    The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method, but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass, because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between the consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
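
    A minimal sketch of fitting the saturating (exponential) model for correction factor 1, i.e. consumed prey mass per excreted scat as a function of prey body mass, with an asymptote at large prey. The data points, starting values, and the exact functional form are illustrative assumptions, not the paper's cheetah data:

      import numpy as np
      from scipy.optimize import curve_fit

      def cf1(m, a, b):
          # Consumed prey mass per scat: rises with prey body mass m, asymptote a.
          return a * (1.0 - np.exp(-b * m))

      prey_mass = np.array([2.0, 15.0, 40.0, 70.0, 200.0])   # kg, illustrative
      mass_per_scat = np.array([0.4, 1.1, 1.6, 1.8, 1.9])    # kg, illustrative

      (a, b), _ = curve_fit(cf1, prey_mass, mass_per_scat, p0=(2.0, 0.05))
      print(a, b)   # asymptote and rate constant of the fitted saturation curve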

  18. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method, but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass, because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between the consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373

  19. Non-ideal Solution Thermodynamics of Cytoplasm

    PubMed Central

    Ross-Rodriguez, Lisa U.; McGann, Locksley E.

    2012-01-01

    A quantitative description of the non-ideal solution thermodynamics of the cytoplasm of a living mammalian cell is critically necessary in the mathematical modeling of cryobiology and desiccation, and in other fields where the passive osmotic response of a cell plays a role. In the osmotic virial equation of solution thermodynamics, the quadratic correction to linear ideal, dilute-solution theory is described by the second osmotic virial coefficient. Herein we report, for the first time, intracellular solution second osmotic virial coefficients for four cell types [TF-1 hematopoietic stem cells, human umbilical vein endothelial cells (HUVEC), porcine hepatocytes, and porcine chondrocytes] and further report second osmotic virial coefficients indistinguishable from zero (for the concentration range studied) for human hepatocytes and mouse oocytes. PMID:23840923
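
    For reference, the truncated osmotic virial equation underlying the reported coefficients expresses osmolality as a molality series, with the second osmotic virial coefficient B_2 supplying the quadratic correction to ideal, dilute behavior. This is a schematic single-solute form; the paper's exact conventions for units and mixed intracellular solutes may differ:

      \[
      \pi(m_2) \;=\; m_2 \;+\; B_2\, m_2^{\,2} \;+\; \cdots
      \]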

  20. Asymptotic solution of the turbulent mixing layer for velocity ratio close to unity

    NASA Technical Reports Server (NTRS)

    Higuera, F. J.; Jimenez, J.; Linan, A.

    1996-01-01

    The equations describing the first two terms of an asymptotic expansion of the solution of the planar turbulent mixing layer for values of the velocity ratio close to one are obtained. The first term of this expansion is the solution of the well-known time-evolving problem and the second, which includes the effects of the increase of the turbulence scales in the stream-wise direction, obeys a linear system of equations. Numerical solutions of these equations for a two-dimensional reacting mixing layer show that the correction to the time-evolving solution may explain the asymmetry of the entrainment and the differences in product generation observed in flip experiments.
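
    Schematically, the expansion described above can be written in terms of a small parameter built from the velocity ratio, with the leading term the time-evolving solution and the first correction obeying linear equations. The notation and scalings here are illustrative, not the paper's:

      \[
      \mathbf{u} \;=\; \mathbf{u}^{(0)}(y,t) \;+\; \varepsilon\,\mathbf{u}^{(1)}(x,y,t) \;+\; O(\varepsilon^{2}),
      \qquad \varepsilon \;=\; \frac{U_1-U_2}{U_1+U_2}.
      \]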
