Sample records for calculation methods implications

  1. Strategic environmental noise mapping: methodological issues concerning the implementation of the EU Environmental Noise Directive and their policy implications.

    PubMed

    Murphy, E; King, E A

    2010-04-01

    This paper explores methodological issues and policy implications concerning the implementation of the EU Environmental Noise Directive (END) across Member States. Methodologically, the paper focuses on two key thematic issues relevant to the Directive: (1) calculation methods and (2) mapping methods. For (1), the paper focuses, in particular, on how differing calculation methods influence noise prediction results as well as the value of the EU noise indicator Lden and its associated implications for comparability of noise data across EU states. With regard to (2), emphasis is placed on identifying the issues affecting strategic noise mapping, estimating population exposure, noise action planning and dissemination of noise mapping results to the general public. The implication of these issues for future environmental noise policy is also examined.
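
    The Lden indicator at the centre of the comparability issue is fixed by Annex I of the Directive; a minimal sketch of that definition (the input levels below are illustrative only):

```python
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """EU day-evening-night noise indicator (dB), per END Annex I:
    evening levels carry a +5 dB penalty and night levels +10 dB,
    energy-averaged over the 12 h / 4 h / 8 h split of the day."""
    return 10.0 * math.log10(
        (12.0 * 10.0 ** (l_day / 10.0)
         + 4.0 * 10.0 ** ((l_evening + 5.0) / 10.0)
         + 8.0 * 10.0 ** ((l_night + 10.0) / 10.0)) / 24.0
    )

# Identical 60 dB levels in all three periods still give Lden > 60
# because of the evening and night penalties.
print(round(l_den(60.0, 60.0, 60.0), 1))  # -> 66.4
```

Because the penalties enter before the energy average, two calculation methods that disagree slightly on the period levels can disagree more on Lden, which is one mechanism behind the comparability problem the paper discusses.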

  2. Implications to Postsecondary Faculty of Alternative Calculation Methods of Gender-Based Wage Differentials.

    ERIC Educational Resources Information Center

    Hagedorn, Linda Serra

    1998-01-01

    A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…

  3. Monte Carlo Techniques for Calculations of Charge Deposition and Displacement Damage from Protons in Visible and Infrared Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Reed, Robert; Fodness, Bryan; Jordan, Tom; Pickel, Jim; Xapsos, Michael; Burke, Ed

    2004-01-01

    This slide presentation examines motivation for Monte Carlo methods, charge deposition in sensor arrays, displacement damage calculations, and future work. The discussion of charge deposition in sensor arrays includes Si active pixel sensor (APS) arrays and LWIR HgCdTe FPAs. The discussion of displacement damage calculations includes nonionizing energy loss (NIEL), HgCdTe NIEL calculation results including variance, and implications for damage in HgCdTe detector arrays.

  4. Linear Transformation Method for Multinuclide Decay Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding Yuan

    2010-12-29

    A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N0, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N0 is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
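
    For a two-member chain the lower triangular F(t) can be written down directly from the Bateman solution; a minimal sketch (the decay constants are illustrative, not from the report):

```python
import math

def decay_matrix(lam1: float, lam2: float, t: float):
    """Lower-triangular F(t) for the two-member chain 1 -> 2 (Bateman
    solution, lam1 != lam2). Its elements depend only on t and the
    decay constants, never on the initial values N0."""
    f11 = math.exp(-lam1 * t)
    f22 = math.exp(-lam2 * t)
    f21 = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return [[f11, 0.0], [f21, f22]]

def apply(F, n0):
    """N(t) = F(t) N0 as a plain matrix-vector product."""
    return [sum(F[i][j] * n0[j] for j in range(2)) for i in range(2)]

F = decay_matrix(0.5, 0.1, t=2.0)
print(apply(F, [1000.0, 0.0]))  # atoms of parent and daughter after t = 2
```

Because F(t) does not depend on N0, the same matrix can be reused for any initial inventory, which is the practical advantage the abstract alludes to.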

  5. Molecular Mechanics: Illustrations of Its Application.

    ERIC Educational Resources Information Center

    Cox, Philip J.

    1982-01-01

    The application of molecular mechanics (a nonquantum mechanical method for solving problems concerning molecular geometries) to calculate force fields for n-butane and cyclohexane is discussed. Implications regarding the stable conformations of the example molecules are also discussed. (Author/SK)

  6. An Investigation of Milk Sugar.

    ERIC Educational Resources Information Center

    Smith, Christopher A.; Dawson, Maureen M.

    1987-01-01

    Describes an experiment to identify lactose and estimate the concentration of lactose in a sample of milk. Gives a background of the investigation. Details the experimental method, results and calculations. Discusses the implications of the experiment to students. Suggests further experiments using the same technique used in…

  7. Neutrinos and the age of the universe

    NASA Technical Reports Server (NTRS)

    Symbalisty, E. M. D.; Yang, J.; Schramm, D. N.

    1980-01-01

    The age of the universe should be calculable by independent methods with similar results. Previous calculations using nucleochronometers, globular clusters and dynamical measurements coupled with Friedmann models and nucleosynthesis constraints have given different values of the age. A consistent age is reported, whose implications for the constituent mass density are very interesting and are affected by the existence of a third neutrino flavor, and by allowing the possibility that neutrinos may have a non-zero rest mass.

  8. Dropout policies and trends for students with and without disabilities.

    PubMed

    Kemp, Suzanne E

    2006-01-01

    Students with and without disabilities are dropping out of school at an alarming rate. However, the precise extent of the problem remains elusive because individual schools, school districts, and state departments of education often use different definitional criteria and calculation methods. In addition, the specific reasons why students drop out continue to be speculative, and minimal research exists to validate current dropout prevention programs for students with and without disabilities. This study examined methods secondary school principals used to calculate dropout rates, reasons they believed students dropped out of school, and what prevention programs were being used for students with and without disabilities. Results indicated that school districts used calculation methods that minimized dropout rates, students with and without disabilities dropped out for similar reasons, and few empirically validated prevention programs were being implemented. Implications for practice and directions for future research are discussed.

  9. Relationships between thermal maturity indices calculated using Arrhenius equation and Lopatin method: implications for petroleum exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, D.A.

    1988-02-01

    Thermal maturity can be calculated with time-temperature indices (TTI) based on the Arrhenius equation using kinetics applicable to a range of Types II and III kerogens. These TTIs are compared with TTI calculations based on the Lopatin method and are related theoretically (and empirically via vitrinite reflectance) to the petroleum-generation window. The TTIs for both methods are expressed mathematically as integrals of temperature combined with variable linear heating rates for selected temperature intervals. Heating rates control the thermal-maturation trends of buried sediments. Relative to Arrhenius TTIs, Lopatin TTIs tend to underestimate thermal maturity at high heating rates and overestimate it at low heating rates. Complex burial histories applicable to a range of tectonic environments illustrate the different exploration decisions that might be made on the basis of independent results of these two thermal-maturation models. 15 figures, 8 tables.
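
    The two indices can be compared on a synthetic burial history; a sketch with illustrative kinetics (the A and Ea below are assumed values for a generic kerogen, not Wood's calibrated parameters):

```python
import math

R = 8.314  # J/(mol K)

def arrhenius_tti(path, A=1.0e13, Ea=2.18e5):
    """Integrate A*exp(-Ea/RT) over a burial history given as
    (time_Myr, temp_C) points; A and Ea are illustrative first-order
    kinetics, not the values used in the paper."""
    tti = 0.0
    for (t0, T0), (t1, T1) in zip(path, path[1:]):
        Tm = (T0 + T1) / 2.0 + 273.15          # mean absolute temperature
        tti += A * math.exp(-Ea / (R * Tm)) * (t1 - t0)
    return tti

def lopatin_tti(path):
    """Lopatin TTI: reaction rate assumed to double every 10 C,
    indexed so the 100-110 C interval carries weight 2**0."""
    tti = 0.0
    for (t0, T0), (t1, T1) in zip(path, path[1:]):
        Tm = (T0 + T1) / 2.0
        tti += 2.0 ** ((Tm - 105.0) / 10.0) * (t1 - t0)
    return tti

# Linear heating from 20 C to 140 C over 60 Myr, sampled every 1 Myr.
path = [(t, 20.0 + 2.0 * t) for t in range(61)]
print(lopatin_tti(path) > 15.0)  # past the conventional oil-window onset
```

The Lopatin doubling rule is only an approximation to the Arrhenius exponential, and the mismatch grows with heating rate, which is the divergence the abstract describes.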

  10. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans-Ct configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low-temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) The excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) For most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data. However, the improved empirical force field yields better agreement with the experimental frequencies as compared with a standard AMBER-type force field; (3) The empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) Disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) Both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated.
These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  11. Emissions from prescribed fire in temperate forest in south-east Australia: implications for carbon accounting

    NASA Astrophysics Data System (ADS)

    Possell, M.; Jenkins, M.; Bell, T. L.; Adams, M. A.

    2014-09-01

    We estimated emissions of carbon, as CO2-equivalents, from planned fire in four sites in a south-eastern Australian forest. Emission estimates were calculated using measurements of fuel load and carbon content of different fuel types, before and after burning, and determination of fuel-specific emission factors. Median estimates of emissions for the four sites ranged from 20 to 139 T CO2-e ha-1. Variability in estimates was a consequence of different burning efficiencies of each fuel type from the four sites. Higher emissions resulted from more fine fuel (twigs, decomposing matter, near-surface live and leaf litter) or coarse woody debris (CWD; > 25 mm diameter) being consumed. In order to assess the effect of estimating emissions when only a few fuel variables are known, Monte Carlo simulations were used to create seven scenarios where input parameter values were replaced by probability density functions. Calculation methods were: (1) all measured data were constrained between measured maximum and minimum values for each variable, (2) as for (1) except the proportion of carbon within a fuel type was constrained between 0 and 1, (3) as for (2) but losses of mass caused by fire were replaced with burning efficiency factors constrained between 0 and 1; and (4) emissions were calculated using default values in the Australian National Greenhouse Accounts (NGA), National Inventory Report 2011, as appropriate for our sites. Effects of including CWD in calculations were assessed for calculation Method 1, 2 and 3 but not for Method 4 as the NGA does not consider this fuel type. Simulations demonstrate that the probability of estimating true median emissions declines strongly as the amount of information available declines. Including CWD in scenarios increased uncertainty in calculations because CWD is the most variable contributor to fuel load. Inclusion of CWD in scenarios generally increased the amount of carbon lost. 
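
    The scenario construction can be sketched as a simple Monte Carlo over assumed parameter densities; the bounds below are hypothetical stand-ins for the measured site data, and uniform densities stand in for the paper's fitted distributions:

```python
import random
import statistics

def sampled_emissions(n_draws=20000, seed=1):
    """Monte Carlo sketch of per-hectare fire emissions (CO2-e):
    fuel load, carbon fraction, and burning efficiency are drawn from
    uniform densities between assumed bounds (illustrative only)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        fuel = rng.uniform(10.0, 40.0)    # fine-fuel load per hectare
        carbon = rng.uniform(0.45, 0.55)  # carbon fraction of dry fuel
        burnt = rng.uniform(0.0, 1.0)     # burning efficiency
        draws.append(fuel * carbon * burnt * 44.0 / 12.0)  # C -> CO2-e
    return draws

draws = sampled_emissions()
print(round(statistics.median(draws), 1))
```

Widening any one density (as happens when a variable is unmeasured and must span its physical range, like burning efficiency in scenario 3) widens the whole output distribution, which is why the probability of hitting the true median falls as information declines.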
We discuss implications of these simulations and how emissions from prescribed burns in temperate Australian forests could be improved.

  12. Exact Solution of a Faraday's Law Problem that Includes a Nonlinear Term and Its Implication for Perturbation Theory.

    ERIC Educational Resources Information Center

    Fulcher, Lewis P.

    1979-01-01

    Presents an exact solution to the nonlinear Faraday's law problem of a rod sliding on frictionless rails with resistance. Compares the results with perturbation calculations based on the methods of Poisson and Poincaré and of Kryloff and Bogoliuboff. (Author/GA)

  13. Quantum chemical determination of Young's modulus of lignin. Calculations on a beta-O-4' model compound.

    PubMed

    Elder, Thomas

    2007-11-01

    The calculation of Young's modulus of lignin has been examined by subjecting a dimeric model compound to strain, coupled with the determination of energy and stress. The computational results, derived from quantum chemical calculations, are in agreement with available experimental results. Changes in geometry indicate that modifications in dihedral angles occur in response to linear strain. At larger levels of strain, bond rupture is evidenced by abrupt changes in energy, structure, and charge. Based on the current calculations, the bond scission may be occurring through a homolytic reaction between aliphatic carbon atoms. These results may have implications in the reactivity of lignin especially when subjected to processing methods that place large mechanical forces on the structure.
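
    The modulus extraction from an energy-strain curve can be sketched with a finite difference; toy harmonic data stand in here for the quantum-chemical energies, and the volume is an assumed molecular volume:

```python
def youngs_modulus(energies, strains, volume):
    """Young's modulus from the curvature of the strain-energy curve at
    zero strain, via a central finite difference:
    E = (1/V) d2U/de2. Energies in J and volume in m^3 give Pa."""
    # assumes three symmetric points: strains = (-h, 0, +h)
    h = strains[2] - strains[1]
    d2u = (energies[0] - 2.0 * energies[1] + energies[2]) / h ** 2
    return d2u / volume

# Harmonic toy data, U = 0.5 * k * e^2 with an assumed stiffness k.
k = 2.0e-18                      # J per unit strain^2 (illustrative)
pts = (-0.01, 0.0, 0.01)
U = [0.5 * k * e ** 2 for e in pts]
E = youngs_modulus(U, pts, volume=1.0e-28)  # assumed volume in m^3
print(E)  # recovers approximately k / volume = 2e10 Pa (20 GPa)
```

On real quantum-chemical data the abrupt energy changes at bond rupture noted in the abstract would appear as a breakdown of this quadratic fit at large strain.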

  14. Comparison of the Young-Laplace law and finite element based calculation of ventricular wall stress: implications for postinfarct and surgical ventricular remodeling.

    PubMed

    Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B

    2011-01-01

    Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created from the remaining 5 animals using magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is very much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace is very different from stress in the fiber and cross-fiber direction calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling.
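
    The gap between the thin-wall and thick-wall estimates can be illustrated for a pressurized sphere; the "modified" formula below is one common force-balance correction and may differ from the paper's exact formulation:

```python
def laplace_stress(p, r_inner, h):
    """Thin-wall Young-Laplace estimate for a sphere: s = P*r/(2h),
    valid only when the wall thickness h is much less than r."""
    return p * r_inner / (2.0 * h)

def thick_wall_stress(p, r_inner, h):
    """Mean circumferential stress from a force balance on a half
    sphere without the thin-wall assumption:
    P*pi*ri^2 = s*pi*(ro^2 - ri^2)."""
    r_outer = r_inner + h
    return p * r_inner ** 2 / (r_outer ** 2 - r_inner ** 2)

# Illustrative end-diastolic numbers: P = 2 kPa, ri = 25 mm, h = 10 mm.
p, ri, h = 2.0e3, 0.025, 0.010
ratio = laplace_stress(p, ri, h) / thick_wall_stress(p, ri, h)
print(ratio)  # thin-wall overestimates by 1 + h/(2*ri) = 1.2 here
```

For a wall as thick as the left ventricle's the discrepancy is material, and neither scalar formula resolves stress into fiber and cross-fiber directions, which is why the FE method remains necessary.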

  15. Towards a converged barrier height for the entrance channel transition state of the N( 2D) + CH 4 reaction and its implication for the chemistry in Titan's atmosphere

    NASA Astrophysics Data System (ADS)

    Ouk, Chanda-Malis; Zvereva-Loëte, Natalia; Bussery-Honvault, Béatrice

    2011-10-01

    The N( 2D) + CH 4 reaction appears to be a key reaction for the chemistry of Titan's atmosphere, opening the door to nitrile formation as recently observed by the Cassini-Huygens mission. Faced with the controversy concerning whether a potential barrier exists for this reaction, we have carried out accurate ab initio calculations by means of the multi-state multi-reference configuration interaction (MS-MR-SDCI) method. These calculations have been partially corrected for size-consistency errors (SCE) by Davidson, Pople or AQCC corrections. We suggest a barrier height of 3.86 ± 0.84 kJ/mol, including ZPE, for the entrance transition state, in good agreement with the experimental value. Its implication for Titan's atmospheric chemistry is discussed.

  16. Modified Runge-Kutta methods for solving ODEs. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Vanvu, T.

    1981-01-01

    A class of Runge-Kutta formulas is examined which permit the calculation of an accurate solution anywhere in the interval of integration. This is used in a code which seldom has to reject a step; rather it takes a reduced step if the estimated error is too large. The absolute stability implications of this are examined.
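
    The step-reduction policy can be sketched with an embedded Euler/Heun pair; this is a toy stand-in for the thesis's Runge-Kutta formulas, and the shrink factor and tolerance are illustrative:

```python
import math

def heun_step(f, x, y, h):
    """Embedded Euler/Heun pair: returns the 2nd-order solution and an
    error estimate (the difference between the two orders)."""
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    y_hi = y + h * (k1 + k2) / 2.0
    err = abs(h * (k2 - k1) / 2.0)
    return y_hi, err, k1

def solve(f, x0, y0, x_end, h=0.1, tol=1e-4):
    """If the estimated error is too large, advance by a reduced step
    instead of discarding the attempt outright (a toy version of the
    seldom-reject policy described in the thesis)."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        h_try = min(h, x_end - x)
        y_new, err, _ = heun_step(f, x, y, h_try)
        if err > tol:
            theta = max(0.1, 0.9 * (tol / err) ** 0.5)  # shrink factor
            h_red = theta * h_try
            y, _, _ = heun_step(f, x, y, h_red)  # take the shorter step
            x += h_red
        else:
            x, y = x + h_try, y_new
    return y

y = solve(lambda x, y: -y, 0.0, 1.0, 1.0)
print(abs(y - math.exp(-1.0)))  # small global error on y' = -y
```

A production code would use the continuous extension of the formula to evaluate the reduced step without recomputation; the sketch recomputes for simplicity.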

  17. Quantifying complexity in translational research: an integrated approach

    PubMed Central

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to assess usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
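
    The consistency ratio used as a guide to subjectivity is the standard Saaty CR from AHP; a minimal sketch (the example comparison matrix is hypothetical):

```python
def consistency_ratio(M):
    """Saaty consistency ratio for a pairwise-comparison matrix:
    CR = CI / RI with CI = (lambda_max - n) / (n - 1); lambda_max is
    estimated via the usual normalized-column-average priority vector."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    # priority vector: average of the normalized columns
    w = [sum(M[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # lambda_max: average ratio of (M w)_i to w_i
    Mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
    return ci / ri

# A perfectly consistent 3x3 matrix (ratios built from weights 1:2:4).
M = [[1.0, 0.5, 0.25],
     [2.0, 1.0, 0.5],
     [4.0, 2.0, 1.0]]
print(consistency_ratio(M) < 0.1)  # below the usual 0.1 acceptance cutoff
```

Judgment matrices with CR above roughly 0.1 are conventionally sent back to the experts for revision, which is how the ratio tempers the subjectivity the abstract acknowledges.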

  18. Emissions from prescribed fires in temperate forest in south-east Australia: implications for carbon accounting

    NASA Astrophysics Data System (ADS)

    Possell, M.; Jenkins, M.; Bell, T. L.; Adams, M. A.

    2015-01-01

    We estimated emissions of carbon, as equivalent CO2 (CO2e), from planned fires in four sites in a south-eastern Australian forest. Emission estimates were calculated using measurements of fuel load and carbon content of different fuel types, before and after burning, and determination of fuel-specific emission factors. Median estimates of emissions for the four sites ranged from 20 to 139 Mg CO2e ha-1. Variability in estimates was a consequence of different burning efficiencies of each fuel type from the four sites. Higher emissions resulted from more fine fuel (twigs, decomposing matter, near-surface live and leaf litter) or coarse woody debris (CWD; > 25 mm diameter) being consumed. In order to assess the effect of declining information quantity and the inclusion of coarse woody debris when estimating emissions, Monte Carlo simulations were used to create seven scenarios where input parameter values were replaced by probability density functions. Calculation methods were (1) all measured data were constrained between measured maximum and minimum values for each variable; (2) as in (1) except the proportion of carbon within a fuel type was constrained between 0 and 1; (3) as in (2) but losses of mass caused by fire were replaced with burning efficiency factors constrained between 0 and 1; and (4) emissions were calculated using default values in the Australian National Greenhouse Accounts (NGA), National Inventory Report 2011, as appropriate for our sites. Effects of including CWD in calculations were assessed for calculation Method 1, 2 and 3 but not for Method 4 as the NGA does not consider this fuel type. Simulations demonstrate that the probability of estimating true median emissions declines strongly as the amount of information available declines. Including CWD in scenarios increased uncertainty in calculations because CWD is the most variable contributor to fuel load. Inclusion of CWD in scenarios generally increased the amount of carbon lost. 
We discuss implications of these simulations and how emissions from prescribed burns in temperate Australian forests could be improved.

  19. Muon simulations for Super-Kamiokande, KamLAND, and CHOOZ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Alfred; Horton-Smith, Glenn; Kudryavtsev, Vitaly A.

    2006-09-01

    Muon backgrounds at Super-Kamiokande, KamLAND, and CHOOZ are calculated using MUSIC. A modified version of the Gaisser sea-level muon distribution and a well-tested Monte Carlo integration method are introduced. Average muon energy, flux, and rate are tabulated. Plots of average energy and angular distributions are given. Implications for muon tracker design in future experiments are discussed.
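
    The standard (unmodified) Gaisser parametrization and a log-uniform Monte Carlo integration can be sketched as follows; the modified distribution and the MUSIC transport used in the paper are not reproduced here:

```python
import math
import random

def gaisser_flux(E, cos_theta):
    """Standard Gaisser parametrization of the sea-level muon flux
    (cm^-2 s^-1 sr^-1 GeV^-1); accurate only for E >~ 100 GeV and
    small zenith angles, which is why modified forms are used."""
    return 0.14 * E ** -2.7 * (
        1.0 / (1.0 + 1.1 * E * cos_theta / 115.0)
        + 0.054 / (1.0 + 1.1 * E * cos_theta / 850.0))

def vertical_intensity(e_min, e_max, n=200000, seed=7):
    """Monte Carlo estimate of the vertical flux integrated over
    [e_min, e_max] GeV, sampling E log-uniformly (weight E per draw)
    as a crude importance distribution for the steep spectrum."""
    rng = random.Random(seed)
    ln_lo, ln_hi = math.log(e_min), math.log(e_max)
    total = 0.0
    for _ in range(n):
        E = math.exp(rng.uniform(ln_lo, ln_hi))
        total += gaisser_flux(E, 1.0) * E
    return total * (ln_hi - ln_lo) / n

print(vertical_intensity(100.0, 1.0e5))
```

Sampling log-uniformly rather than uniformly in E keeps the estimator variance manageable for a spectrum falling like E^-3.7 at high energy.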

  20. Theoretical Understanding the Relations of Melting-point Determination Methods from Gibbs Thermodynamic Surface and Applications on Melting Curves of Lower Mantle Minerals

    NASA Astrophysics Data System (ADS)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomistic simulation faces a substantial hysteresis problem. To overcome this hysteresis, a few independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations among these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining the model with experimental data and/or a previous melting-point determination method, we apply it to derive the high-pressure melting curves of several lower mantle minerals with less computational effort than previous methods alone require. In this way, melting curves of some polyatomic minerals at extreme pressures, previously nearly intractable, can now be calculated fully from first principles.

  1. Screening possible solid electrolytes by calculating the conduction pathways using Bond Valence method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Chu, Geng; He, Meng; Zhang, Shu; Xiao, RuiJuan; Li, Hong; Chen, LiQuan

    2014-08-01

    Inorganic solid electrolytes have distinguished advantages in terms of safety and stability, and are promising substitutes for conventional organic liquid electrolytes. However, the low ionic conductivity of typical candidates is the key problem. As a connective diffusion path is the prerequisite for high performance, we screen for possible solid electrolytes in the 2004 International Centre for Diffraction Data (ICDD) database by calculating conduction pathways using the Bond Valence (BV) method. There are 109846 inorganic crystals in the 2004 ICDD database, and 5295 of them contain lithium. Excluding those with toxic, radioactive, rare, or variable-valence elements, 1380 materials are candidates for solid electrolytes. The validity of the BV method is confirmed by comparing the conduction pathways we calculated for existing solid electrolytes with those from experiments or first-principles calculations. The implications for doping and substitution, two important ways to improve conductivity, are also discussed. Among the candidates, Li2CO3 is selected for a detailed comparison, and its pathway is reproduced well relative to that based on density functional studies. To reveal the correlation between connectivity of pathways and conductivity, α/γ-LiAlO2 and Li2CO3 are investigated by impedance spectroscopy as examples, and many experimental and theoretical studies are in progress to clarify the relationship between property and structure. The BV method can treat one material within a few minutes, providing an efficient way to lock onto targets in abundant data and to investigate the structure-property relationship systematically.
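
    The screening criterion rests on bond valence sums; a minimal sketch using the commonly tabulated Li-O parameters (the coordination geometry below is hypothetical):

```python
import math

def bond_valence_sum(distances, r0=1.466, b=0.37):
    """Bond valence sum V = sum_i exp((R0 - R_i)/b) over a cation's
    anion neighbours; r0 = 1.466 A is the tabulated Li-O parameter and
    b = 0.37 A the usual universal constant. In BV pathway mapping,
    grid points where |V - 1| for Li+ stays small are linked into
    candidate conduction channels."""
    return sum(math.exp((r0 - r) / b) for r in distances)

# Tetrahedral Li with four O neighbours at ~1.97 A: V should be close
# to the formal Li valence of +1.
v = bond_valence_sum([1.97, 1.97, 1.97, 1.97])
print(round(v, 2))
```

Because the sum needs only interatomic distances, it can be evaluated on a dense grid over a unit cell in seconds, which is what makes minutes-per-material database screening feasible compared with first-principles pathway calculations.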

  2. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.

  3. Dual-echo EPI for non-equilibrium fMRI - implications of different echo combinations and masking procedures.

    PubMed

    Beissner, Florian; Baudrexel, Simon; Volz, Steffen; Deichmann, Ralf

    2010-08-15

    Dual-echo EPI is based on the acquisition of two images with different echo times per excitation, thus allowing for the calculation of purely T2*-weighted data. The technique can be used for the measurement of functional activation whenever the prerequisite of constant equilibrium magnetization cannot be fulfilled due to variable inter-volume delays. The latter is the case when image acquisition is triggered by physiological parameters (e.g. cardiac gating) or by the subject's response. Despite its frequent application, there is currently no standardized way of combining the information obtained from the two acquired echoes. The goal of this study was to quantify the implications of different echo combination methods (quotients of echoes and quantification of T2*) and calculation modalities, either pre-smoothing data before combination or subjecting unsmoothed combined data to masking (no masking, volume-wise masking, joint masking), on the theoretically predicted signal-to-noise ratio (SNR) of the BOLD response and on activation results of two fMRI experiments using finger tapping and visual stimulation in one group (n=5) and different motor paradigms to activate motor areas in the cortex and the brainstem in another group (n=21). A significant impact of echo combination and masking procedure was found for both SNR and activation results. The recommended choice is a direct calculation of T2* values, either using joint masking on unsmoothed data, or pre-smoothing images prior to T2* calculation. This method was most beneficial in areas close to the surface of the brain or adjacent to the ventricles and may be especially relevant to brainstem fMRI.
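
    The direct calculation of T2* from two echoes follows from the mono-exponential decay model; a sketch with synthetic signal values (the echo times and T2* below are illustrative):

```python
import math

def t2_star(s1, s2, te1, te2):
    """Voxelwise T2* from a dual-echo acquisition, assuming
    mono-exponential decay S_i = S0 * exp(-TE_i / T2*), so that
    T2* = (TE2 - TE1) / ln(S1/S2). Taking the quotient of the two
    echoes eliminates S0, and with it the equilibrium-magnetization
    dependence that varies with inter-volume delay."""
    return (te2 - te1) / math.log(s1 / s2)

# Synthetic voxel: S0 = 1000, true T2* = 50 ms, echoes at 20 and 45 ms.
s0, t2s = 1000.0, 50.0
s1 = s0 * math.exp(-20.0 / t2s)
s2 = s0 * math.exp(-45.0 / t2s)
print(round(t2_star(s1, s2, 20.0, 45.0), 1))  # recovers 50.0 ms
```

The logarithm is what makes masking matter in practice: near the brain surface and ventricles one echo can drop toward the noise floor, and unmasked low-signal voxels produce unstable T2* estimates.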

  4. A Computer-Assisted Instructional Software Program in Mathematical Problem-Solving Skills for Medication Administration for Beginning Baccalaureate Nursing Students at San Jose State University.

    ERIC Educational Resources Information Center

    Wahl, Sharon C.

    Nursing educators and administrators are concerned about medication errors made by students which jeopardize patient safety. The inability to conceptualize and calculate medication dosages, often related to math anxiety, is implicated in such errors. A computer-assisted instruction (CAI) program is seen as a viable method of allowing students to…

  5. Application of an inverse method for calculating three-dimensional fault geometries and slip vectors, Nun River Field, Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, H.G.; White, N.

    A general, automatic method for determining the three-dimensional geometry of a normal fault of any shape and size is applied to a three-dimensional seismic reflection data set from the Nun River field, Nigeria. In addition to calculating fault geometry, the method also automatically retrieves the extension direction without requiring any previous information about either the fault shape or the extension direction. Solutions are found by minimizing the misfit between sets of faults that are calculated from the observed geometries of two or more hanging-wall beds. In the example discussed here, the predicted fault surface is in excellent agreement with the shape of the seismically imaged fault. Although the calculated extension direction is oblique to the average strike of the fault, the value of this parameter is not well resolved. Our approach differs markedly from standard section-balancing models in two important ways. First, we do not assume that the extension direction is known, and second, the use of inverse theory ensures that formal confidence bounds can be determined for calculated fault geometries. This ability has important implications for a range of geological problems encountered at both exploration and production scales. In particular, once the three-dimensional displacement field has been constrained, the difficult but important problem of three-dimensional palinspastic restoration of hanging-wall structures becomes tractable.

  6. Understanding density functional theory (DFT) and completing it in practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagayoko, Diola

    2014-12-15

    We review some salient points in the derivation of density functional theory (DFT) and of the local density approximation (LDA) of it. We then articulate an understanding of DFT and LDA that seems to be ignored in the literature. We note the well-established failures of many DFT and LDA calculations to reproduce the measured energy gaps of finite systems and band gaps of semiconductors and insulators. We then illustrate significant differences between the results from self-consistent calculations using single trial basis sets and those from computations following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). Unlike the former, the latter calculations verifiably attain the absolute minima of the occupied energies, as required by DFT. These minima are one of the reasons for the agreement between their results and corresponding, experimental ones for the band gap and a host of other properties. Further, we note predictions of DFT BZW-EF calculations that have been confirmed by experiment. Our subsequent description of the BZW-EF method ends with the application of the Rayleigh theorem in the selection, among the several calculations the method requires, of the one whose results have a full, physics content ascribed to DFT. This application of the Rayleigh theorem adds to or completes DFT, in practice, to preserve the physical content of unoccupied, low energy levels. Discussions, including implications of the method, and a short conclusion follow the description of the method. The successive augmentation of the basis set in the BZW-EF method, needed for the application of the Rayleigh theorem, is also necessary in the search for the absolute minima of the occupied energies, in practice.

  7. Density Functional Theory Calculations of the Role of Defects in Amorphous Silicon Solar Cells

    NASA Astrophysics Data System (ADS)

    Johlin, Eric; Wagner, Lucas; Buonassisi, Tonio; Grossman, Jeffrey C.

    2010-03-01

    Amorphous silicon holds promise as a cheap and efficient material for thin-film photovoltaic devices. However, current device efficiencies are severely limited by the low mobility of holes in the bulk amorphous silicon material, the cause of which is not yet fully understood. This work employs a statistical analysis of density functional theory calculations to uncover the effects of a range of defects (including internal strain and substitution impurities) on the trapping and mobility of holes, and thereby also on the total conversion efficiency. We investigate the root causes of this low mobility and attempt to provide suggestions for simple methods of improving this property.

  8. Interaction of charge carriers with lattice and molecular phonons in crystalline pentacene

    NASA Astrophysics Data System (ADS)

    Girlando, Alberto; Grisanti, Luca; Masino, Matteo; Brillante, Aldo; Della Valle, Raffaele G.; Venuti, Elisabetta

    2011-08-01

    The computational protocol we have developed for the calculation of local (Holstein) and non-local (Peierls) carrier-phonon coupling in molecular organic semiconductors is applied to both the low-temperature and high-temperature bulk crystalline phases of pentacene. The electronic structure is calculated by the semiempirical INDO/S (Intermediate Neglect of Differential Overlap with Spectroscopic parametrization) method. In the phonon description, the rigid-molecule approximation is removed, allowing mixing of low-frequency intra-molecular modes with inter-molecular (lattice) phonons. A clear distinction remains between the low-frequency phonons, which essentially modulate the transfer integral from one molecule to another (Peierls coupling), and the high-frequency intra-molecular phonons, which modulate the on-site energy (Holstein coupling). The calculated results agree well with values extracted from experiment. The comparison with similar calculations made for rubrene allows us to discuss the implications for current models of mobility.

  9. Comparative Investigation of Normal Modes and Molecular Dynamics of Hepatitis C NS5B Protein

    NASA Astrophysics Data System (ADS)

    Asafi, M. S.; Yildirim, A.; Tekpinar, M.

    2016-04-01

    Understanding the dynamics of proteins has many practical implications in terms of finding a cure for many protein-related diseases. Normal mode analysis and molecular dynamics methods are widely used physics-based computational methods for investigating the dynamics of proteins. In this work, we studied the dynamics of the Hepatitis C NS5B protein with molecular dynamics and normal mode analysis. Principal components obtained from a 100-nanosecond molecular dynamics simulation show good overlaps with normal modes calculated with a coarse-grained elastic network model. Coarse-grained normal mode analysis requires at least an order of magnitude less computation time. Encouraged by these good overlaps and short computation times, we further analyzed the low-frequency normal modes of Hepatitis C NS5B. Motion directions and average spatial fluctuations have been analyzed in detail. Finally, the biological implications of these motions for drug design efforts against Hepatitis C infections have been elaborated.

  10. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    PubMed

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
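
    The self-consistent BAR equation referred to above can be sketched in a few lines. The following stand-alone estimator is illustrative only (it is not the CHARMM implementation and omits the SGLD reweighting step); all names are our own, and work values are assumed to be in units of kT:

```python
import math

def bar_delta_f(w_f, w_r, tol=1e-7):
    """Bennett acceptance ratio estimate of the free-energy difference
    Delta F (in kT) between states A and B, given forward work samples
    w_f (A -> B) and reverse work samples w_r (B -> A)."""
    m = math.log(len(w_f) / len(w_r))
    def imbalance(df):
        # Self-consistency condition: the two Fermi-function sums balance.
        a = sum(1.0 / (1.0 + math.exp(m + w - df)) for w in w_f)
        b = sum(1.0 / (1.0 + math.exp(-m + w + df)) for w in w_r)
        return a - b
    lo, hi = -50.0, 50.0  # bracket; imbalance is monotonically increasing in df
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    For Gaussian work distributions satisfying the Crooks relation, the estimator recovers the true Delta F to within statistical error, which is a convenient sanity check for any reweighting scheme layered on top.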

  12. Determination of anharmonic free energy contributions: Low temperature phases of the Lennard-Jones system

    DOE PAGES

    Calero, C.; Knorowski, C.; Travesset, A.

    2016-03-22

    We investigate a general method to calculate the free energy of crystalline solids by considering the harmonic approximation and quasistatically switching on the anharmonic contribution. The advantage of this method is that the harmonic approximation provides an already very accurate estimate of the free energy, and therefore the anharmonic term is numerically very small and can be determined to high accuracy. We further show that the anharmonic contribution to the free energy satisfies a number of exact inequalities that place constraints on its magnitude and allow approximate but fast and accurate estimates. The method is implemented into readily available general software by combining the code HOODLT (Highly Optimized Object Oriented Dynamic Lattice Theory) for the harmonic part and the molecular dynamics (MD) simulation package HOOMD-blue for the anharmonic part. We use the method to calculate the low-temperature phase diagram for Lennard-Jones particles. We demonstrate that hcp is the equilibrium phase at low temperature and pressure and obtain the coexistence curve with the fcc phase, which exhibits reentrant behavior. Furthermore, several implications of the method are discussed.
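
    The harmonic-plus-switching idea can be illustrated on a one-dimensional classical toy model: thermodynamic integration of an anharmonic term g*x^4 switched on over a harmonic reference. This is a sketch of the principle only, not the HOODLT/HOOMD-blue workflow (which operates on full crystals with MD sampling); here the canonical averages are done by direct quadrature:

```python
import math

def boltzmann_average(f, u, kT=1.0, xmax=8.0, n=4001):
    """<f> over p(x) ~ exp(-u(x)/kT), plus the (unnormalized) partition
    function, via simple quadrature on [-xmax, xmax]."""
    dx = 2.0 * xmax / (n - 1)
    xs = [-xmax + i * dx for i in range(n)]
    ws = [math.exp(-u(x) / kT) for x in xs]
    z = sum(ws) * dx
    avg = sum(f(x) * w for x, w in zip(xs, ws)) * dx / z
    return avg, z

def anharmonic_free_energy_ti(g=0.2, kT=1.0, nlam=21):
    """Delta F_anh = integral_0^1 <g x^4>_lambda d(lambda), with
    U_lambda = x^2/2 + lambda*g*x^4 (trapezoidal rule in lambda)."""
    u0 = lambda x: 0.5 * x * x    # harmonic reference
    du = lambda x: g * x ** 4     # anharmonic part, switched on quasistatically
    vals = []
    for i in range(nlam):
        lam = i / (nlam - 1)
        avg, _ = boltzmann_average(du, lambda x: u0(x) + lam * du(x), kT)
        vals.append(avg)
    dlam = 1.0 / (nlam - 1)
    return dlam * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

    The point of the abstract carries over even to this toy model: the anharmonic correction is small and positive here, so it can be computed to high relative accuracy on top of the analytically known harmonic free energy.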

  13. Evaluation of Neutron-induced Cross Sections and their Related Covariances with Physical Constraints

    NASA Astrophysics Data System (ADS)

    De Saint Jean, C.; Archier, P.; Privas, E.; Noguère, G.; Habert, B.; Tamagno, P.

    2018-02-01

    Nuclear data, along with numerical methods and the associated calculation schemes, continue to play a key role in reactor design, reactor core operating parameter calculations, fuel cycle management and criticality safety calculations. Because the intensive use of Monte Carlo calculations reduces numerical biases, the final accuracy of neutronic calculations increasingly depends on the quality of the nuclear data used. This paper gives a broad picture of all the ingredients treated by nuclear data evaluators during their analyses. After giving an introduction to nuclear data evaluation, we present the implications of using Bayesian inference to obtain evaluated cross sections and related uncertainties. In particular, a focus is made on systematic uncertainties appearing in the analysis of differential measurements, as well as the advantages and drawbacks one may encounter when analyzing integral experiments. The evaluation work is in general done independently in the resonance and in the continuum energy ranges, giving rise to inconsistencies in evaluated files. For future evaluations over the whole energy range, we call attention to two innovative methods used to analyze several nuclear reaction models and impose constraints. Finally, we discuss suggestions for possible improvements in the evaluation process to master the quantification of uncertainties. These are associated with experiments (microscopic and integral), nuclear reaction theories and Bayesian inference.
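
    As a minimal illustration of the Bayesian update at the heart of the evaluation process, consider combining a prior cross-section estimate with a single measurement under Gaussian assumptions. This scalar precision-weighted update is a deliberately simplified stand-in for the generalized least-squares machinery actually applied to full covariance matrices; the numbers below are invented:

```python
def bayes_update(mu, var, y, var_y):
    """Posterior mean and variance for a scalar cross section with
    Gaussian prior N(mu, var) and a measurement y of variance var_y."""
    gain = var / (var + var_y)              # how strongly the data pulls the prior
    post_mean = mu + gain * (y - mu)
    post_var = var * var_y / (var + var_y)  # always smaller than both inputs
    return post_mean, post_var
```

    Correlated (systematic) uncertainties shared by several measurements cannot be treated point by point in this way; this scalar form assumes independent errors, which is exactly why the abstract stresses covariances.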

  14. Calculation of continuum damping of Alfvén eigenmodes in tokamak and stellarator equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowden, G. W.; Hole, M. J.; Könies, A.

    2015-09-15

    In an ideal magnetohydrodynamic (MHD) plasma, shear Alfvén eigenmodes may experience dissipationless damping due to resonant interaction with the shear Alfvén continuum. This continuum damping can make a significant contribution to the overall growth/decay rate of shear Alfvén eigenmodes, with consequent implications for fast ion transport. One method for calculating continuum damping is to solve the MHD eigenvalue problem over a suitable contour in the complex plane, thereby satisfying the causality condition. Such an approach can be implemented in three-dimensional ideal MHD codes which use the Galerkin method. Analytic functions can be fitted to numerical data for equilibrium quantities in order to determine the value of these quantities along the complex contour. This approach requires less resolution than the established technique of calculating damping as resistivity vanishes and is thus more computationally efficient. The complex contour method has been applied to the three-dimensional finite element ideal MHD Code for Kinetic Alfvén waves. In this paper, we discuss the application of the complex contour technique to calculate the continuum damping of global modes in tokamak as well as torsatron, W7-X and H-1NF stellarator cases. To the authors' knowledge, these stellarator calculations represent the first calculation of continuum damping for eigenmodes in fully three-dimensional equilibria. The continuum damping of the global modes investigated in the W7-X and H-1NF stellarator configurations is found to depend sensitively on coupling to numerous poloidal and toroidal harmonics.

  15. Calculations on the orientation of the CH fragment in Co3(CO)9(μ3-CH): Implications for metal surfaces

    NASA Astrophysics Data System (ADS)

    DeKock, Roger L.; Fehlner, Thomas P.

    1982-07-01

    A series of molecular orbital calculations using the Fenske-Hall method have been carried out on Co3(CO)9(μ3-CH), in which the orientation of the CH fragment is varied with respect to the triangular plane of the three Co atoms. The calculations show that the energy differences between the orbitals that are predominantly CH in character are affected very little by the orientation of the CH fragment. These calculated differences are Δ(2σ-1σ) ≅ 7 eV and Δ(1π-1σ) ≅ 10.5 eV. The calculated splitting of the degenerate 1π orbitals for geometries with tilted CH fragments never amounted to more than 0.46 eV. Mixing of CH orbitals into the predominantly Co 3d manifold was extensive in all of the calculations. These calculations provide no support for the interpretation of energy-loss and photoemission electron spectroscopy experiments in terms of CH fragments that are tilted with respect to the metal surface, but such an interpretation cannot be eliminated due to the diffuse nature of the spectral bands in the photoemission experiments.

  16. Analytic approach to photoelectron transport.

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.

    1972-01-01

    The equation governing the transport of photoelectrons in the ionosphere is shown to be equivalent to the equation of radiative transfer. In the single-energy approximation this equation is solved in closed form by the method of discrete ordinates for isotropic scattering and for a single-constituent atmosphere. The results include prediction of the angular distribution of photoelectrons at all altitudes and, in particular, the angular distribution of the escape flux. The implications of these solutions in real atmosphere calculations are discussed.

  17. Theoretical study of the coordination behavior of formate and formamidoximate with the dioxovanadium(V) cation: implications for selectivity towards uranyl

    DOE PAGES

    Mehio, Nada; Johnson, J. Casey; Dai, Sheng; ...

    2015-10-28

    Poly(acrylamidoxime)-based fibers bearing random mixtures of carboxylate and amidoxime groups are the most widely utilized materials for extracting uranium from seawater. However, the competition between uranyl (UO2^2+) and vanadium ions poses a significant challenge to the industrial mining of uranium from seawater using the current generation of adsorbents. To design more selective adsorbents, a detailed understanding of how major competing ions interact with carboxylate and amidoxime ligands is required. In this work, we employ density functional theory (DFT) and wave-function methods to investigate potential binding motifs of the dioxovanadium ion, VO2^+, with water, formate, and formamidoximate ligands. Calculations at a higher level of theory (CCSD(T)) resolve the existing controversy between the experimental results and previous DFT calculations for the structure of the hydrated VO2^+ ion. Consistent with the EXAFS data, CCSD(T) calculations predict higher stability of the distorted octahedral geometry of VO2^+(H2O)4 compared to the five-coordinate complex with a single water molecule in the second hydration shell, while all seven tested DFT methods yield the reverse stability of the two conformations. Analysis of the relative stabilities of formate-VO2^+ complexes indicates that both monodentate and bidentate forms may coexist in thermodynamic equilibrium in solution, with the equilibrium balance leaning more towards the formation of monodentate species. Investigation of VO2^+ coordination with the formamidoximate anion has revealed the existence of seven possible binding motifs, four of which are within ~4.0 kcal/mol of each other. Calculations establish that the most stable binding motif entails the coordination of oxime oxygen and amide nitrogen atoms via a tautomeric rearrangement of amidoxime to imino hydroxylamine. Lastly, the difference in the most stable VO2^+ and UO2^2+ binding conformations has important implications for the design of more selective UO2^2+ ligands.

  18. SU-E-T-375: Passive Scattering to Pencil-Beam-Scanning Comparison for Medulloblastoma Proton Therapy: LET Distributions and Radiobiological Implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D; MacDonald, S; Paganetti, H

    2014-06-01

    Purpose: To compare the linear energy transfer (LET) distributions between passive scattering and pencil beam scanning proton radiation therapy techniques for medulloblastoma patients and study the potential radiobiological implications. Methods: A group of medulloblastoma patients, previously treated with passive scattering (PS) proton craniospinal irradiation followed by a posterior fossa or involved-field boost, were selected from the patient database of our institution. Using the beam geometry and planning computed tomography (CT) image sets of the original treatment plans, pencil beam scanning (PBS) treatment plans were generated for the cranial treatment of each patient, with an average beam spot size of 8 mm (sigma in air at isocenter). 3-dimensional dose and LET distributions were calculated by Monte Carlo methods (TOPAS) both for the original passive scattering and the new pencil beam scanning treatment plans. LET volume histograms were calculated for the target and OARs and compared for the two delivery methods. Variable RBE-weighted dose distributions and volume histograms were also calculated using a variable dose- and LET-based model. Results: Better dose conformity was achieved with PBS planning compared to PS, leading to increased dose coverage for the boost target area and decreased average dose to the structures adjacent to it and critical structures outside the whole-brain treatment field. LET values for the target were lower for PBS plans. Elevated LET values were noticed for OARs close to the boosted target areas, due to the end of range of proton beams falling inside these structures, resulting in a higher RBE-weighted dose for these structures compared to the clinical RBE value of 1.1. Conclusion: Transitioning from passive scattering to pencil beam scanning proton radiation treatment can be dosimetrically beneficial for medulloblastoma patients. LET-guided treatment planning could contribute to better decision making for these cases, especially for critical structures in close proximity to the boosted target area.

  19. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj

    2017-01-01

    Background: Performing audits plays an important role in the quality assurance program in radiation oncology. Among different algorithms, TiGRT is one of the common application software packages for dose calculation. This study aimed to assess the clinical implications of the TiGRT algorithm by measuring dose and comparing it with the calculated dose delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. Materials and Methods: A nonhomogeneous phantom as a quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference-class electrometer were employed throughout the audit on a linear accelerator at 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. Results: In homogeneous regions and simple plans for both energies, there was good agreement between the measured and treatment planning system calculated dose. The relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs such as the lung, some discrepancies were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error remained within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%-3%; the remaining differences were between 0.4% and 1%. Conclusions: Good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for the three-dimensional conformal field with wedge, shield, and asymmetric settings using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern. PMID:28989910

  20. Learning Multiple Band-Pass Filters for Sleep Stage Estimation: Towards Care Support for Aged Persons

    NASA Astrophysics Data System (ADS)

    Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo

    This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in mixed health confirm the following implications: (1) the proposed method can provide more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.

  1. Quasiparticle energy bands and Fermi surfaces of monolayer NbSe2

    NASA Astrophysics Data System (ADS)

    Kim, Sejoong; Son, Young-Woo

    2017-10-01

    A quasiparticle band structure of single-layer 2H-NbSe2 is reported using first-principles GW calculations. We show that the self-energy correction increases the width of the partially occupied band and alters its Fermi surface shape when compared with conventional mean-field calculation methods. Owing to the broken inversion symmetry of the trigonal prismatic single-layer structure, the spin-orbit interaction is included and its impact on the Fermi surface and quasiparticle energy bands is discussed. We also calculate the doping-dependent static susceptibilities from the band structures obtained by the mean-field calculation as well as the GW calculation, with and without spin-orbit interactions. A complete tight-binding model is constructed within a three-band, third-nearest-neighbor hopping scheme and is shown to reproduce our GW quasiparticle energy bands and Fermi surface very well. Considering the variations of the Fermi surface shape depending on self-energy corrections and spin-orbit interactions, we discuss the formation of charge density waves (CDW) under different dielectric environments and the implications for recent controversial experimental results on CDW transition temperatures.

  2. Modification of LAMPF's magnet-mapping code for offsets of center coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, J.W.; Gomulka, S.; Merrill, F.

    1991-01-01

    One of the magnet measurements performed at LAMPF is the determination of the cylindrical harmonics of a quadrupole magnet using a rotating coil. The data are analyzed with the code HARMAL to derive the amplitudes of the harmonics. Initially, the origin of the polar coordinate system is the axis of the rotating coil. A new coordinate system is found by a simple translation of the old system such that the dipole moment in the new system is zero. The origin of this translated system is referred to as the magnetic center. Given this translation, the code calculates the coefficients of the cylindrical harmonics in the new system. The code has been modified to use an analytical calculation to determine these new coefficients. The method of calculation is described and some implications of this formulation are presented. 8 refs., 2 figs.

  3. Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putney, J.M.; Preece, R.J.

    1993-06-01

    An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.

  4. Spectral properties of minimal-basis-set orbitals: Implications for molecular electronic continuum states

    NASA Astrophysics Data System (ADS)

    Langhoff, P. W.; Winstead, C. L.

    Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.

  5. Positive ion densities and mobilities in the upper stratosphere and mesosphere

    NASA Technical Reports Server (NTRS)

    Leiden, S.

    1976-01-01

    A brief sketch of the theory concerning the use of the Gerdien condenser as a mobility spectrometer is presented. Data reduction of three parachute-borne Gerdien condenser probes is given, as well as that of one blunt conductivity probe. Comparisons of concentrations calculated by two different methods indicate consistency of results. Mobility profiles demonstrating remarkable fine structure are discussed in detail. Finally, theoretical implications of the results on ionospheric structure, including possible night-day differences and latitudinal variations, are considered.

  6. Putting the environment into the NPV calculation -- Quantifying pipeline environmental costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dott, D.R.; Wirasinghe, S.C.; Chakma, A.

    1996-12-31

    Pipeline projects impact the environment through soil and habitat disturbance, noise during construction and compressor operation, river crossing disturbance and the risk of rupture. Assigning monetary value to these negative project consequences enables the environment to be represented in the project cost-benefit analysis. This paper presents the mechanics and implications of two environmental valuation techniques: (1) the contingent valuation method and (2) the stated preference method. The use of environmental value at the project economic-evaluation stage is explained. A summary of research done on relevant environmental attribute valuation is presented and discussed. Recommendations for further research in the field are made.

  7. Biphasic and monophasic repair: comparative implications for biologically equivalent dose calculations in pulsed dose rate brachytherapy of cervical carcinoma

    PubMed Central

    Millar, W T; Davidson, S E

    2013-01-01

    Objective: To consider the implications of the use of biphasic rather than monophasic repair in calculations of biologically-equivalent doses for pulsed-dose-rate brachytherapy of cervix carcinoma. Methods: Calculations are presented of pulsed-dose-rate (PDR) doses equivalent to former low-dose-rate (LDR) doses, using biphasic vs monophasic repair kinetics, both for cervical carcinoma and for the organ at risk (OAR), namely the rectum. The linear-quadratic modelling calculations included effects due to varying the dose per PDR cycle, the dose reduction factor for the OAR compared with Point A, the repair kinetics and the source strength. Results: When using the recommended 1 Gy per hourly PDR cycle, different LDR-equivalent PDR rectal doses were calculated depending on the choice of monophasic or biphasic repair kinetics pertaining to the rodent central nervous and skin systems. These differences virtually disappeared when the dose per hourly cycle was increased to 1.7 Gy. This made the LDR-equivalent PDR doses more robust and independent of the choice of repair kinetics and α/β ratios as a consequence of the described concept of extended equivalence. Conclusion: The use of biphasic and monophasic repair kinetics for optimised modelling of the effects on the OAR in PDR brachytherapy suggests that an optimised PDR protocol with the dose per hourly cycle nearest to 1.7 Gy could be used. Hence, the durations of the new PDR treatments would be similar to those of the former LDR treatments and not longer as currently prescribed. Advances in knowledge: Modelling calculations indicate that equivalent PDR protocols can be developed which are less dependent on the different α/β ratios and monophasic/biphasic kinetics usually attributed to normal and tumour tissues for treatment of cervical carcinoma. PMID:23934965
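
    The linear-quadratic bookkeeping behind such equivalence calculations can be sketched with the standard monophasic incomplete-repair model; the abstract's point is precisely that biphasic kinetics modify the answer, so the following is the monophasic baseline only, with illustrative parameter values:

```python
import math

def incomplete_repair_factor(n, dt_h, t_half_h):
    """Thames h_n factor for n equal dose pulses separated by dt_h hours,
    assuming monoexponential (monophasic) repair with half-time t_half_h."""
    if n < 2:
        return 0.0
    phi = math.exp(-math.log(2.0) * dt_h / t_half_h)
    return (2.0 / n) * (phi / (1.0 - phi)) * (n - (1.0 - phi ** n) / (1.0 - phi))

def bed(n, d, alpha_beta, dt_h=1.0, t_half_h=1.5):
    """Biologically effective dose (Gy) for n pulses of d Gy each:
    BED = n*d*(1 + (1 + h_n)*d/(alpha/beta))."""
    h = incomplete_repair_factor(n, dt_h, t_half_h)
    return n * d * (1.0 + (1.0 + h) * d / alpha_beta)
```

    Evaluating bed() for 1 Gy versus 1.7 Gy hourly pulses, for tumour (alpha/beta = 10) and rectum (alpha/beta = 3), reproduces the kind of trade-off the authors optimize, here with a single repair half-time in place of the biphasic pair.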

  8. Numerical calculations of turbulent swirling flow

    NASA Technical Reports Server (NTRS)

    Kubo, I.; Gouldin, F. C.

    1974-01-01

    Description of a numerical technique for solving axisymmetric, incompressible, turbulent swirling flow problems. Isothermal flow calculations are presented for a coaxial flow configuration of special interest. The calculation results are discussed in regard to their implications for the design of gas turbine combustors.

  9. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for verification bias in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with a cervical cancer screening example used to introduce the Compare Tests package in R software, in which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared to the results of the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with verification by the gold standard missing at random, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity using the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
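
    The weighting step itself is simple to state: each verified subject is weighted by the inverse of its verification probability, so test-negatives (who are verified less often) count more. A hypothetical sketch with invented counts and verification probabilities:

```python
def ipw_sens_spec(tp, fp, fn, tn, p_ver_pos, p_ver_neg):
    """Inverse-probability-weighted sensitivity and specificity.
    tp, fp: verified test-positives with / without disease, each verified
    with probability p_ver_pos; fn, tn: verified test-negatives with /
    without disease, each verified with probability p_ver_neg."""
    w_pos, w_neg = 1.0 / p_ver_pos, 1.0 / p_ver_neg
    sens = tp * w_pos / (tp * w_pos + fn * w_neg)
    spec = tn * w_neg / (tn * w_neg + fp * w_pos)
    return sens, spec
```

    If the verification probabilities are equal, the weights cancel and the naive estimates are recovered; when test-positives are verified far more often, the naive sensitivity is biased upward, exactly as in the abstract's 90.48% versus 82.25% comparison.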

  10. Flow field analysis of aircraft configurations using a numerical solution to the three-dimensional unified supersonic/hypersonic small disturbance equations, part 1

    NASA Technical Reports Server (NTRS)

    Gunness, R. C., Jr.; Knight, C. J.; Dsylva, E.

    1972-01-01

    The unified small disturbance equations are numerically solved using the well-known Lax-Wendroff finite difference technique. The method allows complete determination of the inviscid flow field and surface properties as long as the flow remains supersonic. Shock waves and other discontinuities are accounted for implicitly in the numerical method. This technique was programmed for general application to the three-dimensional case. The validity of the method is demonstrated by calculations on cones, axisymmetric bodies, lifting bodies, delta wings, and a conical wing/body combination. Part 1 contains the discussion of problem development and results of the study. Part 2 contains flow charts, subroutine descriptions, and a listing of the computer program.

  11. A laboratory study of nonlinear changes in the directionality of extreme seas

    NASA Astrophysics Data System (ADS)

    Latheef, M.; Swan, C.; Spinneken, J.

    2017-03-01

    This paper concerns the description of surface water waves, specifically nonlinear changes in the directionality. Supporting calculations are provided to establish the best method of directional wave generation, the preferred method of directional analysis and the inputs on which such a method should be based. These calculations show that a random directional method, in which the phasing, amplitude and direction of propagation of individual wave components are chosen randomly, has benefits in achieving the required ergodicity. In terms of analysis procedures, the extended maximum entropy principle, with inputs based upon vector quantities, produces the best description of directionality. With laboratory data describing the water surface elevation and the two horizontal velocity components at a single point, several steep sea states are considered. The results confirm that, as the steepness of a sea state increases, the overall directionality of the sea state reduces. More importantly, it is also shown that the largest waves become less spread or more unidirectional than the sea state as a whole. This provides an important link to earlier descriptions of deterministic wave groups produced by frequency focusing, helps to explain recent field observations and has important practical implications for the design of marine structures and vessels.

  12. Quasi-equilibria in reduced Liouville spaces.

    PubMed

    Halse, Meghan E; Dumez, Jean-Nicolas; Emsley, Lyndon

    2012-06-14

    The quasi-equilibrium behaviour of isolated nuclear spin systems in full and reduced Liouville spaces is discussed. We focus in particular on the reduced Liouville spaces used in the low-order correlations in Liouville space (LCL) simulation method, a restricted-spin-space approach to efficiently modelling the dynamics of large networks of strongly coupled spins. General numerical methods for the calculation of quasi-equilibrium expectation values of observables in Liouville space are presented. In particular, we treat the cases of a time-independent Hamiltonian, a time-periodic Hamiltonian (with and without stroboscopic sampling) and powder averaging. These quasi-equilibrium calculation methods are applied to the example case of spin diffusion in solid-state nuclear magnetic resonance. We show that there are marked differences between the quasi-equilibrium behaviour of spin systems in the full and reduced spaces. These differences are particularly interesting in the time-periodic-Hamiltonian case, where simulations carried out in the reduced space demonstrate ergodic behaviour even for small spin systems (as few as five homonuclei). The implications of this ergodic property for the success of the LCL method in modelling the dynamics of spin diffusion in magic-angle spinning experiments on powders are discussed.

  13. Nurse absence--the causes and the consequences.

    PubMed

    Beil-Hildebrand, M

    1996-01-01

    This paper addresses nurse absence as it occurs in health care organizations and as a form of withdrawal behaviour from work. Absence represents a traditional domain of conflict between nursing management and their employees in day-to-day practice. The aim of the following discussion is to extend nursing management's understanding of the topic as a precondition for well-balanced schedules and effective human resource planning. A discussion of planned and unplanned absence thus arises and appropriate types of measurement, taking employee absence behaviour into account, are outlined. The implications of the arguments, developed in detail in the first part of the paper, are applied in the second part using a hypothetical account. In order to illustrate the importance of managing absence by nursing management, a method for calculating schedules is described which investigates the organizational control of planned and unplanned absence. This method proposes a seven stage calculation and highlights the processes that are essential for taking absence into account.

  14. Time Dependent Density Functional Theory Calculations of Large Compact PAH Cations: Implications for the Diffuse Interstellar Bands

    NASA Technical Reports Server (NTRS)

    Weisman, Jennifer L.; Lee, Timothy J.; Salama, Farid; Head-Gordon, Martin; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We investigate the electronic absorption spectra of several maximally pericondensed polycyclic aromatic hydrocarbon radical cations with time dependent density functional theory calculations. We find interesting trends in the vertical excitation energies and oscillator strengths for this series containing pyrene through circumcoronene, the largest species containing more than 50 carbon atoms. We discuss the implications of these new results for the size and structure distribution of the diffuse interstellar band carriers.

  15. A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten

    NASA Astrophysics Data System (ADS)

    Samin, Adib J.; Zhang, Jinsuo

    2016-07-01

    In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.

  16. Exploring Flavor Physics with Lattice QCD

    NASA Astrophysics Data System (ADS)

    Du, Daping; Fermilab/MILC Collaborations Collaboration

    2016-03-01

    The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as rare B decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.

  17. Proton threshold states in the Na22(p,γ)Mg23 reaction and astrophysical implications

    NASA Astrophysics Data System (ADS)

    Comisel, H.; Hategan, C.; Graw, G.; Wolter, H. H.

    2007-04-01

    Proton threshold states in Mg23 are important for the astrophysically relevant proton capture reaction Na22(p,γ)Mg23. In the indirect determination of the resonance strength of the lowest states, which were not accessible by direct methods, some of the spin-parity assignments remained experimentally uncertain. We have investigated these states with shell model, Coulomb displacement, and Thomas-Ehrman shift calculations. From the comparison of calculated and observed properties, we relate the lowest relevant resonance state at Ex=7643 keV to an excited 3/2+ state, in accordance with a recent experimental determination by Jenkins et al. From this we deduce significantly improved values for the Na22(p,γ)Mg23 reaction rate at stellar temperatures below T9 = 0.1.

  18. Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.

    PubMed

    Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle

    2017-12-12

    Macrocycles have been emerging as a very important drug class in the past few decades largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small molecule inhibitors with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small molecule drugs and the complexity of the structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors pose a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between the macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is, to our knowledge, the first time that the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.

  19. Yield and Depth of Burial Hydrodynamic Calculations in Granodiorite: Implications for the North Korean Test Site

    DTIC Science & Technology

    2011-09-01

    the existence of a test site body wave magnitude (mb) bias between U.S. and former Soviet Union test sites in Nevada and Semipalatinsk ... the North Korean test site and the May 2009 test ... when compared to the Denny and Johnson (1991) and the Heard and Ackerman (1967) cavity radius scaling models

  20. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    PubMed

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
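    For reference, the three Hill numbers compared above (richness, Shannon diversity, Simpson diversity) are the orders q = 0, 1, 2 of one formula, D_q = (sum_i p_i^q)^(1/(1-q)), with D_1 defined as the exponential of the Shannon entropy. A minimal sketch with hypothetical quadrat counts:

```python
# Hedged sketch: Hill numbers of order q from an abundance vector.
# q=0 -> species richness, q=1 -> exp(Shannon entropy), q=2 -> inverse Simpson.
import math

def hill_number(abundances, q):
    """Effective number of species of order q (illustrative helper)."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:                      # limit case: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

counts = [50, 30, 10, 5, 3, 1, 1]   # hypothetical quadrat counts
print([round(hill_number(counts, q), 2) for q in (0, 1, 2)])
```

    Higher orders down-weight rare species, so for any uneven community D_0 >= D_1 >= D_2; an assessment method that misses rare species therefore depresses the low-order numbers most.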

  1. Calculation of Cyclodextrin Binding Affinities: Energy, Entropy, and Implications for Drug Design

    PubMed Central

    Chen, Wei; Chang, Chia-En; Gilson, Michael K.

    2004-01-01

    The second generation Mining Minima method yields binding affinities accurate to within 0.8 kcal/mol for the associations of α-, β-, and γ-cyclodextrin with benzene, resorcinol, flurbiprofen, naproxen, and nabumetone. These calculations require hours to a day on a commodity computer. The calculations also indicate that the changes in configurational entropy upon binding oppose association by as much as 24 kcal/mol and result primarily from a narrowing of energy wells in the bound versus the free state, rather than from a drop in the number of distinct low-energy conformations on binding. Also, the configurational entropy is found to vary substantially among the bound conformations of a given cyclodextrin-guest complex. This result suggests that the configurational entropy must be accounted for to reliably rank docked conformations in both host-guest and ligand-protein complexes. In close analogy with the common experimental observation of entropy-enthalpy compensation, the computed entropy changes show a near-linear relationship with the changes in mean potential plus solvation energy. PMID:15339804

  2. Historical overfishing and the recent collapse of coastal ecosystems

    USGS Publications Warehouse

    Jackson, J.B.C.; Kirby, M.X.; Berger, W.H.; Bjorndal, K.A.; Botsford, L.W.; Bourque, B.J.; Bradbury, R.; Cooke, R.; Erlandson, J.; Estes, J.A.; Hughes, T.P.; Kidwell, S.; Lange, C.B.; Lenihan, H.S.; Pandolfi, J.M.; Peterson, C.H.; Steneck, R.S.; Tegner, M.J.; Warner, R.

    2001-01-01

    A method for calculating parameters necessary to maintain stable populations is described and the management implications of the method are discussed. This method depends upon knowledge of the population mortality rate schedule, the age at which the species reaches maturity, and recruitment rates or age ratios in the population. Four approaches are presented which yield information about the status of the population: (1) necessary production for a stable population, (2) allowable mortality for a stable population, (3) annual rate of change in population size, and (4) age ratios in the population which yield a stable condition. General formulas for these relationships, and formulas for several special cases, are presented. Tables are also presented showing production required to maintain a stable population with the simpler (more common) mortality and fecundity schedules.

  3. A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samin, Adib J.; Zhang, Jinsuo

    In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.

  4. First-Principles Study of Charge Diffusion between Proximate Solid-State Qubits and Its Implications on Sensor Applications

    NASA Astrophysics Data System (ADS)

    Chou, Jyh-Pin; Bodrog, Zoltán; Gali, Adam

    2018-03-01

    Solid-state qubits from paramagnetic point defects in solids are promising platforms to realize quantum networks and novel nanoscale sensors. Recent advances in materials engineering make it possible to create proximate qubits in solids that might interact with each other, leading to electron spin or charge fluctuation. Here we develop a method to calculate the tunneling-mediated charge diffusion between point defects from first principles and apply it to nitrogen-vacancy (NV) qubits in diamond. The calculated tunneling rates are in quantitative agreement with previous experimental data. Our results suggest that proximate neutral and negatively charged NV defect pairs can form a NV-NV molecule. A tunneling-mediated model for the source of decoherence of the near-surface NV qubits is developed based on our findings on the interacting qubits in diamond.

  5. Electronic Hand Calculators: The Implications for Pre-College Education. Final Report. Abbreviated Version.

    ERIC Educational Resources Information Center

    Suydam, Marilyn, Comp.

    This volume reports research conducted to provide the National Science Foundation (NSF) with information concerning the existing range of beliefs and opinions about the impact of the hand-held calculator on pre-college educational practice. A literature search and several surveys of groups of individuals involved in calculator manufacture and…

  6. Programmable Calculators: Implications for the Mathematics Curriculum.

    ERIC Educational Resources Information Center

    Spikell, Mark A., Ed.

    This document is a collection of reports presented at a programmable calculator symposium held in Seattle, Washington, in April, 1980, as part of the annual meeting of the National Council of Teachers of Mathematics (NCTM). The session was designed to review whether the programmable calculator has a place in the school mathematics program, in light…

  7. Error Patterns in Portuguese Students' Addition and Subtraction Calculation Tasks: Implications for Teaching

    ERIC Educational Resources Information Center

    Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon

    2018-01-01

    Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…

  8. Constraining pre-eruptive volatile contents and degassing histories in submarine lavas

    NASA Astrophysics Data System (ADS)

    Jones, M.; Soule, S. A.; Liao, Y.; Le Roux, V.; Brodsky, H.; Kurz, M. D.

    2017-12-01

    Vesicle textures in submarine lavas have been used to calculate total (pre-eruption) volatile concentrations in mid-ocean ridge basalts (MORB), which provide constraints on upper mantle volatile contents and CO2 fluxes along the global MOR. In this study, we evaluate vesicle size distributions and volatile contents in a suite of 20 MORB samples, which span the range of typical vesicularities and bubble number densities observed in global MORB. We demonstrate that 2D imaging coupled with traditional stereological methods closely reproduces vesicle size distributions and vesicularities measured using 3D x-ray micro-computed tomography (μ-CT). We further demonstrate that x-ray μ-CT provides additional information about bubble deformation and clustering that are linked to bubble nucleation and lava emplacement dynamics. The validation of vesicularity measurements allows us to evaluate the methods for calculating total CO2 concentrations in MORB using dissolved volatile content (SIMS), vesicularity, vesicle gas density, and equations of state. We model bubble and melt contraction during lava quenching and show that the melt viscosity prevents bubbles from reaching equilibrium at the glass transition temperature. Thus, we suggest that higher temperatures should be used to calculate exsolved volatile concentrations based on observed vesicularities. Our revised method reconciles discrepancies between exsolved volatile contents measured by gas manometry and calculated from vesicularity. In addition, our revised method suggests that some previous studies may have overestimated MORB volatile concentrations by up to a factor of two, with the greatest differences in samples with the highest vesicularities (e.g., `popping rock' 2πD43). These new results have important implications for CO2/Nb of `undegassed' MORB and global ridge CO2 fluxes. 
Lastly, our revised method yields constant total CO2 concentrations in sample suites from individual MOR eruptions that experienced syn-eruptive degassing. These results imply closed-system degassing during magma ascent and emplacement following equilibration at the depth of melt storage in the crust.
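    The reconstruction described above can be sketched as a small calculation: total CO2 is the dissolved concentration plus the gas held in vesicles, whose mass follows from vesicularity, melt density, and a gas equation of state evaluated at the assumed quench temperature. The ideal-gas sketch below (all numbers illustrative, not from the study) shows why evaluating the EOS at a higher temperature than the glass transition lowers the exsolved estimate:

```python
# Hedged sketch of the total-CO2 reconstruction the abstract describes:
# total CO2 = dissolved CO2 (from SIMS) + exsolved CO2 inferred from
# vesicularity and an equation of state. The ideal-gas EOS and every
# number below are illustrative assumptions, not values from the study.
R = 8.314          # J/(mol K)
M_CO2 = 0.044      # kg/mol

def exsolved_co2_ppm(vesicularity, pressure_pa, temp_k, rho_melt=2700.0):
    """CO2 held in vesicles, as ppm by weight of melt (ideal-gas sketch)."""
    rho_gas = pressure_pa * M_CO2 / (R * temp_k)          # kg/m^3
    gas_per_melt = vesicularity * rho_gas / ((1.0 - vesicularity) * rho_melt)
    return gas_per_melt * 1e6

# Quench temperature choice: the abstract argues melt viscosity keeps
# bubbles from equilibrating down to the glass transition, so a higher
# (closer to magmatic) temperature is appropriate.
p = 25e6                       # ~2500 m water depth, Pa (illustrative)
for T in (1000.0, 1450.0):     # K: illustrative glass-transition vs. magmatic
    print(T, round(exsolved_co2_ppm(0.02, p, T), 1))
```

    Because the ideal-gas density scales as 1/T, using a magmatic temperature rather than the glass transition reduces the inferred exsolved CO2 in direct proportion to the temperature ratio, consistent with the direction (though not necessarily the magnitude) of the revision described above.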

  9. Cost implications of self-management education intervention programmes in arthritis.

    PubMed

    Brady, Teresa J

    2012-10-01

    The purpose of this review is to examine cost implications, including cost-effectiveness analyses, cost-savings calculated from health-care utilisation and intervention delivery costs of arthritis-related self-management education (SME) interventions. Literature searches, covering 1980-March 2012, using arthritis, self-management and cost-related terms, identified 487 articles; abstracts were reviewed to identify those with cost information. Three formal cost-effectiveness analyses emerged; results were equivocal but analyses done from the societal perspective, including out-of-pocket and other indirect costs, were more promising. Eight studies of individual, group and telephone-delivered SME calculated cost-savings based on health-care utilisation changes. These studies had variable results but the cost-savings extrapolation methods are questionable. Meta-analyses of health-care utilisation changes in two specific SME interventions demonstrated only one significant result at 6 months, which did not persist at 12 months. Eleven studies reported intervention delivery costs ranging from $35 to $740 per participant; the variability is likely due to costing methods and differences in delivery mode. Economic analysis in arthritis-related SME is in its infancy; more robust economic evaluations are required to reach sound conclusions. The most common form of analysis used changes in health-care utilisation as a proxy for cost-savings; the results are less than compelling. However, other value metrics, including the value of SME as part of health systems' self-management support efforts, to population health (from improved self-efficacy, psychological well-being and physical activity), and to igniting patient activation, are all important to consider. Published by Elsevier Ltd.

  10. Three dimensional model calculations of the global dispersion of high speed aircraft exhaust and implications for stratospheric ozone loss

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Rood, Richard B.; Jackman, Charles H.; Weaver, Clark J.

    1994-01-01

    Two-dimensional (zonally averaged) photochemical models are commonly used for calculations of ozone changes due to various perturbations. These include calculating the ozone change expected as a result of change in the lower stratospheric composition due to the exhaust of a fleet of supersonic aircraft flying in the lower stratosphere. However, zonal asymmetries are anticipated to be important to this sort of calculation. The aircraft are expected to be restricted from flying over land at supersonic speed due to sonic booms, thus the pollutant source will not be zonally symmetric. There is loss of pollutant through stratosphere/troposphere exchange, but these processes are spatially and temporally inhomogeneous. Asymmetry in the pollutant distribution contributes to the uncertainty in the ozone changes calculated with two dimensional models. Pollutant distributions for integrations of at least 1 year of continuous pollutant emissions along flight corridors are calculated using a three dimensional chemistry and transport model. These distributions indicate the importance of asymmetry in the pollutant distributions to evaluation of the impact of stratospheric aircraft on ozone. The implications of such pollutant asymmetries to assessment calculations are discussed, considering both homogeneous and heterogeneous reactions.

  11. Characterization of primary standards for use in the HPLC analysis of the procyanidin content of cocoa and chocolate containing products.

    PubMed

    Hurst, William J; Stanley, Bruce; Glinski, Jan A; Davey, Matthew; Payne, Mark J; Stuart, David A

    2009-10-15

    This report describes the characterization of a series of commercially available procyanidin standards ranging from dimers DP = 2 to decamers DP = 10 for the determination of procyanidins from cocoa and chocolate. Using a combination of HPLC with fluorescence detection and MALDI-TOF mass spectrometry, the purity of each standard was determined and these data were used to determine relative response factors. These response factors were compared with other response factors obtained from published methods. Data comparing the procyanidin analysis of a commercially available US dark chocolate calculated using each of the calibration methods indicates divergent results and demonstrate that previous methods may significantly underreport the procyanidins in cocoa-containing products. These results have far reaching implications because the previous calibration methods have been used to develop data for a variety of scientific reports, including food databases and clinical studies.

  12. Renormalized Stress-Energy Tensor of an Evaporating Spinning Black Hole.

    PubMed

    Levi, Adam; Eilon, Ehud; Ori, Amos; van de Meent, Maarten

    2017-04-07

    We provide the first calculation of the renormalized stress-energy tensor (RSET) of a quantum field in Kerr spacetime (describing a stationary spinning black hole). More specifically, we employ a recently developed mode-sum regularization method to compute the RSET of a minimally coupled massless scalar field in the Unruh vacuum state, the quantum state corresponding to an evaporating black hole. The computation is done here for the case a=0.7M, using two different variants of the method: t splitting and φ splitting, yielding good agreement between the two (in the domain where both are applicable). We briefly discuss possible implications of the results for computing semiclassical corrections to certain quantities, and also for simulating dynamical evaporation of a spinning black hole.

  13. The Safe Yield and Climatic Variability: Implications for Groundwater Management.

    PubMed

    Loáiciga, Hugo A

    2017-05-01

    Methods for calculating the safe yield are evaluated in this paper using a high-quality and long historical data set of groundwater recharge, discharge, extraction, and precipitation in a karst aquifer. Consideration is given to the role that climatic variability has on the determination of a climatically representative period with which to evaluate the safe yield. The methods employed to estimate the safe yield are consistent with its definition as a long-term average extraction rate that avoids adverse impacts on groundwater. The safe yield is a useful baseline for groundwater planning; yet, it is herein shown that it is not an operational rule that works well under all climatic conditions. This paper shows that due to the nature of dynamic groundwater processes it may be most appropriate to use an adaptive groundwater management strategy that links groundwater extraction rates to groundwater discharge rates, thus achieving a safe yield that represents an estimated long-term sustainable yield. An example of the calculation of the safe yield of the Edwards Aquifer (Texas) demonstrates that it is about one-half of the average annual recharge. © 2016, National Ground Water Association.

  14. Gradient Calculation Methods on Arbitrary Polyhedral Unstructured Meshes for Cell-Centered CFD Solvers

    NASA Technical Reports Server (NTRS)

    Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.

    2014-01-01

    A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. Formal order of accuracy, as well as error magnitudes for each of the studied methods, are evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve first-order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
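    One commonly surveyed operator of this kind is the unweighted least-squares reconstruction, which fits a linear function to cell-centered neighbor differences; an operator that is exact for linear fields yields the consistent first-order-accurate gradients mentioned above. A minimal sketch with an invented 2D stencil (not taken from the paper):

```python
# Hedged sketch: unweighted least-squares gradient reconstruction for
# cell-centered data. The stencil geometry below is made up for illustration.
import numpy as np

def lsq_gradient(xc, uc, xn, un):
    """Gradient at cell center xc from neighbor centers xn and values un."""
    dX = np.asarray(xn, dtype=float) - np.asarray(xc, dtype=float)
    du = np.asarray(un, dtype=float) - uc
    grad = np.linalg.lstsq(dX, du, rcond=None)[0]   # solves dX @ grad ~ du
    return grad

# A linear field u = 3x - 2y: any consistent operator must recover it exactly.
xc, uc = np.array([0.0, 0.0]), 0.0
xn = [[1.0, 0.2], [-0.4, 1.1], [0.3, -0.9], [-1.2, -0.1]]
un = [3 * x - 2 * y for x, y in xn]
print(lsq_gradient(xc, uc, xn, un))
```

    The same least-squares framework extends to 3D and to distance-weighted variants by scaling the rows of dX and du, which is one axis along which the surveyed operators differ.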

  15. Noninvasive evaluation of left ventricular elastance according to pressure-volume curves modeling in arterial hypertension.

    PubMed

    Bonnet, Benjamin; Jourdan, Franck; du Cailar, Guilhem; Fesler, Pierre

    2017-08-01

    End-systolic left ventricular (LV) elastance (Ees) has been previously calculated and validated invasively using LV pressure-volume (P-V) loops. Noninvasive methods have been proposed, but clinical application remains complex. The aims of the present study were to 1) estimate Ees according to modeling of the LV P-V curve during ejection ("ejection P-V curve" method) and validate our method with existing published LV P-V loop data and 2) test the clinical applicability of noninvasively detecting a difference in Ees between normotensive and hypertensive subjects. On the basis of the ejection P-V curve and a linear relationship between elastance and time during ejection, we used a nonlinear least-squares method to fit the pressure waveform. We then computed the slope and intercept of time-varying elastance as well as the volume intercept (V0). As a validation, 22 P-V loops obtained from previous invasive studies were digitized and analyzed using the ejection P-V curve method. To test clinical applicability, ejection P-V curves were obtained from 33 hypertensive subjects and 32 normotensive subjects with carotid tonometry and real-time three-dimensional echocardiography during the same procedure. A good univariate relationship (r2 = 0.92, P < 0.005) and good limits of agreement were found between the invasive calculation of Ees and our new proposed ejection P-V curve method. In hypertensive patients, an increase in arterial elastance (Ea) was compensated by a parallel increase in Ees without change in Ea/Ees. In addition, the clinical reproducibility of our method was similar to that of another noninvasive method. In conclusion, Ees and V0 can be estimated noninvasively from modeling of the P-V curve during ejection. This approach was found to be reproducible and sensitive enough to detect an expected increase in LV contractility in hypertensive patients.
Because of its noninvasive nature, this methodology may have clinical implications in various disease states. NEW & NOTEWORTHY The use of real-time three-dimensional echocardiography-derived left ventricular volumes in conjunction with carotid tonometry was found to be reproducible and sensitive enough to detect expected differences in left ventricular elastance in arterial hypertension. Because of its noninvasive nature, this methodology may have clinical implications in various disease states. Copyright © 2017 the American Physiological Society.
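    The fitting step lends itself to a compact numerical sketch. Below, a linear time-varying elastance E(t) = E0 + s·t is fitted to an ejection-phase pressure waveform by scanning candidate volume intercepts V0 and solving the remaining linear least-squares problem; the synthetic volumes and parameter values are illustrative assumptions, not data from the study.

```python
import numpy as np

def fit_ejection_pv(t, P, V, v0_grid):
    """Fit P(t) = (E0 + s*t) * (V(t) - V0): scan candidate V0 values and
    solve the remaining linear least-squares problem for (E0, s)."""
    best = None
    for v0 in v0_grid:
        X = np.column_stack([V - v0, t * (V - v0)])   # columns for E0 and s
        coef = np.linalg.lstsq(X, P, rcond=None)[0]
        sse = float(np.sum((X @ coef - P) ** 2))
        if best is None or sse < best[0]:
            best = (sse, v0, coef)
    _, v0, (e0, slope) = best
    return e0, slope, v0

# synthetic ejection-phase data with known parameters (illustrative only)
t = np.linspace(0.0, 0.3, 50)                  # time during ejection, s
V = 120.0 - 200.0 * t                          # LV volume, mL
P = (1.0 + 5.0 * t) * (V - 10.0)               # mmHg; E0=1, s=5, V0=10
e0, slope, v0 = fit_ejection_pv(t, P, V, np.linspace(0.0, 40.0, 401))
Ees = e0 + slope * t[-1]                       # elastance at end of ejection
```

    With noise-free synthetic data the scan recovers the generating parameters exactly; real tonometry/echo data would of course require smoothing and a finer treatment of timing.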

  16. Generation of dynamo magnetic fields in protoplanetary and other astrophysical accretion disks

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1988-01-01

    A computational method for treating the generation of dynamo magnetic fields in astrophysical disks is presented. The numerical difficulty of handling the boundary condition at infinity in the cylindrical disk geometry is overcome by embedding the disk in a spherical computational space and matching the solutions to analytically tractable spherical functions in the surrounding space. The lowest lying dynamo normal modes for a 'thick' astrophysical disk are calculated. The generated modes are all oscillatory and spatially localized. The potential implications of the results for the properties of dynamo magnetic fields in real astrophysical disks are discussed.

  17. NEXAFS spectroscopy of ionic liquids: experiments versus calculations.

    PubMed

    Fogarty, Richard M; Matthews, Richard P; Clough, Matthew T; Ashworth, Claire R; Brandt-Talbot, Agnieszka; Corbett, Paul J; Palgrave, Robert G; Bourne, Richard A; Chamberlain, Thomas W; Vander Hoogerstraete, Tom; Thompson, Paul B J; Hunt, Patricia A; Besley, Nicholas A; Lovelock, Kevin R J

    2017-11-29

    Experimental near edge X-ray absorption fine structure (NEXAFS) spectra are reported for 12 ionic liquids (ILs) encompassing a range of chemical structures for both the sulfur 1s and nitrogen 1s edges and compared with time-dependent density functional theory (TD-DFT) calculations. The energy scales for the experimental data were carefully calibrated against literature data. Gas phase calculations were performed on lone ions, ion pairs and ion pair dimers, with a wide range of ion pair conformers considered. For the first time, it is demonstrated that TD-DFT is a suitable method for simulating NEXAFS spectra of ILs, although the number of ions included in the calculations and their conformations are important considerations. For most of the ILs studied, calculations on lone ions in the gas phase were sufficient to successfully reproduce the experimental NEXAFS spectra. However, for certain ILs - for example, those containing a protic ammonium cation - calculations on ion pairs were required to obtain a good agreement with experimental spectra. Furthermore, significant conformational dependence was observed for the protic ammonium ILs, providing insight into the predominant liquid phase cation-anion interactions. Among the 12 investigated ILs, we find that four have an excited state that is delocalised across both the cation and the anion, which has implications for any process that depends on the excited state, for example, radiolysis. Considering the collective experimental and theoretical data, we recommend that ion pairs should be the minimum number of ions used for the calculation of NEXAFS spectra of ILs.

  18. Ab initio results for intermediate-mass, open-shell nuclei

    NASA Astrophysics Data System (ADS)

    Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.

    2017-01-01

    A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principle calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI -1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.

  19. Calculation of afterbody flows with a composite velocity formulation

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rubin, S. G.; Khosla, P. K.

    1983-01-01

    A recently developed technique for numerical solution of the Navier-Stokes equations for subsonic, laminar flows is investigated. It is extended here to allow for the computation of transonic and turbulent flows. The basic approach involves a multiplicative composite of the appropriate velocity representations for the inviscid and viscous flow regions. The resulting equations are structured so that far from the surface of the body the momentum equations lead to the Bernoulli equation for the pressure, while the continuity equation reduces to the familiar potential equation. Close to the body surface, the governing equations and solution techniques are characteristic of those describing interacting boundary layers. The velocity components are computed with a coupled strongly implicit procedure. For transonic flows the artificial compressibility method is used to treat supersonic regions. Calculations are made for both laminar and turbulent flows over axisymmetric afterbody configurations. Present results compare favorably with other numerical solutions and/or experimental data.

  20. Optics of Water Cloud Droplets Mixed with Black-Carbon Aerosols

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Cairns, Brian; Mackowski, Daniel W.

    2014-01-01

    We use the recently extended superposition T-matrix method to calculate scattering and absorption properties of micrometer-sized water droplets contaminated by black carbon. Our numerically exact results reveal that, depending on the mode of soot-water mixing, the soot specific absorption can vary by a factor exceeding 6.5. The specific absorption is maximized when the soot material is quasi-uniformly distributed throughout the droplet interior in the form of numerous small monomers. The range of mixing scenarios captured by our computations implies a wide range of remote sensing and radiation budget implications of the presence of black carbon in liquid-water clouds. We show that the popular Maxwell-Garnett effective-medium approximation can be used to calculate the optical cross sections, single-scattering albedo, and asymmetry parameter for the quasi-uniform mixing scenario, but is likely to fail in application to other mixing scenarios and in computations of the elements of the scattering matrix.
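    For the quasi-uniform mixing scenario the abstract endorses, the Maxwell-Garnett rule is a one-line formula. A minimal sketch, with an assumed soot-like permittivity chosen only for illustration:

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for a volume fraction f of
    small spherical inclusions in a host medium; handles complex values."""
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * f * beta) / (1 - f * beta)

eps_water = 1.33 ** 2              # host: water, n ~ 1.33 in the visible
eps_soot = 2.88 + 1.51j            # inclusion: assumed value, illustration only
eps_eff = maxwell_garnett(eps_water, eps_soot, 0.01)   # 1% soot by volume
n_eff = eps_eff ** 0.5             # complex effective refractive index
```

    The imaginary part of `n_eff` quantifies the absorption the contaminated droplet acquires; per the abstract, this approximation is adequate for cross sections and asymmetry parameter in this scenario but not for scattering-matrix elements.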

  1. Molecular microelectrostatic view on electronic states near pentacene grain boundaries

    NASA Astrophysics Data System (ADS)

    Verlaak, Stijn; Heremans, Paul

    2007-03-01

    Grain boundaries are the most inevitable and pronounced structural defects in pentacene films. To study the effect of those structural defects on the electronic state distribution, the energy levels of a hole on molecules at and near the defect have been calculated using a submolecular self-consistent-polarization-field approach in combination with atomic charge-quadrupole interaction energy calculations. This method has been benchmarked prior to application on four idealized grain boundaries: a grain boundary void, a void with molecules squeezed in between two grains, a boundary between two grains with different crystallographic orientations, and a grain boundary void in which a permanent dipole (e.g., a water molecule) has nested. While idealized, those views highlight different aspects of real grain boundaries. Implications on macroscopic charge transport models are discussed, as well as some relation between growth conditions and the formation of the grain boundary.

  2. Proton threshold states in the {sup 22}Na(p,{gamma}){sup 23}Mg reaction and astrophysical implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comisel, H.; Hategan, C.; Graw, G.

    Proton threshold states in {sup 23}Mg are important for the astrophysically relevant proton capture reaction {sup 22}Na(p,{gamma}){sup 23}Mg. In the indirect determination of the resonance strength of the lowest states, which were not accessible by direct methods, some of the spin-parity assignments remained experimentally uncertain. We have investigated these states with shell model, Coulomb displacement, and Thomas-Ehrman shift calculations. From the comparison of calculated and observed properties, we relate the lowest relevant resonance state at E{sub x}=7643 keV to an excited 3/2{sup +} state in accordance with a recent experimental determination by Jenkins et al. From this we deduce significantly improved values for the {sup 22}Na(p,{gamma}){sup 23}Mg reaction rate at stellar temperatures below T{sub 9}=0.1.

  3. Molecular simulation of simple fluids and polymers in nanoconfinement

    NASA Astrophysics Data System (ADS)

    Rasmussen, Christopher John

    Prediction of phase behavior and transport properties of simple fluids and polymers confined to nanoscale pores is important to a wide range of chemical and biochemical engineering processes. A practical approach to investigate nanoscale systems is molecular simulation, specifically Monte Carlo (MC) methods. One of the most challenging problems is the need to calculate chemical potentials in simulated phases. Through the seminal work of Widom, practitioners have a powerful method for calculating chemical potentials. Yet, this method fails for dense and inhomogeneous systems, as well as for complex molecules such as polymers. In this dissertation, the gauge cell MC method, which had previously been successfully applied to confined simple fluids, was employed and extended to investigate nanoscale fluids in several key areas. Firstly, the process of cavitation (the formation and growth of bubbles) during desorption of fluids from nanopores was investigated. The dependence of cavitation pressure on pore size was determined with gauge cell MC calculations of the nucleation barriers and correlated with experimental data. Additional computational studies elucidated the role of surface defects and pore connectivity in the formation of cavitation bubbles. Secondly, the gauge cell method was extended to polymers. The method was verified against the literature results and found to be significantly more efficient. It was used to examine adsorption of polymers in nanopores. These results were applied to model the dynamics of translocation, the act of a polymer threading through a small opening, which is implicated in drug packaging and delivery, and DNA sequencing. Translocation dynamics was studied as diffusion along the free energy landscape. Thirdly, we show how computer simulation of polymer adsorption could shed light on the specifics of polymer chromatography, which is a key tool for the analysis and purification of polymers. 
The quality of separation depends on the physico-chemical mechanisms of polymer/pore interaction. We considered liquid chromatography at critical conditions, and calculated the dependence of the partition coefficient on chain length. Finally, solvent-gradient chromatography was modeled using a statistical model of polymer adsorption. A model for predicting separation of complex polymers (with functional groups or copolymers) was developed for practical use in chromatographic separations.
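    The Widom method the dissertation builds on is compact enough to sketch. A minimal implementation for a periodic cubic box; the repulsive pair potential and the particle configuration are illustrative assumptions:

```python
import math, random

def widom_mu_ex(coords, box, pair_u, kT, n_insert, seed=0):
    """Widom test-particle estimate of the excess chemical potential,
    mu_ex = -kT * ln< exp(-dU/kT) >, where dU is the energy of inserting
    a ghost particle at a random point (minimum-image periodic distances)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_insert):
        test = [rng.uniform(0.0, box) for _ in range(3)]
        dU = 0.0
        for p in coords:
            r2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                     for a, b in zip(test, p))
            dU += pair_u(math.sqrt(r2))
        acc += math.exp(-dU / kT)
    return -kT * math.log(acc / n_insert)

rng = random.Random(42)
box = 10.0
coords = [[rng.uniform(0.0, box) for _ in range(3)] for _ in range(20)]
soft_sphere = lambda r: 4.0 * r ** -12 if r < 2.5 else 0.0  # purely repulsive
mu_ex = widom_mu_ex(coords, box, soft_sphere, kT=1.0, n_insert=500)
```

    For a purely repulsive potential the estimate is non-negative, and for an ideal gas it is exactly zero; the failure mode the abstract mentions (dense or polymeric systems) shows up as almost all insertions contributing negligibly to the average.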

  4. Characterization of Structures and Compositions of Quadrilateral Pyroxenes by Raman Spectroscopy - Implications for Future Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Wang, A.; Jolliff, Bradley L.; Haskin, Larry A.; Kuebler, K. E.

    2000-01-01

    Raman spectral data are used to distinguish the major structure types and to calculate the major compositional parameters (Mg' and Wo) of quadrilateral pyroxenes. The discrepancies between calculated and measured values are within +/-0.1 cation unit.

  5. An evidence based method to calculate pedestrian crossing speeds in vehicle collisions (PCSC).

    PubMed

    Bastien, C; Wellings, R; Burnett, B

    2018-06-07

    Pedestrian accident reconstruction is necessary to establish the cause of death, i.e. to establish the vehicle collision speed and the circumstances leading to the pedestrian being impacted, and to determine the culpability of those involved for subsequent court enquiry. Understanding the complexity of the pedestrian attitude during an accident is necessary to ascertain the causes leading to the tragedy. A generic new method, named the Pedestrian Crossing Speed Calculator (PCSC) and based on vector algebra, is proposed to compute the pedestrian crossing speed at the moment of impact. PCSC uses vehicle damage and pedestrian anthropometric dimensions to establish a combination of head projection angles against the windscreen; this angle is then compared against the combined-velocities angle created from the vehicle speed and the pedestrian crossing speed at the time of impact. The method has been verified using one accident fatality case in which the exact vehicle and pedestrian crossing speeds were known from police forensic video analysis. PCSC was then applied to two other accident scenarios and correctly corroborated the witness statements regarding the pedestrians' crossing behaviours. The implications of PCSC could be significant once it is fully validated against further accident data, as the method is reversible, allowing the computation of vehicle impact velocity from pedestrian crossing speed as well as the verification of witness accounts. Copyright © 2018 Elsevier Ltd. All rights reserved.
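    The published PCSC derives its angles from vehicle damage and anthropometry; as a simplified sketch of the underlying vector relation only, if the head-trajectory deflection angle equals the angle of the combined velocity vector, then crossing speed and impact speed are each recoverable from the other (the reversibility noted above). The function names and the toy geometry below are assumptions, not the paper's formulation:

```python
import math

def ped_speed(v_vehicle, deflection_deg):
    """Pedestrian crossing speed from the lateral deflection angle of the
    head-contact trajectory, assuming that angle equals the angle of the
    combined velocity vector at impact (simplified toy geometry)."""
    return v_vehicle * math.tan(math.radians(deflection_deg))

def veh_speed(v_ped, deflection_deg):
    """Inverse relation: vehicle impact speed from a known crossing speed,
    illustrating the reversibility claimed for the method."""
    return v_ped / math.tan(math.radians(deflection_deg))

v_ped = ped_speed(13.9, 6.2)    # ~50 km/h vehicle, 6.2 deg deflection (toy values)
v_back = veh_speed(v_ped, 6.2)  # round trip recovers the impact speed
```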

  6. Estimating mean QALYs in trial-based cost-effectiveness analysis: the importance of controlling for baseline utility.

    PubMed

    Manca, Andrea; Hawkins, Neil; Sculpher, Mark J

    2005-05-01

    In trial-based cost-effectiveness analysis baseline mean utility values are invariably imbalanced between treatment arms. A patient's baseline utility is likely to be highly correlated with their quality-adjusted life-years (QALYs) over the follow-up period, not least because it typically contributes to the QALY calculation. Therefore, imbalance in baseline utility needs to be accounted for in the estimation of mean differential QALYs, and failure to control for this imbalance can result in a misleading incremental cost-effectiveness ratio. This paper discusses the approaches that have been used in the cost-effectiveness literature to estimate absolute and differential mean QALYs alongside randomised trials, and illustrates the implications of baseline mean utility imbalance for QALY calculation. Using data from a recently conducted trial-based cost-effectiveness study and a micro-simulation exercise, the relative performance of alternative estimators is compared, showing that widely used methods to calculate differential QALYs provide incorrect results in the presence of baseline mean utility imbalance regardless of whether these differences are formally statistically significant. It is demonstrated that multiple regression methods can be usefully applied to generate appropriate estimates of differential mean QALYs and an associated measure of sampling variability, while controlling for differences in baseline mean utility between treatment arms in the trial. Copyright 2004 John Wiley & Sons, Ltd
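    The regression adjustment advocated here can be illustrated on simulated trial data; all numbers below (sample size, utility distributions, the 0.10 QALY effect) are invented for the demonstration. The naive difference in mean QALYs absorbs the baseline imbalance, while including baseline utility as a covariate recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
treat = rng.integers(0, 2, n)                      # arm indicator (0/1)
base = rng.normal(0.65, 0.10, n) + 0.05 * treat    # imbalanced baseline utility
qaly = 0.8 * base + 0.10 * treat + rng.normal(0, 0.05, n)  # true effect = 0.10

# naive differential QALY: ignores the baseline imbalance
naive = qaly[treat == 1].mean() - qaly[treat == 0].mean()

# regression adjustment: QALY ~ intercept + treatment + baseline utility
X = np.column_stack([np.ones(n), treat, base])
beta = np.linalg.lstsq(X, qaly, rcond=None)[0]
adjusted = beta[1]                                 # controlled treatment effect
```

    The coefficient on the treatment indicator is the differential mean QALY controlled for baseline utility; its standard error (not shown) follows from the usual least-squares machinery.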

  7. Geometric method for forming periodic orbits in the Lorenz system

    NASA Astrophysics Data System (ADS)

    Nicholson, S. B.; Kim, Eun-jin

    2016-04-01

    Many systems in nature are out of equilibrium and irreversible. The non-detailed balance observable representation (NOR) provides a useful methodology for understanding the evolution of such non-equilibrium complex systems, by mapping out the correlation between two states to a metric space where a small distance represents a strong correlation [1]. In this paper, we present the first application of the NOR to a continuous system and demonstrate its utility in controlling chaos. Specifically, we consider the evolution of a continuous system governed by the Lorenz equation and calculate the NOR by following a sufficient number of trajectories. We then show how to control chaos by converting chaotic orbits to periodic orbits by utilizing the NOR. We further discuss the implications of our method for potential applications given the key advantage that this method makes no assumptions of the underlying equations of motion and is thus extremely general.
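    The NOR construction itself is beyond a short snippet, but the Lorenz trajectories it is built from are easy to generate; a minimal RK4 integrator at the classic chaotic parameters (step size and horizon are arbitrary choices, not taken from the paper):

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
traj = []
for _ in range(5000):                 # "sufficient number" of points is problem-dependent
    state = lorenz_step(state, 0.01)
    traj.append(state)
```

    A collection of such trajectories is the raw material from which state-to-state correlations, and hence the NOR metric, would be estimated.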

  8. New graphic AUC-based method to estimate overall survival benefit: pomalidomide reanalysis.

    PubMed

    Fenix-Caballero, S; Diaz-Navarro, J; Prieto-Callejero, B; Rios-Sanchez, E; Alegre-del Rey, E J; Borrero-Rubio, J M

    2016-02-01

    Difference in median survival is an erratic measure and sometimes does not provide a good assessment of survival benefit. The aim of this study was to reanalyse the overall survival benefit of pomalidomide from the pivotal clinical trial using a new area under the curve (AUC)-based method. In the pivotal trial, pomalidomide plus low-dose dexamethasone showed a significant survival benefit over high-dose dexamethasone, with a difference between medians of 4.6 months. The new AUC method, applied to the survival curves, gave an overall survival benefit of 2.6 months for the pomalidomide treatment. This average difference in OS was calculated for the 61.5% of patients for whom the time to event is reliable enough. This 2-month differential would have major clinical and pharmacoeconomic implications, both for cost-effectiveness studies and for the willingness of healthcare systems to pay for this treatment. © 2015 John Wiley & Sons Ltd.
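    The AUC idea corresponds to comparing restricted mean survival times, i.e. areas under the survival curves up to a common horizon. A sketch with invented step curves (not the pomalidomide trial data):

```python
def rmst(times, surv, tau):
    """Restricted mean survival time: the area under a step survival curve
    up to horizon tau. `times` are the (sorted) drop times and `surv` the
    curve value just after each drop; S(0) = 1."""
    area, t_prev, s_prev = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += s_prev * (t - t_prev)
        t_prev, s_prev = t, s
    return area + s_prev * (tau - t_prev)

# illustrative step curves over 24 months (invented numbers)
arm_a = rmst([3, 8, 14], [0.80, 0.55, 0.30], tau=24)
arm_b = rmst([2, 6, 12], [0.70, 0.40, 0.20], tau=24)
benefit = arm_a - arm_b   # average survival gain over the first 24 months
```

    Unlike the difference in medians, this average is computed from the whole curve up to the horizon, which is why it can differ substantially (here, from whatever the median gap happens to be).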

  9. Individual Returns to Vocational Education and Training Qualifications: Their Implications for Lifelong Learning.

    ERIC Educational Resources Information Center

    Ryan, Chris

    The individual returns to vocational education and training (VET) qualifications and their implications for lifelong learning were examined by analyzing data from Australia's 1997 Survey of Education and Training. Private rates of return to various VET qualifications for individuals were calculated by using the internal rate of return, which…

  10. The abundance and relative volatility of refractory trace elements in Allende Ca,Al-rich inclusions - Implications for chemical and physical processes in the solar nebula

    NASA Technical Reports Server (NTRS)

    Kornacki, Alan S.; Fegley, Bruce, Jr.

    1986-01-01

    The relative volatilities of lithophile refractory trace elements (LRTE) were determined using calculated 50-percent condensation temperatures. Then, the refractory trace-element abundances were measured in about 100 Allende inclusions. The abundance patterns found in Allende Ca,Al-rich inclusions (CAIs) and ultrarefractory inclusions were used to empirically modify the calculated LRTE volatility sequence. In addition, the importance of crystal-chemical effects, diffusion constraints, and grain transport for the origin of the trace-element chemistry of Allende CAIs (which have important implications for chemical and physical processes in the solar nebula) is discussed.

  11. Calorimetric investigation of the excess entropy of mixing in analbite-sanidine solid solutions: lack of evidence for Na,K short- range order and implications for two-feldspar thermometry.

    USGS Publications Warehouse

    Haselton, H.T.; Hovis, G.L.; Hemingway, B.S.; Robie, R.A.

    1983-01-01

    Heat capacities (5-380 K) have been measured by adiabatic calorimetry for five highly disordered alkali feldspars (Ab99Or1, Ab85Or15, Ab55Or45, Ab25Or75 and Ab1Or99). The thermodynamic and mineralogical implications of the results are discussed. The new data are also combined with recent data for plagioclases in order to derive a revised expression for the two-feldspar thermometer. Temperatures calculated from the revised expression tend to be higher than those from previous calculations. -J.A.Z.

  12. Experimental Test of Data Analysis Methods from Staggered Pair X-ray Beam Position Monitors at Bending Magnet Beamlines

    NASA Astrophysics Data System (ADS)

    Buth, G.; Huttel, E.; Mangold, S.; Steininger, R.; Batchelor, D.; Doyle, S.; Simon, R.

    2013-03-01

    Different methods have been proposed to calculate the vertical position of the photon beam centroid from the four blade currents of staggered pair X-ray beam position monitors (XBPMs) at bending magnet beamlines since they emerged about 15 years ago. The original difference-over-sum method introduced by Peatman and Holldack is still widely used, even though it has been proven to be rather inaccurate at large beam displacements. By systematically generating bumps in the electron orbit of the ANKA storage ring and comparing synchronized data from electron BPMs and XBPM blade currents, we have been able to show that the log-ratio method by S. F. Lin, B. G. Sun et al. is superior (i.e., its characteristic is closer to linear) to the ratio method, which in turn is superior to the difference-over-sum method. These findings are supported by simulations of the XBPM response to changes of the beam centroid. The heuristic basis for each of the methods is investigated. The implications of using XBPM readings for orbit correction are discussed.
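    The estimators are simple enough to compare directly. In the sketch below the blade currents follow an assumed exponential falloff with blade-beam distance (a toy response model, not the measured ANKA characteristic); under that model the log-ratio reading is exactly linear in the displacement while difference-over-sum saturates:

```python
import math

def diff_over_sum(a, b, c, d, k=1.0):
    """Peatman-Holldack estimate: upper-blade currents (a, b) minus
    lower-blade currents (c, d), over the total current."""
    return k * ((a + b) - (c + d)) / (a + b + c + d)

def log_ratio(a, b, c, d, k=1.0):
    """Log-ratio estimate (after Lin, Sun et al.)."""
    return k * math.log((a * b) / (c * d))

def blades(y, alpha=0.8):
    """Toy blade response: currents decay exponentially with distance
    from the beam centroid (an assumed model, for illustration only)."""
    up, dn = math.exp(alpha * y), math.exp(-alpha * y)
    return up, up, dn, dn

small = (diff_over_sum(*blades(0.5)), log_ratio(*blades(0.5)))
large = (diff_over_sum(*blades(3.0)), log_ratio(*blades(3.0)))
# under this model: log-ratio = 4*alpha*y (linear); diff-over-sum = tanh(alpha*y)
```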

  13. Notice of Data Availability for Federal Implementation Plans To Reduce Interstate Transport of Fine Particulate Matter and Ozone: Request for Comment (76 FR 1109)

    EPA Pesticide Factsheets

    This NODA requests public comment on two alternative allocation methodologies for existing units, on the unit-level allocations calculated using those alternative methodologies, on the data supporting the calculations, and on any resulting implications.

  14. Dissolved and labile concentrations of Cd, Cu, Pb, and Zn in the South Fork Coeur d'Alene River, Idaho: Comparisons among chemical equilibrium models and implications for biotic ligand models

    USGS Publications Warehouse

    Balistrieri, L.S.; Blank, R.G.

    2008-01-01

    In order to evaluate thermodynamic speciation calculations inherent in biotic ligand models, the speciation of dissolved Cd, Cu, Pb, and Zn in aquatic systems influenced by historical mining activities is examined using equilibrium computer models and the diffusive gradients in thin films (DGT) technique. Several metal/organic-matter complexation models, including WHAM VI, NICA-Donnan, and Stockholm Humic model (SHM), are used in combination with inorganic speciation models to calculate the thermodynamic speciation of dissolved metals and concentrations of metal associated with biotic ligands (e.g., fish gills). Maximum dynamic metal concentrations, determined from total dissolved metal concentrations and thermodynamic speciation calculations, are compared with labile metal concentrations measured by DGT to assess which metal/organic-matter complexation model best describes metal speciation and, thereby, biotic ligand speciation, in the studied systems. Results indicate that the choice of model that defines metal/organic-matter interactions does not affect calculated concentrations of Cd and Zn associated with biotic ligands for geochemical conditions in the study area, whereas concentrations of Cu and Pb associated with biotic ligands depend on whether the speciation calculations use WHAM VI, NICA-Donnan, or SHM. Agreement between labile metal concentrations and dynamic metal concentrations occurs when WHAM VI is used to calculate Cu speciation and SHM is used to calculate Pb speciation. Additional work in systems that contain wide ranges in concentrations of multiple metals should incorporate analytical speciation methods, such as DGT, to constrain the speciation component of biotic ligand models. © 2008 Elsevier Ltd.

  15. Renormalization group analysis of turbulence

    NASA Technical Reports Server (NTRS)

    Smith, Leslie M.

    1989-01-01

    The objective is to understand and extend a recent theory of turbulence based on dynamic renormalization group (RNG) techniques. The application of RNG methods to hydrodynamic turbulence was explored most extensively by Yakhot and Orszag (1986). An eddy viscosity was calculated which was consistent with the Kolmogorov inertial range by systematic elimination of the small scales in the flow. Further, assumed smallness of the nonlinear terms in the redefined equations for the large scales results in predictions for important flow constants such as the Kolmogorov constant. It is emphasized that no adjustable parameters are needed. The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling.

  16. Robustness analysis of multirate and periodically time varying systems

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
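    The transformation to an equivalent time-invariant system can be sketched for a discrete-time periodic system: propagating the state over one full period yields a constant (monodromy) matrix, and stability holds when its eigenvalues lie inside the unit circle. The matrices below are arbitrary examples, not from the paper:

```python
import numpy as np

def monodromy(As):
    """Lift a linear periodically time-varying system x_{k+1} = A_k x_k
    (period N = len(As)) to an equivalent time-invariant one: the state
    transition over one full period, Phi = A_{N-1} ... A_1 A_0."""
    Phi = np.eye(As[0].shape[0])
    for A in As:
        Phi = A @ Phi
    return Phi

A0 = np.array([[0.0, 1.0], [-0.5, 0.3]])   # arbitrary period-2 example
A1 = np.array([[1.2, 0.0], [0.1, 0.4]])
Phi = monodromy([A0, A1])
stable = np.max(np.abs(np.linalg.eigvals(Phi))) < 1.0
```

    Gain/phase margins and singular-value robustness tests are then applied to the lifted system exactly as they would be to any time-invariant one.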

  17. Model Calculations with Excited Nuclear Fragmentations and Implications of Current GCR Spectra

    NASA Astrophysics Data System (ADS)

    Saganti, Premkumar

    As a result of the fragmentation process in nuclei, energy from the excited states may also contribute to the radiation damage to the cell structure. Radiation-induced damage to the human body from the excited states of oxygen and several other nuclei and their fragments is of concern in the context of the measured abundance of the current galactic cosmic ray (GCR) environment. Nuclear shell model based calculations with the Selective-Core (Saganti-Cucinotta) approach, in which O-16 fragments into N-15 with a proton knockout or into O-15 with a neutron knockout, are being expanded and are very promising. In our ongoing expansions of these nuclear fragmentation model calculations and assessments, we present some of the prominent nuclear interactions from a total of 190 isotopes identified for the current model expansion based on the Quantum Multiple Scattering Fragmentation Model (QMSFRG) of Cucinotta. Radiation transport model calculations implementing these energy-level spectral characteristics are expected to enhance the understanding of radiation damage at the cellular level. Implications of these excited-state spectral calculations for the assessment of radiation damage to the human body may provide an enhanced understanding of space radiation risk.

  18. Early-Stage Aggregation of Human Islet Amyloid Polypeptide

    NASA Astrophysics Data System (ADS)

    Guo, Ashley; de Pablo, Juan

    Human islet amyloid polypeptide (hIAPP, or human amylin) is implicated in the development of type II diabetes. hIAPP is known to aggregate into amyloid fibrils; however, it is prefibrillar oligomeric species, rather than mature fibrils, that are proposed to be cytotoxic. In order to better understand the role of hIAPP aggregation in the onset of disease, as well as to design effective diagnostics and therapeutics, it is crucial to understand the mechanism of early-stage hIAPP aggregation. In this work, we use atomistic molecular dynamics simulations combined with multiple advanced sampling techniques to examine the formation of the hIAPP dimer and trimer. Metadynamics calculations reveal a free energy landscape for the hIAPP dimer, which suggest multiple possible transition pathways. We employ finite temperature string method calculations to identify favorable pathways for dimer and trimer formation, along with relevant free energy barriers and intermediate structures. Results provide valuable insights into the mechanisms and energetics of hIAPP aggregation. In addition, this work demonstrates that the finite temperature string method is an effective tool in the study of protein aggregation. Funded by National Institute of Standards and Technology.

  19. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
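    The plain Arnoldi factorization at the heart of these methods fits in a few lines; the implicitly restarted variant repeatedly compresses such a factorization to a smaller one of the same form, which is omitted here. A sketch on a matrix with known spectrum (the matrix, starting vector, and subspace size are illustrative choices):

```python
import numpy as np

def arnoldi(A, v0, m):
    """m-step Arnoldi factorization A V_m = V_{m+1} H (H upper Hessenberg).
    Ritz values, the eigenvalues of H[:m, :m], approximate extremal
    eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, m = 200, 30
A = np.diag(np.arange(1.0, n + 1))          # known spectrum: 1, 2, ..., 200
V, H = arnoldi(A, rng.standard_normal(n), m)
ritz = np.linalg.eigvals(H[:m, :m])
top = ritz.real.max()                       # approaches the largest eigenvalue, 200
```

    Restarting matters because, as here, only the extremal Ritz values are accurate for modest m; the implicit restart discards unwanted spectral directions without growing the subspace.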

  20. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a 1 km³ grid using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish robustness of the inversion results using the gCAP method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
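    The DC/CLVD/ISO split rests on the eigenvalues of the tensor. A sketch using one common convention for the CLVD ratio (several conventions exist in the literature; this is not necessarily the one used in the study):

```python
import numpy as np

def iso_clvd(M):
    """Decompose a symmetric moment/potency tensor: isotropic part
    iso = tr(M)/3, plus the CLVD ratio eps = m2 / max(|m1|, |m3|) of the
    sorted deviatoric eigenvalues (0 = pure double couple, +/-0.5 = pure
    CLVD). One common convention among several."""
    iso = np.trace(M) / 3.0
    dev_eigs = np.sort(np.linalg.eigvalsh(M - iso * np.eye(3)))
    denom = max(abs(dev_eigs[0]), abs(dev_eigs[2]))
    eps = dev_eigs[1] / denom if denom > 0 else 0.0
    return iso, eps

# pure double couple (vertical strike-slip pattern): expect iso = 0, eps = 0
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, eps = iso_clvd(M_dc)
```

    In an inversion like the one described, these two scalars are what the reported "~1% ISO, ~0% CLVD" percentages summarize.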

  1. Individual Differences in Mathematical Competence Modulate Brain Responses to Arithmetic Errors: An fMRI Study

    ERIC Educational Resources Information Center

    Ansari, Daniel; Grabner, Roland H.; Koschutnig, Karl; Reishofer, Gernot; Ebner, Franz

    2011-01-01

    Data from both neuropsychological and neuroimaging studies have implicated the left inferior parietal cortex in calculation. Comparatively less attention has been paid to the neural responses associated with the commission of calculation errors and how the processing of arithmetic errors is modulated by individual differences in mathematical…

  2. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water

    PubMed Central

    Wu, Xiongwu; Brooks, Bernard R.

    2015-01-01

Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. Each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is implicated by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is for constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide of 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups and that the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66’s pKa. PMID:26506245

  3. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water.

    PubMed

    Wu, Xiongwu; Brooks, Bernard R

    2015-10-01

Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. Each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is implicated by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is for constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide of 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups and that the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66's pKa.
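The link between relative free energies and equilibrium molar fractions that VMMS exploits is the Boltzmann relation x_i ∝ exp(−G_i/kT). A minimal sketch with illustrative values (a free-energy gap of 1.364 kcal/mol corresponds to one pKa unit at 298 K):

```python
import math

def molar_fractions(free_energies_kcal, temp_k=298.15):
    """Equilibrium molar fractions of multiple states from their
    relative free energies (kcal/mol) via Boltzmann weights."""
    rt = 0.0019872041 * temp_k   # gas constant in kcal/(mol K) times T
    g0 = min(free_energies_kcal)
    weights = [math.exp(-(g - g0) / rt) for g in free_energies_kcal]
    z = sum(weights)
    return [w / z for w in weights]

# Two protonation states separated by 1.364 kcal/mol at 298 K
# differ in population by about a factor of 10 (one pKa unit).
fracs = molar_fractions([0.0, 1.364])
```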

  4. Models for integrated pest control and their biological implications.

    PubMed

    Tang, Sanyi; Cheke, Robert A

    2008-09-01

Successful integrated pest management (IPM) control programmes depend on many factors which include host-parasitoid ratios, starting densities, timings of parasitoid releases, dosages and timings of insecticide applications and levels of host-feeding and parasitism. Mathematical models can help us to clarify and predict the effects of such factors on the stability of host-parasitoid systems, which we illustrate here by extending the classical continuous and discrete host-parasitoid models to include an IPM control programme. The results indicate that one of three control methods can maintain the host level below the economic threshold (ET) in relation to different ET levels, initial densities of host and parasitoid populations and host-parasitoid ratios. The effects of host intrinsic growth rate and parasitoid searching efficiency on host mean outbreak period can be calculated numerically from the models presented. The instantaneous pest killing rate of an insecticide application is also estimated from the models. The results imply that the modelling methods described can help in the design of appropriate control strategies and assist management decision-making. The results also indicate that a high initial density of parasitoids (such as in inundative releases) and high parasitoid inter-generational survival rates will lead to more frequent host outbreaks and, therefore, greater economic damage. The biological implications of this counterintuitive result are discussed.
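The kind of discrete model described above can be illustrated with the classical Nicholson-Bailey host-parasitoid map extended by a threshold-triggered control rule. All parameter values below are illustrative, not taken from the paper:

```python
import math

def simulate_ipm(lam=2.0, a=0.05, c=1.0, et=50.0, kill=0.6,
                 release=10.0, steps=100, h0=20.0, p0=5.0):
    """Nicholson-Bailey host-parasitoid map with a simple IPM rule:
    whenever the host density would exceed the economic threshold (ET),
    a fraction `kill` of hosts is removed (insecticide application) and
    `release` parasitoids are added (inundative release)."""
    h, p = h0, p0
    history = []
    for _ in range(steps):
        escape = math.exp(-a * p)        # fraction of hosts escaping parasitism
        h_next = lam * h * escape        # hosts: growth of survivors
        p_next = c * h * (1.0 - escape)  # parasitoids: from parasitized hosts
        if h_next > et:                  # IPM intervention triggered
            h_next *= (1.0 - kill)
            p_next += release
        h, p = h_next, p_next
        history.append((h, p))
    return history

traj = simulate_ipm()
peak_host = max(h for h, _ in traj)   # stays at or below the ET for these parameters
```

With these parameters the intervention keeps the recorded host density at or below the ET; varying the release size and survival rates is how the counterintuitive outbreak-frequency effect can be explored.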

  5. The extent of food waste generation across EU-27: different calculation methods and the reliability of their results.

    PubMed

    Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen

    2014-08-01

The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting down food waste to one-half by 2020. However, implementing effective prevention measures requires knowledge of the reasons and the scale of food waste generation along the food supply chain. The available data for Europe are very heterogeneous and doubts about their reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of our own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product groups. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which builds the basis for the monitoring and management of food waste. © The Author(s) 2014.

  6. Computational study of the energetics and defect clustering tendencies for Y- and La-doped UO 2

    DOE PAGES

    Solomon, J. M.; Alexandrov, V.; Sadigh, B.; ...

    2014-07-24

The energetics and defect-ordering tendencies in solid solutions of fluorite-structured UO2 with trivalent rare earth cations (M3+ = Y, La) are investigated computationally using a combination of ionic-pair-potential and density-functional-theory (DFT) based methods. Calculated enthalpies of formation with respect to constituent oxides show higher energetic stability for La solid solutions relative to Y, consistent with the differences in experimentally measured solubility limits for the two systems. Additionally, calculations performed for different atomic configurations show a preference for reduced (increased) oxygen vacancy coordination around La (Y) dopants. The current results are shown to be qualitatively consistent with related calculations and calorimetry measurements in other trivalent-doped fluorite-structured oxides, which show a tendency for increasing stability and increasing preference for higher oxygen coordination with increasing size of the trivalent impurity. The implications of these results are discussed in the context of the effect of trivalent impurities on oxygen-ion mobilities in UO2, which are relevant to the understanding of experimental observations concerning the effect of trivalent fission products on oxidative corrosion rates of spent nuclear fuel.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.

Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial vector current operators made from HISQ quarks, clover quarks and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggered $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in clover and HISQ formalisms for the mass and decay constant of the $\phi$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams, are $m_\phi = 1.023(5)$ GeV and $f_\phi = 0.238(3)$ GeV, in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $m_{u/d}$ values going down to the physical point.

  8. Puerto Rico School Characteristics and Student Graduation: Implications for Research and Policy. REL 2017-266

    ERIC Educational Resources Information Center

    Therriault, Susan; Li, Yibing; Bhatt, Monica P.; Narlock, Jason

    2017-01-01

    High school graduation is a critical milestone for students as it has implications for future opportunity and success on both individual and societal levels. In Puerto Rico recent changes in how high school graduation rates are calculated have drawn closer attention to the issue of high school graduation and thus a growing interest in…

  9. On the concept of critical surface excess of micellization.

    PubMed

    Talens-Alesson, Federico I

    2010-11-16

The critical surface excess of micellization (CSEM) should be regarded as the critical condition for micellization of ionic surfactants instead of the critical micelle concentration (CMC). There is a correspondence between the surface excesses Γ of anionic, cationic, and zwitterionic surfactants at their CMCs, which would be the CSEM values, and the critical association distance for ionic pair association calculated using Bjerrum's correlation. Further support for this concept is given by an accurate method for the prediction of the relative binding of alkali cations onto dodecylsulfate (NaDS) micelles. This method uses a relative binding strength parameter calculated from the values of surface excess Γ at the CMC of the alkali dodecylsulfates. This links both the binding of a given cation onto micelles and the onset for micellization of its surfactant salt. The CSEM concept implies that micelles form at the air-water interface unless another surface with greater affinity for micelles exists. The process would start when surfactant monomers are close enough to each other for ionic pairing with counterions and the subsequent assembly of these pairs becomes unavoidable. This would explain why the surface excess Γ values of different surfactants are more similar than their CMCs: the latter are just the bulk phase concentrations in equilibrium with chemicals with different hydrophobicity. An intriguing implication is that CSEM values may be used to calculate the actual critical distances of ionic pair formation for different cations, replacing Bjerrum's estimates, which only discriminate by the magnitude of the charge.
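Bjerrum's critical association distance invoked above is directly computable from physical constants. A small sketch assuming water at 25 °C; the comparison of this distance with CSEM-derived values is the paper's contribution and is not reproduced here:

```python
import math

# Physical constants (SI, CODATA values)
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def bjerrum_length(eps_r, temp_k):
    """Separation at which the Coulomb energy of two unit charges
    equals the thermal energy k_B*T in a medium of permittivity eps_r."""
    return E_CHARGE**2 / (4.0 * math.pi * EPS0 * eps_r * K_B * temp_k)

def critical_association_distance(z1, z2, eps_r, temp_k):
    """Bjerrum's critical distance for ion-pair association:
    half the Bjerrum length, scaled by the charge product."""
    return abs(z1 * z2) * bjerrum_length(eps_r, temp_k) / 2.0

# Water at 25 C (eps_r ~ 78.4): lb ~ 0.71 nm, critical distance ~ 0.36 nm
lb = bjerrum_length(78.4, 298.15)
q = critical_association_distance(-1, 1, 78.4, 298.15)
```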

  10. Reliability of the Walker Cranial Nonmetric Method and Implications for Sex Estimation.

    PubMed

    Lewis, Cheyenne J; Garvin, Heather M

    2016-05-01

The cranial trait scoring method presented in Buikstra and Ubelaker (Standards for data collection from human skeletal remains. Fayetteville, AR: Arkansas Archeological Survey Research Series No. 44, 1994) and Walker (Am J Phys Anthropol, 136, 2008, 39) is the most common nonmetric cranial sex estimation method utilized by physical and forensic anthropologists. As such, the reliability and accuracy of the method is vital to ensure its validity in forensic applications. In this study, inter- and intra-observer error rates for the Walker scoring method were calculated using a sample of U.S. White and Black individuals (n = 135). Cohen's weighted kappas, intraclass correlation coefficients, and percentage agreements indicate good agreement between trials and observers for all traits except the mental eminence. Slight disagreement in scoring, however, was found to impact sex classifications, leading to lower accuracy rates than those published by Walker. Furthermore, experience does appear to impact trait scoring and sex classification. The use of revised population-specific equations that avoid the mental eminence is highly recommended to minimize the potential for misclassifications. © 2016 American Academy of Forensic Sciences.
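Cohen's weighted kappa used for the inter- and intra-observer comparisons can be sketched as follows; the two score lists below are hypothetical 1-5 trait scores, not the study's data:

```python
import numpy as np

def weighted_kappa(r1, r2, categories, weights="linear"):
    """Cohen's weighted kappa for two raters scoring ordinal
    categories (e.g. 1-5 nonmetric trait scores). Linear weights
    penalize a disagreement by |i - j| / (k - 1); quadratic weights
    square that ratio."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[idx[a], idx[b]] += 1
    obs /= obs.sum()                       # observed joint proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)
    if weights == "quadratic":
        w = w**2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Hypothetical trait scores from two scoring trials of ten crania
trial1 = [1, 2, 3, 3, 4, 5, 2, 2, 3, 4]
trial2 = [1, 2, 3, 4, 4, 5, 2, 3, 3, 4]
kappa = weighted_kappa(trial1, trial2, [1, 2, 3, 4, 5])
```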

  11. Theoretical studies on atmospheric chemistry of HFE-245mc and perfluoro-ethyl formate: Reaction with OH radicals, atmospheric fate of alkoxy radical and global warming potential

    NASA Astrophysics Data System (ADS)

    Lily, Makroni; Baidya, Bidisha; Chandra, Asit K.

    2017-02-01

    Theoretical studies have been performed on the kinetics, mechanism and thermochemistry of the hydrogen abstraction reactions of CF3CF2OCH3 (HFE-245mc) and CF3CF2OCHO with OH radical using DFT based M06-2X method. IRC calculation shows that both hydrogen abstraction reactions proceed via weakly bound hydrogen-bonded complex preceding to the formation of transition state. The rate coefficients calculated by canonical transition state theory along with Eckart's tunnelling correction at 298 K: k1(CF3CF2OCH3 + OH) = 1.09 × 10-14 and k2(CF3CF2OCHO + OH) = 1.03 × 10-14 cm3 molecule-1 s-1 are in very good agreement with the experimental values. The atmospheric implications of CF3CF2OCH3 and CF3CF2OCHO are also discussed.
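The canonical transition state theory expression used above has the familiar form k = κ(T)·(k_B T/h)·exp(−ΔG‡/RT). The sketch below substitutes the simpler Wigner tunnelling correction for the Eckart correction used in the paper, and the barrier and imaginary frequency are illustrative values, not the paper's:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def wigner_kappa(imag_freq_cm1, temp_k):
    """Wigner tunnelling correction (a simpler stand-in for the Eckart
    correction used in the paper) from the magnitude of the imaginary
    frequency of the transition state, given in cm^-1."""
    c = 2.99792458e10                  # speed of light, cm/s
    nu = imag_freq_cm1 * c             # frequency in s^-1
    x = H * nu / (K_B * temp_k)
    return 1.0 + x * x / 24.0

def tst_rate(dg_act_kj_mol, temp_k, kappa=1.0):
    """Canonical TST rate constant from the activation free energy:
    k = kappa * (kB*T/h) * exp(-dG_act / (R*T))."""
    return kappa * (K_B * temp_k / H) * math.exp(-dg_act_kj_mol * 1e3 / (R * temp_k))

kappa = wigner_kappa(1500.0, 298.0)   # illustrative imaginary frequency
k = tst_rate(40.0, 298.0, kappa)      # illustrative 40 kJ/mol barrier
```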

Electronic structure of PPP@ZnO from all-electron quasiparticle calculations

    NASA Astrophysics Data System (ADS)

    Höffling, Benjamin; Nabok, Dimitri; Draxl, Claudia; Condensed Matter Theory Group, Humboldt University Berlin Team

    We investigate the electronic properties of poly(para-phenylene) (PPP) adsorbed on the non-polar (001) surface of rocksalt (rs) ZnO using all-electron density functional theory (DFT) as well as quasiparticle (QP) calculations within the GW approach. A particular focus is put on the electronic band discontinuities at the interface, where we investigate the impact of quantum confinement, molecular polarization, and charge rearrangement. For our prototypical system, PPP@ZnO, we find a type-I heterostructure. Comparison of the band offsets derived from a QP-treatment of the hybrid system with predictions based on mesoscopic methods, like the Shockley-Anderson model or alignment via the electrostatic potential, reveals the inadequacy of these simple approaches for the prediction of the electronic structure of such inorganic/organic heterosystems. Finally, we explore the optical excitations of the interface compared to the features of the pristine components and discuss the methodological implications for the ab-initio treatment of interface electronics.

  13. Monte Carlo study of skin optical clearing to enhance light penetration in the tissue: implications for photodynamic therapy of acne vulgaris

    NASA Astrophysics Data System (ADS)

    Bashkatov, Alexey N.; Genina, Elina A.; Tuchin, Valery V.; Altshuler, Gregory B.; Yaroslavsky, Ilya V.

    2008-06-01

Results of Monte Carlo simulations of skin optical clearing are presented. The model calculations were carried out with the aim of studying the spectral response of skin under the action of immersion liquids and of calculating the enhancement of light penetration depth. In summary, we have shown that: 1) application of glucose, propylene glycol and glycerol produces a significant decrease of light scattering in different skin layers; 2) the maximal clearing effect is obtained for optical clearing of the skin dermis; however, the absorbed light fraction in the dermis changes insignificantly, independently of the clearing agent and the site of its administration; 3) in contrast, the light fraction absorbed in the skin adipose layer increases significantly when the dermis is optically cleared. This is important because it can be used for the development of optical methods of obesity treatment; 4) optical clearing of superficial skin layers can be used to decrease the power of light radiation needed for the treatment of acne vulgaris.

  14. Hypersonic Viscous Flow Over Large Roughness Elements

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan M.

    2009-01-01

Viscous flow over discrete or distributed surface roughness has great implications for hypersonic flight due to aerothermodynamic considerations related to laminar-turbulent transition. Current prediction capability is greatly hampered by the limited knowledge base for such flows. To help fill that gap, numerical computations are used to investigate the intricate flow physics involved. An unstructured mesh, compressible Navier-Stokes code based on the space-time conservation element, solution element (CESE) method is used to perform time-accurate Navier-Stokes calculations for two roughness shapes investigated in wind tunnel experiments at NASA Langley Research Center. It was found through a 2D parametric study that, at subcritical Reynolds numbers of the boundary layer, absolute instability, which results in vortex shedding downstream, is likely to weaken at supersonic free-stream conditions. On the other hand, convective instability may be the dominant mechanism for supersonic boundary layers. Three-dimensional calculations for a rectangular or cylindrical roughness element at post-shock Mach numbers of 4.1 and 6.5 also confirm that no self-sustained vortex generation is present.

  15. Kinetic barriers in the isomerization of substituted ureas: implications for computer-aided drug design.

    PubMed

    Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R

    2016-05-01

    Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energetic barrier for isomerization of the cis and trans state poses additional challenges on computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches as well as physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.

  16. Marine Terrace Deposits along the Mediterranean Coast on the Southeastern Turkey and Their Implications for Tectonic Uplift and Sea Level Change

    NASA Astrophysics Data System (ADS)

    Tari, U.; Tüysüz, O.; Blackwell, B. A. B.; Genç, Ş. C.; Florentin, J. A.; Mahmud, Z.; Li, G. L.; Blickstein, J. I. B.; Skinner, A. R.

    2016-12-01

Tectonic movements among the African, Arabian and Anatolian Plates have deformed the eastern Mediterranean. These movements caused transtensional opening of the NE-trending Antakya Graben since the late Pliocene. Tectonic uplift coupled with Quaternary sea-level fluctuations has produced several stacked marine terraces along the Mediterranean coasts on the graben. Here, marine terrace deposits that sit on both flanks of the graben at elevations between 3 and 175 m were dated using the electron spin resonance (ESR) method in order to calculate uplift rates. The ESR ages range from 12 ka in late MIS 2 to 457 ka in MIS 9-11, but most of the terraces contain molluscs reworked from several earlier deposits due to successive tectonic movements and sea-level fluctuations. By dating in situ fossils along the basal contacts of the marine terraces, uplift rates were calculated on both sides of the Antakya Graben. Results indicate that these deposits were mainly uplifted by local active faults rather than regional movements.
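The uplift-rate calculation from a dated terrace reduces to elevation change over age. A minimal sketch with illustrative numbers, not the paper's data:

```python
def uplift_rate_mm_per_yr(elevation_m, paleo_sl_m, age_ka):
    """Mean rock-uplift rate from a dated marine terrace: the present
    terrace elevation minus the eustatic sea level at the time of
    formation, divided by the terrace age (ka), returned in mm/yr."""
    return (elevation_m - paleo_sl_m) * 1e3 / (age_ka * 1e3)

# Illustrative values only: a terrace now at 60 m, formed during a
# highstand at +3 m, ESR-dated to 200 ka.
rate = uplift_rate_mm_per_yr(60.0, 3.0, 200.0)   # 0.285 mm/yr
```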

  17. Dynamo magnetic field modes in thin astrophysical disks - An adiabatic computational approximation

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1991-01-01

    An adiabatic approximation is applied to the calculation of turbulent MHD dynamo magnetic fields in thin disks. The adiabatic method is employed to investigate conditions under which magnetic fields generated by disk dynamos permeate the entire disk or are localized to restricted regions of a disk. Two specific cases of Keplerian disks are considered. In the first, magnetic field diffusion is assumed to be dominated by turbulent mixing leading to a dynamo number independent of distance from the center of the disk. In the second, the dynamo number is allowed to vary with distance from the disk's center. Localization of dynamo magnetic field structures is found to be a general feature of disk dynamos, except in the special case of stationary modes in dynamos with constant dynamo number. The implications for the dynamical behavior of dynamo magnetized accretion disks are discussed and the results of these exploratory calculations are examined in the context of the protosolar nebula and accretion disks around compact objects.

  18. Probability of detecting nematode infestations for quarantine sampling with imperfect extraction efficacy

    PubMed Central

    Chen, Peichen; Liu, Shih-Chia; Liu, Hung-I; Chen, Tse-Wei

    2011-01-01

For quarantine sampling, it is of fundamental importance to determine the probability of finding an infestation when a specified number of units are inspected. In general, current sampling procedures assume 100% probability (perfect) of detecting a pest if it is present within a unit. Ideally, a nematode extraction method should remove all stages of all species with 100% efficiency regardless of season, temperature, or other environmental conditions; in practice however, no method approaches these criteria. In this study we determined the probability of detecting nematode infestations for quarantine sampling with imperfect extraction efficacy. The required sample size and the risk involved in detecting nematode infestations with imperfect extraction efficacy are also presented. Moreover, we developed a computer program to calculate confidence levels for different scenarios with varying proportions of infestation and efficacy of detection. In addition, a case study, presenting the extraction efficacy of the modified Baermann's Funnel method on Aphelenchoides besseyi, is used to exemplify the use of our program to calculate the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The result has important implications for quarantine programs and highlights the need for a very large number of samples if perfect extraction efficacy is not achieved in such programs. We believe that the results of the study will be useful for the determination of realistic goals in the implementation of quarantine sampling. PMID:22791911
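The core sampling relation is that the probability of detecting at least one infested unit is 1 − (1 − p·e)^n, where p is the infestation proportion, e the extraction efficacy, and n the number of units inspected. A sketch of that calculation (our own implementation for illustration, not the authors' program):

```python
import math

def detection_probability(n_units, infest_prop, efficacy=1.0):
    """Probability of detecting at least one infested unit when
    n_units are inspected, prevalence is infest_prop, and the
    extraction method finds an infestation within a sampled unit
    with probability `efficacy` (1.0 = the usual perfect assumption)."""
    return 1.0 - (1.0 - infest_prop * efficacy) ** n_units

def required_sample_size(confidence, infest_prop, efficacy=1.0):
    """Smallest n giving at least the stated confidence of detection."""
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - infest_prop * efficacy))

# 95% confidence of finding a 5% infestation:
n_perfect = required_sample_size(0.95, 0.05)         # perfect extraction
n_imperfect = required_sample_size(0.95, 0.05, 0.5)  # 50% efficacy: ~2x the samples
```

Halving the extraction efficacy roughly doubles the required sample size, which is the practical point the abstract makes about imperfect extraction.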

  19. On modeling the paleohydrologic response of closed-basin lakes to fluctuations in climate: Methods, applications, and implications

    NASA Astrophysics Data System (ADS)

    Liu, Ganming; Schwartz, Franklin W.

    2014-04-01

Climate reconstructions using tree rings and lake sediments have contributed significantly to the understanding of Holocene climates. Approaches focused specifically on reconstructing the temporal water-level response of lakes, however, are much less developed. This paper describes a statistical correlation approach based on time series with Palmer Drought Severity Index (PDSI) values derived from instrumental records or tree rings as a basis for reconstructing stage hydrographs for closed-basin lakes. We use a distributed lag correlation model to calculate a variable, ω_t, that represents the water level of a lake at any time t as a result of integrated climatic forcing from preceding years. The method was validated using both synthetic and measured lake-stage data and the study found that a lake's "memory" of climate fades as time passes, following an exponential-decay function at rates determined by the correlation time lag. Calculated trends in ω_t for Moon Lake, Rice Lake, and Lake Mina from A.D. 1401 to 1860 compared well with the established chronologies (salinity, moisture, and Mg/Ca ratios) reconstructed from sediments. This method provides an independent approach for developing high-resolution information on lake behaviors in preinstrumental times and has been able to identify problems of climate signal deterioration in sediment-based climate reconstructions in lakes with a long time lag.
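The fading "memory" described above can be sketched as an exponentially weighted distributed lag of the PDSI series. This is an illustration of the idea, not the paper's exact estimator:

```python
import math

def lake_response(pdsi, tau):
    """For each year t, omega_t is a weighted average of current and
    preceding PDSI values with weights decaying exponentially at a
    rate set by the lag time tau (years), so the lake's memory of
    past climate fades as time passes."""
    omega = []
    for t in range(len(pdsi)):
        weights = [math.exp(-k / tau) for k in range(t + 1)]
        total = sum(w * pdsi[t - k] for k, w in enumerate(weights))
        omega.append(total / sum(weights))
    return omega

# A wet decade followed by a dry decade: the lake level lags the climate,
# falling gradually rather than dropping with the PDSI step.
series = [2.0] * 10 + [-2.0] * 10
levels = lake_response(series, tau=5.0)
```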

  20. Potential predictors for the amount of intra-operative brain shift during deep brain stimulation surgery

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Pallavaram, Srivatsan; Konrad, Peter E.; Neimat, Joseph S.; D'Haese, Pierre-François; Dawant, Benoit M.

    2011-03-01

A number of groups have reported on the occurrence of intra-operative brain shift during deep brain stimulation (DBS) surgery. This has a number of implications for the procedure including an increased chance of intra-cranial bleeding and complications due to the need for more exploratory electrodes to account for the brain shift. It has been reported that the amount of pneumocephalus or air invasion into the cranial cavity due to the opening of the dura correlates with intraoperative brain shift. Therefore, pre-operatively predicting the amount of pneumocephalus expected during surgery is of interest toward accounting for brain shift. In this study, we used 64 DBS patients who received bilateral electrode implantations and had a post-operative CT scan acquired immediately after surgery (CT-PI). For each patient, the volumes of the pneumocephalus, left ventricle, right ventricle, third ventricle, white matter, grey matter, and cerebral spinal fluid were calculated. The pneumocephalus was calculated from the CT-PI utilizing a region growing technique that was initialized with an atlas-based image registration method. A multi-atlas-based image segmentation method was used to segment out the ventricles of each patient. The Statistical Parametric Mapping (SPM) software package was utilized to calculate the volumes of the cerebral spinal fluid (CSF), white matter and grey matter. The volume of individual structures had a moderate correlation with pneumocephalus. Utilizing a multi-linear regression between the volume of the pneumocephalus and the statistically relevant individual structures, a Pearson's coefficient of r = 0.4123 (p = 0.0103) was found. This study shows preliminary results that could be used to develop a method to predict the amount of pneumocephalus ahead of the surgery.

  1. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than an interpolated value from a pre-calculated look-up-table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers based on the transmission of the current level to produce a cumulative radiance that is reflected from a surface and measured at the observer's aperture. Then a unique set of asymmetry and backscattering phase-function parameters is calculated, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  2. Recording Adverse Events Following Joint Arthroplasty: Financial Implications and Validation of an Adverse Event Assessment Form.

    PubMed

    Lee, Matthew J; Mohamed, Khalid M S; Kelly, John C; Galbraith, John G; Street, John; Lenehan, Brian J

    2017-09-01

    In Ireland, funding of joint arthroplasty procedures has moved to a pay-by-results national tariff system. Typically, adverse clinical events are recorded via retrospective chart-abstraction methods by administrative staff. Missed or undocumented events not only affect the quality of patient care but also may unrealistically skew budgetary decisions that impact fiscal viability of the service. Accurate recording confers clinical benefits and financial transparency. The aim of this study was to compare a prospectively implemented adverse events form with the current national retrospective chart-abstraction method in terms of pay-by-results financial implications. An adverse events form adapted from a similar validated model was used to prospectively record complications in 51 patients undergoing total hip or knee arthroplasties. Results were compared with the same cohort using an existing data abstraction method. Both data sets were coded in accordance with current standards for case funding. Overall, 114 events were recorded during the study through prospective charting of adverse events, compared with 15 events documented by customary method (a significant discrepancy). Wound drainage (15.8%) was the most common complication, followed by anemia (7.9%), lower respiratory tract infections (7.9%), and cardiac events (7%). A total of €61,956 ($67,778) in missed funding was calculated as a result. This pilot study demonstrates the ability to improve capture of adverse events through use of a well-designed assessment form. Proper perioperative data handling is a critical aspect of financial subsidies, enabling optimal allocation of funds. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Mallard age and sex determination from wings

    USGS Publications Warehouse

    Carney, S.M.; Geis, A.D.

    1960-01-01

    This paper describes characters on the wing plumage of the mallard that indicate age and sex. A key outlines a logical order in which to check age and sex characters on wings. This method was tested and found to be more than 95 percent reliable, although it was found that considerable practice and training with known-age specimens was required to achieve this level of accuracy. The implications of this technique and the sampling procedure it permits are discussed. Wing collections could provide information on production, and, if coupled with a banding program, could permit seasonal population estimates to be calculated. In addition, representative samples of wings would provide data to check the reliability of several other waterfowl surveys.

  4. Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

    2013-02-01

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  5. Stochastic optimization for the detection of changes in maternal heart rate kinetics during pregnancy

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Barakat, R. O.; Cordente Martínez, C. A.; Sampedro Molinuevo, J.

    2011-03-01

    The stochastic optimization method ALOPEX IV has been successfully applied to the problem of detecting possible changes in maternal heart rate kinetics during pregnancy. To this end, maternal heart rate data were recorded before, during and after gestation, during exercise sessions of constant mild intensity; ALOPEX IV stochastic optimization was used to calculate the parameter values that optimally fit a dynamical systems model to the experimental data. The results not only demonstrate the effectiveness of ALOPEX IV stochastic optimization, but also have important implications in the area of exercise physiology, as they reveal important changes in the maternal cardiovascular dynamics as a result of pregnancy.
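ALOPEX IV itself is not specified in this record; the following is a schematic one-parameter ALOPEX-style correlation update (hypothetical function names and toy data), illustrating the general idea of reinforcing parameter changes that reduce the model-fit error:

```python
import random

def alopex_fit(loss, x0, steps=800, delta=0.02, flip_p=0.1, seed=1):
    """Schematic 1-D ALOPEX-style optimizer (not ALOPEX IV itself).

    Keeps stepping in the same direction while the loss falls, reverses
    when it rises, and flips at random occasionally to keep exploring.
    """
    rng = random.Random(seed)
    x = x0
    dx = delta if rng.random() < 0.5 else -delta
    prev = loss(x)
    best_x, best = x, prev
    for _ in range(steps):
        x += dx
        cur = loss(x)
        if cur < best:
            best_x, best = x, cur
        if cur > prev:             # last move hurt: reverse direction
            dx = -dx
        if rng.random() < flip_p:  # exploratory random flip
            dx = -dx
        prev = cur
    return best_x

# toy model y = a * t with true a = 2; recover a by minimizing squared error
data = [(t, 2.0 * t) for t in range(10)]

def sse(a):
    return sum((y - a * t) ** 2 for t, y in data)

a_hat = alopex_fit(sse, x0=0.0)
```

The fixed step size and random flips stand in for the temperature-controlled acceptance used in published ALOPEX variants; fitting a full dynamical systems model, as in the study, would use the same loop over a vector of parameters.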

  6. What do you measure when you measure the Hall effect?

    NASA Astrophysics Data System (ADS)

    Koon, D. W.; Knickerbocker, C. J.

    1993-02-01

    A formalism for calculating the sensitivity of Hall measurements to local inhomogeneities of the sample material or the magnetic field is developed. This Hall weighting function g(x,y) is calculated for various placements of current and voltage probes on square and circular laminar samples. Unlike the resistivity weighting function, it is nonnegative throughout the entire sample, provided all probes lie at the edge of the sample. Singularities arise in the Hall weighting function near the current and voltage probes except in the case where these probes are located at the corners of a square. Implications of these results for cross, clover, and bridge samples, and for metal-insulator transition and quantum Hall studies, are discussed.

  7. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum, and their large data volume makes dimension reduction the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods applied for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
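The paper's corrected "carpet method" is not reproduced here, but a minimal blanket-method sketch for a 1-D curve (such as a single pixel's spectral curve; the signal and scale range below are illustrative) shows how a fractal value can be computed per pixel:

```python
import math

def blanket_fractal_dimension(signal, max_eps=8):
    """Blanket ("carpet") estimate of the fractal dimension of a 1-D curve.

    Grows upper/lower surfaces around the curve; the curve length at scale
    eps is L(eps) = area(eps) / (2 * eps), and L(eps) ~ eps ** (1 - D).
    """
    n = len(signal)
    u = [float(v) for v in signal]   # upper blanket surface
    b = [float(v) for v in signal]   # lower blanket surface
    lengths = []
    for eps in range(1, max_eps + 1):
        u = [max(u[i] + 1, u[max(i - 1, 0)], u[min(i + 1, n - 1)])
             for i in range(n)]
        b = [min(b[i] - 1, b[max(i - 1, 0)], b[min(i + 1, n - 1)])
             for i in range(n)]
        area = sum(ui - bi for ui, bi in zip(u, b))
        lengths.append(area / (2.0 * eps))
    # D = 1 - slope of the least-squares fit of log L(eps) vs log eps
    xs = [math.log(e) for e in range(1, max_eps + 1)]
    ys = [math.log(l) for l in lengths]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return 1.0 - slope

line = [2.0 * i for i in range(64)]
d_smooth = blanket_fractal_dimension(line)   # a smooth line gives D near 1
```

Rougher curves shrink faster in measured length as the blanket thickens, so they yield a larger D; applying this to every pixel's spectrum produces the per-pixel fractal image described in the abstract.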

  8. Electronic structure of the alkyne-bridged dicobalt hexacarbonyl complex Co2(μ-C2H2)(CO)6: evidence for singlet diradical character and implications for metal-metal bonding.

    PubMed

    Platts, James A; Evans, Gareth J S; Coogan, Michael P; Overgaard, Jacob

    2007-08-06

    A series of ab initio calculations is presented on the alkyne-bridged dicobalt hexacarbonyl cluster Co2(μ-C2H2)(CO)6, indicating that this compound has substantial multireference character, which we interpret as evidence of singlet diradical behavior. As a result, standard theoretical methods such as restricted Hartree-Fock (RHF) or Kohn-Sham (RKS) density functional theory cannot properly describe this compound. We have therefore used complete active space (CAS) methods to explore the bonding in and spectroscopic properties of Co2(μ-C2H2)(CO)6. CAS methods identify significant population of a Co-Co antibonding orbital, along with Co-π* back-bonding, and a relatively large singlet-triplet energy splitting. Analysis of the electron density and related quantities, such as energy densities and atomic overlaps, indicates a small but significant amount of covalent bonding between cobalt centers.

  9. Mapping Urban Risk: Flood Hazards, Race, & Environmental Justice In New York

    PubMed Central

    Maantay, Juliana; Maroko, Andrew

    2009-01-01

    This paper demonstrates the importance of disaggregating population data aggregated by census tracts or other units, for more realistic population distribution/location. A newly-developed mapping method, the Cadastral-based Expert Dasymetric System (CEDS), calculates population in hyper-heterogeneous urban areas better than traditional mapping techniques. A case study estimating population potentially impacted by flood hazard in New York City compares the impacted population determined by CEDS with that derived by centroid-containment method and filtered areal weighting interpolation. Compared to CEDS, 37 percent and 72 percent fewer people are estimated to be at risk from floods city-wide, using conventional areal weighting of census data, and centroid-containment selection, respectively. Undercounting of impacted population could have serious implications for emergency management and disaster planning. Ethnic/racial populations are also spatially disaggregated to determine any environmental justice impacts with flood risk. Minorities are disproportionately undercounted using traditional methods. Underestimating more vulnerable sub-populations impairs preparedness and relief efforts. PMID:20047020
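CEDS itself redistributes census populations using cadastral land-use data, which is not reproduced here; as a baseline, the conventional areal weighting that the paper compares against can be sketched as follows (tract figures hypothetical):

```python
def areal_weighted_population(tracts, flood_overlap):
    """Filtered areal weighting: assume uniform population density within
    each tract and allocate people in proportion to the flooded share.

    tracts: {tract_id: (population, area)}
    flood_overlap: {tract_id: area inside the flood zone}
    """
    total = 0.0
    for tid, (pop, area) in tracts.items():
        total += pop * (flood_overlap.get(tid, 0.0) / area)
    return total

tracts = {"A": (4000.0, 2.0), "B": (1000.0, 1.0)}  # hypothetical (pop, km^2)
flooded = {"A": 0.5}                               # km^2 of tract A flooded
at_risk = areal_weighted_population(tracts, flooded)
```

The uniform-density assumption is exactly what dasymetric methods such as CEDS relax: if tract A's residents are clustered outside the flood zone, this estimate overcounts them, and if they are clustered inside it, it undercounts, which is the source of the large discrepancies the paper reports.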

  10. Risk analysis of chemical, biological, or radionuclear threats: implications for food security.

    PubMed

    Mohtadi, Hamid; Murshid, Antu Panini

    2009-09-01

    If the food sector is attacked, the likely agents will be chemical, biological, or radionuclear (CBRN). We compiled a database of international terrorist/criminal activity involving such agents. Based on these data, we calculate the likelihood of a catastrophic event using extreme value methods. At the present, the probability of an event leading to 5,000 casualties (fatalities and injuries) is between 0.1 and 0.3. However, pronounced, nonstationary patterns within our data suggest that the "reoccurrence period" for such attacks is decreasing every year. Similarly, disturbing trends are evident in a broader data set, which is nonspecific as to the methods or means of attack. While at the present the likelihood of CBRN events is quite low, given an attack, the probability that it involves CBRN agents increases with the number of casualties. This is consistent with evidence of "heavy tails" in the distribution of casualties arising from CBRN events.
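The paper's exact extreme-value fit is not reproduced here; a standard peaks-over-threshold sketch using the Hill estimator (seeded synthetic data; all names hypothetical) illustrates how a heavy casualty tail yields an exceedance probability for a large-casualty event:

```python
import math, random

def hill_tail(values, k):
    """Hill estimator for a heavy (Pareto-like) tail.

    Uses the k largest observations; returns (alpha, exceedance), where
    exceedance(x) ~ (k / n) * (x / x_k) ** -alpha for x above the k-th
    largest value x_k.
    """
    xs = sorted(values, reverse=True)
    n, xk = len(xs), xs[k]
    alpha = k / sum(math.log(xs[i] / xk) for i in range(k))
    def exceedance(x):
        return (k / n) * (x / xk) ** (-alpha)
    return alpha, exceedance

# seeded synthetic casualty data with a true Pareto tail index of 2
rng = random.Random(0)
sample = [(1.0 - rng.random()) ** -0.5 for _ in range(5000)]
alpha_hat, exceed = hill_tail(sample, k=500)
```

With real incident data, `exceed(5000)` would play the role of the paper's estimated probability of a 5,000-casualty event; the nonstationarity the authors report means such a fit must also be re-examined over time.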

  11. Increasing complexity of clinical research in gastroenterology: implications for the training of clinician-scientists.

    PubMed

    Scott, Frank I; McConnell, Ryan A; Lewis, Matthew E; Lewis, James D

    2012-04-01

    Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published gastroenterology research from 1980 to 2010. Twenty original clinical articles were randomly selected from each of three journals from 1980, 1990, 2000, and 2010. Each article was assessed for topic, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, reporting of various statistical methods, and external funding. From 1980 to 2010, there was a significant increase in analytic studies, clinical outcomes, number of authors per article, multicenter collaboration, sample size, and external funding. There was increased reporting of P values, confidence intervals, and power calculations, and increased use of large multicenter databases, multivariate analyses, and bioinformatics. The complexity of clinical gastroenterology and hepatology research has increased dramatically, highlighting the need for advanced training of clinical investigators.

  12. A clinically applicable non-invasive method to quantitatively assess the visco-hyperelastic properties of human heel pad, implications for assessing the risk of mechanical trauma.

    PubMed

    Behforootan, Sara; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan; Naemi, Roozbeh

    2017-04-01

    Pathological conditions such as diabetic foot and plantar heel pain are associated with changes in the mechanical properties of plantar soft tissue. However, the causes and implications of these changes are not yet fully understood. This is mainly because accurate assessment of the mechanical properties of plantar soft tissue in the clinic remains extremely challenging. The aim was to develop a clinically viable non-invasive method of assessing the mechanical properties of the heel pad, and to investigate the effect of the heel pad's non-linear mechanical behaviour on its ability to uniformly distribute foot-ground contact loads in light of overloading. An automated custom device for ultrasound indentation was developed along with custom algorithms for the automated subject-specific modeling of the heel pad. Non-time-dependent and time-dependent material properties were inverse engineered from the results of quasi-static indentation and stress relaxation tests, respectively. The validity of the calculated coefficients was assessed for five healthy participants. The implications of altered mechanical properties for the heel pad's ability to uniformly distribute plantar loading were also investigated in a parametric analysis. The subject-specific heel pad models with coefficients calculated from quasi-static indentation and stress relaxation were able to accurately simulate dynamic indentation. Average error in the predicted forces at maximum deformation was only 6.6±4.0%. When the inverse engineered coefficients were used to simulate the first instance of heel strike, the error in terms of peak plantar pressure was 27%. The parametric analysis indicated that the heel pad's ability to uniformly distribute plantar loads is influenced both by its overall deformability and by its stress-strain behaviour. 
When overall deformability stays constant, changes in stress/strain behaviour leading to a more "linear" mechanical behaviour appear to improve the heel pad's ability to uniformly distribute plantar loading. The developed technique can accurately assess the visco-hyperelastic behaviour of heel pad. It was observed that specific change in stress-strain behaviour can enhance/weaken the heel pad's ability to uniformly distribute plantar loading that will increase/decrease the risk for overloading and trauma. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Three-Dimensional Simulations of the Convective Urca Process in Pre-Supernova White Dwarfs

    NASA Astrophysics Data System (ADS)

    Willcox, Donald E.; Townsley, Dean; Zingale, Michael; Calder, Alan

    2017-01-01

    A significant source of uncertainty in modeling the progenitor systems of Type Ia supernovae is the dynamics of the convective Urca process in which beta decay and electron capture reactions remove energy from and decrease the buoyancy of carbon-fueled convection in the progenitor white dwarf. The details of the Urca process during this simmering phase have long remained computationally intractable in three-dimensional simulations because of the very low convective velocities and the associated timestep constraints of compressible hydrodynamics methods. We report on recent work simulating the A=23 (Ne/Na) Urca process in convecting white dwarfs in three dimensions using the low-Mach hydrodynamics code MAESTRO. We simulate white dwarf models inspired by one-dimensional stellar evolution calculations at the stage when the outer edge of the convection zone driven by core carbon burning reaches the A=23 Urca shell. We compare our methods and results to those of previous work in one and two dimensions, discussing the implications of three-dimensional turbulence. We also comment on the prospect of our results informing one-dimensional stellar evolution calculations and the Type Ia supernova progenitor problem. This work was supported in part by the Department of Energy under grant DE-FG02-87ER40317.

  14. Belief In Numbers: When and why women disbelieve tailored breast cancer risk statistics

    PubMed Central

    Scherer, Laura D.; Ubel, Peter A.; McClure, Jennifer; Green, Sarah M.; Alford, Sharon Hensley; Holtzman, Lisa; Exe, Nicole; Fagerlin, Angela

    2013-01-01

    Objective To examine when and why women disbelieve tailored information about their risk of developing breast cancer. Methods 690 women participated in an online program to learn about medications that can reduce the risk of breast cancer. The program presented tailored information about each woman’s personal breast cancer risk. Half of women were told how their risk numbers were calculated, whereas the rest were not. Later, they were asked whether they believed that the program was personalized, and whether they believed their risk numbers. If a woman did not believe her risk numbers, she was asked to explain why. Results Beliefs that the program was personalized were enhanced by explaining the risk calculation methods in more detail. Nonetheless, nearly 20% of women did not believe their personalized risk numbers. The most common reason for rejecting the risk estimate was a belief that it did not fully account for personal and family history. Conclusions The benefits of tailored risk statistics may be attenuated by a tendency for people to be skeptical that these risk estimates apply to them personally. Practice Implications Decision aids may provide risk information that is not accepted by patients, but addressing the patients’ personal circumstances may lead to greater acceptance. PMID:23623330

  15. Assessment of changing interdependencies between human electroencephalograms using nonlinear methods

    NASA Astrophysics Data System (ADS)

    Pereda, E.; Rial, R.; Gamundi, A.; González, J.

    2001-01-01

    We investigate the problems that might arise when two recently developed methods for detecting interdependencies between time series using state space embedding are applied to signals of different complexity. With this aim, these methods were used to assess the interdependencies between two electroencephalographic channels from 10 adult human subjects during different vigilance states. The significance and nature of the measured interdependencies were checked by comparing the results of the original data with those of different types of surrogates. We found that even with proper reconstructions of the dynamics of the time series, both methods may give wrong statistical evidence of decreasing interdependencies during deep sleep due to changes in the complexity of each individual channel. The main factor responsible for this result was the use of an insufficient number of neighbors in the calculations. Once this problem was surmounted, both methods showed the existence of a significant relationship between the channels which was mostly of linear type and increased from wakefulness to slow wave sleep. We conclude that the significance of the qualitative results provided by both methods must be carefully tested before drawing any conclusion about the implications of such results.

  16. Adjusting case mix payment amounts for inaccurately reported comorbidity data.

    PubMed

    Sutherland, Jason M; Hamm, Jeremy; Hatcher, Jeff

    2010-03-01

    Case mix methods such as diagnosis related groups have become a basis of payment for inpatient hospitalizations in many countries. Specifying cost weight values for case mix system payment has important consequences; recent evidence suggests case mix cost weight inaccuracies influence the supply of some hospital-based services. To begin to address the question of case mix cost weight accuracy, this paper aims to improve the accuracy of cost weight values in the presence of inaccurate or incomplete comorbidity data. The methods are applicable to case mix systems that incorporate disease severity or comorbidity adjustments. The methods are based on the availability of detailed clinical and cost information linked at the patient level and leverage recent results from clinical data audits. A Bayesian framework is used to synthesize clinical data audit information regarding misclassification probabilities into cost weight value calculations. The models are implemented through Markov chain Monte Carlo methods. An example used to demonstrate the methods finds that inaccurate comorbidity data affects cost weight values by biasing cost weight values (and payments) downward. The implications for hospital payments are discussed and the generalizability of the approach is explored.
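The paper's Bayesian MCMC synthesis is not reproduced here; a simpler frequentist analogue (the Rogan-Gladen misclassification correction, with illustrative audit numbers) shows the direction of the effect, i.e. how under-recorded comorbidities bias a prevalence-weighted cost weight downward:

```python
def corrected_prevalence(observed, sensitivity, specificity):
    """Rogan-Gladen correction of a rate recorded with known error."""
    return (observed + specificity - 1.0) / (sensitivity + specificity - 1.0)

def mean_cost_weight(prevalence, weight_with, weight_without):
    """Average cost weight as a prevalence-weighted mix of two groups."""
    return prevalence * weight_with + (1.0 - prevalence) * weight_without

# illustrative audit result: coders record 70% of true comorbidities
# (sensitivity = 0.70) and rarely invent them (specificity = 0.98);
# the observed comorbidity rate is 20%
observed = 0.20
true_prev = corrected_prevalence(observed, 0.70, 0.98)
naive_weight = mean_cost_weight(observed, 2.5, 1.0)
adjusted_weight = mean_cost_weight(true_prev, 2.5, 1.0)
```

Because more patients truly carry the comorbidity than the charts show, the naive weight understates the group's resource use; the Bayesian framework in the paper propagates the audit uncertainty around these same quantities rather than plugging in point estimates.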

  17. Cyclic Solvent Vapor Annealing for Rapid, Robust Vertical Orientation of Features in BCP Thin Films

    NASA Astrophysics Data System (ADS)

    Paradiso, Sean; Delaney, Kris; Fredrickson, Glenn

    2015-03-01

    Methods for reliably controlling block copolymer self assembly have seen much attention over the past decade as new applications for nanostructured thin films emerge in the fields of nanopatterning and lithography. While solvent assisted annealing techniques are established as flexible and simple methods for achieving long range order, solvent annealing alone exhibits a very weak thermodynamic driving force for vertically orienting domains with respect to the free surface. To address the desire for oriented features, we have investigated a cyclic solvent vapor annealing (CSVA) approach that combines the mobility benefits of solvent annealing with selective stress experienced by structures oriented parallel to the free surface as the film is repeatedly swollen with solvent and dried. Using dynamical self-consistent field theory (DSCFT) calculations, we establish the conditions under which the method significantly outperforms both static and cyclic thermal annealing and implicate the orientation selection as a consequence of the swelling/deswelling process. Our results suggest that CSVA may prove to be a potent method for the rapid formation of highly ordered, vertically oriented features in block copolymer thin films.

  18. Space sickness predictors suggest fluid shift involvement and possible countermeasures

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Moseley, E. C.; Charles, J. B.

    1992-01-01

    Preflight data from 64 first time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59 pct. success by one method of cross validation on the original sample and 67 pct. by another method. The 8 variables in order of their importance for predicting space sickness severity are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66 pct. success by the first method of cross validation on the original sample and to 71 pct. by the second method. The data suggest that WETF training may reduce space sickness severity.

  19. An Approach for Calculating Student-Centered Value in Education – A Link between Quality, Efficiency, and the Learning Experience in the Health Professions

    PubMed Central

    Ooi, Caryn; Reeves, Scott; Walsh, Kieran

    2016-01-01

    Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value; whether it was 'worth' what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value in the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; students were subsequently required to attend campus only three days per week, with the remaining two days comprising online learning. This was compared to the previous year's format, a campus-based face-to-face approach where students attended campus five days per week, with the primary outcome: Value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid, and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on Value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4,014.84 compared to the campus-based face-to-face approach of $3,651.72, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the value to student outcome. 
Measuring economic value to the student may serve as a way of evaluating effective change in a modern health professional curriculum. This could extend to calculating total value, which would incorporate the economic implications for the educational providers. Further research is required for validation of this outcome. PMID:27632427

  20. Study on Seepage Flow Velocity in a Sand Layer Profile as Affected by Water Depth and Slope Gradient

    NASA Astrophysics Data System (ADS)

    Han, Z.; Chen, X.

    2017-12-01

    BACKGROUND: The subsurface water flow velocity is of great significance in understanding the hydrodynamic characteristics of soil seepage and the influence of the interaction between seepage flow and surface runoff on the soil erosion and sediment transport process. OBJECTIVE: To propose a visualized method and equipment for determining the seepage flow velocity and for measuring the actual flow velocity and Darcy velocity as well as the relationship between them. METHOD: A transparent organic glass tank is used as the test soil tank, white river sand is used as the seepage test material, and a fluorescent dye is used as the indicator for tracing water flow, so as to determine the thickness and velocity of water flow in a visualized way. Water is supplied at the same flow rate (0.84 L h-1) to three parts spaced at 1 m intervals at the bottom of the soil tank, and the pore water velocity and the thickness of each water layer are determined under four gradient conditions. The Darcy velocity of each layer is calculated from the water supply flow and the discharge cross-section area. The effective discharge pore space is estimated from the moisture content and porosity, the relationship between the Darcy velocity and the measured velocity is then calculated based on the water supply flow and the water layer thickness, and finally the correctness of the calculation results is verified. RESULTS: Darcy velocity increases significantly with increasing gradient; in the sand layer profile, the pore water flow velocity at all depths also increases with increasing gradient; under the same gradient, the lower sand layer has the maximum pore water flow velocity. The air-filled porosity of the sand layer determines the proportional relationship between Darcy velocity and pore flow velocity. 
CONCLUSIONS: The actual flow velocity and Darcy velocity can be measured by a visualized method, and the relationship between Darcy velocity and pore velocity can be expressed well by the air-filled porosity of the sand layer. The flow velocity measurement and test method adopted in the research is effective and feasible. IMPLICATIONS: The visualized flow velocity measurement method can be applied to simulate and measure the characteristics of subsurface water flow in the soil.
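The two velocity definitions used in the abstract can be sketched directly (the section area and air-filled porosity below are hypothetical; only the 0.84 L/h supply rate comes from the study):

```python
def darcy_velocity(flow_rate_l_per_h, section_area_cm2):
    """Darcy (superficial) velocity q = Q / A, in cm/h (1 L = 1000 cm^3)."""
    return flow_rate_l_per_h * 1000.0 / section_area_cm2

def pore_velocity(q, air_filled_porosity):
    """Actual pore velocity: the same discharge confined to the open pores."""
    return q / air_filled_porosity

q = darcy_velocity(0.84, 100.0)  # 0.84 L/h through a hypothetical 100 cm^2 section
v = pore_velocity(q, 0.3)        # hypothetical air-filled porosity of 0.3
```

The ratio v/q = 1/porosity is exactly the proportional relationship the study attributes to the air-filled porosity of the sand layer: the smaller the fraction of the cross-section open to flow, the faster the tracer front must move.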

  1. An Approach for Calculating Student-Centered Value in Education - A Link between Quality, Efficiency, and the Learning Experience in the Health Professions.

    PubMed

    Nicklen, Peter; Rivers, George; Ooi, Caryn; Ilic, Dragan; Reeves, Scott; Walsh, Kieran; Maloney, Stephen

    2016-01-01

    Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value; whether it was 'worth' what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value in the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; students were subsequently required to attend campus only three days per week, with the remaining two days comprising online learning. This was compared to the previous year's format, a campus-based face-to-face approach where students attended campus five days per week, with the primary outcome: Value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid, and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on Value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4,014.84 compared to the campus-based face-to-face approach of $3,651.72, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the value to student outcome. 
Measuring economic value to the student may serve as a way of evaluating effective change in a modern health professional curriculum. This could extend to calculating total value, which would incorporate the economic implications for the educational providers. Further research is required for validation of this outcome.

  2. MkMRCC, APUCC and APUBD approaches to 1,n-didehydropolyene diradicals: the nature of through-bond exchange interactions

    NASA Astrophysics Data System (ADS)

    Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2010-10-01

    Mukherjee-type (Mk) state-specific (SS) multi-reference (MR) coupled-cluster (CC) calculations of 1,n-didehydropolyene diradicals were carried out to elucidate singlet-triplet energy gaps via through-bond coupling between terminal radicals. Spin-unrestricted Hartree-Fock (UHF) based CC computations of these diradicals were also performed. Comparison between symmetry-adapted MkMRCC and broken-symmetry (BS) UHF-CC computational results indicated that the spin-contamination error of UHF-CC solutions remained at the SD level, although this error had been thought to be negligible for the CC scheme in general. In order to eliminate the spin-contamination error, the approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed eliminated the error, yielding good agreement with MRCC energies. CCD with spin-unrestricted Brueckner orbitals (UB) was also employed for these polyene diradicals, showing that the large spin-contamination errors of UHF solutions are dramatically improved, so that the AP scheme for UBD easily removed the remaining spin contamination. Pure- and hybrid-density functional theory (DFT) calculations of the species were also performed. Three different computational schemes for the total spin angular momenta were examined for the AP correction of the hybrid DFT. The AP DFT calculations yielded singlet-triplet energy gaps that were in good agreement with those of MRCC, AP UHF-CC and AP UB-CC. Chemical indices such as the diradical character were calculated with all these methods. Implications of the present computational results are discussed in relation to previous RMRCC calculations of diradical species and BS calculations of large exchange-coupled systems.
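The approximate spin projection referenced here is commonly written in the Yamaguchi form; as a sketch (conventions vary, and the paper's exact variant may differ), for a Heisenberg Hamiltonian \(\hat H = -2J\,\hat S_a \cdot \hat S_b\):

```latex
% Yamaguchi approximate spin projection: effective exchange coupling from
% broken-symmetry (BS) and triplet (T) energies and <S^2> expectation values
J = \frac{E_{\mathrm{BS}} - E_{\mathrm{T}}}
         {\langle \hat S^2 \rangle_{\mathrm{T}} - \langle \hat S^2 \rangle_{\mathrm{BS}}}
% for two spin-1/2 centers with H = -2J S_a . S_b, the singlet-triplet gap is
\Delta E_{\mathrm{S\text{-}T}} = E_{\mathrm{T}} - E_{\mathrm{S}} = -2J
```

With ideal expectation values (\(\langle \hat S^2 \rangle_{\mathrm{T}} = 2\), \(\langle \hat S^2 \rangle_{\mathrm{BS}} = 1\)) this reduces to \(J = E_{\mathrm{BS}} - E_{\mathrm{T}}\); using the computed \(\langle \hat S^2 \rangle\) values is what removes the spin-contamination error discussed in the abstract.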

  3. Measuring geographical accessibility to rural and remote health care services: Challenges and considerations.

    PubMed

    Shah, Tayyab Ikram; Milosavljevic, Stephan; Bath, Brenna

    2017-06-01

    This research is focused on methodological challenges and considerations associated with the estimation of the geographical aspects of access to healthcare, with a focus on rural and remote areas. We assume that GIS-based accessibility measures for rural healthcare services will vary across geographic units of analysis and estimation techniques, which could influence the interpretation of spatial access to rural healthcare services. Estimations of geographical accessibility depend on variations of the following three parameters: 1) quality of input data; 2) accessibility method; and 3) geographical area. This research investigated the spatial distributions of physiotherapists (PTs) in comparison to family physicians (FPs) across Saskatchewan, Canada. The three-step floating catchment area (3SFCA) method was applied to calculate the accessibility scores for both PT and FP services at two different geographical units. Accessibility scores were also compared with simple healthcare provider-to-population ratios. The results vary considerably depending on the accessibility methods used and the choice of geographical area unit for measuring geographical accessibility for both FP and PT services. These findings raise intriguing questions regarding the nature and extent of technical issues and methodological considerations that can affect GIS-based measures in health services research and planning. This study demonstrates how the selection of geographical areal units and different methods for measuring geographical accessibility could affect the distribution of healthcare resources in rural areas. These methodological issues have implications for determining where there is reduced access that will ultimately impact health human resource priorities and policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
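The study applies the three-step floating catchment area (3SFCA) method; as a sketch of this family of methods, the simpler two-step (2SFCA) scheme it extends can be written as follows (all inputs hypothetical):

```python
def two_step_fca(supply, demand, dist, d0):
    """Two-step floating catchment area (2SFCA) accessibility scores.

    supply: {site: provider capacity}, demand: {area: population},
    dist: {(area, site): travel cost}, d0: catchment threshold.
    """
    # step 1: provider-to-population ratio within each site's catchment
    ratio = {}
    for j, s in supply.items():
        pop = sum(p for i, p in demand.items() if dist[i, j] <= d0)
        ratio[j] = s / pop if pop else 0.0
    # step 2: each area sums the ratios of all sites it can reach
    return {i: sum(r for j, r in ratio.items() if dist[i, j] <= d0)
            for i in demand}

supply = {"clinic": 1.0}                      # hypothetical provider FTE
demand = {"town": 100.0, "village": 300.0}    # hypothetical populations
dist = {("town", "clinic"): 10.0, ("village", "clinic"): 20.0}
access = two_step_fca(supply, demand, dist, d0=30.0)
```

Shrinking the threshold `d0` to 15 drops the village out of the clinic's catchment, raising the town's score and zeroing the village's, which is exactly the sensitivity to method and geographic unit that the abstract highlights.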

  4. Implications of observed inconsistencies in carbonate chemistry measurements for ocean acidification studies

    NASA Astrophysics Data System (ADS)

    Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.

    2012-07-01

    The growing field of ocean acidification research is concerned with the investigation of organism responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.

  5. Implications of observed inconsistencies in carbonate chemistry measurements for ocean acidification studies

    NASA Astrophysics Data System (ADS)

    Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.

    2012-02-01

The growing field of ocean acidification research is concerned with the investigation of organisms' responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.

  6. Variations of High-Latitude Geomagnetic Pulsation Frequencies: A Comparison of Time-of-Flight Estimates and IMAGE Magnetometer Observations

    NASA Astrophysics Data System (ADS)

    Sandhu, J. K.; Yeoman, T. K.; James, M. K.; Rae, I. J.; Fear, R. C.

    2018-01-01

    The fundamental eigenfrequencies of standing Alfvén waves on closed geomagnetic field lines are estimated for the region spanning 5.9≤L < 9.5 over all MLT (Magnetic Local Time). The T96 magnetic field model and a realistic empirical plasma mass density model are employed using the time-of-flight approximation, refining previous calculations that assumed a relatively simplistic mass density model. An assessment of the implications of using different mass density models in the time-of-flight calculations is presented. The calculated frequencies exhibit dependences on field line footprint magnetic latitude and MLT, which are attributed to both magnetic field configuration and spatial variations in mass density. In order to assess the validity of the time-of-flight calculated frequencies, the estimates are compared to observations of FLR (Field Line Resonance) frequencies. Using IMAGE (International Monitor for Auroral Geomagnetic Effects) ground magnetometer observations obtained between 2001 and 2012, an automated FLR identification method is developed, based on the cross-phase technique. The average FLR frequency is determined, including variations with footprint latitude and MLT, and compared to the time-of-flight analysis. The results show agreement in the latitudinal and local time dependences. Furthermore, with the use of the realistic mass density model in the time-of-flight calculations, closer agreement with the observed FLR frequencies is obtained. The study is limited by the latitudinal coverage of the IMAGE magnetometer array, and future work will aim to extend the ground magnetometer data used to include additional magnetometer arrays.
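The time-of-flight approximation used in this record can be illustrated with a far simpler setup than the T96 field and empirical density model of the study: a dipole field line and a single power-law density profile, both stand-ins chosen here for illustration. The fundamental eigenfrequency is estimated as f ≈ 1/(2∫ds/vA), integrating the inverse Alfvén speed along the field line; the equatorial density (10 amu/cm³) and the power-law index are assumed values, not those of the paper.

```python
import math

MU0 = 4e-7 * math.pi       # vacuum permeability (SI)
RE = 6.371e6               # Earth radius, m
B0 = 3.12e-5               # equatorial surface dipole field strength, T
AMU = 1.66054e-27          # atomic mass unit, kg

def tof_frequency(L, n_eq_amu_cc=10.0, power=1.0, steps=20000):
    """Fundamental standing-Alfven-wave frequency (Hz) from the
    time-of-flight approximation f = 1 / (2 * integral ds/vA) along a
    dipole field line of shell parameter L.

    n_eq_amu_cc : equatorial mass density in amu/cm^3 (assumed value)
    power       : radial density power-law index, rho ~ (r_eq / r)**power
    """
    rho_eq = n_eq_amu_cc * AMU * 1e6            # kg/m^3
    # the field line reaches r = RE where cos^2(latitude) = 1/L
    lat_max = math.acos(math.sqrt(1.0 / L))
    tof = 0.0
    dlat = 2.0 * lat_max / steps
    for i in range(steps):
        lat = -lat_max + (i + 0.5) * dlat       # midpoint rule
        c, s = math.cos(lat), math.sin(lat)
        r = L * RE * c * c                      # dipole field line r(lat)
        ds = L * RE * c * math.sqrt(1.0 + 3.0 * s * s) * dlat
        B = (B0 / L**3) * math.sqrt(1.0 + 3.0 * s * s) / c**6
        rho = rho_eq * (L * RE / r) ** power
        vA = B / math.sqrt(MU0 * rho)           # Alfven speed
        tof += ds / vA
    return 1.0 / (2.0 * tof)

f = tof_frequency(6.6)
print(f"L=6.6 fundamental ~ {f * 1e3:.1f} mHz")
```

With these assumed densities the estimate lands in the few-mHz band typical of high-latitude FLR observations, and the frequency falls with increasing L, the latitudinal trend the record describes.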

  7. Developmental models for estimating ecological responses to environmental variability: structural, parametric, and experimental issues.

    PubMed

    Moore, Julia L; Remais, Justin V

    2014-03-01

Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
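Two of the daily degree-day calculation methods this record compares have standard textbook forms: the daily average method and the single-sine method (a sine curve fitted through the daily minimum and maximum, integrated above a lower threshold). A minimal sketch, with illustrative temperatures, shows how the choice of method matters exactly when the threshold falls between the daily extremes:

```python
import math

def dd_average(tmin, tmax, base):
    """Daily average method: degree-days from the daily mean temperature."""
    return max(0.0, (tmin + tmax) / 2.0 - base)

def dd_single_sine(tmin, tmax, base):
    """Single-sine method (lower threshold only): fit a sine curve through
    tmin/tmax and integrate the portion of the day above the threshold."""
    m = (tmin + tmax) / 2.0        # mean temperature
    a = (tmax - tmin) / 2.0        # half of the daily amplitude
    if tmin >= base:               # whole day above the threshold
        return m - base
    if tmax <= base:               # whole day below the threshold
        return 0.0
    theta = math.asin((base - m) / a)
    return ((m - base) * (math.pi / 2.0 - theta) + a * math.cos(theta)) / math.pi

# Threshold between tmin and tmax: the two methods diverge.
print(dd_average(5, 15, 10))      # 0.0  (daily mean equals the threshold)
print(dd_single_sine(5, 15, 10))  # ~1.59 degree-days
```

When both daily extremes sit above (or below) the threshold the two methods agree, which is why the relative position of the threshold parameters, as the abstract notes, determines which method is adequate.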

  8. Functional skeletal morphology and its implications for locomotory behavior among three genera of myosoricine shrews (Mammalia: Eulipotyphla: Soricidae)

    USGS Publications Warehouse

    Woodman, Neal; Stabile, Frank A.

    2015-01-01

    Myosoricinae is a small clade of shrews (Mammalia, Eulipotyphla, Soricidae) that is currently restricted to the African continent. Individual species have limited distributions that are often associated with higher elevations. Although the majority of species in the subfamily are considered ambulatory in their locomotory behavior, species of the myosoricine genus Surdisorex are known to be semifossorial. To better characterize variation in locomotory behaviors among myosoricines, we calculated 32 morphological indices from skeletal measurements from nine species representing all three genera that comprise the subfamily (i.e., Congosorex, Myosorex, Surdisorex) and compared them to indices calculated for two species with well-documented locomotory behaviors: the ambulatory talpid Uropsilus soricipes and the semifossorial talpid Neurotrichus gibbsii. We summarized the 22 most complete morphological variables by 1) calculating a mean percentile rank for each species and 2) using the first principal component from principal component analysis of the indices. The two methods yielded similar results and indicate grades of adaptations reflecting a range of potential locomotory behaviors from ambulatory to semifossorial that exceeds the range represented by the two talpids. Morphological variation reflecting grades of increased semifossoriality among myosoricine shrews is similar in many respects to that seen for soricines, but some features are unique to the Myosoricinae.

  9. Nonperturbative comparison of clover and highly improved staggered quarks in lattice QCD and the properties of the Φ meson

    DOE PAGES

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.; ...

    2017-10-02

Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial vector current operators made from HISQ quarks, clover quarks and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggered $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in clover and HISQ formalisms for the mass and decay constant of the $\phi$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams, are: $m_\phi = 1.023(5)$ GeV and $f_\phi = 0.238(3)$ GeV, in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $m_{u/d}$ values going down to the physical point.

  10. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    PubMed

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogeneous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.

  11. Review of Life-Cycle Approaches Coupled with Data Envelopment Analysis: Launching the CFP + DEA Method for Energy Policy Making

    PubMed Central

    Vázquez-Rowe, Ian

    2015-01-01

Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogeneous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting. PMID:25654136

  12. PREDICTING CME EJECTA AND SHEATH FRONT ARRIVAL AT L1 WITH A DATA-CONSTRAINED PHYSICAL MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Phillip; Zhang, Jie, E-mail: phess4@gmu.edu

    2015-10-20

We present a method for predicting the arrival of a coronal mass ejection (CME) flux rope in situ, as well as the sheath of solar wind plasma accumulated ahead of the driver. For faster CMEs, the front of this sheath will be a shock. The method is based upon separate geometrical measurement of the CME ejecta and sheath. These measurements are used to constrain a drag-based model, improved by including both a height dependence and accurate de-projected velocities. We also constrain the geometry of the model to determine the error introduced as a function of the deviation of the CME nose from the Sun–Earth line. The CME standoff distance in the heliosphere is also calculated, fit, and combined with the ejecta model to determine sheath arrival. Combining these factors allows us to create predictions for both fronts at the L1 point and compare them against observations. We demonstrate an ability to predict the sheath arrival with an average error of under 3.5 hr, with an rms error of about 1.58 hr. For the ejecta, the error is less than 1.5 hr, with an rms error within 0.76 hr. We also discuss the physical implications of our model for CME expansion and density evolution. We show the power of our method with ideal data and demonstrate the practical implications of having a permanent L5 observer with space weather forecasting capabilities, while also discussing the limitations of the method that will have to be addressed in order to create a real-time forecasting tool.
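The drag-based model this record constrains has a well-known basic analytic form: deceleration proportional to (v − w)|v − w| relative to the ambient wind speed w, giving a closed-form distance-time profile. The sketch below uses only that basic form, without the height-dependent drag or de-projection refinements described in the abstract, and all parameter values (initial height, speeds, drag parameter γ) are illustrative rather than taken from the study:

```python
import math

AU = 1.496e11      # astronomical unit, m
RS = 6.957e8       # solar radius, m

def dbm_distance(t, r0, v0, w, gamma):
    """Heliocentric distance (m) of the CME front at time t (s) in the
    basic drag-based model with constant drag parameter gamma (1/m),
    for a CME launched at r0 with speed v0 into wind of speed w."""
    return r0 + w * t + math.log(1.0 + gamma * (v0 - w) * t) / gamma

def arrival_time(r0, v0, w, gamma, target=AU):
    """Bisection for the time at which the front reaches `target`
    (dbm_distance is monotonically increasing in t)."""
    lo, hi = 0.0, 60 * 86400.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dbm_distance(mid, r0, v0, w, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative event: front at 20 solar radii moving at 1000 km/s into a
# 400 km/s wind, gamma = 2e-11 m^-1 (a typical published order of magnitude)
t = arrival_time(20 * RS, 1.0e6, 4.0e5, 2.0e-11)
print(f"transit time to L1 ~ {t / 86400:.1f} days")
```

For these assumed numbers the transit time comes out near two days, the right order for a fast CME; the study's additions (height-dependent drag, sheath standoff-distance fit) refine exactly this kind of baseline estimate.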

  13. Aromaticity Parameters in Asphalt Binders Calculated From Profile Fitting X-ray Line Spectra Using Pearson VII and Pseudo-Voigt Functions

    NASA Astrophysics Data System (ADS)

    Shirokoff, J.; Lewis, J. Courtenay

    2010-10-01

    The aromaticity and crystallite parameters in asphalt binders are calculated from data obtained after profile fitting x-ray line spectra using Pearson VII and pseudo-Voigt functions. The results are presented and discussed in terms of the peak profile fit parameters used, peak deconvolution procedure, and differences in calculated values that can arise owing to peak shape and additional peaks present in the pattern. These results have implications concerning the evaluation and performance of asphalt binders used in highways and road applications.
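The two profile functions named in this record have standard closed forms, and aromaticity is commonly obtained in this literature from the relative areas of the fitted γ and graphene (002) bands, fa = A002/(Aγ + A002). The sketch below uses those standard forms with invented peak parameters; a real analysis would fit them to the measured pattern rather than assume them:

```python
import math

def pearson_vii(x, x0, amp, w, m):
    """Pearson VII profile: height amp at x0, half-width w, shape
    exponent m (m = 1 gives a Lorentzian; large m approaches a Gaussian)."""
    return amp * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

def pseudo_voigt(x, x0, amp, w, eta):
    """Pseudo-Voigt profile: linear mix of a Lorentzian and a Gaussian of
    equal half-width w; eta is the Lorentzian fraction (0 <= eta <= 1)."""
    lorentz = 1.0 / (1.0 + ((x - x0) / w) ** 2)
    gauss = math.exp(-math.log(2.0) * ((x - x0) / w) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

def area(f, lo, hi, n=4000):
    """Trapezoidal integration of a profile to approximate a peak area."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# Invented fit parameters for the gamma band (~20 deg 2theta) and the
# graphene (002) band (~25.5 deg 2theta) of an asphalt binder pattern:
a_gamma = area(lambda x: pseudo_voigt(x, 20.0, 90.0, 2.5, 0.5), 5, 40)
a_002 = area(lambda x: pseudo_voigt(x, 25.5, 60.0, 2.0, 0.5), 5, 40)
fa = a_002 / (a_gamma + a_002)   # aromaticity from relative peak areas
print(f"aromaticity f_a ~ {fa:.2f}")
```

Because fa depends on integrated areas, the choice of profile function and deconvolution procedure changes the tails assigned to each band, which is precisely the sensitivity the record reports.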

  14. Parity and Time-Reversal Violation in Atomic Systems

    NASA Astrophysics Data System (ADS)

    Roberts, B. M.; Dzuba, V. A.; Flambaum, V. V.

    2015-10-01

    Studying the violation of parity and time-reversal invariance in atomic systems has proven to be a very effective means of testing the electroweak theory at low energy and searching for physics beyond it. Recent developments in both atomic theory and experimental methods have led to the ability to make extremely precise theoretical calculations and experimental measurements of these effects. Such studies are complementary to direct high-energy searches, and can be performed for only a fraction of the cost. We review the recent progress in the field of parity and time-reversal violation in atoms, molecules, and nuclei, and examine the implications for physics beyond the Standard Model, with an emphasis on possible areas for development in the near future.

  15. Computer-Aided Drug Design in Epigenetics

    NASA Astrophysics Data System (ADS)

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-03-01

Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, thus highlighting the therapeutic potential for chemical interventions in this field. With rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculation and 3D quantitative structure-activity relationship, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss major limitations of current virtual drug design strategies in epigenetics drug discovery and future directions in this field.

  16. Close-slow analysis for head-on collision of two black holes in higher dimensions: Bowen-York initial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshino, Hirotaka; Graduate School of Science and Engineering, Waseda University, Tokyo 169-8555; Shiromizu, Tetsuya

    2006-12-15

Scenarios of large extra dimensions have enhanced the importance for the study of black holes in higher dimensions. In this paper, we analyze an axisymmetric system of two black holes. Specifically, the Bowen-York method is generalized for higher dimensions in order to calculate the initial data for head-on collision of two equal-mass black holes. Then, the initial data are evolved adopting the close-slow approximation to study gravitational waves emitted during the collision. We derive an empirical formula for radiation efficiency, which depends weakly on the dimensionality. Possible implications of our results for the black hole formation in particle colliders are discussed.

  17. Computer-Aided Drug Design in Epigenetics

    PubMed Central

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-01-01

Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, thus highlighting the therapeutic potential for chemical interventions in this field. With rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculation, and 3D quantitative structure-activity relationship, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss major limitations of current virtual drug design strategies in epigenetics drug discovery and future directions in this field. PMID:29594101

  18. FAST TRACK COMMUNICATION Understanding adhesion at as-deposited interfaces from ab initio thermodynamics of deposition growth: thin-film alumina on titanium carbide

    NASA Astrophysics Data System (ADS)

    Rohrer, Jochen; Hyldgaard, Per

    2010-12-01

    We investigate the chemical composition and adhesion of chemical vapour deposited thin-film alumina on TiC using and extending a recently proposed nonequilibrium method of ab initio thermodynamics of deposition growth (AIT-DG) (Rohrer and Hyldgaard 2010 Phys. Rev. B 82 045415). A previous study of this system (Rohrer et al 2010 J. Phys.: Condens. Matter 22 015004) found that use of equilibrium thermodynamics leads to predictions of a non-binding TiC/alumina interface, despite its industrial use as a wear-resistant coating. This discrepancy between equilibrium theory and experiment is resolved by the AIT-DG method which predicts interfaces with strong adhesion. The AIT-DG method combines density functional theory calculations, rate-equation modelling of the pressure evolution of the deposition environment and thermochemical data. The AIT-DG method was previously used to predict prevalent terminations of growing or as-deposited surfaces of binary materials. Here we extend the method to predict surface and interface compositions of growing or as-deposited thin films on a substrate and find that inclusion of the nonequilibrium deposition environment has important implications for the nature of buried interfaces.

  19. Pressure garment design tool to monitor exerted pressures.

    PubMed

    Macintyre, Lisa; Ferguson, Rhona

    2013-09-01

Pressure garments are used in the treatment of hypertrophic scarring following serious burns. The use of pressure garments is believed to hasten the maturation process, reduce pruritus associated with immature hypertrophic scars and prevent the formation of contractures over flexor joints. Pressure garments are normally made to measure for individual patients from elastic fabrics and are worn continuously for up to 2 years or until scar maturation. There are 2 methods of constructing pressure garments. The most common method, called the Reduction Factor method, involves reducing the patient's circumferential measurements by a certain percentage. The second method uses the Laplace Law to calculate the dimensions of pressure garments based on the circumferential measurements of the patient and the tension profile of the fabric. The Laplace Law method is complicated to utilise manually and no design tool is currently available to aid this process. This paper presents the development and suggested use of 2 new pressure garment design tools that will aid pressure garment design using the Reduction Factor and Laplace Law methods. Both tools calculate the pressure garment dimensions and the mean pressure that will be exerted around the body at each measurement point. Monitoring the pressures exerted by pressure garments and noting the clinical outcome would enable clinicians to build an understanding of the implications of particular pressures on scar outcome, maturation times and patient compliance rates. Once the optimum pressure for particular treatments is known, the Laplace Law method described in this paper can be used to deliver those average pressures to all patients. This paper also presents the results of a small scale audit of measurements taken for the fabrication of pressure garments in two UK hospitals. This audit highlights the wide range of pressures that are exerted using the Reduction Factor method and that manual pattern 'smoothing' can dramatically change the actual Reduction Factors used. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
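The Laplace Law design method described above relates interface pressure to fabric tension and limb radius via P = T/r, with r taken from the circumferential measurement. A minimal sketch, assuming a hypothetical linear fabric (tension proportional to strain, with an invented stiffness value) rather than a measured tension profile, shows both directions of the calculation:

```python
import math

MMHG = 133.322   # pascals per mmHg

def garment_circumference(limb_circ_cm, target_pressure_mmhg, k_n_per_m=500.0):
    """Relaxed garment circumference (cm) needed to exert a target mean
    pressure on a limb of given circumference, via the Laplace law
    P = T / r with r = limb circumference / (2 * pi).

    Assumes a hypothetical linear fabric: tension T (N per metre of
    fabric width) = k * strain, with stiffness k in N/m."""
    r = limb_circ_cm / 100.0 / (2.0 * math.pi)     # limb radius, m
    tension = target_pressure_mmhg * MMHG * r      # required tension, N/m
    strain = tension / k_n_per_m                   # fractional extension
    return limb_circ_cm / (1.0 + strain)

def exerted_pressure_mmhg(limb_circ_cm, garment_circ_cm, k_n_per_m=500.0):
    """Inverse check: mean pressure (mmHg) a garment of given relaxed
    circumference exerts on the limb, same linear-fabric assumption."""
    strain = (limb_circ_cm - garment_circ_cm) / garment_circ_cm
    tension = k_n_per_m * strain
    r = limb_circ_cm / 100.0 / (2.0 * math.pi)
    return tension / r / MMHG

# 30 cm limb circumference, 25 mmHg target pressure:
c = garment_circumference(30.0, 25.0)
print(f"garment circumference ~ {c:.1f} cm "
      f"(round-trip check: {exerted_pressure_mmhg(30.0, c):.1f} mmHg)")
```

Because the required reduction depends on the fabric stiffness, two fabrics cut with the same Reduction Factor exert different pressures, which is consistent with the wide pressure range the audit reports.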

  20. The CH/π hydrogen bond: Implication in chemistry

    NASA Astrophysics Data System (ADS)

    Nishio, M.

    2012-06-01

    The CH/π hydrogen bond is the weakest extreme of hydrogen bonds that occurs between a soft acid CH and a soft base π-system. Implication in chemistry of the CH/π hydrogen bond includes issues of conformation, crystal packing, and specificity in host/guest complexes. The result obtained by analyzing the Cambridge Structural Database is reviewed. The peculiar axial preference of isopropyl group in α-phellandrene and folded conformation of levopimaric acid have been explained in terms of the CH/π hydrogen bond, by high-level ab initio MO calculations. Implication of the CH/π hydrogen bond in structural biology is also discussed, briefly.

  1. An internally consistent set of thermodynamic data for twenty-one CaO-Al2O3-SiO2- H2O phases by linear parametric programming

    NASA Astrophysics Data System (ADS)

    Halbach, Heiner; Chatterjee, Niranjan D.

    1984-11-01

    The technique of linear parametric programming has been applied to derive sets of internally consistent thermodynamic data for 21 condensed phases of the quaternary system CaO-Al2O3-SiO2-H2O (CASH) (Table 4). This was achieved by simultaneously processing: a) calorimetric data for 16 of these phases (Table 1), and b) experimental phase equilibria reversal brackets for 27 reactions (Table 3) involving these phases. Calculation of equilibrium P-T curves of several arbitrarily picked reactions employing the preferred set of internally consistent thermodynamic data from Table 4 shows that the input brackets are invariably satisfied by the calculations (Fig. 2a). By contrast, the same equilibria calculated on the basis of a set of thermodynamic data derived by applying statistical methods to a large body of comparable input data (Haas et al. 1981; Hemingway et al. 1982) do not necessarily agree with the experimental reversal brackets. Prediction of some experimentally investigated phase relations not included into the linear programming input database also appears to be remarkably successful. Indications are, therefore, that the thermodynamic data listed in Table 4 may be used with confidence to predict geologic phase relations in the CASH system with considerable accuracy. For such calculated phase diagrams and their petrological implications, the reader's attention is drawn to the paper by Chatterjee et al. (1984).
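The linear-programming approach above treats each experimental reversal half-bracket as a linear inequality in the reaction's thermodynamic parameters. A toy feasibility check, with invented bracket temperatures and candidate ΔH°/ΔS° values, and with the pressure (ΔV) and heat-capacity terms omitted for brevity, illustrates the idea:

```python
def satisfies_brackets(dH, dS, brackets):
    """Check whether candidate reaction dH (J/mol) and dS (J/mol/K)
    satisfy every experimental half-bracket, each treated as a linear
    inequality on dG = dH - T*dS at the bracket temperature (pressure
    and heat-capacity terms omitted for brevity).

    brackets: list of (T_kelvin, direction), direction = +1 when the
    reactants were observed to grow (requires dG > 0) and -1 when the
    products grew (requires dG < 0)."""
    for T, direction in brackets:
        dG = dH - T * dS
        if direction * dG <= 0.0:
            return False
    return True

# Hypothetical reversal brackets for a dehydration reaction: reactants
# stable at and below 650 K, products stable at and above 700 K.
brackets = [(600.0, +1), (650.0, +1), (700.0, -1), (750.0, -1)]

# A candidate (dH = 90 kJ/mol, dS = 135 J/mol/K) crosses dG = 0 at
# T = dH/dS ~ 667 K, inside the bracketed interval:
print(satisfies_brackets(90_000.0, 135.0, brackets))   # True
print(satisfies_brackets(90_000.0, 120.0, brackets))   # False (crossing at 750 K)
```

Parametric programming then explores the whole region of (ΔH°, ΔS°) values satisfying all such inequalities plus the calorimetric constraints, which is why the derived data set reproduces every input bracket by construction.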

  2. Analytical and sampling constraints in ²¹⁰Pb dating.

    PubMed

    MacKenzie, A B; Hardie, S M L; Farmer, J G; Eades, L J; Pulford, I D

    2011-03-01

    ²¹⁰Pb dating provides a valuable, widely used means of establishing recent chronologies for sediments and other accumulating natural deposits. The Constant Rate of Supply (CRS) model is the most versatile and widely used method for establishing ²¹⁰Pb chronologies but, when using this model, care must be taken to account for limitations imposed by sampling and analytical factors. In particular, incompatibility of finite values for empirical data, which are constrained by detection limit and core length, with terms in the age calculation, which represent integrations to infinity, can generate erroneously old ages for deeper sections of cores. The bias in calculated ages increases with poorer limit of detection and the magnitude of the disparity increases with age. The origin and magnitude of this effect are considered below, firstly for an idealized, theoretical ²¹⁰Pb profile and secondly for a freshwater lake sediment core. A brief consideration is presented of the implications of this potential artefact for sampling and analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
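The CRS model age discussed above is t(x) = (1/λ) ln[A(0)/A(x)], where A(0) is the whole-core unsupported ²¹⁰Pb inventory and A(x) the inventory below depth x. A minimal sketch on a synthetic core (constant accumulation, unit surface activity, values invented for illustration) reproduces the detection-limit artefact the abstract describes, with deep ages biased old when sub-detection inventory is lost:

```python
import math

LAMBDA = math.log(2.0) / 22.3        # 210Pb decay constant, 1/yr

def crs_age(total_inventory, inventory_below):
    """CRS model age: t = (1/lambda) * ln(A(0) / A(x))."""
    return math.log(total_inventory / inventory_below) / LAMBDA

# Synthetic core: 1 cm/yr accumulation sampled in 1 cm slices, so slice i
# holds exp(-lambda * i) units of unsupported 210Pb activity.
slices = [math.exp(-LAMBDA * i) for i in range(400)]

def age_at(depth, detection_limit=0.0):
    """CRS age at `depth` (cm), with activities below `detection_limit`
    treated as zero, mimicking a finite analytical detection limit."""
    measured = [a if a >= detection_limit else 0.0 for a in slices]
    total = sum(measured)                # A(0), truncated at the limit
    below = sum(measured[depth:])        # A(x), truncated at the limit
    return crs_age(total, below)

# A perfect detector recovers the true age; a finite detection limit
# inflates the age at depth because deep inventory goes unmeasured.
print(f"CRS age at 60 cm, no limit:   {age_at(60):.1f} yr (true: 60)")
print(f"CRS age at 60 cm, limit 0.05: {age_at(60, 0.05):.1f} yr")
```

Because the lost inventory is a larger fraction of A(x) than of A(0), the bias grows with depth, matching the abstract's observation that the disparity increases with age and with poorer limits of detection.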

  3. The Profitability Analysis of PT. Garuda Indonesia (Persero) Tbk. Before and After Privatization

    NASA Astrophysics Data System (ADS)

    Nurasiah, I.; Anggara

    2017-03-01

This study aims to determine differences in the profitability of PT. Garuda Indonesia (Persero) Tbk. before and after privatization using Net Profit Margin (NPM), Return on Investment (ROI) and Return on Equity (ROE). This research used a case study method with a qualitative approach. The data used are secondary data from the official financial statements of PT. Garuda Indonesia (Persero) Tbk. for the period 2008-2013, covering 3 years before privatization and 3 years after privatization. Data analysis was performed by reviewing the financial statement data, calculating the profitability ratios before and after privatization, and determining the average difference between the two periods. The results show that the average profitability ratios calculated with NPM, ROI and ROE decreased each year, reflecting an imbalance among the components of these ratios: profit declined while sales, total assets and equity continued to increase from period to period. An implication for future research is to focus on determining how long a company takes to emerge from crisis following a privatization decision.
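The three ratios used in this record have standard definitions: NPM = net profit / sales, ROI = net profit / total assets, ROE = net profit / equity. A minimal sketch with invented figures (not Garuda Indonesia's actual statements) shows how flat profit against growing sales, assets and equity pushes every ratio down, the pattern the study reports:

```python
def profitability(net_profit, sales, total_assets, equity):
    """Standard profitability ratios, expressed as percentages."""
    return {
        "NPM": 100.0 * net_profit / sales,         # net profit margin
        "ROI": 100.0 * net_profit / total_assets,  # return on investment
        "ROE": 100.0 * net_profit / equity,        # return on equity
    }

# Invented figures: profit flat while sales, assets and equity grow.
before = profitability(net_profit=50.0, sales=1000.0, total_assets=800.0, equity=300.0)
after = profitability(net_profit=50.0, sales=1500.0, total_assets=1200.0, equity=500.0)
for k in before:
    print(f"{k}: {before[k]:.1f}% -> {after[k]:.1f}%")
```

Every ratio falls even though the balance sheet grew, which is why the study attributes the decline to an imbalance among the ratio components rather than to shrinking operations.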

  4. An analysis of the feasibility of carbon management policies as a mechanism to influence water conservation using optimization methods.

    PubMed

    Wright, Andrew; Hudson, Darren

    2014-10-01

Studies of how carbon reduction policies would affect agricultural production have found that there is a connection between carbon emissions and irrigation. Using county-level data, we develop an optimization model that accounts for the gross carbon emitted during the production process to evaluate how carbon-reducing policies applied to agriculture would affect the choices of what to plant and how much to irrigate by producers on the Texas High Plains. Carbon emissions were calculated using carbon equivalent (CE) calculations developed by researchers at the University of Arkansas. Carbon reduction was achieved in the model through a constraint, a tax, or a subsidy. Reducing carbon emissions by 15% resulted in a significant reduction in the amount of water applied to a crop; however, planted acreage changed very little due to a lack of feasible alternative crops. The results show that applying carbon restrictions to agriculture may have important implications for production choices in areas that depend on groundwater resources for agricultural production. Copyright © 2014 Elsevier Ltd. All rights reserved.
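The constraint variant of the model above can be sketched as a tiny profit-maximisation under a carbon cap. All per-acre figures below are invented for illustration (the study uses county-level data and University of Arkansas CE coefficients), and a brute-force search stands in for a proper linear-programming solver:

```python
def best_plan(total_acres, carbon_cap, crops, step=10):
    """Brute-force search for the profit-maximising split of acreage
    between two crops subject to a gross-carbon-emissions cap.

    crops: dict name -> (profit_per_acre, carbon_per_acre, water_per_acre)
    Returns (profit, acres_by_crop, carbon, water)."""
    (n1, (p1, c1, w1)), (n2, (p2, c2, w2)) = crops.items()
    best = None
    for a1 in range(0, total_acres + 1, step):
        a2 = total_acres - a1
        carbon = a1 * c1 + a2 * c2
        if carbon > carbon_cap:
            continue                      # cap violated: infeasible plan
        profit = a1 * p1 + a2 * p2
        water = a1 * w1 + a2 * w2
        if best is None or profit > best[0]:
            best = (profit, {n1: a1, n2: a2}, carbon, water)
    return best

# Invented per-acre figures: irrigated cotton is more profitable but more
# carbon- and water-intensive than dryland sorghum.
crops = {
    "irrigated_cotton": (300.0, 0.40, 18.0),   # $/acre, tCE/acre, acre-inches
    "dryland_sorghum":  (120.0, 0.10, 0.0),
}
unconstrained = best_plan(1000, float("inf"), crops)
capped = best_plan(1000, 0.85 * unconstrained[2], crops)  # 15% carbon cut
print("unconstrained:", unconstrained)
print("15% cap:      ", capped)
```

Even in this toy version, the 15% carbon cut is absorbed mainly as reduced irrigation water (acres shift to the dryland crop while total planted acreage stays fixed), echoing the study's finding that water use falls much more than planted acreage changes.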

  5. A reevaluation of the costs of heart failure and its implications for allocation of health resources in the United States.

    PubMed

    Voigt, Jeff; Sasha John, M; Taylor, Andrew; Krucoff, Mitchell; Reynolds, Matthew R; Michael Gibson, C

    2014-05-01

    The annual cost of heart failure (HF) is estimated at $39.2 billion. This has been acknowledged to underestimate the true costs for care. The objective of this analysis is to more accurately assess these costs. Publicly available data sources were used. Cost calculations incorporated relevant factors such as Medicare hospital cost-to-charge ratios, reimbursement from both government and private insurance, and out-of-pocket expenditures. A recently published Atherosclerosis Risk in Communities (ARIC) HF scheme was used to adjust the HF classification scheme. Costs were calculated with HF as the primary diagnosis (HF in isolation, or HFI) or HF as one of the diagnoses/part of a disease milieu (HF syndrome, or HFS). Total direct costs for HF were calculated at $60.2 billion (HFI) and $115.4 billion (HFS). Indirect costs were $10.6 billion for both. Costs attributable to HF may represent a much larger burden to US health care than what is commonly referenced. These revised and increased costs have implications for policy makers.

  6. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

    This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
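
    The sensitivity of the divergence angle to the assumed point of origin can be sketched with a generic current-weighted average for an axisymmetric plume. This is not the paper's iterative pathfinding algorithm, and the Gaussian current-density profile is made up; the sketch only shows why the origin choice matters.

```python
import numpy as np

def divergence_angle(r, j, z_plane, z_origin):
    """Current-weighted divergence half-angle (deg) of an axisymmetric plume.

    r: radial positions (m); j: ion current density (A/m^2) measured at the
    axial plane z_plane; z_origin: the assumed point of origin on the axis.
    """
    w = j * 2 * np.pi * r                       # annular current weighting
    theta = np.arctan2(r, z_plane - z_origin)   # angle of each annulus from origin
    return np.degrees(np.trapz(w * theta, r) / np.trapz(w, r))

# Made-up Gaussian plume, probe plane 5 cm downstream of the exit plane.
r = np.linspace(0.0, 0.3, 500)
j = np.exp(-((r / 0.05) ** 2))
print(divergence_angle(r, j, z_plane=0.05, z_origin=0.0))
print(divergence_angle(r, j, z_plane=0.05, z_origin=-0.05))  # origin moved upstream
```

    Moving the assumed origin upstream shrinks every annulus angle and therefore the computed divergence, which is exactly the arbitrariness the iterative pathfinding method is designed to remove.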

  8. Non-stationarity in US droughts and implications for water resources planning and management

    NASA Astrophysics Data System (ADS)

    Apurv, T.; Cai, X.

    2017-12-01

    The concepts of return period and reliability are widely used in hydrology for quantifying the risk of extreme events. The conventional way of calculating return period and reliability requires the assumption of stationarity and independence of extreme events in successive years. These assumptions may not hold for droughts, since a single drought event can last for more than one year. Further, droughts are known to be influenced by multi-year to multi-decadal oscillations (e.g., the El Niño-Southern Oscillation (ENSO), the Atlantic Multidecadal Oscillation (AMO), and the Pacific Decadal Oscillation (PDO)), which means that the underlying distribution can change with time. In this study, we develop a non-stationary frequency analysis for relating meteorological droughts in the continental US (CONUS) with physical covariates. We calculate the return period and reliability of meteorological droughts in different parts of CONUS by considering the correlation and the non-stationarity in drought events. We then compare the return period and reliability calculated assuming non-stationarity with those calculated assuming stationarity. The difference between the two estimates is used to quantify the extent of non-stationarity in droughts in different parts of CONUS. We also use the non-stationary frequency analysis model for attributing the causes of non-stationarity. Finally, we discuss the implications for water resources planning and management in the United States.

  9. Design with high strength steel: A case of failure and its implications

    NASA Astrophysics Data System (ADS)

    Rahka, Klaus

    1992-10-01

    A recent proof test failure of a high strength steel pressure vessel is scrutinized. Apparent deficiencies in the procedures to account for elasto-plastic local strain are indicated for the applicable routine (code) strength calculations. Tentative guidance is given for the use of material tensile fracture strain and its strain state (plane strain) correction in fracture margin estimation. A hypothesis is given that the calculated local strain is comparable with a gauge-length-weighted tensile ductility for fracture to initiate at a notch root. A discussion of the actual implications of the failure case and the suggested remedy in the light of ASME Boiler and Pressure Vessel Code Sections III and VIII is presented. Further needs for research and development are delineated. Possible yield and ductility related design limits and their use as material quality indices are discussed.

  10. Defining the baseline for inhibition concentration calculations for hormetic hazards.

    PubMed

    Bailer, A J; Oris, J T

    2000-01-01

    The use of endpoint estimates based on modeling inhibition of test organism response relative to a baseline response is an important tool in the testing and evaluation of aquatic hazards. In the presence of a hormetic hazard, the definition of the baseline response is not clear, because non-zero levels of the hazard stimulate an enhanced response prior to inhibition. In the present study, the methodology and implications of how one defines a baseline response for inhibition concentration estimation in aquatic toxicity tests were evaluated. Three possible baselines were considered: the control response level; the pooling of responses, including controls and all concentration conditions with responses enhanced relative to controls; and, finally, the maximal response. The statistical methods associated with estimating inhibition relative to the first two baseline definitions were described, and a method for estimating inhibition relative to the third baseline definition was derived. These methods were illustrated with data from a standard aquatic zooplankton reproductive toxicity test in which the number of young produced in three broods of a cladoceran exposed to effluent was modeled as a function of effluent concentration. Copyright 2000 John Wiley & Sons, Ltd.
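
    The effect of the baseline choice on an inhibition concentration (ICp) can be shown with a small interpolation sketch. The dose-response numbers are hypothetical, and simple linear interpolation stands in for the study's statistical models; the point is only that the three baseline definitions yield three different IC25 values from the same hormetic curve.

```python
import numpy as np

def icp(conc, resp, baseline, p=25):
    """Concentration inhibiting the response p% relative to `baseline`,
    by linear interpolation on the descending limb of the curve."""
    target = baseline * (1 - p / 100)
    i = int(np.argmax(resp))          # start past any hormetic peak
    # np.interp needs ascending x, so negate the descending responses
    return float(np.interp(-target, -resp[i:], conc[i:]))

# Hypothetical hormetic data: stimulation at low dose, inhibition at high dose.
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])      # % effluent
resp = np.array([20.0, 24.0, 26.0, 22.0, 14.0, 5.0])  # young per female

for label, base in [("control", resp[0]),
                    ("pooled enhanced", resp[:3].mean()),
                    ("maximum", resp.max())]:
    print(f"IC25 vs {label} baseline: {icp(conc, resp, base):.2f}")
```

    The higher the baseline, the lower (more conservative) the resulting IC25, so the regulatory consequences of the baseline definition are not cosmetic.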

  11. Automatic and Direct Identification of Blink Components from Scalp EEG

    PubMed Central

    Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun

    2013-01-01

    Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources due to both its location and significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method does not need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with high accuracy on entire datasets from 15 subjects. PMID:23959240
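
    The central identity, that component-channel correlations can be read off the mixing matrix, can be checked on synthetic data. The channel counts, blink waveform, and frontal projection below are all invented for the demonstration; real pipelines would obtain the mixing matrix from an ICA decomposition rather than construct it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_src, T = 8, 8, 5000

# Synthetic demo: source 0 is a "blink" (sparse large deflections), rest noise.
S = rng.standard_normal((n_src, T))
S[0] = 0.0
S[0, ::250] = 10.0
S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)

A = rng.standard_normal((n_ch, n_src))
A[:3, 0] += 4.0                  # blink projects strongly onto "frontal" channels 0-2
X = A @ S                        # scalp recordings

# The identity: for unit-variance, uncorrelated sources,
# corr(x_j, s_k) = A[j, k] / std(x_j) -- read directly off the mixing matrix.
corr = A / X.std(axis=1)[:, None]

frontal = [0, 1, 2]
blink_ic = int(np.argmax(np.abs(corr[frontal]).sum(axis=0)))
print("identified blink component:", blink_ic)
```

    No per-sample correlation computation is needed: one normalization of the mixing matrix replaces it, which is the simplification the abstract highlights.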

  12. SU-G-BRC-16: Theory and Clinical Implications of the Constant Dosimetric Leaf Gap (DLG) Approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumaraswamy, L; Xu, Z; Podgorsak, M

    Purpose: Commercial dose calculation algorithms incorporate a single DLG value for a given beam energy that is applied across an entire treatment field. However, the physical processes associated with beam generation and dose delivery suggest that the DLG is not constant. The aim of this study is to evaluate the variation of DLG among all leaf pairs, to quantify how this variation impacts delivered dose, and to establish a novel method to correct dose distributions calculated using the approximation of constant DLG. Methods: A 2D diode array was used to measure the DLG for all 60 leaf pairs at several points along each leaf pair travel direction. This approach was validated by comparison to DLG values measured at select points using a 0.6 cc ion chamber with the standard formalism. In-house software was developed to enable incorporation of position dependent DLG values into dose distribution optimization and calculation. The accuracy of beam delivery of both the corrected and uncorrected treatment plans was studied through gamma pass rate evaluation. A comparison of DVH statistics in corrected and uncorrected treatment plans was made. Results: The outer 20 MLC leaf pairs (1.0 cm width) have DLG values that are 0.32 mm (mean) to 0.65 mm (maximum) lower than the central leaf-pair. VMAT plans using a large number of 1 cm wide leaves were more accurately delivered (gamma pass rate increased by 5%) and dose coverage was higher (D100 increased by 3%) when the 2D DLG was modeled. Conclusion: Using a constant DLG value for a given beam energy will result in dose optimization, dose calculation and treatment delivery inaccuracies that become significant for treatment plans with high modulation complexity scores delivered with 1 cm wide leaves.
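
    The scale of the error can be estimated with back-of-the-envelope arithmetic: in a sliding-window delivery the MLC-transmitted fluence scales roughly with (nominal gap + DLG). The gap and central DLG values below are assumed; only the 0.5 mm outer-leaf offset is taken from the reported 0.32-0.65 mm range.

```python
# Rough fluence error from assuming a constant DLG (illustrative values).
gap_mm      = 20.0                 # nominal dynamic leaf gap of the plan (assumed)
dlg_central = 1.6                  # mm, assumed DLG for central leaf pairs
dlg_outer   = dlg_central - 0.5    # mm, outer 1-cm leaves measured ~0.5 mm lower

rel_error = ((gap_mm + dlg_central) - (gap_mm + dlg_outer)) / (gap_mm + dlg_outer)
print(f"fluence overestimate for outer leaves under a constant DLG: {rel_error:.1%}")
```

    A few percent per control point is small, but it compounds in highly modulated plans with small gaps, consistent with the study's conclusion.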

  13. Noise power spectra of images from digital mammography detectors.

    PubMed

    Williams, M B; Mangiafico, P A; Simoni, P U

    1999-07-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. We begin with a brief review of the fundamentals of NPS theory and measurement, derive explicit expressions for calculation of the one- and two-dimensional (1D and 2D) NPS, and discuss some of the considerations and tradeoffs when these concepts are applied to digital systems. Measurements of the NPS of two detectors for digital mammography are presented to illustrate some of the implications of the choices available. For both systems, two-dimensional noise power spectra obtained over a range of input fluence exhibit pronounced asymmetry between the orthogonal frequency dimensions. The 2D spectra of both systems also demonstrate dominant structures both on and off the primary frequency axes indicative of periodic noise components. Although the two systems share many common noise characteristics, there are significant differences, including markedly different dark-noise magnitudes, differences in NPS shape as a function of both spatial frequency and exposure, and differences in the natures of the residual fixed pattern noise following flat fielding corrections. For low x-ray exposures, quantum noise-limited operation may be possible only at low spatial frequency. Depending on the method of obtaining the 1D NPS (i.e., synthetic slit scanning or slice extraction from the 2D NPS), on-axis periodic structures can be misleadingly smoothed or missed entirely. Our measurements indicate that for these systems, 1D spectra useful for the purpose of detective quantum efficiency calculation may be obtained from thin cuts through the central portion of the calculated 2D NPS. On the other hand, low-frequency spectral values do not converge to an asymptotic value with increasing slit length when 1D spectra are generated using the scanned synthetic slit method. Aliasing can contribute significantly to the digital NPS, especially near the Nyquist frequency. Calculation of the theoretical presampling NPS and explicit inclusion of aliased noise power shows good agreement with measured values.
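
    The standard 2D NPS estimator can be sketched in a few lines. This is the generic textbook form, not the authors' exact pipeline; normalization conventions and detrending choices vary between implementations, and the white-noise input is only a sanity check.

```python
import numpy as np

def nps_2d(rois, px):
    """2D noise power spectrum from an ensemble of flat-field ROIs.

    rois: (N, ny, nx) array; px: pixel pitch (mm).
    NPS(u, v) = px^2 / (nx * ny) * <|DFT2(roi - mean)|^2>.
    """
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # detrend each ROI
    _, ny, nx = rois.shape
    ps = np.abs(np.fft.fft2(rois)) ** 2
    return px * px / (nx * ny) * ps.mean(axis=0)

# White-noise sanity check: the spectrum should be flat at sigma^2 * px^2.
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(64, 128, 128))
nps = nps_2d(rois, px=0.1)
print(nps.mean())   # close to 100 * 0.01 = 1.0
```

    A 1D cut for DQE work would then be a thin slice through this 2D array near (but excluding) the zero-frequency axis, as the abstract recommends.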

  14. VUV photodynamics and chiral asymmetry in the photoionization of gas phase alanine enantiomers.

    PubMed

    Tia, Maurice; Cunha de Miranda, Barbara; Daly, Steven; Gaie-Levrel, François; Garcia, Gustavo A; Nahon, Laurent; Powis, Ivan

    2014-04-17

    The valence shell photoionization of the simplest proteinaceous chiral amino acid, alanine, is investigated over the vacuum ultraviolet region from its ionization threshold up to 18 eV. Tunable and variable polarization synchrotron radiation was coupled to a double imaging photoelectron/photoion coincidence (i(2)PEPICO) spectrometer to produce mass-selected threshold photoelectron spectra and derive the state-selected fragmentation channels. The photoelectron circular dichroism (PECD), an orbital-sensitive, conformer-dependent chiroptical effect, was also recorded at various photon energies and compared to continuum multiple scattering calculations. Two complementary vaporization methods-aerosol thermodesorption and a resistively heated sample oven coupled to an adiabatic expansion-were applied to promote pure enantiomers of alanine into the gas phase, yielding neutral alanine with different internal energy distributions. A comparison of the photoelectron spectroscopy, fragmentation, and dichroism measured for each of the vaporization methods was rationalized in terms of internal energy and conformer populations and supported by theoretical calculations. The analytical potential of the so-called PECD-PICO detection technique-where the electron spectroscopy and circular dichroism can be obtained as a function of mass and ion translational energy-is underlined and applied to characterize the origin of the various species found in the experimental mass spectra. Finally, the PECD findings are discussed within an astrochemical context, and possible implications regarding the origin of biomolecular asymmetry are identified.

  15. Energy system contribution to 400-metre and 800-metre track running.

    PubMed

    Duffield, Rob; Dawson, Brian; Goodman, Carmel

    2005-03-01

    As a wide range of values has been reported for the relative energetics of 400-m and 800-m track running events, this study aimed to quantify the respective aerobic and anaerobic energy contributions to these events during track running. Sixteen trained 400-m (11 males, 5 females) and 11 trained 800-m (9 males, 2 females) athletes participated in this study. The participants performed (on separate days) a laboratory graded exercise test and multiple race time-trials. The relative energy system contribution was calculated by multiple methods based upon measures of race VO2, accumulated oxygen deficit (AOD), blood lactate and estimated phosphocreatine degradation (lactate/PCr). The aerobic/anaerobic energy system contribution (AOD method) to the 400-m event was calculated as 41/59% (male) and 45/55% (female). For the 800-m event, an increased aerobic involvement was noted, with a 60/40% (male) and 70/30% (female) respective contribution. Significant (P < 0.05) negative correlations were noted between race performance and anaerobic energy system involvement (lactate/PCr) for the male 800-m and female 400-m events (r = -0.77 and -0.87 respectively). These track running data compare well with previous estimates of the relative energy system contributions to the 400-m and 800-m events. Additionally, the relative importance and speed of interaction of the respective metabolic pathways has implications for training for these events.
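
    The AOD bookkeeping behind these percentages is a short calculation. The race duration, O2 demand, and measured VO2 below are invented so that the split lands near the reported male 400-m value; they are not the study's data.

```python
# Worked example of the accumulated-oxygen-deficit (AOD) method (assumed values).
race_time_s  = 50.0     # 400-m race duration, s (assumed)
vo2_demand   = 78.0     # estimated O2 demand from submax extrapolation, ml/kg/min (assumed)
vo2_measured = 32.0     # mean race VO2 actually consumed, ml/kg/min (assumed)

aerobic_ml_kg = vo2_measured * race_time_s / 60   # O2 actually used
demand_ml_kg  = vo2_demand * race_time_s / 60     # total O2 equivalent required
aod_ml_kg     = demand_ml_kg - aerobic_ml_kg      # anaerobic share (the AOD)

aerobic_pct = 100 * aerobic_ml_kg / demand_ml_kg
print(f"aerobic {aerobic_pct:.0f}% / anaerobic {100 - aerobic_pct:.0f}%")
```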

  16. Gravity field of Jupiter’s moon Amalthea and the implication on a spacecraft trajectory

    NASA Astrophysics Data System (ADS)

    Weinwurm, Gudrun

    2006-01-01

    Before its final plunge into Jupiter in September 2003, GALILEO made a last 'visit' to one of Jupiter's moons, Amalthea. This final flyby of the spacecraft's successful mission occurred on November 5, 2002. In order to analyse the spacecraft data with respect to Amalthea's gravity field, interior models of the moon had to be provided. The method used for this approach is based on the numerical integration of infinitesimal volume elements of a three-axial ellipsoid in elliptic coordinates. To derive the gravity field coefficients of the body, the second method of Neumann was applied. Based on the spacecraft trajectory data provided by the Jet Propulsion Laboratory, GALILEO's velocity perturbations at closest approach could be calculated. The harmonic coefficients of Amalthea's gravity field have been derived up to degree and order six, for both homogeneous and reasonable heterogeneous cases. Based on these values, the impact on the trajectory of GALILEO was calculated and compared to existing Doppler data. Furthermore, predictions for future spacecraft flybys were derived. No two-way Doppler data were available during the flyby, and the harmonic coefficients of the gravity field are buried in the one-way Doppler noise. Nevertheless, the generated gravity field models reflect the most likely interior structure of the moon and can be a basis for further exploration of the Jovian system.
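
    For the homogeneous case only, the degree-2 coefficients of a triaxial ellipsoid have a closed form (the paper's numerical integration is needed for heterogeneous interiors and higher degrees). The semi-axes are approximate published Amalthea values, and the choice of reference radius (volumetric mean) is an assumption; conventions differ.

```python
# Degree-2 unnormalized gravity coefficients of a homogeneous triaxial
# ellipsoid with semi-axes a >= b >= c:
#   C20 = -(a^2 + b^2 - 2 c^2) / (10 R^2),  C22 = (a^2 - b^2) / (20 R^2)
a, b, c = 125.0, 73.0, 64.0              # km, approximate Amalthea semi-axes
R = (a * b * c) ** (1 / 3)               # volumetric mean radius (assumed reference)

C20 = -(a**2 + b**2 - 2 * c**2) / (10 * R**2)
C22 = (a**2 - b**2) / (20 * R**2)
print(f"R = {R:.1f} km, C20 = {C20:.4f}, C22 = {C22:.4f}")
```

    The large magnitudes (orders of magnitude above those of near-spherical bodies) reflect Amalthea's extreme elongation, which is why its shape dominates the flyby perturbation modeling.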

  17. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    NASA Astrophysics Data System (ADS)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
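
    For reference, the counterpoise scheme commonly used to estimate BSSE is the Boys-Bernardi form below. The abstract does not state which correction the authors applied, so this is background rather than their procedure; superscripts denote the basis used, subscripts the system evaluated, all at the dimer geometry.

```latex
% Counterpoise interaction energy for a dimer AB: each monomer is evaluated
% in the full dimer basis (superscript) at the dimer geometry.
\Delta E^{\mathrm{CP}}_{\mathrm{int}}
  = E^{AB}_{AB} - E^{AB}_{A} - E^{AB}_{B}
% The BSSE estimate is the energy lowering each monomer gains from the
% partner's basis functions:
\mathrm{BSSE} = \bigl(E^{A}_{A} - E^{AB}_{A}\bigr) + \bigl(E^{B}_{B} - E^{AB}_{B}\bigr)
```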

  18. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: dispersion, induction, and basis set superposition error.

    PubMed

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T; Dannenberg, J J

    2012-10-07

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  19. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    PubMed Central

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-01-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states. PMID:23039587

  20. Maintaining the Na atmosphere of Mercury

    NASA Astrophysics Data System (ADS)

    Killen, R. M.; Morgan, T. H.

    1993-02-01

    The possible sources of the Na atmosphere of Mercury are studied through calculations. The likely structure, composition, and temperature of the planet's upper crust are examined, along with the probable flux of Na from depth by grain boundary diffusion and by Knudsen flow. The creation of fresh regolith is considered, along with mechanisms for supplying Na from the surface to the exosphere. The implications of the calculations for the probable abundances in the regolith are discussed.

  1. Implications of pressure diffusion for shock waves

    NASA Technical Reports Server (NTRS)

    Ram, Ram Bachan

    1989-01-01

    The report deals with the possible implications of pressure diffusion for shocks in one dimensional traveling waves in an ideal gas. From this new hypothesis all aspects of such shocks can be calculated except shock thickness. Unlike conventional shock theory, the concept of entropy is not needed or used. Our analysis shows that temperature rises near a shock, which is of course an experimental fact; however, it also predicts that very close to a shock, density increases faster than pressure. In other words, a shock itself is cold.

  2. Mapping of compositional properties of coal using isometric log-ratio transformation and sequential Gaussian simulation - A comparative study for spatial ultimate analyses data.

    PubMed

    Karacan, C Özgen; Olea, Ricardo A

    2018-03-01

    Chemical properties of coal largely determine coal handling, processing, beneficiation methods, and design of coal-fired power plants. Furthermore, these properties impact coal strength, coal blending during mining, as well as coal's gas content, which is important for mining safety. In order for these processes and quantitative predictions to be successful, safer, and economically feasible, it is important to determine and map chemical properties of coals accurately in order to infer these properties prior to mining. Ultimate analysis quantifies principal chemical elements in coal. These elements are C, H, N, S, O, and, depending on the basis, ash, and/or moisture. The basis for the data is determined by the condition of the sample at the time of analysis, with an "as-received" basis being the closest to sampling conditions and thus to the in-situ conditions of the coal. The parts determined or calculated as the result of ultimate analyses are compositions, reported in weight percent, and pose the challenges of statistical analyses of compositional data. The treatment of parts using proper compositional methods may be even more important in mapping them, as most mapping methods carry uncertainty due to partial sampling as well. In this work, we map the ultimate analyses parts of the Springfield coal from an Indiana section of the Illinois basin, USA, using sequential Gaussian simulation of isometric log-ratio transformed compositions. We compare the results with those of direct simulations of compositional parts. We also compare the implications of these approaches in calculating other properties using correlations to identify the differences and consequences. Although the study here is for coal, the methods described in the paper are applicable to any situation involving compositional data and its mapping.
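
    The isometric log-ratio (ilr) transform that precedes the sequential Gaussian simulation can be sketched with a standard pivot basis. The coal composition below is hypothetical, and this particular Helmert-type basis is one common choice among several; the paper does not specify which basis was used.

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of compositions (last axis = parts)."""
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

def ilr(x):
    """Isometric log-ratio via a Helmert-type orthonormal basis."""
    D = x.shape[-1]
    H = np.zeros((D - 1, D))
    for i in range(D - 1):
        H[i, :i + 1] = 1.0 / (i + 1)
        H[i, i + 1] = -1.0
        H[i] *= np.sqrt((i + 1) / (i + 2))   # normalize each contrast
    return clr(x) @ H.T

# Hypothetical as-received ultimate analysis: C, H, N, S, O, ash fractions.
comp = np.array([[0.72, 0.05, 0.015, 0.01, 0.08, 0.125]])
z = ilr(comp)
print(z)   # D-1 = 5 unconstrained coordinates, safe to simulate/krige
```

    The ilr coordinates are unconstrained real numbers, so geostatistical simulation in ilr space cannot produce negative parts or sums exceeding 100% after back-transformation, which is the motivation for the approach over direct simulation of the parts.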

  3. Prediction of nonlinear optical properties of organic materials. General theoretical considerations

    NASA Technical Reports Server (NTRS)

    Cardelino, B.; Moore, C.; Zutaut, S.

    1993-01-01

    The prediction of nonlinear optical properties of organic materials is geared to assist materials scientists in the selection of good candidate molecules. A brief summary of the quantum mechanical methods used for estimating hyperpolarizabilities will be presented. The advantages and limitations of each technique will be discussed. Particular attention will be given to the finite-field method for calculating first and second order hyperpolarizabilities, since this method is better suited for large molecules. Corrections for dynamic fields and bulk effects will be discussed in detail, focusing on solvent effects, conformational isomerization, core effects, dispersion, and hydrogen bonding. Several results will be compared with data obtained from third-harmonic-generation (THG) and dc-induced second harmonic generation (EFISH) measurements. These comparisons will demonstrate the qualitative ability of the method to predict the relative strengths of hyperpolarizabilities of a class of compounds. The future application of molecular mechanics, as well as other techniques, in the study of bulk properties and solid state defects will be addressed. The relationship between large values for nonlinear optical properties and large conjugation lengths is well known, and is particularly important for third-order processes. For this reason, the materials with the largest observed nonresonant third-order properties are conjugated polymers. An example of this type of polymer is polydiacetylene. One of the problems in dealing with polydiacetylene is that substituents which may enhance its nonlinear properties may ultimately prevent it from polymerizing. A model which attempts to predict the likelihood of solid-state polymerization is considered, along with the implications of the assumptions that are used. Calculations of the third-order optical properties and their relationship to first-order properties and energy gaps will be discussed. The relationship between monomeric and polymeric third-order optical properties will also be considered.
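
    The finite-field method extracts (hyper)polarizabilities from numerical derivatives of the energy with respect to an applied field. Below is a generic central-difference sketch along one axis, verified on a toy polynomial "energy surface" with known coefficients; it is not the authors' implementation, and the field step and Taylor-expansion convention are assumptions.

```python
import numpy as np

def finite_field_hyperpol(U, F=1e-3):
    """Finite-field estimates of mu, alpha, beta along one axis from an
    energy function U(field), using the expansion
    U(F) = U0 - mu F - (1/2) alpha F^2 - (1/6) beta F^3 - ...
    """
    u = {k: U(k * F) for k in (-2, -1, 0, 1, 2)}
    mu = (8 * (u[-1] - u[1]) - (u[-2] - u[2])) / (12 * F)      # 5-pt 1st derivative
    alpha = -(u[1] - 2 * u[0] + u[-1]) / F**2                  # 3-pt 2nd derivative
    beta = (2 * (u[1] - u[-1]) - (u[2] - u[-2])) / (2 * F**3)  # odd 3rd derivative
    return mu, alpha, beta

# Toy "molecule": a polynomial energy surface with known mu, alpha, beta.
mu0, a0, b0 = 0.5, 12.0, 300.0
U = lambda F: 1.0 - mu0 * F - 0.5 * a0 * F**2 - b0 * F**3 / 6
print(finite_field_hyperpol(U))
```

    In practice U would be a self-consistent electronic-structure energy at each field value, and the field step must balance truncation error against numerical noise, one reason the method scales well to large molecules but needs careful convergence checks.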

  4. Noninvasive PK11195-PET Image Analysis Techniques Can Detect Abnormal Cerebral Microglial Activation in Parkinson's Disease.

    PubMed

    Kang, Yeona; Mozley, P David; Verma, Ajay; Schlyer, David; Henchcliffe, Claire; Gauthier, Susan A; Chiao, Ping C; He, Bin; Nikolopoulou, Anastasia; Logan, Jean; Sullivan, Jenna M; Pryor, Kane O; Hesterman, Jacob; Kothari, Paresh J; Vallabhajosula, Shankar

    2018-05-04

    Neuroinflammation has been implicated in the pathophysiology of Parkinson's disease (PD), which might be influenced by successful neuroprotective drugs. The uptake of [11C](R)-PK11195 (PK) is often considered to be a proxy for neuroinflammation, and can be quantified using the Logan graphical method with an image-derived blood input function, or the Logan reference tissue model using automated reference region extraction. The purposes of this study were (1) to assess whether these noninvasive image analysis methods can discriminate between patients with PD and healthy volunteers (HVs), and (2) to establish the effect size that would be required to distinguish true drug-induced changes from system variance in longitudinal trials. The sample consisted of 20 participants with PD and 19 HVs. Two independent teams analyzed the data to compare the volume of distribution calculated using image-derived input functions (IDIFs), and binding potentials calculated using the Logan reference region model. With all methods, the higher signal-to-background in patients resulted in lower variability and better repeatability than in controls. We were able to use noninvasive techniques showing significantly increased uptake of PK in multiple brain regions of participants with PD compared to HVs. Although not necessarily reflecting absolute values, these noninvasive image analysis methods can discriminate between PD patients and HVs. We see a difference of 24% in the substantia nigra between PD and HV with a repeatability coefficient of 13%, showing that it will be possible to estimate responses in longitudinal, within subject trials of novel neuroprotective drugs. © 2018 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.
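
    The Logan graphical method estimates the total volume of distribution V_T as the late-time slope of a transformed plot of the tissue and plasma curves. Below is a generic sketch validated on a synthetic one-tissue-compartment model, for which the Logan relation is exactly linear; the kinetic constants, input function, and t* are all invented, and real use would feed in measured (image-derived) curves.

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Logan graphical estimate of total volume of distribution V_T.

    t: frame mid-times (min); ct: tissue activity curve; cp: plasma input.
    After t_star the plot of int(ct)/ct vs int(cp)/ct is linear; slope = V_T.
    """
    int_ct = np.concatenate([[0], np.cumsum((ct[1:] + ct[:-1]) / 2 * np.diff(t))])
    int_cp = np.concatenate([[0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
    m = t >= t_star
    slope, _ = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope

# Synthetic 1-tissue-compartment data where the slope should equal K1/k2.
t = np.linspace(0.0, 90.0, 400)
cp = np.exp(-0.08 * t)                        # toy plasma input
K1, k2 = 0.3, 0.1                             # true V_T = K1 / k2 = 3.0
ct = K1 * (np.exp(-0.08 * t) - np.exp(-k2 * t)) / (k2 - 0.08)  # analytic convolution
print(logan_vt(t, ct, cp, t_star=40.0))       # close to 3.0
```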

  5. Influence of critical closing pressure on systemic vascular resistance and total arterial compliance: A clinical invasive study.

    PubMed

    Chemla, Denis; Lau, Edmund M T; Hervé, Philippe; Millasseau, Sandrine; Brahimi, Mabrouk; Zhu, Kaixian; Sattler, Caroline; Garcia, Gilles; Attal, Pierre; Nitenberg, Alain

    2017-12-01

    Systemic vascular resistance (SVR) and total arterial compliance (TAC) modulate systemic arterial load, and their product is the time constant (Tau) of the Windkessel. Previous studies have assumed that aortic pressure decays towards a pressure asymptote (P∞) close to 0 mmHg, as right atrial pressure is considered the outflow pressure. Using these assumptions, aortic Tau values of ∼1.5 seconds have been documented. However, a zero P∞ may not be physiological because of the high critical closing pressure previously documented in vivo. Our aims were to calculate precisely the Tau and P∞ of the Windkessel, and to determine the implications for the indices of systemic arterial load. Aortic pressure decay was analysed using high-fidelity recordings in 16 subjects. Tau was calculated assuming P∞ = 0 mmHg, and by two methods that make no assumptions regarding P∞ (the derivative and best-fit methods). Assuming P∞ = 0 mmHg, we documented a Tau value of 1372 ± 308 ms, with only 29% of Windkessel function manifested by end-diastole. In contrast, Tau values of 306 ± 109 and 353 ± 106 ms were found from the derivative and best-fit methods, with P∞ values of 75 ± 12 and 71 ± 12 mmHg, and with ∼80% completion of Windkessel function. The "effective" resistance and compliance were ∼70% and ∼40% less than SVR and TAC (area method), respectively. We did not challenge the Windkessel model, but rather the estimation technique of model variables (Tau, SVR, TAC) that assumes P∞ = 0. The study favoured a shorter Tau of the Windkessel and a higher P∞ compared with previous studies. This calls for a reappraisal of the quantification of systemic arterial load. Crown Copyright © 2017. Published by Elsevier Masson SAS. All rights reserved.
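
    The "best-fit" method amounts to fitting a mono-exponential decay with a free asymptote. The synthetic decay below uses values in the study's reported ballpark purely for the demonstration; it is not patient data, and the noise level is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def windkessel(t, p0, p_inf, tau):
    """Mono-exponential diastolic pressure decay toward asymptote p_inf."""
    return p_inf + (p0 - p_inf) * np.exp(-t / tau)

# Synthetic diastolic decay (values assumed, roughly the study's ballpark).
t = np.linspace(0.0, 0.6, 200)                   # s, diastolic interval
p = windkessel(t, 110.0, 72.0, 0.33)
p_noisy = p + np.random.default_rng(0).normal(0.0, 0.3, t.size)

# "Best-fit" method: estimate P0, P_inf and tau jointly.
popt, _ = curve_fit(windkessel, t, p_noisy, p0=(110.0, 60.0, 0.5))

# Classical approach: force P_inf = 0; tau must absorb the high asymptote.
popt0, _ = curve_fit(lambda t, p0, tau: windkessel(t, p0, 0.0, tau),
                     t, p_noisy, p0=(110.0, 1.0))

print("P_inf = %.1f mmHg, tau = %.0f ms" % (popt[1], popt[2] * 1e3))
print("tau with P_inf forced to 0: %.2f s" % popt0[1])
```

    Forcing P∞ = 0 stretches the fitted time constant several-fold on the same data, reproducing the gap between the classical ∼1.4 s estimates and the ∼350 ms best-fit values reported here.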

  6. First-principles theory of iron up to earth-core pressures: Structural, vibrational, and elastic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soederlind, P.; Moriarty, J.A.; Wills, J.M.

    1996-06-01

Ab initio electronic-structure calculations, based on density-functional theory and a full-potential linear-muffin-tin-orbital method, have been used to predict crystal-structure phase stabilities, elastic constants, and Brillouin-zone-boundary phonons for iron under compression. Total energies for five crystal structures, bcc, fcc, bct, hcp, and dhcp, have been calculated over a wide volume range. In agreement with experiment and previous theoretical calculations, a magnetic bcc ground state is obtained at ambient pressure and a nonmagnetic hcp ground state is found at high pressure, with a predicted bcc → hcp phase transition at about 10 GPa. Also in agreement with very recent diamond-anvil-cell experiments, a metastable dhcp phase is found at high pressure, which remains magnetic and consequently accessible at high temperature up to about 50 GPa. In addition, the bcc structure becomes mechanically unstable at pressures above 2 Mbar (200 GPa) and a metastable, but still magnetic, bct phase (c/a ≈ 0.875) develops. For high-pressure nonmagnetic iron, fcc and hcp elastic constants and fcc phonon frequencies have been calculated to above 4 Mbar. These quantities rise smoothly with pressure, but an increasing tendency towards elastic anisotropy as a function of compression is observed, and this has important implications for the solid inner core of the earth. The fcc elastic-constant and phonon data have also been used in combination with generalized pseudopotential theory to develop many-body interatomic potentials, from which high-temperature thermodynamic properties and melting can be obtained. In this paper, these potentials have been used to calculate full fcc and hcp phonon spectra and corresponding Debye temperatures as a function of compression. © 1996 The American Physical Society.

  7. Collaborative Research: Neutrinos & Nucleosynthesis in Hot Dense Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Sanjay

    2013-09-06

It is now firmly established that neutrinos, which are copiously produced in the hot and dense core of the supernova, play a role in the supernova explosion mechanism and in the synthesis of heavy elements through a phenomenon known as r-process nucleosynthesis. They are also detectable in terrestrial neutrino experiments, and serve as a probe of the extreme environment and complex dynamics encountered in the supernova. The major goal of the UW research activity relevant to this project was to calculate the neutrino interaction rates in hot and dense matter of relevance to core collapse supernovae. These serve as key input physics in large scale computer simulations of the supernova dynamics and nucleosynthesis being pursued at national laboratories here in the United States and by other groups in Europe and Japan. Our calculations show that neutrino production and scattering rates are altered by the nuclear interactions and that these modifications have important implications for nucleosynthesis and terrestrial neutrino detection. The calculation of neutrino rates in dense matter is difficult because nucleons in the dense matter are strongly coupled. A neutrino interacts with several nucleons, and the quantum interference between scattering off different nucleons depends on the nature of correlations between them in dense matter. To describe these correlations we used analytic methods based on mean field theory and hydrodynamics, and computational methods such as Quantum Monte Carlo. We found that due to nuclear effects neutrino production rates at relevant temperatures are enhanced, and that electron neutrinos are more easily absorbed than anti-electron neutrinos in dense matter. The latter was shown to favor synthesis of heavy neutron-rich elements in the supernova.

  8. In search of the economic sustainability of Hadron therapy: the real cost of setting up and operating a Hadron facility.

    PubMed

    Vanderstraeten, Barbara; Verstraete, Jan; De Croock, Roger; De Neve, Wilfried; Lievens, Yolande

    2014-05-01

To determine the treatment cost and required reimbursement for a new hadron therapy facility, considering different technical solutions and financing methods. The 3 technical solutions analyzed are a carbon only (COC), proton only (POC), and combined (CC) center, each operating 2 treatment rooms and assumed to function at full capacity. A business model defines the required reimbursement and analyzes the financial implications of setting up a facility over time; activity-based costing (ABC) calculates the treatment costs per type of patient for a center in a steady state of operation. Both models compare a private, full-cost approach with public sponsoring, only taking into account operational costs. Yearly operational costs range between €10.0M (M = million) for a publicly sponsored POC and €24.8M for a CC with private financing. Disregarding inflation, the average treatment cost calculated with ABC (COC: €29,450; POC: €46,342; CC: €46,443 for private financing; respectively €16,059, €28,296, and €23,956 for public sponsoring) is slightly lower than the required reimbursement based on the business model (between €51,200 in a privately funded POC and €18,400 in a COC with public sponsoring). Reimbursement for privately financed centers is very sensitive to a delay in commissioning and to the interest rate. Higher throughput and hypofractionation have a positive impact on the treatment costs. Both calculation methods are valid and complementary. The financially most attractive option of a publicly sponsored COC should be balanced against the clinical necessities and the sociopolitical context. Copyright © 2014 Elsevier Inc. All rights reserved.
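The activity-based costing step can be sketched as a driver-proportional allocation of yearly cost pools to one treatment course. The pools, drivers, and all figures below are hypothetical placeholders, not the study's inputs:

```python
def abc_cost(cost_pools, driver_totals, driver_use):
    """Activity-based costing sketch: allocate each yearly cost pool to one
    treatment course in proportion to its consumption of that pool's
    activity driver (e.g. treatment-room minutes)."""
    return sum(pool * driver_use[d] / driver_totals[d]
               for d, pool in cost_pools.items())

# Hypothetical two-pool example (euros per year, minutes per year)
cost = abc_cost(
    cost_pools={"room_time": 12_000_000, "staff_time": 6_000_000},
    driver_totals={"room_time": 200_000, "staff_time": 400_000},
    driver_use={"room_time": 600, "staff_time": 900},  # one treatment course
)
```

A longer course (more room minutes, as in hypofractionation's inverse) directly raises the allocated cost, which is why throughput and fractionation drive the per-treatment figures.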

  9. Retinal vasculature classification using novel multifractal features

    NASA Astrophysics Data System (ADS)

    Ding, Y.; Ward, W. O. C.; Duan, Jinming; Auer, D. P.; Gowland, Penny; Bai, L.

    2015-11-01

Retinal blood vessels are implicated in a large number of diseases, including diabetic retinopathy and cardiovascular disease, which damage the retinal vasculature. The availability of retinal vessel imaging provides an excellent opportunity for monitoring and diagnosis of retinal diseases, and automatic analysis of retinal vessels will help with the process. However, state-of-the-art vascular analysis methods, such as counting the number of branches or measuring the curvature and diameter of individual vessels, are unsuitable for the microvasculature. There has been published research using fractal analysis to calculate fractal dimensions of retinal blood vessels, but so far there has been no systematic research extracting discriminant features from retinal vessels for classification. This paper introduces new methods for feature extraction from multifractal spectra of retinal vessels for classification. Two publicly available retinal vascular image databases are used for the experiments, and the proposed methods have produced accuracies of 85.5% and 77% for classification of healthy and diabetic retinal vasculatures. Experiments show that classification with multiple fractal features produces better rates compared with methods using a single fractal dimension value. In addition to this, experiments also show that classification accuracy can be affected by the accuracy of vessel segmentation algorithms.
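A standard way to obtain a single fractal dimension from a segmented (binary) vessel image is box counting; this numpy sketch illustrates the idea. It is not the paper's multifractal feature extraction, which generalizes the single dimension to a whole spectrum:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary image: count the
    occupied s x s boxes N(s) at each box size s, then fit
    log N(s) = -D * log s + c and return D."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled region is ~2-dimensional, a thin line ~1-dimensional
filled = np.ones((64, 64), dtype=bool)
d_filled = box_counting_dimension(filled)

line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_line = box_counting_dimension(line)
```

A vessel tree typically lands between these two extremes, and classifiers can use the value (or, as above, features of the full multifractal spectrum) as input.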

  10. Predictive equation of state method for heavy materials based on the Dirac equation and density functional theory

    NASA Astrophysics Data System (ADS)

    Wills, John M.; Mattsson, Ann E.

    2012-02-01

Density functional theory (DFT) provides a formally predictive base for equation of state properties. Available approximations to the exchange/correlation functional provide accurate predictions for many materials in the periodic table. For heavy materials however, DFT calculations, using available functionals, fail to provide quantitative predictions, and often fail to be even qualitative. This deficiency is due both to the lack of the appropriate confinement physics in the exchange/correlation functional and to approximations used to evaluate the underlying equations. In order to assess and develop accurate functionals, it is essential to eliminate all other sources of error. In this talk we describe an efficient first-principles electronic structure method based on the Dirac equation and compare the results obtained with this method with other methods generally used. Implications for high-pressure equation of state of relativistic materials are demonstrated in application to Ce and the light actinides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Influence of each Zernike aberration on the propagation of laser beams through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Azarian, Adrian; Gladysz, Szymon

    2014-10-01

We study the influence of each Zernike mode on the propagation of a laser beam through the atmosphere by two different numerical methods. In the first method, an idealized adaptive optics system is modeled to subtract a certain number of Zernike modes from the beam. The effect of each aberration is quantified using the Strehl ratio of the long-term exposure in the target/receiver plane. In the second method, the strength of each Zernike mode is varied using a numerical space-filling design during the generation of the phase screens. The resulting central intensity for each point of the design is then studied by a linear discriminant analysis, which yields the importance of each Zernike mode. The results of the two methods are consistent. They indicate that, for a focused Gaussian beam and for certain geometries and turbulence strengths, the hypothesis of diminishing gains with correction of each new mode is not true. For such cases, we observe jumps in the calculated criteria, which indicate an increased importance of some particular modes, especially coma. The implications of these results for the design of adaptive optics systems are discussed.

  12. Developments in optical modeling methods for metrology

    NASA Astrophysics Data System (ADS)

    Davidson, Mark P.

    1999-06-01

Despite the fact that in recent years the scanning electron microscope has come to dominate the linewidth measurement application for wafer manufacturing, there are still many applications for optical metrology and alignment. These include mask metrology, stepper alignment, and overlay metrology. Most advanced non-optical lithographic technologies are also considering using optics for alignment. In addition, a number of in-situ technologies have been proposed which use optical measurements to control one aspect or another of the semiconductor process. So optics is definitely not dying out in the semiconductor industry. In this paper a description of recent advances in optical metrology and alignment modeling is presented. The theory of high numerical aperture image simulation for partially coherent illumination is discussed. The implications of telecentric optics for the image simulation are also presented. Reciprocity tests are proposed as an important measure of numerical accuracy. Diffraction efficiencies for chrome gratings on reticles are one good way to test Kirchhoff's approximation against rigorous calculations. We find significant differences between the predictions of Kirchhoff's approximation and rigorous methods. The methods for simulating brightfield, confocal, and coherence probe microscope images are outlined, as are methods for describing aberrations such as coma, spherical aberration, and illumination aperture decentering.

  13. Consumerism and consumer complexity: implications for university teaching and teaching evaluation.

    PubMed

    Hall, Wendy A

    2013-07-01

    A contemporary issue is the effects of a corporate production metaphor and consumerism on university education. Efforts by universities to attract students and teaching strategies aimed at 'adult learners' tend to treat student consumers as a homogeneous group with similar expectations. In this paper, I argue that consumer groups are not uniform. I use Dagevos' theoretical approach to categorize consumers as calculating, traditional, unique, and responsible. Based on the characteristics of consumers occupying these categories, I describe the implications of the varying consumer expectations for teaching. I also consider the implications for evaluation of teaching and call for research taking consumer types into account when evaluating teaching. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Interstellar polycyclic aromatic hydrocarbons - The infrared emission bands, the excitation/emission mechanism, and the astrophysical implications

    NASA Technical Reports Server (NTRS)

    Allamandola, L. J.; Tielens, G. G. M.; Barker, J. R.

    1989-01-01

A comprehensive study of the PAH hypothesis is presented, including the interstellar IR spectral features which have been attributed to emission from highly vibrationally excited PAHs. Spectroscopic and IR emission features are discussed in detail. A method for calculating the IR fluorescence spectrum from a vibrationally excited molecule is described. Analysis of the interstellar spectrum suggests that the PAHs which dominate the IR spectra contain between 20 and 40 C atoms. The results are compared with results from a thermal approximation. It is found that, for high levels of vibrational excitation and emission from low-frequency modes, the two methods produce similar results. Also, consideration is given to the relationship between PAH molecules and amorphous C particles, the most likely interstellar PAH molecular structures, the spectroscopic structure produced by PAHs and PAH-related materials in the UV portion of the interstellar extinction curve, and the influence of PAH charge on the UV, visible, and IR regions.

  15. Microplastic pollution in the Northeast Atlantic Ocean: validated and opportunistic sampling.

    PubMed

    Lusher, Amy L; Burke, Ann; O'Connor, Ian; Officer, Rick

    2014-11-15

Levels of marine debris, including microplastics, are largely un-documented in the Northeast Atlantic Ocean. Broad scale monitoring efforts are required to understand the distribution, abundance and ecological implications of microplastic pollution. A method of continuous sampling was developed to be conducted in conjunction with a wide range of vessel operations to maximise vessel time. Transects covering a total of 12,700 km were sampled through continuous monitoring of open ocean sub-surface water, resulting in 470 samples. Items classified as potential plastics were identified in 94% of samples. A total of 2315 particles were identified; 89% were less than 5 mm in length, classifying them as microplastics. Average plastic abundance in the Northeast Atlantic was calculated as 2.46 particles m⁻³. This is the first report to demonstrate the ubiquitous nature of microplastic pollution in the Northeast Atlantic Ocean and to present a potential method for standardised monitoring of microplastic pollution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Recall of past use of mobile phone handsets.

    PubMed

    Parslow, R C; Hepworth, S J; McKinney, P A

    2003-01-01

    Previous studies investigating health effects of mobile phones have based their estimation of exposure on self-reported levels of phone use. This UK validation study assesses the accuracy of reported voice calls made from mobile handsets. Data collected by postal questionnaire from 93 volunteers was compared to records obtained prospectively over 6 months from four network operators. Agreement was measured for outgoing calls using the kappa statistic, log-linear modelling, Spearman correlation coefficient and graphical methods. Agreement for number of calls gained moderate classification (kappa = 0.39) with better agreement for duration (kappa = 0.50). Log-linear modelling produced similar results. The Spearman correlation coefficient was 0.48 for number of calls and 0.60 for duration. Graphical agreement methods demonstrated patterns of over-reporting call numbers (by a factor of 1.7) and duration (by a factor of 2.8). These results suggest that self-reported mobile phone use may not fully represent patterns of actual use. This has implications for calculating exposures from questionnaire data.
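The kappa statistic used above weighs observed agreement against agreement expected by chance. A minimal implementation on a square confusion matrix of paired ratings (the 2 × 2 counts below are hypothetical, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix of paired categorical
    ratings: kappa = (p_o - p_e) / (1 - p_e)."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                       # observed agreement
    p_e = (m.sum(0) * m.sum(1)).sum() / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical self-report vs. operator-record table (low/high phone use)
kappa = cohens_kappa([[30, 10],
                      [15, 38]])
```

Values around 0.4-0.6, as reported above, indicate only moderate agreement: respondents' recall departs substantially from the network records even when overall marginal totals look similar.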

  17. A Trojan Horse Approach to the Production of ¹⁸F in Novae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cognata, M. La; Pizzone, R. G.; Cherubini, S.

Crucial information on nova nucleosynthesis can potentially be inferred from γ-ray signals powered by ¹⁸F decay. Therefore, the reaction network producing and destroying this radioactive isotope has been extensively studied in recent years. Among those reactions, the ¹⁸F(p,α)¹⁵O cross-section has been measured by means of several dedicated experiments, using both direct and indirect methods. The presence of interfering resonances in the energy region of astrophysical interest has been reported by many authors, including the recent applications of the Trojan Horse Method. In this work, we evaluate what changes are introduced by the Trojan Horse data in the ¹⁸F(p,α)¹⁵O astrophysical factor recommended in a recent R-matrix analysis, accounting for existing direct and indirect measurements. The updated reaction rate is then calculated and parameterized, and the implications of the new results for nova nucleosynthesis are thoroughly discussed.

  18. New evaluated radioxenon decay data and its implications in nuclear explosion monitoring.

    PubMed

    Galan, Monica; Kalinowski, Martin; Gheddou, Abdelhakim; Yamba, Kassoum

    2018-03-07

This work presents the latest updated evaluations of the nuclear and decay data of the four radioxenon isotopes of interest for the Comprehensive Nuclear-Test-Ban Treaty (CTBT): Xe-131m, Xe-133, Xe-133m and Xe-135. This includes the most recent measured values of the half-lives, gamma emission probabilities (Pγ) and internal conversion coefficients (ICC). The evaluation procedure has been carried out within the Decay Data Evaluation Project (DDEP) framework, using the latest available versions of nuclear and atomic data evaluation software tools and compilations. The consistency of the evaluations was confirmed by the very close agreement between the total available energy calculated with the present evaluated data and the tabulated Q-value. The article also analyzes how activity ratio calculations from radioxenon monitoring facilities vary depending on the nuclear database of reference. Copyright © 2018. Published by Elsevier Ltd.
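The dependence of activity ratios on the adopted decay data can be illustrated with a simple decay correction back to sampling time. The half-life values below are rounded, illustrative stand-ins, not the evaluated data:

```python
import numpy as np

def decay_correct(a_meas, t_hours, half_life_hours):
    """Back-correct a measured activity to sampling time: A0 = A * e^(lambda * t),
    with lambda = ln(2) / T_half."""
    return a_meas * np.exp(np.log(2.0) / half_life_hours * t_hours)

# The same pair of measurements, 48 h after sampling, evaluated with two
# slightly different tabulated half-lives for the numerator isotope
num_a = decay_correct(100.0, 48.0, 125.8)  # e.g. Xe-133-like half-life (h)
num_b = decay_correct(100.0, 48.0, 126.2)  # a slightly different tabulation
den = decay_correct(50.0, 48.0, 286.3)     # e.g. Xe-131m-like half-life (h)
ratio_a, ratio_b = num_a / den, num_b / den
```

Even a small shift in a tabulated half-life propagates into the back-corrected activity ratio, which is why the choice of reference database matters for monitoring stations.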

  19. Zero point motion effect on the electronic properties of diamond, trans-polyacetylene and polyethylene

    NASA Astrophysics Data System (ADS)

    Cannuccia, E.; Marini, A.

    2012-09-01

It has recently been shown, using ab-initio methods, that bulk diamond is characterized by a large band-gap renormalization (˜0.6 eV) induced by the electron-phonon interaction. In this work we show that in polymers, compared to bulk materials, the larger amplitude of the atomic vibrations makes the real excitations of the system composed of entangled electron-phonon states. We prove that these states carry only a fraction of the electronic charge, thus leading, inevitably, to the failure of the electronic picture. The present results cast doubts on the accuracy of purely electronic calculations. They also lead to a critical revision of the state-of-the-art description of carbon-based nanostructures, opening a wealth of potential implications.

  20. Isaac Newton and the astronomical refraction.

    PubMed

    Lehn, Waldemar H

    2008-12-01

    In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.

  1. A review of cost measures for the economic impact of domestic violence.

    PubMed

    Chan, Ko Ling; Cho, Esther Yin-Nei

    2010-07-01

    Although economic analyses of domestic violence typically guide decisions concerning resource allocation, allowing policy makers to make better informed decisions on how to prioritize and allocate scarce resources, the methods adopted to calculate domestic violence costs have varied widely from study to study. In particular, only a few studies have reviewed the cost measures of the economic impact of domestic violence. This article reviews and compares these measures by covering approaches to categorizing costs, the cost components, and ways to estimate them and recommends an integrated framework that brings the various approaches together. Some issues still need to be addressed when further developing measures such as including omitted but significant measures and expanding the time horizons of others. The implications for future study of domestic violence costs are discussed.

  2. A Meta-Analysis of Single-Subject Research on Behavioral Momentum to Enhance Success in Students with Autism.

    PubMed

    Cowan, Richard J; Abel, Leah; Candel, Lindsay

    2017-05-01

We conducted a meta-analysis of single-subject research studies investigating the effectiveness of antecedent strategies grounded in behavioral momentum for improving compliance and on-task performance for students with autism. First, we assessed the research rigor of those studies meeting our inclusionary criteria. Next, in order to apply a universal metric to help determine the effectiveness of this category of antecedent strategies investigated via single-subject research methods, we calculated effect sizes via omnibus improvement rate differences (IRDs). Outcomes provide additional support for behavioral momentum, especially interventions incorporating the high-probability command sequence. Implications for research and practice are discussed, including the consideration of how single-subject research is systematically reviewed to assess the rigor of studies and assist in determining overall intervention effectiveness.
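The improvement rate difference is the improvement rate of the treatment phase minus that of the baseline phase. A simplified sketch, treating a data point as "improved" if it exceeds the baseline maximum (the published IRD instead removes the minimal set of overlapping points; the two coincide when the baseline is stable):

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD for a single-case AB comparison: the fraction of
    treatment-phase points above the baseline maximum, minus the fraction
    of baseline-phase points above that same threshold (always 0 here)."""
    cut = max(baseline)
    ir_treatment = sum(x > cut for x in treatment) / len(treatment)
    ir_baseline = sum(x > cut for x in baseline) / len(baseline)
    return ir_treatment - ir_baseline

# Hypothetical compliance counts per session: 4 baseline, 5 intervention
ird = improvement_rate_difference([2, 3, 3, 2], [3, 6, 7, 8, 9])
```

IRD ranges from -1 to 1; values near 1 indicate nearly complete non-overlap between phases, which is how an omnibus effect size can be aggregated across studies.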

  3. Regionally adaptive histogram equalization of the chest.

    PubMed

    Sherrier, R H; Johnson, G A

    1987-01-01

    Advances in the area of digital chest radiography have resulted in the acquisition of high-quality images of the human chest. With these advances, there arises a genuine need for image processing algorithms specific to the chest, in order to fully exploit this digital technology. We have implemented the well-known technique of histogram equalization, noting the problems encountered when it is adapted to chest images. These problems have been successfully solved with our regionally adaptive histogram equalization method. With this technique histograms are calculated locally and then modified according to both the mean pixel value of that region as well as certain characteristics of the cumulative distribution function. This process, which has allowed certain regions of the chest radiograph to be enhanced differentially, may also have broader implications for other image processing tasks.
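A toy version of regional histogram equalization: equalize fixed tiles independently through their local cumulative distribution. The authors' refinement, modifying each regional histogram according to the tile's mean pixel value and CDF characteristics, is not reproduced here:

```python
import numpy as np

def tile_equalize(img, tile=64):
    """Equalize each tile of an 8-bit image independently: map gray levels
    through the tile's normalized cumulative distribution function."""
    out = np.empty_like(img)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = img[i:i + tile, j:j + tile]
            hist = np.bincount(block.ravel(), minlength=256)
            cdf = hist.cumsum()
            # Look-up table: stretch the local CDF over the full 0..255 range
            lut = (cdf * 255 // cdf[-1]).astype(img.dtype)
            out[i:i + tile, j:j + tile] = lut[block]
    return out

img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # smooth gradient
eq = tile_equalize(img)
```

Purely independent tiles produce blocking artifacts at tile borders; conditioning the mapping on regional statistics, as the paper does, is one way to moderate the per-region enhancement.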

  4. The Role of Affect Spin in the Relationships between Proactive Personality, Career Indecision, and Career Maturity

    PubMed Central

    Park, In-Jo

    2015-01-01

    This study attempted to investigate the influence of proactive personality on career indecision and career maturity, and to examine the moderating effects of affect spin. The author administered proactive personality, career indecision, and career maturity scales to 70 college students. Affect spin was calculated using the day reconstruction method, wherein participants evaluated their affective experiences by using 20 affective terms at the same time each day for 21 consecutive days. Hierarchical regression analyses showed that proactive personality significantly predicted career indecision and career maturity, even after controlling for valence and activation variability, neuroticism, age, and gender. Furthermore, affect spin moderated the associations of proactive personality with career indecision and maturity. The theoretical and practical implications of the moderating effects of affect spin are discussed. PMID:26635665
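Affect spin is commonly operationalized as the circular standard deviation of the angular position of the (valence, activation) core-affect vector across measurement occasions. A sketch of that operationalization (the exact computation varies across studies, so this is an assumption, not the authors' procedure):

```python
import numpy as np

def affect_spin(valence, activation):
    """Affect spin as the circular standard deviation of the angle of the
    (valence, activation) vector over repeated measurements."""
    angles = np.arctan2(activation, valence)
    # Mean resultant length of the unit direction vectors
    r = np.hypot(np.cos(angles).mean(), np.sin(angles).mean())
    r = min(r, 1.0)  # guard against floating-point overshoot above 1
    return np.sqrt(-2.0 * np.log(r))  # circular standard deviation

# Stable affective direction -> spin near 0; shifting direction -> large spin
low = affect_spin([1.0, 2.0, 1.5], [1.0, 2.0, 1.5])
high = affect_spin([1.0, 1.0, -1.0], [1.0, -1.0, 1.0])
```

Note that only the direction of the vector matters: intensities changing along a fixed valence/activation direction yield low spin, which is what distinguishes spin from simple affect variability.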

  5. Dark matter effective field theory scattering in direct detection experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneck, K.; Cabrera, B.; Cerdeño, D. G.

    2015-05-18

We examine the consequences of the effective field theory (EFT) of dark matter-nucleon scattering for current and proposed direct detection experiments. Exclusion limits on EFT coupling constants computed using the optimum interval method are presented for SuperCDMS Soudan, CDMS II, and LUX, and the necessity of combining results from multiple experiments in order to determine dark matter parameters is discussed. Here, we demonstrate that spectral differences between the standard dark matter model and a general EFT interaction can produce a bias when calculating exclusion limits and when developing signal models for likelihood and machine learning techniques. In conclusion, we discuss the implications of the EFT for the next-generation (G2) direct detection experiments and point out regions of complementarity in the EFT parameter space.

  6. Reactivity of BrCl, Br₂, BrOCl, Br₂O, and HOBr toward dimethenamid in solutions of bromide + aqueous free chlorine.

    PubMed

    Sivey, John D; Arey, J Samuel; Tentscher, Peter R; Roberts, A Lynn

    2013-02-05

HOBr, formed via oxidation of bromide by free available chlorine (FAC), is frequently assumed to be the sole species responsible for generating brominated disinfection byproducts (DBPs). Our studies reveal that BrCl, Br₂, BrOCl, and Br₂O can also serve as brominating agents of the herbicide dimethenamid in solutions of bromide to which FAC was added. Conditions affecting bromine speciation (pH, total free bromine concentration ([HOBr]T), [Cl⁻], and [FAC]o) were systematically varied, and rates of dimethenamid bromination were measured. Reaction orders in [HOBr]T ranged from 1.09 (±0.17) to 1.67 (±0.16), reaching a maximum near the pKa of HOBr. This complex dependence on [HOBr]T implicates Br₂O as an active brominating agent. That bromination rates increased with increasing [Cl⁻], [FAC]o (at constant [HOBr]T), and excess bromide (where [Br⁻]o > [FAC]o) implicates BrCl, BrOCl, and Br₂, respectively, as brominating agents. As equilibrium constants for the formation of Br₂O and BrOCl (aq) have not been previously reported, we have calculated these values (and their gas-phase analogues) using benchmark-quality quantum chemical methods [CCSD(T) up to CCSDTQ calculations plus solvation effects]. The results allow us to compute bromine speciation and hence second-order rate constants. Intrinsic brominating reactivity increased in the order: HOBr ≪ Br₂O < BrOCl ≈ Br₂ < BrCl. Our results indicate that species other than HOBr can influence bromination rates under conditions typical of drinking water and wastewater chlorination.

  7. Prediction of core level binding energies in density functional theory: Rigorous definition of initial and final state contributions and implications on the physical meaning of Kohn-Sham energies.

    PubMed

    Pueyo Bellafont, Noèlia; Bagus, Paul S; Illas, Francesc

    2015-06-07

A systematic study of the N(1s) core level binding energies (BE's) in a broad series of molecules is presented employing Hartree-Fock (HF) and the B3LYP, PBE0, and LC-BPBE density functional theory (DFT) based methods with a near-HF basis set. The results show that all these methods give reasonably accurate BE's, with B3LYP being slightly better than HF but with both PBE0 and LC-BPBE being poorer than HF. A rigorous and general decomposition of core level binding energy values into initial and final state contributions to the BE's is proposed that can be used within either HF or DFT methods. The results show that Koopmans' theorem does not hold for the Kohn-Sham eigenvalues. Consequently, Kohn-Sham orbital energies of core orbitals do not provide estimates of the initial state contribution to core level BE's; hence, they cannot be used to decompose initial and final state contributions to BE's. However, when the initial state contribution to DFT BE's is properly defined, the decompositions of initial and final state contributions given by DFT, with several different functionals, are very similar to those obtained with HF. Furthermore, it is shown that the differences of Kohn-Sham orbital energies taken with respect to a common reference do follow the trend of the properly calculated initial state contributions. These conclusions are especially important for condensed phase systems, where our results validate the use of band structure calculations to determine initial state contributions to BE shifts.

  8. Sea level rise from the Greenland Ice Sheet during the Eemian interglacial: Review of previous work with focus on the surface mass balance

    NASA Astrophysics Data System (ADS)

    Plach, Andreas; Hestnes Nisancioglu, Kerim

    2016-04-01

The contribution from the Greenland Ice Sheet (GIS) to the global sea level rise during the Eemian interglacial (about 125,000 years ago) has been the focus of many studies. A main reason for the interest in this period is the considerably warmer climate during the Eemian, which is often seen as an analogue of possible future climate conditions. Simulated sea level rise during the Eemian can therefore be used to better understand possible future sea level rise. The most recent assessment report of the Intergovernmental Panel on Climate Change (IPCC AR5) gives an overview of several studies and discusses the possible implications for future sea level rise. The report also reveals large differences between these studies in terms of simulated GIS extent and corresponding sea level rise. The present study gives a more exhaustive review of previous work discussing sea level rise from the GIS during the Eemian interglacial. The smallest extents of the GIS simulated by various authors are shown and summarized, with particular focus on the methods used to calculate the surface mass balance. A hypothesis of the present work is that the varying results of the previous studies can largely be explained by the different methods used to calculate the surface mass balance. In addition, as a first step for future work, the surface mass balance of the GIS for a proxy-data derived forcing ("index method") and a direct forcing with a General Circulation Model (GCM) are shown and discussed.

  9. Supertrees Based on the Subtree Prune-and-Regraft Distance

    PubMed Central

    Whidden, Christopher; Zeh, Norbert; Beiko, Robert G.

    2014-01-01

Supertree methods reconcile a set of phylogenetic trees into a single structure that is often interpreted as a branching history of species. A key challenge is combining conflicting evolutionary histories that are due to artifacts of phylogenetic reconstruction and phenomena such as lateral gene transfer (LGT). Many supertree approaches use optimality criteria that do not reflect underlying processes, have known biases, and may be unduly influenced by LGT. We present the first method to construct supertrees by using the subtree prune-and-regraft (SPR) distance as an optimality criterion. Although calculating the rooted SPR distance between a pair of trees is NP-hard, our new maximum agreement forest-based methods can reconcile trees with hundreds of taxa and > 50 transfers in fractions of a second, which enables repeated calculations during the course of an iterative search. Our approach can accommodate trees in which uncertain relationships have been collapsed to multifurcating nodes. Using a series of benchmark datasets simulated under plausible rates of LGT, we show that SPR supertrees are more similar to correct species histories than supertrees based on parsimony or Robinson–Foulds distance criteria. We successfully constructed an SPR supertree from a phylogenomic dataset of 40,631 gene trees that covered 244 genomes representing several major bacterial phyla. Our SPR-based approach also allowed direct inference of highways of gene transfer between bacterial classes and genera. A small number of these highways connect genera in different phyla and can highlight specific genes implicated in long-distance LGT. [Lateral gene transfer; matrix representation with parsimony; phylogenomics; prokaryotic phylogeny; Robinson–Foulds; subtree prune-and-regraft; supertrees.] PMID:24695589

  10. Electromigration and the structure of metallic nanocontacts

    NASA Astrophysics Data System (ADS)

    Hoffmann-Vogel, R.

    2017-09-01

This article reviews efforts to structurally characterize metallic nanocontacts. While the electronic characterization of such junctions is relatively straightforward, it is usually technically challenging to study the nanocontact's structure at small length scales. However, the structure is the basis for understanding the electronic properties of the nanocontact; for example, it is necessary to explain the electronic properties by calculations based on structural models. Besides using a gate electrode, controlling the structure is an important way of understanding how the electronic transport properties can be influenced. A key to making structural information directly accessible is to choose a fabrication method that is adapted to the structural characterization method. Special emphasis is given to transmission electron microscopy fabrication and to thermally assisted electromigration methods, due to their potential for obtaining information on both electrodes of the forming nanocontact. Controlled electromigration aims at studying the contact at a constant contact temperature during electromigration, in contrast to earlier studies performed at a constant temperature of the environment. We review efforts to calculate electromigration forces. We describe how hot spots are formed during electromigration. We summarize implications for the structure obtained from studies of the ballistic transport regime, tunneling, and Coulomb blockade. We review the structure of nanocontacts known from direct structural characterization. Single-crystalline wires allow suppressing grain-boundary electromigration. In thin films, the substrate plays an important role in influencing the defect and temperature distribution. Hot-spot formation and recrystallization are observed. We add information on the local temperature and current density and on alloys important for microelectronic interconnects.

  11. SOM quality and phosphorus fractionation to evaluate degradation organic matter: implications for the restoration of soils after fire

    NASA Astrophysics Data System (ADS)

    Merino, Agustin; Fonturbel, Maria T.; Omil, Beatriz; Chávez-Vergara, Bruno; Fernandez, Cristina; Garcia-Oliva, Felipe; Vega, Jose A.

    2016-04-01

The design of emergency treatments for the rehabilitation of fire-affected soils requires a quick diagnosis to assess the degree of degradation. Because of its implication in erosion and subsequent soil evolution, the quality of soil organic matter (OM) plays a particularly important role. This paper presents a methodology that combines the visual recognition of the severity of soil burning with the use of simple analytical techniques to assess the degree of degradation of OM. The content and quality of the OM were evaluated in litter and mineral soils using differential scanning calorimetry-thermogravimetry (DSC-TG), and the results were contrasted with 13C CP-MAS NMR. Two methodologies were tested for the thermal analysis: (a) direct calculation of the areas related to three degrees of thermal stability: Q1 (200-375 °C, labile OM), Q2 (375-475 °C, recalcitrant OM), and Q3 (475-550 °C); and (b) deconvolution of the DSC curves, with the area of each peak expressed as a fraction of the total DSC curve area. Additionally, a P fractionation was performed following the Hedley sequential extraction method. The severity levels visually showed different degrees of SOM degradation. Although the fire caused important SOM losses at moderate severities, changes in the quality of OM only occurred at higher severities. In addition, the labile organic P fraction decreased and the occluded inorganic P fraction increased in the high-severity soils. These changes affect OM processes such as hydrophobicity and erosion, which are largely responsible for post-fire soil degradation.
The strong correlations between the thermal parameters and the NMR regions and derived measurements such as hydrophobicity and aromaticity show the usefulness of this technique as a rapid diagnosis of soil degradation. The marked loss of polysaccharides and the transition to highly thermally resistant compounds, visible in the deconvoluted thermograms, would explain the changes in microbial activity and soil nutrient availability (basal respiration, microbial biomass, qCO2, and enzymatic activity). They would also have implications for the hydrophobicity and stability of soil aggregates, leading to the extreme erosion rates usually found in soils affected by higher severities.

  12. Rapid Convergence of Energy and Free Energy Profiles with Quantum Mechanical Size in Quantum Mechanical-Molecular Mechanical Simulations of Proton Transfer in DNA.

    PubMed

    Das, Susanta; Nam, Kwangho; Major, Dan Thomas

    2018-03-13

    In recent years, a number of quantum mechanical-molecular mechanical (QM/MM) enzyme studies have investigated the dependence of reaction energetics on the size of the QM region using energy and free energy calculations. In this study, we revisit the question of QM region size dependence in QM/MM simulations within the context of energy and free energy calculations using a proton transfer in a DNA base pair as a test case. In the simulations, the QM region was treated with a dispersion-corrected AM1/d-PhoT Hamiltonian, which was developed to accurately describe phosphoryl and proton transfer reactions, in conjunction with an electrostatic embedding scheme using the particle-mesh Ewald summation method. With this rigorous QM/MM potential, we performed rather extensive QM/MM sampling, and found that the free energy reaction profiles converge rapidly with respect to the QM region size within ca. ±1 kcal/mol. This finding suggests that the strategy of QM/MM simulations with reasonably sized and selected QM regions, which has been employed for over four decades, is a valid approach for modeling complex biomolecular systems. We point to possible causes for the sensitivity of the energy and free energy calculations to the size of the QM region, and potential implications.

  13. A Reinvestigation of the Dimer of para-Benzoquinone with Pyrimidine with MP2, CCSD(T) and DFT using Functionals including those Designed to Describe Dispersion

    PubMed Central

    Marianski, Mateusz; Oliva, Antoni

    2012-01-01

    We reevaluate the interaction of pyridine and p-benzoquinone using functionals designed to treat dispersion. We compare the relative energies of four different structures: stacked, T-shaped (identified for the first time) and two planar H-bonded geometries using these functionals (B97-D, ωB97x-D, M05, M05-2X, M06, M06L, M06-2X), other functionals (PBE1PBE, B3LYP, X3LYP), MP2 and CCSD(T) using basis sets as large as cc-pVTZ. The functionals designed to treat dispersion behave erratically as the predictions of the most stable structure vary considerably. MP2 predicts the experimentally observed structure (H-bonded) to be the least stable, while single point CCSD(T) at the MP2 optimized geometry correctly predicts the observed structure to be most stable. We have confirmed the assignment of the experimental structure using new calculations of the vibrational frequency shifts previously used to identify the structure. The MP2/cc-pVTZ vibrational calculations are in excellent agreement with the observations. All methods used to calculate the energies provide vibrational shifts that agree with the observed structure even though most do not predict this structure to be most stable. The implications for evaluating possible π-stacking in biologically important systems are discussed. PMID:22765283

  14. A reinvestigation of the dimer of para-benzoquinone and pyrimidine with MP2, CCSD(T), and DFT using functionals including those designed to describe dispersion.

    PubMed

    Marianski, Mateusz; Oliva, Antoni; Dannenberg, J J

    2012-08-02

    We reevaluate the interaction of pyridine and p-benzoquinone using functionals designed to treat dispersion. We compare the relative energies of four different structures: stacked, T-shaped (identified for the first time), and two planar H-bonded geometries using these functionals (B97-D, ωB97x-D, M05, M05-2X, M06, M06L, and M06-2X), other functionals (PBE1PBE, B3LYP, X3LYP), MP2, and CCSD(T) using basis sets as large as cc-pVTZ. The functionals designed to treat dispersion behave erratically as the predictions of the most stable structure vary considerably. MP2 predicts the experimentally observed structure (H-bonded) to be the least stable, while single-point CCSD(T) at the MP2 optimized geometry correctly predicts the observed structure to be the most stable. We have confirmed the assignment of the experimental structure using new calculations of the vibrational frequency shifts previously used to identify the structure. The MP2/cc-pVTZ vibrational calculations are in excellent agreement with the observations. All methods used to calculate the energies provide vibrational shifts that agree with the observed structure even though most do not predict this structure to be most stable. The implications for evaluating possible π-stacking in biologically important systems are discussed.

  15. BMI calculation in older people: The effect of using direct and surrogate measures of height in a community-based setting.

    PubMed

    Butler, Rose; McClinchy, Jane; Morreale-Parker, Claudia; Marsh, Wendy; Rennie, Kirsten L

    2017-12-01

There is currently no consensus on which measure of height should be used in older people's body mass index (BMI) calculation. Most estimates of nutritional status include a measurement of body weight and height, which should be reliable and accurate; however, at present several different methods are used interchangeably. BMI, a key marker in malnutrition assessment, does not reflect age-related changes in height or changes in body composition such as loss of muscle mass or presence of oedema. The aim of this pilot study was to assess how the use of direct and surrogate measures of height impacts on BMI calculation in people aged ≥75 years. A cross-sectional study of 64 free-living older people (75-96 years) quantified height by two direct measurements, current height (H_C) and self-report (H_R), and surrogate equations using knee height (H_K) and ulna length (H_U). BMI calculated from current height measurement (BMI_C) was compared with BMI calculated using self-reported height (BMI_R) and height estimated from surrogate equations for knee height (BMI_K) and ulna length (BMI_U). The median difference of BMI_C - BMI_R was 2.31 kg/m². BMI_K gave the closest correlation to BMI_C. The percentage of study participants identified at increased risk of under-nutrition (BMI < 20 kg/m²) varied depending on which measure of height was used to calculate BMI: 5% (BMI_C), 7.8% (BMI_K), 12.5% (BMI_U), and 14% (BMI_R), respectively. The results of this pilot study in a relatively healthy sample of older people suggest that interchangeable use of current and reported height in people ≥75 years can introduce substantial systematic error.
This discrepancy could impact nutritional assessment of older people in poor health and lead to misclassification during nutritional screening if other visual and clinical clues are not taken into account. This could result in long-term clinical and cost implications if individuals who need nutrition support are not correctly identified. A consensus is required on which method should be used to quantify height in older people to improve the accuracy of nutritional assessment and clinical care. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
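The sensitivity described in this record is easy to see numerically. The sketch below compares BMI from a directly measured height with BMI from a knee-height surrogate, using Chumlea-style prediction equations; the coefficients are commonly cited values for white adults and, like the patient data, are illustrative assumptions that should be checked against the original sources rather than taken as the study's exact equations.

```python
# Sketch: BMI from direct vs. surrogate height. The knee-height equations
# below are Chumlea-style coefficients (assumed, commonly cited values);
# the patient data are hypothetical.

def height_from_knee(knee_cm: float, age: int, sex: str) -> float:
    """Estimate standing height (cm) from knee height (cm) and age."""
    if sex == "female":
        return 84.88 - 0.24 * age + 1.83 * knee_cm
    return 64.19 - 0.04 * age + 2.02 * knee_cm  # male

def bmi(weight_kg: float, height_cm: float) -> float:
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

# Hypothetical 80-year-old woman: measured height 152 cm, knee height 50 cm.
bmi_c = bmi(60.0, 152.0)                                  # direct measurement
bmi_k = bmi(60.0, height_from_knee(50.0, 80, "female"))   # surrogate estimate
print(f"BMI_C = {bmi_c:.1f} kg/m2, BMI_K = {bmi_k:.1f} kg/m2")
```

In this made-up case the two heights shift BMI by roughly 1.7 kg/m², enough to move a borderline patient across a screening threshold, which is the kind of misclassification risk the abstract raises.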

  16. Calculation of evapotranspiration: Recursive and explicit methods

    USDA-ARS?s Scientific Manuscript database

    Crop yield is proportional to crop evapotranspiration (ETc) and it is important to calculate ETc correctly. Methods to calculate ETc have combined empirical and theoretical approaches. The combination method was used to calculate potential ETp. It is a combination method because it combined the ener...

  17. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    PubMed Central

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol−1 per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix−sheet = 0.5 kcal⋅mol−1), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044
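The conformation-count ratio quoted in the abstract maps onto an entropy via the Boltzmann relation TΔS = RT ln(Ω_U/Ω_N). The sketch below just does that arithmetic for the factor of 1.4 reported for buried side chains; it is a back-of-envelope illustration, not the authors' ensemble calculation.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def t_delta_s(omega_ratio: float, temp_k: float = 300.0) -> float:
    """T*dS = R*T*ln(Omega_U / Omega_N), in kcal/mol."""
    return R * temp_k * math.log(omega_ratio)

# Factor of 1.4 fewer conformations per buried rotamer (from the abstract):
print(f"T*dS per buried rotamer = {t_delta_s(1.4):.2f} kcal/mol")
```

The result is about 0.2 kcal/mol per rotamer, small compared with the total 1.4 kcal/mol per residue, consistent with the abstract's statement that only ~20% of the entropy loss comes from side chains.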

  18. An improved stereologic method for three-dimensional estimation of particle size distribution from observations in two dimensions and its application.

    PubMed

    Xu, Yi-Hua; Pitot, Henry C

    2003-09-01

Single enzyme-altered hepatocytes, altered hepatic foci (AHF), and nodular lesions have been implicated, respectively, in the processes of initiation, promotion, and progression in rodent hepatocarcinogenesis. Qualitative and quantitative analyses of such lesions have been utilized both to identify and to determine the potency of initiating, promoting, and progressor agents in rodent liver. Of a number of possible parameters determined in the study of such lesions, estimation of the number of foci or nodules in the liver is very important. The method of Saltykov has been used for estimating the number of AHF in rat liver. However, in practice, the Saltykov calculation has at least two weak points: (a) the size class range is limited to 12, which in many instances is too narrow to cover the range of AHF data obtained; and (b) under some conditions, the Saltykov equation generates negative values in several size classes, an obvious impossibility in the real world. In order to overcome these limitations in the Saltykov calculations, a study of the particle size distribution in a wide-range, polydispersed sphere system was performed. A stereologic method, termed the 25F Association method, was developed from this study. This method offers 25 association factors that are derived from the frequency of different-sized transections obtained from transecting a spherical particle, thus expanding the size class range to be analyzed up to 25, which is sufficiently wide to encompass all rat AHF found in most cases. This method exhibits greater flexibility, which allows adjustments to be made within the calculation process when NA(k,k), the net number of transections from same-size spheres, is found to be a negative value. The reliability of the 25F Association method was tested thoroughly by computer simulation in both monodispersed and polydispersed sphere systems.
The test results were compared with the original Saltykov method. We found that the 25F Association method yielded a better estimate of the total number of spheres in the three-dimensional tissue sample as well as the detailed size distribution information. Although the 25F Association method was derived from the study of a polydispersed sphere system, it can be used for continuous size distribution sphere systems. Application of this method to the estimation of parameters of preneoplastic foci in rodent liver is presented as an example of its utility. An application software program, 3D_estimation.exe, which uses the 25F Association method to estimate the number of AHF in rodent liver, has been developed and is now available at the website of this laboratory.

  19. Theoretical study of the A′ 5Σg+ and C″ 5Πu states of N2 - Implications for the N2 afterglow

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.

    1988-01-01

    Theoretical spectroscopic constants are reported for the A′ 5Σg+ and C″ 5Πu states of N2, based on CASSCF/MRCI calculations using large ANO Gaussian basis sets. The calculated A′ 5Σg+ potential differs qualitatively from previous calculations in that the inner well is significantly deeper (De = 3450 cm⁻¹). This deeper well provides considerable support for the suggestion of Berkowitz et al. (1956) that A′ 5Σg+ is the primary precursor state involved in the yellow Lewis-Rayleigh afterglow of N2.

  20. Measuring the Population Burden of Injuries—Implications for Global and National Estimates: A Multi-centre Prospective UK Longitudinal Study

    PubMed Central

    Lyons, Ronan A.; Kendrick, Denise; Towner, Elizabeth M.; Christie, Nicola; Macey, Steven; Coupland, Carol; Gabbe, Belinda J.

    2011-01-01

    Background Current methods of measuring the population burden of injuries rely on many assumptions and limited data available to the global burden of diseases (GBD) studies. The aim of this study was to compare the population burden of injuries using different approaches from the UK Burden of Injury (UKBOI) and GBD studies. Methods and Findings The UKBOI was a prospective cohort of 1,517 injured individuals that collected patient-reported outcomes. Extrapolated outcome data were combined with multiple sources of morbidity and mortality data to derive population metrics of the burden of injury in the UK. Participants were injured patients recruited from hospitals in four UK cities and towns: Swansea, Nottingham, Bristol, and Guildford, between September 2005 and April 2007. Patient-reported changes in quality of life using the EQ-5D at baseline, 1, 4, and 12 months after injury provided disability weights used to calculate the years lived with disability (YLDs) component of disability adjusted life years (DALYs). DALYs were calculated for the UK and extrapolated to global estimates using both UKBOI and GBD disability weights. Estimated numbers (and rates per 100,000) for UK population extrapolations were 750,999 (1,240) for hospital admissions, 7,982,947 (13,339) for emergency department (ED) attendances, and 22,185 (36.8) for injury-related deaths in 2005. Nonadmitted ED-treated injuries accounted for 67% of YLDs. Estimates for UK DALYs amounted to 1,771,486 (82% due to YLDs), compared with 669,822 (52% due to YLDs) using the GBD approach. Extrapolating patient-derived disability weights to GBD estimates would increase injury-related DALYs 2.6-fold. Conclusions The use of disability weights derived from patient experiences combined with additional morbidity data on ED-treated patients and inpatients suggests that the absolute burden of injury is higher than previously estimated. 
These findings have substantial implications for improving measurement of the national and global burden of injury. PMID:22162954
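The burden metric in this record combines mortality and morbidity as DALY = YLL + YLD, where YLL multiplies deaths by remaining life expectancy and YLD multiplies incident cases by a disability weight and duration. The sketch below shows that arithmetic only; the deaths and admissions counts come from the abstract, while the life expectancy, disability weight, and duration are hypothetical placeholders (and discounting/age-weighting used in some GBD revisions is omitted).

```python
# Sketch of the DALY arithmetic: DALY = YLL + YLD.
# Counts (deaths, admissions) are from the abstract; the remaining
# parameters are hypothetical illustrative values.

def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of life lost to premature mortality."""
    return deaths * life_expectancy_at_death

def yld(incident_cases: float, disability_weight: float,
        duration_years: float) -> float:
    """Years lived with disability."""
    return incident_cases * disability_weight * duration_years

dalys = (yll(deaths=22_185, life_expectancy_at_death=30.0)
         + yld(incident_cases=750_999, disability_weight=0.2,
               duration_years=5.0))
print(f"DALYs = {dalys:,.0f}")
```

Because the YLD term scales directly with the disability weight, replacing GBD weights with the larger patient-derived weights reported here inflates total DALYs, which is exactly the 2.6-fold effect the study describes.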

  1. Self-force calculations with matched expansions and quasinormal mode sums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casals, Marc; Dolan, Sam; Ottewill, Adrian C.

    2009-06-15

Accurate modeling of gravitational wave emission by extreme-mass-ratio inspirals is essential for their detection by the LISA mission. A leading perturbative approach involves the calculation of the self-force acting upon the smaller orbital body. In this work, we present the first application of the Poisson-Wiseman-Anderson method of 'matched expansions' to compute the self-force acting on a point particle moving in a curved spacetime. The method employs two expansions for the Green function, which are, respectively, valid in the 'quasilocal' and 'distant past' regimes, and which may be matched together within the normal neighborhood. We perform our calculation in a static region of the spherically symmetric Nariai spacetime (dS2 × S2), in which scalar-field perturbations are governed by a radial equation with a Pöschl-Teller potential (frequently used as an approximation to the Schwarzschild radial potential) whose solutions are known in closed form. The key new ingredients in our study are (i) very high order quasilocal expansions and (ii) expansion of the distant past Green function in quasinormal modes. In combination, these tools enable a detailed study of the properties of the scalar-field Green function. We demonstrate that the Green function is singular whenever x and x′ are connected by a null geodesic, and apply asymptotic methods to determine the structure of the Green function near the null wave front. We show that the singular part of the Green function undergoes a transition each time the null wave front passes through a caustic point, following a repeating fourfold sequence δ(σ), 1/(πσ), -δ(σ), -1/(πσ), etc., where σ is Synge's world function. The matched-expansion method provides insight into the nonlocal properties of the self-force. We show that the self-force generated by the segment of the worldline lying outside the normal neighborhood is not negligible.
We apply the matched-expansion method to compute the scalar self-force acting on a static particle in the Nariai spacetime, and validate it against an alternative method, obtaining agreement to six decimal places. We conclude with a discussion of the implications for wave propagation and self-force calculations. On black hole spacetimes, any expansion of the Green function in quasinormal modes must be augmented by a branch-cut integral. Nevertheless, we expect the Green function in Schwarzschild spacetime to inherit certain key features, such as a fourfold singular structure manifesting itself through the asymptotic behavior of quasinormal modes. In this way, the Nariai spacetime provides a fertile testing ground for developing insight into the nonlocal part of the self-force on black hole spacetimes.

  2. Seasonal and spatial variability of the OM/OC mass ratios and high regional correlation between oxalic acid and zinc in Chinese urban organic aerosols

    NASA Astrophysics Data System (ADS)

    Xing, L.; Fu, T.-M.; Cao, J. J.; Lee, S. C.; Wang, G. H.; Ho, K. F.; Cheng, M.-C.; You, C.-F.; Wang, T. J.

    2013-04-01

    We calculated the organic matter to organic carbon mass ratios (OM/OC mass ratios) in PM2.5 collected from 14 Chinese cities during summer and winter of 2003 and analyzed the causes for their seasonal and spatial variability. The OM/OC mass ratios were calculated two ways. Using a mass balance method, the calculated OM/OC mass ratios averaged 1.92 ± 0.39 year-round, with no significant seasonal or spatial variation. The second calculation was based on chemical species analyses of the organic compounds extracted from the PM2.5 samples using dichloromethane/methanol and water. The calculated OM/OC mass ratio in summer was relatively high (1.75 ± 0.13) and spatially-invariant due to vigorous photochemistry and secondary organic aerosol (OA) production throughout the country. The calculated OM/OC mass ratio in winter (1.59 ± 0.18) was significantly lower than that in summer, with lower values in northern cities (1.51 ± 0.07) than in southern cities (1.65 ± 0.15). This likely reflects the wider usage of coal for heating purposes in northern China in winter, in contrast to the larger contributions from biofuel and biomass burning in southern China in winter. On average, organic matter constituted 36% and 34% of Chinese urban PM2.5 mass in summer and winter, respectively. We report, for the first time, a high regional correlation between Zn and oxalic acid in Chinese urban aerosols in summer. This is consistent with the formation of stable Zn oxalate complex in the aerosol phase previously proposed by Furukawa and Takahashi (2011). We found that many other dicarboxylic acids were also highly correlated with Zn in the summer Chinese urban aerosol samples, suggesting that they may also form stable organic complexes with Zn. Such formation may have profound implications for the atmospheric abundance and hygroscopic properties of aerosol dicarboxylic acids.
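The "mass balance" OM/OC estimate mentioned in this record can be sketched as inferring organic matter by difference: total PM2.5 mass minus the measured non-organic components, divided by measured OC. The species list and all concentrations below are hypothetical, and the paper's exact mass-balance terms may differ; the point is only the shape of the calculation.

```python
# Sketch of a mass-balance OM/OC estimate (assumed form: OM inferred as
# PM2.5 minus measured non-organic components). All concentrations in
# ug/m3 are hypothetical.

pm25 = 120.0   # total PM2.5 mass
oc = 22.0      # measured organic carbon

non_organic = {
    "sulfate": 18.0, "nitrate": 10.0, "ammonium": 8.0,
    "elemental_carbon": 6.0, "crustal_material": 30.0,
    "trace_elements": 5.7,
}

om = pm25 - sum(non_organic.values())   # organic matter by difference
om_oc = om / oc
print(f"OM = {om:.1f} ug/m3, OM/OC = {om_oc:.2f}")
```

A by-difference estimate like this absorbs all measurement errors of the other species into OM, which is one reason a mass-balance OM/OC ratio can differ from one built up from speciated organic analyses, as the abstract's two methods do.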

  3. Seasonal and spatial variability of the organic matter-to-organic carbon mass ratios in Chinese urban organic aerosols and a first report of high correlations between aerosol oxalic acid and zinc

    NASA Astrophysics Data System (ADS)

    Xing, L.; Fu, T.-M.; Cao, J. J.; Lee, S. C.; Wang, G. H.; Ho, K. F.; Cheng, M.-C.; You, C.-F.; Wang, T. J.

    2013-01-01

We calculated the organic matter to organic carbon mass ratios (OM/OC mass ratios) in PM2.5 collected from 14 Chinese cities during summer and winter of 2003 and analyzed the causes for their seasonal and spatial variability. The OM/OC mass ratios were calculated two ways. Using a mass balance method, the calculated OM/OC mass ratios averaged 1.92 ± 0.39 year-round, with no significant seasonal or spatial variation. The second calculation was based on chemical species analyses of the organic compounds extracted from the PM2.5 samples using dichloromethane/methanol and water. The calculated OM/OC mass ratio in summer was relatively high (1.75 ± 0.13) and spatially invariant, due to vigorous photochemistry and secondary OA production throughout the country. The calculated OM/OC mass ratio in winter (1.59 ± 0.18) was significantly lower than that in summer, with lower values in northern cities (1.51 ± 0.07) than in southern cities (1.65 ± 0.15). This likely reflects the wider usage of coal for heating purposes in northern China in winter, in contrast to the larger contributions from biofuel and biomass burning in southern China in winter. On average, organic matter constituted 36% and 34% of Chinese urban PM2.5 mass in summer and winter, respectively. We report, for the first time, high correlations between Zn and oxalic acid in Chinese urban aerosols in summer. This is consistent with the formation of a stable Zn oxalate complex in the aerosol phase previously proposed by Furukawa and Takahashi (2011). We found that many other dicarboxylic acids were also highly correlated with Zn in the summer Chinese urban aerosol samples, suggesting that they may also form stable organic complexes with Zn. Such formation may have profound implications for the atmospheric abundance and hygroscopic properties of aerosol dicarboxylic acids.

  4. Technology Changes Intelligence: Societal Implications and Soaring IQs.

    ERIC Educational Resources Information Center

    Sternberg, Robert J.

    1997-01-01

    Discusses the effects technology may have on human intelligence. Topics include the use of computational devices, including calculators, in schools; the changes word processing has brought about in writing; the use of television; and the effects of weapons on children. (LRW)

  5. The excitation of normal modes by a curved line source

    NASA Astrophysics Data System (ADS)

    Mochizuki, E.

    1987-12-01

    The polynomial moments, up to total degree two, of the stress glut are calculated for a curved line source. The significance of the moments, whose total degree is one, is emphasized and the implication for inversion is discussed.

  6. Evaluating Evidence for Conceptually Related Constructs Using Bivariate Correlations

    ERIC Educational Resources Information Center

    Swank, Jacqueline M.; Mullen, Patrick R.

    2017-01-01

    The article serves as a guide for researchers in developing evidence of validity using bivariate correlations, specifically construct validity. The authors outline the steps for calculating and interpreting bivariate correlations. Additionally, they provide an illustrative example and discuss the implications.
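The statistic underlying this guide is the ordinary Pearson product-moment correlation. A minimal sketch, with hypothetical scores on two conceptually related scales:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical scores on two instruments measuring related constructs:
scale_a = [12, 15, 11, 18, 20, 14]
scale_b = [30, 34, 28, 40, 43, 33]
print(f"r = {pearson_r(scale_a, scale_b):.2f}")
```

A large positive r between instruments intended to measure related constructs is the kind of convergent validity evidence the article describes; interpretation still requires checking sample size, linearity, and outliers.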

  7. Queuing Theory and Reference Transactions.

    ERIC Educational Resources Information Center

    Terbille, Charles

    1995-01-01

    Examines the implications of applying the queuing theory to three different reference situations: (1) random patron arrivals; (2) random durations of transactions; and (3) use of two librarians. Tables and figures represent results from spreadsheet calculations of queues for each reference situation. (JMV)
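The spreadsheet scenarios in this record (random arrivals, random transaction durations, one vs. two librarians) correspond to the standard M/M/1 and M/M/2 queueing models. A sketch of the steady-state mean-wait formula (Erlang C), with hypothetical arrival and service rates:

```python
import math

def mean_wait(lam: float, mu: float, c: int) -> float:
    """Mean wait before service in an M/M/c queue (Erlang C).

    lam: arrival rate, mu: service rate per server, c: servers.
    Requires lam < c * mu for a stable queue.
    """
    a = lam / mu                      # offered load
    rho = a / c                       # per-server utilization
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)
    return lq / lam                   # Little's law: Wq = Lq / lam

# Hypothetical reference desk: 2 patrons/hour, each served in ~20 min.
print(f"Wq with 1 librarian:  {mean_wait(2.0, 3.0, 1):.3f} h")
print(f"Wq with 2 librarians: {mean_wait(2.0, 3.0, 2):.3f} h")
```

With these assumed rates the second librarian cuts the expected wait from 40 minutes to about 2.5 minutes, illustrating the strongly nonlinear payoff of added servers that such spreadsheet queue calculations reveal.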

  8. Electromagnetic effects on the light hadron spectrum

    DOE PAGES

    Basak, S.; Bazavov, A.; Bernard, C.; ...

    2015-09-28

    Calculations studying electromagnetic effects on light mesons are reported. The calculations use fully dynamical QCD, but only quenched photons, which suffices to NLO in χPT; that is, the sea quarks are electrically neutral, while the valence quarks carry charge. The non-compact formalism is used for photons. New results are obtained with lattice spacing as small as 0.045 fm and a large range of volumes. The success of chiral perturbation theory in describing these results and the implications for light quark masses are considered.

  9. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), was proposed to solve problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships of relays, potentially leading to larger iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) was presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT was derived and the dependency relationships in multi-loop networks were then modeled. The model was translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for multi-loop networks was finally computed by tools. A five-node multi-loop network was used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were then calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.

  10. Propellant Mass Fraction Calculation Methodology for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
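The competing calculation conventions described above can be sketched in a few lines; the masses are hypothetical, and a given entity's actual accounting rules (residuals, engine mass exclusions) may differ:

```python
def pmf_basic(m_propellant, m_inert):
    """Fundamental definition: loaded propellant over total loaded mass."""
    return m_propellant / (m_propellant + m_inert)

def pmf_usable(m_propellant, m_inert, m_residuals):
    """Variant counting only usable propellant; residuals and other
    unusable propellant remain on board as effectively inert mass."""
    usable = m_propellant - m_residuals
    return usable / (m_propellant + m_inert)

# Hypothetical stage: 100 t loaded propellant, 10 t inert, 1 t residuals.
print(round(pmf_basic(100.0, 10.0), 4))        # 0.9091
print(round(pmf_usable(100.0, 10.0, 1.0), 4))  # 0.9
```

The two conventions differ by about a percent even in this toy case, which is why cross-vehicle comparisons must state which definition is in use.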

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

    Two approaches for calculating soil-structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for these calculations.

  12. A novel statistical approach shows evidence for multi-system physiological dysregulation during aging.

    PubMed

    Cohen, Alan A; Milot, Emmanuel; Yong, Jian; Seplaki, Christopher L; Fülöp, Tamàs; Bandeen-Roche, Karen; Fried, Linda P

    2013-03-01

    Previous studies have identified many biomarkers that are associated with aging and related outcomes, but the relevance of these markers for underlying processes and their relationship to hypothesized systemic dysregulation is not clear. We address this gap by presenting a novel method for measuring dysregulation via the joint distribution of multiple biomarkers and assessing associations of dysregulation with age and mortality. Using longitudinal data from the Women's Health and Aging Study, we selected a 14-marker subset from 63 blood measures: those that diverged from the baseline population mean with age. For the 14 markers and all combinatorial sub-subsets we calculated a multivariate distance called the Mahalanobis distance (MHBD) for all observations, indicating how "strange" each individual's biomarker profile was relative to the baseline population mean. In most models, MHBD correlated positively with age, MHBD increased within individuals over time, and higher MHBD predicted higher risk of subsequent mortality. Predictive power increased as more variables were incorporated into the calculation of MHBD. Biomarkers from multiple systems were implicated. These results support hypotheses of simultaneous dysregulation in multiple systems and confirm the need for longitudinal, multivariate approaches to understanding biomarkers in aging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
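A minimal sketch of the Mahalanobis-distance (MHBD) calculation described above, using synthetic correlated biomarker data in place of the Women's Health and Aging Study measures (all names and values are illustrative):

```python
import numpy as np

def mahalanobis_distance(x, baseline):
    """Distance of one biomarker profile from a baseline population.

    x        : 1-D array of biomarker values for one observation
    baseline : 2-D array (observations x biomarkers) defining the
               reference mean and covariance
    """
    mu = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Toy data: 3 correlated biomarkers, 200 baseline subjects.
rng = np.random.default_rng(0)
baseline = rng.multivariate_normal([0.0, 0.0, 0.0],
                                   [[1.0, 0.3, 0.1],
                                    [0.3, 1.0, 0.2],
                                    [0.1, 0.2, 1.0]], size=200)
typical = np.array([0.1, 0.0, -0.1])   # close to the population mean
strange = np.array([2.5, -2.0, 3.0])   # "dysregulated" profile
print(mahalanobis_distance(typical, baseline) <
      mahalanobis_distance(strange, baseline))
```

Because the covariance is estimated from the baseline sample, profiles that deviate along correlated directions are penalized appropriately, which is what makes the multivariate distance more informative than single-marker z-scores.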

  13. Electron capture in collisions of S4+ with helium

    NASA Astrophysics Data System (ADS)

    Wang, J. G.; Turner, A. R.; Cooper, D. L.; Schultz, D. R.; Rakovic, M. J.; Fritsch, W.; Stancil, P. C.; Zygelman, B.

    2002-07-01

    Charge transfer due to collisions of ground-state S4+(3s2 1S) ions with helium is investigated for energies between 0.1 meV u-1 and 10 MeV u-1. Total and state-selective single electron capture (SEC) cross sections and rate coefficients are obtained utilizing the quantum mechanical molecular-orbital close-coupling (MOCC), atomic-orbital close-coupling (AOCC), classical trajectory Monte Carlo (CTMC) and continuum distorted wave methods. The MOCC calculations utilize ab initio adiabatic potentials and nonadiabatic radial coupling matrix elements obtained with the spin-coupled valence-bond approach. Previous data are limited to a calculation of the total SEC rate coefficient using the Landau-Zener model that is, in comparison to the results presented here, three orders of magnitude smaller. The MOCC SEC cross sections at low energy reveal a multichannel interference effect. True double capture is also investigated with the AOCC and CTMC approaches while autoionizing double capture and transfer ionization (TI) is explored with CTMC. SEC is found to be the dominant process except for E>200 keV u-1 when TI becomes the primary capture channel. Astrophysical implications are briefly discussed.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jing; Liang, Zheng-Liang; Wu, Yue-Liang

    We investigate the implications of the long-range self-interaction on both the self-capture and the annihilation of the self-interacting dark matter (SIDM) trapped in the Sun. Our discussion is based on a specific SIDM model in which DM particles self-interact via a light scalar mediator, or Yukawa potential, in the context of quantum mechanics. Within this framework, we calculate the self-capture rate across a broad region of parameter space. While the self-capture rate can be obtained separately in the Born regime with perturbative methods, and in the classical limit with the Rutherford formula, our calculation covers the gap between them in a non-perturbative fashion. Besides, the phenomenology of both the Sommerfeld-enhanced s- and p-wave annihilation of the solar SIDM is also involved in our discussion. Moreover, by combining the analysis of the Super-Kamiokande (SK) data and the observed DM relic density, we constrain the nuclear capture rate of the DM particles in the presence of the dark Yukawa potential. The consequence of the long-range dark force on probing the solar SIDM turns out to be significant if the force-carrier is much lighter than the DM particle, and a quantitative analysis is provided.

  15. Hypersonic Viscous Flow Over Large Roughness Elements

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan M.

    2009-01-01

    Viscous flow over discrete or distributed surface roughness has great implications for hypersonic flight due to aerothermodynamic considerations related to laminar-turbulent transition. Current prediction capability is greatly hampered by the limited knowledge base for such flows. To help fill that gap, numerical computations are used to investigate the intricate flow physics involved. An unstructured mesh, compressible Navier-Stokes code based on the space-time conservation element, solution element (CESE) method is used to perform time-accurate Navier-Stokes calculations for two roughness shapes investigated in wind tunnel experiments at NASA Langley Research Center. It was found through a 2D parametric study that at subcritical Reynolds numbers, spontaneous absolute instability accompanied by sustained vortex shedding downstream of the roughness is likely to take place at subsonic free-stream conditions. On the other hand, convective instability may be the dominant mechanism for supersonic boundary layers. Three-dimensional calculations for both a rectangular and a cylindrical roughness element at post-shock Mach numbers of 4.1 and 6.5 also confirm that no self-sustained vortex generation from the top face of the roughness is observed, despite the presence of flow unsteadiness for the smaller post-shock Mach number case.

  16. Long-range Self-interacting Dark Matter in the Sun

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liang, Zheng-Liang; Wu, Yue-Liang; Zhou, Yu-Feng

    2015-12-01

    We investigate the implications of the long-range self-interaction on both the self-capture and the annihilation of the self-interacting dark matter (SIDM) trapped in the Sun. Our discussion is based on a specific SIDM model in which DM particles self-interact via a light scalar mediator, or Yukawa potential, in the context of quantum mechanics. Within this framework, we calculate the self-capture rate across a broad region of parameter space. While the self-capture rate can be obtained separately in the Born regime with perturbative methods, and in the classical limit with the Rutherford formula, our calculation covers the gap between them in a non-perturbative fashion. Besides, the phenomenology of both the Sommerfeld-enhanced s- and p-wave annihilation of the solar SIDM is also involved in our discussion. Moreover, by combining the analysis of the Super-Kamiokande (SK) data and the observed DM relic density, we constrain the nuclear capture rate of the DM particles in the presence of the dark Yukawa potential. The consequence of the long-range dark force on probing the solar SIDM turns out to be significant if the force-carrier is much lighter than the DM particle, and a quantitative analysis is provided.

  17. The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Cui, S. T.; Cummings, P. T.; Cochran, H. D.

    This short commentary presents the result of long molecular dynamics simulation calculations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. Also, the computational time required for viscosity calculations of these systems by the Green-Kubo method is compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero strain rate viscosity.
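The Green-Kubo route to the zero-shear-rate viscosity described above can be sketched as follows; the "stress" series here is a synthetic exponentially correlated signal in reduced units standing in for a molecular dynamics P_xy time series, and the truncation length is an assumption:

```python
import numpy as np

def green_kubo_viscosity(p_xy, dt, volume, kT, n_lags):
    """Zero-shear-rate viscosity from the Green-Kubo relation:
    eta = (V / kT) * integral_0^inf <P_xy(0) P_xy(t)> dt,
    with the stress autocorrelation averaged over time origins and the
    integral truncated at n_lags (trapezoidal rule)."""
    n = len(p_xy)
    acf = np.array([np.mean(p_xy[:n - k] * p_xy[k:]) for k in range(n_lags)])
    integral = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))
    return volume / kT * integral

# Synthetic correlated signal (Ornstein-Uhlenbeck, relaxation time tau).
rng = np.random.default_rng(1)
x, tau, dt = 0.0, 5.0, 0.1
series = np.empty(20000)
for i in range(20000):
    x += (-x / tau) * dt + rng.normal(0.0, np.sqrt(dt))
    series[i] = x
eta = green_kubo_viscosity(series, dt, volume=1.0, kT=1.0, n_lags=500)
print(eta > 0)  # positively correlated stress gives a positive viscosity
```

The need to truncate the integral well past the stress relaxation time is exactly why the rotational and diffusional relaxation times mentioned above make useful guides for the required simulation length.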

  18. Empirical determination of collimator scatter data for use in Radcalc commercial monitor unit calculation software: Implication for prostate volumetric modulated-arc therapy calculations.

    PubMed

    Richmond, Neil; Tulip, Rachael; Walker, Chris

    2016-01-01

    The aim of this work was to determine, by measurement and independent monitor unit (MU) check, the optimum method for determining collimator scatter for an Elekta Synergy linac with an Agility multileaf collimator (MLC) within Radcalc, a commercial MU calculation software package. The collimator scatter factors were measured for 13 field shapes defined by an Elekta Agility MLC on a Synergy linac with 6MV photons. The value of the collimator scatter associated with each field was also calculated according to the equation Sc=Sc(mlc)+Sc(corr)(Sc(open)-Sc(mlc)) with Sc(corr) varied between 0 and 1, where Sc(open) is the value of collimator scatter calculated from the rectangular collimator-defined field and Sc(mlc) the value using only the MLC-defined field shape by applying sector integration. From this the optimum value of the correction was determined as that which gives the minimum difference between measured and calculated Sc. Single (simple fluence modulation) and dual-arc (complex fluence modulation) treatment plans were generated on the Monaco system for prostate volumetric modulated-arc therapy (VMAT) delivery. The planned MUs were verified by absolute dose measurement in phantom and by an independent MU calculation. The MU calculations were repeated with values of Sc(corr) between 0 and 1. The values of the correction yielding the minimum MU difference between treatment planning system (TPS) and check MU were established. The empirically derived value of Sc(corr) giving the best fit to the measured collimator scatter factors was 0.49. This figure however was not found to be optimal for either the single- or dual-arc prostate VMAT plans, which required 0.80 and 0.34, respectively, to minimize the differences between the TPS and independent-check MU. Point dose measurement of the VMAT plans demonstrated that the TPS MUs were appropriate for the delivered dose. 
Although the value of Sc(corr) may be obtained by direct comparison of calculation with measurement, the efficacy of the value determined for VMAT-MU calculations is very much dependent on the complexity of the MLC delivery. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
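The blending equation Sc = Sc(mlc) + Sc(corr)·(Sc(open) − Sc(mlc)) and the empirical fit of Sc(corr) described in the abstract above can be sketched as a simple grid search; the scatter factors below are hypothetical, not the measured Agility data:

```python
import numpy as np

def sc_blend(sc_mlc, sc_open, sc_corr):
    """Blended collimator scatter: Sc = Sc_mlc + Sc_corr*(Sc_open - Sc_mlc)."""
    return sc_mlc + sc_corr * (sc_open - sc_mlc)

def fit_sc_corr(sc_mlc, sc_open, sc_measured):
    """Scan Sc_corr over [0, 1] for the value minimizing the RMS deviation
    from the measured collimator scatter factors."""
    grid = np.linspace(0.0, 1.0, 101)
    rms = [np.sqrt(np.mean((sc_blend(sc_mlc, sc_open, c) - sc_measured) ** 2))
           for c in grid]
    return grid[int(np.argmin(rms))]

# Hypothetical factors for three MLC-shaped fields.
sc_mlc = np.array([0.960, 0.975, 0.990])   # MLC-defined field shape only
sc_open = np.array([1.000, 1.005, 1.010])  # rectangular collimator field
sc_meas = sc_blend(sc_mlc, sc_open, 0.49)  # synthetic "measurement"
print(round(fit_sc_corr(sc_mlc, sc_open, sc_meas), 2))  # 0.49
```

With real data the fitted value would depend on the field shapes sampled, which mirrors the paper's finding that a single Sc(corr) does not suit both simple and complex VMAT deliveries.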

  19. Various methods for assessing static lower extremity alignment: implications for prospective risk-factor screenings.

    PubMed

    Nguyen, Anh-Dung; Boling, Michelle C; Slye, Carrie A; Hartley, Emily M; Parisi, Gina L

    2013-01-01

    Accurate, efficient, and reliable measurement methods are essential to prospectively identify risk factors for knee injuries in large cohorts. To determine tester reliability using digital photographs for the measurement of static lower extremity alignment (LEA) and whether values quantified with an electromagnetic motion-tracking system are in agreement with those quantified with clinical methods and digital photographs. Descriptive laboratory study. Laboratory. Thirty-three individuals participated and included 17 (10 women, 7 men; age = 21.7 ± 2.7 years, height = 163.4 ± 6.4 cm, mass = 59.7 ± 7.8 kg, body mass index = 23.7 ± 2.6 kg/m2) in study 1, in which we examined the reliability between clinical measures and digital photographs in 1 trained and 1 novice investigator, and 16 (11 women, 5 men; age = 22.3 ± 1.6 years, height = 170.3 ± 6.9 cm, mass = 72.9 ± 16.4 kg, body mass index = 25.2 ± 5.4 kg/m2) in study 2, in which we examined the agreement among clinical measures, digital photographs, and an electromagnetic tracking system. We evaluated measures of pelvic angle, quadriceps angle, tibiofemoral angle, genu recurvatum, femur length, and tibia length. Clinical measures were assessed using clinically accepted methods. Frontal- and sagittal-plane digital images were captured and imported into a computer software program. Anatomic landmarks were digitized using an electromagnetic tracking system to calculate static LEA. Intraclass correlation coefficients and standard errors of measurement were calculated to examine tester reliability. We calculated 95% limits of agreement and used Bland-Altman plots to examine agreement among clinical measures, digital photographs, and an electromagnetic tracking system. 
Using digital photographs, fair to excellent intratester (intraclass correlation coefficient range = 0.70-0.99) and intertester (intraclass correlation coefficient range = 0.75-0.97) reliability were observed for static knee alignment and limb-length measures. An acceptable level of agreement was observed between clinical measures and digital pictures for limb-length measures. When comparing clinical measures and digital photographs with the electromagnetic tracking system, an acceptable level of agreement was observed in measures of static knee angles and limb-length measures. The use of digital photographs and an electromagnetic tracking system appears to be an efficient and reliable method to assess static knee alignment and limb-length measurements.

  20. Equations of state and stability of MgSiO 3 perovskite and post-perovskite phases from quantum Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yangzheng; Cohen, Ronald E.; Stackhouse, Stephen

    2014-11-10

    In this study, we have performed quantum Monte Carlo (QMC) simulations and density functional theory calculations to study the equations of state of MgSiO3 perovskite (Pv, bridgmanite) and post-perovskite (PPv) up to the pressure and temperature conditions of the base of Earth's lower mantle. The ground-state energies were derived using QMC simulations and the temperature-dependent Helmholtz free energies were calculated within the quasiharmonic approximation and density functional perturbation theory. The equations of state for both phases of MgSiO3 agree well with experiments, and better than those from generalized gradient approximation calculations. The Pv-PPv phase boundary calculated from our QMC equations of state is also consistent with experiments, and better than previous local density approximation calculations. Lastly, we discuss the implications for double crossing of the Pv-PPv boundary in the Earth.
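Equation-of-state fits of the kind discussed above are commonly expressed in the third-order Birch-Murnaghan form; a sketch with illustrative parameters of roughly bridgmanite magnitude (V0, K0, and K0' here are placeholders, not the paper's fitted values):

```python
def birch_murnaghan_pressure(v, v0, k0, k0_prime):
    """Third-order Birch-Murnaghan equation of state P(V).
    v0: zero-pressure volume, k0: bulk modulus (GPa), k0_prime: dK/dP."""
    eta = (v0 / v) ** (2.0 / 3.0)
    return (1.5 * k0 * (eta ** 3.5 - eta ** 2.5)
            * (1.0 + 0.75 * (k0_prime - 4.0) * (eta - 1.0)))

# Illustrative: V0 = 162.3 A^3 per unit cell, K0 = 250 GPa, K0' = 4.
p = birch_murnaghan_pressure(0.8 * 162.3, 162.3, 250.0, 4.0)
print(p > 0)  # compressing to 80% of V0 yields a large positive pressure
```

By construction P(V0) = 0, so a fitted (V0, K0, K0') triple summarizes the whole isothermal compression curve being compared against experiment.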

  1. Effect of blood sampling schedule and method of calculating the area under the curve on validity and precision of glycaemic index values.

    PubMed

    Wolever, Thomas M S

    2004-02-01

    To evaluate the suitability for glycaemic index (GI) calculations of using blood sampling schedules and methods of calculating area under the curve (AUC) different from those recommended, the GI values of five foods were determined by recommended methods (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects and different calculations performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC) and iAUC including area only before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of less than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculated using iAUC (20.4) was lower than that of six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or less than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yields more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e. within ±5%) was AUCcut.
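The recommended iAUC calculation described above can be sketched as a trapezoidal area above the fasting baseline. This is a simplified version: segments crossing the baseline are truncated endpoint-wise rather than split exactly at the crossing, and the glucose values are illustrative:

```python
def incremental_auc(times, glucose):
    """Incremental area under the curve (iAUC): trapezoidal area above
    the fasting (t = 0) baseline, ignoring area below the baseline."""
    baseline = glucose[0]
    area = 0.0
    for i in range(1, len(times)):
        y0 = max(glucose[i - 1] - baseline, 0.0)
        y1 = max(glucose[i] - baseline, 0.0)
        area += 0.5 * (y0 + y1) * (times[i] - times[i - 1])
    return area

# Seven capillary samples over 2 h (min, mmol/L) -- illustrative values.
t = [0, 15, 30, 45, 60, 90, 120]
g = [4.5, 6.0, 7.5, 7.0, 6.0, 5.0, 4.4]
print(incremental_auc(t, g))  # 153.75 (mmol/L x min)
```

Note that the final sample (4.4) falls below the baseline and contributes nothing, which is precisely where iAUC and netAUC diverge.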

  2. An accelerated hologram calculation using the wavefront recording plane method and wavelet transform

    NASA Astrophysics Data System (ADS)

    Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-06-01

    Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, it increases the calculation time with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.

  3. Growth in Head Size during Infancy: Implications for Sound Localization.

    ERIC Educational Resources Information Center

    Clifton, Rachel K.; And Others

    1988-01-01

    Compared head circumference and interaural distance in infants between birth and 22 weeks of age and in a small sample of preschool children and adults. Calculated changes in interaural time differences according to age. Found a large shift in distance. (SKC)
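The link between head size and the interaural time difference (ITD) summarized above can be illustrated with a simple path-length model; the d·sin(θ)/c approximation and the head widths are illustrative assumptions, not the study's measurements:

```python
import math

def interaural_time_difference(head_width_m, azimuth_deg, c=343.0):
    """Simple path-length model of the ITD: the extra distance sound
    travels to the far ear is about d*sin(theta), divided by the speed
    of sound c (m/s). Returns seconds."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / c

newborn = interaural_time_difference(0.09, 90)  # ~9 cm interaural distance
adult = interaural_time_difference(0.15, 90)    # ~15 cm interaural distance
print(adult > newborn)  # a wider head yields a larger timing cue
```

Even this crude model shows why growth in interaural distance over infancy forces a recalibration of the sound-localization cue.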

  4. Perceived Intoxication: Implications for Alcohol Education.

    ERIC Educational Resources Information Center

    Nicholson, Mary E.; And Others

    1994-01-01

    This study examined the relationships among perceived levels of intoxication, blood alcohol levels, and impairment of selected psychomotor skills used in driving. Results reinforced previous findings which correlated perceptions of intoxication and other measures. These findings suggest that alcohol consumption tables, which calculate one's…

  5. Topographic and Roughness Characteristics of the Vastitas Borealis Formation on Mars Described by Fractal Statistics

    NASA Technical Reports Server (NTRS)

    Garneau, S.; Plaut, J. J.

    2000-01-01

    The surface roughness of the Vastitas Borealis Formation on Mars was analyzed with fractal statistics. Root mean square slopes and fractal dimensions were calculated for 74 topographic profiles. Results have implications for radar scattering models.
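A minimal sketch of the RMS-slope part of the roughness analysis described above (the elevation profile is hypothetical, not topographic data from the study):

```python
import numpy as np

def rms_slope(z, dx):
    """RMS slope of a topographic profile: root-mean-square of the
    point-to-point gradient. z: elevations (m), dx: spacing (m)."""
    slopes = np.diff(z) / dx
    return float(np.sqrt(np.mean(slopes ** 2)))

# Hypothetical profile sampled at 1 m horizontal spacing.
z = np.array([0.0, 0.1, -0.05, 0.2, 0.0])
print(rms_slope(z, 1.0) > 0)
```

In a fractal description, how this RMS slope changes with the sampling baseline dx is what determines the fractal dimension of the surface.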

  6. Incidence, prevalence, and hybrid approaches to calculating disability-adjusted life years

    PubMed Central

    2012-01-01

    When disability-adjusted life years are used to measure the burden of disease on a population in a time interval, they can be calculated in several different ways: from an incidence, pure prevalence, or hybrid perspective. I show that these calculation methods are not equivalent and discuss some of the formal difficulties each method faces. I show that if we don’t discount the value of future health, there is a sense in which the choice of calculation method is a mere question of accounting. Such questions can be important, but they don’t raise deep theoretical concerns. If we do discount, however, choice of calculation method can change the relative burden attributed to different conditions over time. I conclude by recommending that studies involving disability-adjusted life years be explicit in noting what calculation method is being employed and in explaining why that calculation method has been chosen. PMID:22967055
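The effect of discounting on an incidence-perspective burden calculation can be sketched as follows (continuous discounting at rate r; the disability weight and duration are illustrative, not from the paper):

```python
import math

def discounted_yld(disability_weight, duration_years, r=0.03):
    """Years lived with disability for one incident case, continuously
    discounted at annual rate r (incidence perspective). With r = 0
    this reduces to weight * duration."""
    if r == 0:
        return disability_weight * duration_years
    return disability_weight * (1.0 - math.exp(-r * duration_years)) / r

# Discounting shrinks the burden attributed to long-lasting sequelae.
print(discounted_yld(0.2, 10, r=0.0))         # 2.0
print(discounted_yld(0.2, 10, r=0.03) < 2.0)  # True
```

Because the discount factor compounds over the duration of each case, a positive r changes the relative burden of long- versus short-duration conditions, which is the paper's point about the choice of calculation method mattering once we discount.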

  7. Extracting Forest Canopy Characteristics from Remote Sensing Imagery: Implications for Sentinel-2 Mission

    NASA Astrophysics Data System (ADS)

    Gholizadeh, Asa; Kopaekova, Veronika; Rogass, Christian; Mielke, Christian; Misurec, Jan

    2016-08-01

    Systematic quantification and monitoring of forest biophysical and biochemical variables is required to assess the response of ecosystems to climate change. Remote sensing has been introduced as a time- and cost-efficient way to carry out large-scale monitoring of vegetation parameters. The Red-Edge Position (REP) is a hyperspectrally detectable parameter that is sensitive to vegetation chlorophyll (Chl) content. In the current study, REP was modelled for the Norway spruce forest canopy resampled to HyMap and Sentinel-2 spectral resolution as well as calculated from the real HyMap and simulated Sentinel-2 data. Different REP extraction methods (4PLI, PF, LE, 4PLIH and 4PLIS) were assessed. The study showed the way for effective utilization of the forthcoming hyper- and superspectral remote sensing sensors in orbit to monitor vegetation attributes.

  8. Ab initio studies of electronic transport through amine-Au-linked junctions of photoactive molecules

    NASA Astrophysics Data System (ADS)

    Strubbe, David A.; Quek, Su Ying; Venkataraman, Latha; Choi, Hyoung Joon; Neaton, J. B.; Louie, Steven G.

    2008-03-01

    Molecules linked to Au electrodes via amine groups have been shown to result in reproducible molecular conductance values for a wide range of single-molecule junctions [1,2]. Recent calculations have shown that these linkages result in a junction conductance relatively insensitive to atomic structure [3]. Here we exploit these well-defined linkages to study the effect of isomerization on conductance for the photoactive molecule 4,4'-diaminoazobenzene. We use a first-principles scattering-state method based on density-functional theory to explore structure and transport properties of the cis and trans isomers of the molecule, and we discuss implications for experiment. [1] L Venkataraman et al., Nature 442, 904-907 (2006); [2] L Venkataraman et al., Nano Lett. 6, 458-462 (2006); [3] SY Quek et al., Nano Lett. 7, 3477-3482 (2007).

  9. High-resolution, far-ultraviolet study of Beta Draconis (G2 Ib-II) - Transition region structure and energy balance

    NASA Technical Reports Server (NTRS)

    Brown, A.; Jordan, C.; Stencel, R. E.; Linsky, J. L.; Ayres, T. R.

    1984-01-01

    High-resolution far ultraviolet spectra of the star Beta Draconis have been obtained with the IUE satellite. The observations and emission line data from the spectra are presented, the interpretation of the emission line widths and shifts is discussed, and the implications are given in terms of atmospheric properties. The emission measure distribution is derived, and density diagnostics involving both line ratios and line opacity arguments are investigated. The methods for calculating spherically symmetric models of the atmospheric structure are outlined, and several such models are presented. The extension of these models to log T(e) greater than 5.3 using the observed X-ray flux is addressed, the energy balance of an 'optimum' model is investigated, and possible models of energy transport and deposition are discussed.

  10. Impact of actuarial assumptions on pension costs: A simulation analysis

    NASA Astrophysics Data System (ADS)

    Yusof, Shaira; Ibrahim, Rose Irnawaty

    2013-04-01

    This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan, in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Simulation analyses are used to examine the impact of actuarial assumptions on pension costs. Two actuarial assumptions are considered in this study: mortality rates and interest rates. To calculate pension costs, the Accrued Benefit Cost Method with the constant amount (CA) and constant percentage of salary (CS) modifications is used. The mortality assumptions and the implied mortality experience of the plan can potentially have a significant impact on pension costs. The interest rate assumption, by contrast, is inversely related to pension costs. Results of the study have important implications for analysts of pension costs.
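The inverse relation between the interest-rate assumption and pension costs noted above can be illustrated with a simple annuity present-value factor (a sketch, not the paper's Accrued Benefit Cost Method):

```python
def pension_cost_pv(annual_benefit, years, interest_rate):
    """Present value of a level annual pension benefit paid in arrears
    for a fixed number of years, discounted at the valuation interest
    rate (an annuity-immediate factor; illustrative only)."""
    i = interest_rate
    return annual_benefit * (1.0 - (1.0 + i) ** -years) / i

# Raising the assumed interest rate lowers the valued cost.
low_rate = pension_cost_pv(10000.0, 20, 0.04)
high_rate = pension_cost_pv(10000.0, 20, 0.06)
print(high_rate < low_rate)  # True
```

A full actuarial model would also weight each payment by survival probabilities, which is where the mortality assumption enters.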

  11. Implantable magnetic nanocomposites for the localized treatment of breast cancer

    NASA Astrophysics Data System (ADS)

    Kan-Dapaah, Kwabena; Rahbar, Nima; Soboyejo, Wole

    2014-12-01

    This paper explores the potential of implantable magnetic nanocomposites for the localized treatment of breast cancer via hyperthermia. Magnetite (Fe3O4)-reinforced polydimethylsiloxane composites were fabricated and characterized to determine their structural, magnetic, and thermal properties. The thermal properties and degree of optimization were shown to be strongly dependent on material properties of magnetic nanoparticles (MNPs). The in-vivo temperature profiles and thermal doses were investigated by the use of a 3D finite element method (FEM) model to simulate the heating of breast tissue. Heat generation was calculated using the linear response theory model. The 3D FEM model was used to investigate the effects of MNP volume fraction, nanocomposite geometry, and treatment parameters on thermal profiles. The implications of the results were then discussed for the development of implantable devices for the localized treatment of breast cancer.

  12. Plasma Equilibrium in a Magnetic Field with Stochastic Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.A. Krommes and Allan H. Reiman

    The nature of plasma equilibrium in a magnetic field with stochastic regions is examined. It is shown that the magnetic differential equation that determines the equilibrium Pfirsch-Schluter currents can be cast in a form similar to various nonlinear equations for a turbulent plasma, allowing application of the mathematical methods of statistical turbulence theory. An analytically tractable model, previously studied in the context of resonance-broadening theory, is applied with particular attention paid to the periodicity constraints required in toroidal configurations. It is shown that even a very weak radial diffusion of the magnetic field lines can have a significant effect on the equilibrium in the neighborhood of the rational surfaces, strongly modifying the near-resonant Pfirsch-Schluter currents. Implications for the numerical calculation of 3D equilibria are discussed.

  13. Predicting runoff induced mass loads in urban watersheds: Linking land use and pyrethroid contamination.

    PubMed

    Chinen, Kazue; Lau, Sim-Lin; Nonezyan, Michael; McElroy, Elizabeth; Wolfe, Becky; Suffet, Irwin H; Stenstrom, Michael K

    2016-10-01

    Pyrethroid pesticide mass loadings in the Ballona Creek Watershed were calculated using the volume-concentration method with a Geographic Information Systems (GIS) to explore potential relationships between urban land use, impervious surfaces, and pyrethroid runoff flowing into an urban stream. A calibration of the GIS volume-concentration model was performed using 2013 and 2014 wet-weather sampling data. Permethrin and lambda-cyhalothrin were detected at the highest concentrations; deltamethrin, lambda-cyhalothrin, permethrin and cyfluthrin were the most frequently detected synthetic pyrethroids. Eight neighborhoods within the watershed were highlighted as target areas based on a Weighted Overlay Analysis (WOA) in GIS. Water phase concentrations of synthetic pyrethroids (SPs) were calculated from the reported usage. The need for stricter BMP and consumer product controls was identified as a possible way of reducing the detections of pyrethroids in Ballona Creek. This model has significant implications for determining mass loadings due to land use influence, and offers a flexible method to extrapolate data from a limited number of samplings to a larger watershed, particularly for chemicals that are not subject to environmental monitoring. Offered as a simple approach to watershed management, the GIS volume-concentration model has the potential to be applied to other target pesticides and is useful for simulating different watershed scenarios. Further research is needed to compare results against other similar urban watersheds situated in Mediterranean climates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Dissociation of I2 in chemical oxygen-iodine lasers: experiment, modeling, and pre-dissociation by electrical discharge

    NASA Astrophysics Data System (ADS)

    Katz, A.; Waichman, K.; Dahan, Z.; Rybalkin, V.; Barmashenko, B. D.; Rosenwaks, S.

    2007-06-01

    The dissociation of I2 molecules at the optical axis of a supersonic chemical oxygen-iodine laser (COIL) was studied via detailed measurements and three-dimensional computational fluid dynamics calculations. Comparing the measurements and the calculations enabled critical examination of previously proposed dissociation mechanisms and suggestion of a mechanism consistent with the experimental and theoretical results obtained in a supersonic COIL for the gain, temperature and I2 dissociation fraction at the optical axis. The suggested mechanism combines the recent scheme of Azyazov and Heaven (AIAA J. 44, 1593 (2006)), where I2(A' 3Π2u), I2(A 3Π1u) and O2(a 1Δg, v) are significant dissociation intermediates, with the "standard" chain branching mechanism of Heidner et al. (J. Phys. Chem. 87, 2348 (1983)), involving I(2P1/2) and I2(X 1Σg+, v). In addition, we examined a new method for enhancing the gain and power in a COIL by applying a DC corona/glow discharge in the transonic section of the secondary flow in the supersonic nozzle, dissociating I2 prior to its mixing with O2(1Δ). The loss of O2(1Δ) consumed for dissociation was thus reduced and the consequent dissociation rate downstream of the discharge increased, resulting in up to 80% power enhancement. The implication of this method for COILs operating beyond the specific conditions reported here is assessed.

  15. Bulk renormalization and particle spectrum in codimension-two brane worlds

    NASA Astrophysics Data System (ADS)

    Salvio, Alberto

    2013-04-01

    We study the Casimir energy due to bulk loops of matter fields in codimension-two brane worlds and discuss how effective field theory methods allow us to use this result to renormalize the bulk and brane operators. In the calculation we explicitly sum over the Kaluza-Klein (KK) states with a new convenient method, which is based on a combined use of zeta function and dimensional regularization. Among the general class of models we consider we include a supersymmetric example, 6D gauged chiral supergravity. Although much of our discussion is more general, we treat in some detail a class of compactifications, where the extra dimensions parametrize a rugby ball shaped space with size stabilized by a bulk magnetic flux. The rugby ball geometry requires two branes, which can host the Standard Model fields and carry both tension and magnetic flux (of the bulk gauge field), the leading terms in a derivative expansion. The brane properties have an impact on the KK spectrum and therefore on the Casimir energy as well as on the renormalization of the brane operators. A very interesting feature is that when the two branes carry exactly the same amount of flux, one half of the bulk supersymmetries survives after the compactification, even if the brane tensions are large. We also discuss the implications of these calculations for the natural value of the cosmological constant when the bulk has two large extra dimensions and the bulk supersymmetry is partially preserved (or completely broken).

  16. Molecular simulation of the thermophysical properties and phase behaviour of impure CO2 relevant to CCS.

    PubMed

    Cresswell, Alexander J; Wheatley, Richard J; Wilkinson, Richard D; Graham, Richard S

    2016-10-20

    Impurities from the CCS chain can greatly influence the physical properties of CO2. This has important design, safety and cost implications for the compression, transport and storage of CO2. There is an urgent need to understand and predict the properties of impure CO2 to assist with CCS implementation. However, CCS presents demanding modelling requirements. A suitable model must both accurately and robustly predict CO2 phase behaviour over a wide range of temperatures and pressures, and maintain that predictive power for CO2 mixtures with numerous, mutually interacting chemical species. A promising technique to address this task is molecular simulation. It offers a molecular approach, with foundations in firmly established physical principles, along with the potential to predict the wide range of physical properties required for CCS. The quality of predictions from molecular simulation depends on accurate force-fields to describe the interactions between CO2 and other molecules. Unfortunately, there is currently no universally applicable method to obtain force-fields suitable for molecular simulation. In this paper we present two methods of obtaining force-fields: the first being semi-empirical and the second using ab initio quantum-chemical calculations. In the first approach we optimise the impurity force-field against measurements of the phase and pressure-volume behaviour of CO2 binary mixtures with N2, O2, Ar and H2. A gradient-free optimiser allows us to use the simulation itself as the underlying model. This leads to accurate and robust predictions under conditions relevant to CCS. In the second approach we use quantum-chemical calculations to produce ab initio evaluations of the interactions between CO2 and relevant impurities, taking N2 as an exemplar. We use a modest number of these calculations to train a machine-learning algorithm, known as a Gaussian process, to describe these data. The resulting model is then able to accurately predict a much broader set of ab initio force-field calculations at comparatively low numerical cost. Although our method is not yet ready to be implemented in a molecular simulation, we outline the necessary steps here. Such simulations have the potential to deliver first-principles simulation of the thermodynamic properties of impure CO2, without fitting to experimental data.
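
    The Gaussian-process step can be sketched in a few lines: fit an RBF-kernel GP to a handful of pairwise interaction energies and predict the curve elsewhere. The Lennard-Jones form below is only a stand-in for the ab initio CO2-N2 energies, and the kernel settings are illustrative.

```python
import numpy as np

def rbf_kernel(xa, xb, length=0.2, amp=1.0):
    d = xa[:, None] - xb[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, jitter=1e-8):
    """Noise-free Gaussian-process interpolation with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# Toy "ab initio" pair energies: a Lennard-Jones curve standing in for
# the CO2-N2 interaction at a handful of separations (reduced units).
r = np.linspace(0.95, 2.0, 15)
e = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

r_fine = np.linspace(1.0, 1.9, 50)
e_pred = gp_predict(r, e, r_fine)
e_true = 4.0 * ((1.0 / r_fine) ** 12 - (1.0 / r_fine) ** 6)
print(float(np.max(np.abs(e_pred - e_true))))  # interpolation error
```

    A real application would also use the GP's predictive variance to decide where further quantum-chemical calculations are most informative.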

  17. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

    Inverse gas chromatography (IGC) is an important technique for the characterization of surface properties of solid materials. A standard method of surface characterization is that the surface dispersive free energy of the solid stationary phase is first determined by using a series of linear alkane liquids as molecular probes, and then the acid-base parameters are calculated from the dispersive parameters. However, for the calculation of surface dispersive free energy, two different methods are generally used: the Dorris-Gray method and the Schultz method. In this paper, the results of the Dorris-Gray and Schultz methods are compared by calculating their ratio from their basic equations and parameters. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and the ratio grows as the measuring temperature increases. Compared with the parameters in solvent handbooks, it appears that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, as supported by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
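
    As a concrete illustration of one of the two methods, the sketch below applies the Dorris-Gray relation, gamma_s^d = (dG_CH2)^2 / (4 N_A^2 a_CH2^2 gamma_CH2), to hypothetical n-alkane retention volumes. The CH2 cross-sectional area and surface tension are common literature choices, not data from this paper.

```python
import numpy as np

R = 8.314       # J/(mol K)
N_A = 6.022e23  # 1/mol

def dorris_gray(T, carbons, Vn):
    """Dorris-Gray dispersive surface energy (mJ/m^2) from the net
    retention volumes Vn of an n-alkane probe series."""
    # Slope of RT ln(Vn) vs carbon number = free energy per CH2 group.
    dG_CH2 = np.polyfit(carbons, R * T * np.log(Vn), 1)[0]  # J/mol
    a_CH2 = 6.0e-20                                  # CH2 area, m^2
    gamma_CH2 = (35.6 - 0.058 * (T - 293.15)) * 1e-3  # J/m^2
    gamma_sd = dG_CH2**2 / (4.0 * N_A**2 * a_CH2**2 * gamma_CH2)
    return gamma_sd * 1e3  # J/m^2 -> mJ/m^2

# Hypothetical retention volumes for C6-C9 probes at 303 K.
carbons = np.array([6, 7, 8, 9])
Vn = np.array([2.1, 5.3, 13.5, 34.0])  # mL
print(round(dorris_gray(303.15, carbons, Vn), 1))  # mJ/m^2
```

    The Schultz method instead plots RT ln(Vn) against a(gamma_l^d)^(1/2) of the probes, which is where the two sets of n-alkane parameters, and hence the ratio discussed above, enter.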

  18. Rebuilding DEMATEL threshold value: an example of a food and beverage information system.

    PubMed

    Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin

    2016-01-01

    This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined when combining DEMATEL and decomposed theory of planned behavior (DTPB) models. The models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis indicates that, because direct and indirect relationships exist among the variables, a traditional DTPB model that only simulates the effects of the variables, without considering how those variables alter the original cause-and-effect relationships, cannot represent the complete set of relationships. For the food and beverage example, a DEMATEL method was employed to reconstruct the DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships among the variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the proposed methods should ideally be applied in various fields of study to identify deeper and more practical implications.
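
    The DEMATEL core of such a procedure is compact: normalize the direct-influence matrix to D, form the total relation matrix T = D(I - D)^-1, and keep the relationships whose strength exceeds a threshold (here simply the mean of T, one common choice). A minimal sketch with a hypothetical four-variable influence matrix:

```python
import numpy as np

# Hypothetical direct-influence matrix among four DTPB variables
# (0 = no influence, 4 = very strong influence).
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

def dematel_total_relation(A):
    D = A / A.sum(axis=1).max()                 # normalize direct matrix
    T = D @ np.linalg.inv(np.eye(len(A)) - D)   # T = D (I - D)^-1
    return T

T = dematel_total_relation(A)
threshold = T.mean()            # one common threshold choice
significant = T > threshold     # relationships to keep in the model
print(round(float(threshold), 3), int(significant.sum()))
```

    The paper's contribution concerns how to pick that threshold reasonably; the mean used here is only the simplest convention.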

  19. Autonomous Aerobraking Using Thermal Response Surface Analysis

    NASA Technical Reports Server (NTRS)

    Prince, Jill L.; Dec, John A.; Tolson, Robert H.

    2007-01-01

    Aerobraking is a proven method of significantly increasing the science payload that can be placed into low Mars orbits when compared to an all-propulsive capture. However, the aerobraking phase is long and has mission cost and risk implications. The main cost benefit is that aerobraking permits the use of a smaller and cheaper launch vehicle, but additional operational costs are incurred during the long aerobraking phase. Risk is increased due to the repeated thermal loading of spacecraft components and the multiple attitude and propulsive maneuvers required for successful aerobraking. Both the cost and risk burdens can be significantly reduced by automating the aerobraking operations phase. All of the previous Mars orbiter missions that have utilized aerobraking have increasingly relied on onboard calculations during aerobraking. Even though the temperature of spacecraft components has been the limiting factor, operational methods have relied on using a surrogate variable for mission control. This paper describes several methods, based directly on spacecraft component maximum temperature, for autonomously predicting the subsequent aerobraking orbits and prescribing apoapsis propulsive maneuvers to maintain the spacecraft within specified temperature limits. Specifically, this paper describes the use of thermal response surface analysis in predicting the temperature of the spacecraft components and the corresponding uncertainty in this temperature prediction.
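
    A thermal response surface of the kind described can be as simple as a quadratic polynomial fitted by least squares to component temperatures. The sketch below fits such a surface to synthetic data in two hypothetical flight parameters (periapsis density and velocity); the heating model and all numbers are illustrative only, not the paper's spacecraft model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: a peak-heating proxy ~ rho * v^3 drives a
# component's maximum temperature; we fit a quadratic response surface.
rho = rng.uniform(20, 80, 40)    # periapsis density, kg/km^3
v = rng.uniform(3.3, 3.6, 40)    # inertial velocity, km/s
temp = 150 + 0.9 * rho * v**3 / 40 + rng.normal(0, 0.5, 40)  # deg C

def design(rho, v):
    """Quadratic response-surface basis in (rho, v)."""
    return np.column_stack([np.ones_like(rho), rho, v,
                            rho * v, rho**2, v**2])

coef, *_ = np.linalg.lstsq(design(rho, v), temp, rcond=None)

def predict_temp(rho, v):
    return design(np.atleast_1d(rho), np.atleast_1d(v)) @ coef

print(float(predict_temp(50.0, 3.45)[0]))  # predicted max temperature
```

    Onboard, evaluating such a fitted polynomial is cheap enough to run every orbit, which is the point of replacing a full thermal model with a response surface.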

  20. Monitoring and sustainable management of oil polluting wrecks and chemical munitions dump sites in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hassellöv, Ida-Maja; Tengberg, Anders

    2017-04-01

    The Baltic Sea region contains a dark legacy of about 100 000 tons of dumped chemical warfare agents. As time passes the gun shells corrode and the risk of contaminant release increases. A major goal of the EU flagship project Daimon is to support governmental organisations with case-by-case adapted methods for sustainable management of dumped toxic munitions. At the Chalmers University of Technology, a partner in Daimon, a unique ISO 31000-adapted method was developed to provide decision support regarding potentially oil-polluting shipwrecks. The method, called VRAKA, is based on probability calculations and includes site-specific information as well as expert knowledge. VRAKA is now being adapted to dumped chemical munitions. To estimate the corrosion potential of gun shells and shipwrecks, along with sediment re-suspension and transport, multiparameter instruments are deployed at dump sites. Measured parameters include currents, salinity, temperature, oxygen, depth, waves and suspended particles. These measurements have revealed that trawling at dump sites appears to play a large role in spreading toxic substances (arsenic) over larger areas. This presentation will briefly describe the decision support model and the instrumentation used, and discuss some of the obtained results.

  1. Stomatal response to decreasing humidity implicated in recent decline in U.S. evaporation

    NASA Astrophysics Data System (ADS)

    Rigden, A. J.; Salvucci, G.

    2015-12-01

    We detect and attribute long-term changes in evapotranspiration (ET) over the contiguous United States from 1961 to 2013 using an approach we refer to as the ETRHEQ method (Evapotranspiration from Relative Humidity at Equilibrium). The ETRHEQ method primarily uses meteorological data collected at common weather stations. Daily ET is inferred by choosing the surface conductance to water vapor transport that minimizes the vertical variance of the calculated relative humidity profile averaged over the day. The key advantage of the ETRHEQ method is that it does not require knowledge of the surface state (soil moisture, stomatal conductance, leaf area index, etc.) or site-specific calibration. We estimate daily ET at 229 weather stations for 53 years. Across the U.S., we find a decrease in summertime (JJAS) ET of 0.21 cm/yr/yr from 1961-2013, with recent (1998-2013) declines in summertime ET of 1.08 cm/yr/yr. We decompose the ET trends into the dominant environmental drivers. Our results suggest that the recent decline in ET is due to increased vegetation stress induced by increases in vapor pressure deficit. We will present our results in the context of other commonly used regional ET data products.

  2. Mapping underwater sound noise and assessing its sources by using a self-organizing maps method.

    PubMed

    Rako, Nikolina; Vilibić, Ivica; Mihanović, Hrvoje

    2013-03-01

    This study aims to provide an objective mapping of underwater noise and its sources over an Adriatic coastal marine habitat by applying the self-organizing maps (SOM) method. Systematic sampling of sea ambient noise (SAN) was carried out at ten predefined acoustic stations between 2007 and 2009. Analyses of noise levels were performed for 1/3 octave band standard centered frequencies in terms of instantaneous sound pressure levels averaged over 300 s to calculate the equivalent continuous sound pressure levels. Data on vessels' presence, type, and distance from the monitoring stations were also collected at each acoustic station during the acoustic sampling. Altogether 69 noise surveys were introduced to the predefined 2 × 2 SOM array. The overall results of the analysis distinguished two dominant underwater soundscapes, associating them mainly to the seasonal changes in the nautical tourism and fishing activities within the study area and to the wind and wave action. The analysis identified recreational vessels as the dominant anthropogenic source of underwater noise, particularly during the tourist season. The method proved to be an efficient tool for predicting SAN levels based on vessel distribution, indicating the possibility of its wider application in marine conservation.
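
    The clustering idea behind the SOM analysis can be sketched with a toy one-dimensional map (the study used a predefined 2 × 2 array): competitive updates pull each unit toward the samples it best matches, separating distinct soundscapes. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=2, epochs=50, lr=0.5):
    """Minimal 1-D self-organizing map: move the best-matching unit
    toward each sample with a decaying learning rate."""
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            w[bmu] += eta * (x - w[bmu])
    return w

def map_units(data, w):
    return np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])

# Two synthetic "soundscapes": quiet winter vs. noisy tourist season,
# each sample a pair of band levels in dB (illustrative numbers).
quiet = rng.normal([90, 95], 2.0, (30, 2))
noisy = rng.normal([110, 120], 2.0, (30, 2))
data = np.vstack([quiet, noisy])

w = train_som(data)
labels = map_units(data, w)
print(len(set(labels[:30])), len(set(labels[30:])))
```

    With well-separated regimes each cluster maps cleanly onto its own unit, which is the behaviour the study exploits to distinguish seasonal soundscapes.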

  3. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
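
    The tally-nuclide idea can be demonstrated on a one-nuclide system: append a pseudo-nuclide whose production rate is the weighted sum of interest, and the same matrix-exponential depletion solve returns the time integral of that sum. The sketch below uses a hand-rolled matrix exponential rather than a production depletion algorithm such as CRAM.

```python
import numpy as np

def expm(M, terms=60):
    """Taylor-series matrix exponential with scaling and squaring
    (adequate for this small demonstration)."""
    nrm = np.linalg.norm(M, 1)
    n = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    A = M / 2**n
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(n):
        E = E @ E
    return E

# One-nuclide burnup "matrix": dN/dt = -lambda * N.
lam, N0, t = 0.3, 1000.0, 5.0

# Tally nuclide: append a pseudo-nuclide T with dT/dt = w * N, so the
# end-of-step T equals the time integral of w * N over the step.
w = 1.0
aug = np.array([[-lam, 0.0],
                [w,    0.0]])
N_end, integral = expm(aug * t) @ np.array([N0, 0.0])

exact = N0 * (1 - np.exp(-lam * t)) / lam  # analytic integral of N
print(round(float(integral), 3), round(float(exact), 3))
```

    With weights equal to energy per reaction times cross sections, the same augmented row yields the energy released during the step, the application highlighted in the abstract.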

  4. Mining TCGA Data Using Boolean Implications

    PubMed Central

    Sinha, Subarna; Tsang, Emily K.; Zeng, Haoyang; Meister, Michela; Dill, David L.

    2014-01-01

    Boolean implications (if-then rules) provide a conceptually simple, uniform and highly scalable way to find associations between pairs of random variables. In this paper, we propose to use Boolean implications to find relationships between variables of different data types (mutation, copy number alteration, DNA methylation and gene expression) from the glioblastoma (GBM) and ovarian serous cystadenocarcinoma (OV) data sets from The Cancer Genome Atlas (TCGA). We find hundreds of thousands of Boolean implications from these data sets. A direct comparison of the relationships found by Boolean implications and those found by commonly used methods for mining associations show that existing methods would miss relationships found by Boolean implications. Furthermore, many relationships exposed by Boolean implications reflect important aspects of cancer biology. Examples of our findings include cis relationships between copy number alteration, DNA methylation and expression of genes, a new hierarchy of mutations and recurrent copy number alterations, loss-of-heterozygosity of well-known tumor suppressors, and the hypermethylation phenotype associated with IDH1 mutations in GBM. The Boolean implication results used in the paper can be accessed at http://crookneck.stanford.edu/microarray/TCGANetworks/. PMID:25054200
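
    A Boolean implication test can be sketched as a check that one quadrant of the 2 × 2 contingency table is nearly empty. The statistic below is a simplified stand-in for the one used by the authors, applied to synthetic binary data.

```python
import numpy as np

def boolean_implication(a, b, max_violation=0.05):
    """Test the implication 'a = 1 => b = 1' on two binary vectors: it
    holds when almost no samples fall in the (a=1, b=0) quadrant.
    (A simplified stand-in for the statistic of the paper.)"""
    n_a1 = int((a == 1).sum())
    violations = int(((a == 1) & (b == 0)).sum())
    return n_a1 > 0 and violations / n_a1 <= max_violation

rng = np.random.default_rng(7)
# Synthetic example: a mutation (a) that forces expression (b) high,
# while b can also be high for other reasons.
a = rng.integers(0, 2, 500)
b = np.where(a == 1, 1, rng.integers(0, 2, 500))

print(boolean_implication(a, b), boolean_implication(b, a))
```

    Note the asymmetry: a=1 implies b=1 here, but not the reverse, which is exactly the directional information that symmetric correlation measures discard.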

  5. Interpretation of various radiation backgrounds observed in the gamma-ray spectrometer experiments carried on the Apollo missions and implications for diffuse gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dyer, C. S.; Trombka, J. I.; Metzger, A. E.; Seltzer, S. M.; Bielefeld, M. J.; Evans, L. G.

    1975-01-01

    Since the report of a preliminary analysis of cosmic gamma-ray measurements made during the Apollo 15 mission, an improved calculation of the spallation activation contribution has been made including the effects of short-lived spallation fragments, which can extend the correction to 15 MeV. In addition, a difference between Apollo 15 and 16 data enables an electron bremsstrahlung contribution to be calculated. A high level of activation observed in a crystal returned on Apollo 17 indicates a background contribution from secondary neutrons. These calculations and observations enable an improved extraction of spurious components and suggest important improvements for future detectors.

  6. Development of a quantum mechanics-based free-energy perturbation method: use in the calculation of relative solvation free energies.

    PubMed

    Reddy, M Rami; Singh, U C; Erion, Mark D

    2004-05-26

    Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
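
    The free-energy difference at the heart of any FEP step, conventional or QM/MM-based, is the Zwanzig average dF = -kT ln<exp(-dU/kT)>_0 over the reference ensemble. The sketch below checks it against the analytic result for Gaussian-distributed perturbation energies; the samples are synthetic, not QM/MM data.

```python
import numpy as np

def fep_delta_F(dU, kT=1.0):
    """Zwanzig free-energy perturbation estimate:
    dF = -kT * ln( < exp(-dU / kT) >_0 )."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

rng = np.random.default_rng(3)
# Samples of the perturbation energy U1 - U0 over reference ensemble 0.
# For Gaussian dU, dF = <dU> - var(dU) / (2 kT) analytically.
mu, sigma = 2.0, 0.5
dU = rng.normal(mu, sigma, 200000)

dF = fep_delta_F(dU)
analytic = mu - sigma**2 / 2.0
print(round(float(dF), 3), round(analytic, 3))
```

    In practice the transformation is split into many small windows (the "thread" of intermediate states) precisely so that each dU distribution stays narrow and this exponential average converges.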

  7. Increasing Complexity of Clinical Research in Gastroenterology: Implications for Training Clinician-Scientists

    PubMed Central

    Scott, Frank I.; McConnell, Ryan A.; Lewis, Matthew E.; Lewis, James D.

    2014-01-01

    Background Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published research in gastroenterology from 1980 to 2010. Methods Three journals (Gastroenterology, Gut, and American Journal of Gastroenterology) were selected for evaluation given their continuous publication during the study period. Twenty original clinical articles were randomly selected from each journal from 1980, 1990, 2000, and 2010. Each article was assessed for topic studied, whether the outcome was clinical or physiologic, study design, sample size, number of authors and centers collaborating, and reporting of statistical methods such as sample size calculations, p-values, confidence intervals, and advanced techniques such as bioinformatics or multivariate modeling. Research support with external funding was also recorded. Results A total of 240 articles were included in the study. From 1980 to 2010, there was a significant increase in analytic studies (p<0.001), clinical outcomes (p=0.003), median number of authors per article (p<0.001), multicenter collaboration (p<0.001), sample size (p<0.001), and external funding (p<0.001). There was significantly increased reporting of p-values (p=0.01), confidence intervals (p<0.001), and power calculations (p<0.001). There was also increased utilization of large multicenter databases (p=0.001), multivariate analyses (p<0.001), and bioinformatics techniques (p=0.001). Conclusions There has been a dramatic increase in complexity in clinical research related to gastroenterology and hepatology over the last three decades. This increase highlights the need for advanced training of clinical investigators to conduct future research. PMID:22475957

  8. Pediatric patient safety events during hospitalization: approaches to accounting for institution-level effects.

    PubMed

    Slonim, Anthony D; Marcin, James P; Turenne, Wendy; Hall, Matt; Joseph, Jill G

    2007-12-01

    To determine the rates, patient, and institutional characteristics associated with the occurrence of patient safety indicators (PSIs) in hospitalized children and the degree of statistical difference derived from using three approaches of controlling for institution-level effects. Pediatric Health Information System Dataset consisting of all pediatric discharges (<21 years of age) from 34 academic, freestanding children's hospitals for calendar year 2003. The rates of PSIs were computed for all discharges. The patient and institutional characteristics associated with these PSIs were calculated. The analyses sequentially applied three increasingly conservative methods to control for institution-level effects: robust standard error estimation, a fixed effects model, and a random effects model. The degree of difference from a "base state," which excluded institution-level variables, and between the models was calculated. The effects of these analyses on the interpretation of the PSIs are presented. PSIs are relatively infrequent events in hospitalized children, ranging from 0 per 10,000 (postoperative hip fracture) to 87 per 10,000 (postoperative respiratory failure). Significant variables associated with PSIs included age (neonates), race (Caucasians), payor status (public insurance), severity of illness (extreme), and hospital size (>300 beds), which all had higher rates of PSIs than their reference groups in the bivariable logistic regression results. The three different approaches of adjusting for institution-level effects demonstrated that there were similarities in both the clinical and statistical significance across each of the models. Institution-level effects can be appropriately controlled for by using a variety of methods in the analyses of administrative data. Whenever possible, resource-conservative methods should be used in the analyses, especially if clinical implications are minimal.

  9. Prior exercise alters the difference between arterialised and venous glycaemia: implications for blood sampling procedures.

    PubMed

    Edinburgh, Robert M; Hengist, Aaron; Smith, Harry A; Betts, James A; Thompson, Dylan; Walhin, Jean-Philippe; Gonzalez, Javier T

    2017-05-01

    Oral glucose tolerance and insulin sensitivity are common measures, but are determined using various blood sampling methods, employed under many different experimental conditions. This study established whether measures of oral glucose tolerance and oral glucose-derived insulin sensitivity (insulin sensitivity indices; ISI) differ when calculated from venous v. arterialised blood. Critically, we also established whether any differences between sampling methods are consistent across distinct metabolic conditions (after rest v. after exercise). A total of ten healthy men completed two trials in a randomised order, each consisting of a 120-min oral glucose tolerance test (OGTT), either at rest or post-exercise. Blood was sampled simultaneously from a heated hand (arterialised) and an antecubital vein of the contralateral arm (venous). Under both conditions, glucose time-averaged AUC was greater in arterialised than in venous plasma but importantly, this difference was larger after rest relative to after exercise (0·99 (sd 0·46) v. 0·56 (sd 0·24) mmol/l, respectively; P<0·01). OGTT-derived ISIMatsuda and ISICederholm were lower when calculated from arterialised relative to venous plasma, and the arterialised-venous difference was greater after rest v. after exercise (ISIMatsuda: 1·97 (sd 0·81) v. 1·35 (sd 0·57) arbitrary units (au), respectively; ISICederholm: 14·76 (sd 7·83) v. 8·70 (sd 3·95) au, respectively; both P<0·01). Venous blood provides lower postprandial glucose concentrations and higher estimates of insulin sensitivity, compared with arterialised blood. Most importantly, these differences between blood sampling methods are not consistent after rest v. post-exercise, preventing standardised venous-to-arterialised corrections from being readily applied.
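
    One of the OGTT-derived indices compared here, ISIMatsuda, is computed as 10000 / sqrt(G0 x I0 x Gmean x Imean), with glucose in mg/dL and insulin in µU/mL and the means taken over the whole test. A sketch with hypothetical OGTT samples:

```python
import numpy as np

def isi_matsuda(glucose_mg_dl, insulin_uU_ml):
    """Matsuda-DeFronzo insulin sensitivity index from OGTT samples
    (first sample is fasting; means span the whole test)."""
    g0, i0 = glucose_mg_dl[0], insulin_uU_ml[0]
    g_mean, i_mean = np.mean(glucose_mg_dl), np.mean(insulin_uU_ml)
    return 10000.0 / np.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical 120-min OGTT sampled every 30 min.
glucose = np.array([90.0, 150.0, 130.0, 110.0, 95.0])  # mg/dL
insulin = np.array([8.0, 60.0, 45.0, 30.0, 15.0])      # uU/mL

print(round(float(isi_matsuda(glucose, insulin)), 2))
```

    Because every term sits under the square root, any systematic arterialised-venous offset in glucose or insulin propagates directly into the index, which is why the sampling site matters for the comparisons above.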

  10. Generalized approximate spin projection calculations of effective exchange integrals of the CaMn4O5 cluster in the S1 and S3 states of the oxygen evolving complex of photosystem II.

    PubMed

    Isobe, H; Shoji, M; Yamanaka, S; Mino, H; Umena, Y; Kawakami, K; Kamiya, N; Shen, J-R; Yamaguchi, K

    2014-06-28

    Full geometry optimizations followed by the vibrational analysis were performed for eight spin configurations of the CaMn4O4X(H2O)3Y (X = O, OH; Y = H2O, OH) cluster in the S1 and S3 states of the oxygen evolution complex (OEC) of photosystem II (PSII). The energy gaps among these configurations obtained by vertical, adiabatic and adiabatic plus zero-point-energy (ZPE) correction procedures have been used for computation of the effective exchange integrals (J) in the spin Hamiltonian model. The J values are calculated by the (1) analytical method and the (2) generalized approximate spin projection (AP) method that eliminates the spin contamination errors of UB3LYP solutions. Using J values derived from these methods, exact diagonalization of the spin Hamiltonian matrix was carried out, yielding excitation energies and spin densities of the ground and lower-excited states of the cluster. The obtained results for the right (R)- and left (L)-opened structures in the S1 and S3 states are found to be consistent with available optical and magnetic experimental results. Implications of the computational results are discussed in relation to (a) the necessity of the exact diagonalization for computations of reliable energy levels, (b) magneto-structural correlations in the CaMn4O5 cluster of the OEC of PSII, (c) structural symmetry breaking in the S1 and S3 states, and (d) the right- and left-handed scenarios for the O-O bond formation for water oxidation.
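
    The exact-diagonalization step can be illustrated on the smallest case, a two-center Heisenberg spin Hamiltonian H = -2J S1.S2 built from spin-1/2 operators; the OEC cluster requires four coupled centers, but the machinery is the same. The J value below is hypothetical.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def heisenberg_dimer(J):
    """H = -2 J S1.S2 for two spin-1/2 centers (a toy stand-in for the
    four-center CaMn4O5 spin Hamiltonian)."""
    H = np.zeros((4, 4), dtype=complex)
    for s in (sx, sy, sz):
        H += -2.0 * J * np.kron(s, s)
    return H

J = -50.0  # cm^-1, hypothetical antiferromagnetic coupling
E = np.linalg.eigvalsh(heisenberg_dimer(J))
# Singlet ground state with a singlet-triplet gap of |2J|.
print(round(float(E[1] - E[0]), 6))
```

    For the real cluster the Hamiltonian is a sum of such pair terms over all Mn-Mn couplings, and diagonalizing it yields the full ladder of spin states rather than a single singlet-triplet gap, which is why the paper stresses exact diagonalization over perturbative estimates.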

  11. A transition from using multi‐step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

    Abstract Introduction The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single‐center experience of transitioning from the use of multi‐step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods The total number of ECP procedures performed 2011–2015 was derived from department records. The time taken to complete a single ECP treatment using a multi‐step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time‐driven activity‐based costing methods were applied to provide a cost comparison. Results The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi‐step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per‐session cost of performing ECP using the multi‐step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions For hospitals considering a transition from multi‐step procedures to fully integrated methods for ECP where cost may be a barrier, time‐driven activity‐based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561
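
    Time-driven activity-based costing boils down to pricing each procedure as materials plus a personnel cost rate multiplied by procedure time. The sketch below contrasts a 270-minute multi-step session with a 120-minute integrated session using hypothetical cost figures, not the department's actual 2014 data.

```python
# Time-driven activity-based costing: per-session cost = materials
# plus (personnel cost rate x procedure time). All figures below are
# hypothetical, chosen only to illustrate the mechanics.

def session_cost(material_cost, staff_rate_per_min, minutes):
    return material_cost + staff_rate_per_min * minutes

multi_step = session_cost(material_cost=900.0,
                          staff_rate_per_min=1.2, minutes=270)
integrated = session_cost(material_cost=1000.0,
                          staff_rate_per_min=1.2, minutes=120)

print(round(multi_step, 2), round(integrated, 2))
```

    The structure makes the trade-off explicit: an integrated kit can cost more in materials yet still be cheaper per session once the shorter staff time is priced in, and the freed minutes translate into additional treatment capacity.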

  12. Savant Syndrome: Case Studies, Hypotheses, and Implications for Special Education.

    ERIC Educational Resources Information Center

    Cheatham, Susan Klug; And Others

    1995-01-01

    The concept of savant syndrome, encompassing those individuals historically known as "idiot savants," is reviewed. Case studies demonstrating special abilities in the areas of calendar calculating, musical ability, artistic talent, memorization, mathematical skills, mechanical achievement, and fine sensory discrimination are discussed,…

  13. Ecological impacts of the Deepwater Horizon oil spill: implications for immunotoxicity

    EPA Science Inventory

    Summary of major Federal and multi-stake holder research efforts in response to the DWH spill, including laboratory oil dispersant testing, estimation of oil release rates and oil fate calculations, subsea monitoring, and post-spill assessments. Impacts from shoreline oiling, wil...

  14. A Longitudinal Study on Predictors of Early Calculation Development among Young Children At-Risk for Learning Difficulties

    PubMed Central

    Peng, Peng; Namkung, Jessica M.; Fuchs, Douglas; Fuchs, Lynn S.; Patton, Samuel; Yen, Loulee; Compton, Donald L.; Zhang, Wenjuan; Miller, Amanda; Hamlett, Carol

    2016-01-01

    The purpose of this study was to explore domain-general cognitive skills, domain-specific academic skills, and demographic characteristics that are associated with calculation development from first through third grade among young children with learning difficulties. Participants were 176 children identified with reading and mathematics difficulties at the beginning of first grade. Data were collected on working memory, language, nonverbal reasoning, processing speed, decoding, numerical competence, incoming calculations, socioeconomic status, and gender at the beginning of first grade and on calculation performance at 4 time points: the beginning of first grade, the end of first grade, the end of second grade, and the end of third grade. Latent growth modelling analysis showed that numerical competence, incoming calculation, processing speed, and decoding skills significantly explained the variance of calculation performance at the beginning of first grade. Numerical competence and processing speed significantly explained the variance of calculation performance at the end of third grade. However, numerical competence was the only significant predictor of calculation development from the beginning of first grade to the end of third grade. Implications of these findings for early calculation instructions among young at-risk children are discussed. PMID:27572520

  15. Calculation method for laser radar cross sections of rotationally symmetric targets.

    PubMed

    Cao, Yunhua; Du, Yongzhi; Bai, Lu; Wu, Zhensen; Li, Haiying; Li, Yanhui

    2017-07-01

    The laser radar cross section (LRCS) is a key parameter in the study of target scattering characteristics. In this paper, a practical method for calculating the LRCSs of rotationally symmetric targets is presented. Monostatic LRCSs for four kinds of rotationally symmetric targets (cone, rotating ellipsoid, super ellipsoid, and blunt cone) are calculated, and the results verify the feasibility of the method. Comparison with results from the triangular patch method confirms the correctness of the proposed method and highlights several of its advantages: for instance, it requires no geometric modeling or patch discretization. The method uses a generatrix model and a double integral, and its calculation is concise and accurate. This work provides a theoretical basis for the rapid calculation of the LRCS of common basic targets.

  16. Shot noise cross-correlation functions and cross spectra - Implications for models of QPO X-ray sources

    NASA Technical Reports Server (NTRS)

    Shibazaki, N.; Elsner, R. F.; Bussard, R. W.; Ebisuzaki, T.; Weisskopf, M. C.

    1988-01-01

    The cross-correlation functions (CCFs) and cross spectra expected for quasi-periodic oscillation (QPO) shot noise models are calculated under various assumptions, and the results are compared to observations. Effects due to possible coherence of the QPO oscillations are included. General formulas for the cross spectrum, the cross-phase spectrum, and the time-delay spectrum for QPO shot models are calculated and discussed. It is shown that the CCFs, cross spectra, and power spectra observed for Cyg X-2 imply that the spectrum of the shots evolves with time, with important implications for the interpretation of these functions as well as of observed average energy spectra. The possible origins for the observed hard lags are discussed, and some physical difficulties for the Comptonization model are described. Classes of physical models for QPO sources are briefly addressed, and it is concluded that models involving shot formation at the surface of neutron stars are favored by observation.

  17. Comparison of nurse staffing based on changes in unit-level workload associated with patient churn.

    PubMed

    Hughes, Ronda G; Bobay, Kathleen L; Jolly, Nicholas A; Suby, Chrysmarie

    2015-04-01

    This analysis compares the staffing implications of three measures of nurse staffing requirements: midnight census, turnover adjustment based on length of stay, and volume of admissions, discharges and transfers. Midnight census is commonly used to determine registered nurse staffing. Unit-level workload increases with patient churn, the movement of patients in and out of the nursing unit. Failure to account for patient churn in staffing allocation impacts nurse workload and may result in adverse patient outcomes. Secondary data analysis of unit-level data from 32 hospitals, where nursing units are grouped into three unit-type categories: intensive care, intermediate care, and medical surgical. Midnight census alone did not account adequately for registered nurse workload intensity associated with patient churn. On average, units were staffed with a mixture of registered nurses and other nursing staff not always to budgeted levels. Adjusting for patient churn increases nurse staffing across all units and shifts. Use of the discharges and transfers adjustment to midnight census may be useful in adjusting RN staffing on a shift basis to account for patient churn. Nurse managers should understand the implications to nurse workload of various methods of calculating registered nurse staff requirements. © 2013 John Wiley & Sons Ltd.
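    The churn adjustment described above can be illustrated with a minimal sketch; the churn weight, nurse-to-patient ratio, and census figures below are hypothetical placeholders, not values from the study.

    ```python
    import math

    def rn_required(workload, patients_per_rn=5.0):
        """Round workload up to whole nurses at a fixed nurse-to-patient ratio."""
        return math.ceil(workload / patients_per_rn)

    # Hypothetical shift on one unit.
    midnight_census = 24
    admissions, discharges, transfers = 6, 5, 3
    churn_weight = 0.5  # assumed workload of one ADT event, in patient-equivalents

    census_only = rn_required(midnight_census)
    churn_adjusted = rn_required(
        midnight_census + churn_weight * (admissions + discharges + transfers))
    print(census_only, churn_adjusted)  # census alone understates the requirement
    ```

    With these invented numbers, adjusting for patient churn raises the requirement from five to seven nurses on the shift, illustrating why census-only staffing can understate workload.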

  18. Implications of lower risk thresholds for statin treatment in primary prevention: analysis of CPRD and simulation modelling of annual cholesterol monitoring.

    PubMed

    McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael

    2015-01-01

    To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and whether there are harms. Copyright © 2014. Published by Elsevier Inc.

  19. The Policy Implications of the Cost Structure of Home Health Agencies

    PubMed Central

    Mukamel, Dana B; Fortinsky, Richard H; White, Alan; Harrington, Charlene; White, Laura M; Ngo-Metzger, Quyen

    2014-01-01

    Purpose To examine the cost structure of home health agencies by estimating an empirical cost function for those that are Medicare-certified, ten years following the implementation of prospective payment. Design and Methods 2010 national Medicare cost report data for certified home health agencies were merged with case-mix information from the Outcome and Assessment Information Set (OASIS). We estimated a fully interacted (by tax status) hybrid cost function for 7,064 agencies and calculated marginal costs as percent of total costs for all variables. Results The home health industry is dominated by for-profit agencies, which tend to be newer than the non-profit agencies and to have higher average costs per patient but lower costs per visit. For-profit agencies tend to have smaller scale operations and different cost structures, and are less likely to be affiliated with chains. Our estimates suggest diseconomies of scale, zero marginal cost for contracting with therapy workers, and a positive marginal cost for contracting with nurses, when controlling for quality. Implications Our findings suggest that efficiencies may be achieved by promoting non-profit, smaller agencies, with fewer contract nursing staff. This conclusion should be tested further in future studies that address some of the limitations of our study. PMID:24949224

  20. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology.

    PubMed

    Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj

    2017-01-01

    Audits play an important role in quality assurance programs in radiation oncology. Among various algorithms, TiGRT is a commonly used dose calculation software. This study assessed the clinical implications of the TiGRT algorithm by comparing measured doses to calculated doses delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. A nonhomogeneous quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference-class electrometer were employed throughout the audit at linear accelerator energies of 6 and 18 MV (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi CIRS phantom. In homogeneous regions and simple plans for both energies, there was good agreement between measured and treatment planning system (TPS)-calculated doses; the relative error was between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs such as lung, some errors were observed. In complex treatment plans with a wedge or shield in the beam path, the error remained within the accepted criteria; here the difference between measured and calculated dose was 2%-3%. All other differences were between 0.4% and 1%. Good consistency was observed for the same energy in the homogeneous and nonhomogeneous phantom for three-dimensional conformal fields with wedge, shield, and asymmetry using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern.

  1. Solution of Cubic Equations by Iteration Methods on a Pocket Calculator

    ERIC Educational Resources Information Center

    Bamdad, Farzad

    2004-01-01

    A method is developed to show students how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: successive approximations and bisection.
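    The two iteration methods named in the record can be sketched in a few lines; the sample cubic x³ − 2x − 5 = 0 (a classic test equation) and the tolerances are illustrative choices, not taken from the article.

    ```python
    def bisection(f, lo, hi, tol=1e-10):
        """Halve the bracketing interval until it is smaller than tol."""
        assert f(lo) * f(hi) < 0, "root must be bracketed"
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    def successive_approximation(g, x0, tol=1e-10, max_iter=1000):
        """Iterate x = g(x) until successive values agree to within tol."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("did not converge")

    f = lambda x: x**3 - 2*x - 5
    # Rearranged as x = (2x + 5)^(1/3), a contraction near the root.
    g = lambda x: (2*x + 5) ** (1 / 3)

    root_bis = bisection(f, 2.0, 3.0)
    root_fix = successive_approximation(g, 2.0)
    print(round(root_bis, 6), round(root_fix, 6))  # both ≈ 2.094551
    ```

    The fixed-point rearrangement matters: iterating the naive form x = (x³ − 5)/2 diverges, while the cube-root form contracts toward the root.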

  2. Comparison of Methodologies Using Estimated or Measured Values of Total Corneal Astigmatism for Toric Intraocular Lens Power Calculation.

    PubMed

    Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G

    2017-12-01

    To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, Barrett toric calculator and Abulafia-Koch formula) with that of methods that consider real measures obtained using Scheimpflug imaging: a software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and a ray tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), predicted residual astigmatism by each calculation method was compared with manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error than methods considering real measures (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with Holladay calculator). For methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]. Copyright 2017, SLACK Incorporated.
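    Vector analysis of astigmatic prediction error, as used in the record above, is conventionally done in a double-angle representation: a cylinder of magnitude M at axis θ maps to the vector (M cos 2θ, M sin 2θ), and the centroid prediction error is the mean of the per-eye error vectors. A minimal sketch with invented example eyes (not data from the study):

    ```python
    import math

    def to_double_angle(mag, axis_deg):
        """Represent a cylinder (magnitude in D, axis in degrees) as a double-angle vector."""
        a = math.radians(2 * axis_deg)
        return (mag * math.cos(a), mag * math.sin(a))

    def from_double_angle(x, y):
        """Convert back to (magnitude in D, axis in 0-180 degrees)."""
        mag = math.hypot(x, y)
        axis = (math.degrees(math.atan2(y, x)) / 2) % 180
        return mag, axis

    def prediction_error(predicted, manifest):
        """Vector difference between predicted and manifest residual astigmatism."""
        px, py = to_double_angle(*predicted)
        mx, my = to_double_angle(*manifest)
        return (px - mx, py - my)

    # Illustrative eyes: (magnitude D, axis deg) pairs -- invented numbers.
    errors = [prediction_error((0.50, 90), (0.75, 85)),
              prediction_error((0.25, 180), (0.50, 175))]
    # Centroid prediction error = mean error vector, reported as (D, axis).
    cx = sum(e[0] for e in errors) / len(errors)
    cy = sum(e[1] for e in errors) / len(errors)
    print(from_double_angle(cx, cy))
    ```

    The doubling of the axis is what makes a 0° and a 180° cylinder identical in vector space, so averaging behaves correctly across the axis wrap-around.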

  3. Useful lower limits to polarization contributions to intermolecular interactions using a minimal basis of localized orthogonal orbitals: theory and analysis of the water dimer.

    PubMed

    Azar, R Julian; Horn, Paul Richard; Sundstrom, Eric Jon; Head-Gordon, Martin

    2013-02-28

    The problem of describing the energy-lowering associated with polarization of interacting molecules is considered in the overlapping regime for self-consistent field wavefunctions. The existing approach of solving for absolutely localized molecular orbital (ALMO) coefficients that are block-diagonal in the fragments is shown based on formal grounds and practical calculations to often overestimate the strength of polarization effects. A new approach using a minimal basis of polarized orthogonal local MOs (polMOs) is developed as an alternative. The polMO basis is minimal in the sense that one polarization function is provided for each unpolarized orbital that is occupied; such an approach is exact in second-order perturbation theory. Based on formal grounds and practical calculations, the polMO approach is shown to underestimate the strength of polarization effects. In contrast to the ALMO method, however, the polMO approach yields results that are very stable to improvements in the underlying AO basis expansion. Combining the ALMO and polMO approaches allows an estimate of the range of energy-lowering due to polarization. Extensive numerical calculations on the water dimer using a large range of basis sets with Hartree-Fock theory and a variety of different density functionals illustrate the key considerations. Results are also presented for the polarization-dominated Na(+)CH4 complex. Implications for energy decomposition analysis of intermolecular interactions are discussed.

  4. A pure dipole analysis of the Gondwana apparent polar wander path: Paleogeographic implications in the evolution of Pangea

    NASA Astrophysics Data System (ADS)

    Gallo, L. C.; Tomezzoli, R. N.; Cristallini, E. O.

    2017-04-01

    The paleogeography of prebreakup Pangea at the beginning of the Atlantic spreading has been a subject of debate for the past 50 years. Reconciling this debate involves theoretical corrections that cast doubt on available data and on paleomagnetism as an effective tool for performing paleoreconstructions. This 50-year-old debate focuses specifically on magnetic remanence and its ability to correctly record the inclination of the paleomagnetic field. In this paper, a selection of paleopoles was made to find the great circles containing the paleomagnetic pole and the respective sampling site. The true dipole pole (TDP) was then calculated by intersecting these great circles, an innovative approach that effectively avoids nondipolar contributions and inclination shallowing. The great circle distance between each of these TDPs and the paleomagnetic means shows the accuracy of paleomagnetic determinations in the context of a dominantly geocentric, axial, and dipolar geomagnetic field. The TDPs calculated allowed a bootstrap analysis to be performed to further consider the flattening factor that should be applied to the sedimentary-derived paleopoles. It is argued that the application of a single theoretical correction factor for clastic sedimentary-derived records could lead to a bias in the paleolatitude calculation and therefore to incorrect paleogeographic reconstructions. The unbiased APWP makes it necessary to slide Laurentia to the west in relation to Gondwana in a B-type Pangea during the Upper Carboniferous, later evolving, during the Early Permian, to reach the final A-type Pangea configuration of the Upper Permian.
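    Intersecting great circles, as in the TDP construction described above, reduces to linear algebra on the unit sphere: each great circle is the intersection of the sphere with a plane through its centre, and two such circles meet at the (antipodal) points along the cross product of the plane normals. A minimal sketch with invented coordinates, not data from the paper:

    ```python
    import math

    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def to_cartesian(lat_deg, lon_deg):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (math.cos(lat)*math.cos(lon), math.cos(lat)*math.sin(lon), math.sin(lat))

    def great_circle_normal(p1, p2):
        """Normal of the plane containing two surface points and the sphere's centre."""
        return unit(cross(p1, p2))

    def intersect(n1, n2):
        """The two antipodal intersection points of two great circles."""
        p = unit(cross(n1, n2))
        return p, tuple(-c for c in p)

    # Two illustrative pole/site pairs (lat, lon in degrees) sharing one point,
    # so the intersection must recover that shared point (invented coordinates).
    shared = to_cartesian(-30, 60)
    gc1 = great_circle_normal(to_cartesian(10, 20), shared)
    gc2 = great_circle_normal(to_cartesian(50, -40), shared)
    p, antipode = intersect(gc1, gc2)
    ```

    With more than two circles, one would instead seek the point minimising the sum of angular distances to all circles, e.g. by least squares on the normals.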

  5. Prognostic implications of 62Cu-diacetyl-bis (N4-methylthiosemicarbazone) PET/CT in patients with glioma.

    PubMed

    Toriihara, Akira; Ohtake, Makoto; Tateishi, Kensuke; Hino-Shishikura, Ayako; Yoneyama, Tomohiro; Kitazume, Yoshio; Inoue, Tomio; Kawahara, Nobutaka; Tateishi, Ukihide

    2018-05-01

    The potential of positron emission tomography/computed tomography using 62Cu-diacetyl-bis(N4-methylthiosemicarbazone) (62Cu-ATSM PET/CT), originally developed as a hypoxic tracer, to predict therapeutic resistance and prognosis has been reported in various cancers. Our purpose was to investigate the prognostic value of 62Cu-ATSM PET/CT in patients with glioma, compared to PET/CT using 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG). 56 patients with glioma of World Health Organization grade 2-4 were enrolled. All participants had undergone both 62Cu-ATSM PET/CT and 18F-FDG PET/CT within a mean of 33.5 days prior to treatment. Maximum standardized uptake value and tumor/background ratio were calculated within areas of increased radiotracer uptake. Prognostic significance for progression-free survival and overall survival was assessed by the log-rank test and Cox's proportional hazards model. Disease progression and death were confirmed in 37 and 27 patients during follow-up, respectively. In univariate analysis, there were significant differences in both progression-free survival and overall survival with respect to age, tumor grade, history of chemoradiotherapy, and the maximum standardized uptake value and tumor/background ratio calculated using 62Cu-ATSM PET/CT. Multivariate analysis revealed that the maximum standardized uptake value calculated using 62Cu-ATSM PET/CT was an independent predictor of both progression-free survival and overall survival (p < 0.05). In a subgroup analysis of patients with grade 4 glioma, only the maximum standardized uptake value calculated using 62Cu-ATSM PET/CT showed a significant difference in progression-free survival (p < 0.05). 62Cu-ATSM PET/CT is a more promising imaging method for predicting the prognosis of patients with glioma than 18F-FDG PET/CT.

  6. Poster - 07: Investigations of the Advanced Collapsed-cone Engine for HDR Brachytherapy Scalp Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cawston-Grant, Brie; Morrison, Hali; Sloboda, Ron

    Purpose: To present an investigation of the Advanced Collapsed-cone Engine (ACE) in Oncentra® Brachy (OcB) v4.5 using a tissue equivalent phantom modeling scalp brachytherapy (BT) treatments. Methods: A slab phantom modeling the skin, skull, brain and mold was used. A dose of 400cGy was prescribed to just above the skull layer using TG-43 and was delivered using an HDR afterloader. Measurements were made using Gafchromic™ EBT3 film at four depths within the phantom. The TG-43 planned and film measured doses were compared to the standard (sACE) and high (hACE) accuracy ACE options in OcB between the surface and below the skull. Results: The average difference between the TG-43 calculated and film measured doses was −11.25±3.38% when there was no air gap between the mold and skin; sACE and hACE doses were on average lower than TG-43 calculated doses by 3.41±0.03% and 2.45±0.03%, respectively. With a 3mm air gap between the mold and skin, the difference between the TG-43 calculated and measured doses was −8.28±5.76%; sACE and hACE calculations yielded average doses 1.87±0.03% and 1.78±0.04% greater than TG-43, respectively. Conclusions: TG-43, sACE, and hACE were found to overestimate doses below the skull layer compared to film. With a 3mm air gap between the mold and skin, sACE and hACE more accurately predicted the film dose to the skin surface than TG-43. More clinical variations and their implications are currently being investigated.

  7. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  8. The decay of highly excited open strings

    NASA Technical Reports Server (NTRS)

    Mitchell, D.; Turok, N.; Wilkinson, R.; Jetzer, P.

    1988-01-01

    The decay rates of leading edge Regge trajectory states are calculated for very high level number in open bosonic string theories, ignoring tachyon final states. The optical theorem simplifies the analysis while enabling identification of the different mass level decay channels. The main result is that (in four dimensions) the greatest single channel is the emission of a single photon and a state of the next mass level down. A simple asymptotic formula for arbitrarily high level number is given for this process. Also calculated is the total decay rate exactly up to N=100. It shows little variation over this range but appears to decrease for larger N. The formalism is checked in examples and the decay rate of the first excited level calculated for open superstring theories. The calculation may also have implications for high spin meson resonances.

  9. Effects of self-graphing and goal setting on the math fact fluency of students with disabilities.

    PubMed

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure.

  10. Practical and Scholarly Implications of Information Behaviour Research: A Pilot Study of Research Literature

    ERIC Educational Resources Information Center

    Koh, Kyungwon; Rubenstein, Ellen; White, Kelvin

    2015-01-01

    Introduction: This pilot study examined how current information behaviour research addresses the implications and potential impacts of its findings. The goal was to understand what implications and contributions the field has made and how effectively authors communicate implications of their findings. Methods: We conducted a content analysis of 30…

  11. EuroFIR Guideline on calculation of nutrient content of foods for food business operators.

    PubMed

    Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul

    2018-01-01

    This paper presents a Guideline on determining the nutrient content of foods by calculation methods for food business operators, together with data on the compliance between calculated values and analytically determined values. In the EU, calculation methods are legally valid for determining the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for the use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating the nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and its use in the Czech Republic are described, and future application to other Member States is discussed. The limitations of calculation methods and the importance of high-quality food composition data are also discussed. Copyright © 2017. Published by Elsevier Ltd.
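    The general recipe-calculation principle (summing ingredient contributions, applying nutrient retention factors, and correcting for cooking yield) can be sketched as follows; the ingredient values and factors are illustrative placeholders, not figures from the EuroFIR Guideline.

    ```python
    def recipe_nutrient_per_100g(ingredients, yield_factor):
        """ingredients: list of (grams, nutrient_per_100g, retention_factor)."""
        total_nutrient = sum(g * n / 100 * r for g, n, r in ingredients)
        raw_weight = sum(g for g, _, _ in ingredients)
        cooked_weight = raw_weight * yield_factor  # e.g. water loss during cooking
        return total_nutrient / cooked_weight * 100

    # Vitamin C (mg/100 g) in a hypothetical potato dish: 500 g potatoes at
    # 17 mg/100 g with 75% retention, 100 g onion at 7 mg/100 g with 80%
    # retention; the dish loses 20% of its weight on cooking.
    ingredients = [(500, 17.0, 0.75), (100, 7.0, 0.80)]
    print(round(recipe_nutrient_per_100g(ingredients, 0.80), 2))  # → 14.45
    ```

    Note that the retention factor acts on the nutrient while the yield factor acts on the dish weight; conflating the two is a common source of error in recipe calculations.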

  12. Propellant Mass Fraction Calculation Methodology for Launch Vehicles and Application to Ares Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
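    The contrast between the pmf definitions discussed above can be sketched as follows; the stage masses are invented placeholders, not Ares data.

    ```python
    def pmf_basic(m_prop_total, m_dry):
        """Fundamental definition: total propellant over total stage mass."""
        return m_prop_total / (m_prop_total + m_dry)

    def pmf_usable(m_prop_total, m_dry, m_residuals, m_reserves):
        """Count only usable propellant; residuals and reserves are unusable."""
        m_usable = m_prop_total - m_residuals - m_reserves
        return m_usable / (m_prop_total + m_dry)

    def pmf_excl_engines(m_prop_total, m_dry, m_engines):
        """Exclude installed engine mass from the dry mass."""
        return m_prop_total / (m_prop_total + m_dry - m_engines)

    m_prop, m_dry = 250_000.0, 25_000.0  # kg, illustrative stage masses
    print(pmf_basic(m_prop, m_dry))                        # ≈ 0.909
    print(pmf_usable(m_prop, m_dry, 2_000.0, 3_000.0))     # slightly lower
    print(pmf_excl_engines(m_prop, m_dry, 8_000.0))        # slightly higher
    ```

    As the three outputs show, the same vehicle yields different pmf values depending on the convention, which is why cross-vehicle comparisons must state the calculation method.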

  13. 78 FR 64030 - Monitoring Criteria and Methods To Calculate Occupational Radiation Doses

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-25

    ... NUCLEAR REGULATORY COMMISSION [NRC-2013-0234] Monitoring Criteria and Methods To Calculate... regulatory guide (DG), DG-8031, ``Monitoring Criteria and Methods to Calculate Occupational Radiation Doses.'' This guide describes methods that the NRC staff considers acceptable for licensees to use to determine...

  14. Proposed method to calculate FRMAC intervention levels for the assessment of radiologically contaminated food and comparison of the proposed method to the U.S. FDA's method to calculate derived intervention levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Terrence D.; Hunt, Brian D.

    This report reviews the method recommended by the U.S. Food and Drug Administration for calculating Derived Intervention Levels (DILs) and identifies potential improvements to the DIL calculation method to support more accurate ingestion pathway analyses and protective action decisions. Further, this report proposes an alternate method for use by the Federal Radiological Monitoring and Assessment Center (FRMAC) to calculate FRMAC Intervention Levels (FILs). The default approach of the FRMAC during an emergency response is to use the FDA recommended methods. However, FRMAC recommends implementing the FIL method because we believe it to be more technically accurate. FRMAC will only implement the FIL method when approved by the FDA representative on the Federal Advisory Team for Environment, Food, and Health.

  15. Favorite Demonstrations: Simple "Jack-in-the-Box" Demonstrations for Physical Science Courses.

    ERIC Educational Resources Information Center

    Cole, Theodor C. H.

    1993-01-01

    The demonstrations presented in this article relate to everyday life, address interdisciplinary aspects, and have implications for the life sciences. Topics of the demonstrations are electricity calculations, astronomy, electrolysis of water, ester synthesis from butyric acid and pentanol, catalysis, and minerals. (PR)

  16. Revisiting Photoemission and Inverse Photoemission Spectra of Nickel Oxide from First Principles: Implications for Solar Energy Conversion

    PubMed Central

    2015-01-01

    We use two different ab initio quantum mechanics methods, complete active space self-consistent field theory applied to electrostatically embedded clusters and periodic many-body G0W0 calculations, to reanalyze the states formed in nickel(II) oxide upon electron addition and ionization. In agreement with interpretations of earlier measurements, we find that the valence and conduction band edges consist of oxygen and nickel states, respectively. However, contrary to conventional wisdom, we find that the oxygen states of the valence band edge are localized whereas the nickel states at the conduction band edge are delocalized. We argue that these characteristics may lead to low electron–hole recombination and relatively efficient electron transport, which, coupled with band gap engineering, could produce higher solar energy conversion efficiency compared to that of other transition-metal oxides. Both methods find a photoemission/inverse-photoemission gap of 3.6–3.9 eV, in good agreement with the experimental range, lending credence to our analysis of the electronic structure of NiO. PMID:24689856

  17. Enhanced carrier mobility of multilayer MoS2 thin-film transistors by Al2O3 encapsulation

    NASA Astrophysics Data System (ADS)

    Kim, Seong Yeoul; Park, Seonyoung; Choi, Woong

    2016-10-01

    We report the effect of Al2O3 encapsulation on the carrier mobility and contact resistance of multilayer MoS2 thin-film transistors by statistically investigating 70 devices with a SiO2 bottom-gate dielectric. After Al2O3 encapsulation by atomic layer deposition, calculations based on the Y-function method indicate that the enhancement of carrier mobility from 24.3 cm2 V-1 s-1 to 41.2 cm2 V-1 s-1 occurs independently of the reduction in contact resistance from 276 kΩ·μm to 118 kΩ·μm. Furthermore, contrary to previous literature, we observe a negligible effect of thermal annealing on contact resistance and carrier mobility during the atomic layer deposition of Al2O3. These results demonstrate that Al2O3 encapsulation is a useful method for improving the carrier mobility of multilayer MoS2 transistors, with important implications for the application of MoS2 and other two-dimensional materials in high-performance transistors.
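    The Y-function method mentioned above extracts the low-field mobility from Y = I_D/√g_m, which is linear in gate voltage in the linear operating regime and insensitive to contact (series) resistance. A minimal sketch on synthetic ideal data, with invented device parameters (not values from the paper):

    ```python
    import numpy as np

    W_over_L, C_ox, V_DS = 5.0, 1.2e-8, 0.1  # geometry, F/cm^2, V -- illustrative
    mu0, V_th = 40.0, 1.0                    # "true" values used to make the data

    V_G = np.linspace(2.0, 10.0, 50)
    # Ideal linear-regime drain current: I_D = mu0*Cox*(W/L)*V_DS*(V_G - V_th)
    I_D = mu0 * C_ox * W_over_L * V_DS * (V_G - V_th)

    g_m = np.gradient(I_D, V_G)              # transconductance dI_D/dV_G
    Y = I_D / np.sqrt(g_m)                   # Y-function, linear in V_G
    slope = np.polyfit(V_G, Y, 1)[0]         # slope = sqrt(mu0*Cox*(W/L)*V_DS)
    mu_extracted = slope**2 / (C_ox * W_over_L * V_DS)
    print(round(mu_extracted, 1))            # ≈ 40.0, recovering mu0
    ```

    On real device data the fit would be restricted to the strong-inversion region, where the Y-vs-V_G plot is actually linear.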

  18. Ab initio calculations and kinetic modeling of thermal conversion of methyl chloride: implications for gasification of biomass.

    PubMed

    Singla, Mallika; Rasmussen, Morten Lund; Hashemi, Hamid; Wu, Hao; Glarborg, Peter; Pelucchi, Matteo; Faravelli, Tiziano; Marshall, Paul

    2018-04-25

    Limitations in current hot gas cleaning methods for chlorine species from biomass gasification may be a challenge for end use such as gas turbines, engines, and fuel cells, all requiring very low levels of chlorine. During devolatilization of biomass, chlorine is released partly as methyl chloride. In the present work, the thermal conversion of CH3Cl under gasification conditions was investigated. A detailed chemical kinetic model for pyrolysis and oxidation of methyl chloride was developed and validated against selected experimental data from the literature. Key reactions of CH2Cl with O2 and C2H4 for which data are scarce were studied by ab initio methods. The model was used to analyze the fate of methyl chloride in gasification processes. The results indicate that CH3Cl emissions will be negligible for most gasification technologies, but could be a concern for fluidized bed gasifiers, in particular in low-temperature gasification. The present work illustrates how ab initio theory and chemical kinetic modeling can help to resolve emission issues for thermal processes in industrial scale.

  19. Study of high-performance canonical molecular orbitals calculation for proteins

    NASA Astrophysics Data System (ADS)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2017-11-01

    The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we are engaged in research and development of high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the file and direct methods. Our method is based on Cholesky decomposition for the two-electron integral calculation and a modified grid-free method for the pure-XC term evaluation. Using the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be evaluated by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
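    The Cholesky-based evaluation of the Coulomb term reduces to dense linear algebra. The idea can be sketched on a toy symmetric positive-definite stand-in for the two-electron integral matrix; the dimensions and data below are illustrative, not ProteinDF's actual implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6  # toy size of the composite index (pq flattened)

    # Symmetric positive-definite stand-in for the ERI matrix V[pq, rs]
    A = rng.standard_normal((n, n))
    V = A @ A.T + n * np.eye(n)

    L = np.linalg.cholesky(V)   # V = L @ L.T, columns of L are "Cholesky vectors"
    D = rng.standard_normal(n)  # flattened density-matrix stand-in

    # Coulomb-like contraction J_pq = sum_rs V[pq, rs] * D[rs], two ways:
    J_direct = V @ D            # direct contraction with the full matrix
    J_chol = L @ (L.T @ D)      # via Cholesky vectors: sum_k L[:,k] * (L[:,k] . D)

    assert np.allclose(J_direct, J_chol)
    ```

    With a truncated (low-rank) Cholesky factor the same two BLAS-friendly products give an approximate Coulomb term at reduced cost, which is the point of casting the SCF terms as linear algebra.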

  20. Periodicity of microfilariae of human filariasis analysed by a trigonometric method (Aikat and Das).

    PubMed

    Tanaka, H

    1981-04-01

    The microfilarial periodicity of human filariae was characterized statistically by fitting the observed change of microfilaria (mf) counts to the formula of a simple harmonic wave using two parameters, the peak hour (K) and periodicity index (D) (Sasa & Tanaka, 1972, 1974). Later, Aikat and Das (1976) proposed a simple calculation method using trigonometry (A-D method) to determine the peak hour (K) and periodicity index (P). All data on microfilarial periodicity analysed previously by the method of Sasa and Tanaka (S-T method) were recalculated by the A-D method in the present study to evaluate the latter method. The results of the calculations showed that P was not proportional to D, and the ratios of P/D were mostly smaller than expected, especially when P or D was small in less periodic forms. The peak hour calculated by the A-D method did not differ much from that calculated by the S-T method. Goodness of fit was improved slightly by the A-D method in two-thirds of the analysed data. The classification of human filariae with respect to the type of periodicity was, however, changed little by the results calculated by the A-D method.
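    The harmonic-wave fit underlying both methods can be sketched by linear least squares. The functional form below, m(t) = M(1 + (D/100)·cos 15°(t − K)), follows the general shape of such periodicity models and is an illustrative reconstruction, not the exact S-T or A-D formulae:

    ```python
    import numpy as np

    def fit_periodicity(hours, counts):
        """Least-squares fit of mf counts to
        m(t) = M * (1 + (D/100) * cos(15 deg * (t - K))).
        Returns (M, D, K): mean count, periodicity index (%), peak hour."""
        t = np.radians(15.0 * np.asarray(hours, dtype=float))  # 24 h -> 360 deg
        y = np.asarray(counts, dtype=float)
        # Linearized model: y = a + b*cos(t) + c*sin(t)
        A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
        a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
        M = a
        D = 100.0 * np.hypot(b, c) / a                     # relative amplitude, %
        K = (np.degrees(np.arctan2(c, b)) / 15.0) % 24.0   # hour of the peak
        return M, D, K

    # Synthetic night-peaking data: M = 50, D = 80%, K = 22 h
    hrs = np.arange(24)
    cnt = 50.0 * (1 + 0.8 * np.cos(np.radians(15.0 * (hrs - 22))))
    M, D, K = fit_periodicity(hrs, cnt)
    ```

    On noise-free synthetic data the fit recovers the generating parameters exactly, which makes it a convenient check before applying either method to field counts.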

  1. Characterization and Thermodynamic Relationship of Three Polymorphs of a Xanthine Oxidase Inhibitor, Febuxostat.

    PubMed

    Patel, Jinish; Jagia, Moksh; Bansal, Arvind Kumar; Patel, Sarsvatkumar

    2015-11-01

    Febuxostat (FXT), a xanthine oxidase inhibitor, is an interesting and unique molecule, which exhibits extensive polymorphism, with over 15 polymorphic forms reported to date. The primary purpose of the study was to characterize three polymorphic forms with respect to their thermodynamic quantities and establish the thermodynamic relationships between them. The polymorphs were characterized by thermal and powder X-ray diffraction methods. Three different methods were used to calculate the transition temperatures (Ttr) and thereby their thermodynamic relationships. While the first and second methods used calorimetric data (melting point and heat of fusion), the third method employed a configurational free energy phase diagram. The onset melting points of the three polymorphic forms were found to be 482.89 ± 0.37 K for form I, 476.30 ± 1.21 K for form II, and 474.19 ± 0.11 K for form III. Moreover, the powder X-ray diffraction pattern of each form was also unique. The polymorphic pairs of forms I and II and of forms I and III were found to be enantiotropic, whereas the pair of forms II and III was monotropic. Besides the relative thermodynamic aspects (free energy differences, enthalpy and entropy contributions) obtained using the different methods, the pharmaceutical implications and phase transformation aspects have also been covered. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
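    A common calorimetric estimate of the transition temperature, of the kind the first method uses, assumes temperature-independent transition enthalpy and entropy. The sketch below uses the melting points from the abstract, but the heats of fusion are hypothetical; the paper's exact procedure may differ:

    ```python
    def transition_temperature(tm1, dhf1, tm2, dhf2):
        """Estimate the solid-solid transition temperature of two polymorphs
        from melting points (K) and heats of fusion (kJ/mol), assuming
        temperature-independent transition enthalpy and entropy:
            dH_tr ~ dHf1 - dHf2,  dS_tr ~ dHf1/tm1 - dHf2/tm2,
            T_tr = dH_tr / dS_tr  (the temperature where dG_tr = 0)."""
        dh_tr = dhf1 - dhf2
        ds_tr = dhf1 / tm1 - dhf2 / tm2
        return dh_tr / ds_tr

    def relationship(tm1, dhf1, tm2, dhf2):
        """Enantiotropic if the estimated T_tr lies below both melting points
        (the forms can interconvert before melting); otherwise monotropic."""
        ttr = transition_temperature(tm1, dhf1, tm2, dhf2)
        return "enantiotropic" if ttr < min(tm1, tm2) else "monotropic"

    # Melting points from the abstract; heats of fusion are made up
    ttr = transition_temperature(482.89, 25.0, 476.30, 30.0)
    rel = relationship(482.89, 25.0, 476.30, 30.0)
    ```

    With these illustrative inputs the higher-melting form has the lower heat of fusion, so the estimated Ttr falls below both melting points and the pair classifies as enantiotropic, consistent with the heat-of-fusion rule.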

  2. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  3. Critical Analysis of Existing Recyclability Assessment Methods for New Products in Order to Define a Reference Method

    NASA Astrophysics Data System (ADS)

    Maris, E.; Froelich, D.

    The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability. Several scientific studies propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory: only a mass indicator is used, the calculation scope is not clearly defined, and no common data on the recycling industry exist to allow comparable calculations to be made for different products. For these reasons, it is difficult for manufacturers to have access to a method and common data for calculation purposes.

  4. Ray-tracing in three dimensions for radiation-dose calculations. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, D.R.

    1986-05-27

    This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and a three-dimensional algorithm is then built to perform the calculations in three dimensions. The use of prisms conforms easily to the obtained CT scans and provides a means of performing only two-dimensional ray-tracing while carrying out three-dimensional dose calculations. This method is already being applied and used in actual calculations.

  5. Gamow-Teller Strength Distributions for pf-shell Nuclei and its Implications in Astrophysics

    NASA Astrophysics Data System (ADS)

    Rahman, M.-U.; Nabi, J.-U.

    2009-08-01

    The pf-shell nuclei are present in abundance in the pre-supernova and supernova phases, and these nuclei are considered to play an important role in the dynamics of core-collapse supernovae. The B(GT) values are calculated for the pf-shell nuclei 55Co and 57Zn using the pn-QRPA theory. The calculated B(GT) strengths differ from earlier reported shell-model calculations; however, the results are in good agreement with the experimental data. These B(GT) strengths are used in the calculations of weak decay rates, which play a decisive role in core-collapse supernova dynamics and nucleosynthesis. Unlike previous calculations, the so-called Brink's hypothesis is not assumed in the present calculation, which leads to a more realistic estimate of the weak decay rates. The electron capture rates are calculated over a wide grid of temperatures (0.01 × 10^9 - 30 × 10^9 K) and densities (10 - 10^11 g cm^-3). Our rates are enhanced compared to the reported shell-model rates. This enhancement is attributed partly to the liberty of selecting a huge model space, allowing consideration of many more excited states in the present electron capture rate calculations.

  6. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to the lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. In contrast, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement of the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.

  7. SU-E-T-645: Dose Enhancement to Cell Nucleus Due to Hard Collisions of Protons with Electrons in Gold Nanospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eley, J; Krishnan, S

    2014-06-15

    Purpose: The purpose of this study was to investigate the theoretical dose enhancement to a cell nucleus due to increased fluence of secondary electrons when gold nanospheres are present in the cytoplasm during proton therapy. Methods: We modeled the irradiation of prostate cancer cells using protons of variable energies when 10,000 gold nanoparticles, each with a radius of 10 nm, were randomly distributed in the cytoplasm. Using simple analytical equations, we calculated the increased mean dose to the cell nucleus due to secondary electrons produced by hard collisions of 0.1, 1, 10, and 100 MeV protons with orbital electrons in gold. We only counted electrons with kinetic energy higher than 1 keV. In addition to calculating the increase in the mean dose to the cell nucleus, we also calculated the increase in local dose in the "shadow," i.e., the umbra, of individual gold nanospheres due to forward-scattered electrons. Results: For proton energies of 0.1, 1, 10, and 100 MeV, we calculated increases to the mean nuclear dose of 0.15, 0.09, 0.05, and 0.04%, respectively. When we considered local dose increases in the shadows of individual gold spheres, we calculated local dose increases of 5.5, 3.2, 1.9, and 1.3%, respectively. Conclusion: We found negligible (less than 0.2%) increases in the mean dose to the cell nucleus due to electrons produced by hard collisions of protons with electrons in gold nanospheres. However, we observed increases of up to 5.5% in the local dose in the shadow of gold nanospheres. Considering the shadow radius of 10 nm, these local dose enhancements may have implications for a slightly increased probability of clustered DNA damage when gold nanoparticles are close to the nuclear membrane.

  8. Accuracy of computer-calculated and manual QRS duration assessments: Clinical implications to select candidates for cardiac resynchronization therapy.

    PubMed

    De Pooter, Jan; El Haddad, Milad; Stroobandt, Roland; De Buyzere, Marc; Timmermans, Frank

    2017-06-01

    QRS duration (QRSD) plays a key role in the field of cardiac resynchronization therapy (CRT). Computer-calculated QRSD assessments are widely used; however, inter-manufacturer differences have not been investigated in CRT candidates. QRSD was assessed in 377 digitally stored ECGs: 139 narrow QRS, 140 LBBB and 98 ventricular paced ECGs. Manual QRSD was measured as global QRSD, using digital calipers, by two independent observers. Computer-calculated QRSD was assessed by Marquette 12SL (GE Healthcare, Waukesha, WI, USA) and SEMA3 (Schiller, Baar, Switzerland). Inter-manufacturer differences of computer-calculated QRSD assessments vary among different QRS morphologies: narrow QRSD: 4 [2-9] ms (median [IQR]), p=0.010; LBBB QRSD: 7 [2-10] ms, p=0.003 and paced QRSD: 13 [6-18] ms, p=0.007. Interobserver differences of manual QRSD assessments measured: narrow QRSD: 4 [2-6] ms, p=non-significant; LBBB QRSD: 6 [3-12] ms, p=0.006; paced QRSD: 8 [4-18] ms, p=0.001. In LBBB ECGs, intraclass correlation coefficients (ICCs) were comparable for inter-manufacturer and interobserver agreement (ICC 0.830 versus 0.837). When assessing paced QRSD, manual measurements showed higher ICC compared to inter-manufacturer agreement (ICC 0.902 versus 0.776). Using guideline cutoffs of 130 ms, up to 15% of the LBBB ECGs would be misclassified as <130 ms or ≥130 ms by at least one method. Using a cutoff of 150 ms, this number increases to 33% of ECGs being misclassified. However, by combining LBBB-morphology and QRSD, the number of misclassified ECGs can be decreased by half. Inter-manufacturer differences in computer-calculated QRSD assessments are significant and may compromise adequate selection of individual CRT candidates when using QRSD as the sole parameter. Paced QRSD should preferentially be assessed by manual QRSD measurements. Copyright © 2017 Elsevier B.V. All rights reserved.
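    The misclassification figure is simply the fraction of ECGs that land on opposite sides of the cutoff under two measurement methods; a minimal sketch, with made-up QRSD values for illustration:

    ```python
    def misclassified(qrsd_a, qrsd_b, cutoff):
        """Fraction of ECGs classified differently (< cutoff vs >= cutoff)
        by two QRS-duration measurement methods (values in ms)."""
        disagree = sum((a < cutoff) != (b < cutoff)
                       for a, b in zip(qrsd_a, qrsd_b))
        return disagree / len(qrsd_a)

    # Hypothetical paired measurements (ms) from two methods
    method_a = [125, 140, 155, 128]
    method_b = [132, 138, 149, 127]
    frac = misclassified(method_a, method_b, 130)  # only the first pair straddles 130 ms
    ```

    Running the same comparison at 130 ms and 150 ms cutoffs on real paired measurements is how the 15% and 33% figures above would be reproduced.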

  9. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Methods for calculating the vergence of an astigmatic ray bundle in an optical system that contains a freeform surface

    NASA Astrophysics Data System (ADS)

    Shirayanagi, Moriyasu

    2016-10-01

    A method using the generalized Coddington equations enables calculating the vergence of an astigmatic ray bundle in the vicinity of a skew ray in an optical system containing a freeform surface. This method requires time-consuming calculations, however, so there is still room for increasing the calculation speed. In addition, it cannot be applied to optical systems containing a medium with a gradient index. We therefore propose two new calculation methods in this paper. The first method, using differential ray tracing, shortens computation time by using simpler algorithms than those used by conventional methods. The second method, using proximate rays, employs only the ray data obtained from the rays exiting an optical system; it can therefore be applied to an optical system that contains a medium with a gradient index. We show some sample applications of these methods in the field of ophthalmic optics.

  11. Learning Profiles: The Learning Crisis Is Not (Mostly) about Enrollment

    ERIC Educational Resources Information Center

    Sandefur, Justin; Pritchett, Lant; Beatty, Amanda

    2016-01-01

    The differential patterns of grade progression have direct implications for the calculation of learning profiles. Researchers measure learning in primary school using survey data on reading and math skills of a nationally representative, population-based sample of children in India, Pakistan, Kenya, Tanzania, and Uganda. Research demonstrates that…

  12. When Do Girls Prefer Football to Fashion? An Analysis of Female Underachievement in Relation to "Realistic" Mathematics Context.

    ERIC Educational Resources Information Center

    Boaler, Jo

    1994-01-01

    Reports on a study of the move away from abstract calculations toward "mathematics in context" among 50 British female secondary school students. Discusses implications of findings in relation to reported female underachievement and disinterest in school mathematics. (CFR)

  13. Leadership Competencies of Tennessee Extension Agents: Implications for Professional Development

    ERIC Educational Resources Information Center

    Hall, John L.; Broyles, Thomas W.

    2016-01-01

    The study's purpose was to determine Extension agents' (n= 111) perceived level of importance, knowledge, and training needs for leadership skills. Mean Weighted Discrepancy Scores were calculated to determine training needs. Participants' perceived responses were average to above average importance for all skills; however, the participants'…

  14. The comparison of fossil carbon fraction and greenhouse gas emissions through an analysis of exhaust gases from urban solid waste incineration facilities.

    PubMed

    Kim, Seungjin; Kang, Seongmin; Lee, Jeongwoo; Lee, Seehyung; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In this study, in order to calculate accurately the greenhouse gas emissions of urban solid waste incineration facilities, which are major waste incineration facilities, and to identify the problems likely to occur in doing so, emissions were calculated by classifying calculation methods into three types. For the comparison of calculation methods, the waste characteristics ratio, the dry substance content by waste characteristics, the carbon content in dry substance, and the ¹²C content were analyzed; in particular, the CO2 concentration in incineration gases and the ¹²C content were analyzed together. In this study, three types of calculation methods were constructed from the assay values, and the emissions of the urban solid waste incineration facilities calculated with each method were then compared. As a result of the comparison, with Calculation Method A, which used the default values presented in the IPCC guidelines, greenhouse gas emissions were calculated for urban solid waste incineration facilities A and B at 244.43 ton CO2/day and 322.09 ton CO2/day, respectively. Hence, it differed considerably from Calculation Methods B and C, which used the assay values of this study. It is determined that this was because the default value presented by the IPCC, as a world average, could not reflect the characteristics of the urban solid waste incineration facilities. Calculation Method B indicated 163.31 ton CO2/day and 230.34 ton CO2/day, respectively, for urban solid waste incineration facilities A and B; Calculation Method C indicated 151.79 ton CO2/day and 218.99 ton CO2/day, respectively. This study compares greenhouse gas emissions calculated using the ¹²C content default value provided by the IPCC (Intergovernmental Panel on Climate Change) with emissions calculated using the ¹²C content and waste assay values that reflect the characteristics of the target urban solid waste incineration facilities. Also, the concentration and ¹²C content were determined by directly collecting incineration gases from the target facilities, and the greenhouse gas emissions obtained through this survey were compared with those calculated using the previously determined assay values of the solid waste.
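    Default-value methods of this kind follow the general IPCC-style equation for fossil CO2 from waste incineration, E = MSW · Σ(WF · dm · CF · FCF · OF) · 44/12. A minimal sketch with made-up inputs (not the facilities' actual values):

    ```python
    def co2_from_incineration(msw_tonnes, streams):
        """Fossil CO2 (tonnes) from waste incineration, IPCC-style:
            E = MSW * sum(WF * dm * CF * FCF * OF) * 44/12
        streams: list of (WF waste-type fraction, dm dry-matter content,
        CF carbon fraction of dry matter, FCF fossil carbon fraction,
        OF oxidation factor), all dimensionless; 44/12 converts C to CO2."""
        return msw_tonnes * sum(wf * dm * cf * fcf * of
                                for wf, dm, cf, fcf, of in streams) * 44.0 / 12.0

    # One hypothetical waste stream: 100 t of waste, 50% dry matter,
    # 40% carbon in dry matter, 50% fossil carbon, full oxidation
    e = co2_from_incineration(100.0, [(1.0, 0.5, 0.4, 0.5, 1.0)])
    ```

    Substituting facility-specific assay values for the default dm, CF, and FCF factors is exactly the difference between Calculation Method A and Methods B/C in the abstract.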

  15. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software," which gives the options to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MDRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters, and a good system for storing the raw and processed data for future analysis.
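    Two of the serum-creatinine estimates mentioned take simple closed forms; a sketch of their textbook versions (this is not the paper's code, and units are assumed to be age in years, weight in kg, height in cm, creatinine in mg/dL):

    ```python
    def cockcroft_gault(age_y, weight_kg, scr_mg_dl, female=False):
        """Cockcroft-Gault creatinine clearance estimate (mL/min):
        CrCl = (140 - age) * weight / (72 * SCr), * 0.85 for females."""
        crcl = (140 - age_y) * weight_kg / (72.0 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    def schwartz_bedside(height_cm, scr_mg_dl, k=0.413):
        """Bedside Schwartz pediatric GFR estimate (mL/min/1.73 m^2):
        GFR = k * height / SCr, with k = 0.413."""
        return k * height_cm / scr_mg_dl

    gfr_adult = cockcroft_gault(40, 72, 1.0)          # mL/min
    gfr_child = schwartz_bedside(120, 0.5)            # mL/min/1.73 m^2
    ```

    Wrapping such formulas behind a form is essentially what replaces the online calculators the abstract mentions; the plasma-sampling (Russell) calculation would be a separate routine.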

  16. The impact of surgeon choice on the cost of performing laparoscopic appendectomy.

    PubMed

    Chu, Thomas; Chandhoke, Ryan A; Smith, Paul C; Schwaitzberg, Steven D

    2011-04-01

    While laparoscopic appendectomy (LA) can be performed using a myriad of techniques, the cost of each method varies. The purpose of this study is to analyze the effects of surgeon choice of technique on the cost of key steps in LA. Surgeon operative notes, hospital invoice lists, and surgeon instrumentation preference sheets were obtained for all LA cases in 2008 at Cambridge Health Alliance (CHA). Only cases (N = 89) performed by full-time staff general surgeons (N = 8) were analyzed. Disposable costs were calculated for the following components of LA: port access, mesoappendix division, and management of the appendiceal stump. The actual cost of each disposable was determined based on the hospital's materials management database. Actual hospital reimbursements for LA in 2008 were obtained for all payers and compared with the disposable cost per case. Disposable cost per case for the three portions analyzed was calculated for 126 theoretical models and found to range from US $81 to US $873. The surgeon with the most cost-effective preferred method (US $299) utilized one multi-use endoscopic clip applier for mesoappendix division, two commercially available pretied loops for management of the appendiceal stump, and three 5-mm trocars. The surgeon with the least cost-effective preferred method (US $552) utilized two staple firings for mesoappendix division, one staple firing for management of the appendiceal stump, and 12/5/10-mm trocars for access. The two main payers for LA patients were Medicaid and Health Safety Net, whose total hospital reimbursements ranged from US $264 to US $504 and from US $0 to US $545 per case, respectively, for patients discharged on day 1. Disposable costs frequently exceeded hospital reimbursements. Currently, there is no scientific literature that clearly illustrates a superior surgical method for performing these portions of LA in routine cases.
This study suggests that surgeons should review the cost implications of their practice and find ways to provide the most cost-effective care without jeopardizing clinical outcomes.
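    Enumerating the theoretical technique combinations (the paper reports 126 models) is a small exercise in Cartesian products over per-step choices; the step names and costs below are hypothetical, for illustration only:

    ```python
    from itertools import product

    def combination_costs(options):
        """Total disposable cost for every combination of per-step choices.
        options: dict mapping step -> dict of {technique: cost in US$}."""
        steps = list(options)
        combos = {}
        for choice in product(*(options[s].items() for s in steps)):
            label = " + ".join(name for name, _ in choice)
            combos[label] = sum(cost for _, cost in choice)
        return combos

    # Hypothetical catalogue of technique choices and disposable costs
    opts = {
        "access": {"three 5-mm trocars": 60, "12/5/10-mm trocars": 150},
        "mesoappendix": {"clip applier": 90, "stapler x2": 300},
        "stump": {"pretied loops": 50, "stapler x1": 150},
    }
    costs = combination_costs(opts)
    ```

    Sorting the resulting dictionary by value immediately exposes the cheapest and most expensive technique mixes, which is the comparison the study performs with real instrument prices.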

  17. Prediction uncertainty and data worth assessment for groundwater transport times in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.

    2018-06-01

    Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform have not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. 
Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.
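    The contrast drawn above between linear (first-order second-moment) and nonlinear Monte Carlo uncertainty can be illustrated on a toy nonlinear predictor; the model and parameter covariance below are stand-ins, not the catchment model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def predict(p):
        # Toy nonlinear prediction (stand-in for a simulated base-flow age)
        return np.exp(p[0]) + p[1] ** 2

    p_hat = np.array([0.5, 1.0])     # calibrated parameter estimate
    cov = np.diag([0.04, 0.09])      # hypothetical parameter covariance

    # Linear FOSM: var = s^T C s, with s the prediction sensitivity vector
    eps = 1e-6
    s = np.array([(predict(p_hat + eps * np.eye(2)[i]) - predict(p_hat)) / eps
                  for i in range(2)])
    var_fosm = s @ cov @ s

    # Nonlinear: Monte Carlo propagation of the same parameter uncertainty
    samples = rng.multivariate_normal(p_hat, cov, size=20000)
    var_mc = np.var([predict(p) for p in samples])
    ```

    Because the predictor is nonlinear, the two variances differ even with identical parameter uncertainty; which one is larger depends on the model's curvature, mirroring the discrepancy the study reports between FOSM and Monte Carlo intervals.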

  18. Costs of implementing integrated community case management (iCCM) in six African countries: implications for sustainability

    PubMed Central

    Daviaud, Emmanuelle; Besada, Donnela; Leon, Natalie; Rohde, Sarah; Sanders, David; Oliphant, Nicholas; Doherty, Tanya

    2017-01-01

    Background Sub–Saharan Africa still reports the highest rates of under–five mortality. Low cost, high impact interventions exist, however poor access remains a challenge. Integrated community case management (iCCM) was introduced to improve access to essential services for children 2–59 months through diagnosis, treatment and referral services by community health workers for malaria, pneumonia and diarrhea. This paper presents the results of an economic analysis of iCCM implementation in regions supported by UNICEF in six countries and assesses country–level scale–up implications. The paper focuses on costs to provider (health system and donors) to inform planning and budgeting, and does not cover cost–effectiveness. Methods The analysis combines annualised set–up costs and 1 year implementation costs to calculate incremental economic and financial costs per treatment from a provider perspective. Affordability is assessed by calculating the per capita financial cost of the program as a percentage of the public health expenditure per capita. Time and financial implications of a 30% increase in utilization were modeled. Country scale–up is modeled for all children under 5 in rural areas. Results Utilization of iCCM services varied from 0.05 treatment/y/under–five in Ethiopia to over 1 in Niger. There were between 10 and 603 treatments/community health worker (CHW)/y. Consultation cost represented between 93% and 22% of economic costs per treatment influenced by the level of utilization. Weighted economic cost per treatment ranged from US$ 13 (2015 USD) in Ghana to US$ 2 in Malawi. CHWs spent from 1 to 9 hours a week on iCCM. A 30% increase in utilization would add up to 2 hours a week, but reduce cost per treatment (by 20% in countries with low utilization). Country scale up would amount to under US$ 0.8 per capita total population (US$ 0.06–US$0.74), between 0.5% and 2% of public health expenditure per capita but 8% in Niger. 
Conclusions iCCM addresses unmet needs and impacts under-five mortality. An economic cost of under US$ 1/capita/y represents a sound investment. Utilization remains low, however, and strategies must be developed as a priority to improve demand. Continued donor support is required to sustain iCCM services and strengthen its integration within national health systems. PMID:28702174
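    The per-treatment cost in this kind of analysis combines annualized set-up costs with one year of running costs, divided by treatments delivered. A sketch using a standard annuity factor; the discount rate, useful life, and amounts below are hypothetical (the paper's exact discounting is not given in the abstract):

    ```python
    def annualized(capital_cost, useful_life_y, discount_rate):
        """Annualize a one-off set-up cost with the standard annuity factor
        A = C * r / (1 - (1 + r)**-n). Requires discount_rate > 0."""
        r = discount_rate
        return capital_cost * r / (1 - (1 + r) ** -useful_life_y)

    def cost_per_treatment(setup, life_y, rate, annual_running, treatments):
        """Economic cost per treatment: annualized set-up plus one year of
        implementation costs, divided by treatments delivered that year."""
        return (annualized(setup, life_y, rate) + annual_running) / treatments

    # Hypothetical: US$ 1000 set-up over 5 years at 3%, US$ 500/y running,
    # 100 treatments delivered in the year
    cpt = cost_per_treatment(1000.0, 5, 0.03, 500.0, 100)
    ```

    The strong dependence on the denominator is why the abstract's cost per treatment ranges from US$ 2 to US$ 13 across countries with very different utilization.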

  19. A method for calculating a real-gas two-dimensional nozzle contour including the effects of gamma

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Boney, L. R.

    1975-01-01

    A method for calculating two-dimensional inviscid nozzle contours for a real gas or an ideal gas by the method of characteristics is described. The method consists of a modification of an existing nozzle computer program. The ideal-gas nozzle contour can be calculated for any constant value of gamma. Two methods of calculating the center-line boundary values of the Mach number in the throat region are also presented. The use of these three methods of calculating the center-line Mach number distribution in the throat region can change the distance from the throat to the inflection point by a factor of 2.5. A user's guide is presented for input to the computer program for both the two-dimensional and axisymmetric nozzle contours.
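The ideal-gas branch of such a method-of-characteristics calculation is built on the Prandtl-Meyer function for a constant gamma. As an illustrative sketch (this is the standard textbook relation, not the NASA program described above):

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer angle nu(M) in radians for an ideal gas with constant gamma."""
    if M < 1.0:
        raise ValueError("Prandtl-Meyer function is defined for supersonic Mach numbers")
    g = (gamma + 1.0) / (gamma - 1.0)
    return math.sqrt(g) * math.atan(math.sqrt((M * M - 1.0) / g)) - math.atan(math.sqrt(M * M - 1.0))

# nu(2.0) for gamma = 1.4 is about 26.38 degrees
print(math.degrees(prandtl_meyer(2.0)))
```

Changing gamma shifts the characteristic net and hence the computed contour, which is why the program accepts any constant value of gamma for the ideal-gas case.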

  20. Increased lectin binding capacity of trophoblastic cells of late day 5 rat blastocysts.

    PubMed Central

    Stein, B A; Shaw, T J; Turner, V F; Murphy, C R

    1994-01-01

    The binding of lectins to the trophoblast of rat blastocysts has been studied using quantitative ultrastructural cytochemistry. Rat blastocysts from early, mid and late d 5 of gestation were stained using biotinylated lectins (Phytolacca americana [Phy am], fucose binding protein [FBP] and soybean agglutinin [SBA]) and a sensitive avidin-ferritin cytochemical method. Electron micrographs of ferritin particles along the membrane were processed to produce images for which grey scale levels could be established and the ferritin particles automatically counted. The ferritin:membrane ratio was then calculated. Increased binding with Phy am (which detects short chain oligosaccharides) was found after midday of d 5, i.e. after hatching. Binding of FBP and SBA did not alter during the period studied. The increased concentration of oligosaccharides on the blastocyst surface membrane after hatching may have important implications for blastocyst attachment to the endometrium. Images Fig. 1 Fig. 2 Fig. 3 Fig. 4 PMID:7649802

  1. Effects of mechanical properties of adhesive resin cements on stress distribution in fiber-reinforced composite adhesive fixed partial dentures.

    PubMed

    Yokoyama, Daiichiro; Shinya, Akikazu; Gomi, Harunori; Vallittu, Pekka K; Shinya, Akiyoshi

    2012-01-01

    Using finite element analysis (FEA), this study investigated the effects of the mechanical properties of adhesive resin cements on stress distributions in fiber-reinforced resin composite (FRC) adhesive fixed partial dentures (AFPDs). Two adhesive resin cements were compared: Super-Bond C&B and Panavia Fluoro Cement. The AFPD consisted of a pontic to replace a maxillary right lateral incisor and retainers on a maxillary central incisor and canine. The FRC framework was made of isotropic, continuous, unidirectional E-glass fibers. Maximum principal stresses were calculated using the finite element method (FEM). The results revealed that differences in the mechanical properties of the adhesive resin cements led to different stress distributions at the cement interfaces between the AFPD and the abutment teeth. The clinical implication of these findings is that the safety and longevity of an AFPD depend on choosing an adhesive resin cement with appropriate mechanical properties.

  2. Shielding and activation calculations around the reactor core for the MYRRHA ADS design

    NASA Astrophysics Data System (ADS)

    Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.

    2017-09-01

    In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been performed to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database of all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.

  3. Interpretation of Ferroan Anorthosite Ages and Implications for the Lunar Magma Ocean

    NASA Technical Reports Server (NTRS)

    Neal, C. R.; Draper, D. S.

    2017-01-01

    Ferroan anorthosites (FANs) are thought to have crystallized directly from the lunar magma ocean (LMO) as a flotation crust. LMO modeling suggests that such anorthosites started to form only after greater than 70 percent of the LMO had crystallized. Recent age dates for FANs call this hypothesis into question, as they span too large an age range. This implies either a younger age for the Moon-forming giant impact or a flaw in the LMO hypothesis. However, FANs are notoriously difficult to age-date using the isochron method. We have proposed a mechanism for testing the LMO hypothesis by using plagioclase trace element abundances to calculate equilibrium liquids and compare them with LMO crystallization models. We now examine the petrography of the samples that have Sm-Nd (Samarium-Neodymium) age dates (Rb-Sr (Rubidium-Strontium) isotopic systematics may have been disturbed) and propose a relative method for dating FANs.

  4. Reversible switching between pressure-induced amorphization and thermal-driven recrystallization in VO2(B) nanosheets

    PubMed Central

    Wang, Yonggang; Zhu, Jinlong; Yang, Wenge; Wen, Ting; Pravica, Michael; Liu, Zhenxian; Hou, Mingqiang; Fei, Yingwei; Kang, Lei; Lin, Zheshuai; Jin, Changqing; Zhao, Yusheng

    2016-01-01

    Pressure-induced amorphization (PIA) and thermal-driven recrystallization have been observed in many crystalline materials. However, controllable switching between PIA and a metastable phase has not yet been described, owing to the difficulty of establishing feasible switching methods that control pressure and temperature precisely. Here, we demonstrate reversible switching between PIA and thermally-driven recrystallization in VO2(B) nanosheets. Comprehensive in situ experiments are performed to establish the precise conditions of the reversible phase transformations, which are normally hindered but occur with stimuli beyond the energy barrier. Spectral evidence and theoretical calculations reveal the pressure–structure relationship and the role of flexible VOx polyhedra in the structural switching process. Anomalous resistivity evolution and the participation of spin in the reversible phase transition are observed for the first time. Our findings have significant implications for the design of phase-switching devices and the exploration of hidden amorphous materials. PMID:27426219

  5. A Density Functional Study of Atomic Hydrogen and Oxygen Chemisorptions on the (0001) Surface of Double Hexagonal Close Packed Americium

    NASA Astrophysics Data System (ADS)

    Dholabhai, Pratik; Atta-Fynn, Raymond; Ray, Asok

    2008-03-01

    Ab initio total energy calculations within the framework of density functional theory have been performed for atomic hydrogen and oxygen chemisorptions on the (0001) surface of double hexagonal close packed americium using a full-potential all-electron linearized augmented plane wave plus local orbitals (FLAPW+lo) method. The three-fold hollow hcp site was found to be the most stable site for H adsorption, while the two-fold bridge site was found to be the most stable site for O adsorption. Chemisorption energies and adsorption geometries for the different adsorption sites will be discussed, as will the changes in work function, magnetic moments, partial charges inside the muffin-tins, difference charge density distributions, and densities of states for the bare Am slab and the Am slab after adsorption of the adatom. The implications of chemisorption for Am 5f electron localization-delocalization will also be discussed.

  6. Older women's health and financial vulnerability: implications of the Medicare benefit structure.

    PubMed

    Sofaer, S; Abel, E

    1990-01-01

    Elderly women and men have different patterns of disease and utilize health services differently. This essay examines the extent to which Medicare covers the specific conditions and services associated with women and men. Elderly women experience higher rates of poverty than elderly men; consequently, elderly women are especially likely to be unable to pay high out-of-pocket costs for health care. Using a new method for simulating out-of-pocket costs, the Illness Episode Approach, the essay shows that Medicare provides better coverage for illnesses which predominate among men than for those which predominate among women. In addition, women on Medicare who supplement their basic coverage by purchasing a typical private insurance "Medigap" policy do not receive as much of an advantage from their purchases as do men. The calculations also show that the Medicare Catastrophic Coverage Act would have had little impact on the gender gap in financial vulnerability.

  7. Shock Response and Phase Transitions of MgO at Planetary Impact Conditions.

    PubMed

    Root, Seth; Shulenburger, Luke; Lemke, Raymond W; Dolan, Daniel H; Mattsson, Thomas R; Desjarlais, Michael P

    2015-11-06

    The moon-forming impact and the subsequent evolution of the proto-Earth is strongly dependent on the properties of materials at the extreme conditions generated by this violent collision. We examine the high pressure behavior of MgO, one of the dominant constituents in Earth's mantle, using high-precision, plate impact shock compression experiments performed on Sandia National Laboratories' Z Machine and extensive quantum calculations using density functional theory (DFT) and quantum Monte Carlo (QMC) methods. The combined data span from ambient conditions to 1.2 TPa and 42 000 K, showing solid-solid and solid-liquid phase boundaries. Furthermore our results indicate that under impact the solid and liquid phases coexist for more than 100 GPa, pushing complete melting to pressures in excess of 600 GPa. The high pressure required for complete shock melting has implications for a broad range of planetary collision events.

  8. Shock response and phase transitions of MgO at planetary impact conditions

    DOE PAGES

    Root, Seth; Shulenburger, Luke; Lemke, Raymond W.; ...

    2015-11-04

    The moon-forming impact and the subsequent evolution of the proto-Earth is strongly dependent on the properties of materials at the extreme conditions generated by this violent collision. We examine the high pressure behavior of MgO, one of the dominant constituents in Earth’s mantle, using high-precision, plate impact shock compression experiments performed on Sandia National Laboratories’ Z Machine and extensive quantum calculations using density functional theory (DFT) and quantum Monte Carlo (QMC) methods. The combined data span from ambient conditions to 1.2 TPa and 42,000 K, showing solid-solid and solid-liquid phase boundaries. Furthermore, our results indicate that under impact the solid and liquid phases coexist for more than 100 GPa, pushing complete melting to pressures in excess of 600 GPa. The high pressure required for complete shock melting has implications for a broad range of planetary collision events.

  9. Parents’ Management of Children’s Pain at Home after Surgery

    PubMed Central

    Vincent, Catherine; Chiappetta, Maria; Beach, Abigail; Kiolbasa, Carolyn; Latta, Kelsey; Maloney, Rebekah; Van Roeyen, Linda Sue

    2012-01-01

    Purpose We tested Home Pain Management for Children (HPMC) for effects on pain intensity, analgesics administered, satisfaction, and use of healthcare services over 3 post-discharge days. Design and Methods In this quasi-experimental study with 108 children and their parents, we used the numeric rating scale (NRS) or the Faces Pain Scale-Revised (FPS-R), calculated percentages of analgesics administered, and asked questions about expectations, satisfaction, and services. Between-group differences were tested with t-tests and ANOVA. Results After HPMC, children reported moderate pain and parents administered more analgesics on 2 study days. Parents and children were satisfied; parents used few services. Written instructions and a brief interactive session were not sufficient to change parents’ analgesic administration practices to relieve their children’s pain. Practice Implications Further research is needed to develop and test effective education interventions to facilitate relief of children’s post-operative pain. PMID:22463471

  10. Tsunamis and splay fault dynamics

    USGS Publications Warehouse

    Wendt, J.; Oglesby, D.D.; Geist, E.L.

    2009-01-01

    The geometry of a fault system can have significant effects on tsunami generation, but most tsunami models to date have not investigated the dynamic processes that determine which path rupture will take in a complex fault system. To gain insight into this problem, we use the 3D finite element method to model the dynamics of a plate boundary/splay fault system. We use the resulting ground deformation as a time-dependent boundary condition for a 2D shallow-water hydrodynamic tsunami calculation. We find that if the stress distribution is homogeneous, rupture remains on the plate boundary thrust. When a barrier is introduced along the strike of the plate boundary thrust, rupture propagates to the splay faults, and produces a significantly larger tsunami than in the homogeneous case. The results have implications for the dynamics of megathrust earthquakes, and also suggest that dynamic earthquake modeling may be a useful tool in tsunami research. Copyright 2009 by the American Geophysical Union.

  11. Data Reduction and Image Reconstruction Techniques for Non-redundant Masking

    NASA Astrophysics Data System (ADS)

    Sallum, S.; Eisner, J.

    2017-11-01

    The technique of non-redundant masking (NRM) transforms a conventional telescope into an interferometric array. In practice, this provides a much better constrained point-spread function than a filled aperture and thus higher resolution than traditional imaging methods. Here, we describe an NRM data reduction pipeline. We discuss strategies for NRM observations regarding dithering patterns and calibrator selection. We describe relevant image calibrations and use example Large Binocular Telescope data sets to show their effects on the scatter in the Fourier measurements. We also describe the various ways to calculate Fourier quantities, and discuss different calibration strategies. We present the results of image reconstructions from simulated observations where we adjust prior images, weighting schemes, and error bar estimation. We compare two imaging algorithms and discuss implications for reconstructing images from real observations. Finally, we explore how the current state of the art compares to next-generation Extremely Large Telescopes.

  12. Detecting and disentangling nonlinear structure from solar flux time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.

    1992-01-01

    Interest in solar activity has grown in the past two decades for many reasons. Most importantly for flight dynamics, solar activity changes the atmospheric density, which has important implications for spacecraft trajectory and lifetime prediction. Building upon the previously developed Rayleigh-Benard nonlinear dynamic solar model, which exhibits many dynamic behaviors observed in the Sun, this work introduces new chaotic solar forecasting techniques. Our attempt to use recently developed nonlinear chaotic techniques to model and forecast solar activity has uncovered highly entangled dynamics. Numerical techniques for decoupling additive and multiplicative white noise from deterministic dynamics are presented, and the falloff of the power spectra at high frequencies is examined as a possible means of distinguishing deterministic chaos from noise, whether spectrally white or colored. The power spectral techniques presented are less cumbersome than current methods for identifying deterministic chaos, which require more computationally intensive calculations, such as those involving Lyapunov exponents and attractor dimension.

  13. Multi-level structure in the large scale distribution of optically luminous galaxies

    NASA Astrophysics Data System (ADS)

    Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen

    1992-04-01

    Fractal dimensions in the large scale distribution of galaxies have been calculated with the method given by Wen et al. [1] Samples are taken from the CfA redshift survey [2] in the northern and southern galactic hemispheres respectively, and the results from the two regions are compared with each other. There are significant differences between the distributions in these two regions. However, our analyses do show some common features of the distributions in the two regions: all subsamples show multi-level fractal character distinctly. Combining this with results from analyses of IRAS galaxy samples and of samples from redshift surveys in pencil-beam fields [3,4], we suggest that multi-level fractal structure is most likely a general and important character of the large scale distribution of galaxies. The possible implications of this character are discussed.
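The record does not reproduce the details of Wen et al.'s method; as an illustrative stand-in, a standard box-counting estimator of the fractal dimension of a 2-D point set can be sketched as follows:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point set by box counting:
    count occupied boxes N(s) at each box size s, then fit log N vs log(1/s)."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)  # normalise to unit square
    counts = []
    for s in scales:
        boxes = np.floor(pts / s).astype(int)            # integer box indices
        counts.append(len({tuple(b) for b in boxes}))    # number of occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
pts = rng.random((20000, 2))                 # uniformly filled plane
d = box_counting_dimension(pts, [0.1, 0.05, 0.025])
print(round(d, 2))                           # close to 2 for a space-filling set
```

A clustered (fractal) galaxy sample would give a dimension well below the embedding dimension, which is the kind of signature the multi-level analysis above looks for at different scales.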

  14. Electronic and vibrational spectra of matrix isolated anthracene radical cations - Experimental and theoretical aspects

    NASA Technical Reports Server (NTRS)

    Szczepanski, Jan; Vala, Martin; Talbi, Dahbia; Parisel, Olivier; Ellinger, Yves

    1993-01-01

    The IR vibrational and visible/UV electronic absorption spectra of the anthracene cation, An(+), were studied experimentally, in argon matrices at 12 K, as well as theoretically, using ab initio calculations for the vibrational modes and enhanced semiempirical methods with configuration interaction for the electronic spectra. It was found that both approaches predicted well the observed photoelectron spectrum. The theoretical IR intensities showed some remarkable differences between neutral and ionized species (for example, the CH in-plane bending modes and CC in-plane stretching vibrations were predicted to increase by several orders of magnitude upon ionization). Likewise, estimated experimental IR intensities showed a significant increase in the cation band intensities over the neutrals. The implication of these findings for the hypothesis that polycyclic aromatic hydrocarbon cations are responsible for the unidentified IR emission bands from interstellar space is discussed.

  15. Corrections to the geometrical interpretation of bosonization

    NASA Astrophysics Data System (ADS)

    Steiner, Manfred; Marston, Brad

    2012-02-01

    Bosonization is a powerful approach for understanding certain strongly-correlated fermion systems, especially in one spatial dimension but also in higher dimensions [A. Houghton, H.-J. Kwon and J. B. Marston, Adv. in Phys. 49, 141 (2000)]. The method may be interpreted geometrically in terms of deformations of the Fermi surface, and the quantum operator that effects the deformations may be expressed in terms of a bilinear combination of fermion creation and annihilation operators. Alternatively, the deformation operator has an approximate representation in terms of coherent states of bosonic fields [A. H. Castro Neto and E. Fradkin, Phys. Rev. B 49, 10877 (1994)]. Calculation of the inner product of deformed Fermi surfaces within the two representations reveals corrections to the bosonic picture both in one and in higher spatial dimensions. We discuss the implications of the corrections for efforts to improve the usefulness of multidimensional bosonization.

  16. Reversible switching between pressure-induced amorphization and thermal-driven recrystallization in VO2(B) nanosheets.

    PubMed

    Wang, Yonggang; Zhu, Jinlong; Yang, Wenge; Wen, Ting; Pravica, Michael; Liu, Zhenxian; Hou, Mingqiang; Fei, Yingwei; Kang, Lei; Lin, Zheshuai; Jin, Changqing; Zhao, Yusheng

    2016-07-18

    Pressure-induced amorphization (PIA) and thermal-driven recrystallization have been observed in many crystalline materials. However, controllable switching between PIA and a metastable phase has not yet been described, owing to the difficulty of establishing feasible switching methods that control pressure and temperature precisely. Here, we demonstrate reversible switching between PIA and thermally-driven recrystallization in VO2(B) nanosheets. Comprehensive in situ experiments are performed to establish the precise conditions of the reversible phase transformations, which are normally hindered but occur with stimuli beyond the energy barrier. Spectral evidence and theoretical calculations reveal the pressure-structure relationship and the role of flexible VOx polyhedra in the structural switching process. Anomalous resistivity evolution and the participation of spin in the reversible phase transition are observed for the first time. Our findings have significant implications for the design of phase-switching devices and the exploration of hidden amorphous materials.

  17. Post‐mortem oxygen isotope exchange within cultured diatom silica

    PubMed Central

    Sloane, Hilary J.; Rickaby, Rosalind E.M.; Cox, Eileen J.; Leng, Melanie J.

    2017-01-01

    Rationale Potential post‐mortem alteration to the oxygen isotope composition of biogenic silica is critical to the validity of palaeoclimate reconstructions based on oxygen isotope ratios (δ18O values) from sedimentary silica. We calculate the degree of oxygen isotope alteration within freshly cultured diatom biogenic silica in response to heating and storing in the laboratory. Methods The experiments used freshly cultured diatom silica. Silica samples were either stored in water or dried at temperatures between 20 °C and 80 °C. The mass of affected oxygen and the associated silica‐water isotope fractionation during alteration were calculated by conducting parallel experiments using endmember waters with δ18O values of −6.3 to −5.9 ‰ and −36.3 to −35.0 ‰. Dehydroxylation and subsequent oxygen liberation were achieved by stepwise fluorination with BrF5. The 18O/16O ratios were measured using a ThermoFinnigan MAT 253 isotope ratio mass spectrometer. Results Significant alterations in silica δ18O values were observed, most notably an increase in the δ18O values following drying at 40–80 °C. Storage in water for 7 days between 20 and 80 °C also led to significant alteration in δ18O values. Mass balance calculations suggest that the amount of affected oxygen is positively correlated with temperature. The estimated oxygen isotope fractionation during alteration is an inverse function of temperature, consistent with the extrapolation of models for high‐temperature silica‐water oxygen isotope fractionation. Conclusions Routinely used preparatory methods may impart significant alterations to the δ18O values of biogenic silica, particularly when dealing with modern cultured or field‐collected material. The significance of such processes within natural aquatic environments is uncertain; however, there is potential that similar processes also affect sedimentary diatoms, with implications for the interpretation of biogenic silica‐hosted δ18O palaeoclimate records. 
PMID:28792631
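The endmember-water experiment above can be reduced to a two-equation mass balance: if a fraction f of the silica's oxygen exchanges with water of known δ18O and acquires that value plus a fractionation ε, running the same alteration in two isotopically distinct waters lets both unknowns be solved. A sketch under that simple mixing assumption (the δ values are hypothetical, not the paper's data):

```python
def affected_fraction_and_fractionation(d_init, d_final_1, d_water_1, d_final_2, d_water_2):
    """Two-endmember mass balance: identical silica altered in two waters of known
    delta18O. Assume a fraction f of the oxygen exchanges and ends up at
    d_water + eps; solve for f and the silica-water fractionation eps."""
    # Subtracting the two mixing equations eliminates d_init and eps:
    f = (d_final_1 - d_final_2) / (d_water_1 - d_water_2)
    # Back-substitute into the first mixing equation to recover eps:
    eps = (d_final_1 - (1.0 - f) * d_init) / f - d_water_1
    return f, eps

# Hypothetical per-mil values: initial silica +30, waters at -6 and -36.
f, eps = affected_fraction_and_fractionation(30.0, 31.0, -6.0, 28.0, -36.0)
print(f, eps)   # f = 0.1, eps = 46.0
```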

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isotalo, Aarno

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. We demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation, without a need for a reference solution.
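The tally-nuclide idea can be illustrated on a toy two-nuclide chain: appending a pseudo-nuclide whose production rate is the weighted sum w·N makes the depletion solve itself return the step integral of w·N. This sketch uses a plain matrix exponential rather than the paper's CRAM solver, and the numbers are illustrative only:

```python
import numpy as np
from scipy.linalg import expm

# Toy depletion system dN/dt = A N: N0 decays into N1, which decays out.
A = np.array([[-0.5,  0.0],
              [ 0.5, -0.1]])
w = np.array([1.0, 2.0])      # constant weights, e.g. energy per reaction
dt = 2.0                      # step length

# Tally nuclide: augment the system with dT/dt = w . N, T(0) = 0, so that
# T(dt) is the time integral of w . N over the step.
Aug = np.zeros((3, 3))
Aug[:2, :2] = A
Aug[2, :2] = w

N0 = np.array([1.0, 0.0])
y = expm(Aug * dt) @ np.append(N0, 0.0)
N_end, integral = y[:2], y[2]
print(N_end, integral, integral / dt)   # integral/dt is the step-average of w . N
```

The same augmented matrix can be fed to any depletion algorithm that works on the full burnup matrix, which is why the tallies inherit the accuracy of the chosen solver.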

  19. Prediction of Quality Change During Thawing of Frozen Tuna Meat by Numerical Calculation I

    NASA Astrophysics Data System (ADS)

    Murakami, Natsumi; Watanabe, Manabu; Suzuki, Toru

    A numerical calculation method has been developed to determine the optimum thawing method for minimizing the increase of metmyoglobin content (metMb%), an indicator of color changes in frozen tuna meat during thawing. The calculation method consists of the following two steps: a) calculation of the temperature history in each part of the frozen tuna meat during thawing by the control volume method, under the assumption of one-dimensional heat transfer, and b) calculation of metMb% based on the combination of the calculated temperature history, the Arrhenius equation, and a first-order reaction equation for the rate of increase of metMb%. Thawing experiments measuring the temperature history of frozen tuna meat were carried out under rapid-thawing and slow-thawing conditions, and the experimental data were compared with the calculated temperature histories as well as the increase of metMb%. The calculated results agreed with the experimental data. The proposed simulation method would be useful for predicting the optimum thawing conditions in terms of metMb%.
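Step b) of such a scheme can be sketched by integrating a first-order Arrhenius rate along a temperature history. The rate parameters and the synthetic thawing history below are illustrative assumptions, not the paper's values:

```python
import math

def metmb_increase(temps_K, dt_s, met0=5.0, met_inf=100.0, k0=5e8, Ea=80e3):
    """First-order approach of metMb% toward met_inf, with Arrhenius rate
    k(T) = k0 * exp(-Ea / (R T)), integrated by explicit Euler steps.
    k0 [1/s] and Ea [J/mol] are illustrative, not fitted values."""
    R = 8.314  # J/(mol K)
    met = met0
    for T in temps_K:
        k = k0 * math.exp(-Ea / (R * T))
        met += k * (met_inf - met) * dt_s
    return met

# Synthetic thawing history: warm linearly from -20 C to +5 C over one hour.
history = [253.15 + 25.0 * i / 3600.0 for i in range(3600)]
print(metmb_increase(history, 1.0))
```

Because k grows rapidly with temperature, a slow thaw that lingers near 0 °C accumulates more metMb than a rapid thaw, which is the behaviour the optimization targets.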

  20. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions will be proposed, based on a smoothing spectral function. The basic mathematical model for this method will be established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of a TIF for errors of different spatial frequencies under given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy in less computing time.
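The abstract does not reproduce the smoothing spectral function itself. One common frequency-domain proxy for a TIF's correction capability, which this sketch assumes rather than the paper's exact formulation, is the normalised magnitude of the TIF's Fourier transform: spatial frequencies where it approaches zero cannot be corrected by deconvolution-style dwell-time planning.

```python
import numpy as np

def tif_correction_capability(tif, dx):
    """Return spatial frequencies and the normalised |FFT| of a 1-D TIF profile,
    used here as an illustrative per-frequency correction-capability proxy."""
    spec = np.abs(np.fft.rfft(tif))
    spec /= spec[0]                        # normalise to the DC (bulk removal) term
    freqs = np.fft.rfftfreq(len(tif), d=dx)
    return freqs, spec

# Illustrative Gaussian TIF, sigma = 2 mm, sampled at 0.1 mm over +/- 10 mm.
x = np.arange(-10.0, 10.0, 0.1)
tif = np.exp(-x**2 / (2 * 2.0**2))
freqs, cap = tif_correction_capability(tif, 0.1)
print(cap[0], cap[-1])   # near 1 at low spatial frequency, ~0 near Nyquist
```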

  1. MSTor version 2013: A new version of the computer code for the multi-structural torsional anharmonicity, now with a coupled torsional potential

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.

    2013-08-01

    We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes and the symmetry program for determining the point group symmetry of a molecule. Additional comments: The program package includes a manual, an installation script, and input and output files for a test suite. Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s. References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I. M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.

  2. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services

    PubMed Central

    Rajabi, A; Dabiri, A

    2012-01-01

    Background Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high level of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services is not properly calculated by the tariff method when compared with the ABC method. ABC calculates cost price through suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
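The allocation steps described above can be sketched with hypothetical cost centres, drivers, and figures (none of these come from the study):

```python
# Illustrative ABC allocation: administrative centre costs are distributed to
# operational centres by cost-driver shares, then unit cost price is computed.
# All names and numbers below are hypothetical.

admin_costs = {"administration": 90_000.0, "facilities": 60_000.0}

# Driver shares, e.g. head-count for administration, floor area for facilities.
driver_share = {
    "radiology": {"administration": 0.4, "facilities": 0.5},
    "surgery":   {"administration": 0.6, "facilities": 0.5},
}
direct_costs = {"radiology": 200_000.0, "surgery": 500_000.0}
services_delivered = {"radiology": 10_000, "surgery": 1_000}

unit_costs = {}
for centre, direct in direct_costs.items():
    allocated = sum(admin_costs[a] * driver_share[centre][a] for a in admin_costs)
    unit_costs[centre] = (direct + allocated) / services_delivered[centre]
print(unit_costs)
```

A fixed tariff, by contrast, would charge the same price regardless of how much administrative capacity each service actually consumes, which is the gap the study quantifies.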

  3. A semi-empirical method for calculating the pitching moment of bodies of revolution at low Mach numbers

    NASA Technical Reports Server (NTRS)

    Hopkins, Edward J

    1951-01-01

    A semiempirical method for calculating the aerodynamic pitching moments of bodies of revolution, in which potential theory is arbitrarily combined with an approximate viscous theory, is presented. The method can also be used to calculate the lift and drag forces. The calculated and experimental force and moment characteristics of 15 bodies of revolution are compared.

  4. Examinations of electron temperature calculation methods in Thomson scattering diagnostics.

    PubMed

    Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin

    2012-10-01

    Electron temperature from the Thomson scattering diagnostic is derived through indirect calculation based on a theoretical model. The chi-square (χ²) test is commonly used in the calculation, and the reliability of the method depends strongly on the noise level of the input signals. In simulations, the noise sensitivity of the χ² test is examined, and a scale-factor test is proposed as an alternative method.
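The indirect-calculation idea can be illustrated generically: the fitted temperature is the one whose model spectrum minimizes χ² against the measured channel signals. The two-channel "model" below is invented for illustration and is not the actual Thomson scattering spectral model.

```python
# Illustrative chi-square fit (not the authors' code): grid-search the
# temperature whose model prediction minimizes chi-square against the
# measured signals. model() is a toy stand-in for the scattering spectrum.
import math

def model(te):
    # Toy spectral-channel signals that depend on the fitted temperature te.
    return [math.exp(-1.0 / te), math.exp(-2.0 / te)]

def chi_square(measured, predicted, sigma):
    return sum((m - p) ** 2 / sigma ** 2 for m, p in zip(measured, predicted))

def fit_te(measured, sigma, grid):
    # Return the candidate temperature with the smallest chi-square.
    return min(grid, key=lambda te: chi_square(measured, model(te), sigma))

true_te = 2.0
measured = model(true_te)  # noise-free signals recover te exactly
grid = [0.5 + 0.1 * i for i in range(100)]
best = fit_te(measured, sigma=0.05, grid=grid)
```

With noisy `measured` signals the minimizing temperature scatters around the true value, which is the noise sensitivity the paper examines.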

  5. A method of solid-solid phase equilibrium calculation by molecular dynamics

    NASA Astrophysics Data System (ADS)

    Karavaev, A. V.; Dremov, V. V.

    2016-12-01

    A method for evaluating solid-solid phase equilibrium curves in molecular dynamics (MD) simulation for a given model of interatomic interaction is proposed. The method allows the entropies of crystal phases to be calculated and provides an accuracy comparable with that of the thermodynamic integration method of Frenkel and Ladd, while being much simpler to implement and computationally less intensive. The accuracy of the proposed method was demonstrated in MD calculations of entropies for an EAM potential for iron and a MEAM potential for beryllium. The bcc-hcp equilibrium curves for iron calculated for the EAM potential by the thermodynamic integration method and by the proposed one agree quite well.

  6. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  7. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 gave a satisfactory design space. In general, the overlapping method is easy to understand and can be implemented in several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, there is no abrupt change in probability at the edge of a design space computed by the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
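The core of the probability-based method can be sketched as follows. This is a generic illustration under stated assumptions: the process model, quality standard, error level, and probability threshold below are invented, not the paper's values.

```python
# Sketch of the probability-based design space idea: at each candidate
# operating point, simulate experimental error around the model prediction
# and record the fraction of simulations meeting the quality standard.
# Points whose probability clears a threshold belong to the design space.
import random

def prob_reaching_standard(predicted_yield, standard, error_sd,
                           n_sim=10_000, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(1 for _ in range(n_sim)
               if rng.gauss(predicted_yield, error_sd) >= standard)
    return hits / n_sim

p_good = prob_reaching_standard(predicted_yield=0.95, standard=0.80, error_sd=0.05)
p_bad = prob_reaching_standard(predicted_yield=0.78, standard=0.80, error_sd=0.05)
in_space = p_good >= 0.90  # acceptable probability threshold (assumed)
```

Because the probability falls off smoothly as the predicted value approaches the standard, there is no abrupt jump at the design-space boundary, in line with the abstract's observation.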

  8. A longitudinal study on predictors of early calculation development among young children at risk for learning difficulties.

    PubMed

    Peng, Peng; Namkung, Jessica M; Fuchs, Douglas; Fuchs, Lynn S; Patton, Samuel; Yen, Loulee; Compton, Donald L; Zhang, Wenjuan; Miller, Amanda; Hamlett, Carol

    2016-12-01

    The purpose of this study was to explore domain-general cognitive skills, domain-specific academic skills, and demographic characteristics that are associated with calculation development from first grade to third grade among young children with learning difficulties. Participants were 176 children identified with reading and mathematics difficulties at the beginning of first grade. Data were collected on working memory, language, nonverbal reasoning, processing speed, decoding, numerical competence, incoming calculations, socioeconomic status, and gender at the beginning of first grade and on calculation performance at four time points: the beginning of first grade, the end of first grade, the end of second grade, and the end of third grade. Latent growth modeling analysis showed that numerical competence, incoming calculation, processing speed, and decoding skills significantly explained the variance in calculation performance at the beginning of first grade. Numerical competence and processing speed significantly explained the variance in calculation performance at the end of third grade. However, numerical competence was the only significant predictor of calculation development from the beginning of first grade to the end of third grade. Implications of these findings for early calculation instructions among young at-risk children are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Commentary on “Performance of a Glucose Meter with a Built-In Automated Bolus Calculator versus Manual Bolus Calculation in Insulin-Using Subjects”

    PubMed Central

    Rossetti, Paolo; Vehí, Josep; Revert, Ana; Calm, Remei; Bondia, Jorge

    2012-01-01

    Since the early 2000s, there has been an exponentially increasing development of new diabetes-applied technology, such as continuous glucose monitoring, bolus calculators, and “smart” pumps, with the expectation of partially overcoming clinical inertia and low patient compliance. However, its long-term efficacy in glucose control has not been unequivocally proven. In this issue of Journal of Diabetes Science and Technology, Sussman and colleagues evaluated a tool for the calculation of the prandial insulin dose. A total of 205 insulin-treated patients were asked to compute a bolus dose in two simulated conditions either manually or with the bolus calculator built into the FreeStyle InsuLinx meter, revealing the high frequency of wrong calculations when performed manually. Although the clinical impact of this study is limited, it highlights the potential implications of low diabetes-related numeracy in poor glycemic control. Educational programs aiming to increase patients’ empowerment and caregivers’ knowledge are needed in order to get full benefit of the technology. PMID:22538145

  10. ADHD and math - The differential effect on calculation and estimation.

    PubMed

    Ganor-Stern, Dana; Steinhorn, Ofir

    2018-05-31

    Adults with ADHD were compared to controls when solving multiplication problems exactly and when estimating the results of multidigit multiplication problems relative to reference numbers. The ADHD participants were slower than controls in the exact calculation and in the estimation tasks, but not less accurate. The ADHD participants were similar to controls in showing enhanced accuracy and speed for smaller problem sizes, for trials in which the reference numbers were smaller (vs. larger) than the exact answers and for reference numbers that were far (vs. close) from the exact answer. The two groups similarly used the approximated calculation and the sense of magnitude strategies. They differed however in strategy execution, mainly of the approximated calculation strategy, which requires working memory resources. The increase in reaction time associated with using the approximated calculation strategy was larger for the ADHD compared to the control participants. Thus, ADHD seems to selectively impair calculation processes in estimation tasks that rely on working memory, but it does not hamper estimation skills that are based on sense of magnitude. The educational implications of these findings are discussed. Copyright © 2018. Published by Elsevier B.V.

  11. Effects of Self-Graphing and Goal Setting on the Math Fact Fluency of Students with Disabilities

    PubMed Central

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure. PMID:22477686

  12. A Method for Calculating Transient Surface Temperatures and Surface Heating Rates for High-Speed Aircraft

    NASA Technical Reports Server (NTRS)

    Quinn, Robert D.; Gong, Leslie

    2000-01-01

    This report describes a method that can calculate transient aerodynamic heating and transient surface temperatures at supersonic and hypersonic speeds. This method can rapidly calculate temperature and heating rate time-histories for complete flight trajectories. Semi-empirical theories are used to calculate laminar and turbulent heat transfer coefficients and a procedure for estimating boundary-layer transition is included. Results from this method are compared with flight data from the X-15 research vehicle, YF-12 airplane, and the Space Shuttle Orbiter. These comparisons show that the calculated values are in good agreement with the measured flight data.

  13. Using benefit-cost ratio to select Universal Newborn Hearing Screening test criteria.

    PubMed

    Porter, Heather L; Neely, Stephen T; Gorga, Michael P

    2009-08-01

    Current protocols presumably use criteria that are chosen on the basis of the sensitivity and specificity rates they produce. Such an approach emphasizes test performance but does not include societal implications of the benefit of early identification. The purpose of the present analysis was to evaluate an approach to selecting criteria for use in Universal Newborn Hearing Screening (UNHS) programs that uses benefit-cost ratio (BCR) to demonstrate an alternative method to audiologists, administrators, and others involved in UNHS protocol decisions. Existing data from more than 1200 ears were used to analyze BCR as a function of Distortion Product Otoacoustic Emission (DPOAE) level. These data were selected because both audiometric and DPOAE data were available on every ear. Although these data were not obtained in newborns, this compromise was necessary because audiometric outcomes (especially in infants with congenital hearing loss) in neonates are either lacking or limited in number. As such, it is important to note that the characteristics of responses from the group of subjects that formed the bases of the present analyses are different from those for neonates. This limits the extent to which actual criterion levels can be selected but should not affect the general approach of using BCR as a framework for considering UNHS criteria. Estimates of the prevalence of congenital hearing loss identified through UNHS in 37 states and U.S. territories in 2004 were used to calculate BCR. A range of estimates for the lifetime monetary benefits and yearly costs for UNHS were used, based on data available in the literature. Still, exact benefits and costs are difficult to know. Both one-step (DPOAE alone) and two-step (DPOAE followed by automated auditory brainstem response, AABR) screening paradigms were considered in the calculation of BCR. 
The influence of middle ear effusion was simulated by incorporating a range of expected DPOAE level reductions into an additional BCR analysis. Our calculations indicate that for a range of proposed benefit and cost estimates, the monetary benefits of both one-step (DPOAE alone) and two-step (DPOAE followed by AABR) UNHS programs outweigh programmatic costs. Our calculations also indicate that BCR is robust in that it can be applied regardless of the values assigned to benefit and cost. Maximum BCR was identified and remained stable regardless of these values; however, it was recognized that use of the maximum BCR could result in reduced test sensitivity and may not be optimal for UNHS programs. The inclusion of secondary AABR screening increases BCR but does not alter the DPOAE criterion level at which maximum BCR occurs. The model of middle ear effusion reduces overall DPOAE level, subsequently lowering the DPOAE criterion level at which maximum BCR was obtained. BCR is one of several alternative methods for choosing UNHS criteria, in which the evaluation of costs and benefits allows clinical and societal considerations to be incorporated into the pass/refer decision in a meaningful way. Although some of the benefits of early identification of hearing impairment, such as improved psychosocial development and quality of life, cannot be estimated through a monetary analysis, this article provides an alternative for audiologists and administrators selecting UNHS protocols that includes consideration of the societal implications of UNHS screening criteria. BCR suggests that UNHS is a worthwhile investment for society, as benefits always outweigh costs, at least for the estimates included in this article. Although the use of screening criteria that maximize BCR results in lower test sensitivity than other criteria, BCR may be used to select criteria that yield increased test sensitivity while still providing a high, although not maximal, BCR. 
Using BCR analysis provides a framework in which the societal implications of NHS protocols are considered and emphasizes the value of UNHS.
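The BCR arithmetic itself is simple and can be sketched generically. Every number below (screened population, prevalence, sensitivity, benefit per case, cost per screen) is a placeholder, not one of the paper's estimates.

```python
# Minimal benefit-cost ratio sketch: BCR = lifetime monetary benefit of the
# true positives identified, divided by total program cost. All inputs are
# invented placeholders for illustration.

def benefit_cost_ratio(n_screened, prevalence, sensitivity,
                       benefit_per_case, cost_per_screen):
    cases_found = n_screened * prevalence * sensitivity
    benefits = cases_found * benefit_per_case
    costs = n_screened * cost_per_screen
    return benefits / costs

# A screening criterion changes sensitivity (and in practice follow-up costs),
# so the criterion maximizing BCR need not be the one maximizing sensitivity.
bcr = benefit_cost_ratio(n_screened=100_000, prevalence=0.003, sensitivity=0.9,
                         benefit_per_case=400_000, cost_per_screen=35)
```

Sweeping the DPOAE criterion level through such a calculation, with sensitivity and cost as functions of the criterion, is how a maximum-BCR criterion would be located.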

  14. Chronic air pollution and social deprivation as modifiers of the association between high temperature and daily mortality

    PubMed Central

    2014-01-01

    Background: Heat and air pollution are both associated with increases in mortality. However, the interactive effect of temperature and air pollution on mortality remains unsettled. Similarly, the relationship between air pollution, air temperature, and social deprivation has never been explored. Methods: We used daily mortality data from 2004 to 2009, daily mean temperature variables, and relative humidity for Paris, France. Estimates of chronic exposure to air pollution and social deprivation at a small spatial scale were calculated and split into three strata. We developed stratified Poisson regression models to assess daily temperature-mortality associations and tested the heterogeneity of the regression coefficients across strata. Deaths attributable to ambient temperature were calculated from attributable fractions, and mortality rates were estimated. Results: We found that chronic air pollution exposure and social deprivation are effect modifiers of the association between daily temperature and mortality. We found a potential interactive effect between social deprivation and chronic air pollution exposure in the mortality-temperature relationship. Conclusion: Our results may have implications for considering chronically polluted areas as vulnerable in heat action plans and in long-term measures to reduce the burden of heat stress, especially in the context of climate change. PMID:24941876
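The attributable-fraction step mentioned in the Methods follows the standard epidemiological formula AF = (RR - 1)/RR. The sketch below uses that standard formula with invented inputs; the rate ratio and death counts are not the study's.

```python
# Standard attributable-fraction arithmetic (illustrative inputs): from a
# rate ratio RR for mortality on high-temperature days, the fraction of
# exposed-period deaths attributable to heat is (RR - 1) / RR.

def attributable_fraction(rate_ratio):
    return (rate_ratio - 1.0) / rate_ratio

def attributable_deaths(rate_ratio, deaths_exposed):
    return attributable_fraction(rate_ratio) * deaths_exposed

af = attributable_fraction(1.25)        # 20% of exposed-period deaths
extra = attributable_deaths(1.25, 500)  # deaths attributable to heat
```

Stratifying by pollution or deprivation level means computing a separate RR, and hence a separate attributable count, per stratum.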

  15. Lunar and Planetary Science XXXV: Meteorites

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session "Meteorites" included the following reports: Description of a New Stony Meteorite Find from Bulloch County, Georgia; Meteorite Ablation Derived from Cosmic Ray Track Data; Dhofar 732: A Mg-rich Orthopyroxenitic Achondrite; Halogens, Carbon and Sulfur in the Tagish Lake Meteorite: Implications for Classification and Terrestrial Alteration; Electromagnetic Scrape of Meteorites and Probably Columbia Tiles; Pre-Atmospheric Sizes and Orbits of Several Chondrites; Research of Shock-Thermal History of the Enstatite Chondrites by Track, Thermoluminescence and Neutron-Activation (NAA) Methods; Radiation and Shock-thermal History of the Kaidun CR2 Chondrite Glass Inclusions; On the Problem of Search for Super-Heavy Element Traces in the Meteorites: Probability of Their Discovery by Three-Prong Tracks due to Nuclear Spontaneous Fission; Trace Element Abundances in Separated Phases of Pesyanoe, Enstatite Achondrite; Evaluation of Cooling Rate Calculated by Diffusional Modification of Chemical Zoning: Different Initial Profiles for Diffusion Calculation; Mineralogical Features and REE Distribution in Ortho- and Clinopyroxenes of the HaH 317 Enstatite Chondrite; Dhofar 311, 730 and 731: New Lunar Meteorites from Oman; The Deuterium Content of Individual Murchison Amino Acids; Clues to the Formation of PV1, an Enigmatic Carbon-rich Chondritic Clast from the Plainview H-Chondrite Regolith Breccia; Numerical Simulations of the Production of Extinct Radionuclides and Proto-CAIs by Magnetic Flaring.

  16. Two neuropeptides colocalized in a command-like neuron use distinct mechanisms to enhance its fast synaptic connection.

    PubMed

    Koh, H Y; Vilim, F S; Jing, J; Weiss, K R

    2003-09-01

    In many neurons, more than one peptide is colocalized with a classical neurotransmitter. The functional consequences of such an arrangement have rarely been investigated. Here, within the feeding circuit of Aplysia, we investigate at a single synapse the actions of two modulatory neuropeptides that are present in a cholinergic interneuron. In combination with previous work, our study shows that the command-like neuron for feeding, CBI-2, contains two neuropeptides, feeding circuit activating peptide (FCAP) and cerebral peptide 2 (CP2). Previous studies showed that high-frequency prestimulation or repeated stimulation of CBI-2 increases the size of CBI-2 to B61/62 excitatory postsynaptic potentials (EPSPs) and shortens the latency of firing of neuron B61/62 in response to CBI-2 stimulation. We find that both FCAP and CP2 mimic these two effects. The variance method of quantal analysis indicates that FCAP increases the calculated quantal size (q) and CP2 increases the calculated quantal content (m) of EPSPs. Since the PSP amplitude represents the product of q and m, the joint action of the two peptides is expected to be cooperative. This observation suggests a possible functional implication for multiple neuropeptides colocalized with a classical neurotransmitter in one neuron.
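The variance method estimates q and m from the trial-to-trial statistics of EPSP amplitudes. The sketch below uses the textbook Poisson-release estimators (an assumption; the paper's exact estimator is not given in the abstract), and the amplitude values are invented.

```python
# Variance method of quantal analysis, assuming Poisson release statistics:
# quantal content m ~ mean^2 / variance, quantal size q ~ variance / mean,
# so the mean EPSP amplitude equals m * q. EPSP amplitudes are invented.
from statistics import mean, pvariance

def quantal_estimates(epsp_amplitudes):
    mu = mean(epsp_amplitudes)
    var = pvariance(epsp_amplitudes)
    m = mu ** 2 / var   # estimated quantal content
    q = var / mu        # estimated quantal size
    return m, q

amps = [2.0, 2.5, 1.5, 3.0, 1.0]  # hypothetical EPSP amplitudes (mV)
m, q = quantal_estimates(amps)
```

Because the mean amplitude is the product m × q, a modulator that raises q (here, FCAP) and one that raises m (here, CP2) multiply rather than add their effects, which is the cooperativity the abstract predicts.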

  17. Men Are from Mars, Women Are from Venus: Sex Differences in Insulin Action and Secretion.

    PubMed

    Basu, Ananda; Dube, Simmi; Basu, Rita

    2017-01-01

    Sex differences play a substantial role in the regulation of glucose metabolism in healthy glucose-tolerant humans. The factors that may contribute to sex-related differences in glucose metabolism include differences in lifestyle (diet and exercise), sex hormones, and body composition. Several epidemiological and observational studies have noted that impaired glucose tolerance is more common in women than in men. Some of these studies have attributed this to differences in body composition, while others have attributed impaired insulin sensitivity as a cause of impaired glucose tolerance in women. We studied postprandial glucose metabolism in 120 men and 90 women after ingestion of a mixed meal. Rates of meal glucose appearance, endogenous glucose production, and glucose disappearance were calculated using a novel triple-tracer isotope dilution method. Insulin action and secretion were calculated using validated physiological models. While the rate of meal glucose appearance was higher in women than in men, rates of glucose disappearance were higher in elderly women than in elderly men, whereas young women had lower rates of glucose disappearance than young men. Hence, sex has an impact on postprandial glucose metabolism, and sex differences in carbohydrate metabolism may have important implications for approaches to prevent and manage diabetes in an individual.

  18. Long-range Self-interacting Dark Matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jing; Liang, Zheng-Liang

    2015-12-10

    We investigate the implications of long-range self-interaction for both the self-capture and the annihilation of self-interacting dark matter (SIDM) trapped in the Sun. Our discussion is based on a specific SIDM model in which DM particles self-interact via a light scalar mediator, or Yukawa potential, in the context of quantum mechanics. Within this framework, we calculate the self-capture rate across a broad region of parameter space. While the self-capture rate can be obtained separately in the Born regime with perturbative methods and in the classical limit with the Rutherford formula, our calculation covers the gap between the two in a non-perturbative fashion. The phenomenology of both the Sommerfeld-enhanced s- and p-wave annihilation of the solar SIDM is also discussed. Moreover, by combining the analysis of the Super-Kamiokande (SK) data and the observed DM relic density, we constrain the nuclear capture rate of the DM particles in the presence of the dark Yukawa potential. The consequence of the long-range dark force for probing the solar SIDM turns out to be significant if the force carrier is much lighter than the DM particle, and a quantitative analysis is provided.

  19. Montelukast potentiates the anticonvulsant effect of phenobarbital in mice: an isobolographic analysis.

    PubMed

    Fleck, Juliana; Marafiga, Joseane Righes; Jesse, Ana Cláudia; Ribeiro, Leandro Rodrigo; Rambo, Leonardo Magno; Mello, Carlos Fernando

    2015-04-01

    Although leukotrienes have been implicated in seizures, no study has systematically investigated whether the blockade of CysLT1 receptors synergistically increases the anticonvulsant action of classic antiepileptics. In this study, behavioral and electroencephalographic methods, as well as isobolographic analysis, are used to show that the CysLT1 inverse agonist montelukast synergistically increases the anticonvulsant action of phenobarbital against pentylenetetrazole-induced seizures. Moreover, it is shown that LTD4 reverses the effect of montelukast. The experimentally derived ED50mix value for a fixed-ratio combination (1:1 proportion) of montelukast plus phenobarbital was 0.06±0.02 μmol, whereas the additively calculated ED50add value was 0.49±0.03 μmol. The calculated interaction index was 0.12, indicating a synergistic interaction. The association of montelukast significantly decreased the antiseizure ED50 for phenobarbital (0.74 and 0.04 μmol in the absence and presence of montelukast, respectively) and, consequently, phenobarbital-induced sedation at equieffective doses. The demonstration of a strong synergism between montelukast and phenobarbital is particularly relevant because both drugs are already used in the clinics, foreseeing an immediate translational application for epileptic patients who have drug-resistant seizures. Copyright © 2015 Elsevier Ltd. All rights reserved.
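The interaction index reported in the abstract follows directly from the two isobolographic quantities given there: the observed ED50 of the fixed-ratio mixture divided by the additively expected ED50. The values below are the abstract's own (0.06 and 0.49 μmol).

```python
# Isobolographic interaction index: ED50(mixture, observed) divided by
# ED50(mixture, additively calculated). Values below are those reported
# in the abstract for the 1:1 montelukast + phenobarbital combination.

def interaction_index(ed50_mix, ed50_add):
    # < 1 indicates synergism, ~1 additivity, > 1 antagonism
    return ed50_mix / ed50_add

index = interaction_index(ed50_mix=0.06, ed50_add=0.49)  # synergistic
```

Rounding the quotient to two decimals reproduces the reported index of 0.12, well below 1, consistent with the claimed strong synergism.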

  20. Stress interactions among arrays of tensile cracks in 3D: Implications for the nucleation of shear failure and the orientations of faults.

    NASA Astrophysics Data System (ADS)

    Healy, D.; Davis, T.

    2017-12-01

    In low porosity rocks it is widely believed that planes of shear failure nucleate through the interaction of arrays of smaller tensile microcracks. This model has been confirmed through laboratory rock deformation experiments and detailed microstructural analyses. In this contribution we use the Boundary Element Method (BEM) to model the interactions of arrays of tensile cracks, discretised as ellipsoidal voids in three dimensions (3D). We calculate the elastic stresses in the solid matrix surrounding the cracks resulting from an applied load and include the interaction effects of each crack upon all the others. We explore the role of variations in crack shape, size, position and orientation upon the total and locally perturbed stress fields. We calculate the average crack normal stress (CNS) acting over the area of each tensile crack, and then find the locus of the maximum value of this stress throughout the modelled volume. Following Reches & Lockner (1994) and Healy et al. (2006a, 2006b), we assert that planes of shear failure will most likely nucleate on surfaces parallel to the locus of maximum average CNS. These shear planes are oblique to all three principal stresses in the far field.

  1. Therapy operating characteristic curves: tools for precision chemotherapy

    PubMed Central

    Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Caucci, Luca; Hoppin, John W.

    2016-01-01

    The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control versus the probability of normal-tissue complications as the overall radiation dose level is varied, e.g., by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. This paper shows how TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy, AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. The mathematical analogy between response of observers to images and the response of tumors to distributions of a chemotherapy drug is exploited to obtain linear discriminant functions from which AUTOC can be calculated. Methods for using mathematical models of drug delivery and tumor response with imaging data to estimate patient-specific parameters that are needed for calculation of AUTOC are outlined. The implications of this viewpoint for clinical trials are discussed. PMID:27175376

  2. State-Level Progress in Reducing the Black–White Infant Mortality Gap, United States, 1999–2013

    PubMed Central

    Goldfarb, Samantha Sittig; Wells, Brittny A.; Beitsch, Leslie; Levine, Robert S.; Rust, George

    2017-01-01

    Objectives. To assess state-level progress on eliminating racial disparities in infant mortality. Methods. Using linked infant birth–death files from 1999 to 2013, we calculated state-level 3-year rolling average infant mortality rates (IMRs) and Black–White IMR ratios. We also calculated percentage improvement and a projected year for achieving equality if current trend lines are sustained. Results. We found substantial state-level variation in Black IMRs (range = 6.6–13.8) and Black–White rate ratios (1.5–2.7), and also in percentage relative improvement in IMR (range = 2.7% to 36.5% improvement) and in Black–White rate ratios (from 11.7% relative worsening to 24.0% improvement). Thirteen states achieved statistically significant reductions in Black–White IMR disparities. Eliminating the Black–White IMR gap would have saved 64 876 babies during these 15 years. Eighteen states would achieve IMR racial equality by the year 2050 if current trends are sustained. Conclusions. States are achieving varying levels of progress in reducing Black infant mortality and Black–White IMR disparities. Public Health Implications. Racial equality in infant survival is achievable, but will require shifting our focus to determinants of progress and strategies for success. PMID:28323476
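The "projected year for achieving equality" is a trend extrapolation. The toy sketch below shows one simple way such a projection can be done, a linear extrapolation of the Black-White IMR ratio; the starting ratio and annual change are invented, and the paper's exact projection method is not specified in the abstract.

```python
# Toy linear-trend projection of the year a Black-White IMR ratio reaches
# 1.0 "if current trend lines are sustained". Inputs are invented; this is
# an illustration of the idea, not the paper's method.

def projected_equality_year(year0, ratio0, annual_change):
    """Extrapolate a linear ratio trend to the year it first reaches 1.0."""
    if annual_change >= 0:
        return None  # gap not closing; no projected equality year
    years_needed = (1.0 - ratio0) / annual_change
    return year0 + years_needed

year = projected_equality_year(year0=2013, ratio0=2.2, annual_change=-0.04)
```

A state whose ratio is worsening (positive annual change) gets no projected year at all, matching the abstract's finding that only some states are on track for equality by 2050.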

  3. Electronic Structure Theory Study of the Microsolvated F(-)(H2O) + CH3I SN2 Reaction.

    PubMed

    Zhang, Jiaxu; Yang, Li; Sheng, Li

    2016-05-26

    The potential energy profile of the reaction of a microhydrated fluoride ion with methyl iodide has been characterized by extensive electronic structure calculations. Both hydrogen-bonded F(-)(H2O)---HCH2I and ion-dipole F(-)(H2O)---CH3I complexes are formed at the reaction entrance, and the PES in the vicinity of these complexes is very flat, which may have important implications for the reaction dynamics. The water molecule remains on the fluorine side until the reactive system reaches the SN2 saddle point. It can then move to the iodine side with little barrier, but along a nonsynchronous reaction path after the dynamical bottleneck, which supports the previous prediction for microsolvated SN2 systems. The influence of the solvating water molecule on the reaction mechanism is probed by comparison with the nonsolvated analogue and other microsolvated SN2 systems. Taking CCSD(T) single-point calculations based on MP2-optimized geometries as the benchmark, the DFT functionals B97-1 and B3LYP are found to better characterize the potential energy profile for the title reaction and are recommended as the preferred methods for direct dynamics simulations to uncover the dynamical behavior.

  4. A g-factor puzzle for the N=38 nuclei: First measurement of the ^70Ge 4_1^+ magnetic moment.

    NASA Astrophysics Data System (ADS)

    Boutachkov, Plamen; Kumbartzki, G.; Benczer-Koller, N.; Robinson, S.; Escuderos, A.; Stefanova, E.; Sharon, Y.; Zamick, L.; McCutchan, E.; Werner, V.; Ai, H.; Gurdal, G.; Heinz, A.; Qian, J.; Williams, E.; Winkler, R.; Garnsworthy, A.; Thompson, N.; Maier-Komor, P.

    2006-10-01

    The transient field technique in inverse kinematics allows g-factor studies of short-lived states. This method gives information on both the sign and the magnitude of the g factor. In a recent experiment, the g factor of the 4_1^+ state of ^68_30Zn_38 was measured to be -0.37(17), suggesting a significant neutron g9/2 contribution to the wave function [1]. However, shell model calculations in the 0f5/2, 1p3/2, 1p1/2, 0g9/2 space [1] predict a positive, nearly zero g factor. To obtain more information on this region, we measured the magnetic moment of the 4_1^+ state in ^70_32Ge_38. The measurement was performed at WNSL, Yale, using a 275 MeV ^70Ge beam and a multilayered C+Gd+Cu target. A positive g factor was obtained. The measured magnetic moment was compared to full fp shell model calculations performed with the code ANTOINE using several effective interactions. The results were in good agreement with the experiment. The experiment and the implications of the new results will be discussed. [1] J. Leske et al., Phys. Rev. C 72, 044301 (2005).

  5. Neutron skyshine calculations with the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-10-01

    Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.

  6. Implications of method specific creatinine adjustments on General Medical Services chronic kidney disease classification

    PubMed Central

    Reynolds, Timothy M; Twomey, Patrick J

    2007-01-01

    Aims To evaluate the impact of different equations for calculation of estimated glomerular filtration rate (eGFR) on general practitioner (GP) workload. Methods Retrospective evaluation of routine workload data from a district general hospital chemical pathology laboratory serving a GP patient population of approximately 250 000. The most recent serum creatinine result from 80 583 patients was identified and used for the evaluation. eGFR was calculated using three different variants of the four‐parameter Modification of Diet in Renal Disease (MDRD) equation. Results The original MDRD equation (eGFR186) and the modified equation with assay‐specific data (eGFR175corrected) both identified similar numbers of patients with stage 4 and stage 5 chronic kidney disease (ChKD), but the modified equation without assay‐specific data (eGFR175) resulted in a significant increase in stage 4 ChKD. For stage 3 ChKD, eGFR175 identified 28.69% of the population, eGFR186 identified 21.35% and eGFR175corrected identified 13.6%. Conclusions Depending on the choice of equation there can be very large changes in the proportions of patients identified with the different stages of ChKD. Given that, according to the General Medical Services Quality Framework, all patients with ChKD stages 3–5 should be included on a practice renal registry and receive relevant drug therapy, this could have significant impacts on practice workload and drug budgets. It is essential that practices work with their local laboratories. PMID:17761741
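    The stage reclassifications described above follow directly from which equation variant is used. Below is a minimal sketch of the four-parameter MDRD calculation with the standard 186 (original) and 175 (IDMS-aligned) coefficients; the patient values are hypothetical, chosen near the stage 3/4 boundary to illustrate the effect.

```python
def egfr_mdrd(creatinine_umol_l, age_years, female, black, coefficient=186.0):
    """Four-parameter MDRD eGFR in mL/min/1.73 m^2.

    coefficient=186 is the original equation; 175 is the IDMS-aligned variant.
    """
    scr_mg_dl = creatinine_umol_l / 88.4  # convert micromol/L -> mg/dL
    egfr = coefficient * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def ckd_stage(egfr):
    """Map eGFR to CKD stage (stages 1-2 also require other evidence of damage)."""
    if egfr >= 90: return 1
    if egfr >= 60: return 2
    if egfr >= 30: return 3
    if egfr >= 15: return 4
    return 5

# A hypothetical 70-year-old woman with serum creatinine 153 micromol/L
# changes stage depending solely on the coefficient chosen:
e186 = egfr_mdrd(153.0, 70, female=True, black=False, coefficient=186.0)  # ~31, stage 3
e175 = egfr_mdrd(153.0, 70, female=True, black=False, coefficient=175.0)  # ~29, stage 4
```

    Multiplied across a registry population, such boundary crossings produce the large shifts in stage proportions the study reports.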

  7. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.

    2015-07-28

    A method of simulating X-ray diffuse scattering from multi-model PDB files is presented. Despite similar agreement with Bragg data, different translation–libration–screw refinement strategies produce unique diffuse intensity patterns. Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls-as-xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.

  8. Health care reform: motivation for discrimination?

    PubMed

    Navin, J C; Pettit, M A

    1995-01-01

    One of the major issues in the health care reform debate is the requirement that employers pay a portion of their employees' health insurance premiums. This paper examines the method for calculating the employer share of health care premiums, as specified in the President's health care reform proposal. The calculation of the firm's cost of providing employee health care benefits is a function of marital status as well as the incidence of two-income earner households. This paper demonstrates that this method provides for lower than average premiums for married employees with no dependents in communities in which there is at least one married couple where both individuals participate in the labor market. This raises the non-wage labor costs of employing single individuals relative to individuals who are identical in every respect except their marital status. This paper explores the economic implications for hiring, as well as profits, for firms in a perfectly competitive industry. The results of the theoretical model presented here are clear. Under this proposed version of health care reform, ceteris paribus, firms have a clear preference for two-earner households. This paper also demonstrates that the incentive to discriminate is related to the size of the firm and to the average wage of full-time employees for firms which employ fewer than fifty individuals. While this paper examines the specifics of President Clinton's original proposal, the conclusions reached here would apply to any form of employer-mandated coverage in which premiums are a function of family status and the incidence of two-earner households.

  9. Optimizing preoperative blood ordering with data acquired from an anesthesia information management system.

    PubMed

    Frank, Steven M; Rothschild, James A; Masear, Courtney G; Rivers, Richard J; Merritt, William T; Savage, Will J; Ness, Paul M

    2013-06-01

    The maximum surgical blood order schedule (MSBOS) is used to determine preoperative blood orders for specific surgical procedures. Because the list was developed in the late 1970s, many new surgical procedures have been introduced and others improved upon, making the original MSBOS obsolete. The authors describe methods to create an updated, institution-specific MSBOS to guide preoperative blood ordering. Blood utilization data for 53,526 patients undergoing 1,632 different surgical procedures were gathered from an anesthesia information management system. A novel algorithm based on previously defined criteria was used to create an MSBOS for each surgical specialty. The economic implications were calculated based on the number of blood orders placed, but not indicated, according to the MSBOS. Among 27,825 surgical cases that did not require preoperative blood orders as determined by the MSBOS, 9,099 (32.7%) had a type and screen, and 2,643 (9.5%) had a crossmatch ordered. Of 4,644 cases determined to require only a type and screen, 1,509 (32.5%) had a type and crossmatch ordered. By using the MSBOS to eliminate unnecessary blood orders, the authors calculated a potential reduction in hospital charges and actual costs of $211,448 and $43,135 per year, respectively, or $8.89 and $1.81 per surgical patient, respectively. An institution-specific MSBOS can be created, using blood utilization data extracted from an anesthesia information management system along with our proposed algorithm. Using these methods to optimize the process of preoperative blood ordering can potentially improve operating room efficiency, increase patient safety, and decrease costs.
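    The over-ordering rates quoted in this abstract can be reproduced directly from the stated case counts, as a quick arithmetic check:

```python
# Case counts taken from the abstract above.
no_order_cases = 27825   # MSBOS: no preoperative blood order indicated
ts_ordered = 9099        # ...but a type & screen was ordered anyway
xm_ordered = 2643        # ...but a crossmatch was ordered anyway
ts_only_cases = 4644     # MSBOS: type & screen only indicated
xm_instead = 1509        # ...but a type & crossmatch was ordered

def pct(part, whole):
    """Percentage rounded to one decimal place, as reported in the abstract."""
    return round(100.0 * part / whole, 1)

unneeded_ts = pct(ts_ordered, no_order_cases)   # 32.7%
unneeded_xm = pct(xm_ordered, no_order_cases)   # 9.5%
excess_xm = pct(xm_instead, ts_only_cases)      # 32.5%
```

    Each unnecessary type and screen or crossmatch carries laboratory charges, which is how the eliminated orders translate into the quoted annual savings.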

  10. The Potential Cost Implications of Averting Severe Hypoglycemic Events Requiring Hospitalization in High-Risk Adults With Type 1 Diabetes Using Real-Time Continuous Glucose Monitoring

    PubMed Central

    Bronstone, Amy; Graham, Claudia

    2016-01-01

    Background: Severe hypoglycemia remains a major barrier to optimal diabetes management and places a high burden on the US health care system due to the high costs of hypoglycemia-related emergency visits and hospitalizations. Patients with type 1 diabetes (T1DM) who have hypoglycemia unawareness are at a particularly high risk for severe hypoglycemia, the incidence of which may be reduced by the use of real-time continuous glucose monitoring (RT-CGM). Methods: We performed a cost calculation using values of key parameters derived from various published sources to examine the potential cost implications of standalone RT-CGM as a tool for reducing rates of severe hypoglycemia requiring hospitalization in adult patients with T1DM who have hypoglycemia unawareness. Results: In a hypothetical commercial health plan with 10 million members aged 18-64 years, 9.3% (930 000) are expected to have diagnosed diabetes, with approximately 5% (46 500) having T1DM, of whom approximately 20% (9300) have hypoglycemia unawareness. RT-CGM was estimated to reduce the cost of annual hypoglycemia-related hospitalizations in this select population by $54 369 000, yielding an estimated net cost savings of $8 799 000 to $12 519 000 and a savings of $946 to $1346 per patient. Conclusion: This article presents a cost calculation based on available data from multiple sources showing that RT-CGM has the potential to reduce short-term health care costs by averting severe hypoglycemic events requiring hospitalization in a select high-risk population. Prospective, randomized studies that are adequately powered and specifically enroll patients at high risk for severe hypoglycemia are needed to confirm that RT-CGM significantly reduces the incidence of these costly events. PMID:26880392
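    The population cascade and per-patient savings quoted above follow from simple multiplication of the stated prevalence fractions:

```python
# Prevalence cascade for the hypothetical 10-million-member commercial plan,
# using the fractions given in the abstract.
members = 10_000_000
diabetes = round(members * 0.093)   # 9.3% with diagnosed diabetes -> 930,000
t1dm = round(diabetes * 0.05)       # ~5% of those with type 1 diabetes -> 46,500
unaware = round(t1dm * 0.20)        # ~20% with hypoglycemia unawareness -> 9,300

# Estimated net cost-savings range from the abstract, spread over that group.
net_low, net_high = 8_799_000, 12_519_000
per_patient_low = net_low / unaware    # ~$946 per patient
per_patient_high = net_high / unaware  # ~$1,346 per patient
```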

  11. Energy deposition of H and He ion beams in hydroxyapatite films: a study with implications for ion-beam cancer therapy.

    PubMed

    Limandri, Silvina; de Vera, Pablo; Fadanelli, Raul C; Nagamine, Luiz C C M; Mello, Alexandre; Garcia-Molina, Rafael; Behar, Moni; Abril, Isabel

    2014-02-01

    Ion-beam cancer therapy is a promising technique to treat deep-seated tumors; however, for accurate treatment planning, the energy deposition by the ions must be well known in both soft and hard human tissues. Although the energy loss of ions in water and other organic and biological materials is fairly well known, scarce information is available for hard tissues (i.e., bone), for which the current stopping power information relies on the application of simple additivity rules to atomic data. In particular, more knowledge is needed for the main constituent of human bone, calcium hydroxyapatite (HAp), which constitutes 58% of its mass composition. In this work the energy loss of H and He ion beams in HAp films has been obtained experimentally. The experiments were performed using the Rutherford backscattering technique in an energy range of 450-2000 keV for H and 400-5000 keV for He ions. These measurements are used as a benchmark for theoretical calculations (stopping power and mean excitation energy) based on the dielectric formalism together with the MELF-GOS (Mermin energy loss function-generalized oscillator strength) method to describe the electronic excitation spectrum of HAp. The stopping power calculations are in good agreement with the experiments. Even though these experimental data are obtained at low projectile energies compared with the ones used in hadron therapy, they validate the mean excitation energy obtained theoretically, which is the fundamental quantity needed to accurately assess energy deposition and depth-dose curves of ion beams at clinically relevant high energies. The effect of the choice of mean excitation energy on the depth-dose profile is discussed on the basis of detailed simulations. Finally, implications of the present work for the energy loss of charged particles in human cortical bone are highlighted.

  12. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
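    The core trick of the method, shifting a sampled kernel to a grid point by convolution with a unit impulse carried out in the Fourier domain, can be illustrated in one dimension. This is only a toy sketch (pure-Python DFT, made-up kernel values), not the paper's TG-43 implementation; by the shift theorem, multiplying the spectrum by exp(-2*pi*i*k*s/N) circularly shifts the signal by s samples.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a small demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT with 1/n normalization."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def shift_via_fourier(x, s):
    # Multiplying by exp(-2*pi*i*k*s/N) in the Fourier domain equals
    # circular convolution with a unit impulse delta[t - s] in real space.
    n = len(x)
    X = dft(x)
    shifted = [X[k] * cmath.exp(-2j * cmath.pi * k * s / n) for k in range(n)]
    return [v.real for v in idft(shifted)]

# Toy radially-falling "dose kernel" sampled on an 8-point periodic grid;
# shift it two grid points to the right entirely in the Fourier domain.
kernel = [9.0, 4.0, 1.0, 0.5, 0.2, 0.5, 1.0, 4.0]
moved = shift_via_fourier(kernel, 2)
```

    In the paper's method, the remaining fractional (sub-grid) part of the shift is handled separately with a piecewise third-order Lagrange filter.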

  13. DOPPLER CALCULATIONS FOR LARGE FAST CERAMIC REACTORS--EFFECTS OF IMPROVED METHODS AND RECENT CROSS SECTION INFORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greebler, P.; Goldman, E.

    1962-12-19

    Doppler calculations for large fast ceramic reactors (FCR), using recent cross section information and improved methods, are described. Cross sections of U-238, Pu-239, and Pu-240 with fuel temperature variations needed for perturbation calculations of Doppler reactivity changes are tabulated as a function of potential scattering cross section per absorber isotope at energies below 400 keV. These may be used in Doppler calculations for any fast reactor. Results of Doppler calculations on a large fast ceramic reactor are given to show the effects of the improved calculation methods and of recent cross section data on the calculated Doppler coefficient. The updated methods and cross sections used yield a somewhat harder spectrum and accordingly a somewhat smaller Doppler coefficient for a given FCR core size and composition than calculated in earlier work, but they support the essential conclusion derived earlier that the Doppler effect provides an important safety advantage in a large FCR. 28 references. (auth)

  14. Recent progress in the joint velocity-scalar PDF method

    NASA Technical Reports Server (NTRS)

    Anand, M. S.

    1995-01-01

    This viewgraph presentation discusses joint velocity-scalar PDF method; turbulent combustion modeling issues for gas turbine combustors; PDF calculations for a recirculating flow; stochastic dissipation model; joint PDF calculations for swirling flows; spray calculations; reduced kinetics/manifold methods; parallel processing; and joint PDF focus areas.

  15. A novel approach to evaluate soil heat flux calculation: An analytical review of nine methods: Soil Heat Flux Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhongming; Russell, Eric S.; Missik, Justine E. C.

    We evaluated nine methods of soil heat flux calculation using field observations. All nine methods underestimated the soil heat flux by at least 19%. This large underestimation is mainly caused by uncertainties in soil thermal properties.

  16. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

    This paper describes the application of the effective source method in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were generated with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up multigroup neutron transport calculation was performed using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)

  17. Posttest analysis of MIST Test 330302 using TRAC-PF1/MOD1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B E

    This report discusses a posttest analysis of Multi-Loop Integral System Test (MIST) 330302 performed using TRAC-PF1/MOD1. This test was one of a group performed in the MIST facility to investigate high-pressure injection (HPI)-power-operated relief valve (PORV) cooling, also known as feed-and-bleed cooling. In Test 330302, HPI cooling was delayed 20 min after opening and locking open the PORV to induce extensive system voiding. We have concluded that the TRAC-calculated results are in reasonable overall agreement with the data for Test 330302. All major trends and phenomena were correctly predicted. Differences observed between the measured and calculated results have been traced and related, in part, to deficiencies in our knowledge of the facility configuration and operation. We have identified two models for which additional review is appropriate. However, in general, the TRAC closure models and correlations appear to be adequate for the prediction of the phenomena expected to occur during feed-and-bleed transients in the MIST facility. We believe that the correct conclusions about trends and phenomena will be reached if the code is used in similar applications. Conclusions reached regarding use of the code to calculate similar phenomena in full-size plants (scaling implications) and regulatory implications of this work are also presented.

  18. "Phantom" Modes in Ab Initio Tunneling Calculations: Implications for Theoretical Materials Optimization, Tunneling, and Transport

    NASA Astrophysics Data System (ADS)

    Barabash, Sergey V.; Pramanik, Dipankar

    2015-03-01

    The development of low-leakage dielectrics for the semiconductor industry, together with many other areas of academic and industrial research, increasingly relies upon ab initio tunneling and transport calculations. Complex band structure (CBS) is a powerful formalism for establishing the nature of tunneling modes, providing both a deeper understanding and a guided optimization of materials, with practical applications ranging from screening candidate dielectrics for the lowest "ultimate leakage" to identifying charge-neutrality levels and Fermi level pinning. We demonstrate that CBS is prone to a particular type of spurious "phantom" solution, previously deemed true but irrelevant because of a very fast decay. We demonstrate that (i) in complex materials, phantom modes may exhibit very slow decay, appearing as leading tunneling terms and implying qualitative and large quantitative errors, (ii) the phantom modes are spurious, (iii) unlike the pseudopotential "ghost" states, phantoms are an apparently unavoidable artifact of large numerical basis sets, (iv) a presumed increase in computational accuracy increases the number of phantoms, effectively corrupting the CBS results despite the higher accuracy achieved in resolving the true CBS modes and the real band structure, and (v) the phantom modes cannot be easily separated from the true CBS modes. We discuss implications for direct transport calculations. A strategy for dealing with the phantom states is discussed in the context of optimizing high-quality high-κ dielectric materials for decreased tunneling leakage.

  19. Mass and energy budgets of animals: behavioral and ecological implications. Progress report, December 1, 1984-July 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, W.P.

    1985-08-01

    We continue to put considerable effort into analysis of the dynamics of interactions between environmental and animal variance and its implications for growth and reproduction. We have completed the physiological experiments necessary for defining a complete mass and energy budget for two species of lizards, Uta stansburiana and Sceloporus undulatus. We have completed the programming and are evaluating calculations of potential growth and reproduction for Sceloporus undulatus in the sandhill country of western Nebraska, where we have field data on microclimates, doubly labeled water measurements, and growth and reproduction measurements to thoroughly test the microclimate and ectotherm models that together calculate potential growth and reproduction. The doubly labeled water analysis system is calibrated and running very well. We are just beginning analysis of the lizard samples from Nebraska. In deer mice, marmots, and prairie dogs we have found significant diurnal and seasonal changes in body temperature and activity times. Deer mice in the field may exhibit 6-7°C, and as much as 20°C, body temperature changes in 15 to 20 minute intervals. Winter work in Jackson Hole at -40°C showed a capability for core temperature drops in deer mice of 8°C in one minute, with full recovery once the animal could burrow into the snow. Our calculations show that this variability saves significantly on energy costs.

  20. Outdoor cooking prevalence in developing countries and its implication for clean cooking policies

    NASA Astrophysics Data System (ADS)

    Langbein, Jörg; Peters, Jörg; Vance, Colin

    2017-11-01

    More than 3 billion people use wood fuels for their daily cooking needs, with detrimental health implications related to smoke emissions. Best practice global initiatives emphasize the dissemination of clean cooking stoves, but these are often expensive and suffer from interrupted supply chains that do not reach rural areas. This emphasis neglects that many households in the developing world cook outdoors. Our calculations suggest that for such households, the use of less expensive biomass cooking stoves can substantially reduce smoke exposure. The cost-effectiveness of clean cooking policies can thus be improved by taking cooking location and ventilation into account.

  1. Methods and Systems for Measurement and Estimation of Normalized Contrast in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2017-01-01

    Methods and systems for converting an image contrast evolution of an object to a temperature contrast evolution and vice versa are disclosed, including methods for assessing an emissivity of the object; calculating an afterglow heat flux evolution; calculating a measurement region of interest temperature change; calculating a reference region of interest temperature change; calculating a reflection temperature change; calculating the image contrast evolution or the temperature contrast evolution; and converting the image contrast evolution to the temperature contrast evolution or vice versa, respectively.

  2. Methods and Systems for Measurement and Estimation of Normalized Contrast in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2015-01-01

    Methods and systems for converting an image contrast evolution of an object to a temperature contrast evolution and vice versa are disclosed, including methods for assessing an emissivity of the object; calculating an afterglow heat flux evolution; calculating a measurement region of interest temperature change; calculating a reference region of interest temperature change; calculating a reflection temperature change; calculating the image contrast evolution or the temperature contrast evolution; and converting the image contrast evolution to the temperature contrast evolution or vice versa, respectively.

  3. A Cooperative Test of the Load/Unload Response Ratio Proposed Method of Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Trotta, J. E.; Tullis, T. E.

    2004-12-01

    The Load/Unload Response Ratio (LURR) method is a proposed technique to predict earthquakes that was first put forward by Yin in 1984 (Yin, 1987). LURR is based on the idea that when a region is near failure, there is an increase in the rate of seismic activity during loading of the tidal cycle relative to the rate of seismic activity during unloading of the tidal cycle. Typically the numerator of the LURR ratio is the number, or the sum of some measure of the size (e.g. Benioff strain), of small earthquakes that occur during loading of the tidal cycle, whereas the denominator is the same as the numerator except it is calculated during unloading. The LURR method suggests this ratio should increase in the months to year preceding a large earthquake. Regions near failure have tectonic stresses nearly high enough for a large earthquake to occur, thus it seems more likely that smaller earthquakes in the region would be triggered when the tidal stresses add to the tectonic ones. However, until recently even the most careful studies suggested that the effect of tidal stresses on earthquake occurrence is very small and difficult to detect. New studies have shown that there is a tidal triggering effect on shallow thrust faults in areas with strong tides from ocean loading (Tanaka et al., 2002; Cochran et al., 2004). We have been conducting an independent test of the LURR method, since there would be important scientific and social implications if the LURR method were proven to be a robust method of earthquake prediction. Smith and Sammis (2003) also undertook a similar study. Following both the parameters of Yin et al. (2000) and the somewhat different ones of Smith and Sammis (2003), we have repeated calculations of LURR for the Northridge and Loma Prieta earthquakes in California. Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results.
A general agreement was made at the recent ACES Workshop in China between research groups studying LURR to work cooperatively to discover what is causing these differences in results. All parties will share codes and data sets, be more specific regarding the calculation parameters, and develop a synthetic data set for which we know the expected LURR value. Each research group will then test their codes and the codes of other groups on this synthetic data set. The goal of this cooperative effort is to resolve the differences in methods and results and permit more definitive conclusions on the potential usefulness of LURR in earthquake prediction.
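    The LURR statistic described above can be sketched as a ratio of summed Benioff strain in the two tidal phases. This is only an illustrative sketch, not any group's production code: the event representation and loading/unloading classification are assumptions, and the energy-magnitude relation used is the standard log10(E) = 1.5*M + 4.8 (E in joules).

```python
import math

def benioff_strain(magnitude):
    """Benioff strain of one event: the square root of its seismic energy,
    using the Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8."""
    return math.sqrt(10 ** (1.5 * magnitude + 4.8))

def lurr(events):
    """LURR from (magnitude, is_loading) pairs: summed Benioff strain released
    during tidal loading divided by that released during unloading."""
    loading = sum(benioff_strain(m) for m, is_loading in events if is_loading)
    unloading = sum(benioff_strain(m) for m, is_loading in events if not is_loading)
    return loading / unloading

# Equal activity in both tidal phases gives LURR ~ 1 (the background state);
# a region near failure would show an excess of loading-phase events.
quiet = [(2.0, True), (2.1, False), (2.0, False), (2.1, True)]
r_quiet = lurr(quiet)
```

    Differences in exactly how events are assigned to the loading and unloading phases (and which size measure is summed) are precisely the kind of parameter choices the cooperative synthetic-data test is meant to pin down.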

  4. Calculations of Nuclear Astrophysics and Californium Fission Neutron Spectrum Averaged Cross Section Uncertainties Using ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-fidelity Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov

    Nuclear astrophysics and californium fission neutron spectrum averaged cross sections and their uncertainties for ENDF materials have been calculated. Absolute values were deduced with Maxwellian and Mannhart spectra, while uncertainties are based on ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-Fidelity covariances. These quantities are compared with available data, independent benchmarks, EXFOR library, and analyzed for a wide range of cases. Recommendations for neutron cross section covariances are given and implications are discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jena, Puru; Kandalam, Anil K.; Christian, Theresa M.

    Gallium phosphide bismide (GaP1-xBix) epilayers with bismuth fractions from 0.9% to 3.2%, as calculated from lattice parameter measurements, were studied with Rutherford backscattering spectrometry (RBS) to directly measure bismuth incorporation. The total bismuth fractions found by RBS were higher than expected from the lattice parameter calculations. Furthermore, in one analyzed sample grown by molecular beam epitaxy at 300 degrees C, 55% of incorporated bismuth was found to occupy interstitial sites. We discuss implications of this high interstitial incorporation fraction and its possible relationship to x-ray diffraction and photoluminescence measurements of GaP0.99Bi0.01.

  6. Unstable matter and the 1-10 MeV gamma-ray background

    NASA Technical Reports Server (NTRS)

    Daly, Ruth A.

    1988-01-01

    The spectrum of photons produced by an unstable particle which decayed while the universe was young is calculated. This spectrum is compared to that of the 1-10 MeV shoulder, a feature of the high-energy, extragalactic gamma-ray background, whose origin has not yet been determined. The calculated spectrum contains two parameters which are adjusted to obtain a maximal fit to the observed spectrum; the fit thus obtained is accurate to the 99 percent confidence level. The implications for the mass, lifetime, initial abundance, and branching ratio of the unstable particle are discussed.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov; Mughaghab, S.F.; Sonzogni, A.A.

    We have calculated the Maxwellian-averaged cross sections and astrophysical reaction rates of the stellar nucleosynthesis reactions (n, {gamma}), (n, fission), (n, p), (n, {alpha}), and (n, 2n) using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries. These four major nuclear reaction libraries were processed under the same conditions for Maxwellian temperatures (kT) ranging from 1 keV to 1 MeV. We compare our current calculations of the s-process nucleosynthesis nuclei with previous data sets and discuss the differences between them and the implications for nuclear astrophysics.
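    The Maxwellian-averaged cross sections referred to above follow the standard definition <σ> = (2/√π) (kT)⁻² ∫ σ(E) E exp(−E/kT) dE. A minimal numerical sketch follows, using a toy 1/v cross section for which the average has a closed form (for σ(E) = c/√E, the Maxwellian average equals σ evaluated at E = kT); the cross section and temperature values are illustrative, not library data.

```python
import math

def macs(sigma, kT, emax_factor=30.0, n=20000):
    """Maxwellian-averaged cross section:
    <sigma> = (2/sqrt(pi)) / (kT)^2 * integral of sigma(E)*E*exp(-E/kT) dE,
    integrated numerically from 0 to emax_factor*kT (the tail beyond is negligible)."""
    emax = emax_factor * kT
    h = emax / n
    total = 0.0
    for i in range(1, n):  # integrand vanishes at E=0 and is ~0 at emax
        e = i * h
        total += sigma(e) * e * math.exp(-e / kT)
    total *= h
    return (2.0 / math.sqrt(math.pi)) * total / kT ** 2

# Toy 1/v absorber: sigma(E) = c / sqrt(E). Analytically, its Maxwellian
# average at temperature kT is exactly c / sqrt(kT) = sigma(kT).
c = 5.0
kT = 0.030  # e.g. 30 keV expressed in MeV
approx = macs(lambda e: c / math.sqrt(e), kT)
exact = c / math.sqrt(kT)
```

    Real evaluated-library calculations replace the toy σ(E) with pointwise ENDF cross sections and propagate the covariance data through the same integral.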

  8. Detecting the Disruption of Dark-Matter Halos with Stellar Streams.

    PubMed

    Bovy, Jo

    2016-03-25

    Narrow stellar streams in the Milky Way halo are uniquely sensitive to dark-matter subhalos, but many of these subhalos may be tidally disrupted. I calculate the interaction between stellar and dark-matter streams using analytical and N-body calculations, showing that disrupting objects can be detected as low-concentration subhalos. Through this effect, we can constrain the lumpiness of the halo as well as the orbit and present position of individual dark-matter streams. This will have profound implications for the formation of halos and for direct- and indirect-detection dark-matter searches.

  9. Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications

    NASA Technical Reports Server (NTRS)

    Thompson, W. R.; Zollweg, John A.; Gabis, David H.

    1992-01-01

    A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which is central to calculations of the vapor-liquid equilibrium thermodynamics of Titan's tropospheric clouds. This model imposes constraints on the consistency of experimental equilibrium data, and incorporates temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of the condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.

  10. Up quark mass in lattice QCD with three light dynamical quarks and implications for strong CP invariance.

    PubMed

    Nelson, Daniel R; Fleming, George T; Kilcup, Gregory W

    2003-01-17

    A standing mystery in the standard model is the unnatural smallness of the strong CP violating phase. A massless up quark has long been proposed as one potential solution. A lattice calculation of the constants of the chiral Lagrangian essential for the determination of the up quark mass, 2alpha(8)-alpha(5), is presented. We find 2alpha(8)-alpha(5)=0.29+/-0.18, which corresponds to m(u)/m(d)=0.410+/-0.036. This is the first such calculation using a physical number of dynamical light quarks, N(f)=3.

  11. Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation

    ERIC Educational Resources Information Center

    Ayre, Colin; Scally, Andrew John

    2014-01-01

    The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
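    Lawshe's ratio and the exact binomial probabilities mentioned above are simple to compute. A minimal Python sketch, where the function names are my own and the critical-value logic is only the one-tailed binomial test the abstract suggests:

```python
from math import comb


def content_validity_ratio(n_essential, n_panelists):
    # Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 (no panelist
    # rates the item essential) to +1 (all panelists do).
    half = n_panelists / 2
    return (n_essential - half) / half


def binomial_p_value(n_essential, n_panelists, p=0.5):
    # One-tailed exact binomial probability of observing n_essential or
    # more "essential" ratings by chance, the kind of exact calculation
    # from which critical CVR values can be tabulated.
    return sum(comb(n_panelists, k) * p**k * (1 - p) ** (n_panelists - k)
               for k in range(n_essential, n_panelists + 1))
```

For example, unanimous agreement among 10 panelists gives CVR = 1.0 with a chance probability of about 0.001.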

  12. Analytic method for calculating properties of random walks on networks

    NASA Technical Reports Server (NTRS)

    Goldhirsch, I.; Gefen, Y.

    1986-01-01

    A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
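    For the simplest case mentioned above, a walk on a segment, the mean first-passage time follows from telescoping the recursion for T(i). The sketch below is illustrative only, with boundary conditions I have assumed (reflecting at site 0, absorbing at site n), and is not the iteration scheme of the paper:

```python
def mfpt_segment(n):
    # Unbiased walk on sites 0..n, reflecting at 0, absorbed at n.
    # T(n) = 0, T(0) = 1 + T(1), T(i) = 1 + (T(i-1) + T(i+1)) / 2.
    # The differences D(i) = T(i) - T(i+1) satisfy D(0) = 1 and
    # D(i) = D(i-1) + 2, so T(0) is a sum of the first n odd numbers,
    # which telescopes to n**2.
    t, d = 0, 1
    for _ in range(n):
        t += d
        d += 2
    return t
```

The quadratic scaling recovered here is the diffusive behavior underlying the Einstein relation mentioned at the end of the abstract.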

  13. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We propose a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed a correlation between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  14. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  15. Automated Transition State Theory Calculations for High-Throughput Kinetics.

    PubMed

    Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H

    2017-09-21

    A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.

  16. Calculation of unsteady transonic flows with mild separation by viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Howlett, James T.

    1992-01-01

    This paper presents a method for calculating viscous effects in two- and three-dimensional unsteady transonic flow fields. An integral boundary-layer method for turbulent viscous flow is coupled with the transonic small-disturbance potential equation in a quasi-steady manner. The viscous effects are modeled with Green's lag-entrainment equations for attached flow and an inverse boundary-layer method for flows that involve mild separation. The boundary-layer method is used stripwise to approximate three-dimensional effects. Applications are given for two-dimensional airfoils, aileron buzz, and a wing planform. Comparisons with inviscid calculations, other viscous calculation methods, and experimental data are presented. The results demonstrate that the present technique can economically and accurately calculate unsteady transonic flow fields that have viscous-inviscid interactions with mild flow separation.

  17. Problems and methods of calculating the Legendre functions of arbitrary degree and order

    NASA Astrophysics Data System (ADS)

    Novikova, Elena; Dmitrenko, Alexander

    2016-12-01

    The known standard recursion methods of computing the fully normalized associated Legendre functions do not give the necessary precision under the IEEE 754-2008 standard, which creates problems of underflow and overflow. Analysis of the calculation of the Legendre functions shows that underflow is not dangerous by itself. The main problem, which generates gross errors in the calculations, is the effect of "absolute zero". Once it appears in a forward column recursion, "absolute zero" converts to zero every value that is multiplied by it, regardless of whether a zero product is correct or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of the three methods is the same, but the CPU time of the Fukushima method is minimal, so the Fukushima method is the best. Its main advantage is computational speed, an important factor when calculating as many Legendre functions as the 2 401 336 required for EGM2008.
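    The "absolute zero" effect is easy to reproduce in IEEE 754 double precision: once a forward recursion underflows to exactly 0.0, no later multiplication can recover the lost magnitude. A small demonstration of the failure mode only, not of any of the three remediation methods discussed in the paper:

```python
def underflow_to_absolute_zero(factor=1e-300, steps=3):
    # Repeatedly scale down by a tiny factor, as a forward column
    # recursion of high-degree Legendre functions effectively does.
    # The second product (1e-600) is below the smallest subnormal
    # double (about 5e-324), so it rounds to exactly 0.0, and every
    # subsequent product stays 0.0 regardless of its true magnitude.
    x = 1.0
    history = []
    for _ in range(steps):
        x *= factor
        history.append(x)
    return history
```

Scaled-number representations such as Fukushima's carry an auxiliary integer exponent precisely to keep these intermediate values nonzero.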

  18. NMR Model of PrgI-SipD Interaction and its Implications in the Needle-Tip Assembly of the Salmonella Type III Secretion System

    PubMed Central

    Rathinavelan, Thenmalarchelvi; Lara-Tejero, Maria; Lefebre, Matthew; Chatterjee, Srirupa; McShan, Andrew C.; Guo, Da-Chuan; Tang, Chun; Galan, Jorge E.; De Guzman, Roberto N.

    2014-01-01

    Salmonella and other pathogenic bacteria use the type III secretion system (T3SS) to inject virulence proteins into human cells to initiate infections. The structural component of the T3SS contains a needle and a needle tip. The needle is assembled from PrgI needle protomers and the needle tip is capped with several copies of the SipD tip protein. How a tip protein docks on the needle is unclear. A crystal structure of a PrgI-SipD fusion protein docked on the PrgI needle results in steric clash of SipD at the needle tip when modeled on the recent atomic structure of the needle. Thus, there is currently no good model of how SipD is docked on the PrgI needle tip. Previously, we showed by NMR paramagnetic relaxation enhancement (PRE) methods that a specific region in the SipD coiled-coil is the binding site for PrgI. Others have hypothesized that a domain of the tip protein, the N-terminal α-helical hairpin, has to swing away during the assembly of the needle apparatus. Here, we show by PRE methods that a truncated form of SipD lacking the α-helical hairpin domain binds more tightly to PrgI. Further, PRE-based structure calculations revealed multiple PrgI binding sites on the SipD coiled-coil. Our PRE results together with the recent NMR-derived atomic structure of the Salmonella needle suggest a possible model of how SipD might dock at the PrgI needle tip. SipD and PrgI are conserved in other bacterial T3SSs, so our results have wider implications for understanding other needle-tip complexes. PMID:24951833

  19. Data analytics for simplifying thermal efficiency planning in cities

    PubMed Central

    Abdolhosseini Qomi, Mohammad Javad; Noshadravan, Arash; Sobstyl, Jake M.; Toole, Jameson; Ferreira, Joseph; Pellenq, Roland J.-M.; Ulm, Franz-Josef; Gonzalez, Marta C.

    2016-01-01

    More than 44% of building energy consumption in the USA is used for space heating and cooling, and this accounts for 20% of national CO2 emissions. This prompts the need to identify among the 130 million households in the USA those with the greatest energy-saving potential and the associated costs of the path to reach that goal. Whereas current solutions address this problem by analysing each building in detail, we herein reduce the dimensionality of the problem by simplifying the calculations of energy losses in buildings. We present a novel inference method that can be used via a ranking algorithm that allows us to estimate the potential energy saving for heating purposes. To that end, we only need consumption from records of gas bills integrated with a building's footprint. The method entails a statistical screening of the intricate interplay between weather, infrastructural and residents' choice variables to determine building gas consumption and potential savings at a city scale. We derive a general statistical pattern of consumption in an urban settlement, reducing it to a set of the most influential buildings' parameters that operate locally. By way of example, the implications are explored using records of a set of (N = 6200) buildings in Cambridge, MA, USA, which indicate that retrofitting only 16% of buildings entails a 40% reduction in gas consumption of the whole building stock. We find that the inferred heat loss rate of buildings exhibits a power-law data distribution akin to Zipf's law, which provides a means to map an optimum path for gas savings per retrofit at a city scale. These findings have implications for improving the thermal efficiency of cities' building stock, as outlined by current policy efforts seeking to reduce home heating and cooling energy consumption and lower associated greenhouse gas emissions. PMID:27097652

  20. Technical Note: Statistical dependences between channels in radiochromic film readings. Implications in multichannel dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González-López, Antonio, E-mail: antonio.gonzalez7@carm.es; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    Purpose: This note studies the statistical relationships between color channels in radiochromic film readings with flatbed scanners. The same relationships are studied for noise. Finally, their implications for multichannel film dosimetry are discussed. Methods: Radiochromic films exposed to wedged fields of 6 MV energy were read in a flatbed scanner. The joint histograms of pairs of color channels were used to obtain the joint and conditional probability density functions between channels. Then, the conditional expectations and variances of one channel given another channel were obtained. Noise was extracted from film readings by means of a multiresolution analysis. Two different dose ranges were analyzed, the first one ranging from 112 to 473 cGy and the second one from 52 to 1290 cGy. Results: For the smallest dose range, the conditional expectations of one channel given another channel can be approximated by linear functions, while the conditional variances are fairly constant. The slopes of the linear relationships between channels can be used to simplify the expression that estimates the dose by means of the multichannel method. The slopes of the linear relationships between each channel and the red one can also be interpreted as weights in the final contribution to dose estimation. However, for the largest dose range, the conditional expectations of one channel given another channel are no longer linear functions. Finally, noises in different channels were found to correlate weakly. Conclusions: Signals present in different channels of radiochromic film readings show a strong statistical dependence. By contrast, noise correlates weakly between channels. For the smallest dose range analyzed, the linear behavior of the conditional expectation of one channel given another channel can be used to simplify calculations in multichannel film dosimetry.

  1. Objectifying Content Validity: Conducting a Content Validity Study in Social Work Research.

    ERIC Educational Resources Information Center

    Rubio, Doris McGartland; Berg-Weger, Marla; Tebb, Susan S.; Lee, E. Suzanne; Rauch, Shannon

    2003-01-01

    The purpose of this article is to demonstrate how to conduct a content validity study. Instructions on how to calculate a content validity index, a factorial validity index, and an interrater reliability index, and a guide for interpreting these indices, are included. Implications regarding the value of conducting a content validity study for…

  2. New proton drip-line nuclei relevant to nuclear astrophysics

    NASA Astrophysics Data System (ADS)

    Ferreira, L. S.

    2018-02-01

    We discuss recent results on decay of exotic proton rich nuclei at the proton drip line for Z < 50, that are of great importance for nuclear astrophysics models. From the interpretation of the data, we assign their properties, and impose a constraint on the separation energy which has strong implications in the network calculations.

  3. Using Simulations in Linked Courses to Foster Student Understanding of Complex Political Institutions

    ERIC Educational Resources Information Center

    Williams, Michelle Hale

    2015-01-01

    Political institutions provide basic building blocks for understanding and comparing political systems. Yet, students often struggle to understand the implications of institutional choice, such as electoral system rules, especially when the formulas and calculations used to determine seat allocation can be multilevel and complex. This study brings…

  4. Public and Private Language Games as Mental States: Wittgenstein's Contribution to the Qualitative Research Tradition.

    ERIC Educational Resources Information Center

    Washington, Ernest D.

    An interpretation is provided of the philosopher L. Wittgenstein's analyses of mental states. The theoretical implications of these analyses for cognitive development and qualitatively oriented researchers are discussed. The mental states examined are: (1) pain; (2) remembering; (3) calculating/adding; (4) following a rule; and (5) reading.…

  5. A REVISED SOLAR TRANSFORMITY FOR TIDAL ENERGY RECEIVED BY THE EARTH AND DISSIPATED GLOBALLY: IMPLICATIONS FOR EMERGY ANALYSIS

    EPA Science Inventory

    Solar transformities for the tidal energy received by the earth and the tidal energy dissipated globally can be calculated because both solar energy and the gravitational attraction of the sun and moon drive independent processes that produce an annual flux of geopotential energy...

  6. Equivalent Crack Size Modelling of Corrosion Pitting in an AA7050-T7451 Aluminium Alloy and its Implications for Aircraft Structural Integrity

    DTIC Science & Technology

    2012-09-01

    …quantitative fractography [17, 18]. The determination of the ECS is achieved by a trial-and-error calculation with the aim of matching the experimental…

  7. The Return on the Investment in Library Education.

    ERIC Educational Resources Information Center

    Van House, Nancy A.

    1985-01-01

    Measures change in social and private net present value of expected lifetime earnings attributable to M.L.S. degree under current market conditions and calculates effect of changes in placement rates and of two-year MLS degrees. Implications for profession's ability to attract capable individuals and for its sex composition are discussed. (33…

  8. Laboratory Measurements of Radar Transmission Through Dust with Implications for Radar Imaging on Mars

    NASA Technical Reports Server (NTRS)

    Williams, K. K.; Greeley, R.

    2001-01-01

    Measurements of radar transmission through a Mars analog dust are used to calculate values of signal attenuation over a frequency range of 0.5-12 GHz. These values are discussed in the context of a Mars radar imaging mission. Additional information is contained in the original extended abstract.

  9. Towards cost-effective reliability through visualization of the reliability option space

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2004-01-01

    In planning a complex system's development there can be many options to improve its reliability. Typically their sum total cost exceeds the budget available, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.

  10. Diagnosing and Managing Violence

    PubMed Central

    2011-01-01

    Available categorization systems for violence encountered in medical practice do not constitute optimal tools to guide management. In this article, 4 common patterns of violence across psychiatric diagnoses are described (defensive, dominance-defining, impulsive, and calculated) and management implications are considered. The phenomenologic and neurobiological rationale for a clinical classification system of violence is also presented. PMID:22295257

  11. Attitude of Students towards Teaching Profession in Nigeria: Implications for Education Development

    ERIC Educational Resources Information Center

    Egwu, Sarah Oben

    2015-01-01

    The study was conducted to ascertain the attitude of students towards the teaching profession in the Faculty of Education, Ebonyi State University, Abakaliki. A sample of 300 students completed a 15-item questionnaire designed for the study. The instrument was validated, and its reliability, calculated using the Pearson product-moment correlation, was 0.92…

  12. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code uses that distribution to calculate the neutron spectrum at the points of interest. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and radioactivity calculations based on the resulting neutron flux were compared with the measured data.

  13. Ecological network analysis for economic systems: growth and development and implications for sustainable development.

    PubMed

    Huang, Jiali; Ulanowicz, Robert E

    2014-01-01

    The quantification of growth and development is an important issue in economics, because these phenomena are closely related to sustainability. We address growth and development from a network perspective in which economic systems are represented as flow networks and analyzed using ecological network analysis (ENA). The Beijing economic system is used as a case study and 11 input-output (I-O) tables for 1985-2010 are converted into currency networks. ENA is used to calculate system-level indices to quantify the growth and development of Beijing. The contributions of each direct flow toward growth and development in 2010 are calculated and their implications for sustainable development are discussed. The results show that during 1985-2010, growth was the main attribute of the Beijing economic system. Although the system grew exponentially, its development fluctuated within only a small range. The results suggest that system ascendency should be increased in order to favor more sustainable development. Ascendency can be augmented in two ways: (1) strengthen those pathways with positive contributions to increasing ascendency and (2) weaken those with negative effects.

  14. Ecological Network Analysis for Economic Systems: Growth and Development and Implications for Sustainable Development

    PubMed Central

    Huang, Jiali; Ulanowicz, Robert E.

    2014-01-01

    The quantification of growth and development is an important issue in economics, because these phenomena are closely related to sustainability. We address growth and development from a network perspective in which economic systems are represented as flow networks and analyzed using ecological network analysis (ENA). The Beijing economic system is used as a case study and 11 input–output (I-O) tables for 1985–2010 are converted into currency networks. ENA is used to calculate system-level indices to quantify the growth and development of Beijing. The contributions of each direct flow toward growth and development in 2010 are calculated and their implications for sustainable development are discussed. The results show that during 1985–2010, growth was the main attribute of the Beijing economic system. Although the system grew exponentially, its development fluctuated within only a small range. The results suggest that system ascendency should be increased in order to favor more sustainable development. Ascendency can be augmented in two ways: (1) strengthen those pathways with positive contributions to increasing ascendency and (2) weaken those with negative effects. PMID:24979465

  15. Comparison of calculation and simulation of evacuation in real buildings

    NASA Astrophysics Data System (ADS)

    Szénay, Martin; Lopušniak, Martin

    2018-03-01

    Each building must meet requirements for safe evacuation in order to prevent casualties; therefore, methods for evaluating evacuation are used when designing buildings. In this paper, calculation methods were tested on three real buildings. The testing used evacuation time calculation pursuant to Slovak standards and evacuation time calculation using the buildingExodus simulation software. If the calculation methods are suitably selected, taking into account the nature of the evacuation, and correct parameter values are entered, evacuation times almost identical to the simulation results can be obtained. The difference can range from 1% to 27%.

  16. The Futures Wheel: A method for exploring the implications of social-ecological change

    Treesearch

    D.N. Bengston

    2015-01-01

    Change in social-ecological systems often produces a cascade of unanticipated consequences. Natural resource professionals and other stakeholders need to understand the possible implications of cascading change to prepare for it. The Futures Wheel is a "smart group" method that uses a structured brainstorming process to uncover and evaluate multiple levels of...

  17. Methods for calculating dietary energy density in a nationally representative sample

    PubMed Central

    Vernarelli, Jacqueline A.; Mitchell, Diane C.; Rolls, Barbara J.; Hartman, Terryl J.

    2013-01-01

    There has been a growing interest in examining dietary energy density (ED, kcal/g) as it relates to various health outcomes. Consuming a diet low in ED has been recommended in the 2010 Dietary Guidelines, as well as by other agencies, as a dietary approach for disease prevention. Translating this recommendation into practice, however, is difficult. Currently there is no standardized method for calculating dietary ED: it can be calculated with foods alone, or with a combination of foods and beverages. Certain items may be defined as either a food or a beverage (e.g., meal replacement shakes) and require special attention. National survey data are an excellent resource for evaluating factors that are important to dietary ED calculation. The National Health and Nutrition Examination Survey (NHANES) nutrient and food database does not include an ED variable, so researchers must calculate ED independently. The objective of this study was to inform the selection of a standardized ED calculation method by comparing and contrasting methods for ED calculation. The present study evaluates all consumed items and defines foods and beverages based on both USDA food codes and how the item was consumed. Results are presented as mean EDs for the different calculation methods, stratified by population demographics (e.g., age, sex). Using United States Department of Agriculture (USDA) food codes in the 2005–2008 NHANES, a standardized method for calculating dietary ED can be derived. This method can then be adapted by other researchers for consistency across studies. PMID:24432201
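    The methodological question above reduces to which items enter the numerator (kcal) and denominator (grams). A minimal sketch of the two variants, where the record layout and field names such as `is_beverage` are hypothetical, not NHANES variables:

```python
def energy_density(items, include_beverages=False):
    # Dietary ED (kcal/g): total energy divided by total weight of the
    # included items.  Whether beverages are included is the key choice
    # the study compares; both variants use the same arithmetic.
    selected = [i for i in items if include_beverages or not i["is_beverage"]]
    kcal = sum(i["kcal"] for i in selected)
    grams = sum(i["grams"] for i in selected)
    return kcal / grams if grams else 0.0
```

A single sugary beverage illustrates why the variants diverge: it adds modest energy but substantial weight, pulling the foods-and-beverages ED well below the foods-only ED.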

  18. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
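    For axis-aligned voxel and dosel boxes, the common-volume step described above reduces to a product of per-axis overlap lengths. A hedged sketch of that geometric step only; the box representation is my assumption, not the patent's data structure:

```python
def overlap_volume(box_a, box_b):
    # box = ((x0, y0, z0), (x1, y1, z1)), axis-aligned, min corner first.
    # The common volume is the product of per-axis overlaps, and zero as
    # soon as any axis shows no intersection.
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    vol = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_lo, a_hi, b_lo, b_hi):
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap <= 0:
            return 0.0
        vol *= overlap
    return vol
```

Multiplying each common volume by the corresponding voxel mass density then gives the incremental dosel masses that the method accumulates.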

  19. An assessment of air pollutant exposure methods in Mexico City, Mexico.

    PubMed

    Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S

    2015-05-01

    Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. 
The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
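Of the four exposure methods compared, inverse distance weighting is the simplest to illustrate. A minimal sketch (the monitor coordinates and PM10 readings below are invented for illustration; the study itself used SAS 9.3 and R's gstat package, not code like this):

```python
import math

def idw_estimate(point, monitors, power=2):
    """Inverse distance weighting (IDW): estimate the concentration at
    `point` as a weighted average of monitor readings, with weights
    proportional to 1/distance**power. If the point coincides with a
    monitor, return that monitor's reading directly."""
    num = 0.0
    den = 0.0
    for (x, y), value in monitors:
        d = math.hypot(point[0] - x, point[1] - y)
        if d == 0.0:
            return value
        w = d ** -power
        num += w * value
        den += w
    return num / den

# Hypothetical monitor coordinates (km) and PM10 readings (ug/m3)
monitors = [((0.0, 0.0), 40.0), ((10.0, 0.0), 60.0), ((0.0, 10.0), 50.0)]
estimate = idw_estimate((2.0, 2.0), monitors)
```

The estimate is pulled toward the nearest monitor, which is why the abstract finds IDW (and NM) producing wider concentration ranges than a citywide average.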

  20. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After the mathematical model for calculating vertical tank volume is analyzed, the key processing algorithms for the point cloud data, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating along the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
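The area-integration step described above can be sketched numerically. This is a toy version under stated assumptions (a piecewise-linear radius profile derived from the point cloud; all names and values are illustrative, not from the paper):

```python
import math

def tank_volume(heights, radii, fill_level):
    """Integrate the circular cross-sectional area A(h) = pi*r(h)**2 from
    the tank bottom up to `fill_level` using the trapezoidal rule.
    `heights` (m, ascending) and `radii` (m) stand in for the radius
    profile extracted from the scanned point cloud."""
    vol = 0.0
    for i in range(1, len(heights)):
        h0, h1 = heights[i - 1], heights[i]
        if h0 >= fill_level:
            break
        top = min(h1, fill_level)
        # interpolate the radius where the liquid level cuts a segment
        r0 = radii[i - 1]
        r1 = radii[i - 1] + (radii[i] - radii[i - 1]) * (top - h0) / (h1 - h0)
        a0, a1 = math.pi * r0 ** 2, math.pi * r1 ** 2
        vol += 0.5 * (a0 + a1) * (top - h0)
    return vol
```

For a perfect cylinder the routine reproduces pi*r^2*h exactly; for a real tank the accuracy depends on how finely the point-cloud profile is sampled.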

  1. Two- and three-photon ionization of hydrogen and lithium

    NASA Technical Reports Server (NTRS)

    Chang, T. N.; Poe, R. T.

    1977-01-01

    We present the detailed result of a calculation on two- and three-photon ionization of hydrogen and lithium based on a recently proposed calculational method. Our calculation has demonstrated that this method is capable of retaining the numerical advantages enjoyed by most of the existing calculational methods and, at the same time, circumventing their limitations. In particular, we have concentrated our discussion on the relative contribution from the resonant and nonresonant intermediate states.

  2. Estimating the Attack Rate of Pregnancy-Associated Listeriosis during a Large Outbreak

    PubMed Central

    Imanishi, Maho; Routh, Janell A.; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S.; Jackson, Kelly A.; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M.; Mahon, Barbara E.; Silk, Benjamin J.

    2015-01-01

    Background. In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. Methods. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. Results. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500–12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Conclusions. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure. PMID:25784782
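The attack-rate arithmetic behind the "~1 per 10,000" figure is straightforward; a sketch using the single confirmed case and the exposure range reported in the abstract:

```python
def attack_rate_per_10k(cases, exposed):
    """Attack rate expressed per 10,000 exposed individuals."""
    return 10000.0 * cases / exposed

# One laboratory-confirmed pregnancy-associated case; an estimated
# 6,500-12,000 pregnant women exposed (figures from the abstract)
rate_low = attack_rate_per_10k(1, 12000)
rate_high = attack_rate_per_10k(1, 6500)
```

Both bounds round to roughly 1 case per 10,000 exposed pregnant women, the figure quoted in the abstract.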

  3. Application of a Novel Semi-Automatic Technique for Determining the Bilateral Symmetry Plane of the Facial Skeleton of Normal Adult Males.

    PubMed

    Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang

    2015-09-01

The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time-consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error when one side of the face was compared to the other than a symmetry plane generated using a plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry, there was less error than when it was reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (e.g., frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurement of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical applications in reconstruction.
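The core geometric operation, reflecting one side of a model across a candidate symmetry plane and scoring the mismatch, can be sketched as follows (a generic numpy sketch; here the plane is given, whereas the authors derive it via principal component analysis and iterative refinement, and the error measure below is a crude stand-in for theirs):

```python
import numpy as np

def reflect_across_plane(points, point_on_plane, normal):
    """Reflect 3-D points across the plane through `point_on_plane` with
    the given normal: q -> q - 2*((q - p0) . n) * n, for unit normal n."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (points - point_on_plane) @ n
    return points - 2.0 * np.outer(d, n)

def asymmetry_error(points, point_on_plane, normal):
    """Mean distance from each point to the nearest point of the mirrored
    set: zero for a perfectly symmetric cloud, larger as the candidate
    plane deviates from the true symmetry plane."""
    mirrored = reflect_across_plane(points, point_on_plane, normal)
    dists = np.linalg.norm(points[:, None, :] - mirrored[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

Minimizing such an error over candidate planes is the essence of choosing a "best" bilateral symmetry plane.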

  4. Putaminal volume and diffusion in early familial Creutzfeldt-Jakob disease.

    PubMed

    Seror, Ilana; Lee, Hedok; Cohen, Oren S; Hoffmann, Chen; Prohovnik, Isak

    2010-01-15

The putamen is centrally implicated in the pathophysiology of Creutzfeldt-Jakob Disease (CJD). To our knowledge, its volume has never been measured in this disease. We investigated whether gross putaminal atrophy can be detected by MRI in early stages, when the diffusion is already reduced. Twelve familial CJD patients with the E200K mutation and 22 healthy controls underwent structural and diffusion MRI scans. The putamen was identified in anatomical scans by two methods: manual tracing by a blinded investigator, and automatic parcellation by a computerized segmentation procedure (FSL FIRST). For each method, volume and mean Apparent Diffusion Coefficient (ADC) were calculated. ADC was significantly lower in CJD patients (697+/-64 microm(2)/s vs. 750+/-31 microm(2)/s, p<0.005), as expected, but the volume was not reduced. The computerized FIRST delineation yielded comparable ADC values to the manual method, but computerized volumes were smaller than manual tracing values. We conclude that significant diffusion reduction in the putamen can be detected by delineating the structure manually or with a computerized algorithm. Our findings confirm and extend previous voxel-based and observational studies. Putaminal volume was not reduced in our early-stage patients, thus confirming that diffusion abnormalities precede detectable atrophy in this structure.

  5. Lifetime Prevalence of Posttraumatic Stress Disorder in Two American Indian Reservation Populations

    PubMed Central

    Beals, Janette; Manson, Spero M.; Croy, Calvin; Klein, Suzell A.; Whitesell, Nancy Rumbaugh; Mitchell, Christina M.

    2015-01-01

Posttraumatic stress disorder (PTSD) has been found to be more common among American Indian populations than among other Americans. Because PTSD is a complex diagnosis, assessment methods have varied across epidemiological studies, especially in terms of the trauma criteria. Here, we examined data from the American Indian Service Utilization, Psychiatric Epidemiology, Risk and Protective Factors Project (AI-SUPERPFP) to estimate the lifetime prevalence of PTSD in two culturally distinct American Indian reservation communities, using two formulas for calculating PTSD prevalence. The AI-SUPERPFP was a cross-sectional probability sample survey conducted between 1997 and 2000. Southwest (n = 1,446) and Northern Plains (n = 1,638) tribal members living on or near their reservations, aged 15–57 years at time of interview, were randomly sampled from tribal rolls. PTSD estimates were derived based on both the single worst and 3 worst traumas. Prevalence estimates varied by ascertainment method: single worst trauma (lifetime: 5.9% to 14.8%) versus 3 worst traumas (lifetime: 8.9% to 19.5%). Use of the 3-worst-event approach increased prevalence by 28.3% over the single-event method. PTSD was prevalent in these tribal communities. These results also serve to underscore the need to better understand the implications for PTSD prevalence with the current focus on a single worst event. PMID:23900893

  6. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
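The first feature listed, spectral evaluation of spatial derivatives, can be illustrated in one dimension (a generic FFT-derivative sketch, not the authors' three-dimensional parallel implementation):

```python
import numpy as np

def spectral_derivative(f, dx):
    """Spectral (Fourier) evaluation of df/dx on a periodic grid, the
    same idea the k-space method uses for spatial derivatives: transform,
    multiply each mode by i*k, and transform back."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```

For band-limited fields this is accurate to machine precision, which is why spectral derivatives allow the coarse grids (relative to finite differences) that make large-scale propagation calculations feasible.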

  7. Calculation of the Coulomb Fission Cross Sections for Pb-Pb and Bi-Pb Interactions at 158 A GeV

    NASA Technical Reports Server (NTRS)

    Poyser, William J.; Ahern, Sean C.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

The Weizsacker-Williams (WW) method of virtual quanta is used to make approximate cross section calculations for peripheral relativistic heavy-ion collisions. We calculated the Coulomb fission cross sections for projectile ions of Pb-208 and Bi-209 with energies of 158 A GeV interacting with a Pb-208 target. We also calculated the electromagnetic absorption cross section for a Pb-208 ion interacting as described. For comparison we use both the full WW method and a standard approximate WW method. The approximate WW method results in larger cross sections compared to the more accurate full WW method.

  8. Select critically ill patients at risk of augmented renal clearance: experience in a Malaysian intensive care unit.

    PubMed

    Adnan, S; Ratnam, S; Kumar, S; Paterson, D; Lipman, J; Roberts, J; Udy, A A

    2014-11-01

Augmented renal clearance (ARC) refers to increased solute elimination by the kidneys. ARC has considerable implications for altered drug concentrations. The aims of this study were to describe the prevalence of ARC in a select cohort of patients admitted to a Malaysian intensive care unit (ICU) and to compare measured and calculated creatinine clearances in this group. Patients with an expected ICU stay of >24 hours plus an admission serum creatinine concentration <120 µmol/l were enrolled from May to July 2013. Twenty-four-hour urinary collections and serum creatinine concentrations were used to measure creatinine clearance. A total of 49 patients were included, with a median age of 34 years. Most study participants were male and admitted after trauma. Thirty-nine percent were found to have ARC. These patients were more commonly admitted in emergency (P=0.03), although no other covariates were identified as predicting ARC, likely due to the inclusion criteria and the study being under-powered. Significant imprecision was demonstrated when comparing calculated Cockcroft-Gault creatinine clearance (Crcl) and measured Crcl. Bias was larger in ARC patients, with Cockcroft-Gault Crcl being significantly lower than measured Crcl (P <0.01) and demonstrating poor correlation (rs=-0.04). In conclusion, critically ill patients with 'normal' serum creatinine concentrations have varied Crcl. Many are at risk of ARC, which may necessitate individualised drug dosing. Furthermore, significant bias and imprecision between calculated and measured Crcl exists, suggesting clinicians should carefully consider which method they employ in assessing renal function.
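The two clearance figures being compared can be sketched with their standard formulas (Cockcroft-Gault and the timed-urine clearance CrCl = U x V / (P x t); the unit conversion 1 mg/dL = 88.4 µmol/L is standard, and the patient values below are invented for illustration, not taken from the study):

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_umol_l, female=False):
    """Cockcroft-Gault estimated creatinine clearance (mL/min), with
    serum creatinine converted from umol/L to mg/dL (1 mg/dL = 88.4 umol/L).
    The 0.85 factor applies for female patients."""
    scr_mg_dl = scr_umol_l / 88.4
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return 0.85 * crcl if female else crcl

def measured_crcl(urine_cr_umol_l, urine_vol_ml, scr_umol_l, minutes=1440):
    """Measured creatinine clearance (mL/min) from a timed urine
    collection: CrCl = (U x V) / (P x t); minutes=1440 for 24 hours."""
    return urine_cr_umol_l * urine_vol_ml / (scr_umol_l * minutes)
```

In ARC the measured value from the second function can substantially exceed the Cockcroft-Gault estimate from the first, which is the bias the study reports.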

  9. Cost Analysis of an Intervention to Prevent Methicillin-Resistant Staphylococcus Aureus (MRSA) Transmission

    PubMed Central

    Chowers, Michal; Carmeli, Yehuda; Shitrit, Pnina; Elhayany, Asher; Geffen, Keren

    2015-01-01

    Introduction Our objective was to assess the cost implications of a vertical MRSA prevention program that led to a reduction in MRSA bacteremia. Methods We performed a matched historical cohort study and cost analysis in a single hospital in Israel for the years 2005-2011. The cost of MRSA bacteremia was calculated as total hospital cost for patients admitted with bacteremia and for patients with hospital-acquired bacteremia, the difference in cost compared to matched controls. The cost of prevention was calculated as the sum of the cost of microbiology tests, single-use equipment used for patients in isolation, and infection control personnel. Results An average of 20,000 patients were screened yearly. The cost of prevention was $208,100 per year, with the major contributor being laboratory cost. We calculated that our intervention averted 34 cases of bacteremia yearly: 17 presenting on admission and 17 acquired in the hospital. The average cost of a case admitted with bacteremia was $14,500, and the net cost attributable to nosocomial bacteremia was $9,400. Antibiotics contributed only 0.4% of the total disease management cost. When the annual cost of averted cases of bacteremia and that of prevention were compared, the intervention resulted in annual cost savings of $199,600. Conclusions A vertical MRSA prevention program targeted at high-risk patients, which was highly effective in preventing bacteremia, is cost saving. These results suggest that allocating resources to targeted prevention efforts might be beneficial even in a single institution in a high incidence country. PMID:26406889
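The savings arithmetic in the abstract can be reproduced almost exactly from the reported figures (the small residual difference from the stated $199,600 comes from rounding in the reported per-case averages):

```python
# Figures reported in the abstract
cases_averted_on_admission = 17
cases_averted_nosocomial = 17
cost_per_admitted_case = 14_500      # USD, total hospital cost per case
net_cost_nosocomial_case = 9_400     # USD, attributable cost vs. matched controls
annual_prevention_cost = 208_100     # USD per year

averted_cost = (cases_averted_on_admission * cost_per_admitted_case
                + cases_averted_nosocomial * net_cost_nosocomial_case)
annual_savings = averted_cost - annual_prevention_cost
# annual_savings lands close to the reported $199,600/year
```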

  10. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-11-01

Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for solving the eigenvalue problems arising in electronic structure calculations. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  11. Colorectal cancer occurs earlier in those exposed to tobacco smoke: implications for screening

    PubMed Central

    Mahoney, Martin C.; Cummings, K. Michael; Michalek, Arthur M.; Reid, Mary E.; Moysich, Kirsten B.; Hyland, Andrew

    2011-01-01

Background Colorectal cancer (CRC) is the third most common cancer in the USA. While various lifestyle factors have been shown to alter the risk for colorectal cancer, recommendations for the early detection of CRC are based only on age and family history. Methods This case-only study examined the age at diagnosis of colorectal cancer in subjects exposed to tobacco smoke. Subjects included all patients who attended RPCI between 1957 and 1997, were diagnosed with colorectal cancer, and completed an epidemiologic questionnaire. Adjusted linear regression models were calculated for the various smoking exposures. Results Of the 3,540 cases of colorectal cancer, current smokers demonstrated the youngest age of CRC onset (never: 64.2 vs. current: 57.4, P < 0.001) compared to never smokers, followed by recent former smokers. Among never smokers, individuals with past second-hand smoke exposure were diagnosed at a significantly younger age compared to the unexposed. Conclusion This study found that individuals with heavy, long-term tobacco smoke exposure were significantly younger at the time of CRC diagnosis compared to lifelong never smokers. The implication of this finding is that screening for colorectal cancer, which is recommended to begin at age 50 years for persons at average risk, should be initiated 5–10 years earlier for persons with a significant lifetime history of exposure to tobacco smoke. PMID:18264728

  12. Embryonic metabolism of the ornithischian dinosaurs Protoceratops andrewsi and Hypacrosaurus stebingeri and implications for calculations of dinosaur egg incubation times

    NASA Astrophysics Data System (ADS)

    Lee, Scott A.

    2017-04-01

The embryonic metabolisms of the ornithischian dinosaurs Protoceratops andrewsi and Hypacrosaurus stebingeri have been determined and are in the range observed in extant reptiles. The average value of the measured embryonic metabolic rates for P. andrewsi and H. stebingeri is then used to calculate the incubation times for 21 dinosaurs from both Saurischia and Ornithischia using a mass growth model based on conservation of energy. The calculated incubation times vary from about 70 days for Archaeopteryx lithographica to about 180 days for Alamosaurus sanjuanensis. Such long incubation times seem unlikely, particularly for the sauropods and large theropods. Incubation times are also predicted with the assumption that the saurischian dinosaurs had embryonic metabolisms in the range observed in extant birds.

  13. Comparisons of measured and calculated potential magnetic fields. [in solar corona

    NASA Technical Reports Server (NTRS)

    Hagyard, M. J.; Teuber, D.

    1978-01-01

    Photospheric line-of-sight and transverse-magnetic-field data obtained with a vector magnetograph system for an isolated sunspot are described. A study of the linear polarization patterns and of the calculated transverse field lines indicates that the magnetic field of the region is very nearly potential. The H-alpha fibril structures of this region as seen in high-resolution photographs corroborate this conclusion. Consequently, a potential-field calculation is described using the measured line-of-sight fields together with assumed Neumann boundary conditions; both are necessary and sufficient for a unique solution. The computed transverse fields are then compared with the measured transverse fields to verify the potential-field model and assumed boundary values. The implications of these comparisons for the validity of magnetic-field extrapolations using potential theory are discussed.

  14. Leading-order calculation of hadronic contributions to the Muon g-2 using the Dyson-Schwinger approach

    NASA Astrophysics Data System (ADS)

    Goecke, Tobias; Fischer, Christian S.; Williams, Richard

    2011-10-01

We present a calculation of the hadronic vacuum polarisation (HVP) tensor within the framework of Dyson-Schwinger equations. To this end we use a well-established phenomenological model for the quark-gluon interaction with parameters fixed to reproduce hadronic observables. From the HVP tensor we compute both the Adler function and the HVP contribution to the anomalous magnetic moment of the muon, aμ. We find aμHVP = 6760 × 10^-11, which deviates by about two percent from the value extracted from experiment. Additionally, we make comparison with a recent lattice determination of aμHVP and find good agreement within our approach. We also discuss the implications of our result for a corresponding calculation of the hadronic light-by-light scattering contribution to aμ.

  15. Embryonic metabolism of the ornithischian dinosaurs Protoceratops andrewsi and Hypacrosaurus stebingeri and implications for calculations of dinosaur egg incubation times.

    PubMed

    Lee, Scott A

    2017-04-01

The embryonic metabolisms of the ornithischian dinosaurs Protoceratops andrewsi and Hypacrosaurus stebingeri have been determined and are in the range observed in extant reptiles. The average value of the measured embryonic metabolic rates for P. andrewsi and H. stebingeri is then used to calculate the incubation times for 21 dinosaurs from both Saurischia and Ornithischia using a mass growth model based on conservation of energy. The calculated incubation times vary from about 70 days for Archaeopteryx lithographica to about 180 days for Alamosaurus sanjuanensis. Such long incubation times seem unlikely, particularly for the sauropods and large theropods. Incubation times are also predicted with the assumption that the saurischian dinosaurs had embryonic metabolisms in the range observed in extant birds.

  16. Calculations of Maxwellian-averaged cross sections and astrophysical reaction rates using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Mughaghab, S. F.; Sonzogni, A. A.

    2010-11-01

    We have calculated the Maxwellian-averaged cross sections and astrophysical reaction rates of the stellar nucleosynthesis reactions (n, γ), (n, fission), (n, p), (n, α), and (n, 2n) using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries. These four major nuclear reaction libraries were processed under the same conditions for Maxwellian temperatures (kT) ranging from 1 keV to 1 MeV. We compare our current calculations of the s-process nucleosynthesis nuclei with previous data sets and discuss the differences between them and the implications for nuclear astrophysics.
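The quantity being computed is the Maxwellian-averaged cross section, <sigma> = (2/sqrt(pi)) * (kT)^-2 * integral of sigma(E) * E * exp(-E/kT) dE. A generic numerical sketch (the tabulated sigma(E) values would come from the processed evaluated libraries; the 1/v toy cross section used in the check below is an assumption for illustration only):

```python
import math

def macs(energies, sigmas, kT):
    """Maxwellian-averaged cross section:
    <sigma> = (2/sqrt(pi)) * (kT)**-2 * integral sigma(E)*E*exp(-E/kT) dE,
    evaluated with the trapezoidal rule on a tabulated sigma(E).
    Energies and kT share one unit (e.g. keV); sigma keeps its own unit."""
    integrand = [s * e * math.exp(-e / kT) for e, s in zip(energies, sigmas)]
    integral = sum(0.5 * (integrand[i] + integrand[i - 1])
                   * (energies[i] - energies[i - 1])
                   for i in range(1, len(energies)))
    return (2.0 / math.sqrt(math.pi)) * integral / kT ** 2
```

A useful analytic check: for a pure 1/v cross section the Maxwellian average equals the cross section evaluated at E = kT.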

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. 
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  18. Portfolio assessment and evaluation: implications and guidelines for clinical nursing education.

    PubMed

    Chabeli, M M

    2002-08-01

    With the advent of Outcomes-Based Education in South Africa, the quality of nursing education is debatable, especially with regard to the assessment and evaluation of clinical nursing education, which is complex and renders the validity and reliability of the methods used questionable. This paper seeks to explore and describe the use of portfolio assessment and evaluation, its implications and guidelines for its effective use in nursing education. Firstly, the concepts of assessment, evaluation, portfolio and alternative methods of evaluation are defined. Secondly, a comparison of the characteristics of the old (traditional) methods and the new alternative methods of evaluation is made. Thirdly, through deductive analysis, synthesis and inference, implications and guidelines for the effective use of portfolio assessment and evaluation are described. In view of the qualitative, descriptive and exploratory nature of the study, a focus group interview was conducted with twenty students following a post-basic degree at a university in Gauteng regarding their perceptions of the use of the portfolio assessment and evaluation method in clinical nursing education. A descriptive method of qualitative data analysis of open coding in accordance with Tesch's protocol (in Creswell 1994:155) was used. Resultant implications and guidelines were conceptualised and described within the existing theoretical framework. Principles of trustworthiness were maintained as described by Lincoln and Guba (1985:290-327). Ethical considerations were in accordance with DENOSA's standards of research (1998:7).

  19. Space industrialization. Volume 3: World and domestic implications

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The status of worldwide space industrialization activities is assessed, as well as the benefits to be anticipated from enhanced activities. Methods for stimulating space industrialization growth are discussed with emphasis on foreign and international activities, national and world impact assessments, industry/government interfaces, legal implications, institutional implications, economics and capitalization, and implementation issues and recommendations.

  20. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Nicholson, S.; Moore, J.

    1986-01-01

    A method was developed which calculates two-dimensional, transonic, viscous flow in ducts. The finite volume, time marching formulation is used to obtain steady flow solutions of the Reynolds-averaged form of the Navier-Stokes equations. The entire calculation is performed in the physical domain. The method is currently limited to the calculation of attached flows. The features of the current method can be summarized as follows. Control volumes are chosen so that smoothing of flow properties, typically required for stability, is not needed. Different time steps are used in the different governing equations to improve the convergence speed of the viscous calculations. A new pressure interpolation scheme is introduced which improves the shock capturing ability of the method. A multi-volume method for pressure changes in the boundary layer allows calculations which use very long and thin control volumes. A special discretization technique is also used to stabilize these calculations. A special formulation of the energy equation is used to provide improved transient behavior of solutions which use the full energy equation. The method is then compared with a wide variety of test cases. The freestream Mach numbers range from 0.075 to 2.8 in the calculations. Transonic viscous flow in a converging-diverging nozzle is calculated with the method; the Mach number upstream of the shock is approximately 1.25. The agreement between the calculated and measured shock strength and total pressure losses is good. Essentially incompressible turbulent boundary layer flow in an adverse pressure gradient is calculated and the computed distributions of mean velocity and shear stress are in good agreement with the measurements. 
At the other end of the Mach number range, a flat plate turbulent boundary layer with a freestream Mach number of 2.8 is calculated using the full energy equation; the computed total temperature distribution and recovery factor agree well with the measurements when a variable Prandtl number is used through the boundary layer.

  1. Environment-based pin-power reconstruction method for homogeneous core calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-07-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  2. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  3. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
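The standard-deviation-of-the-blank approach mentioned in the abstract is commonly written as LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of blank responses and S is the calibration slope. A minimal sketch, with hypothetical blank signals and slope (these are not values from the cited study):

```python
import statistics

def lod_loq_from_calibration(slope, blank_signals):
    """Estimate LOD and LOQ from the calibration slope and the standard
    deviation of replicate blank measurements (3.3*sd/S and 10*sd/S)."""
    sd = statistics.stdev(blank_signals)  # sample standard deviation
    return 3.3 * sd / slope, 10.0 * sd / slope

# hypothetical replicate blank signals and calibration slope
blanks = [0.011, 0.013, 0.010, 0.012, 0.011, 0.012, 0.013]
lod, loq = lod_loq_from_calibration(slope=0.85, blank_signals=blanks)
```

Because both quantities scale with the same σ/S ratio, LOQ/LOD is fixed at 10/3.3 under this convention, which is one reason different calculation methods (blank-based, calibration-residual-based, signal-to-noise) can disagree as the abstract reports.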

  4. The calculations of small molecular conformation energy differences by density functional method

    NASA Astrophysics Data System (ADS)

    Topol, I. A.; Burt, S. K.

    1993-03-01

The differences in the conformational energies for the gauche (G) and trans (T) conformers of 1,2-difluoroethane and for the myo- and scyllo-conformers of inositol have been calculated by the local density functional method (LDF approximation) with geometry optimization using different sets of calculation parameters. It is shown that, in contrast to Hartree-Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and the energy difference between the two conformers of inositol. The results of a normal vibrational analysis for 1,2-difluoroethane showed that harmonic frequencies calculated in the LDF approximation agree with experimental data with the accuracy typical of scaled large-basis-set Hartree-Fock calculations.

  5. A submerged singularity method for calculating potential flow velocities at arbitrary near-field points

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1976-01-01

A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
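The building block of any such discrete-vortex method is the velocity induced at a field point by 2-D point vortices. The following is a generic sketch of that kernel (using the complex-velocity form u - iv = Σ Γⱼ / (2πi(z - zⱼ))), not Maskew's actual submerged-singularity discretization:

```python
import math

def induced_velocity(z, vortices):
    """Velocity (u, v) at complex field point z induced by 2-D point
    vortices given as (position z_j, circulation gamma_j) pairs, via
    the complex velocity u - i*v = sum_j gamma_j / (2*pi*i*(z - z_j))."""
    w = sum(g / (2j * math.pi * (z - zj)) for zj, g in vortices)
    return w.real, -w.imag

# single counterclockwise vortex of circulation 2*pi at the origin:
# tangential speed at radius r is gamma/(2*pi*r) = 1/r
u, v = induced_velocity(1.0 + 0.0j, [(0.0j, 2.0 * math.pi)])
```

The near-field accuracy problem the abstract addresses arises because this kernel is singular as z approaches a vortex position; submerging the singularities below the surface keeps evaluation points outside that error region.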

  6. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    NASA Astrophysics Data System (ADS)

    Ma, J.; Liu, Q.

    2018-02-01

This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, it is difficult for existing methods to calculate the short circuit current of a DFIG in engineering practice because of its complexity. A short circuit calculation method based on a pre-computed surface was therefore proposed by developing the surface of the short circuit current as it changes with the calculating impedance and the open circuit voltage. The short circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.

  7. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

The article describes the calculation of magnetic fields in technical-system diagnostic problems based on full-scale modelling experiments. Using the gridless method of fundamental solutions and its variants in combination with grid methods (finite differences and finite elements) considerably reduces the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. Much attention is also given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of magnetic field calculations, and examples of this approach are given. The article shows the results of this research, which allow the authors to recommend this approach within the method of fundamental solutions for full-scale modelling tests of technical systems.

  8. A hypersonic aeroheating calculation method based on inviscid outer edge of boundary layer parameters

    NASA Astrophysics Data System (ADS)

    Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin

    2016-12-01

This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining the accuracy of a numerical method with the efficiency of an engineering method, which makes aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscous flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are numerically calculated by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. The results for the double cone show that, at 0° and 10° angles of attack, the aeroheating calculation method based on inviscid outer-edge-of-boundary-layer parameters reproduces the experimental data better than the engineering method. The proposed method's simulation results for the flight vehicle also reproduce the viscous numerical results well. Hence, this method provides a promising way to overcome the high cost of numerical calculation while improving precision.

  9. Methods for specifying spatial boundaries of cities in the world: The impacts of delineation methods on city sustainability indices.

    PubMed

    Uchiyama, Yuta; Mori, Koichiro

    2017-08-15

The purpose of this paper is to analyze how different definitions and methods for delineating the spatial boundaries of cities have an impact on the values of city sustainability indicators. It is necessary to distinguish the inside of cities from the outside when calculating the values of sustainability indicators that assess the impacts of human activities within cities on areas beyond their boundaries. For this purpose, spatial boundaries of cities should be practically detected on the basis of a relevant definition of a city. Although no definition of a city is commonly shared among academic fields, three practical methods for identifying urban areas are available in remote sensing science. Those practical methods are based on population density, land cover, and night-time lights. These methods are correlated, but non-negligible differences exist in their determination of urban extents and urban population. Furthermore, critical and statistically significant differences in some urban environmental sustainability indicators result from the three different urban detection methods. For example, the average values of CO2 emissions per capita and PM10 concentration in cities with more than 1 million residents are significantly different among the definitions. When analyzing city sustainability indicators and disseminating the implications of the results, the values based on the different definitions should be simultaneously investigated. It is necessary to carefully choose a relevant definition to analyze sustainability indicators for policy making. Otherwise, ineffective and inefficient policies will be developed. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.

    PubMed

    Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste

    2009-12-21

    The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.

  11. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  12. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)

  13. Ab initio method for calculating total cross sections

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Schneider, B. I.; Temkin, A.

    1993-01-01

A method for calculating total cross sections without formally including nonelastic channels is presented. The idea is to use a one-channel T-matrix variational principle with a complex correlation function. The derived T matrix is therefore not unitary. Elastic scattering is calculated from |T|^2, but total scattering is derived from the imaginary part of T using the optical theorem. The method is applied to the spherically symmetric model of electron-hydrogen scattering. No spurious structure arises; results for sigma(el) and sigma(total) are in excellent agreement with calculations of Callaway and Oza (1984). The method has wide potential applicability.
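In partial-wave form, the two quantities the abstract contrasts are sigma_el = (4π/k²) Σ (2l+1)|T_l|² and, via the optical theorem, sigma_tot = (4π/k²) Σ (2l+1) Im T_l. A short sketch under those standard formulas (the k value and phase shifts below are illustrative, not from the paper): for a unitary T the two agree, while a non-unitary (absorptive) T gives sigma_tot > sigma_el, the surplus representing the nonelastic channels.

```python
import cmath
import math

def cross_sections(k, t_matrix):
    """Elastic cross section from |T_l|^2 and total cross section from
    Im T_l (optical theorem), in partial-wave form."""
    pref = 4.0 * math.pi / k**2
    sigma_el = pref * sum((2*l + 1) * abs(t)**2 for l, t in enumerate(t_matrix))
    sigma_tot = pref * sum((2*l + 1) * t.imag for l, t in enumerate(t_matrix))
    return sigma_el, sigma_tot

# unitary case: T_l = exp(i*delta_l)*sin(delta_l) with real phase shifts
deltas = [0.8, 0.3, 0.1]                      # illustrative phase shifts (rad)
T_unitary = [cmath.exp(1j * d) * math.sin(d) for d in deltas]
sig_el, sig_tot = cross_sections(1.5, T_unitary)

# absorptive case: damping T breaks unitarity, so sigma_tot exceeds sigma_el
T_absorb = [0.9 * t for t in T_unitary]
sig_el_abs, sig_tot_abs = cross_sections(1.5, T_absorb)
```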

  14. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  15. Methods for the Calculation of Settling Tanks for Batch Experiments; METODOS DE CALCULO DE ESPESADORES POR ENSAYOS DISCONTINUOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasos, P.; Perea, C.P.; Jodra, L.G.

    1957-01-01

In order to calculate settling tanks, tests on batch sedimentation were made, and with the data obtained the dimensions of the settling tank were found. The mechanism of sedimentation is first briefly described, and then the factors involved in the calculation of the dimensions and the sedimentation velocity are discussed. The Coe and Clevenger method and the Kynch method were investigated experimentally and compared. The application of the calculations is illustrated. It is shown that the two methods gave markedly different results. (J.S.R.)
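The Coe-Clevenger sizing step can be sketched as follows: for each tested solids concentration C with measured settling velocity u, the required area is F·(1/C - 1/Cu)/u for solids feed F and underflow concentration Cu, and the design area is the maximum over the tested concentrations. The numbers below are illustrative (SI units: kg/s, kg/m³, m/s), not data from the report:

```python
def thickener_area(solids_feed, conc_points, underflow_conc):
    """Coe-Clevenger unit-area sizing: A = max over tested concentrations of
    F * (1/C - 1/Cu) / u, using batch settling-velocity data.
    solids_feed F in kg/s, concentrations in kg/m^3, velocities u in m/s."""
    return max(solids_feed * (1.0 / C - 1.0 / underflow_conc) / u
               for C, u in conc_points
               if C < underflow_conc)

# hypothetical batch-test data: (concentration, settling velocity) pairs
data = [(50.0, 1.2e-4), (100.0, 4.0e-5), (200.0, 8.0e-6)]
area = thickener_area(solids_feed=2.0, conc_points=data, underflow_conc=400.0)
```

The Kynch method instead extracts the flux curve from a single batch settling test via tangent constructions, which is one reason the two methods can give the markedly different results the abstract reports.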

  16. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
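The forward-weighting idea can be illustrated on a toy spatial mesh: the adjoint source is weighted inversely by the forward flux (targeting uniform relative uncertainty), and weight-window centers are taken inversely proportional to the resulting adjoint function. This is only a schematic of the concept with hypothetical 3-cell flux values, not the MAVRIC/ADVANTG implementation:

```python
def fw_cadis_parameters(forward_flux, adjoint_flux, source_cell=0):
    """Sketch of FW-CADIS parameter generation on a coarse mesh.
    - adjoint source ~ 1/phi_forward, so poorly sampled regions are
      emphasized in the adjoint calculation;
    - weight-window centers ~ 1/phi_adjoint, normalized so the source
      cell has center weight 1 (low weights trigger splitting where
      importance is high)."""
    adjoint_source = [1.0 / phi for phi in forward_flux]
    centers = [adjoint_flux[source_cell] / psi for psi in adjoint_flux]
    return adjoint_source, centers

# hypothetical forward and adjoint flux estimates on a 3-cell mesh
src, ww = fw_cadis_parameters([4.0, 2.0, 0.5], [8.0, 2.0, 0.1])
```

The deep cell (small forward flux, small adjoint in this toy setup) gets both the largest adjoint source weight and the largest weight-window center, steering particles toward uniform tally statistics.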

  17. Toward Quantifying the Mass-Based Hygroscopicity of Individual Submicron Atmospheric Aerosol Particles with STXM/NEXAFS and SEM/EDX

    NASA Astrophysics Data System (ADS)

    Yancey Piens, D.; Kelly, S. T.; OBrien, R. E.; Wang, B.; Petters, M. D.; Laskin, A.; Gilles, M. K.

    2014-12-01

The hygroscopic behavior of atmospheric aerosols influences their optical and cloud-nucleation properties, and therefore affects climate. Although changes in particle size as a function of relative humidity have often been used to quantify the hygroscopic behavior of submicron aerosol particles, it has been noted that calculations of hygroscopicity based on size contain error due to particle porosity, non-ideal volume additivity and changes in surface tension. We will present a method to quantify the hygroscopic behavior of submicron aerosol particles based on changes in mass, rather than size, as a function of relative humidity. This method results from a novel experimental approach combining scanning transmission x-ray microscopy with near-edge x-ray absorption fine spectroscopy (STXM/NEXAFS), as well as scanning electron microscopy with energy dispersive x-ray spectroscopy (SEM/EDX) on the same individual particles. First, using STXM/NEXAFS, our methods are applied to aerosol particles of known composition (for instance ammonium sulfate, sodium bromide and levoglucosan) and validated by theory. Then, using STXM/NEXAFS and SEM/EDX, these methods are extended to mixed atmospheric aerosol particles collected in the field at the DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility at the Southern Great Plains sampling site in Oklahoma, USA. We have observed and quantified a range of hygroscopic behaviors which are correlated to the composition and morphology of individual aerosol particles. These methods will have implications for parameterizing aerosol mixing state and cloud-nucleation activity in atmospheric models.

  18. Prediction of space sickness in astronauts from preflight fluid, electrolyte, and cardiovascular variables and Weightless Environmental Training Facility (WETF) training

    NASA Technical Reports Server (NTRS)

    Simanonok, K.; Mosely, E.; Charles, J.

    1992-01-01

Nine preflight variables related to fluid, electrolyte, and cardiovascular status from 64 first-time Shuttle crewmembers were differentially weighted by discriminant analysis to predict the incidence and severity of each crewmember's space sickness as rated by NASA flight surgeons. The nine variables are serum uric acid, red cell count, environmental temperature at the launch site, serum phosphate, urine osmolality, serum thyroxine, sitting systolic blood pressure, calculated blood volume, and serum chloride. Using two methods of cross-validation on the original samples (jackknife and a stratified random subsample), these variables enable the prediction of space sickness incidence (NONE or SICK) with 80 percent success, and of space sickness severity (NONE, MILD, MODERATE, or SEVERE) with 59 percent success by one method of cross-validation and 67 percent by the other. Addition of a tenth variable, hours spent in the Weightless Environmental Training Facility (WETF), did not improve the prediction of space sickness incidence but did improve the prediction of space sickness severity to 66 percent success by the first method of cross-validation and to 71 percent by the second. Results to date suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. The data also suggest that prior exposure to fluid shift during WETF training may produce some circulatory pre-adaptation to fluid shifts in weightlessness that results in a reduction of space sickness severity.
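The jackknife validation the abstract mentions refits the classifier with each sample held out in turn and scores the held-out prediction. A minimal sketch with discriminant analysis reduced to a nearest-centroid rule on two made-up features and labels (the real study used nine weighted physiological variables):

```python
from collections import defaultdict

def centroids(X, y):
    """Per-class feature means (the simplest discriminant model)."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    return {c: tuple(sum(col) / len(rows) for col in zip(*rows))
            for c, rows in by_class.items()}

def predict(cents, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: d2(cents[c], x))

def jackknife_accuracy(X, y):
    """Leave-one-out cross-validation: refit without sample i, score on i."""
    hits = 0
    for i in range(len(X)):
        cents = centroids(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += predict(cents, X[i]) == y[i]
    return hits / len(X)

# hypothetical two-feature data for two outcome classes
X = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0), (5.1, 4.9), (4.8, 5.2)]
y = ["NONE", "NONE", "NONE", "SICK", "SICK", "SICK"]
acc = jackknife_accuracy(X, y)
```

Because every sample is scored on a model that never saw it, the jackknife estimate guards against the optimism of resubstitution accuracy, which matters with only 64 crewmembers.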

  19. Calculation reduction method for color digital holography and computer-generated hologram using color space conversion

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Nagahama, Yuki; Kakue, Takashi; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Ito, Tomoyoshi

    2014-02-01

A calculation reduction method for color digital holography (DH) and computer-generated holograms (CGHs) using color space conversion is reported. Color DH and color CGHs are generally calculated in RGB space. We calculate color DH and CGHs in other color spaces, e.g., YCbCr color space, to accelerate the calculation. In YCbCr color space, an RGB image or RGB hologram is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). To the human eye, small differences in the luminance component are readily noticed, whereas differences in the chroma components are not. In this method, the luminance component is therefore sampled at full resolution and the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color DH and CGHs. We compute diffraction calculations from the components, and then convert the diffracted results from YCbCr color space back to RGB color space. The proposed method, which in theory can accelerate the calculations by up to a factor of 3, runs more than twice as fast as the same calculations in RGB color space.
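The pre-processing step can be sketched as an RGB-to-YCbCr conversion followed by 2x2 chroma averaging, after which the diffraction calculation runs on one full-resolution plane (Y) and two quarter-size planes (Cb, Cr) instead of three full-resolution RGB planes. A minimal sketch using the BT.601 full-range coefficients (the paper does not specify which conversion matrix it uses):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> (Y, Cb, Cr), values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def downsample2(c):
    """2x2 block averaging: the chroma down-sampling that cuts the cost
    of the subsequent diffraction calculation."""
    h, w = c.shape
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.full((4, 4, 3), 0.5)        # uniform gray test image
yy, cb, cr = rgb_to_ycbcr(img)
cb_small = downsample2(cb)           # 2x2 plane instead of 4x4
```

With the chroma planes at quarter resolution, the three diffraction transforms cost roughly 1 + 1/4 + 1/4 of one full-resolution transform, which is the theoretical factor-of-2 speedup the abstract describes (approaching 3 only if chroma cost became negligible).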

  20. Calculated Coupling Efficiency Between an Elliptical-Core Optical Fiber and a Silicon Oxynitride Rib Waveguide [Corrected Copy]

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Beheim, Glenn

    1995-01-01

    The effective-index method and Marcatili's technique were utilized independently to calculate the electric field profile of a rib channel waveguide. Using the electric field profile calculated from each method, the theoretical coupling efficiency between a single-mode optical fiber and a rib waveguide was calculated using the overlap integral. Perfect alignment was assumed and the coupling efficiency calculated. The coupling efficiency calculation was then repeated for a range of transverse offsets.
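The overlap integral used for the coupling efficiency is η = |∫E₁E₂* dA|² / (∫|E₁|² dA · ∫|E₂|² dA). A 1-D sketch with Gaussian stand-ins for the fiber and rib-waveguide fields (the mode widths and offset are illustrative, not the paper's calculated profiles):

```python
import numpy as np

def coupling_efficiency(e1, e2, dx):
    """Overlap-integral coupling efficiency between two 1-D field profiles
    sampled on a common grid with spacing dx."""
    num = abs(np.sum(e1 * np.conj(e2)) * dx) ** 2
    den = (np.sum(np.abs(e1) ** 2) * dx) * (np.sum(np.abs(e2) ** 2) * dx)
    return num / den

x = np.linspace(-20.0, 20.0, 2001)   # transverse coordinate, arbitrary units
dx = x[1] - x[0]
e_fiber = np.exp(-(x / 4.0) ** 2)            # hypothetical fiber mode
e_guide = np.exp(-((x - 1.0) / 3.0) ** 2)    # hypothetical guide mode, 1-unit offset
eta = coupling_efficiency(e_fiber, e_guide, dx)
eta_perfect = coupling_efficiency(e_fiber, e_fiber, dx)
```

Identical, perfectly aligned fields give η = 1; any width mismatch or transverse offset (the sweep the abstract describes) reduces η below 1.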

  1. Pressure algorithm for elliptic flow calculations with the PDF method

    NASA Technical Reports Server (NTRS)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  2. A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a massless photon mediator, the method allows the mediator to have mass for short-range interactions. This method generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, where the massive mediator replaces the photon. The resulting calculated cross section is compared to existing data for the elastic proton-proton interaction. A good approximate fit is found between the data and the calculation.

  3. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find that calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  4. Hybrid method (JM-ECS) combining the J-matrix and exterior complex scaling methods for scattering calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanroose, W.; Broeckhove, J.; Arickx, F.

    The paper proposes a hybrid method for calculating scattering processes. It combines the J-matrix method with exterior complex scaling and an absorbing boundary condition. The wave function is represented as a finite sum of oscillator eigenstates in the inner region, and it is discretized on a grid in the outer region. The method is validated for a one- and a two-dimensional model with partial wave equations and a calculation of p-shell nuclear scattering with semirealistic interactions.

  5. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...

  6. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...

  7. Calculation methods for compressible turbulent boundary layers, 1976

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1977-01-01

    Equations and closure methods for compressible turbulent boundary layers are discussed. Flow phenomena peculiar to calculation of these boundary layers were considered, along with calculations of three dimensional compressible turbulent boundary layers. Procedures for ascertaining nonsimilar two and three dimensional compressible turbulent boundary layers were appended, including finite difference, finite element, and mass-weighted residual methods.

  8. An effective method to accurately calculate the phase space factors for β - β - decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. Here, we present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  9. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require intensive computer calculations. A computer program has been developed to implement the PFTA.
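The Monte Carlo core of such a procedure can be illustrated on a toy fault tree, TOP = (A AND B) OR C, by sampling the bottom events and counting top-event occurrences. This is a plain-sampling stand-in for the paper's adaptive importance sampling, with hypothetical bottom-event probabilities:

```python
import random

def top_event_probability(basic_probs, trials=200_000, seed=1):
    """Monte Carlo estimate of the top-event probability of the toy fault
    tree TOP = (A AND B) OR C, with independent bottom events."""
    rng = random.Random(seed)
    pa, pb, pc = basic_probs
    hits = 0
    for _ in range(trials):
        a = rng.random() < pa
        b = rng.random() < pb
        c = rng.random() < pc
        hits += (a and b) or c
    return hits / trials

p = top_event_probability((0.1, 0.2, 0.05))
# analytic check for this tree: 1 - (1 - 0.1*0.2)*(1 - 0.05) = 0.069
```

Importance sampling replaces the uniform draws with a biased distribution concentrated near the dominant failure region and reweights the tally, which is what makes the method tractable when bottom-event probabilities are very small.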

  10. A new method for calculating ecological flow: Distribution flow method

    NASA Astrophysics Data System (ADS)

    Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei

    2018-04-01

A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM and its index and grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with a traditional hydrological ecological-flow calculation method, a flow evaluation method, and calculated fish ecological-flow results. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distributions during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high spatiotemporal applicability and application value.
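The kernel density estimation underlying such a method can be sketched directly: a Gaussian KDE built from a flow record gives a smooth flow-frequency distribution from which ecological flow thresholds can be read off. The daily-flow sample and bandwidth below are illustrative, not Yangtze data, and this is the plain (not "broadening") Gaussian KDE:

```python
import math

def gaussian_kde_density(samples, x, bandwidth):
    """Plain Gaussian kernel density estimate of a flow distribution,
    evaluated at x: (1/(n*h)) * sum_i K((x - x_i)/h)."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / (n * bandwidth * math.sqrt(2.0 * math.pi))

# hypothetical daily mean flows (m^3/s)
flows = [320.0, 450.0, 500.0, 610.0, 480.0, 390.0, 700.0, 520.0, 430.0, 560.0]
density_at_500 = gaussian_kde_density(flows, 500.0, bandwidth=60.0)
```

Because the estimate pools the whole record into one smooth density, a single extreme flood or drought day moves the curve far less than it moves, say, a fixed-percentile statistic, which is the robustness property the abstract claims.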

  11. Effects of seed source origin on bark thickness of Douglas-fir (Pseudotsuga menziesii) growing in southwestern Germany

    Treesearch

    Ulrich Kohnle; Sebastian Hein; Frank C. Sorensen; Aaron R. Weiskittel

    2012-01-01

    Provenance-specific variation in bark thickness in Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) is important for accurate volume calculations and might carry ecological implications as well. To investigate variation, diameter at breast height (dbh) and double bark thickness (dbt) were measured in 10 experiments in southwestern Germany (16...

  12. Student Growth Percentiles Based on MIRT: Implications of Calibrated Projection. CRESST Report 842

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li; Choi, Kilchan

    2014-01-01

    This research concerns a new proposal for calculating student growth percentiles (SGP, Betebenner, 2009). In Betebenner (2009), quantile regression (QR) is used to estimate the SGPs. However, measurement error in the score estimates, which always exists in practice, leads to bias in the QR-­based estimates (Shang, 2012). One way to address this…

  13. Isosinglet approximation for nonelastic reactions

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.

    1972-01-01

    Group theoretic relations are derived between different combinations of projectile and secondary particles which appear to have a broad range of application in spacecraft shielding or radiation damage studies. These relations are used to reduce the experimental effort required to obtain nuclear reaction data for transport calculations. Implications for theoretical modeling are also noted, especially for heavy-heavy reactions.

  14. Does Time-on-Task Estimation Matter? Implications for the Validity of Learning Analytics Findings

    ERIC Educational Resources Information Center

    Kovanovic, Vitomir; Gaševic, Dragan; Dawson, Shane; Joksimovic, Srecko; Baker, Ryan S.; Hatala, Marek

    2015-01-01

    With widespread adoption of Learning Management Systems (LMS) and other learning technology, large amounts of data--commonly known as trace data--are readily accessible to researchers. Trace data have been extensively used to calculate the time that students spend on different learning activities--typically referred to as time-on-task. These measures…

  15. The Relationship between Mathematics and Language: Academic Implications for Children with Specific Language Impairment and English Language Learners

    ERIC Educational Resources Information Center

    Alt, Mary; Arizmendi, Genesis D.; Beal, Carole R.

    2014-01-01

    Purpose: The present study examined the relationship between mathematics and language to better understand the nature of the deficit and the academic implications associated with specific language impairment (SLI) and academic implications for English language learners (ELLs). Method: School-age children (N = 61; 20 SLI, 20 ELL, 21 native…

  16. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, on average by -5 (± 4.4 SD) for MB and -4.7 (± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
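    As a flavor of the rank-based correlation step, a pure-Python Spearman coefficient applied to hypothetical monitor-unit doses from two algorithms (the study itself applied Friedman, Wilcoxon, Spearman and Kendall tests to real field data with standard statistical software):

```python
def rankdata(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical doses (monitor units) for the same six fields.
pbc = [100, 120, 115, 130, 140, 125]
mb  = [ 95, 113, 110, 124, 131, 118]
rho = spearman_rho(pbc, mb)
```

    A rho near +1, as the abstract reports, means the algorithms order the fields identically even though the absolute doses differ.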

  17. Molecular structure, vibrational spectral assignments (FT-IR and FT-RAMAN), NMR, NBO, HOMO-LUMO and NLO properties of O-methoxybenzaldehyde based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Vennila, P.; Govindaraju, M.; Venkatesh, G.; Kamal, C.

    2016-05-01

    Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectroscopic techniques have been used to analyze the O-methoxybenzaldehyde (OMB) molecule. The fundamental vibrational frequencies and the intensities of the vibrational bands were evaluated using density functional theory (DFT). The vibrational analysis of the stable isomer of OMB was carried out by FT-IR and FT-Raman in combination with the theoretical method. The first-order hyperpolarizability and the anisotropy polarizability invariant were computed by the DFT method. The atomic charges, hardness, softness, ionization potential, electronegativity, HOMO-LUMO energies, and electrophilicity index have been calculated. The 13C and 1H nuclear magnetic resonance (NMR) chemical shifts have also been obtained by the GIAO method. The molecular electrostatic potential (MEP) has been calculated with the same DFT method. Electronic excitation energies, oscillator strengths, and excited-state characteristics were computed by the closed-shell singlet calculation method.
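    The global reactivity descriptors named above follow from the frontier orbital energies via standard Koopmans-type relations; the orbital energies here are illustrative placeholders, not the computed OMB values, and the softness convention (1/2η) is one of several in use:

```python
# Illustrative frontier orbital energies in eV (not the paper's values).
e_homo, e_lumo = -6.2, -1.9

gap = e_lumo - e_homo                                  # HOMO-LUMO gap
ionization_potential = -e_homo                         # I ~= -E_HOMO (Koopmans)
electron_affinity = -e_lumo                            # A ~= -E_LUMO
hardness = (ionization_potential - electron_affinity) / 2
softness = 1 / (2 * hardness)                          # one common convention
electronegativity = (ionization_potential + electron_affinity) / 2
electrophilicity = electronegativity ** 2 / (2 * hardness)
```

    A large gap and hardness indicate a kinetically stable, weakly reactive molecule; the electrophilicity index condenses electronegativity and hardness into a single reactivity measure.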

  18. Use of equivalent spheres to model the relation between radar reflectivity and optical extinction of ice cloud particles.

    PubMed

    Donovan, David Patrick; Quante, Markus; Schlimme, Ingo; Macke, Andreas

    2004-09-01

    The effect of ice crystal size and shape on the relation between radar reflectivity and optical extinction is examined. Discrete-dipole approximation calculations of 95-GHz radar reflectivity and ray-tracing calculations are applied to ice crystals of various habits and sizes. Ray tracing was used primarily to calculate optical extinction and to provide approximate information on the lidar backscatter cross section. The results of the combined calculations are compared with Mie calculations applied to collections of different types of equivalent spheres. Various equivalent-sphere formulations are considered, including equivalent radar-lidar spheres, equivalent maximum dimension spheres, equivalent area spheres, and equivalent volume and equivalent effective radius spheres. Marked differences are found in the accuracy of the different formulations, and certain types of equivalent spheres can be used for useful prediction of both the radar reflectivity at 95 GHz and the optical extinction (but not the lidar backscatter cross section) over a wide range of particle sizes. The implications of these results for combined lidar-radar ice cloud remote sensing are discussed.

  19. Chiral three-nucleon forces and the evolution of correlations along the oxygen isotopic chain

    NASA Astrophysics Data System (ADS)

    Cipollone, A.; Barbieri, C.; Navrátil, P.

    2015-07-01

    Background: Three-nucleon forces (3NFs) have nontrivial implications for the evolution of correlations at extreme proton-neutron asymmetries. Recent ab initio calculations show that leading-order chiral interactions are crucial to obtain the correct binding energies and neutron driplines along the O, N, and F chains [A. Cipollone, C. Barbieri, and P. Navrátil, Phys. Rev. Lett. 111, 062501 (2013), 10.1103/PhysRevLett.111.062501]. Purpose: Here we discuss the impact of 3NFs along the oxygen chain for other quantities of interest, such as the spectral distribution for attachment and removal of a nucleon, spectroscopic factors, and radii. The objective is to better delineate the general effects of 3NFs on nuclear correlations. Methods: We employ self-consistent Green's function (SCGF) theory, which allows a comprehensive calculation of the single-particle spectral function. For the closed-subshell isotopes 14O, 16O, 22O, 24O, and 28O, we perform calculations with the Dyson-ADC(3) method, which is fully nonperturbative and is the state of the art for both nuclear physics and quantum chemistry applications. The remaining open-shell isotopes are studied using the newly developed Gorkov-SCGF formalism up to second order. Results: We produce complete plots of the spectral distributions. The spectroscopic factors for the dominant quasiparticle peaks are found to depend very little on the leading-order (NNLO) chiral 3NFs. The latter have a small impact on the calculated matter radii, which, however, are consistently smaller than experiment. Similarly, single-particle spectra tend to be too spread out compared with experiment. This effect might hinder, to some extent, the onset of correlations and screen the quenching of calculated spectroscopic factors. The most important effect of 3NFs is thus the fine tuning of the energies of the dominant quasiparticle states, which governs the shell evolution and the position of driplines. Conclusions: Although present chiral NNLO 3NF interactions do reproduce the binding energies correctly in this mass region, the details of the nuclear spectral function remain at odds with experiment, showing too-small radii and too-spread single-particle spectra, similar to what has already been pointed out for larger masses. This suggests a lack of repulsion in the present model of NN + 3N interactions, which is mildly apparent already for masses in the A = 14-28 range.

  20. Comparison of Measured Leakage Current Distributions with Calculated Damage Energy Distributions in HgCdTe

    NASA Technical Reports Server (NTRS)

    Marshall, C. J.; Ladbury, R.; Marshall, P. W.; Reed, R. A.; Howe, C.; Weller, B.; Mendenhall, M.; Waczynski, A.; Jordan, T. M.; Fodness, B.

    2006-01-01

    This paper presents a combined Monte Carlo and analytic approach to the calculation of the pixel-to-pixel distribution of proton-induced damage in a HgCdTe sensor array and compares the results to measured dark current distributions after damage by 63 MeV protons. The moments of the Coulombic, nuclear elastic and nuclear inelastic damage distributions were extracted from Monte Carlo simulations and combined to form a damage distribution using the analytic techniques first described in [1]. The calculations show that the high energy recoils from the nuclear inelastic reactions (calculated using the Monte Carlo code MCNPX [2]) produce a pronounced skewing of the damage energy distribution. The nuclear elastic component (also calculated using MCNPX) has a negligible effect on the shape of the damage distribution. The Coulombic contribution was calculated using MRED [3,4], a Geant4 [4,5] application. The comparison with the dark current distribution strongly suggests that mechanisms which are not linearly correlated with nonionizing damage produced according to collision kinematics are responsible for the observed dark current increases. This has important implications for the process of predicting the on-orbit dark current response of the HgCdTe sensor array.

  1. Fluid-fluid interfacial mobility from random walks

    NASA Astrophysics Data System (ADS)

    Barclay, Paul L.; Lukes, Jennifer R.

    2017-12-01

    Dual control volume grand canonical molecular dynamics is used to perform the first calculation of fluid-fluid interfacial mobilities. The mobility is calculated from one-dimensional random walks of the interface by relating the diffusion coefficient to the interfacial mobility. Three different calculation methods are employed: one using the interfacial position variance as a function of time, one using the mean-squared interfacial displacement, and one using the time-autocorrelation of the interfacial velocity. The mobility is calculated for two liquid-liquid interfaces and one liquid-vapor interface to examine the robustness of the methods. Excellent agreement between the three calculation methods is shown for all three interfaces, indicating that any of them could be used to calculate the interfacial mobility.
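    The variance/MSD route to a diffusion coefficient can be sketched with an ensemble of unbiased 1D random walks standing in for the interface trajectory (the study extracts the trajectory from molecular dynamics; this is only the estimator):

```python
import random

def estimate_diffusion_coefficient(n_walkers=2000, n_steps=200,
                                   step=1.0, dt=1.0, seed=7):
    """Estimate a 1D diffusion coefficient from the mean-squared
    displacement of an ensemble of unbiased random walks, using the
    Einstein relation <x^2>(t) = 2 D t."""
    rng = random.Random(seed)
    positions = [0.0] * n_walkers
    times, msd = [], []
    for t in range(1, n_steps + 1):
        for i in range(n_walkers):
            positions[i] += step if rng.random() < 0.5 else -step
        times.append(t * dt)
        msd.append(sum(x * x for x in positions) / n_walkers)
    # Least-squares slope of <x^2> versus t, constrained through the origin.
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    return slope / 2.0

D = estimate_diffusion_coefficient()
# For this walk the exact value is step^2 / (2 dt) = 0.5.
```

    The same fit applied to the interfacial position gives D for the interface, which the paper then relates to the interfacial mobility.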

  2. A new shielding calculation method for X-ray computed tomography regarding scattered radiation.

    PubMed

    Watanabe, Hiroshi; Noto, Kimiya; Shohji, Tomokazu; Ogawa, Yasuyoshi; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Hiraki, Hitoshi; Kida, Tetsuo; Sasanuma, Kazutoshi; Katsunuma, Yasushi; Nakano, Takurou; Horitsugi, Genki; Hosono, Makoto

    2017-06-01

    The goal of this study is to develop a more appropriate shielding calculation method for computed tomography (CT) in comparison with the Japanese conventional (JC) method and the National Council on Radiation Protection and Measurements (NCRP)-dose length product (DLP) method. Scattered dose distributions were measured in CT rooms for 18 scanners (16 scanners in the case of the JC method) for one week during routine clinical use. The radiation doses were calculated for the same period using the JC and NCRP-DLP methods. The mean (NCRP-DLP-calculated dose)/(measured dose) ratios in each direction ranged from 1.7 ± 0.6 to 55 ± 24 (mean ± standard deviation). The NCRP-DLP method underestimated the dose in 3.4% of the less-shielded directions (without the gantry or a subject in the beam), and the minimum (NCRP-DLP-calculated dose)/(measured dose) ratio was 0.6. The reduction factors were 0.036 ± 0.014 and 0.24 ± 0.061 for the gantry and couch directions, respectively. The (JC-calculated dose)/(measured dose) ratios ranged from 11 ± 8.7 to 404 ± 340. The air kerma scatter factor κ is expected to be twice as high as that calculated with the NCRP-DLP method, and the reduction factors are expected to be 0.1 and 0.4 for the gantry and couch directions, respectively. We therefore propose a more appropriate method, the Japanese-DLP method, which resolves the issues of possible underestimation of the scattered radiation and overestimation of the reduction factors in the gantry and couch directions.
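    The DLP-style scatter estimate discussed above reduces to K = κ·DLP/d², optionally multiplied by a direction-dependent reduction factor (the abstract proposes 0.1 for the gantry direction and 0.4 for the couch direction); the κ, DLP, and distance values below are illustrative only:

```python
def scattered_air_kerma(kappa, dlp, distance, reduction=1.0):
    """Scattered air kerma at a point: K = kappa * DLP / d^2, scaled by a
    direction-dependent reduction factor (gantry/couch shadowing)."""
    return kappa * dlp * reduction / distance ** 2

# Illustrative numbers, not from the paper: weekly DLP of 2.0e5 mGy.cm,
# kappa = 3.0e-4 cm^-1, and a point 3 m away.
unshielded = scattered_air_kerma(3.0e-4, 2.0e5, 3.0)
gantry_dir = scattered_air_kerma(3.0e-4, 2.0e5, 3.0, reduction=0.1)
```

    The study's argument is precisely about the right values of κ and the reduction factors to plug into this relation.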

  3. Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research

    PubMed Central

    SALE, JOANNA E. M.; LOHFELD, LYNNE H.; BRAZIL, KEVIN

    2015-01-01

    Health care research includes many studies that combine quantitative and qualitative methods. In this paper, we revisit the quantitative-qualitative debate and review the arguments for and against using mixed-methods. In addition, we discuss the implications stemming from our view, that the paradigms upon which the methods are based have a different view of reality and therefore a different view of the phenomenon under study. Because the two paradigms do not study the same phenomena, quantitative and qualitative methods cannot be combined for cross-validation or triangulation purposes. However, they can be combined for complementary purposes. Future standards for mixed-methods research should clearly reflect this recommendation. PMID:26523073

  4. Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method

    NASA Astrophysics Data System (ADS)

    Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome

    2018-03-01

    Quantum chemical calculations have been employed to study the molecular effects produced by the Cr2O3/SnO2 optimised structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT / B3LYP / LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles and dihedral angles, were calculated using the same theory. The non-linear optical property of the title compound was evaluated via a first-order hyperpolarisability calculation. The calculated HOMO-LUMO analysis explains the charge transfer interaction within the molecule. In addition, MEP and Mulliken atomic charges were also calculated and analysed.

  5. Achieving cost-neutrality with long-acting reversible contraceptive methods⋆

    PubMed Central

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2014-01-01

    Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the average annual cost per method and the minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. Results The two least expensive methods were the copper IUD ($304 per woman per year) and the LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy.
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
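    The cost-neutrality logic reduces to comparing cumulative costs over time; a minimal sketch with hypothetical dollar figures (not the model's inputs, which also fold in failure and discontinuation costs):

```python
def cumulative_cost(upfront, annual, years):
    """Total cost over a horizon: one-time acquisition plus recurring
    per-year costs (failure/administration costs folded into `annual`)."""
    return upfront + annual * years

def breakeven_year(larc_upfront, larc_annual, sarc_annual, horizon=5):
    """First whole year at which the LARC cumulative cost drops below the
    SARC cumulative cost, or None within the horizon."""
    for year in range(1, horizon + 1):
        if cumulative_cost(larc_upfront, larc_annual, year) < sarc_annual * year:
            return year
    return None

# Hypothetical inputs: $900 device insertion with $40/yr follow-up,
# versus $500/yr for a short-acting method.
year = breakeven_year(900, 40, 500)
```

    With these illustrative numbers the LARC option breaks even in year 2, in the same spirit as the paper's 2.1-year result.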

  6. Calculation and research of electrical characteristics of induction crucible furnaces with unmagnetized conductive crucible

    NASA Astrophysics Data System (ADS)

    Fedin, M. A.; Kuvaldin, A. B.; Kuleshov, A. O.; Zhmurko, I. Y.; Akhmetyanov, S. V.

    2018-01-01

    Calculation methods for induction crucible furnaces with a conductive crucible have been reviewed and compared. The calculation method of electrical and energy characteristics of furnaces with a conductive crucible has been developed and the example of the calculation is shown below. The calculation results are compared with experimental data. Dependences of electrical and power characteristics of the furnace on frequency, inductor current, geometric dimensions and temperature have been obtained.

  7. The influence of internal variability on Earth's energy balance framework and implications for estimating climate sensitivity

    NASA Astrophysics Data System (ADS)

    Dessler, Andrew E.; Mauritsen, Thorsten; Stevens, Bjorn

    2018-04-01

    Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1) simulations of the period 1850-2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet's energy balance.
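    The energy-budget inference of ECS tested above is a one-line relation, ECS = F_2x · ΔT / (ΔF − ΔN); the numbers below are illustrative, not values from the MPI-ESM ensemble:

```python
def ecs_energy_budget(delta_T, delta_F, delta_N, F2x=3.7):
    """Equilibrium climate sensitivity from the energy-budget relation
    ECS = F_2x * dT / (dF - dN), with dT in K, fluxes in W/m^2, and
    F2x the (commonly assumed) forcing for doubled CO2."""
    return F2x * delta_T / (delta_F - delta_N)

# Illustrative historical-period numbers: 0.9 K warming, 2.3 W/m^2 forcing
# change, 0.7 W/m^2 top-of-atmosphere imbalance.
ecs = ecs_energy_budget(delta_T=0.9, delta_F=2.3, delta_N=0.7)
```

    The paper's point is that the spread in ECS obtained this way across ensemble members reflects internal variability in ΔT and ΔN, not a spread in the true sensitivity.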

  8. Estimating the attack rate of pregnancy-associated listeriosis during a large outbreak.

    PubMed

    Imanishi, Maho; Routh, Janell A; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S; Jackson, Kelly A; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M; Mahon, Barbara E; Silk, Benjamin J

    2015-01-01

    In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500-12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure.
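    The attack-rate arithmetic can be reproduced directly from the abstract's reported case count (one laboratory-confirmed pregnancy-associated case) and exposure estimate (6,500-12,000 exposed pregnant women):

```python
def attack_rate_per_10k(cases, exposed):
    """Attack rate expressed per 10,000 exposed persons."""
    return 10_000 * cases / exposed

# One confirmed case over the estimated exposure range from the abstract.
low  = attack_rate_per_10k(1, 12_000)   # rate if 12,000 were exposed
high = attack_rate_per_10k(1, 6_500)    # rate if 6,500 were exposed
```

    Both bounds bracket roughly 1 per 10,000 exposed pregnant women, matching the abstract's summary figure.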

  9. Validation of the Australian diagnostic reference levels for paediatric multi detector computed tomography: a comparison of RANZCR QUDI data and subsequent NDRLS data from 2012 to 2015.

    PubMed

    Anna, Hayton; Wallace, Anthony; Thomas, Peter

    2017-03-01

    The national diagnostic reference level service (NDRLS), was launched in 2011, however no paediatric data were submitted during the first calendar year of operation. As such, Australian national diagnostic reference levels (DRLs), for paediatric multi detector computed tomography (MDCT), were established using data obtained from a Royal Australian and New Zealand College of Radiologists (RANZCR), Quality Use of Diagnostic Imaging (QUDI), study. Paediatric data were submitted to the NDRLS in 2012 through 2015. An analysis has been made of the NDRLS paediatric data using the same method as was used to analyse the QUDI data to establish the Australian national paediatric DRLs for MDCT. An analysis of the paediatric NDRLS data has also been made using the method used to calculate the Australian national adult DRLs for MDCT. A comparison between the QUDI data and subsequent NDRLS data shows the NDRLS data to be lower on average for the Head and AbdoPelvis protocol and similar for the chest protocol. Using an average of NDRLS data submitted between 2012 and 2015 implications for updated paediatric DRLS are considered.

  10. Measurement of Dietary Restraint: Validity Tests of Four Questionnaires

    PubMed Central

    Williamson, Donald A.; Martin, Corby K.; York-Crowe, Emily; Anton, Stephen D.; Redman, Leanne M.; Han, Hongmei; Ravussin, Eric

    2007-01-01

    This study tested the validity of four measures of dietary restraint: Dutch Eating Behavior Questionnaire, Eating Inventory (EI), Revised Restraint Scale (RS), and the Current Dieting Questionnaire. Dietary restraint has been implicated as a determinant of overeating and binge eating. Conflicting findings have been attributed to different methods for measuring dietary restraint. The validity of four self-report measures of dietary restraint and dieting behavior was tested using: 1) factor analysis, 2) changes in dietary restraint in a randomized controlled trial of different methods to achieve calorie restriction, and 3) correlation of changes in dietary restraint with an objective measure of energy balance, calculated from the changes in fat mass and fat-free mass over a six-month dietary intervention. Scores from all four questionnaires, measured at baseline, formed a dietary restraint factor, but the RS also loaded on a binge eating factor. Based on change scores, the EI Restraint scale was the only measure that correlated significantly with energy balance expressed as a percentage of energy required for weight maintenance. These findings suggest that, of the four questionnaires tested, the EI Restraint scale was the most valid measure of the intent to diet and actual caloric restriction. PMID:17101191
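    The objective energy-balance measure described above converts body-composition change into an average daily energy balance; the energy-density coefficients below are common literature values, not necessarily those used in the study, and the composition changes are hypothetical:

```python
def energy_balance_kcal_per_day(delta_fat_kg, delta_ffm_kg, days,
                                fat_kcal_per_kg=9500, ffm_kcal_per_kg=1100):
    """Average daily energy balance inferred from changes in fat mass and
    fat-free mass. Energy densities are common literature values (assumed
    here, not taken from the paper)."""
    return (delta_fat_kg * fat_kcal_per_kg
            + delta_ffm_kg * ffm_kcal_per_kg) / days

# Hypothetical 6-month intervention: -6 kg fat, -1.5 kg fat-free mass.
eb = energy_balance_kcal_per_day(-6.0, -1.5, days=182)
```

    A negative result indicates a sustained caloric deficit, which is the quantity the restraint-scale change scores were correlated against.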

  11. Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.

    PubMed

    Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert

    2017-08-01

    Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-handed planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method that takes advantage of automatic-thresholding techniques. We demonstrate the validity of the software-assisted approach to tMCAO infarct volumetry and the improvement in reliability it provides. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in the year 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench-to-bedside translation.
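    One standard automatic-thresholding technique of the kind such software-assisted approaches rely on is Otsu's method; a self-contained sketch on a toy bimodal "image" (the study's actual pipeline is not reproduced):

```python
def otsu_threshold(pixels, levels=256):
    """Threshold maximizing between-class variance (Otsu's method) for
    integer gray levels in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    sum_b, weight_b = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_b += hist[t]
        if weight_b == 0:
            continue
        weight_f = total - weight_b
        if weight_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / weight_b
        mean_f = (total_sum - sum_b) / weight_f
        var_between = weight_b * weight_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal gray levels: dark "infarct" pixels near 40, bright tissue near 200.
pixels = [38, 40, 42, 45, 39, 41, 198, 200, 202, 205, 199, 201]
t = otsu_threshold(pixels)
```

    Because the threshold is computed from the histogram rather than drawn by hand, repeated runs on the same section give identical segmentations, which is the reliability gain the abstract argues for.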

  12. Measuring the Osmotic Water Permeability of the Plant Protoplast Plasma Membrane: Implication of the Nonosmotic Volume

    PubMed Central

    2010-01-01

    Starting from the original theoretical descriptions of osmotically induced water volume flow in membrane systems, a convenient procedure to determine the osmotic water permeability coefficient (Pos) and the relative nonosmotic volume (β) of individual protoplasts is presented. Measurements performed on protoplasts prepared from pollen grains and pollen tubes of Lilium longiflorum cv. Thunb. and from mesophyll cells of Nicotiana tabacum L. and Arabidopsis thaliana revealed low values for the osmotic water permeability coefficient in the range 5–20 μm · s−1 with significant differences in Pos, depending on whether β is considered or not. The value of β was determined using two different methods: by interpolation from Boyle-van’t Hoff plots or by fitting a solution of the theoretical equation for water volume flow to the whole volume transients measured during osmotic swelling. The values determined with the second method were less affected by the heterogeneity of the protoplast samples and were around 30% of the respective isoosmotic protoplast volume. It is therefore important to consider nonosmotic volume in the calculation of Pos as plant protoplasts behave as nonideal osmometers. PMID:17568979
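    The Boyle-van't Hoff route to the nonosmotic volume is a straight-line fit of equilibrium volume against reciprocal osmolality, with the nonosmotic volume read off the intercept; the data below are synthetic, constructed so the intercept is known:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Boyle-van't Hoff: V = b + k / pi (volume linear in 1/osmolality), so the
# extrapolated intercept at 1/pi -> 0 is the nonosmotic volume b.
# Synthetic data generated from V = 300 + 60000 / pi (pL, mOsm/kg).
osmolalities = [200.0, 300.0, 400.0, 600.0]
volumes = [300 + 60000 / p for p in osmolalities]
slope, nonosmotic_volume = linear_fit([1 / p for p in osmolalities], volumes)
```

    This is the "interpolation from Boyle-van't Hoff plots" route; the abstract's second method instead fits the full swelling transient, which it found less sensitive to sample heterogeneity.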

  13. The 19F(α, p)22Ne Reaction at Energies of Astrophysical Relevance by Means of the Trojan Horse Method and Its Implications in AGB Stars

    NASA Astrophysics Data System (ADS)

    D’Agata, G.; Pizzone, R. G.; La Cognata, M.; Indelicato, I.; Spitaleri, C.; Palmerini, S.; Trippella, O.; Vescovi, D.; Blagus, S.; Cherubini, S.; Figuera, P.; Grassi, L.; Guardo, G. L.; Gulino, M.; Hayakawa, S.; Kshetri, R.; Lamia, L.; Lattuada, M.; Mijatović, T.; Milin, M.; Miljanić, Đ.; Prepolec, L.; Rapisarda, G. G.; Romano, S.; Sergi, M. L.; Skukan, N.; Soić, N.; Tokić, V.; Tumino, A.; Uroić, M.

    2018-06-01

    The main source of 19F in the universe has not yet been clearly identified, and this issue represents one of the unanswered questions of stellar modeling. This lack of knowledge is partly due to the 19F(α, p)22Ne reaction cross-section, which has proven difficult to measure at low energies: direct measurements stop at about ∼660 keV, leaving roughly half of the astrophysically relevant energy region (from 200 keV to 1.1 MeV) explored only by R-matrix calculations. In this work, we applied the Trojan Horse Method to the quasi-free three-body 6Li(19F, p22Ne)d reaction performed at E beam = 6 MeV in order to study the 19F(α, p)22Ne reaction indirectly in the sub-Coulomb energy region. In this way, we obtained the cross-section and the reaction rate in the temperature region of interest for astrophysics, free from electron screening effects. A brief analysis of the impact of the newly measured reaction rate on AGB star nucleosynthesis is also presented.

  14. Spatial pattern and temporal trend of mortality due to tuberculosis 10

    PubMed Central

    de Queiroz, Ana Angélica Rêgo; Berra, Thaís Zamboni; Garcia, Maria Concebida da Cunha; Popolin, Marcela Paschoal; Belchior, Aylana de Souza; Yamamura, Mellina; dos Santos, Danielle Talita; Arroyo, Luiz Henrique; Arcêncio, Ricardo Alexandre

    2018-01-01

    ABSTRACT Objectives: To describe the epidemiological profile of mortality due to tuberculosis (TB), to analyze the spatial pattern of these deaths and to investigate the temporal trend in mortality due to tuberculosis in Northeast Brazil. Methods: An ecological study based on secondary mortality data. Deaths due to TB were included in the study. Descriptive statistics were calculated and gross mortality rates were estimated and smoothed by the Local Empirical Bayesian Method. Prais-Winsten’s regression was used to analyze the temporal trend in the TB mortality coefficients. The Kernel density technique was used to analyze the spatial distribution of TB mortality. Results: Tuberculosis was implicated in 236 deaths. The burden of tuberculosis deaths was higher amongst males, single people and people of mixed ethnicity, and the mean age at death was 51 years. TB deaths were clustered in the East, West and North health districts, and the tuberculosis mortality coefficient remained stable throughout the study period. Conclusions: Analyses of the spatial pattern and temporal trend in mortality revealed that certain areas have higher TB mortality rates, and should therefore be prioritized in public health interventions targeting the disease. PMID:29742272
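    The shrinkage idea behind the Local Empirical Bayes smoothing of crude rates can be sketched as follows; this is a simplified estimator with hypothetical district numbers, not the exact method used in the study:

```python
def local_eb_smooth(deaths, population, neighbor_rates):
    """Shrink a district's crude mortality rate toward the mean rate of its
    neighbors, more strongly when the district population is small (so the
    crude rate is unstable). A simplified sketch of local empirical Bayes
    smoothing, not the study's exact estimator."""
    crude = deaths / population
    prior_mean = sum(neighbor_rates) / len(neighbor_rates)
    prior_var = (sum((r - prior_mean) ** 2 for r in neighbor_rates)
                 / len(neighbor_rates))
    sampling_var = prior_mean / population   # Poisson variance of the crude rate
    weight = prior_var / (prior_var + sampling_var)
    return prior_mean + weight * (crude - prior_mean)

# Hypothetical district: 6 TB deaths in a population of 10,000, with
# neighboring districts at 1, 2 and 3 deaths per 10,000.
smoothed = local_eb_smooth(6, 10_000, [1e-4, 2e-4, 3e-4])
```

    Here the noisy crude rate of 6 per 10,000 is pulled three-quarters of the way back toward the neighborhood mean, which is exactly how small-population districts avoid dominating a mortality map.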

  15. Assessment of the potential future market in Sweden for hydrogen as an energy carrier

    NASA Astrophysics Data System (ADS)

    Carleson, G.

    Future hydrogen markets for the period 1980-2025 are projected, the probable range of hydrogen production costs for various manufacturing methods is estimated, and expected market shares in competition with alternative energy carriers are evaluated. A general scenario for economic and industrial development in Sweden over the period was developed, projecting an average increase in gross national product of 1.6% per year. Three different energy scenarios were then constructed: alternatives based on nuclear energy, on renewable indigenous energy sources, and on the present energy situation with free access to imported natural or synthetic fuels. Within each scenario, the competitiveness of hydrogen was analyzed on both the demand and supply sides of the following sectors: chemical industry, steel industry, peak power production, residential and commercial heating, and transportation. Costs were calculated for the production, storage and transmission of hydrogen according to technically feasible methods and were compared to those of alternative energy carriers. Health, environmental and societal implications were also considered. The market penetration of hydrogen in each sector was estimated, and the required investment capital was shown to be less than 4% of the national gross investment sum.
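    The cost comparison described above sums production, storage and transmission components into a delivered-energy cost for each carrier. A minimal sketch of that arithmetic follows; the dollar figures are placeholders, not the study's Swedish data.

```python
# Delivered-energy cost as the sum of cost components ($/MWh, illustrative).
hydrogen = {"production": 55.0, "storage": 10.0, "transmission": 8.0}
alternative = {"production": 48.0, "storage": 4.0, "transmission": 6.0}

def delivered_cost(components):
    """Total cost per delivered MWh for one energy carrier."""
    return sum(components.values())

# Pick the cheaper carrier for a given sector.
cheaper = min(("hydrogen", delivered_cost(hydrogen)),
              ("alternative", delivered_cost(alternative)),
              key=lambda pair: pair[1])
```

    In a scenario study, each sector would repeat this comparison under its own component costs, which is how sector-by-sector market shares are built up.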

  16. Gibbs Ensemble Simulations of the Solvent Swelling of Polymer Films

    NASA Astrophysics Data System (ADS)

    Gartner, Thomas; Epps, Thomas, III; Jayaraman, Arthi

    Solvent vapor annealing (SVA) is a useful technique to tune the morphology of block polymer, polymer blend, and polymer nanocomposite films. Despite SVA's utility, standardized SVA protocols have not been established, partly due to a lack of fundamental knowledge regarding the interplay between the polymer(s), solvent, substrate, and free-surface during solvent annealing and evaporation. An understanding of how to tune polymer film properties in a controllable manner through SVA processes is needed. Herein, the thermodynamic implications of the presence of solvent in the swollen polymer film are explored through two alternative Gibbs ensemble simulation methods that we have developed and extended: Gibbs ensemble molecular dynamics (GEMD) and hybrid Monte Carlo (MC)/molecular dynamics (MD). In this poster, we will describe these simulation methods and demonstrate their application to polystyrene films swollen by toluene and n-hexane. Polymer film swelling experiments, Gibbs ensemble molecular simulations, and polymer reference interaction site model (PRISM) theory are combined to calculate an effective Flory-Huggins χ (χeff) for polymer-solvent mixtures. The effects of solvent chemistry, solvent content, polymer molecular weight, and polymer architecture on χeff are examined, providing a platform to control and understand the thermodynamics of polymer film swelling.
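    One standard way to back out an effective χ from a swelling measurement is the Flory-Huggins activity expression, ln a_s = ln φ_s + (1 − 1/N)·φ_p + χ·φ_p². A hedged sketch of that inversion follows; the numbers are illustrative, not the paper's polystyrene/toluene data, and this is only one of the routes (experiment, simulation, PRISM) the abstract combines.

```python
import math

def chi_eff(solvent_activity, phi_polymer, n_polymer):
    """Effective Flory-Huggins chi from the solvent activity a_s and the
    polymer volume fraction phi_p at swelling equilibrium, solving
        ln(a_s) = ln(1 - phi_p) + (1 - 1/N)*phi_p + chi*phi_p**2
    for chi. N is the polymer degree of polymerization."""
    phi_s = 1.0 - phi_polymer
    return (math.log(solvent_activity) - math.log(phi_s)
            - (1.0 - 1.0 / n_polymer) * phi_polymer) / phi_polymer ** 2

# Example: a film swollen to phi_p = 0.7 at solvent activity 0.9, long chains.
chi = chi_eff(0.9, 0.7, 1000)
```

    Repeating this at several activities (i.e. several solvent contents) is what reveals the composition dependence of χeff discussed above.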

  17. A Modeled Analysis of Telehealth Methods for Treating Pressure Ulcers after Spinal Cord Injury

    PubMed Central

    Smith, Mark W.; Hill, Michelle L.; Hopkins, Karen L.; Kiratli, B. Jenny; Cronkite, Ruth C.

    2012-01-01

    Home telehealth can improve clinical outcomes for conditions that are common among patients with spinal cord injury (SCI). However, little is known about the costs and potential savings associated with its use. We developed clinical scenarios that describe common situations in treatment or prevention of pressure ulcers. We calculated the cost implications of using telehealth for each scenario and under a range of reasonable assumptions. Data were gathered primarily from US Department of Veterans Affairs (VA) administrative records. For each scenario and treatment method, we multiplied probabilities, frequencies, and costs to determine the expected cost over the entire treatment period. We generated low-, medium-, and high-cost estimates based on reasonable ranges of costs and probabilities. Telehealth care was less expensive than standard care when low-cost technology was used but often more expensive when high-cost, interactive devices were installed in the patient's home. Increased utilization of telehealth technology (particularly among rural veterans with SCI) could reduce the incidence of stage III and stage IV ulcers, thereby improving veterans' health and quality of care without increasing costs. Future prospective studies of our present scenarios using patients with various healthcare challenges are recommended. PMID:22969798
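    The expected-cost arithmetic described above (multiplying probabilities, frequencies, and costs, then summing over the treatment period) can be sketched as follows. The care events and dollar figures are invented placeholders, not VA data.

```python
def expected_cost(events):
    """events: iterable of (probability, frequency, unit_cost) tuples.
    Expected cost of a scenario = sum of p * f * c over its care events."""
    return sum(p * f * c for p, f, c in events)

# Illustrative scenario: treating a pressure ulcer over a 12-month period.
standard_care = [
    (1.00, 12, 150.0),   # monthly in-person clinic visits
    (0.20, 1, 8000.0),   # chance of hospitalization for a worsening ulcer
]
telehealth = [
    (1.00, 1, 500.0),    # one-time low-cost home device installation
    (1.00, 12, 60.0),    # monthly remote check-ins
    (0.10, 1, 8000.0),   # reduced hospitalization risk
]

savings = expected_cost(standard_care) - expected_cost(telehealth)
```

    Low-, medium-, and high-cost estimates then come from re-running the same sum with the probabilities and unit costs set to the ends of their plausible ranges.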

  18. The positive binding energy envelopes of low-mass helium stars

    NASA Astrophysics Data System (ADS)

    Hall, Philip D.; Jeffery, C. Simon

    2018-04-01

    It has been hypothesized that stellar envelopes with positive binding energy may be ejected if the release of recombination energy can be triggered and the calculation of binding energy includes this contribution. The implications of this hypothesis for the evolution of normal hydrogen-rich stars have been investigated, but the implications for helium stars - which may represent mass-transfer or merger remnants in binary star systems - have not. Making a set of model helium stars, we find that those with masses between 0.9 and 2.4 M⊙ evolve to configurations with positive binding energy envelopes. We discuss consequences of the ejection hypothesis for such stars, and possible observational tests of these predictions.
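    The criterion above turns on the sign of the envelope binding energy, E_bind = Σ_shells (−G·m/r + u)·dm, where the specific internal energy u may or may not include the recombination reservoir. The sketch below shows how including that term can flip the sign; the shell data are invented, not taken from the model grid.

```python
G = 6.674e-8  # gravitational constant, cgs

def envelope_binding_energy(shells, include_recombination):
    """Discrete-shell envelope binding energy.
    shells: list of (m_interior, r, u_thermal, u_recomb, dm) in cgs units.
    Positive total => the envelope is formally unbound by this measure."""
    total = 0.0
    for m, r, u_th, u_rec, dm in shells:
        u = u_th + (u_rec if include_recombination else 0.0)
        total += (-G * m / r + u) * dm
    return total

# Two illustrative envelope shells of a low-mass helium star model.
shells = [
    (2.00e33, 5.0e12, 1.5e13, 1.5e13, 5.0e31),
    (2.05e33, 1.0e13, 1.0e13, 1.2e13, 5.0e31),
]
e_thermal_only = envelope_binding_energy(shells, include_recombination=False)
e_with_recomb = envelope_binding_energy(shells, include_recombination=True)
```

    With these numbers the envelope is bound on thermal energy alone but formally unbound once the recombination reservoir is counted, which is exactly the configuration the ejection hypothesis targets.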

  19. Structuralism and Its Heuristic Implications.

    ERIC Educational Resources Information Center

    Greene, Ruth M.

    1984-01-01

    The author defines structuralism (a method for modeling and analyzing event systems in a space-time framework), traces its origins to the work of J. Piaget and M. Foucault, and discusses its implications for learning. (CL)

  20. One-electron oxidation of individual DNA bases and DNA base stacks.

    PubMed

    Close, David M

    2010-02-04

    In calculations performed with DFT, there is a tendency for the purine cation to be delocalized over several bases in the stack. Attempts have been made to determine whether methods other than DFT can be used to calculate localized cations in stacks of purines, and to relate the calculated hyperfine couplings to known experimental results. To calculate reliable hyperfine couplings it is necessary to have an adequate description of spin polarization, which means that electron correlation must be treated properly. UMP2 theory has been shown to be unreliable in estimating spin densities because it overestimates the doubles correction. Therefore, attempts have been made to use quadratic configuration interaction (UQCISD) methods to treat electron correlation. Calculations on the individual DNA bases are presented to show that with UQCISD methods it is possible to calculate hyperfine couplings in good agreement with the experimental results. However, these UQCISD calculations are far more time-consuming than DFT calculations. Calculations are then extended to two stacked guanine bases. Preliminary calculations with UMP2 or UQCISD theory on two stacked guanines lead to a cation localized on a single guanine base.
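    The localized-versus-delocalized distinction above is usually read off from per-base spin populations (e.g. summed Mulliken spin densities). A minimal sketch of that classification follows; the population values are invented illustrations of the two qualitative outcomes, not computed results.

```python
def localization_ratio(spin_pop_base1, spin_pop_base2):
    """Fraction of the total unpaired spin carried by the dominant base
    in a stacked dimer: 0.5 means fully shared, 1.0 fully localized."""
    total = spin_pop_base1 + spin_pop_base2
    return max(spin_pop_base1, spin_pop_base2) / total

def is_localized(pop1, pop2, threshold=0.8):
    """Classify the dimer cation; the 0.8 cutoff is an arbitrary choice."""
    return localization_ratio(pop1, pop2) >= threshold

# Illustrative outcomes for a stacked guanine dimer cation:
dft_like = (0.52, 0.48)     # hole shared over both bases (delocalized)
qcisd_like = (0.97, 0.03)   # hole on a single base (localized)
```

    A localized hole also changes the predicted hyperfine couplings, which is what makes the comparison with experiment discussed above possible.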

Top