Science.gov

Sample records for all-electron calculations based

  1. e/a classification of Hume–Rothery Rhombic Triacontahedron-type approximants based on all-electron density functional theory calculations

    SciTech Connect

    Mizutani, U; Inukai, M; Sato, H; Zijlstra, E S; Lin, Q

    2014-05-16

    There are three key electronic parameters for elucidating the physics behind the Hume–Rothery electron concentration rule: the square of the Fermi diameter, (2kF)², the square of the critical reciprocal lattice vector, |G|², and the electron concentration parameter e/a, the number of itinerant electrons per atom. We have reliably determined these three parameters for 10 Rhombic Triacontahedron-type 2/1–2/1–2/1 (N = 680) and 1/1–1/1–1/1 (N = 160–162) approximants by making full use of full-potential linearized augmented plane wave (FLAPW)-Fourier band calculations based on all-electron density-functional theory. We show that the 2/1–2/1–2/1 approximants Al13Mg27Zn45 and Na27Au27Ga31 belong to two different sub-groups, classified in terms of |G|² equal to 126 and 109, which explains why they take the different e/a values of 2.13 and 1.76, respectively. Among the eight 1/1–1/1–1/1 approximants Al3Mg4Zn3, Al9Mg8Ag3, Al21Li13Cu6, Ga21Li13Cu6, Na26Au24Ga30, Na26Au37Ge18, Na26Au37Sn18 and Na26Cd40Pb6, the first two, the second two and the last four compounds fall into three sub-groups with |G|² = 50, 46 and 42, and obey the e/a = 2.30, 2.10–2.15 and 1.70–1.80 rules, respectively.
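    As a rough consistency check, the quoted (N, e/a) pairs can be approximately reproduced with the free-electron relation e/a = π(2kF)³/(3N), under two assumptions not spelled out in the abstract: that the grouping values (126, 109, 50, 46, 42) are squared Fermi diameters in units of (2π/a)² for a cubic cell of N atoms, and that (2kF)² ≈ |G|² at the Hume–Rothery matching condition. A back-of-the-envelope sketch, not the paper's FLAPW-Fourier procedure:

```python
import math

def e_per_a(two_kf_sq, n_atoms):
    """Free-electron estimate of e/a from the squared Fermi diameter.

    two_kf_sq: (2kF)^2 in units of (2*pi/a)^2 (assumed)
    n_atoms:   atoms per cubic unit cell
    """
    two_kf = math.sqrt(two_kf_sq)
    # kF^3 = 3*pi^2 * n with n = (e/a)*N/a^3  =>  e/a = pi*(2kF)^3 / (3*N)
    return math.pi * two_kf ** 3 / (3 * n_atoms)

# 2/1-2/1-2/1 approximants (N = 680), cf. the abstract's reported e/a values
print(e_per_a(126, 680))  # close to the reported 2.13
print(e_per_a(109, 680))  # close to the reported 1.76
# 1/1-1/1-1/1 approximants (N = 160)
print(e_per_a(50, 160))   # close to the reported 2.30
```

The free-electron estimate lands within a few percent of each reported e/a value, which is consistent with the abstract's grouping.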

  2. Toward Accurate Modelling of Enzymatic Reactions: All Electron Quantum Chemical Analysis combined with QM/MM Calculation of Chorismate Mutase

    SciTech Connect

    Ishida, Toyokazu

    2008-09-17

    To further understand the catalytic role of the protein environment in the enzymatic process, the author analyzed the reaction mechanism of the Claisen rearrangement in Bacillus subtilis chorismate mutase (BsCM). A new computational strategy combining all-electron QM calculations with ab initio QM/MM modeling made it possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of transition-state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.

  3. All-Electron Scalar Relativistic Calculations on the Adsorption of Small Gold Clusters Toward Methanol Molecule.

    PubMed

    Kuang, Xiang-Jun; Wang, Xin-Qiang; Liu, Gao-Bin

    2015-02-01

    Within the framework of DFT, all-electron scalar relativistic calculations on the adsorption of a methanol molecule by Aun (n = 1-13) clusters were performed with the generalized gradient approximation at the PW91 level. Our results reveal that a small gold cluster prefers to bond with the oxygen of the methanol molecule at the edge of the cluster plane. After adsorption, the chemical activities of the hydroxyl and methyl groups are enhanced to some extent. Even-numbered AunCH3OH clusters with closed-shell electronic configurations are relatively more stable than the neighboring odd-numbered AunCH3OH clusters with open-shell configurations. All AunCH3OH clusters prefer low spin multiplicity (M = 1 for even-numbered and M = 2 for odd-numbered clusters), and the magnetic moments are contributed mainly by the gold atoms. The odd-even alternations of magnetic moments and electronic configurations are clearly observable and may be understood simply in terms of the electron pairing effect. PMID:26353643

  4. Norm-conserving pseudopotentials with chemical accuracy compared to all-electron calculations

    NASA Astrophysics Data System (ADS)

    Willand, Alex; Kvashnin, Yaroslav O.; Genovese, Luigi; Vázquez-Mayagoitia, Álvaro; Deb, Arpan Krishna; Sadeghi, Ali; Deutsch, Thierry; Goedecker, Stefan

    2013-03-01

    By adding a nonlinear core correction to the well-established dual-space Gaussian-type pseudopotentials for the chemical elements up to the third period, we construct improved pseudopotentials for the Perdew-Burke-Ernzerhof [J. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996), 10.1103/PhysRevLett.77.3865] functional and demonstrate that they exhibit excellent accuracy. Our benchmarks for the G2-1 test set show average atomization energy errors of only half a kcal/mol. The pseudopotentials also remain highly reliable for high-pressure phases of crystalline solids. When supplemented by empirical dispersion corrections [S. Grimme, J. Comput. Chem. 27, 1787 (2006), 10.1002/jcc.20495; S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010), 10.1063/1.3382344], the average error in the interaction energy between molecules is also about half a kcal/mol. The accuracy obtained with these pseudopotentials in combination with a systematic basis set is far superior to that obtained with the commonly used medium-size Gaussian basis sets in all-electron calculations.

  5. All-electron scalar relativistic calculation of water molecule adsorption onto small gold clusters.

    PubMed

    Kuang, Xiang-Jun; Wang, Xin-Qiang; Liu, Gao-Bin

    2011-08-01

    An all-electron scalar relativistic calculation was performed on AunH2O (n = 1-13) clusters using density functional theory (DFT) with the generalized gradient approximation at the PW91 level. The results reveal that, after adsorption, the small gold cluster prefers to bond with oxygen, and the H2O molecule prefers to occupy the onefold coordination site. Reflecting the strong scalar relativistic effect, the Aun geometries are distorted slightly but still maintain a planar structure. The Au-Au bond is strengthened and the H-O bond is weakened, as manifested by the shortening of the Au-Au bond length and the lengthening of the H-O bond length. The H-O-H bond angle becomes slightly larger, and the enhancement of the reactivity of the H2O molecule is obvious. The Au-O bond lengths, adsorption energies, vertical ionization potentials (VIPs), HOMO-LUMO gaps (HLGs), HOMO (LUMO) energy levels, charge transfers, and the highest vibrational frequencies of the Au-O mode for AunH2O clusters all exhibit an obvious odd-even oscillation. The most favorable adsorption occurs when the H2O molecule is adsorbed onto an even-numbered Aun cluster, giving an AunH2O cluster with an even number of valence electrons. An odd-even alternation of magnetic moments is also observed in AunH2O clusters, which may allow them to serve as a material with a tunable code of "0" and "1" depending on whether the H2O molecule is adsorbed onto an odd- or even-numbered small gold cluster. PMID:21140279

  6. Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations

    SciTech Connect

    Webster, R.; Harrison, N. M.; Bernasconi, L.

    2015-06-07

    We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br, based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT, here TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems, and this is shown to be correlated with specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are, however, markedly below experimental estimates and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange-correlation ground-state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Taking LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel with the fraction of Fock exchange c_HF admixed in the ground-state functional and show that there is one value of c_HF (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.

  7. Potential energy curves of Li₂⁺ from all-electron EA-EOM-CCSD calculations

    NASA Astrophysics Data System (ADS)

    Musiał, Monika; Medrek, Magdalena; Kucharski, Stanisław A.

    2015-10-01

    The electron attachment (EA) equation-of-motion coupled-cluster theory provides a description of the states obtained by attaching an electron to the reference system. If the reference is taken to be a doubly ionised cation, then the EA results describe the singly ionised ion. In the current work, this scheme is applied to calculations of the potential energy curves (PECs) of the Li₂⁺ cation, adopting the doubly ionised Li₂²⁺ structure as the reference system. The advantage of this computational strategy is that the closed-shell Li₂²⁺ reference dissociates into closed-shell fragments (Li₂²⁺ ⇒ Li⁺ + Li⁺), so the RHF (restricted Hartree-Fock) function can be used as the reference over the whole range of interatomic distances. This scheme offers a first-principles method, without any model or effective-potential parameters, for the description of bond-breaking processes. In this study, the PECs and selected spectroscopic constants of 18 electronic states of the Li₂⁺ ion were computed and compared with experimental and other theoretical results. †In honour of Professor Sourav Pal on the occasion of an anniversary in his private and scientific life.

  8. All-electron GW+Bethe-Salpeter calculations on small molecules

    NASA Astrophysics Data System (ADS)

    Hirose, Daichi; Noguchi, Yoshifumi; Sugino, Osamu

    2015-05-01

    The accuracy of the first-principles GW+Bethe-Salpeter equation (BSE) method is examined for low-energy excited states of small molecules. The standard formalism, based on the one-shot GW approximation and the Tamm-Dancoff approximation (TDA), is found to underestimate the optical gaps of N2, CO, H2O, C2H4, and CH2O by about 1 eV. Possible origins are investigated separately for the effect of the TDA and for the approximate treatments of the self-energy operator, which are known to cause overbinding of the electron-hole pair and overscreening of the interaction. Applying the known correction formulas, we find that the corrections are too small to account for the underestimated excitation energies. This result indicates a need for a fundamental revision of the GW+BSE method rather than an adjustment of the standard formalism. We expect that this study clarifies the problems in the current GW+BSE formalism and provides useful information for further development beyond the current framework.

  9. All-electron double zeta basis sets for the lanthanides: Application in atomic and molecular property calculations

    NASA Astrophysics Data System (ADS)

    Jorge, F. E.; Martins, L. S. C.; Franco, M. L.

    2016-01-01

    Segmented all-electron basis sets of valence double zeta quality plus polarization functions (DZP) for the elements from Ce to Lu are generated to be used with the non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. At the B3LYP level, the DZP-DKH atomic ionization energies and equilibrium bond lengths and atomization energies of the lanthanide trifluorides are evaluated and compared with benchmark theoretical and experimental data reported in the literature. In general, this compact size set shows to have a regular, efficient, and reliable performance. It can be particularly useful in molecular property calculations that require explicit treatment of the core electrons.

  10. All-electron mixed basis GW calculations of TiO2 and ZnO crystals

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Ono, Shota; Nagatsuka, Naoki; Ohno, Kaoru

    2016-04-01

    In transition metal oxide systems, there is a serious discrepancy between theoretical quasiparticle energies and experimental photoemission energies. To improve the accuracy of electronic structure calculations for these systems, we use the all-electron mixed basis GW method, in which single-particle wave functions are accurately described by linear combinations of plane waves and atomic orbitals. We adopt the full ω integration to evaluate the correlation part of the self-energy and compare the results with those obtained with plasmon-pole models. We present the quasiparticle energies and band gaps of titanium dioxide (TiO2) and zinc oxide (ZnO) within the one-shot GW approximation. The results are in reasonable agreement with experimental data for TiO2 but underestimate experiment by about 0.6-1.4 eV for ZnO, although our results are comparable to previous one-shot GW calculations. We also describe a new approach for performing the ω integration very efficiently and accurately.

  11. Structure, stability, depolarized light scattering, and vibrational spectra of fullerenols from all-electron density-functional-theory calculations

    NASA Astrophysics Data System (ADS)

    Rivelino, Roberto; Malaspina, Thaciana; Fileti, Eudes E.

    2009-01-01

    We have investigated the stability, electronic properties, Rayleigh (elastic) and Raman (inelastic) depolarization ratios, and infrared and Raman vibrational absorption spectra of fullerenols [C60(OH)n] with different degrees of hydroxylation using all-electron density-functional-theory (DFT) methods. Stable arrangements of these molecules were found by means of full geometry optimizations using Becke's three-parameter exchange functional with the Lee, Yang, and Parr correlation functional. This DFT level was combined with the 6-31G(d,p) Gaussian-type basis set as a compromise between accuracy and the capability to treat highly hydroxylated fullerenes, e.g., C60(OH)36. The molecular properties of fullerenols were systematically analyzed for structures with n = 1, 2, 3, 4, 8, 10, 16, 18, 24, 32, and 36. From the electronic structure analysis of these molecules, we find evidence of an important effect related to the weak chemical reactivity of a possible C60(OH)24 isomer. To investigate Raman scattering and the vibrational spectra of the different fullerenols, frequency calculations were carried out within the harmonic approximation; in this case a systematic study was performed only for n = 1-4, 8, 10, 16, 18, and 24. Our results agree well with the expected changes in the spectral absorptions due to the hydroxylation of fullerenes.

  12. Validity of virial theorem in all-electron mixed basis density functional, Hartree-Fock, and GW calculations.

    PubMed

    Kuwahara, Riichi; Tadokoro, Yoichi; Ohno, Kaoru

    2014-08-28

    In this paper, we calculate the kinetic and potential energy contributions to the electronic ground-state total energy of several isolated atoms (He, Be, Ne, Mg, Ar, and Ca) using the local density approximation (LDA) in density functional theory, the Hartree-Fock approximation (HFA), and the self-consistent GW approximation (GWA). To this end, we have implemented self-consistent HFA and GWA routines in our all-electron mixed basis code, TOMBO. We confirm that the virial theorem is fairly well satisfied in all of these approximations, although the resulting eigenvalue of the highest occupied molecular orbital level, i.e., the negative of the ionization potential, is in excellent agreement with experiment only in the case of the GWA. We find that the wave function of the lowest unoccupied molecular orbital level of the noble gas atoms is a resonating virtual bound state, and that the GWA orbital is more extended than the LDA one and less extended than the HFA one. PMID:25173006
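    The virial theorem tested here (⟨V⟩ = −2⟨T⟩ for a Coulomb system at a variational minimum) can be illustrated with a minimal toy calculation unrelated to TOMBO: a single-Gaussian trial function for the hydrogen atom in atomic units, whose expectation values are standard textbook results, optimized by a brute-force scan.

```python
import math

# Variational single-Gaussian trial function phi(r) = exp(-a r^2)
# for the hydrogen atom in atomic units (textbook expectation values).
def kinetic(a):   return 1.5 * a                        # <T>
def potential(a): return -math.sqrt(8.0 * a / math.pi)  # <V>
def energy(a):    return kinetic(a) + potential(a)

# crude scan for the variational minimum (analytic optimum: a = 8/(9*pi))
a_opt = min((i / 10000 for i in range(1, 100000)), key=energy)
ratio = -potential(a_opt) / kinetic(a_opt)

# at the minimum the virial ratio -<V>/<T> approaches 2
print(f"E = {energy(a_opt):.4f}, -<V>/<T> = {ratio:.4f}")
```

Away from the variational minimum the ratio deviates from 2, which is why a virial check is a meaningful diagnostic of self-consistency.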

  13. Real-space electronic structure calculations with full-potential all-electron precision for transition metals

    NASA Astrophysics Data System (ADS)

    Ono, Tomoya; Heide, Marcus; Atodiresei, Nicolae; Baumeister, Paul; Tsukamoto, Shigeru; Blügel, Stefan

    2010-11-01

    We have developed an efficient computational scheme utilizing the real-space finite-difference formalism and the projector augmented-wave (PAW) method to perform precise first-principles electronic-structure simulations based on density-functional theory for systems containing transition metals with modest computational effort. By combining the advantages of the time-saving double-grid technique and the Fourier-filtering procedure for the projectors of the pseudopotentials, we overcome the egg-box effect, a well-known problem of the real-space finite-difference formalism, even for first-row elements and transition metals. To demonstrate the precision and applicability of the present scheme, we have examined several bulk properties and structural energy differences between different bulk phases of transition metals and obtained excellent agreement with the results of other precise first-principles methods, such as a plane-wave-based PAW method and an all-electron full-potential linearized augmented plane-wave (FLAPW) method.

  14. Relativistic and correlated all-electron calculations on the ground and excited states of AgH and AuH

    NASA Astrophysics Data System (ADS)

    Witek, Henryk A.; Nakajima, Takahito; Hirao, Kimihiko

    2000-11-01

    We report relativistic all-electron multireference-based perturbation calculations on the low-lying excited states of gold and silver hydrides. For AuH, we consider all molecular states dissociating to the Au(²S)+H(²S) and Au(²D)+H(²S) atomic limits, and for AgH, the states corresponding to the Ag(²S)+H(²S), Ag(²P)+H(²S), and Ag(²D)+H(²S) dissociation channels. Spin-free relativistic effects and correlation effects are treated on the same footing through the relativistic scheme of eliminating small components (RESC); spin-orbit effects are included perturbatively. The calculated potential energy curves for AgH are the first reported in the literature. The computed spectroscopic properties agree well with experimental findings; however, the experimental assignment of states does not correspond to our calculations. We therefore give a reinterpretation of the experimentally observed C 1Π, a 3Π, B 1Σ+, b(3Δ1)1, D 1Π, c13Π1, and c0(3Π0) states; the labeling we suggest is a1, C0+, b0-, c2, B3Π0+, d3Π1, e1, f1 and g1, respectively. The spin-orbit states corresponding to Ag(²D)+H(²S) do not have well-defined Λ and S quantum numbers and therefore probably correspond to Hund's coupling case (c). For AuH, we present a comparison of the calculated potential energy curves and spectroscopic parameters with a previous configuration interaction study and with experiment.

  15. Algorithm for quantum-mechanical finite-nuclear-mass variational calculations of atoms with two p electrons using all-electron explicitly correlated Gaussian basis functions

    SciTech Connect

    Sharkey, Keeper L.; Pavanello, Michele; Bubin, Sergiy; Adamowicz, Ludwik

    2009-12-15

    A new algorithm for calculating the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions, for quantum-mechanical calculations of atoms with two p electrons or a single d electron, has been derived and implemented. The Hamiltonian used in the approach was obtained by rigorously separating the center-of-mass motion, and it explicitly depends on the finite mass of the nucleus. The approach was employed to perform test calculations on the isotopes of the carbon atom in their ground electronic states and to determine the finite-nuclear-mass corrections for these states.
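    For context, the simplest matrix element over spherically symmetric all-electron explicitly correlated Gaussians, the overlap, has a well-known closed form; the pre-exponential angular factors needed for the p- and d-states treated in this record are far more involved. A minimal numpy sketch of the generic textbook formula (not the authors' implementation):

```python
import numpy as np

def ecg_overlap(A, B):
    """Unnormalized overlap of exp(-r^T A r) and exp(-r^T B r)
    for n particles in 3D, with A, B symmetric positive-definite
    n x n correlation matrices (spherical ECGs, no angular factors).

    Standard Gaussian-integral result: pi^(3n/2) / det(A + B)^(3/2).
    """
    n = A.shape[0]
    return np.pi ** (1.5 * n) / np.linalg.det(A + B) ** 1.5

# sanity check against the direct 1-particle integral of
# exp(-(a+b) r^2) over R^3, which equals (pi/(a+b))^(3/2)
a, b = 0.7, 1.3
direct = (np.pi / (a + b)) ** 1.5
print(np.isclose(ecg_overlap(np.array([[a]]), np.array([[b]])), direct))  # True
```

Off-diagonal entries of A couple the interparticle distances, which is what makes such basis functions "explicitly correlated".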

  16. All-electron first principles calculations of the ground and some low-lying excited states of BaI.

    PubMed

    Miliordos, Evangelos; Papakondylis, Aristotle; Tsekouras, Athanasios A; Mavridis, Aristides

    2007-10-01

    The electronic structure of the heavy diatomic molecule BaI has been examined for the first time by ab initio multireference configuration interaction (MRCI) and coupled-cluster (RCCSD(T)) methods. The effects of special relativity have been taken into account through the second-order Douglas-Kroll-Hess approximation. The construction of Ω (ω-ω) potential energy curves allows the estimation of "experimental" dissociation energies (De) of the first few excited states by exploiting the accurately known experimental De value of the X²Σ⁺ ground state. All states examined are of ionic character, with a Mulliken charge transfer of 0.5 e⁻ from Ba to I, and this is reflected in large dipole moments ranging from 6 to 11 D. Despite the inherent difficulties of a heavy system like BaI, our results are encouraging. With the exception of bond distances, which on average are calculated 0.05 Å longer than the experimental ones, common spectroscopic parameters are in fair agreement with experiment, whereas De values are on average 10 kcal/mol smaller. PMID:17850123

  17. All-electron molecular Dirac-Hartree-Fock calculations: Properties of the group IV monoxides GeO, SnO and PbO

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.

    1991-01-01

    Dirac-Hartree-Fock calculations have been carried out on the ground states of the group IV monoxides GeO, SnO and PbO. Geometries, dipole moments and infrared data are presented. For comparison, nonrelativistic, first-order perturbation and relativistic effective core potential calculations have also been carried out. Where appropriate the results are compared with the experimental data and previous calculations. Spin-orbit effects are of great importance for PbO, where first-order perturbation theory including only the mass-velocity and Darwin terms is inadequate to predict the relativistic corrections to the properties. The relativistic effective core potential results show a larger deviation from the all-electron values than for the hydrides, and confirm the conclusions drawn on the basis of the hydride calculations.

  18. An algorithm for quantum mechanical finite-nuclear-mass variational calculations of atoms with L = 3 using all-electron explicitly correlated Gaussian basis functions.

    PubMed

    Sharkey, Keeper L; Kirnosov, Nikita; Adamowicz, Ludwik

    2013-03-14

    A new algorithm for quantum-mechanical nonrelativistic calculation of the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions, for atoms with an arbitrary number of s electrons and with three p electrons, or one p electron and one d electron, or one f electron, is developed and implemented. In particular, the implementation concerns atomic states with L = 3 and M = 0. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. The approach is employed to perform test calculations on the lowest ²F state of the two main isotopes of the lithium atom, ⁷Li and ⁶Li. PMID:23514465

  19. All-electron LCAO calculations of the LiF crystal phonon spectrum: Influence of the basis set, the exchange-correlation functional, and the supercell size.

    PubMed

    Evarestov, R A; Losev, M V

    2009-12-01

    For the first time, the convergence of phonon frequencies and dispersion curves with respect to supercell size is studied in ab initio frozen-phonon calculations on the LiF crystal. Hellmann-Feynman forces for atomic displacements are obtained from all-electron calculations with a localized atomic function (LCAO) basis using the CRYSTAL06 program. The Parlinski-Li-Kawazoe method and the FROPHO program are used to calculate the dynamical matrix and phonon frequencies of the supercells. For the fcc lattice, it is demonstrated that use of the full supercell space group (including the inner translations of the supercell) makes it possible to substantially reduce the number of displacements that must be considered. The atomic basis sets are optimized for the Hartree-Fock (HF), PBE, and hybrid PBE0, B3LYP, and B3PW exchange-correlation functionals. Supercells of up to 216 atoms (3 x 3 x 3 conventional unit cells) are considered, and the phonon frequencies obtained with supercells of different size and shape are compared. For k-points commensurate with the supercell, the best agreement between theory and experiment is found for B3PW calculations with the optimized basis set. The phonon frequencies at most non-commensurate k-points converge for the supercell consisting of 4 x 4 x 4 primitive cells, which ensures an accuracy of 1-2% in the calculated thermodynamic properties (the Helmholtz free energy, entropy, and heat capacity at room temperature). PMID:19382176
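    The frozen-phonon workflow described in this record (displace atoms in a supercell, collect forces, build force constants, Fourier-transform to a dynamical matrix at commensurate k-points) can be illustrated on a toy model. The sketch below uses a 1D harmonic monatomic chain instead of LiF, so the extracted frequencies can be checked against the analytic dispersion ω(q) = 2√(k/m)|sin(qa/2)|; none of this reflects the CRYSTAL06/FROPHO machinery itself.

```python
import numpy as np

# Toy frozen-phonon calculation: 1D monatomic chain, nearest-neighbour
# springs k, mass m, lattice constant a = 1, periodic supercell of N atoms.
k, m, N = 1.0, 1.0, 16

def forces(u):
    """Harmonic forces F_j = k*(u_{j-1} - 2 u_j + u_{j+1}) on a periodic chain."""
    return k * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# displace atom 0 by a small amount and extract the force-constant row
eps = 1e-4
u = np.zeros(N); u[0] = eps
phi = -forces(u) / eps                      # Phi(0, j)

# dynamical matrix (a 1x1 "matrix" here) at each commensurate q = 2*pi*n/(N*a)
qs = 2 * np.pi * np.arange(N) / N
D = (phi * np.exp(1j * qs[:, None] * np.arange(N))).sum(axis=1).real
omega = np.sqrt(np.abs(D) / m)

exact = 2 * np.sqrt(k / m) * np.abs(np.sin(qs / 2))
print(np.allclose(omega, exact))  # True: exact for a purely harmonic model
```

Because the toy potential is exactly harmonic, the finite displacement introduces no anharmonic error, mirroring the ideal limit of the frozen-phonon method.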

  20. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances in a general, efficient, accurate all-electron electronic structure theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be applied efficiently yet accurately using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition-metal-compound-based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  1. Atomic force calculations within the all-electron FLAPW method: Treatment of core states and discontinuities at the muffin-tin sphere boundary

    NASA Astrophysics Data System (ADS)

    Klüppelberg, Daniel A.; Betzinger, Markus; Blügel, Stefan

    2015-01-01

    We analyze the accuracy of the atomic force within the all-electron full-potential linearized augmented plane-wave (FLAPW) method using the force formalism of Yu et al. [Phys. Rev. B 43, 6411 (1991), 10.1103/PhysRevB.43.6411]. A refinement of this formalism is presented that explicitly takes into account the tails of high-lying core states leaking out of the muffin-tin spheres and considers the small discontinuities of the LAPW wave functions, density, and potential at the muffin-tin sphere boundaries. For MgO and EuTiO3 it is demonstrated that these amendments substantially improve the acoustic sum rule and the symmetry of the force-constant matrix; both are realized with an accuracy on the order of microhartree per Bohr radius (μHtr/a_B).

  2. Electronic structure and physical properties of the spinel-type phase of BeP2N4 from all-electron density functional calculations

    SciTech Connect

    Ching, W. Y.; Aryal, Sitram; Rulis, Paul; Schnick, Wolfgang

    2011-04-15

    Using density-functional-theory-based ab initio methods, the electronic structure and physical properties of the newly synthesized nitride BeP2N4 with a phenakite-type structure and the predicted high-pressure spinel phase of BeP2N4 are studied in detail. It is shown that both polymorphs are wide-band-gap semiconductors with relatively small electron effective masses at the conduction-band minima. The spinel-type phase is more covalently bonded due to the increased number of P-N bonds for P at the octahedral sites. Calculations of mechanical properties indicate that the spinel-type polymorph is a promising superhard material with notably large bulk, shear, and Young's moduli. Also calculated are the Be K, P K, P L3, and N K edges of the electron energy-loss near-edge structure for both phases. They show marked differences because of the different local environments of the atoms in the two crystalline polymorphs. These differences will be very useful for the experimental identification of the products of high-pressure syntheses targeting the predicted spinel-type phase of BeP2N4.

  3. Two-component relativistic density-functional calculations of the dimers of the halogens from bromine through element 117 using effective core potential and all-electron methods.

    PubMed

    Mitin, Alexander V; van Wüllen, Christoph

    2006-02-14

    A two-component quasirelativistic Hamiltonian based on spin-dependent effective core potentials is used to calculate ionization energies and electron affinities of the heavy halogen atoms from bromine through the superheavy element 117 (eka-astatine), as well as spectroscopic constants of the homonuclear dimers of these atoms. We describe a two-component Hartree-Fock and density-functional program that treats spin-orbit coupling self-consistently within the orbital optimization procedure. A comparison with results from high-order Douglas-Kroll calculations (for the superheavy systems, also with zeroth-order regular approximation and four-component Dirac results) demonstrates the validity of the pseudopotential approximation. The density-functional (but not the Hartree-Fock) results show very satisfactory agreement with theoretical coupled-cluster as well as experimental data where available, such that the theoretical results can serve as an estimate for the hitherto unknown properties of astatine, element 117, and their dimers. PMID:16483205

  4. Singlet-triplet energy splitting between the ¹D and ³D (1s²2s nd), n = 3, 4, 5, and 6, Rydberg states of the beryllium atom (⁹Be) calculated with all-electron explicitly correlated Gaussian functions

    NASA Astrophysics Data System (ADS)

    Sharkey, Keeper L.; Bubin, Sergiy; Adamowicz, Ludwik

    2014-11-01

    Accurate variational nonrelativistic quantum-mechanical calculations are performed for the five lowest ¹D and four lowest ³D states of the ⁹Be isotope of the beryllium atom. All-electron explicitly correlated Gaussian (ECG) functions are used in the calculations, and their nonlinear parameters are optimized with the aid of the analytical energy gradient determined with respect to these parameters. The effect of the finite nuclear mass is directly included in the Hamiltonian used in the calculations. The singlet-triplet energy gaps between the corresponding ¹D and ³D states are reported.

  5. An algorithm for nonrelativistic quantum-mechanical finite-nuclear-mass variational calculations of the nitrogen atom in L = 0, M = 0 states using all-electron explicitly correlated Gaussian basis functions.

    PubMed

    Sharkey, Keeper L; Adamowicz, Ludwik

    2014-05-01

    An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0, M = 0 states of atoms with an arbitrary number of s electrons and three p electrons has been implemented and tested in calculations of the ground ⁴S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with L = 0, M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient with respect to the Gaussian exponential parameters; the gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined. PMID:24811630

  6. Velocity Based Modulus Calculations

    NASA Astrophysics Data System (ADS)

    Dickson, W. C.

    2007-12-01

    A new set of equations is derived for the modulus of elasticity E and the bulk modulus K that depend only upon the seismic wave propagation velocities Vp and Vs and the density ρ. The three elastic moduli, E (Young's modulus), the shear modulus μ (Lamé's second parameter) and the bulk modulus K, are found to be simple functions of the density and wave propagation velocities within the material. The shear and elastic moduli are found to equal the density of the material multiplied by the square of their respective wave propagation velocities. The bulk modulus may be calculated from the elastic modulus using Poisson's ratio. These equations and the resultant values are consistent with published literature in both magnitude and dimension (N/m²) and are applicable to the solid, liquid and gaseous phases. A 3D modulus of elasticity model for the Parkfield segment of the San Andreas Fault is presented using data from the wavespeed model of Thurber et al. [2006]. A sharp modulus gradient is observed across the fault at seismic depths, confirming that "variation in material properties play a key role in fault segmentation and deformation style" [Eberhart-Phillips et al., 1993]. The three elastic moduli E, μ and K may now be calculated directly from seismic pressure- and shear-wave propagation velocities. These velocities may be determined using conventional seismic reflection, refraction or transmission data and techniques, and may in turn be used to estimate the density. This allows velocity-based modulus calculations to be used as a tool for geophysical analysis, modeling, engineering and prospecting.
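The relations stated in the abstract (μ = ρVs², E = ρVp², and K obtained from E via Poisson's ratio) can be sketched directly. The sample velocities and density below are illustrative granite-like values, not data from the paper:

```python
def moduli_from_velocities(vp, vs, rho):
    """Elastic moduli from seismic velocities, following the relations in the
    abstract: mu = rho*Vs**2, E = rho*Vp**2, and K from E via Poisson's ratio.
    Inputs in m/s and kg/m^3 yield moduli in Pa (N/m^2)."""
    mu = rho * vs**2                                        # shear modulus
    E = rho * vp**2                                         # elastic modulus, per the abstract
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))    # Poisson's ratio from Vp/Vs
    K = E / (3.0 * (1.0 - 2.0 * nu))                        # bulk modulus from E and nu
    return E, mu, K

# illustrative values (assumed): Vp = 5000 m/s, Vs = 3000 m/s, rho = 2700 kg/m^3
E, mu, K = moduli_from_velocities(vp=5000.0, vs=3000.0, rho=2700.0)
```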

  7. All-electron molecular Dirac-Hartree-Fock calculations: The group 4 tetrahydrides CH4, SiH4, GeH4, SnH4 and PbH4

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Taylor, Peter R.; Faegri, Knut, Jr.; Partridge, Harry

    1990-01-01

    A basis-set-expansion Dirac-Hartree-Fock program for molecules is described. Bond lengths and harmonic frequencies are presented for the ground states of the group 4 tetrahydrides, CH4, SiH4, GeH4, SnH4, and PbH4. The results are compared with relativistic effective core potential (RECP) calculations, first-order perturbation theory (PT) calculations and with experimental data. The bond lengths are well predicted by first-order perturbation theory for all molecules, but none of the RECPs considered provides a consistent prediction. Perturbation theory overestimates the relativistic correction to the harmonic frequencies; the RECP calculations underestimate the correction.

  8. Increasing the detection speed of an all-electronic real-time biosensor.

    PubMed

    Leyden, Matthew R; Messinger, Robert J; Schuman, Canan; Sharf, Tal; Remcho, Vincent T; Squires, Todd M; Minot, Ethan D

    2012-03-01

    Biosensor response time, which depends sensitively on the transport of biomolecules to the sensor surface, is a critical concern for future biosensor applications. We have fabricated carbon nanotube field-effect transistor biosensors and quantified protein binding rates onto these nanoelectronic sensors. Using this experimental platform we test the effectiveness of a protein repellent coating designed to enhance protein flux to the all-electronic real-time biosensor. We observe a 2.5-fold increase in the initial protein flux to the sensor when upstream binding sites are blocked. Mass transport modelling is used to calculate the maximal flux enhancement that is possible with this strategy. Our results demonstrate a new methodology for characterizing nanoelectronic biosensor performance, and demonstrate a mass transport optimization strategy that is applicable to a wide range of microfluidic based biosensors. PMID:22252647

  9. Upper Subcritical Calculations Based on Correlated Data

    SciTech Connect

    Sobes, Vladimir; Rearden, Bradley T; Mueller, Don; Marshall, William BJ J; Scaglione, John M; Dunn, Michael E

    2015-01-01

    The American National Standards Institute and American Nuclear Society standard for Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations defines the upper subcritical limit (USL) as “a limit on the calculated k-effective value established to ensure that conditions calculated to be subcritical will actually be subcritical.” Often, USL calculations are based on statistical techniques that infer information about a nuclear system of interest from a set of known/well-characterized similar systems. The work in this paper is part of an active area of research to investigate the way traditional trending analysis is used in the nuclear industry, and in particular, the research is assessing the impact of the underlying assumption that the experimental data being analyzed for USL calculations are statistically independent. In contrast, the multiple experiments typically used for USL calculations can be correlated because they are often performed at the same facilities using the same materials and measurement techniques. This paper addresses this issue by providing a set of statistical inference methods to calculate the bias and bias uncertainty based on the underlying assumption that the experimental data are correlated. Methods to quantify these correlations are the subject of a companion paper and will not be discussed here. The newly proposed USL methodology is based on the assumption that the integral experiments selected for use in the establishment of the USL are sufficiently applicable and that experimental correlations are known. Under the assumption of uncorrelated data, the new methods collapse directly to familiar USL equations currently used. We will demonstrate our proposed methods on real data and compare them to calculations of currently used methods such as USLSTATS and NUREG/CR-6698. Lastly, we will also demonstrate the effect experiment correlations can have on USL calculations.
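The correlated-data inference described above can be sketched with a generalized-least-squares (GLS) estimate of the bias from keff residuals; this is an assumed stand-in for the paper's (unstated) equations, but it has the property the abstract notes: with a diagonal covariance it collapses to the familiar inverse-variance-weighted result for uncorrelated data.

```python
import numpy as np

def gls_bias(delta_k, cov):
    """GLS estimate of the bias (mean of calculated-minus-expected keff
    residuals) and its variance when experiments are correlated through the
    covariance matrix `cov`. A diagonal `cov` recovers the standard
    inverse-variance-weighted mean used in uncorrelated USL analyses."""
    ones = np.ones(len(delta_k))
    w = np.linalg.solve(cov, ones)   # Sigma^{-1} . 1
    var = 1.0 / (ones @ w)           # variance of the GLS mean
    bias = var * (w @ delta_k)       # GLS-weighted mean residual
    return bias, var
```

Positive inter-experiment correlations reduce the effective number of independent benchmarks, so the bias uncertainty grows relative to the uncorrelated case.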

  10. Exact-exchange-based quasiparticle calculations

    SciTech Connect

    Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas

    2000-09-15

    One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society.

  11. Calculations of NMR chemical shifts with APW-based methods

    NASA Astrophysics Data System (ADS)

    Laskowski, Robert; Blaha, Peter

    2012-01-01

    We present a full-potential, all-electron augmented plane wave (APW) implementation of first-principles calculations of NMR chemical shifts. In order to obtain the induced current we follow a perturbation approach [Pickard and Mauri, Phys. Rev. B 63, 245101 (2001)] and extend the common APW + local orbital (LO) basis by several LOs at higher energies. The calculated all-electron current is represented in the traditional APW manner, as a Fourier series in the interstitial region and in a spherical harmonics representation inside the nonoverlapping atomic spheres. The current is integrated using a "pseudocharge" technique. The implementation is validated by comparison of the computed chemical shifts with "exact" results for spherical atoms and with published data for a set of solids and molecules.

  12. Label-free all-electronic biosensing in microfluidic systems

    NASA Astrophysics Data System (ADS)

    Stanton, Michael A.

    Label-free, all-electronic detection techniques offer great promise for advancements in medical and biological analysis. Electrical sensing can be used to measure both interfacial and bulk impedance changes in conducting solutions. Electronic sensors produced using standard microfabrication processes are easily integrated into microfluidic systems. Combined with the sensitivity of radiofrequency electrical measurements, this approach offers significant advantages over competing biological sensing methods. Scalable fabrication methods also provide a means of bypassing the prohibitive costs and infrastructure associated with current technologies. We describe the design, development and use of a radiofrequency reflectometer integrated into a microfluidic system for the specific detection of biologically relevant materials. We developed a detection protocol based on impedimetric changes caused by the binding of antibody/antigen pairs to the sensing region. Here we report the surface chemistry that forms the necessary capture mechanism. Gold-thiol binding was utilized to create an ordered alkane monolayer on the sensor surface. Exposed functional groups target the N-terminus, affixing a protein to the monolayer. The general applicability of this method lends itself to a wide variety of proteins. To demonstrate specificity, commercially available mouse anti-Streptococcus pneumoniae monoclonal antibody was used to target the full-length recombinant pneumococcal surface protein A, type 2 strain D39 expressed by Streptococcus pneumoniae. We demonstrate the RF response of the sensor to both the presence of the surface decoration and bound SPn cells in a 1× phosphate-buffered saline solution. The combined microfluidic sensor represents a powerful platform for the analysis and detection of cells and biomolecules.

  13. Numerical inductance calculations based on first principles.

    PubMed

    Shatz, Lisa F; Christensen, Craig W

    2014-01-01

    A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration. PMID:25402467
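The first-principles approach can be illustrated on the simplest configuration, the mutual inductance of two coaxial circular loops via the Neumann formula. This sketch uses plain numerical quadrature in Python rather than the paper's Mathematica-based integration; by symmetry the double contour integral reduces to a single integral over the angle difference:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def coaxial_loop_mutual_inductance(r1, r2, d, n=200_000):
    """Neumann-formula mutual inductance of two coaxial circular loops with
    radii r1, r2 and axial separation d. The double integral over both loop
    angles reduces by symmetry to M = (mu0*r1*r2/2) * integral over phi of
    cos(phi) / sqrt(r1^2 + r2^2 - 2*r1*r2*cos(phi) + d^2)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dist = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(phi) + d**2)
    # uniform sum over a full period of a periodic integrand converges rapidly
    return 0.5 * MU0 * r1 * r2 * (np.cos(phi) / dist).sum() * (2.0 * np.pi / n)
```

As expected physically, M is symmetric in the two radii and decreases as the loops are moved apart.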

  14. GPU-based fast gamma index calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jia, Xun; Jiang, Steve B.

    2011-03-01

    The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of the γ-index requires an exhaustive search of the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, the γ-index calculation time on CPU is proportional to the summation of γⁿ over all voxels, while that on GPU is affected by the γⁿ distribution and is approximately proportional to the γⁿ summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, but a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time.
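The exhaustive search that defines the γ-index can be sketched in 1D; the paper works with 3D dose distributions and GPU acceleration, so this brute-force CPU toy only illustrates the definition being accelerated:

```python
import numpy as np

def gamma_index_1d(dose_eval, dose_ref, spacing, dd, dta):
    """Brute-force 1D gamma index (the exhaustive CPU search the paper
    accelerates). dd: dose-difference criterion (dose units), dta:
    distance-to-agreement criterion (same units as `spacing`)."""
    x = np.arange(len(dose_ref)) * spacing
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        # squared generalized distance from reference point i to every
        # evaluated point in the combined dose-distance space
        capital_gamma = ((x - xi) / dta) ** 2 + ((dose_eval - di) / dd) ** 2
        gammas[i] = np.sqrt(capital_gamma.min())
    return gammas
```

A point passes the test when its γ value is at most 1; identical distributions give γ = 0 everywhere.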

  15. GPU-based fast gamma index calculation.

    PubMed

    Gu, Xuejun; Jia, Xun; Jiang, Steve B

    2011-03-01

    The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of the γ-index requires an exhaustive search of the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, the γ-index calculation time on CPU is proportional to the summation of γⁿ over all voxels, while that on GPU is affected by the γⁿ distribution and is approximately proportional to the γⁿ summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, but a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time. PMID:21317484

  16. Rapid Bacterial Detection via an All-Electronic CMOS Biosensor.

    PubMed

    Nikkhoo, Nasim; Cumby, Nichole; Gulak, P Glenn; Maxwell, Karen L

    2016-01-01

    The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185

  17. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
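Why low-rank structure gives linear rather than cubic scaling in the grid size can be illustrated with the simplest case, a rank-1 (separable) function: a spherical Gaussian on an n³ grid is reproduced exactly from a single 1D factor of n values instead of n³ grid values. This toy is not the paper's solver (which uses more general low-rank tensor formats), only an illustration of the storage argument:

```python
import numpy as np

n = 512                              # 1D grid size; the full 3D grid holds n**3 values
x = np.linspace(-6.0, 6.0, n)
factor = np.exp(-x**2)               # 1D factor of the separable 3D Gaussian exp(-r^2)

# evaluating f(x_i, y_j, z_k) from the factored form needs only n stored values
i, j, k = 100, 250, 400
low_rank_value = factor[i] * factor[j] * factor[k]

# reference: direct evaluation of exp(-(x^2 + y^2 + z^2)) at the same grid point
direct_value = np.exp(-(x[i]**2 + x[j]**2 + x[k]**2))
```

General orbitals are not exactly separable, but when they are well approximated by a modest number of such separable terms the same linear-in-n storage and work estimate carries over.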

  18. GPU-based calculations in digital holography

    NASA Astrophysics Data System (ADS)

    Madrigal, R.; Acebal, P.; Blaya, S.; Carretero, L.; Fimia, A.; Serrano, F.

    2013-05-01

    In this work we apply GPUs (Graphical Processing Units) with the CUDA environment to scientific calculations, specifically high-cost computations in the field of digital holography. We have studied three typical problems in digital holography: Fourier transforms, Fresnel reconstruction of the hologram and the calculation of the vectorial diffraction integral. In all cases the runtimes at different image sizes, and the corresponding accuracy, were compared with those obtained by traditional calculation systems. The programs were run on a computer with a last-generation graphics card, an Nvidia GTX 680, which is optimized for integer calculations. As a result, a large reduction of runtime has been obtained: concretely, 15-fold shorter times for Fresnel approximation calculations and 600-fold for the vectorial diffraction integral. These initial results open the possibility of applying such calculations in real-time digital holography.

  19. All-electron time-dependent density functional theory with finite elements: time-propagation approach.

    PubMed

    Lehtovaara, Lauri; Havu, Ville; Puska, Martti

    2011-10-21

    We present an all-electron method for time-dependent density functional theory which employs hierarchical nonuniform finite-element bases and the time-propagation approach. The method is capable of treating linear and nonlinear response of valence and core electrons to an external field. We also introduce (i) a preconditioner for the propagation equation, (ii) a stable way to implement absorbing boundary conditions, and (iii) a new kind of absorbing boundary condition inspired by perfectly matched layers. PMID:22029294

  20. Precise all-electron dynamical response functions: Application to COHSEX and the RPA correlation energy

    NASA Astrophysics Data System (ADS)

    Betzinger, Markus; Friedrich, Christoph; Görling, Andreas; Blügel, Stefan

    2015-12-01

    We present a methodology to calculate frequency- and momentum-dependent all-electron response functions within Kohn-Sham density functional theory. It overcomes the main obstacle in calculating response functions in practice, which is the slow convergence with respect to the number of unoccupied states and the basis-set size. In this approach, the usual sum-over-states expression of perturbation theory is complemented by the response of the orbital basis functions, explicitly constructed by radial integrations of frequency-dependent Sternheimer equations. In this way, an effectively infinite number of unoccupied states is included. Furthermore, the response of the core electrons is treated virtually exactly, which is out of reach otherwise. The method is an extension of the recently introduced incomplete-basis-set correction (IBC) [Betzinger et al., Phys. Rev. B 85, 245124 (2012), 10.1103/PhysRevB.85.245124; Phys. Rev. B 88, 075130 (2013), 10.1103/PhysRevB.88.075130] to the frequency and momentum domain. We have implemented the generalized IBC within the all-electron full-potential linearized augmented-plane-wave method and demonstrate for rocksalt BaO the improved convergence of the dynamical Kohn-Sham polarizability. We apply this technique to compute (a) quasiparticle energies employing the COHSEX approximation for the self-energy of many-body perturbation theory and (b) all-electron RPA correlation energies. It is shown that the favorable convergence of the polarizability is passed on to the COHSEX and RPA calculations.

  1. Spreadsheet Based Scaling Calculations and Membrane Performance

    SciTech Connect

    Wolfe, T D; Bourcier, W L; Speth, T F

    2000-12-28

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO₄·2H₂O), BaSO₄, SrSO₄, SiO₂, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and follow the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI) for each solid of interest.
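The comparison of the effective ion product Q against the temperature-adjusted solubility product reduces to SI = log₁₀(Q/Ksp). The gypsum-like numbers in the usage line below are assumed for illustration only; they are not TFSP output:

```python
import math

def saturation_index(ion_activity_product, ksp):
    """SI = log10(Q / Ksp). SI > 0: supersaturated (scaling likely);
    SI < 0: undersaturated; SI = 0: at equilibrium with the solid."""
    return math.log10(ion_activity_product / ksp)

# hypothetical example: Q twice Ksp gives SI = log10(2), about +0.30
si = saturation_index(ion_activity_product=5.0e-5, ksp=2.5e-5)
```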

  2. All-electronic line width reduction in a semiconductor diode laser using a crystalline microresonator

    NASA Astrophysics Data System (ADS)

    Rury, Aaron S.; Mansour, Kamjou; Yu, Nan

    2015-07-01

    This study examines the capability to significantly suppress the frequency noise of a semiconductor distributed feedback diode laser using a universally applicable approach: the combination of a high-Q crystalline whispering gallery mode microresonator reference and the Pound-Drever-Hall locking scheme with an all-electronic servo loop. An out-of-loop delayed self-heterodyne measurement system demonstrates the ability of this approach to reduce a test laser's absolute line width by nearly a factor of 100. In addition, in-loop characterization of the stabilized laser demonstrates a 1-kHz residual line width with respect to the resonator frequency. Based on these results, we propose that an all-electronic loop, combined with the wide transparency window of crystalline materials, makes this approach readily applicable to diode lasers emitting in other regions of the electromagnetic spectrum, especially the UV and mid-IR.

  3. SPREADSHEET BASED SCALING CALCULATIONS AND MEMBRANE PERFORMANCE

    EPA Science Inventory

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total...

  4. Precise response functions in all-electron methods: Generalization to nonspherical perturbations and application to NiO

    NASA Astrophysics Data System (ADS)

    Betzinger, Markus; Friedrich, Christoph; Blügel, Stefan

    2013-08-01

    In a previous publication [Betzinger, Friedrich, Görling, and Blügel, Phys. Rev. B 85, 245124 (2012)] we presented a technique to compute accurate all-electron response functions, e.g., the density response function, within the full-potential linearized augmented-plane-wave (FLAPW) method. Response contributions that are not captured (completely) within the finite Hilbert space spanned by the LAPW basis are taken into account by an incomplete-basis-set correction (IBC). The latter is based on a formal response of the basis functions themselves, which is derived by exploiting their dependence on the effective potential. Its construction requires the solution of radial differential equations, having the form of Sternheimer equations, by numerical integration. The approach includes a formally exact treatment of the response contribution from the core states. While we restricted the formalism to spherical perturbations in the previous work, we here generalize the formalism to nonspherical perturbations. The improvements are demonstrated with exact-exchange optimized-effective-potential (EXX-OEP) calculations of antiferromagnetic NiO. It is shown that with the generalized IBC a basis-set convergence is realized that is as fast as in density-functional theory calculations using standard local or semilocal functionals. The EXX-OEP band gap, magnetic moment, and spectral function of NiO are in substantially better agreement with experiment than results obtained from calculations with local and semilocal functionals.

  5. Relaxation of Actinide Surfaces: An All Electron Study

    NASA Astrophysics Data System (ADS)

    Atta-Fynn, Raymond; Dholabhai, Pratik; Ray, Asok

    2006-10-01

    Fully relativistic full potential density functional calculations with a linearized augmented plane wave plus local orbitals basis (LAPW + lo) have been performed to investigate the relaxations of heavy actinide surfaces, namely the (111) surface of fcc δ-Pu and the (0001) surface of dhcp Am using WIEN2k. This code uses the LAPW + lo method with the unit cell divided into non-overlapping atom-centered spheres and an interstitial region. The APW+lo basis is used to describe all s, p, d, and f states and LAPW basis to describe all higher angular momentum states. Each surface was modeled by a three-layer periodic slab separated by 60 Bohr vacuum with four atoms per surface unit cell. In general, we have found a contraction of the interlayer separations for both Pu and Am. We will report, in detail, the electronic and geometric structures of the relaxed surfaces and comparisons with the respective non-relaxed surfaces.

  6. Ultra reliable infrared absorption water vapor detection through the all-electronic feedback stabilization

    NASA Astrophysics Data System (ADS)

    Zhu, C. G.; Chang, J.; Wang, P. P.; Wang, Q.; Wei, W.; Tian, J. Q.; Chang, H. T.; Liu, X. Z.; Zhang, S. S.

    2014-03-01

    A single-beam balanced radiometric detection (BRD) system with all-electronic feedback stabilization has been proposed for high-reliability water vapor detection under rough environmental conditions; it is insensitive to fluctuations in the transmission loss of light. The majority of the photocurrent attenuation caused by optical loss can be effectively compensated by automatically adjusting the splitting ratio of the probe photocurrent. Based on the Ebers-Moll model, we present a theoretical analysis showing that the photocurrent attenuation caused by optical loss can be suppressed from 0.5552 dB to 0.0004 dB by the all-electronic feedback stabilization. The deviation of the single-beam BRD system is below 0.29% with a bending loss of 0.31 dB in the fiber, which is markedly lower than that of the dual-beam BRD system (5.96%) and of the subtraction system (11.3%). After averaging and filtering, an absorption sensitivity of 7.368×10⁻⁶ for water vapor at 1368.597 nm has been demonstrated.

  7. Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.

    PubMed

    Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F

    2016-10-01

    Predicting NMR properties is a valuable tool to assist experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable for calculating the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt ligands. The new basis sets, identified as NMR-DKH, were partially contracted as a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose chemical shifts range from -1000 to -6000 ppm. Furthermore, the models were validated using a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a nonrelativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt complexes) and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results are comparable with relativistic DFT calculations, 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc. PMID:27510431
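The empirical-model step described above (fit a model mapping computed quantities to experimental shifts, then report MAD/MRD) can be sketched with a simple linear regression. The data below are synthetic stand-ins: the slope, intercept, shielding range, and noise level are invented for illustration and are not the paper's 183-complex data set:

```python
import numpy as np

# synthetic stand-in data mimicking computed shieldings vs. experimental shifts
rng = np.random.default_rng(1)
sigma_calc = rng.uniform(3000.0, 8000.0, 183)                    # hypothetical shieldings (ppm)
delta_exp = -0.95 * sigma_calc + 1800.0 + rng.normal(0.0, 60.0, 183)

a, b = np.polyfit(sigma_calc, delta_exp, 1)                      # linear empirical model
delta_pred = a * sigma_calc + b

mad = np.mean(np.abs(delta_pred - delta_exp))                    # mean absolute deviation
mrd = np.mean(np.abs((delta_pred - delta_exp) / delta_exp))      # mean relative deviation
```

With real data the descriptive set fixes (a, b) and an independent validation set, as in the abstract, guards against overfitting.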

  8. All-electron Kohn–Sham density functional theory on hierarchic finite element spaces

    SciTech Connect

    Schauer, Volker; Linder, Christian

    2013-10-01

    In this work, a real-space formulation of the Kohn–Sham equations is developed, making use of a hierarchy of finite element spaces of different polynomial order. The focus is laid on all-electron calculations, which place the highest requirements on the basis set: it must be able to represent the orthogonal eigenfunctions as well as the electrostatic potential. A careful numerical analysis is performed, which points out the numerical intricacies originating from the singularity of the nuclei and the necessity for approximations in the numerical setting, with the aim of enabling solutions within a predefined accuracy. In this context the influence of counter-charges in the Poisson equation, the requirement of a finite domain size, numerical quadratures and mesh refinement are examined, as well as the representation of the electrostatic potential in a high-order finite element space. The performance and accuracy of the method are demonstrated in computations on noble gases. In addition, the finite element basis proves its flexibility in the calculation of the bond length as well as the dipole moment of the carbon monoxide molecule.

  9. Proton dose calculation based on in-air fluence measurements.

    PubMed

    Schaffner, Barbara

    2008-03-21

    Proton dose calculation algorithms--as well as photon and electron algorithms--are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and-to a lesser extent-design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques. PMID:18367787

  10. Proton dose calculation based on in-air fluence measurements

    NASA Astrophysics Data System (ADS)

    Schaffner, Barbara

    2008-03-01

    Proton dose calculation algorithms—as well as photon and electron algorithms—are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and—to a lesser extent—design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques.

  11. A basic insight to FEM-based temperature distribution calculation

    NASA Astrophysics Data System (ADS)

    Purwaningsih, A.; Khairina

    2012-06-01

    A manual for finite element method (FEM)-based temperature distribution calculation has been produced. The code is written in Visual Basic and runs under Windows. The calculation of temperature distribution based on FEM has three steps, namely preprocessing, processing, and postprocessing. Accordingly, three manuals are produced: a preprocessor manual for preparing the data, a processor manual for solving the problem, and a postprocessor manual for displaying the result. In these manuals, every step of the general procedure is described in detail. These manuals are expected to make the calculation of temperature distribution easier to understand and to carry out.
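The preprocessor/processor/postprocessor organization described above can be illustrated with a toy steady-state 1-D conduction problem; the function names and the trivial no-source model are assumptions for the sketch, not the manual's actual code.

```python
import numpy as np

def preprocess(n_nodes, length, T_left, T_right):
    """Preprocessor: build the mesh and boundary data."""
    return {"x": np.linspace(0.0, length, n_nodes), "T_left": T_left, "T_right": T_right}

def solve(model):
    """Processor: steady 1-D conduction with no heat sources (tridiagonal system)."""
    n = len(model["x"])
    A = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    b = np.zeros(n - 2)
    b[0] -= model["T_left"]                  # fold boundary values into the RHS
    b[-1] -= model["T_right"]
    T = np.empty(n)
    T[0], T[-1] = model["T_left"], model["T_right"]
    T[1:-1] = np.linalg.solve(A, b)
    return T

def postprocess(model, T):
    """Postprocessor: report the extremes of the temperature field."""
    return {"T_min": float(T.min()), "T_max": float(T.max())}

model = preprocess(11, 1.0, 100.0, 0.0)
report = postprocess(model, solve(model))
```

With no sources the solution is the linear profile between the two boundary temperatures, which makes the pipeline easy to verify.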

  12. Fluorescent color factor calculation using dBASE-II.

    PubMed

    King, R L; Carter, H A; Birckbichler, P J

    1986-06-01

    A software system utilizing dBASE-II operating on a dual-drive Apple II+ computer is described. Color factors and retention times for 15 amino acids and epsilon-(gamma-glutamyl)lysine dipeptide are calculated following high performance liquid chromatography. The software package produces a listing of acceptable limits for these parameters calculated as plus and minus 2 standard deviations of the mean. The code is distributed in source form. PMID:3450360
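The acceptance-limit rule described above (mean plus and minus 2 standard deviations) is easy to reproduce; the retention-time values below are invented for illustration.

```python
import statistics

def acceptance_limits(values, k=2.0):
    """Acceptance window as the mean plus/minus k sample standard deviations."""
    m = statistics.mean(values)
    s = statistics.stdev(values)     # sample (n-1) standard deviation
    return m - k * s, m + k * s

# Hypothetical retention times (minutes) for one amino acid over repeated runs.
retention_times = [4.9, 5.0, 5.1, 5.0, 5.2, 4.8, 5.0]
lo, hi = acceptance_limits(retention_times)
```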

  13. Algorithm for calculating torque base in vehicle traction control system

    NASA Astrophysics Data System (ADS)

    Li, Hongzhi; Li, Liang; Song, Jian; Wu, Kaihui; Qiao, Yanjuan; Liu, Xingchun; Xia, Yongguang

    2012-11-01

    Existing research on the traction control system (TCS) mainly focuses on control methods, such as PID control and fuzzy logic control, aimed at achieving an ideal slip rate of the drive wheel over long control periods. The initial output of the TCS (referred to as the torque base in this paper), which has a great impact on the driving performance of the vehicle in the early cycles, remains to be investigated. In order to improve the control performance of the TCS in the first several cycles, an algorithm is proposed to determine the torque base. First, torque bases are calculated by two different methods, one based on state judgment and the other based on the vehicle dynamics. The confidence level of the torque base calculated from the vehicle dynamics is also obtained. The final torque base is then determined from the two torque bases and the confidence level. Hardware-in-the-loop (HIL) simulation and vehicle tests emulating sudden starts on low-friction roads have been conducted to verify the proposed algorithm. The control performance of a PID-controlled TCS with and without the proposed torque base algorithm is compared, showing that the proposed algorithm improves the performance of the TCS over the first several cycles and increases vehicle speed by about 5% in comparison. The proposed research provides a more appropriate initial value for TCS control and improves the performance of the first several control cycles of the TCS.
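The fusion of the two torque-base estimates via a confidence level can be sketched as a simple convex combination; the abstract does not give the exact fusion rule, so the weighting below is an assumption.

```python
def fuse_torque_base(t_state, t_dyn, confidence):
    """Final torque base as a confidence-weighted blend of the state-judgment
    estimate and the vehicle-dynamics estimate (an assumed fusion rule)."""
    assert 0.0 <= confidence <= 1.0
    return confidence * t_dyn + (1.0 - confidence) * t_state

# Low confidence in the dynamics-based value -> lean on the state-judgment value.
torque = fuse_torque_base(t_state=120.0, t_dyn=200.0, confidence=0.25)  # N*m
```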

  14. Putting Math in Motion with Calculator-Based Labs.

    ERIC Educational Resources Information Center

    Doerr, Helen M.; Rieff, Cathieann; Tabor, Jason

    1999-01-01

    Many students have difficulties in interpreting position versus time graphs. Presents an activity involving calculator-based motion labs that allows students to bring these graphs to life by turning their own motion into a graph that can be analyzed, investigated, and interpreted in terms of how they actually moved. (ASK)

  15. Software-Based Visual Loan Calculator For Banking Industry

    NASA Astrophysics Data System (ADS)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    A loan calculator for the banking industry is very necessary in a modern banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
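The interest calculation behind such a loan calculator is presumably a standard amortization formula; the sketch below (in Python rather than VB.NET, with hypothetical function names) shows the arithmetic under that assumption.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized payment: P * r / (1 - (1 + r)**-n), with monthly rate r."""
    r = annual_rate / 12.0
    n = years * 12
    if r == 0.0:
        return principal / n            # interest-free edge case
    return principal * r / (1.0 - (1.0 + r) ** -n)

def total_interest(principal, annual_rate, years):
    """Interest paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, years) * years * 12 - principal

payment = monthly_payment(10_000, 0.06, 5)   # e.g. 10,000 at 6% over 5 years
```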

  16. Gamma Knife radiosurgery with CT image-based dose calculation.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful

    2015-01-01

    The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, the minimum, maximum and mean doses on the treatment targets, and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, the location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution

  17. All-electronic biosensing in microfluidics: bulk and surface impedance sensing

    NASA Astrophysics Data System (ADS)

    Fraikin, Jean-Luc

    All-electronic, impedance-based sensing techniques offer promising new routes for probing nanoscale biological processes. The ease with which electrical probes can be fabricated at the nanoscale and integrated into microfluidic systems, combined with the large bandwidth afforded by radiofrequency electrical measurement, gives electrical detection significant advantages over other sensing approaches. We have developed two microfluidic devices for impedance-based biosensing. The first is a novel radiofrequency (rf) field-effect transistor which uses the electrolytic Debye layer as its active element. We demonstrate control of the nm-thick Debye layer using an external gate voltage, with gate modulation at frequencies as high as 5 MHz. We use this sensor to make quantitative measurements of the electric double-layer capacitance, including determining and controlling the potential of zero charge of the electrodes, a quantity of importance for electrochemistry and impedance-based biosensing. The second device is a microfluidic analyzer for high-throughput, label-free measurement of nanoparticles suspended in a fluid. We demonstrate detection and volumetric analysis of individual synthetic nanoparticles (<100 nm dia.) with sufficient throughput to analyze >500,000 particles/second, and are able to distinguish subcomponents of a polydisperse particle mixture with diameters larger than about 30-40 nm. We also demonstrate the rapid (seconds) size and titer analysis of unlabeled bacteriophage T7 (55-65 nm dia.) in both salt solution and mouse blood plasma, using ~1 μL of analyte. Surprisingly, we find that the background of naturally occurring nanoparticles in plasma has a power-law size distribution. The scalable fabrication of these instruments and the simple electronics required for readout make them well suited for practical applications.

  18. Electronic coupling calculation and pathway analysis of electron transfer reaction using ab initio fragment-based method. I. FMO-LCMO approach

    NASA Astrophysics Data System (ADS)

    Nishioka, Hirotaka; Ando, Koji

    2011-05-01

    By making use of an ab initio fragment-based electronic structure method, the fragment molecular orbital-linear combination of MOs of the fragments (FMO-LCMO) method developed by Tsuneyuki et al. [Chem. Phys. Lett. 476, 104 (2009)], 10.1016/j.cplett.2009.05.069, we propose a novel approach to describe long-distance electron transfer (ET) in large systems. The FMO-LCMO method produces the one-electron Hamiltonian of the whole system from the output of an FMO calculation at a computational cost much lower than that of conventional all-electron calculations. By diagonalizing the FMO-LCMO Hamiltonian matrix, the molecular orbitals (MOs) of the whole system can be described as LCMOs. In our approach, the electronic coupling TDA of ET is calculated from the energy splitting of the frontier MOs of the whole system, or by a perturbation method in terms of the FMO-LCMO Hamiltonian matrix. Moreover, by taking into account only the valence MOs of the fragments, the computational cost of evaluating TDA can be reduced considerably. Our approach was tested on four different kinds of model ET systems, with non-covalent stacks of methane, non-covalent stacks of benzene, trans-alkanes, and alanine polypeptides as their bridge molecules, respectively. It reproduced reasonable TDA values in all cases compared to the reference all-electron calculations. Furthermore, the tunneling pathway at fragment-based resolution was obtained from the tunneling current method with the FMO-LCMO Hamiltonian matrix.
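The energy-splitting route to TDA can be illustrated on a toy donor-bridge-acceptor Hamiltonian: for a symmetric system, TDA is half the splitting of the two frontier levels. The 3-site model and its parameters below are assumptions for illustration, not an FMO-LCMO calculation.

```python
import numpy as np

def coupling_from_splitting(H):
    """TDA from the energy splitting of the two frontier levels of a symmetric
    donor-bridge-acceptor Hamiltonian: TDA = (E2 - E1) / 2."""
    E = np.linalg.eigvalsh(H)        # eigenvalues in ascending order
    return 0.5 * (E[1] - E[0])

# Toy 3-site model (eV): donor/acceptor on-site energy 0, bridge at 2, hopping 0.1.
t, eb = 0.1, 2.0
H = np.array([[0.0, t,   0.0],
              [t,   eb,  t  ],
              [0.0, t,   0.0]])
T_DA = coupling_from_splitting(H)
```

For this model, second-order perturbation theory predicts TDA close to t**2 / eb, which the diagonalization reproduces.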

  19. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...

  20. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...

  1. Electronic Structure Calculations of delta-Pu Based Alloys

    SciTech Connect

    Landa, A; Soderlind, P; Ruban, A

    2003-11-13

    First-principles methods are employed to study the ground-state properties of δ-Pu-based alloys. The calculations show that an alloying component larger than δ-Pu has a stabilizing effect. Detailed calculations have been performed for the δ-Pu(1-c)Am(c) system. The calculated density of Pu-Am alloys agrees well with the experimental data. The paramagnetic → antiferromagnetic transition temperature (Tc) of δ-Pu(1-c)Am(c) alloys is calculated by a Monte Carlo technique. By introducing Am into the system, one can lower Tc from 548 K (pure Pu) to 372 K (Pu70Am30). We also found that, contrary to pure Pu, where this transition destabilizes the δ phase, the Pu3Am compound remains stable in the antiferromagnetic phase, which correlates with the recent discovery of Curie-Weiss behavior of δ-Pu(1-c)Am(c) at c ≈ 24 at.%.

  2. Calculating track-based observables for the LHC.

    PubMed

    Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J

    2013-09-01

    By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new nonperturbative objects called track functions which absorb infrared divergences. The track function T_i(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading order calculation of the total energy fraction of charged particles in e+e- → hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios. PMID:25166657

  3. Vertical emission profiles for Europe based on plume rise calculations.

    PubMed

    Bieser, J; Aulinger, A; Matthias, V; Quante, M; Denier van der Gon, H A C

    2011-10-01

    The vertical allocation of emissions has a major impact on the results of chemistry transport models. However, in Europe it is still common to use fixed vertical profiles based on rough estimates to determine the emission height of point sources. This publication introduces a set of new vertical profiles for use in chemistry transport modeling that were created from hourly gridded emissions calculated by the SMOKE for Europe emission model. SMOKE uses plume rise calculations to determine effective emission heights. Out of more than 40,000 different vertical emission profiles, 73 were chosen by means of hierarchical cluster analysis. These profiles differ markedly from those currently used in many emission models: emissions from combustion processes are released at much lower altitudes, while those from production processes are allocated to higher altitudes. The profiles have a high temporal and spatial variability which is not represented by currently used profiles. PMID:21561695
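The reduction of many profiles to a few representatives by hierarchical cluster analysis can be sketched as follows; the tiny single-linkage clusterer and the four toy profiles are stand-ins for the paper's analysis of more than 40,000 profiles.

```python
import numpy as np

def agglomerate(X, k):
    """Tiny single-linkage agglomerative clustering: repeatedly merge the two
    closest clusters until only k remain."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters

# Toy vertical emission profiles: fraction released per height layer (rows sum to 1).
profiles = np.array([
    [0.7, 0.2, 0.1, 0.0],   # near-surface release (combustion-like)
    [0.6, 0.3, 0.1, 0.0],
    [0.1, 0.2, 0.3, 0.4],   # elevated release (tall point sources)
    [0.0, 0.2, 0.3, 0.5],
])
clusters = agglomerate(profiles, 2)
```

A representative profile per cluster (e.g. the cluster mean) would then replace the individual members, mirroring the reduction to 73 profiles.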

  4. Helium diffusion in olivine based on first principles calculations

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Brodholt, John; Lu, Xiancai

    2015-05-01

    As a key trace element involved in mantle evolution, helium and its transport properties in the mantle are important for understanding the thermal and chemical evolution of the Earth. However, the mobility of helium in the mantle is still unclear due to the scarcity of measured diffusion data for minerals under mantle conditions. In this study, we used first principles calculations based on density functional theory to calculate the absolute diffusion coefficients of helium in olivine. Using the climbing-image nudged elastic band method, we determined the diffusion pathways, the activation energies (Ea), and the prefactors. Our results demonstrate that the diffusion of helium has moderate anisotropy, and the directionally dependent diffusion of helium in olivine can be written in Arrhenius form.
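The Arrhenius form mentioned at the end of the abstract is the standard one; the sketch below uses illustrative values of D0 and Ea, not the paper's fitted parameters.

```python
import math

K_B = 8.617333e-5   # Boltzmann constant, eV/K

def arrhenius_D(D0, Ea, T):
    """Arrhenius diffusion coefficient: D = D0 * exp(-Ea / (kB * T))."""
    return D0 * math.exp(-Ea / (K_B * T))

# Illustrative numbers only (m^2/s, eV, K), not fitted values from the study.
D = arrhenius_D(D0=1.0e-6, Ea=1.2, T=1500.0)
```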

  5. Advancing QCD-based calculations of energy loss

    NASA Astrophysics Data System (ADS)

    Tywoniuk, Konrad

    2013-08-01

    We give a brief overview of the basics and current developments of QCD-based calculations of radiative processes in medium. We put an emphasis on the underlying physics concepts and discuss the theoretical uncertainties inherently associated with the fundamental parameters to be extracted from data. An important area of development is the study of the single-gluon emission in medium. Moreover, establishing the correct physical picture of multi-gluon emissions is imperative for comparison with data. We will report on progress made in both directions and discuss perspectives for the future.

  6. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacings, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.
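The idea of sinc filtering as an ideal low-pass filter matched to the grid spacing can be sketched in one dimension; the kernel construction and test function below are illustrative, not the paper's supersampling implementation.

```python
import numpy as np

def sinc_filter(values, fine_h, coarse_h):
    """Low-pass a finely sampled function with a sinc kernel whose cutoff matches
    the coarse grid spacing (an idealized band-limiting filter)."""
    n = len(values)
    x = (np.arange(n) - n // 2) * fine_h
    # np.sinc(t) = sin(pi t)/(pi t); scale so the kernel approximately sums to 1.
    kernel = np.sinc(x / coarse_h) * (fine_h / coarse_h)
    return np.convolve(values, kernel, mode="same")

h_fine, h_coarse = 0.05, 0.2
x = np.arange(-10.0, 10.0, h_fine)
smooth = np.cos(2.0 * np.pi * x / 5.0)     # well below the cutoff: passes through
filtered = sinc_filter(smooth, h_fine, h_coarse)
```

A component with frequency well below the coarse grid's Nyquist limit should pass nearly unchanged, which is the property checked below at the center of the domain (away from truncation edge effects).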

  7. Supersampling method for efficient grid-based electronic structure calculations.

    PubMed

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacings, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing. PMID:26957151

  8. Sensor Based Engine Life Calculation: A Probabilistic Perspective

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Chen, Philip

    2003-01-01

    It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.

  9. Probabilistic Study Conducted on Sensor-Based Engine Life Calculation

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    2004-01-01

    Turbine engine life management is a very complicated process to ensure the safe operation of an engine subjected to complex usage. The challenge of life management is to find a reasonable compromise between the safe operation and the maximum usage of critical parts to reduce maintenance costs. The commonly used "cycle count" approach does not take the engine operation conditions into account, and it oversimplifies the calculation of the life usage. Because of the shortcomings, many engine components are regularly pulled for maintenance before their usable life is over. And, if an engine has been running regularly under more severe conditions, components might not be taken out of service before they exceed their designed risk of failure. The NASA Glenn Research Center and its industrial and academic partners have been using measurable parameters to improve engine life estimation. This study was based on the Monte Carlo simulation of 5000 typical flights under various operating conditions. First a closed-loop engine model was developed to simulate the engine operation across the mission profile and a thermomechanical fatigue (TMF) damage model was used to calculate the actual damage during takeoff, where the maximum TMF accumulates. Next, a Weibull distribution was used to estimate the implied probability of failure for a given accumulated cycle count. Monte Carlo simulations were then employed to find the profiles of the TMF damage under different operating assumptions including parameter uncertainties. Finally, probabilities of failure for different operating conditions were analyzed to demonstrate the importance of a sensor-based damage calculation in order to better manage the risk of failure and on-wing life.
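The Weibull step and the Monte Carlo loop described above can be sketched as follows; the scale and shape parameters and the lognormal spread of accumulated cycles are invented for illustration, not values from the study.

```python
import math
import random

def weibull_failure_probability(cycles, scale, shape):
    """Implied probability of failure at an accumulated cycle count, via the
    Weibull CDF F(n) = 1 - exp(-(n / scale)**shape). Parameters are illustrative."""
    return 1.0 - math.exp(-((cycles / scale) ** shape))

p_nominal = weibull_failure_probability(8000.0, scale=20000.0, shape=3.0)

# Monte Carlo over uncertain usage: spread the accumulated cycles lognormally
# (an assumed uncertainty model standing in for sensor-derived damage profiles).
random.seed(0)
draws = [weibull_failure_probability(8000.0 * random.lognormvariate(0.0, 0.2),
                                     20000.0, 3.0) for _ in range(10000)]
p_mean = sum(draws) / len(draws)
```

Comparing `p_mean` with `p_nominal` shows how usage uncertainty shifts the implied risk relative to a plain cycle count.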

  10. Prediction of (1)P Rydberg energy levels of beryllium based on calculations with explicitly correlated Gaussians.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2014-01-14

    Benchmark variational calculations are performed for the seven lowest 1s(2)2s np ((1)P), n = 2...8, states of the beryllium atom. The calculations explicitly include the effect of the finite mass of the (9)Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12,500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of the 1s(2)2s np ((1)P) → 1s(2)2s(2) ((1)S) transitions is about 12 cm(-1). The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm(-1). PMID:24437871

  11. Error propagation in PIV-based Poisson pressure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2015-11-01

    After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded by the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels will be discussed analytically.
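The idea that well-posedness bounds the pressure error by the data error can be checked numerically on a 1-D model Poisson problem; the setup below is an illustration of the principle, not the paper's 2-D PIV analysis.

```python
import numpy as np

def solve_poisson_1d(f, h):
    """Dirichlet Poisson solve p'' = f on a uniform interior grid (p = 0 at ends)."""
    n = len(f)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.linalg.solve(A, f)

n, h = 99, 0.01
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)                                  # clean source data
noise = 0.01 * np.random.default_rng(1).standard_normal(n)
dp = solve_poisson_1d(f + noise, h) - solve_poisson_1d(f, h)
# Well-posedness: ||dp|| <= ||noise|| / lambda_min, with lambda_min near pi**2 here,
# so the amplification ratio cannot exceed roughly 0.102.
ratio = np.linalg.norm(dp) / np.linalg.norm(noise)
```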

  12. Water on the sun: line assignments based on variational calculations.

    PubMed

    Polyansky, O L; Zobov, N F; Viti, S; Tennyson, J; Bernath, P F; Wallace, L

    1997-07-18

    The infrared spectrum of hot water observed in a sunspot has been assigned. The high temperature of the sunspot (3200 K) gave rise to a highly congested pure rotational spectrum in the 10-micrometer region that involved energy levels at least halfway to dissociation. Traditional spectroscopy, based on perturbation theory, is inadequate for this problem. Instead, accurate variational solutions of the vibration-rotation Schrödinger equation were used to make assignments, revealing unexpected features, including rotational difference bands and fewer degeneracies than anticipated. These results indicate that a shift away from perturbation theory to first principles calculations is necessary in order to assign spectra of hot polyatomic molecules such as water. PMID:9219686

  13. Wavelet-Based DFT calculations on Massively Parallel Hybrid Architectures

    NASA Astrophysics Data System (ADS)

    Genovese, Luigi

    2011-03-01

    In this contribution, we present an implementation of a full DFT code that can run on massively parallel hybrid CPU-GPU clusters. Our implementation is based on modern GPU architectures which support double-precision floating-point numbers. This DFT code, named BigDFT, is delivered under the GNU-GPL license either in a stand-alone version or integrated in the ABINIT software package. Hybrid BigDFT routines were initially ported with NVidia's CUDA language, and recently more functionalities have been added with new routines written within the Khronos OpenCL standard. The formalism of this code is based on Daubechies wavelets, a systematic real-space basis set. As we will see in the presentation, the properties of this basis set are well suited for an extension to a GPU-accelerated environment. Beyond the implementation of the operators of the BigDFT code, this presentation also addresses the usage of GPU resources in a complex code with different kinds of operations. A discussion of the present and expected performance of hybrid-architecture computation in the framework of electronic structure calculations is also included.

  14. Windows based computer program for gasket determination based on two different calculation procedures

    SciTech Connect

    Bernard, F.; Borovnicar, I.; Ghirlanda, M.

    1996-12-01

    A Windows-based computer program for gasket calculation is presented, written in C++. On the basis of experimental results and data sets available in the literature, and calculated with the help of the FSA and PVRC methods, the assembly parameters were determined. The result is the DONIT TESNITI diskette, a smart tool for selecting gaskets on the basis of service conditions and tightness requirements.

  15. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using a modern programmable graphics processing unit (GPU) and its rendering pipeline is put forward. The element information is represented in accordance with the features of the GPU, all element calculations are converted into a rendering process, the internal force calculation of every element is carried out in this rendering pass, and the low degree of parallelism of earlier single-computer implementations is overcome. Studies show that this method can improve efficiency and greatly shorten calculation times. The results of simulations of an elasticity problem with a large number of shell elements in sheet metal prove that the GPU-based parallel calculation is faster than the CPU-based one while meeting the accuracy requirements, which makes it useful and efficient for solving engineering problems.

  16. Fast calculation of object infrared spectral scattering based on CUDA

    NASA Astrophysics Data System (ADS)

    Li, Liang-chao; Niu, Wu-bin; Wu, Zhen-sen

    2010-11-01

    The compute unified device architecture (CUDA) is used to parallelize the calculation of infrared spectral scattering from a non-Lambertian object under sky and earth background irradiation. A five-parameter bidirectional reflectance distribution function (BRDF) model is used for the surface-element scattering calculation. The calculation is partitioned into many threads running in a GPU kernel; each thread computes the infrared spectral scattering intensity of one visible surface element for a specific incident direction, and the intensities of all visible surface elements are weighted and averaged to obtain the scattering intensity of the object. A comparison between the CPU calculation and the CUDA parallel calculation for a cylinder shows that the parallel calculation is more than two hundred times faster while meeting the accuracy requirements, giving it high engineering value.

  17. Locally Refined Multigrid Solution of the All-Electron Kohn-Sham Equation.

    PubMed

    Cohen, Or; Kronik, Leeor; Brandt, Achi

    2013-11-12

    We present a fully numerical multigrid approach for solving the all-electron Kohn-Sham equation in molecules. The equation is represented on a hierarchy of Cartesian grids, from coarse ones that span the entire molecule to very fine ones that describe only a small volume around each atom. This approach is adaptable to any type of geometry. We demonstrate it for a variety of small molecules and obtain high accuracy agreement with results obtained previously for diatomic molecules using a prolate-spheroidal grid. We provide a detailed presentation of the numerical methodology and discuss possible extensions of this approach. PMID:26583393

  18. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
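The unique-pair table described above is straightforward to reproduce; the error function and byte values below are illustrative.

```python
from collections import Counter

def table_error(original, approximated, err=lambda a, b: (a - b) ** 2):
    """Sum the error over voxel pairs via a table of unique (original, approx)
    values, multiplying each unique pair's error by its frequency."""
    table = Counter(zip(original, approximated))
    return sum(err(a, b) * count for (a, b), count in table.items())

orig = [10, 10, 10, 200, 200, 55]
approx = [12, 12, 12, 198, 198, 55]
total = table_error(orig, approx)     # 3*(2**2) + 2*(2**2) + 0 = 20
```

For byte data the table has at most 256 x 256 entries, so changing the error function (e.g. after a transfer-function edit) costs one pass over the table rather than one pass over all voxels.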

  19. Trajectory Based Heating and Ablation Calculations for MESUR Pathfinder Aeroshell

    NASA Technical Reports Server (NTRS)

    Chen, Y. K.; Henline, W. D.; Tauber, M. E.; Arnold, James O. (Technical Monitor)

    1994-01-01

Based on the geometry of the Mars Environment Survey (MESUR) Pathfinder aeroshell and an estimated Mars entry trajectory, two-dimensional axisymmetric time-dependent calculations have been obtained using the GIANTS (Gauss-Seidel Implicit Aerothermodynamic Navier-Stokes code with Thermochemical Surface Conditions) code and the CMA (Charring Material Thermal Response and Ablation) program for heating analysis and heat shield material sizing. These two codes are interfaced using a loosely coupled technique. The flowfield and convective heat transfer coefficients are computed by the GIANTS code with a species balance condition for an ablating surface, and the time-dependent in-depth conduction with surface blowing is simulated by the CMA code with a complete surface energy balance condition. In this study, SLA-561V has been selected as the heat shield material. The solutions, including the minimum heat shield thicknesses over the aeroshell forebody, pyrolysis gas blowing rates, surface heat fluxes and temperature distributions, flowfield, and in-depth temperature history of SLA-561V, are presented and discussed in detail.

  20. Nonideal thermoequilibrium calculations using a large product species data base

    SciTech Connect

    Hobbs, M.L.; Baer, M.R.

    1992-06-01

Thermochemical data fits for approximately 900 gaseous and 600 condensed species found in the JANAF tables (Chase et al., 1985) have been completed for use with the TIGER nonideal thermoequilibrium code (Cowperthwaite and Zwisler, 1973). The TIGER code has been modified to allow systems containing up to 400 gaseous and 100 condensed constituents composed of up to 50 elements. Gaseous covolumes have been estimated following the procedure outlined by Mader (1979) using estimates of van der Waals radii for 48 elements and three-dimensional molecular mechanics. Molecular structures for all gaseous components were explicitly defined in terms of atomic coordinates in Å. The Becker-Kistiakowsky-Wilson equation of state (BKW-EOS) has been calibrated near C-J states using detonation temperatures measured in liquid and solid explosives and a large product species data base. Detonation temperatures for liquid and solid explosives were predicted adequately with a single set of BKW parameters. Values for the empirical BKW constants α, β, k, and θ were 0.5, 0.174, 11.85, and 5160, respectively. Values for the covolume factors, k_i, were assumed to be invariant. The liquid explosives included mixtures of hydrazine nitrate with hydrazine, hydrazine hydrate, and water; mixtures of tetranitromethane with nitromethane; liquid isomers ethyl nitrate and 2-nitroethanol; and nitroglycerine. The solid explosives included HMX, RDX, PETN, Tetryl, and TNT. Color contour plots of HMX equilibrium products as well as thermodynamic variables are shown in pressure and temperature space. Similar plots for a pyrotechnic reaction composed of TiH₂ and KClO₄ are also reported. Calculations for a typical HMX-based propellant are also discussed.
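    The BKW equation of state named above is commonly written (in Mader's convention, assumed here) as pV/(RT) = 1 + x·e^(βx) with x = k·Σᵢxᵢkᵢ / [V(T + θ)^α]. A minimal sketch using the calibrated constants quoted in the abstract; the covolume sum and the input units are hypothetical placeholders:

```python
import math

# Empirical BKW constants calibrated in the abstract (Mader's convention
# for the EOS form is assumed, not stated in the record).
ALPHA, BETA, K, THETA = 0.5, 0.174, 11.85, 5160.0

def bkw_pressure(v, t, covolume_sum, r=8.314):
    """Pressure from the BKW EOS: p*v/(r*t) = 1 + x*exp(BETA*x), where
    x = K * covolume_sum / (v * (t + THETA)**ALPHA).
    v: molar volume, t: temperature, covolume_sum: sum of x_i * k_i
    over product species (a hypothetical input here)."""
    x = K * covolume_sum / (v * (t + THETA) ** ALPHA)
    return (r * t / v) * (1.0 + x * math.exp(BETA * x))
```

    With covolume_sum = 0 the expression reduces to the ideal-gas law, which is a quick sanity check on any implementation.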

  1. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    SciTech Connect

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-07-01

A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method based on the MOC code MCCG3D is developed. Sensitivity calculations of fission intensity for the international PWR benchmark are performed. (authors)

  2. An Analysis of Differential Item Functioning Based on Calculator Type.

    ERIC Educational Resources Information Center

    Schwarz, Richard; Rich, Changhua; Arenson, Ethan; Podrabsky, Tracy; Cook, Gary

    The effect of calculator type on student performance on a mathematics examination was studied. Differential item functioning (DIF) methodology was applied to examine group differences (calculator use) on item performance while conditioning on the relevant ability. Other survey questions were developed to ask students the extent to which they used…

  3. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that make it possible to produce accurate results. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include the detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.

  4. 31 CFR 370.35 - Does the Bureau of the Public Debt accept all electronically signed transaction requests?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Public Debt accept all electronically signed transaction requests? An electronic signature will not be... accept all electronically signed transaction requests? 370.35 Section 370.35 Money and Finance: Treasury... PUBLIC DEBT ELECTRONIC TRANSACTIONS AND FUNDS TRANSFERS RELATING TO UNITED STATES SECURITIES...

  5. Magnetic susceptibility of semiconductors by an all-electron first-principles approach

    SciTech Connect

    Ohno, K. |; Mauri, F.; Louie, S.G. |

    1997-07-01

The magnetic susceptibility (χ) of the semiconductors (diamond, Si, GaAs, and GaP) and of the inert-gas solids (Ne, Ar, and Kr) is evaluated within density-functional theory in the local-density approximation, using a mixed-basis all-electron approach. In Si, GaAs, GaP, Ar, and Kr, the contribution of core electrons to χ is comparable to that of valence electrons. However, our results show that the contribution associated with the core states is independent of the chemical environment and can be computed from the isolated atoms. Moreover, our results indicate that the use of a "scissor operator" does not improve the agreement of the theoretical χ with experiments. © 1997 The American Physical Society

  6. Procedure for calculating general aircraft noise based on ISO 3891

    SciTech Connect

    Hediger, J.R.

    1982-01-01

The standard ISO 3891 specifies the presentation of aircraft noise heard on the ground or of noise exposure from a succession of aircraft, without giving any details on the various parameters required for their calculation. The following study provides some of these parameters, drawing on acoustic measurements as well as laboratory analyses carried out in cooperation with the Swiss Federal Office for Civil Aviation.

  7. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine

    SciTech Connect

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, − 2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing

  8. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine.

    PubMed

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, - 2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. 
The mean and standard deviation of pixels passing gamma
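    The gamma analysis named in the two records above combines a dose-difference criterion with a distance-to-agreement criterion; a minimal 1-D sketch of a global 3%/3 mm gamma pass rate (illustrative only, not the TPS vendors' implementation):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
    """1-D global gamma: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over evaluated points.
    dd: dose criterion as a fraction of the maximum reference dose;
    dta: distance criterion in the same units as `positions` (e.g. mm).
    Returns the percentage of points with gamma <= 1."""
    norm = dd * dose_ref.max()
    gammas = []
    for xr, dr in zip(positions, dose_ref):
        g2 = ((dose_eval - dr) / norm) ** 2 + ((positions - xr) / dta) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)
```

    Identical dose distributions give a 100% pass rate by construction, which is a useful self-test for any gamma implementation.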

  9. Efficient Parallel All-Electron Four-Component Dirac-Kohn-Sham Program Using a Distributed Matrix Approach II.

    PubMed

    Storchi, Loriano; Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Quiney, Harry M

    2013-12-10

    We propose a new complete memory-distributed algorithm, which significantly improves the parallel implementation of the all-electron four-component Dirac-Kohn-Sham (DKS) module of BERTHA (J. Chem. Theory Comput. 2010, 6, 384). We devised an original procedure for mapping the DKS matrix between an efficient integral-driven distribution, guided by the structure of specific G-spinor basis sets and by density fitting algorithms, and the two-dimensional block-cyclic distribution scheme required by the ScaLAPACK library employed for the linear algebra operations. This implementation, because of the efficiency in the memory distribution, represents a leap forward in the applicability of the DKS procedure to arbitrarily large molecular systems and its porting on last-generation massively parallel systems. The performance of the code is illustrated by some test calculations on several gold clusters of increasing size. The DKS self-consistent procedure has been explicitly converged for two representative clusters, namely Au20 and Au34, for which the density of electronic states is reported and discussed. The largest gold cluster uses more than 39k basis functions and DKS matrices of the order of 23 GB. PMID:26592273

  10. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

Resection has been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observed values used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model to avoid the difficulty of determining initial values that arises when the collinearity equations are used directly. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
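    RANSAC itself is a generic hypothesise-and-verify loop: sample a minimal set of observations, fit a model, count inliers, and keep the best consensus. A minimal sketch on a 2-D line-fitting stand-in for the DLT camera model (illustrative of the principle only, not the authors' implementation):

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit y = a*x + b robustly: sample minimal 2-point models and keep
    the model with the largest inlier consensus.  In the paper the
    minimal model is the DLT camera model rather than a line."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 8 points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(2.0, 40.0), (5.0, -30.0)]
(a, b), inl = ransac_line(pts)
```

    The consensus step is what excludes the gross errors: the two outliers never belong to the largest inlier set, so they cannot corrupt the final model.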

  11. A probability-based formula for calculating interobserver agreement

    PubMed Central

    Yelton, Ann R.; Wildman, Beth G.; Erickson, Marilyn T.

    1977-01-01

    Estimates of observer agreement are necessary to assess the acceptability of interval data. A common method for assessing observer agreement, per cent agreement, includes several major weaknesses and varies as a function of the frequency of behavior recorded and the inclusion or exclusion of agreements on nonoccurrences. Also, agreements that might be expected to occur by chance are not taken into account. An alternative method for assessing observer agreement that determines the exact probability that the obtained number of agreements or better would have occurred by chance is presented and explained. Agreements on both occurrences and nonoccurrences of behavior are considered in the calculation of this probability. PMID:16795541
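    The chance-corrected comparison described above can be approximated with a simple binomial model: if two independent observers score occurrences at rates p1 and p2, they agree on any interval (occurrence or nonoccurrence, both counted) with probability p = p1·p2 + (1 − p1)(1 − p2), and the tail probability of the observed agreement count follows. This is a simplified sketch, not necessarily the exact formula of Yelton et al.:

```python
from math import comb

def chance_agreement_prob(n_intervals, p1, p2, observed_agreements):
    """P(X >= observed) where X counts chance agreements over n intervals.

    Binomial approximation: independent observers with occurrence rates
    p1 and p2 agree on an interval with probability
    p = p1*p2 + (1-p1)*(1-p2), counting agreements on both occurrences
    and nonoccurrences, as the abstract requires."""
    p = p1 * p2 + (1 - p1) * (1 - p2)
    return sum(comb(n_intervals, k) * p**k * (1 - p)**(n_intervals - k)
               for k in range(observed_agreements, n_intervals + 1))
```

    A small tail probability means the observed agreement is unlikely to be due to chance alone, which is the acceptability criterion the paper proposes.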

  12. Freeway Travel Speed Calculation Model Based on ETC Transaction Data

    PubMed Central

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

Real-time traffic flow conditions on freeways have become critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on a freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample sizes. In order to ensure a sufficient sample size, ETC data from entry-exit toll plaza pairs spanning more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speeds were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for promoting the level of freeway operation monitoring and freeway management, as well as for providing useful information to freeway travelers. PMID:25580107

  13. Freeway travel speed calculation model based on ETC transaction data.

    PubMed

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

Real-time traffic flow conditions on freeways have become critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on a freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample sizes. In order to ensure a sufficient sample size, ETC data from entry-exit toll plaza pairs spanning more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speeds were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for promoting the level of freeway operation monitoring and freeway management, as well as for providing useful information to freeway travelers. PMID:25580107
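    The core of such a model is a weighted aggregation of per-vehicle speeds derived from toll transaction timestamps. A minimal sketch, with the roles of the reduction coefficient α and the reliability weight θ inferred from the abstract and a hypothetical default for α:

```python
def segment_speed(samples, alpha=0.95):
    """Weighted mean travel speed for one road segment from ETC records.

    samples: iterable of (distance_km, travel_time_h, theta) per sample
    vehicle, where theta is the per-sample reliability weight and alpha
    is the reduction coefficient applied to raw distance/time speeds
    (parameter roles assumed from the abstract; alpha's value here is a
    placeholder, not the calibrated one)."""
    valid = [(d, t, th) for d, t, th in samples if t > 0]  # drop bad records
    den = sum(th for _, _, th in valid)
    if den == 0:
        return None
    return sum(th * alpha * d / t for d, t, th in valid) / den
```

    Pooling entry-exit pairs that span several segments, as the paper does, simply enlarges the `samples` list for each segment before this aggregation is applied.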

  14. Integral-transport-based deterministic brachytherapy dose calculations

    NASA Astrophysics Data System (ADS)

    Zhou, Chuanyu; Inanc, Feyzi

    2003-01-01

    We developed a transport-equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The deterministic algorithm has been based on the integral transport equation. The algorithm provided us with the capability of computing dose distributions for multiple isotropic point and/or volumetric sources in a homogenous/heterogeneous medium. The algorithm results have been benchmarked against the results from the literature and MCNP results for isotropic point sources and volumetric sources.

  15. Safety assessment of the conversion of toll plazas to all-electronic toll collection system.

    PubMed

    Abuzwidah, Muamer; Abdel-Aty, Mohamed

    2015-07-01

The traditional mainline toll plaza (TMTP) is considered the highest-risk location on toll roads. Conversion from a TMTP or hybrid mainline toll plaza (HMTP) to an all-electronic toll collection (AETC) system has demonstrated measured improvement in traffic operations and environmental impact. However, there is a lack of research that quantifies the safety impacts of these new tolling systems. This study evaluated the safety effectiveness of the conversion from TMTP or HMTP to an AETC system. An extensive data collection was conducted that included one hundred mainline toll plazas located on more than 750 miles of toll roads in Florida. Various observational before-after studies, including the empirical Bayes method, were applied. The results indicated that the conversion from the TMTP to an AETC system resulted in an average crash reduction of 76, 75, and 68% for total, fatal-and-injury, and property damage only (PDO) crashes, respectively; for rear-end and lane change related (LCR) crashes the average reductions were 80 and 74%, respectively. The conversion from HMTP to an AETC system enhanced traffic safety by reducing total, fatal-and-injury, and PDO crashes by 24, 28, and 20%, respectively; also, for rear-end and LCR crashes, the average reductions were 15 and 22%, respectively. Overall, this paper provided an up-to-date safety assessment of different toll collection systems. The results proved that the AETC system significantly improved traffic safety for all crash categories and changed toll plazas from the highest-risk locations on expressways to locations similar to regular segments. PMID:25909391
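    The empirical Bayes before-after method mentioned above blends site-observed crashes with a safety-performance-function prediction. A minimal sketch of the standard Highway Safety Manual weighting (assumed here; the paper's exact formulation may differ):

```python
def eb_expected_crashes(observed, predicted, overdispersion):
    """Empirical Bayes estimate of the long-run crash frequency at a site.

    Standard HSM-style form (an assumption, not taken from the paper):
    EB = w*predicted + (1-w)*observed, with
    w = 1 / (1 + overdispersion * predicted), where `predicted` comes
    from a negative-binomial safety performance function and
    `overdispersion` is its dispersion parameter."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed
```

    The weight shifts toward the observed count as the overdispersion grows, which is what protects the before-after comparison against regression-to-the-mean bias.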

  16. Formation flying benefits based on vortex lattice calculations

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1977-01-01

    A quadrilateral vortex-lattice method was applied to a formation of three wings to calculate force and moment data for use in estimating potential benefits of flying aircraft in formation on extended range missions, and of anticipating the control problems which may exist. The investigation led to two types of formation having virtually the same overall benefits for the formation as a whole, i.e., a V or echelon formation and a double row formation (with two staggered rows of aircraft). These formations have unequal savings on aircraft within the formation, but this allows large longitudinal spacings between aircraft which is preferable to the small spacing required in formations having equal benefits for all aircraft. A reasonable trade-off between a practical formation size and range benefit seems to lie at about three to five aircraft with corresponding maximum potential range increases of about 46 percent to 67 percent. At this time it is not known what fraction of this potential range increase is achievable in practice.

  17. Coupled-cluster based basis sets for valence correlation calculations

    NASA Astrophysics Data System (ADS)

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.

    2016-03-01

Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r^n⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.

  18. Coupled-cluster based basis sets for valence correlation calculations.

    PubMed

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J

    2016-03-14

    Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r(n)⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers. PMID:26979680

  19. Ray-Based Calculations of Backscatter in Laser Fusion Targets

    SciTech Connect

    Strozzi, D J; Williams, E A; Hinkel, D E; Froula, D H; London, R A; Callahan, D A

    2008-02-26

    A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pf3d. Comparisons with Brillouin-scattering experiments at the OMEGA Laser Facility [T. R. Boehly et al., Opt. Commun. 133, p. 495 (1997)] show that laser speckles greatly enhance the reflectivity over the deplete results. An approximate upper bound on this enhancement, motivated by phase conjugation, is given by doubling the deplete coupling coefficient. Analysis with deplete of an ignition design for the National Ignition Facility (NIF) [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, p. 755 (1994)], with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bound the speckle enhancement suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.

  20. UAV-based NDVI calculation over grassland: An alternative approach

    NASA Astrophysics Data System (ADS)

    Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc

    2016-04-01

The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between the near infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate resolution up to 250 m ground resolution. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution became available. Such small and light instruments are particularly well suited for mounting on airborne unmanned aerial vehicles (UAV) used for monitoring services, reaching ground sampling resolution on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum, its internal infrared filter having been removed; a mounted optical filter additionally blocks all wavelengths below 700 nm; and (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
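    Once the two cameras' reflectance maps are co-registered, the NDVI itself is the standard per-pixel ratio (NIR − red) / (NIR + red). A minimal sketch; the epsilon guard against empty pixels is an implementation detail added here, not taken from the abstract:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - red) / (NIR + red) from two co-registered
    reflectance maps; eps guards against division by zero in dark or
    empty pixels (a hypothetical safeguard, not from the abstract)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

    Healthy vegetation reflects strongly in NIR and absorbs red, so its NDVI approaches +1, while bare soil and water sit near or below zero.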

  1. Exciting: a full-potential all-electron package implementing density-functional theory and many-body perturbation theory.

    PubMed

    Gulans, Andris; Kontur, Stefan; Meisenbichler, Christian; Nabok, Dmitrii; Pavone, Pasquale; Rigamonti, Santiago; Sagmeister, Stephan; Werner, Ute; Draxl, Claudia

    2014-09-10

    Linearized augmented planewave methods are known as the most precise numerical schemes for solving the Kohn-Sham equations of density-functional theory (DFT). In this review, we describe how this method is realized in the all-electron full-potential computer package, exciting. We emphasize the variety of different related basis sets, subsumed as (linearized) augmented planewave plus local orbital methods, discussing their pros and cons and we show that extremely high accuracy (microhartrees) can be achieved if the basis is chosen carefully. As the name of the code suggests, exciting is not restricted to ground-state calculations, but has a major focus on excited-state properties. It includes time-dependent DFT in the linear-response regime with various static and dynamical exchange-correlation kernels. These are preferably used to compute optical and electron-loss spectra for metals, molecules and semiconductors with weak electron-hole interactions. exciting makes use of many-body perturbation theory for charged and neutral excitations. To obtain the quasi-particle band structure, the GW approach is implemented in the single-shot approximation, known as G(0)W(0). Optical absorption spectra for valence and core excitations are handled by the solution of the Bethe-Salpeter equation, which allows for the description of strongly bound excitons. Besides these aspects concerning methodology, we demonstrate the broad range of possible applications by prototypical examples, comprising elastic properties, phonons, thermal-expansion coefficients, dielectric tensors and loss functions, magneto-optical Kerr effect, core-level spectra and more. PMID:25135665

  2. Calculation of thermomechanical fatigue life based on isothermal behavior

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Saltsman, James F.

    1987-01-01

    The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain - time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.
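    Under the stated assumption that thermal cycling adds no damage mechanisms, the TMF life follows directly from the assigned isothermal strainrange-life relation. A Manson-Coffin-type power law is the usual form of such relations; a minimal sketch with hypothetical constants, not the values assigned in the paper:

```python
def cycles_to_failure(inelastic_strainrange, c_coeff=0.5, c_exp=-0.6):
    """Invert a Manson-Coffin-type strainrange-life relation,
    delta_eps_in = c_coeff * Nf**c_exp (a common isothermal form; the
    constants here are hypothetical placeholders)."""
    return (inelastic_strainrange / c_coeff) ** (1.0 / c_exp)
```

    Because the relation is assigned at each temperature, evaluating it along the thermomechanical cycle and taking the governing (most damaging) value gives the kind of baseline TMF prediction the paper uses for comparison.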

  3. Validation of KENO based criticality calculations at Rocky Flats

    SciTech Connect

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper we discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats.

  4. Glass viscosity calculation based on a global statistical modelling approach

    SciTech Connect

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often overestimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
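
    A global statistical viscosity model of this kind can be sketched as an ordinary least-squares fit of log viscosity against composition, inverse temperature, and their interaction. The single composition variable, the coefficients, and the data below are synthetic stand-ins for illustration, not the paper's 2200-glass database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements": log10(viscosity) for glasses described by one
# hypothetical composition variable x (e.g. mol% alkali) at temperature T (K).
x = rng.uniform(0.10, 0.25, 300)
T = rng.uniform(800.0, 1700.0, 300)
log_eta = 1.5 - 8.0 * x + (4.0 - 6.0 * x) * (1000.0 / T)
log_eta += 0.05 * rng.standard_normal(300)      # measurement noise

# Global statistical model: multilinear in composition, inverse temperature,
# and their interaction, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(x), x, 1000.0 / T, x * 1000.0 / T])
coef, *_ = np.linalg.lstsq(X, log_eta, rcond=None)

pred = X @ coef
r2 = 1.0 - np.sum((log_eta - pred) ** 2) / np.sum((log_eta - log_eta.mean()) ** 2)
```

    Because the fit is global, every data point constrains every coefficient, which is what lets a statistical model of this type detect laboratory-specific systematic errors as outliers against the pooled trend.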

  5. Validation of KENO-based criticality calculations at Rocky Flats

    SciTech Connect

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P. )

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k{sub eff} limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.

  6. GYutsis: heuristic based calculation of general recoupling coefficients

    NASA Astrophysics Data System (ADS)

    Van Dyck, D.; Fack, V.

    2003-08-01

    General angular momentum recoupling coefficients can be expressed as a summation formula over products of 6-j coefficients. Yutsis, Levinson and Vanagas developed graphical techniques for representing the general recoupling coefficient as a cubic graph, and they described a set of reduction rules allowing a stepwise generation of the corresponding summation formula. This paper is a follow-up to [Van Dyck and Fack, Comput. Phys. Comm. 151 (2003) 353-368], where we described a heuristic algorithm based on these techniques. In this article we separate the heuristic from the algorithm and describe some new heuristic approaches which can be plugged into the generic algorithm. We show that these new heuristics lead to good results: in many cases we get a more efficient summation formula than with our previous approach, in particular for problems of higher order. In addition, the new features and the use of our program GYutsis, which implements these techniques, are described both for end users and application programmers. Program summary: Title of program: CycleCostAlgorithm, GYutsis. Catalogue number: ADSA. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSA. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Users may obtain the program also by downloading either the compressed tar file gyutsis.tgz (for Unix and Linux) or the zip file gyutsis.zip (for Windows) from our website (http://caagt.rug.ac.be/yutsis/). An applet version of the program is also available on our website and can be run in a web browser from the URL http://caagt.rug.ac.be/yutsis/GYutsisApplet.html. Licensing provisions: none. Computers for which the program is designed: any computer with Sun's Java Runtime Environment 1.4 or higher installed. Programming language used: Java 1.2 (Compiler: Sun's SDK 1.4.0). No. of lines in program: approximately 9400. No. of bytes in distributed program, including test data, etc.: 544 117. Distribution format: tar gzip file. Nature of

  7. All-electron GW quasiparticle band structures of group 14 nitride compounds

    NASA Astrophysics Data System (ADS)

    Chu, Iek-Heng; Kozhevnikov, Anton; Schulthess, Thomas; Cheng, Hai-Ping

    2014-03-01

    We have investigated the group 14 nitrides (M3N4) in both the spinel phase (with M =C, Si, Ge and Sn) and the beta phase (with M =Si, Ge and Sn) using density functional theory (DFT) with the local density approximation (LDA). The Kohn-Sham energies of these systems are first calculated within the framework of full-potential LAPW and then corrected using single-shot G0W0 calculations, which we have implemented in the Exciting-Plus code. Direct band gaps at the Γ point are found for all spinel-type nitrides. The calculated band gaps of Si3N4, Ge3N4 and Sn3N4 agree with experiment. We also find that for all systems studied, our GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core 3d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications. This work is supported by NSF/DMR-0804407 and DOE/BES-DE-FG02-02ER45995. Computations are performed using facilities at NERSC.

  8. Prediction of {sup 1}P Rydberg energy levels of beryllium based on calculations with explicitly correlated Gaussians

    SciTech Connect

    Bubin, Sergiy; Adamowicz, Ludwik

    2014-01-14

    Benchmark variational calculations are performed for the seven lowest 1s{sup 2}2s np ({sup 1}P), n = 2…8, states of the beryllium atom. The calculations explicitly include the effect of finite mass of {sup 9}Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12 500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of 1s{sup 2}2s np ({sup 1}P) →1s{sup 2}2s{sup 2} ({sup 1}S) transition is about 12 cm{sup −1}. The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm{sup −1}.

  9. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-01

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system. PMID:26588521

  10. All-electron GW quasiparticle band structures of group 14 nitride compounds

    NASA Astrophysics Data System (ADS)

    Chu, Iek-Heng; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping

    2014-07-01

    We have investigated the group 14 nitrides (M3N4) in the spinel phase (γ-M3N4 with M = C, Si, Ge, and Sn) and β phase (β-M3N4 with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G0W0 calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M3N4 with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.

  11. All-electron GW quasiparticle band structures of group 14 nitride compounds

    SciTech Connect

    Chu, Iek-Heng; Cheng, Hai-Ping; Kozhevnikov, Anton; Schulthess, Thomas C.

    2014-07-28

    We have investigated the group 14 nitrides (M{sub 3}N{sub 4}) in the spinel phase (γ-M{sub 3}N{sub 4} with M = C, Si, Ge, and Sn) and β phase (β-M{sub 3}N{sub 4} with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G{sub 0}W{sub 0} calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M{sub 3}N{sub 4} with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.

  12. Operating distance calculation of ground-based and air-based infrared system based on Lowtran7

    NASA Astrophysics Data System (ADS)

    Ren, Kan; Tian, Jie; Gu, Guohua; Chen, Qian

    2016-07-01

    In this paper, a contrast-based operating-distance model for infrared point targets is used, starting from the target radiance and the atmospheric transmission parameters in the operating-distance formula. The radiance of different point targets detected by ground-based and air-based detectors is analyzed, and a spectral division method is used to integrate the target and background radiance; databases of atmospheric spectral radiance and transmittance are established by calling Lowtran7. A new method for solving the operating-distance formula is proposed, and an operating-distance calculation system is established, which improves the efficiency and accuracy of the calculation. Databases of atmospheric spectral radiance and transmittance are generated for five meteorological conditions, and their variations with wavelength and range are given. By integrating over wavelength, the atmospheric radiance of an effectively infinite transmission range can be approximated by that of a 100 km path. Target and detector parameters are then set and simulated using the generated databases. The operating distance is calculated for each zenith angle, and the spatial distribution of operating distance is given for the mid-latitude summer meteorological condition.
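
    The core of such an operating-distance calculation can be sketched with a simple Beer-Lambert transmittance standing in for the Lowtran7 database: received point-target irradiance falls off with range through both the inverse-square law and atmospheric extinction, and the operating distance is the range at which it crosses the detector threshold. All parameter values below are illustrative assumptions:

```python
import math

def received_irradiance(R_km, intensity, alpha_per_km):
    """Point-target irradiance at range R: inverse-square law times a
    Beer-Lambert atmospheric transmittance (stand-in for a Lowtran table)."""
    return intensity * math.exp(-alpha_per_km * R_km) / (R_km * 1e3) ** 2

def operating_distance(intensity, alpha_per_km, threshold, lo=0.1, hi=1000.0):
    """Largest range (km) at which the received irradiance still exceeds the
    detector threshold, found by bisection (irradiance is monotonic in R)."""
    if received_irradiance(hi, intensity, alpha_per_km) > threshold:
        return hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if received_irradiance(mid, intensity, alpha_per_km) > threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Repeating the solve per zenith angle, with extinction and path radiance interpolated from the precomputed atmospheric database, yields the spatial distribution of operating distance described in the abstract.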

  13. Fast calculation with point-based method to make CGHs of the polygon model

    NASA Astrophysics Data System (ADS)

    Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji

    2014-02-01

    Holography is a three-dimensional imaging technology: light waves from an object are recorded and reconstructed using a hologram. Computer-generated holograms (CGHs), which are made by simulating light propagation on a computer, can represent virtual objects. However, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using a fast Fourier transform (FFT); the calculation of complex objects composed of many polygons requires correspondingly many FFTs, so the calculation time becomes enormous. The point-based method, in contrast, expresses complex objects easily, but an enormous calculation time is still required. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and each CGH pixel can be calculated independently. However, expressing a planar object by the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm for expressing planar objects by the point-based method on a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from that of the adjacent point light source by a difference method. Under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions only. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
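
    The difference method described here exploits the fact that, along a row of equally spaced pixels, the squared distance to a point light source has a constant second difference, so after the first pixel each value follows by additions alone. A minimal sketch with hypothetical coordinates (squared distances, for clarity):

```python
import numpy as np

def squared_distances_direct(xs, ys, z, n_pixels, pitch):
    """Squared pixel-to-source distances along one hologram row, computed directly."""
    x = np.arange(n_pixels) * pitch
    return (x - xs) ** 2 + ys ** 2 + z ** 2

def squared_distances_incremental(xs, ys, z, n_pixels, pitch):
    """Same quantity via the difference method: the second difference of
    (n*pitch - xs)^2 is the constant 2*pitch^2, so the loop body is additions only."""
    r2 = np.empty(n_pixels)
    r2[0] = xs ** 2 + ys ** 2 + z ** 2          # pixel at x = 0
    d = pitch ** 2 - 2.0 * pitch * xs           # first forward difference
    dd = 2.0 * pitch ** 2                       # constant second difference
    for n in range(1, n_pixels):
        r2[n] = r2[n - 1] + d
        d += dd
    return r2
```

    Both routines agree to machine precision; the incremental version replaces a multiply-heavy distance evaluation per pixel with two additions, which is what makes the GPU inner loop cheap.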

  14. Hybrid functionals within the all-electron FLAPW method: Implementation and applications of PBE0

    NASA Astrophysics Data System (ADS)

    Betzinger, Markus; Friedrich, Christoph; Blügel, Stefan

    2010-05-01

    We present an efficient implementation of the Perdew-Burke-Ernzerhof hybrid functional PBE0 within the full-potential linearized augmented-plane-wave (FLAPW) method. The Hartree-Fock exchange term, which is a central ingredient of hybrid functionals, gives rise to a computationally expensive nonlocal potential in the one-particle Schrödinger equation. The matrix elements of this exchange potential are calculated with the help of an auxiliary basis that is constructed from products of FLAPW basis functions. By representing the Coulomb interaction in this basis the nonlocal exchange term becomes a Brillouin-zone sum over vector-matrix-vector products. The Coulomb matrix is calculated only once at the beginning of a self-consistent-field cycle. We show that it can be made sparse by a suitable unitary transformation of the auxiliary basis, which accelerates the computation of the vector-matrix-vector products considerably. Additionally, we exploit spatial and time-reversal symmetry to identify the nonvanishing exchange matrix elements in advance and to restrict the k summations for the nonlocal potential to an irreducible set of k points. Favorable convergence of the self-consistent-field cycle is achieved by a nested density-only and density-matrix iteration scheme. We discuss the convergence with respect to the parameters of our numerical scheme and show results for a variety of semiconductors and insulators, including the oxides ZnO, EuO, Al2O3 , and SrTiO3 , where the PBE0 hybrid functional improves the band gaps and the description of localized states in comparison with the PBE functional. Furthermore, we find that in contrast to conventional local exchange-correlation functionals ferromagnetic EuO is correctly predicted to be a semiconductor.

  15. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 3 2011-04-01 false Calculation of normal value based on constructed value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value,...

  16. Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.

    ERIC Educational Resources Information Center

    Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick

    1999-01-01

    Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…

  17. The effect of statistical uncertainty on inverse treatment planning based on Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Keall, Paul

    2000-12-01

    The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax ), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
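
    The noise convergence error can be illustrated with a toy beamlet-weight optimization: weights optimized against a noisy dose matrix are no longer optimal when the plan is evaluated noise-free. The least-squares setup below is a simplified stand-in for the paper's simulated-annealing optimization, with made-up dimensions and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beam = 50, 10

# "True" (noise-free) beamlet dose matrix and a noisy Monte Carlo estimate of it
D_true = rng.random((n_vox, n_beam))
D_noisy = D_true + 0.03 * rng.standard_normal((n_vox, n_beam))
target = np.ones(n_vox)                       # prescribed dose in every voxel

# Optimize beamlet weights against each dose matrix
w_true, *_ = np.linalg.lstsq(D_true, target, rcond=None)
w_noisy, *_ = np.linalg.lstsq(D_noisy, target, rcond=None)

def objective(w):
    """Plan quality evaluated with the noise-free dose calculation."""
    return float(np.sum((D_true @ w - target) ** 2))

# Noise convergence error: weights tuned to the noisy calculation are
# suboptimal once the plan is recomputed without noise.
noise_convergence_error = objective(w_noisy) - objective(w_true)
```

    By construction the error is nonnegative, since the noise-free weights minimize the noise-free objective; its size grows with the statistical uncertainty of the beamlet calculation, mirroring the behavior reported in the abstract.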

  18. Ontology-Based Exchange and Immediate Application of Business Calculation Definitions for Online Analytical Processing

    NASA Astrophysics Data System (ADS)

    Kehlenbeck, Matthias; Breitner, Michael H.

    Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain necessary knowledge regarding quantitative relations for deep analyses and for the production of meaningful reports. The definitions are implementation independent and largely organization independent, but no automated procedures exist that facilitate their exchange across organization and implementation boundaries. Each organization currently has to map its own business calculations to analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies. This approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.

  19. Inverse calculation of biochemical oxygen demand models based on time domain for the tidal Foshan River.

    PubMed

    Er, Li; Xiangying, Zeng

    2014-01-01

    To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on time domain are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivatives of the inverse calculation have been respectively established on the basis of different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x) and different data sets of E(x) and K(x) hardly affect the precision of the models. PMID:25026574
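
    The paper inverts the dispersion coefficient E(x) and decay rate K(x) of a tidal advection-dispersion BOD model; the generic idea of such an inverse calculation, recovering model parameters from a measured time series, can be sketched with the classical first-order BOD progression curve. The data and parameter values below are synthetic and hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def bod(t, L0, K):
    """Classical first-order BOD progression: demand exerted by time t (days)."""
    return L0 * (1.0 - np.exp(-K * t))

# Synthetic observations generated from known parameters plus noise
t = np.linspace(0.5, 20.0, 40)
rng = np.random.default_rng(0)
y = bod(t, 250.0, 0.23) + rng.normal(0.0, 2.0, t.size)

# Inverse calculation: recover (L0, K) from the time series
(L0_est, K_est), _ = curve_fit(bod, t, y, p0=(200.0, 0.1))
```

    The same calibrate-then-verify pattern applies to the tidal model, except that the forward model there is a transport equation whose derivative terms switch with flow direction, and the fitted parameters are E(x) and K(x).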

  20. Precise response functions in all-electron methods: Application to the optimized-effective-potential approach

    NASA Astrophysics Data System (ADS)

    Betzinger, Markus; Friedrich, Christoph; Görling, Andreas; Blügel, Stefan

    2012-06-01

    The optimized-effective-potential method is a special technique to construct local Kohn-Sham potentials from general orbital-dependent energy functionals. In a recent publication [M. Betzinger, C. Friedrich, S. Blügel, A. Görling, Phys. Rev. B 83, 045105 (2011)] we showed that uneconomically large basis sets were required to obtain a smooth local potential without spurious oscillations within the full-potential linearized augmented-plane-wave method. This could be attributed to the slow convergence behavior of the density response function. In this paper, we derive an incomplete-basis-set correction for the response, which consists of two terms: (1) a correction that is formally similar to the Pulay correction in atomic-force calculations and (2) a numerically more important basis response term originating from the potential dependence of the basis functions. The basis response term is constructed from the solutions of radial Sternheimer equations in the muffin-tin spheres. With these corrections the local potential converges at much smaller basis sets, at much fewer states, and its construction becomes numerically very stable. We analyze the improvements for rock-salt ScN and report results for BN, AlN, and GaN, as well as the perovskites CaTiO3, SrTiO3, and BaTiO3. The incomplete-basis-set correction can be applied to other electronic-structure methods with potential-dependent basis sets and opens the perspective to investigate a broad spectrum of problems in theoretical solid-state physics that involve response functions.

  1. Hybrid functionals for large periodic systems in an all-electron, numeric atom-centered basis framework

    NASA Astrophysics Data System (ADS)

    Levchenko, Sergey V.; Ren, Xinguo; Wieferink, Jürgen; Johanni, Rainer; Rinke, Patrick; Blum, Volker; Scheffler, Matthias

    2015-07-01

    We describe a framework to evaluate the Hartree-Fock exchange operator for periodic electronic-structure calculations based on general, localized atom-centered basis functions. The functionality is demonstrated by hybrid-functional calculations of properties for several semiconductors. In our implementation of the Fock operator, the Coulomb potential is treated either in reciprocal space or in real space, where the sparsity of the density matrix can be exploited for computational efficiency. Computational aspects, such as the rigorous avoidance of on-the-fly disk storage, and a load-balanced parallel implementation, are also discussed. We demonstrate linear scaling of our implementation with system size by calculating the electronic structure of a bulk semiconductor (GaAs) with up to 1,024 atoms per unit cell without compromising the accuracy.

  2. All-electron topological insulator in InAs double wells

    NASA Astrophysics Data System (ADS)

    Erlingsson, Sigurdur I.; Egues, J. Carlos

    2015-01-01

    We show that electrons in ordinary III-V semiconductor double wells with an in-plane modulating periodic potential and interwell spin-orbit interaction are tunable topological insulators (TIs). Here the essential TI ingredients, namely, band inversion and the opening of an overall bulk gap in the spectrum arise, respectively, from (i) the combined effect of the double-well even-odd state splitting ΔSAS together with the superlattice potential and (ii) the interband Rashba spin-orbit coupling η . We corroborate our exact diagonalization results with an analytical nearly-free-electron description that allows us to derive an effective Bernevig-Hughes-Zhang model. Interestingly, the gate-tunable mass gap M drives a topological phase transition featuring a discontinuous Chern number at ΔSAS˜5.4 meV . Finally, we explicitly verify the bulk-edge correspondence by considering a strip configuration and determining not only the bulk bands in the nontopological and topological phases but also the edge states and their Dirac-like spectrum in the topological phase. The edge electronic densities exhibit peculiar spatial oscillations as they decay away into the bulk. For concreteness, we present our results for InAs-based wells with realistic parameters.

  3. Asynchronous electro-optic sampling of all-electronically generated ultrashort voltage pulses

    NASA Astrophysics Data System (ADS)

    Füser, Heiko; Bieler, Mark; Ahmed, Sajjad; Verbeyst, Frans

    2015-02-01

    We measure the output of an electrical pulse generator with a repetition rate of 76 MHz employing a laser-based asynchronous sampling technique with an effective sampling frequency of 250 GHz. A best estimate of the resulting 13 ns long waveform is obtained from multiple waveform measurements, which are taken without any trigger event and subsequently aligned in time. This asynchronous sampling scheme can even be adopted in situations where small phase drifts between the electrical pulse generator and the laser occur, making synchronized sampling very difficult. In addition to accurate measurements, the proposed asynchronous measurement scheme allows for the construction of covariance matrices with full rank since a large number of time traces is acquired. Such matrices might reveal correlations which do not appear in low-rank matrices. We believe that the asynchronous sampling technique advocated in this paper will prove to be a valuable characterization tool covering an ultra-broadband frequency range from below 100 MHz to above 100 GHz.

  4. Implications to Postsecondary Faculty of Alternative Calculation Methods of Gender-Based Wage Differentials.

    ERIC Educational Resources Information Center

    Hagedorn, Linda Serra

    1998-01-01

    A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…

  5. A comparison of Monte Carlo and model-based dose calculations in radiotherapy using MCNPTV

    NASA Astrophysics Data System (ADS)

    Wyatt, Mark S.; Miller, Laurence F.

    2006-06-01

    Monte Carlo calculations for megavoltage radiotherapy beams represent the next generation of dose calculation in the clinical environment. In this paper, calculations obtained by the MCNP code based on CT data from a human pelvis are compared against those obtained by a commercial radiotherapy treatment system (CMS XiO). The MCNP calculations are automated by the use of MCNPTV (MCNP Treatment Verification), an integrated application developed in Visual Basic that runs on a Windows-based PC. The linear accelerator beam is modeled as a finite point source, and validated by comparing depth dose curves and lateral profiles in a water phantom to measured data. Calculated water phantom PDDs are within 1% of measured data, but the lateral profiles exhibit differences of 2.4, 5.5, and 5.7 mm at the 60%, 40%, and 20% isodose lines, respectively. A MCNP calculation is performed using the CT data and 15 points are selected for comparison with XiO. Results are generally within the uncertainty of the MCNP calculation, although differences up to 13.2% are seen in the presence of large heterogeneities.

  6. Simple atmospheric transmittance calculation based on a Fourier-transformed Voigt profile.

    PubMed

    Kobayashi, Hirokazu

    2002-11-20

    A method of line-by-line transmission calculation for a homogeneous atmospheric layer that uses the Fourier-transformed Voigt profile is presented. The method is based on a pure Voigt function with no approximation and an interference term that takes into account the line-mixing effect. One can use the method to calculate transmittance, considering each line shape as it is affected by temperature and pressure, using a line database of arbitrary wave-number range and resolution. To show that the method is feasible for practical model development, we compared the calculated transmittance with that obtained with a conventional model, and good consistency was observed. PMID:12463237
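
    A pure-Voigt line-by-line transmittance of the kind described, without the interference (line-mixing) term or the Fourier-transform formulation, can be sketched using the Faddeeva function, whose real part is the Voigt profile. The line parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Voigt profile: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma),
    evaluated as the real part of the Faddeeva function w(z)."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

def transmittance(nu, lines, column):
    """Line-by-line Beer-Lambert transmittance of a homogeneous layer.
    lines: iterable of (center, strength, sigma, gamma); column: absorber amount."""
    tau = np.zeros_like(nu)
    for nu0, S, sigma, gamma in lines:
        tau += S * voigt(nu, nu0, sigma, gamma)
    return np.exp(-column * tau)
```

    Temperature and pressure enter through sigma (Doppler width) and gamma (collisional width) of each line, so the same two routines cover arbitrary layer conditions given a line database.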

  7. A transport based one-dimensional perturbation code for reactivity calculations in metal systems

    SciTech Connect

    Wenz, T.R.

    1995-02-01

    A one-dimensional reactivity calculation code is developed using first order perturbation theory. The reactivity equation is based on the multi-group transport equation using the discrete ordinates method for angular dependence. In addition to the first order perturbation approximations, the reactivity code uses only the isotropic scattering data, but cross section libraries with higher order scattering data can still be used with this code. The reactivity code obtains all the flux, cross section, and geometry data from the standard interface files created by ONEDANT, a discrete ordinates transport code. Comparisons between calculated and experimental reactivities were done with the central reactivity worth data for Lady Godiva, a bare uranium metal assembly. Good agreement is found for isotopes that do not violate the assumptions in the first order approximation. In general for cases where there are large discrepancies, the discretized cross section data is not accurately representing certain resonance regions that coincide with dominant flux groups in the Godiva assembly. Comparing reactivities calculated with first order perturbation theory and a straight {Delta}k/k calculation shows agreement within 10%, indicating the perturbation of the calculated fluxes is small enough for first order perturbation theory to be applicable in the modeled system. Computation time comparisons between reactivities calculated with first order perturbation theory and straight {Delta}k/k calculations indicate that considerable time can be saved by performing the calculation with a perturbation code, particularly as the complexity of the modeled problems increases.
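
    The relationship between a first-order perturbation estimate and a straight recomputation can be illustrated on a toy eigenvalue problem: for a small change dA in the system operator, first-order theory estimates the eigenvalue shift from the unperturbed forward and adjoint eigenvectors, and the estimate can be checked against brute-force re-solution. This is a generic sketch with a random stand-in matrix, not the ONEDANT-based code:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 6)) + 6.0 * np.eye(6)      # stand-in for the system operator
dA = 1e-3 * rng.standard_normal((6, 6))       # small cross-section perturbation

# Fundamental (largest) eigenpair and the corresponding adjoint eigenvector
w, V = np.linalg.eig(A)
i = np.argmax(w.real)
lam, phi = w[i].real, V[:, i].real
wl, U = np.linalg.eig(A.T)
phi_adj = U[:, np.argmax(wl.real)].real

# First-order perturbation estimate of the eigenvalue shift
dlam_pert = phi_adj @ dA @ phi / (phi_adj @ phi)

# Brute-force recomputation (the "straight" calculation)
dlam_direct = np.max(np.linalg.eig(A + dA)[0].real) - lam
```

    The perturbation estimate needs only the unperturbed eigenvectors, so many different perturbations can be scored without re-solving the eigenproblem, which is the source of the computation-time savings the abstract reports.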

  8. Artificial neural network based torque calculation of switched reluctance motor without locking the rotor

    NASA Astrophysics Data System (ADS)

    Kucuk, Fuat; Goto, Hiroki; Guo, Hai-Jiao; Ichinokura, Osamu

    2009-04-01

    Feedback of motor torque is required in most switched reluctance (SR) motor applications in order to control torque and torque ripple. An SR motor exhibits highly nonlinear magnetic properties, which do not allow torque to be calculated analytically. Torque can be measured directly with a torque sensor, but a sensor inevitably increases cost and has to be properly mounted on the motor shaft. Instead of a torque sensor, finite element analysis (FEA) may be employed for torque calculation. However, motor modeling and calculation take relatively long, and FEA results may also differ from actual measurements. The most convenient alternative seems to be calculating torque from measured values of rotor position, current, and flux linkage while locking the rotor at definite positions; however, this method needs an extra assembly to lock the rotor. In this study, a novel torque calculation based on artificial neural networks (ANNs) is presented. Magnetizing data are collected while a 6/4 SR motor is running, and must be interpolated for torque calculation. An ANN is a very strong tool for data interpolation. The ANN-based torque estimation is verified on the 6/4 SR motor and compared with FEA-based torque estimation to show its validity.
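
    The interpolation step can be sketched with a one-hidden-layer network fitted to synthetic magnetization data. The torque surface, motor parameters, and the random-feature/least-squares training used here are illustrative stand-ins; the paper's trained ANN and measured 6/4-motor data are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for magnetizing data collected while the motor runs:
# torque as a nonlinear function of rotor angle and phase current
theta = rng.uniform(0.0, np.pi / 2, 600)        # rotor angle, rad
current = rng.uniform(0.0, 10.0, 600)           # phase current, A
torque = 0.01 * current**2 * np.sin(2 * theta)  # toy torque surface, N*m

X = np.column_stack([theta / (np.pi / 2), current / 10.0])  # inputs scaled to [0, 1]
Xtr, Xte = X[:500], X[500:]
ytr, yte = torque[:500], torque[500:]

# One-hidden-layer network: random tanh features plus a least-squares output
# layer (an "extreme learning machine" style fit standing in for full backprop)
n_hidden = 64
W1 = rng.normal(0.0, 2.0, (2, n_hidden))
b1 = rng.uniform(-2.0, 2.0, n_hidden)

def hidden(Xs):
    h = np.tanh(Xs @ W1 + b1)
    return np.column_stack([h, np.ones(len(Xs))])  # append output bias term

W2, *_ = np.linalg.lstsq(hidden(Xtr), ytr, rcond=None)

rmse = np.sqrt(np.mean((hidden(Xte) @ W2 - yte) ** 2))  # error on unseen points
```

    The held-out points play the role of rotor positions and currents not present in the collected data, which is exactly where the interpolation is needed.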

  9. A correction-based dose calculation algorithm for kilovoltage x rays

    SciTech Connect

    Ding, George X.; Pawlowski, Jason M.; Coffey, Charles W.

    2008-12-15

    Frequent and repeated imaging procedures such as those performed in image-guided radiotherapy (IGRT) programs may add significant dose to radiosensitive organs of radiotherapy patients. It has been shown that kV-CBCT results in doses to bone that are up to a factor of 3-4 higher than those in surrounding soft tissue. Imaging guidance procedures are justified by their potential benefits, but the additional incremental dose per treatment fraction may exceed an individual organ tolerance. Hence it is important to manage and account for this additional imaging dose for radiotherapy patients. Currently available model-based dose calculation methods in radiation treatment planning (RTP) systems are not suitable for low-energy x rays, and new and fast calculation algorithms are needed in an RTP system for kilovoltage dose computations. This study presents a new dose calculation algorithm, referred to as the medium-dependent-correction (MDC) algorithm, for accurate patient dose calculation resulting from kilovoltage x rays. The accuracy of the new algorithm is validated against Monte Carlo calculations. The new algorithm overcomes the deficiency of existing density-correction-based algorithms in dose calculations for inhomogeneous media, especially for CT-based human volumetric images used in radiotherapy treatment planning.

  10. Calculation of the diffraction efficiency on concave gratings based on Fresnel-Kirchhoff's diffraction formula.

    PubMed

    Huang, Yuanshen; Li, Ting; Xu, Banglian; Hong, Ruijin; Tao, Chunxian; Ling, Jinzhong; Li, Baicheng; Zhang, Dawei; Ni, Zhengji; Zhuang, Songlin

    2013-02-10

    The Fraunhofer diffraction formula cannot be applied to calculate the diffraction wave energy distribution of concave gratings as it is for plane gratings, because their grooves are distributed on a concave spherical surface. In this paper, a method based on the Kirchhoff diffraction theory is proposed to calculate the diffraction efficiency of concave gratings by considering the curvature of the whole concave spherical surface. In this approach, each groove surface is divided into several small planes, on which the Kirchhoff diffraction field distribution is calculated, and the diffraction field of the whole concave grating is then obtained by superposition. Formulas to calculate the diffraction efficiency of Rowland-type and flat-field concave gratings are deduced for practical applications. Experimental results showed strong agreement with theoretical computations. With the proposed method, light energy can be optimized toward the expected diffraction wave range while implementing aberration-corrected design of concave gratings, particularly for concave blazed gratings. PMID:23400074
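
    The divide-and-superpose idea can be illustrated in the flat-grating limit: sample the surface with small elements, assign each a Kirchhoff-style phased contribution, and sum. The blazed sawtooth phase and all parameters below are illustrative, not the paper's concave geometry:

```python
import numpy as np

lam = 0.5e-6                  # wavelength, m (illustrative)
d = 2.0e-6                    # groove period, m
n_grooves = 40
pts = 32                      # small elements each groove is divided into

# element positions along the grating surface (flat-grating limit)
xs = (np.arange(n_grooves * pts) + 0.5) * (d / pts)
# sawtooth groove phase, blazed so that order m = 1 is favoured
blaze = 2.0 * np.pi * (xs % d) / d

def intensity(theta):
    # Kirchhoff-style superposition: sum the phased contributions of all
    # surface elements for normal incidence and far-field direction theta
    phase = blaze - 2.0 * np.pi / lam * xs * np.sin(theta)
    amp = np.exp(1j * phase).sum() / len(xs)
    return abs(amp) ** 2

theta_1 = np.arcsin(lam / d)            # expected first-order direction
on_peak = intensity(theta_1)            # elements add in phase here
off_peak = intensity(theta_1 + 0.01)    # slightly off the order
```

    For the concave case the element positions, normals, and distances would follow the spherical surface and each contribution would carry the 1/r and obliquity factors of the Fresnel-Kirchhoff integral; the superposition step itself is unchanged.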

  11. The effects of calculator-based laboratories on standardized test scores

    NASA Astrophysics Data System (ADS)

    Stevens, Charlotte Bethany Rains

    Nationwide, efforts to provide a productive science and math education increasingly center on the technology utilized in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBL) have become significant devices in the teaching of science and math in many states across the United States. The Texas Instruments graphing calculator paired with the Vernier Labpro interface is among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across the country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 tenth- and eleventh-grade physical science students, 101 of whom belonged to a control group and 87 of whom belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in middle Tennessee

  12. 40 CFR 1066.605 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... media buoyancy as described in 40 CFR 1065.690. (d) Calculate the emission mass of each gaseous... specified in paragraph (c) of this section or in 40 CFR part 1065, subpart G, as applicable. (b) See the... contamination as described in 40 CFR 1065.660(a), including continuous readings, sample bag readings,...

  13. Monte Carlo-based dose calculation engine for minibeam radiation therapy.

    PubMed

    Martínez-Rovira, I; Sempau, J; Prezado, Y

    2014-02-01

    Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue-sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked with experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods made it possible to achieve acceptable computation times for clinical settings (approximately 2 h). The calculation engine can be easily adapted with little or no programming effort to other synchrotron sources or to dose calculations in the presence of contrast agents. PMID:23597423

  14. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    NASA Astrophysics Data System (ADS)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first-principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission current T(E) through a graphene nano-pore as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  15. The Effect of Calculator-Based Ranger Activities on Students' Graphing Ability.

    ERIC Educational Resources Information Center

    Kwon, Oh Nam

    2002-01-01

    Addresses three issues of Calculator-based Ranger (CBR) activities on graphing abilities: (a) the effect of CBR activities on graphing abilities; (b) the extent to which prior knowledge about graphing skills affects graphing ability; and (c) the influence of instructional styles on students' graphing abilities. Indicates that CBR activities are…

  16. Preliminary result of transport properties calculation molten Ag-based superionics

    NASA Astrophysics Data System (ADS)

    Oztek, H. O.; Yılmaz, M.; Kavanoz, H. B.

    2016-03-01

    We studied molten Ag-based superionics (AgI, Ag2S and Ag3SI), which are well described by the Vashishta-Rahman potential. The molecular dynamics simulations were performed with the Moldy code in the NPT ensemble. Thermal properties are obtained from the Green-Kubo formalism with equilibrium molecular dynamics (EMD) simulation. The calculated results are compared with experimental results.
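
    The Green-Kubo step can be sketched on a synthetic heat-flux series whose transport coefficient is known analytically: for an Ornstein-Uhlenbeck flux with autocorrelation var·exp(-t/tau), the Green-Kubo integral equals var·tau. The series below is a stand-in for real EMD output, not simulation data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck "heat flux": <J(0)J(t)> = var * exp(-t/tau), so the
# Green-Kubo integral of the autocorrelation is var * tau analytically
dt, n = 0.01, 1_000_000
tau, var = 0.5, 2.0
a = np.exp(-dt / tau)
noise = rng.normal(0.0, np.sqrt(var * (1.0 - a * a)), n)
J = np.empty(n)
J[0] = 0.0
for i in range(1, n):                    # discrete OU recursion
    J[i] = a * J[i - 1] + noise[i]

# autocorrelation function via FFT (zero-padded for a linear correlation)
m = 1000                                 # correlation window, 10 * tau
f = np.fft.rfft(J - J.mean(), 2 * n)
acf = np.fft.irfft(f * np.conj(f))[:m] / (n - np.arange(m))

# running Green-Kubo integral, trapezoidal rule; expected ~ var * tau = 1.0
gk = (acf.sum() - 0.5 * (acf[0] + acf[-1])) * dt
```

    In a real EMD workflow the prefactors (volume, temperature, Boltzmann constant) and averaging over flux components would multiply this integral.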

  17. Medication calculation: the potential role of digital game-based learning in nurse education.

    PubMed

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area. PMID:24107685

  18. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals.

    PubMed

    Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav

    2016-02-14

    A comparative study of the lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions calculated using the coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of the lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol(-1) on an absolute scale) with an unbiased distribution of positive to negative deviations. As pair interaction energies present a dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable methods enabling the application of fragment-based methods for larger systems. PMID:26874495
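
    The additive fragment scheme can be illustrated for its dominant two-body term: half the sum of dimer energies between a reference molecule and all neighbours within a cutoff. A Lennard-Jones potential on a simple cubic lattice stands in for the CCSD(T)/CBS dimer energies (purely illustrative; the paper's crystals and energies are not reproduced):

```python
import numpy as np

def pair_energy(r, eps=1.0, sigma=1.0):
    """Stand-in dimer interaction (Lennard-Jones, reduced units); in the
    paper's scheme this would be a CCSD(T)/CBS dimer energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lattice_energy(a, cutoff):
    """Two-body term of the additive scheme: half the sum of pair energies
    between a reference molecule and all images within the cutoff
    (simple cubic lattice, lattice constant a)."""
    nmax = int(np.ceil(cutoff / a))
    g = np.arange(-nmax, nmax + 1)
    i, j, k = np.meshgrid(g, g, g, indexing="ij")
    r = a * np.sqrt(i**2 + j**2 + k**2).ravel()
    r = r[(r > 0.0) & (r <= cutoff)]
    return 0.5 * pair_energy(r).sum()

E8 = lattice_energy(a=1.0, cutoff=8.0)    # lattice energy per molecule
E12 = lattice_energy(a=1.0, cutoff=12.0)  # tighter convergence
```

    The known simple-cubic Lennard-Jones lattice sums give roughly -4.40 ε in this limit; in the paper's scheme the three- and four-body fragment energies are added on top of this pair term.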

  19. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals

    NASA Astrophysics Data System (ADS)

    Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav

    2016-02-01

    A comparative study of the lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions calculated using the coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of the lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol-1 on an absolute scale) with an unbiased distribution of positive to negative deviations. As pair interaction energies present a dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable methods enabling the application of fragment-based methods for larger systems.

  20. A NASTRAN DMAP procedure for calculation of base excitation modal participation factors

    NASA Technical Reports Server (NTRS)

    Case, W. R.

    1983-01-01

    This paper presents a technique for calculating the modal participation factors for base excitation problems using a DMAP alter to the NASTRAN real eigenvalue analysis Rigid Format. The DMAP program automates the generation of the seismic mass to add to the degrees of freedom representing the shaker input directions and calculates the modal participation factors. These are shown in the paper to be a good measure of the maximum acceleration expected at any point on the structure when the subsequent frequency response analysis is run.
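
    The participation factors themselves follow from the mass-normalized modes: Γ = Φᵀ M r for a rigid-body influence vector r, and the effective modal masses Γ² sum to the total mass. A minimal numpy sketch on a toy spring-mass chain (illustrative, not a NASTRAN model):

```python
import numpy as np

# Toy 4-DOF spring-mass chain fixed at the base (illustrative data)
m = np.array([2.0, 1.0, 1.0, 0.5])           # masses, kg
M = np.diag(m)
k = 1000.0                                   # spring stiffness, N/m
K = k * (np.diag([2.0, 2.0, 2.0, 1.0])
         - np.diag([1.0, 1.0, 1.0], 1)
         - np.diag([1.0, 1.0, 1.0], -1))

# real eigenvalue analysis K phi = w^2 M phi via the symmetrized problem
Ms = np.diag(1.0 / np.sqrt(m))
w2, Q = np.linalg.eigh(Ms @ K @ Ms)
Phi = Ms @ Q                                 # mass-normalized: Phi^T M Phi = I

r = np.ones(4)                               # rigid-body influence vector (base motion)
gamma = Phi.T @ M @ r                        # modal participation factors
m_eff = gamma**2                             # effective modal masses
```

    The effective modal masses summing to the total mass is the sense in which the participation factors bound the structure's response to base excitation before any frequency response analysis is run.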

  1. Modeling and Ab initio Calculations of Thermal Transport in Si-Based Clathrates and Solar Perovskites

    NASA Astrophysics Data System (ADS)

    He, Yuping

    2015-03-01

    We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that, by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values between 1 and 2, which could possibly be increased further by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.

  2. Optical characterization and crystal field calculations for some erbium based solid state materials for laser refrigeration

    NASA Astrophysics Data System (ADS)

    Hasan, Z.; Qiu, Z.; Johnson, Jackie; Homerick, Uwe

    2009-02-01

    The potential of three erbium-based solid hosts has been investigated for laser cooling. Absorption and emission spectra have been studied for the low-lying IR transitions of erbium that are relevant to recent reports of cooling using the 4I15/2-4I9/2 and 4I15/2-4I13/2 transitions. Experimental studies have been performed for erbium in three hosts: ZBLAN glass and KPb2Cl5 and Cs2NaYCl6 crystals. In order to estimate the efficiencies of cooling, theoretical calculations have been performed for the cubic elpasolite (Cs2NaYCl6) crystal. These calculations also provide first-principles insight into the cooling efficiency for non-cubic and glassy hosts, where such calculations are not possible.

  3. Structural predictions based on the compositions of cathodic materials by first-principles calculations

    NASA Astrophysics Data System (ADS)

    Li, Yang; Lian, Fang; Chen, Ning; Hao, Zhen-jia; Chou, Kuo-chih

    2015-05-01

    A first-principles method is applied to comparatively study the stability of lithium metal oxides with layered or spinel structures to predict the most energetically favorable structure for different compositions. The binding and reaction energies of the real or virtual layered LiMO2 and spinel LiM2O4 (M = Sc-Cu, Y-Ag, Mg-Sr, and Al-In) are calculated. The effect of element M on the structural stability, especially in the case of multiple-cation compounds, is discussed herein. The calculation results indicate that the phase stability depends on both the binding and reaction energies. The oxidation state of element M also plays a role in determining the dominant structure, i.e., layered or spinel phase. Moreover, calculation-based theoretical predictions of the phase stability of the doped materials agree with the previously reported experimental data.

  4. Efficient algorithms for semiclassical instanton calculations based on discretized path integrals

    SciTech Connect

    Kawatsu, Tsutomu E-mail: smiura@mail.kanazawa-u.ac.jp; Miura, Shinichi E-mail: smiura@mail.kanazawa-u.ac.jp

    2014-07-14

    The path integral instanton method is a promising way to calculate the tunneling splitting of energies for degenerate two-state systems. In order to calculate the tunneling splitting, we need to take the zero temperature limit, or the limit of infinite imaginary time duration. In the method developed by Richardson and Althorpe [J. Chem. Phys. 134, 054109 (2011)], the limit is simply replaced by a sufficiently long imaginary time. In the present study, we have developed a new formula for the tunneling splitting based on discretized path integrals that takes the limit analytically. We have applied our new formula to model systems and found that this approach can significantly reduce the computational cost and improve the numerical accuracy. We then combined the method with electronic structure calculations to obtain an accurate interatomic potential on the fly. We present an application of our ab initio instanton method to the ammonia umbrella flip motion.

  5. GPU-based acceleration of free energy calculations in solid state physics

    NASA Astrophysics Data System (ADS)

    Januszewski, Michał; Ptok, Andrzej; Crivelli, Dawid; Gardas, Bartłomiej

    2015-07-01

    Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter (OP). Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19× speedup compared to the CPU (119× compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite size effects.

  6. Understanding Iron-based catalysts with efficient Oxygen reduction activity from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Hafiz, Hasnain; Barbiellini, B.; Jia, Q.; Tylus, U.; Strickland, K.; Bansil, A.; Mukerjee, S.

    2015-03-01

    Catalysts based on Fe/N/C clusters can support the oxygen-reduction reaction (ORR) without the use of expensive metals such as platinum. These systems can also prevent poisoning species from blocking the active sites. We have performed spin-polarized calculations on various Fe/N/C fragments using the Vienna Ab initio Simulation Package (VASP) code. Some results are compared to similar calculations obtained with the Gaussian code. We investigate the partial density of states (PDOS) of the 3d orbitals near the Fermi level and calculate the binding energies of several ligands. Correlations of the binding energies with the 3d electronic PDOSs are used to propose electronic descriptors of the ORR associated with the 3d states of Fe. We also suggest a structural model for the most active site with a ferrous ion (Fe2+) in the high-spin state, the so-called Doublet 3 (D3).

  7. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these limitations. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated in turn with the minimum traveltime tree algorithm for tracing rays, by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  8. Review of dynamical models for external dose calculations based on Monte Carlo simulations in urbanised areas.

    PubMed

    Eged, Katalin; Kis, Zoltán; Voigt, Gabriele

    2006-01-01

    After an accidental release of radionuclides to the inhabited environment the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. For evaluating this exposure pathway, three main model requirements are needed: (i) to calculate the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) to describe the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) to combine all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches of photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted together with a short model-model features intercomparison. PMID:16095771

  9. Photon SAF calculation based on the Chinese mathematical phantom and comparison with the ORNL phantoms.

    PubMed

    Qiu, Rui; Li, Junli; Zhang, Zhan; Wu, Zhen; Zeng, Zhi; Fan, Jiajin

    2008-12-01

    The Chinese mathematical phantom (CMP) is a stylized human body model developed based on the methods of the Oak Ridge National Laboratory (ORNL) mathematical phantom series (OMPS), and data from the Reference Asian Man and the Chinese Reference Man. It is constructed for radiation dose estimation for Mongolians, whose anatomical parameters differ from those of Caucasians to some extent. Specific absorbed fractions (SAF) are useful quantities for the primary estimation of internal radiation dose. In this paper, a general Monte Carlo code, Monte Carlo N-Particle Code (MCNP), is used to transport particles and calculate SAF. A new variance reduction technique, called the "pointing probability with force collision" method, is implemented into MCNP to reduce the calculation uncertainty, especially for a small-volume target organ. Finally, SAF data for all 31 organs of both sexes of CMP are calculated. A comparison between SAF based on the male phantoms of CMP and OMPS demonstrates that differences clearly exist, and more than 80% of SAF data based on CMP are larger than those of OMPS. However, the differences are acceptable (they exceed one order of magnitude in less than 3% of situations) considering the differences in physique. Furthermore, trends in the SAF with increasing photon energy based on the two phantoms agree well. This model complements existing phantoms of different age, sex and ethnicity. PMID:19001898

  10. GPU-based ultra-fast dose calculation using a finite size pencil beam model

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.

    2009-10-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
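
    The FSPB superposition is what makes the problem data-parallel: with a shift-invariant kernel, the dose is a convolution of the beamlet weight map with the pencil-beam kernel, and every voxel can be evaluated independently. A 2-D numpy sketch with an illustrative Gaussian kernel (not the paper's FSPB model):

```python
import numpy as np

rng = np.random.default_rng(2)

nx = 64
weights = np.zeros((nx, nx))                  # beamlet weight (fluence) map
weights[24:40, 24:40] = rng.uniform(0.5, 1.0, (16, 16))  # open 16 x 16 field

yy, xx = np.mgrid[-nx // 2:nx // 2, -nx // 2:nx // 2]
kernel = np.exp(-(xx**2 + yy**2) / (2.0 * 3.0**2))  # Gaussian pencil kernel
kernel /= kernel.sum()                              # unit integral

# superpose every beamlet's kernel at every voxel in one FFT convolution;
# each output voxel is an independent sum, hence the data-parallel GPU fit
dose = np.real(np.fft.ifft2(np.fft.fft2(weights)
                            * np.fft.fft2(np.fft.ifftshift(kernel))))
```

    With a normalized kernel the total dose equals the total beamlet weight, a quick sanity check on any such superposition engine.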

  11. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy. PMID:19794244

  12. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    NASA Astrophysics Data System (ADS)

    Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.

    2006-08-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. 
The maximum relative error

  13. Dose calculation from a D-D-reaction-based BSA for boron neutron capture synovectomy.

    PubMed

    Abdalla, Khalid; Naqvi, A A; Maalej, N; Elshahat, B

    2010-01-01

    Monte Carlo simulations were carried out to calculate the dose in a knee phantom from a D-D-reaction-based Beam Shaping Assembly (BSA) for Boron Neutron Capture Synovectomy (BNCS). The BSA consists of a D(d,n)-reaction-based neutron source enclosed inside a polyethylene moderator and graphite reflector. The moderator and reflector sizes were optimized to deliver the highest ratio of thermal to fast neutron yield at the knee phantom. The neutron dose was then calculated at various depths in a knee phantom loaded with boron, and the therapeutic ratios of synovium dose/skin dose and synovium dose/bone dose were determined. Normalized to the same boron loading in the synovium, the therapeutic ratios obtained in the present study are 12-30 times higher than the published values. PMID:19828325

  14. Effect of composition on antiphase boundary energy in Ni3Al based alloys: Ab initio calculations

    NASA Astrophysics Data System (ADS)

    Gorbatov, O. I.; Lomaev, I. L.; Gornostyrev, Yu. N.; Ruban, A. V.; Furrer, D.; Venkatesh, V.; Novikov, D. L.; Burlatsky, S. F.

    2016-06-01

    The effect of composition on the antiphase boundary (APB) energy of Ni-based L12-ordered alloys is investigated by ab initio calculations employing the coherent potential approximation. The calculated APB energies for the {111} and {001} planes reproduce experimental values of the APB energy. The APB energies for the nonstoichiometric γ' phase increase with Al concentration, in line with experiment. The magnitude of the alloying effect on the APB energy correlates with the variation of the ordering energy of the alloy according to the alloying element's position in the 3d row. Elements from the left side of the 3d row increase the APB energy of the Ni-based L12-ordered alloys, while those from the right side, except Ni, affect it only slightly. A way to predict the effect of an addition on the {111} APB energy in a multicomponent alloy is discussed.

  15. A note on geometric method-based procedures to calculate the Hurst exponent

    NASA Astrophysics Data System (ADS)

    Trinidad Segovia, J. E.; Fernández-Martínez, M.; Sánchez-Granero, M. A.

    2012-03-01

    Geometric method-based procedures, which we will call GM algorithms hereafter, were introduced in M.A. Sánchez-Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551, to calculate the Hurst exponent of a time series. The authors proved that GM algorithms, based on a geometrical approach, are more accurate than classical algorithms, especially for short time series. The main contribution of this paper is to provide a mathematical background for the validity of these two algorithms for calculating the Hurst exponent H of random processes with stationary and self-affine increments. In particular, we show that these procedures are valid not only for exploring long memory in classical processes such as (fractional) Brownian motions, but also for estimating the Hurst exponent of (fractional) Lévy stable motions.
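
    The GM algorithms themselves are defined in the cited reference. As a rough, minimal illustration of Hurst-exponent estimation (a classical scaling estimator, not the GM procedure), one can fit the power-law growth of increment standard deviations:

```python
import numpy as np

def hurst_exponent(x, lags=range(2, 64)):
    """Estimate H from the scaling std(x[t+tau] - x[t]) ~ tau**H."""
    taus = np.array(list(lags))
    stds = np.array([np.std(x[tau:] - x[:-tau]) for tau in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(stds), 1)
    return slope

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(20000))  # ordinary Brownian motion, H ~ 0.5
h = hurst_exponent(bm)
```

For a Brownian motion the fitted slope should come out near 0.5; the GM algorithms of the paper are reported to do better than such classical estimators on short series.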

  16. Stiffness of Diphenylalanine-Based Molecular Solids from First Principles Calculations

    NASA Astrophysics Data System (ADS)

    Azuri, Ido; Hod, Oded; Gazit, Ehud; Kronik, Leeor

    2013-03-01

    Diphenylalanine-based peptide nanotubes were previously found to be unexpectedly stiff, with a Young's modulus of 19 GPa. Here, we calculate the Young's modulus from first principles, using density functional theory with dispersion corrections. This allows us to show that at least half of the stiffness of the material comes from dispersive interactions and to identify the nature of the interactions that contribute most to the stiffness. This presents a general strategy for the analysis of bioinspired functional materials.

  17. An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet

    NASA Astrophysics Data System (ADS)

    Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon

    2015-08-01

    The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
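
    The activity-based calculation described above can be sketched for one AIS track segment. The cubic propeller law for engine load is common practice in such inventories; the specific numeric factors below (SFOC, CO2 factor, vessel parameters) are illustrative assumptions, not values from the paper:

```python
# Hypothetical activity-based emission estimate for one vessel track segment.
CO2_PER_TONNE_FUEL = 3.206   # t CO2 per t marine diesel (common assumption)
SFOC = 220.0                 # specific fuel oil consumption, g/kWh (assumed)

def segment_emissions(speed_kn, design_speed_kn, mcr_kw, hours):
    """Engine load from the cubic propeller law, capped at 100% MCR."""
    load = min((speed_kn / design_speed_kn) ** 3, 1.0)
    fuel_t = mcr_kw * load * SFOC * hours / 1e6   # grams -> tonnes
    return fuel_t, fuel_t * CO2_PER_TONNE_FUEL

fuel, co2 = segment_emissions(speed_kn=8.0, design_speed_kn=10.0,
                              mcr_kw=500.0, hours=2.0)
```

Summing such segments over all AIS positions, with special load assignment while towing trawling or dredging gear, yields the temporally and spatially resolved inventory the paper describes.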

  18. Iterative diagonalization in augmented plane wave based methods in electronic structure calculations

    SciTech Connect

    Blaha, P.; Laskowski, R.; Schwarz, K.

    2010-01-20

    Due to increased computer power and advanced algorithms, quantum mechanical calculations based on Density Functional Theory are more and more widely used to solve real materials science problems. In this context, large nonlinear generalized eigenvalue problems must be solved repeatedly to calculate the electronic ground state of a solid or molecule. Due to the nonlinear nature of this problem, an iterative solution of the eigenvalue problem can be more efficient, provided it does not disturb the convergence of the self-consistent-field procedure. The blocked Davidson method is one of the widely used and efficient schemes for this purpose, but its performance depends critically on the preconditioning, i.e. the procedure to improve the search space for an accurate solution. For more diagonally dominant problems, which appear typically in plane-wave based pseudopotential calculations, the inverse of the diagonal of (H - ES) is used. However, for the more efficient 'augmented plane wave + local orbitals' basis set this preconditioning is not sufficient due to large off-diagonal terms caused by the local orbitals. We propose a new preconditioner based on the inverse of (H - λS) and demonstrate its efficiency for real applications using both a sequential and a parallel implementation of this algorithm in our WIEN2k code.
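
    As a toy illustration of where the preconditioner enters (a block-size-1 Davidson sketch with the classic diagonal preconditioner, S = I; not the WIEN2k implementation or its new (H - λS)-based preconditioner):

```python
import numpy as np

def davidson_lowest(H, tol=1e-8, max_iter=100):
    """Davidson-style iteration for the lowest eigenpair of a symmetric,
    diagonally dominant H, using the diagonal preconditioner."""
    n = H.shape[0]
    V = np.zeros((n, 0))
    v = np.eye(n, 1)[:, 0]            # starting guess vector
    for _ in range(max_iter):
        v = v - V @ (V.T @ v)         # orthogonalize against subspace
        v /= np.linalg.norm(v)
        V = np.column_stack([V, v])
        T = V.T @ H @ V               # Rayleigh-Ritz in the subspace
        theta, s = np.linalg.eigh(T)
        theta, s = theta[0], s[:, 0]
        u = V @ s
        r = H @ u - theta * u         # residual
        if np.linalg.norm(r) < tol:
            break
        # diagonal preconditioner: improves the search direction when H is
        # diagonally dominant, the regime discussed in the abstract
        v = r / (np.diag(H) - theta + 1e-12)
    return theta, u

rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 51.0)) + 0.01 * rng.standard_normal((50, 50))
A = (A + A.T) / 2                     # symmetric, diagonally dominant test matrix
theta, _ = davidson_lowest(A)
```

For APW+lo-like problems with large off-diagonal blocks, this diagonal correction step is exactly what the abstract reports to be insufficient.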

  19. Plane-wave based electronic structure calculations for correlated materials using dynamical mean-field theory and projected local orbitals

    NASA Astrophysics Data System (ADS)

    Amadon, B.; Lechermann, F.; Georges, A.; Jollet, F.; Wehling, T. O.; Lichtenstein, A. I.

    2008-05-01

    The description of realistic strongly correlated systems has recently advanced through the combination of density functional theory in the local density approximation (LDA) and dynamical mean field theory (DMFT). This LDA+DMFT method is able to treat both strongly correlated insulators and metals. Several interfaces between LDA and DMFT have been used, such as (Nth-order) linear muffin-tin orbitals or maximally localized Wannier functions. Such schemes, however, are either complex to use or rely on additional simplifications (e.g., the atomic sphere approximation). We present an alternative implementation of LDA+DMFT, which keeps the precision of the Wannier implementation but is lighter. It relies on the projection of localized orbitals onto a restricted set of Kohn-Sham states to define the correlated subspace. The method is implemented within the projector augmented wave and the mixed-basis pseudopotential frameworks. This opens the way to electronic structure calculations within LDA+DMFT for more complex structures with the precision of an all-electron method. We present an application to two correlated systems, namely SrVO3 and β-NiS (a charge-transfer material), including ligand states in the basis set. The results are compared to calculations done with maximally localized Wannier functions, and the physical features appearing in the orbitally resolved spectral functions are discussed.

  20. An economic prediction of refinement coefficients in wavelet-based adaptive methods for electron structure calculations.

    PubMed

    Pipek, János; Nagy, Szilvia

    2013-03-01

    The wave function of a many-electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows the unnecessary wavelets to be sorted out with great reliability. PMID:23115109
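
    The prediction algorithm itself belongs to the paper. A minimal sketch of the underlying observation, namely that most fine-detail Haar coefficients of a smooth function are negligible and can be decimated, could look like this (illustrative Gaussian stand-in, not a real wave function):

```python
import numpy as np

def haar_details(f):
    """One level of the orthonormal Haar transform: detail coefficients only."""
    f = f.reshape(-1, 2)
    return (f[:, 0] - f[:, 1]) / np.sqrt(2.0)

x = np.linspace(-8, 8, 1024)
psi = np.exp(-x**2)                        # smooth, localized test function
d = haar_details(psi)
kept = np.abs(d) > 1e-4 * np.abs(d).max()  # decimate negligible wavelets
frac = kept.mean()                         # fraction of details actually needed
```

Because the function is smooth and localized, well under half of the fine-detail coefficients survive the threshold; a prediction scheme like the paper's aims to identify such coefficients before they are ever computed.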

  1. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered using 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using the reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On the simulated brain and head-and-neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU, respectively, after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  2. A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.

    PubMed

    Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y

    2016-01-01

    This study introduces the design of a DICOM-RT-based tool box to facilitate 4D dose calculation based on deformable voxel-dose registration. The computational structure and the calculation algorithm of the tool box are discussed explicitly. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of the tool box for clinical application was verified retrospectively with nine clinical cases. The logic and robustness of the tool box were tested in 27 applications, all of which completed successfully with no computational errors. The accumulated dose coverage was assessed as a function of planning CT taken at the end-inhale, end-exhale, and mean tumor positions. The results indicated that the majority of cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. The comparability of these results to the literature implies that the tool box can be relied on for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatment of moving targets. PMID:27074476

  3. Development of facile property calculation model for adsorption chillers based on equilibrium adsorption cycle

    NASA Astrophysics Data System (ADS)

    Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal management technology Team

    A facile property calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are a promising technology for efficient use of heat energy because they can generate cooling energy from relatively low-temperature heat. The properties of adsorption chillers are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer rate and the adsorption/desorption rate. In our model, the dependence of adsorption chiller properties on heat source temperatures is represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of the heat source temperatures, which represent the differences between equilibrium cycles and real cycles that stem from kinetic adsorption processes. We found that the present approximated equilibrium model could calculate the properties of adsorption chillers (driving energy, cooling energy, COP, etc.) under various driving conditions quickly and accurately, within average errors of 6% compared to experimental data.

  4. Thermal conductivity calculation of bio-aggregates based materials using finite and discrete element methods

    NASA Astrophysics Data System (ADS)

    Pennec, Fabienne; Alzina, Arnaud; Tessier-Doyen, Nicolas; Naitali, Benoit; Smith, David S.

    2012-11-01

    This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes, or the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the volume element and the finite element method to calculate the homogenized properties. A 3D optical scanner was used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the representative volume element (RVE) is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The effective thermal conductivity of the heterogeneous volume is then calculated using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements were performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compared satisfactorily with a batch of numerical simulations.
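
    A minimal sketch of the velocity Verlet scheme mentioned above (1-D, constant gravity; the actual tool handles 3-D motion with particle-particle collisions):

```python
G = -9.81  # gravitational acceleration, m/s^2

def velocity_verlet(z0, v0, dt, n_steps, a=lambda z: G):
    """Advance position and velocity with the velocity Verlet integrator."""
    z, v = z0, v0
    acc = a(z)
    for _ in range(n_steps):
        z += v * dt + 0.5 * acc * dt**2   # position update
        new_acc = a(z)                    # force evaluation at new position
        v += 0.5 * (acc + new_acc) * dt   # velocity update with averaged accel
        acc = new_acc
    return z, v

z, v = velocity_verlet(z0=10.0, v0=0.0, dt=1e-3, n_steps=1000)  # 1 s of free fall
```

For a constant acceleration the scheme is exact, so after 1 s the particle has fallen 0.5·9.81 m from its 10 m start; in the settling simulation, the acceleration callback would also include contact forces from neighbouring particles.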

  5. Automated Calculation of Water-equivalent Diameter (DW) Based on AAPM Task Group 220.

    PubMed

    Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam; Dougherty, Geoff

    2016-01-01

    The purpose of this study is to accurately and effectively automate the calculation of the water-equivalent diameter (DW) from 3D CT images for estimating the size-specific dose. DW is the metric that characterizes patient size and attenuation. In this study, DW was calculated for standard CTDI phantoms and patient images. Two types of phantom were used, one representing the head with a diameter of 16 cm and the other representing the body with a diameter of 32 cm. Images of 63 patients were also used: 32 who had undergone a CT head examination and 31 who had undergone a CT thorax examination. There are three main parts to our algorithm for automated DW calculation. The first part reads the 3D images and converts the CT data into Hounsfield units (HU). The second part finds the contour of the phantom or patient automatically. The third part automates the calculation of DW based on the automated contouring for every slice (DW,all). The results of this study show that the automated and manual calculations of DW are in good agreement for phantoms and patients, with differences of less than 0.5%. The results also show that estimating DW,all using DW,n=1 (central slice along the longitudinal axis) produces percentage differences of -0.92% ± 3.37% and 6.75% ± 1.92%, while estimating DW,all using DW,n=9 produces percentage differences of 0.23% ± 0.16% and 0.87% ± 0.36%, for thorax and head examinations, respectively. The percentage differences between the normalized size-specific dose estimate for every slice (nSSDEall) and nSSDEn=1 are 0.74% ± 2.82% and -4.35% ± 1.18% for thorax and head examinations, respectively; between nSSDEall and nSSDEn=9 they are 0.00% ± 0.46% and -0.60% ± 0.24%, respectively. PMID:27455491
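
    The per-slice DW definition from AAPM TG-220 that underlies this algorithm can be sketched directly (a minimal implementation; the grid and phantom below are illustrative, and a real implementation would restrict the sum to the patient contour):

```python
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_area_mm2):
    """AAPM TG-220 water-equivalent diameter of one axial slice:
    Dw = 2*sqrt(Aw/pi), with Aw the attenuation-weighted area."""
    aw = np.sum(hu_slice / 1000.0 + 1.0) * pixel_area_mm2
    return 2.0 * np.sqrt(aw / np.pi)

# Sanity check: a water cylinder (HU = 0) of radius 160 mm should give
# Dw = 320 mm; surrounding air (HU = -1000) contributes zero weight.
n, pix = 512, 1.0                      # 512x512 grid of 1x1 mm pixels
y, x = np.mgrid[:n, :n]
mask = (x - n / 2) ** 2 + (y - n / 2) ** 2 <= 160.0 ** 2
hu = np.where(mask, 0.0, -1000.0)
dw = water_equivalent_diameter(hu, pix)
```

Repeating this over every slice gives DW,all; the study's DW,n=1 and DW,n=9 estimates simply subsample which slices enter that average.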

  6. Develop and test a solvent accessible surface area-based model in conformational entropy calculations.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2012-05-25

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, is obtained by summing up the contributions of all atoms, whether buried or exposed. Each atom has two types of surface area: solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface area are weighted to estimate the contribution of an atom to S. Atoms of the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface area. This entropy model was parametrized using a large set of small molecules whose conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in these tests; T was always set to 298.15 K. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
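
    The exact functional form and fitted weights belong to the paper; one plausible reading of the description above (each atom of type t contributing w[t]·(SAS + k·BSAS)) can be sketched as follows, with entirely illustrative weights and areas:

```python
# Hedged sketch of a WSAS-type entropy sum as described above. The per-type
# weights, the balance parameter k, and the atom data here are all toy values,
# not the paper's fitted parameters.
def wsas_TS(atoms, weights, k):
    """atoms: iterable of (atom_type, sas, bsas) tuples.
    Returns the weighted surface-area sum standing in for TS."""
    return sum(weights[t] * (sas + k * bsas) for t, sas, bsas in atoms)

atoms = [("C", 12.0, 3.0), ("N", 8.0, 5.0), ("O", 6.0, 7.0)]  # toy areas
ts = wsas_TS(atoms, weights={"C": 0.02, "N": 0.03, "O": 0.025}, k=0.5)
```

Because only surface areas are needed, such a sum costs a tiny fraction of a normal-mode analysis, which is the speed-up the abstract emphasizes.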

  7. Monte Carlo-based dose calculation for 32P patch source for superficial brachytherapy applications

    PubMed Central

    Sahoo, Sridhar; Palani, Selvam T.; Saxena, S. K.; Babu, D. A. R.; Dash, A.

    2015-01-01

    Skin cancer treatment involving a 32P source is an easy, less expensive method of treatment limited to small and superficial lesions approximately 1 mm deep. Bhabha Atomic Research Centre (BARC) has indigenously developed a 32P nafion-based patch source (1 cm × 1 cm) for treating skin cancer. For this source, the values of dose per unit activity at different depths, including dose profiles in water, were calculated using the EGSnrc-based Monte Carlo code system. For an initial activity of 1 Bq distributed over the 1 cm² surface area of the source, the calculated central-axis depth dose values are 3.62 × 10⁻¹⁰ Gy Bq⁻¹ and 8.41 × 10⁻¹¹ Gy Bq⁻¹ at 0.0125 and 1 mm depths in water, respectively. Hence, the treatment time calculated for delivering a therapeutic dose of 30 Gy at 1 mm depth along the central axis of the source with 37 MBq activity is about 2.7 h. PMID:26150682
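
    The quoted 2.7 h follows directly from the abstract's own numbers, reading the Gy Bq⁻¹ coefficient as a dose rate per unit activity (i.e., Gy per Bq per second):

```python
# Reproduce the treatment time from the stated values: dose-rate coefficient
# 8.41e-11 Gy/(Bq*s) at 1 mm depth, source activity 37 MBq, target dose 30 Gy.
dose_rate = 8.41e-11 * 37e6          # Gy/s at 1 mm depth
t_hours = 30.0 / dose_rate / 3600.0  # treatment time in hours
```

This evaluates to roughly 2.7 h, matching the abstract.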

  8. Efficient Procedure for the Numerical Calculation of Harmonic Vibrational Frequencies Based on Internal Coordinates

    SciTech Connect

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2013-08-15

    We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm–1 from those obtained from Cartesian coordinates.
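
    Taking the paper's stated savings formula at face value, the reduction in single-point energy evaluations can be tabulated for the clusters mentioned above:

```python
def energy_point_savings(n_atoms):
    """Savings in single-point energy calculations for a C1-symmetry molecule
    when building the Hessian in internal rather than Cartesian coordinates,
    per the 36N - 30 figure quoted above."""
    return 36 * n_atoms - 30

# Water dimer (N = 6) and water trimer (N = 9):
savings = {n: energy_point_savings(n) for n in (6, 9)}
```

So the dimer saves 186 energy points and the trimer 294, before any additional symmetry-related savings.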

  9. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B.

    2013-06-01

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm

  11. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT

    SciTech Connect

    Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K

    2014-06-01

    Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of the pixel values in the simulation CT (sim-CT) and CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images acquired immediately before treatment of 10 prostate cancer patients were obtained. Because of insufficient calibration of the pixel values in the CBCT, the images are difficult to use directly for dose calculation. The pixel values in the CBCT images were therefore converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images, and the dose distributions were recalculated with the same monitor units (MUs). The resulting prescription doses were compared with those of the original plans. Results: For the pixel-value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle, and right femur were −10.78 ± 34.60, 11.78 ± 41.06, 29.49 ± 36.99, and 0.14 ± 31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13 ± 0.95%, 0.34 ± 0.86%, −0.05 ± 0.55%, 1.35 ± 0.98%, 1.77 ± 0.56%, 0.89 ± 0.69%, and 1.69 ± 0.71%, respectively; as a whole, the difference in prescription dose was 1.54 ± 0.4%. Conclusion: Dose calculation on CBCT images achieves an accuracy of <2% with this pixel-value conversion program. This may enable implementation of efficient adaptive radiotherapy.
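
    The in-house conversion program is not described in detail; histogram matching is one standard way to realize the histogram-based conversion the abstract describes, and can be sketched as follows (toy data, not clinical HU):

```python
import numpy as np

def match_histogram(cbct, simct):
    """Map CBCT pixel values onto the sim-CT value distribution by matching
    the two cumulative histograms (a standard technique; the paper's in-house
    program may differ in detail)."""
    src = np.sort(cbct.ravel())
    ref = np.sort(simct.ravel())
    # rank each CBCT value within its own distribution, then look up the
    # sim-CT value at the same quantile
    quantiles = np.searchsorted(src, cbct.ravel(), side="right") / src.size
    mapped = np.interp(quantiles, np.linspace(0.0, 1.0, ref.size), ref)
    return mapped.reshape(cbct.shape)

rng = np.random.default_rng(2)
simct = rng.normal(40.0, 30.0, (64, 64))   # toy "HU-like" reference values
cbct = 0.8 * simct + 25.0                  # miscalibrated copy of the same scene
converted = match_histogram(cbct, simct)
```

Because the toy miscalibration is monotone, the matched image recovers the sim-CT values almost exactly; on real anatomy the residual differences are what the per-tissue statistics above quantify.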

  12. A Brief User's Guide to the Excel® -Based DF Calculator

    SciTech Connect

    Jubin, Robert T.

    2015-09-30

    To understand the importance of capturing penetrating forms of iodine as well as the other volatile radionuclides, a calculation tool was developed in the form of an Excel® spreadsheet to estimate the overall plant decontamination factor (DF). The tool requires the user to estimate the splits of the volatile radionuclides within the major portions of the reprocessing plant, the speciation of iodine, and individual DFs for each off-gas stream within the used nuclear fuel reprocessing plant. The impact on the overall plant DF for each volatile radionuclide is then calculated by the tool based on these user choices. The Excel® spreadsheet tracks elemental and penetrating forms of iodine separately and allows changes in the speciation of iodine at each processing step. It also tracks 3H, 14C and 85Kr. This document provides a basic user's guide to this tool.
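
    The overall-DF arithmetic such a spreadsheet performs can be sketched as follows; the splits and stream DFs below are illustrative, not values from the tool:

```python
# If a fraction f_i of a volatile radionuclide reports to off-gas stream i,
# which is abated by a decontamination factor DF_i, the released fraction is
# sum(f_i / DF_i), and the overall plant DF is its reciprocal.
def overall_df(splits_and_dfs):
    released = sum(f / df for f, df in splits_and_dfs)
    return 1.0 / released

# Example: 90% of iodine to a stream abated with DF 1000, 10% to a stream
# abated with DF 10 (illustrative numbers only).
df_plant = overall_df([(0.9, 1000.0), (0.1, 10.0)])
```

Note how the poorly abated minor stream dominates: the overall DF here is only about 92 despite the DF of 1000 on the main stream, which is exactly why the tool tracks splits and penetrating species explicitly.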

  13. ALL-ELECTRONIC DROPLET GENERATION ON-CHIP WITH REAL-TIME FEEDBACK CONTROL FOR EWOD DIGITAL MICROFLUIDICS

    PubMed Central

    Gong, Jian; Kim, Chang-Jin “CJ”

    2009-01-01

    Electrowetting-on-dielectric (EWOD) actuation enables digital (or droplet) microfluidics, where small packets of liquid are manipulated on a two-dimensional surface. Due to its mechanical simplicity and low energy consumption, EWOD holds particular promise for portable systems. To improve the volume precision of the droplets, which is desired for quantitative applications such as biochemical assays, existing practices would require near-perfect device fabrication and operating conditions unless the droplets are generated under feedback control by an extra off-chip pump setup. In this paper, we develop an all-electronic (i.e., no ancillary pumping) real-time feedback control of on-chip droplet generation. Fast voltage modulation, capacitance sensing, and a discrete-time PID feedback controller are integrated on the operating electronic board. A significant improvement in droplet volume uniformity is obtained compared with open-loop control as well as with the previous feedback control employing an external pump. Furthermore, this new capability empowers users to prescribe droplet volumes even below the previously considered minimum, allowing, for example, 1:x (x < 1) mixing, in comparison to the previously considered n:m mixing (i.e., n and m unit droplets). PMID:18497909
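
    A minimal discrete-time PID loop of the kind described can be sketched as follows; the gains and the first-order "plant" standing in for the droplet/capacitance dynamics are illustrative assumptions, not the paper's hardware parameters:

```python
# Discrete-time PID controller closing on a sensed quantity (here a toy
# stand-in for the capacitance-derived droplet volume).
class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order plant: dV/dt = u - 0.5*V (illustrative dynamics).
pid = DiscretePID(kp=2.0, ki=5.0, kd=0.0, dt=1e-3)
v = 0.0
for _ in range(5000):              # 5 s of closed-loop operation
    u = pid.update(1.0, v)         # target "volume": 1.0 (arbitrary units)
    v += (u - 0.5 * v) * 1e-3      # Euler step of the plant
```

The integral term drives the steady-state error to zero, which is the property that lets closed-loop generation hit a prescribed droplet volume despite device-to-device variation.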

  14. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    NASA Astrophysics Data System (ADS)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include thermal-hydraulic feedback in the Monte Carlo method and of speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel-based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly, and at best over small spatial zones consisting of a pin cell), which are functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometric complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computing technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method requires vast computational time. Interest in Monte Carlo methods has increased thanks to improvements in the capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  15. Nanothermochromics with VO2-based core-shell structures: Calculated luminous and solar optical properties

    NASA Astrophysics Data System (ADS)

    Li, S.-Y.; Niklasson, G. A.; Granqvist, C. G.

    2011-06-01

    Composites including VO2-based thermochromic nanoparticles are able to combine high luminous transmittance Tlum with a significant modulation of the solar energy transmittance ΔTsol at a "critical" temperature in the vicinity of room temperature. Thus nanothermochromics is of much interest for energy efficient fenestration and offers advantages over thermochromic VO2-based thin films. This paper presents calculations based on effective medium theory applied to dilute suspensions of core-shell nanoparticles and demonstrates that, in particular, moderately thin-walled hollow spherical VO2 nanoshells can give significantly higher values of ΔTsol than solid nanoparticles at the expense of a somewhat lowered Tlum. This paper is a sequel to a recent publication [S.-Y. Li, G. A. Niklasson, and C. G. Granqvist, J. Appl. Phys. 108, 063525 (2010)].

  16. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray and electron beams to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone for calculating the dose distribution with the Monte Carlo method. The effects of the calibration curves established using various CT scanners are not clinically significant based on our investigation. The deviation of the cumulative dose-volume histogram values derived from the CT-based voxel phantoms is less than 1% for the given target.

  17. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work on several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  18. Improving iterative surface energy balance convergence for remote sensing based flux calculation

    NASA Astrophysics Data System (ADS)

    Dhungel, Ramesh; Allen, Richard G.; Trezza, Ricardo

    2016-04-01

    A modification of the iterative procedure of the surface energy balance was proposed to expedite the convergence of the Monin-Obukhov stability correction used in remote sensing based flux calculations. This was demonstrated using ground-based weather stations as well as gridded weather data (North American Regional Reanalysis) and remote sensing based (Landsat 5, 7) images. The study was conducted for different land-use classes in southern Idaho and northern California for multiple satellite overpasses. The convergence behavior of a selected Landsat pixel, as well as of all the Landsat pixels within the area of interest, was analyzed. The modified version required several times fewer iterations than the current iterative technique. At low wind speed (~1.3 m/s), the current iterative technique was not able to find a solution of the surface energy balance for all of the Landsat pixels, while the modified version achieved it in a few iterations. The study will help many operational evapotranspiration models avoid nonconvergence at low wind speeds, which increases the accuracy of flux calculations.
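The convergence problem above can be illustrated generically with a fixed-point iteration x = g(x) standing in for the stability-correction loop; this is not the paper's modification, merely a sketch of how damping an oscillating update reduces iteration counts. The surrogate map g is invented for the example.

```python
# Generic illustration: when g(x) makes the iterate oscillate around the root,
# an under-relaxed update x <- (1 - w) x + w g(x) converges in far fewer steps.
def iterate(g, x0, relax=1.0, tol=1e-8, max_iter=10_000):
    x = x0
    for n in range(1, max_iter + 1):
        x_new = (1.0 - relax) * x + relax * g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

g = lambda x: -0.9 * x + 1.0     # hypothetical surrogate map with g'(x) = -0.9
x_plain, n_plain = iterate(g, 0.0, relax=1.0)    # plain fixed-point iteration
x_damped, n_damped = iterate(g, 0.0, relax=0.5)  # damped (relaxed) update
```

With relax=0.5 the effective contraction factor drops from 0.9 to 0.05, so the damped loop needs an order of magnitude fewer iterations to reach the same tolerance.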

  19. A method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1993-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate 2D elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain energy release rate obtained using beam theory agrees very well with the 2D finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.
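For orientation, the classical Euler-Bernoulli result for a double cantilever beam (DCB) gives the baseline that the paper's Timoshenko-plus-root-rotation model improves upon. The sketch below codes only that uncorrected textbook formula, with hypothetical specimen numbers; it is not the paper's corrected expression.

```python
# Classical Euler-Bernoulli strain energy release rate for a DCB:
# G = 12 P^2 a^2 / (E b^2 h^3). Shear deformation and crack-tip root
# rotations (the paper's contribution) would raise G toward the 2-D value.
def dcb_g_euler_bernoulli(P, a, E, b, h):
    """P: load per arm [N], a: crack length [m], E: Young's modulus [Pa],
    b: width [m], h: height of each arm [m]. Returns G in J/m^2."""
    return 12.0 * P**2 * a**2 / (E * b**2 * h**3)

# Illustrative numbers for a hypothetical aluminium specimen
G = dcb_g_euler_bernoulli(P=100.0, a=0.05, E=70e9, b=0.02, h=0.003)
```

Because the classical formula treats each arm as built-in at the crack tip, it underestimates compliance; the root-rotation correction the paper adds recovers the extra energy release that the 2D finite element solution captures.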

  20. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)

  1. Synthesis, spectral, optical properties and theoretical calculations on Schiff base ligands containing o-tolidine

    NASA Astrophysics Data System (ADS)

    Arroudj, S.; Bouchouit, M.; Bouchouit, K.; Bouraiou, A.; Messaadia, L.; Kulyk, B.; Figa, V.; Bouacida, S.; Sofiani, Z.; Taboukhat, S.

    2016-06-01

    This paper explores the synthesis, structural characterization and optical properties of two new Schiff bases. These compounds were obtained by condensation of o-tolidine with salicylaldehyde and cinnamaldehyde. The obtained ligands were characterized by UV and 1H NMR spectroscopy. Their third-order NLO properties were measured using the third-harmonic generation technique on thin films at 1064 nm. The electric dipole moment (μ), the polarizability (α) and the first hyperpolarizability (β) were calculated using the density functional B3LYP method with the lanl2dz basis set. The title compounds show nonzero β values, revealing second-order NLO behaviour.

  2. Graph model for calculating the properties of saturated monoalcohols based on the additivity of energy terms

    NASA Astrophysics Data System (ADS)

    Grebeshkov, V. V.; Smolyakov, V. M.

    2012-05-01

    A 16-constant additive scheme for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O was derived by decomposing the triangular numbers of Pascal's triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. Using this scheme for the calculation of properties of saturated monoalcohols as an example, it was shown that each coefficient of the scheme (that is, the number of ways to superimpose a chain of a given length i1, i2, … on a molecular graph) results from the decomposition of the triangular numbers of Pascal's triangle. A linear dependence was found within the adopted classification of structural elements. Sixteen parameters of the scheme were recorded as linear combinations of 17 parameters. The enthalpies of vaporization L°(298 K) of the saturated monoalcohols CH4O-C9H20O for which there were no experimental data were calculated. It was shown that the parameters are not chosen randomly when this procedure of constructing an additive scheme by decomposing the triangular numbers of Pascal's triangle is used.
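The additivity idea above, a property expressed as a linear combination of subgraph (chain) counts, can be sketched as a least-squares fit. The fragment counts and property values below are invented for illustration; this is not the paper's 16-constant scheme.

```python
import numpy as np

# Rows: molecules; columns: counts of structural fragments (hypothetical).
N = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 2, 0],
              [1, 2, 1]], dtype=float)
# "Measured" property, e.g. vaporization enthalpy in kJ/mol (made-up values).
y = np.array([37.4, 42.3, 47.2, 51.0])

p, *_ = np.linalg.lstsq(N, y, rcond=None)   # additive-scheme parameters
y_pred = N @ p                              # property predicted by additivity
```

Once the parameters p are fitted on molecules with known data, the same matrix-vector product predicts the property for homologues lacking experimental values, which is exactly how the missing vaporization enthalpies are filled in.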

  3. a Novel Sub-Pixel Matching Algorithm Based on Phase Correlation Using Peak Calculation

    NASA Astrophysics Data System (ADS)

    Xie, Junfeng; Mo, Fan; Yang, Chao; Li, Pin; Tian, Shiqiang

    2016-06-01

    The matching accuracy of homonymy points in stereo images is a key issue in photogrammetry, as it influences the geometric accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve matching accuracy. The theoretical peak centre, which corresponds to the sub-pixel displacement, is obtained by Peak Calculation (PC) from the inherent geometric relationship of the inverse normalized cross-power spectrum. Mismatched points are rejected by two strategies: a window constraint, designed from the matching window and geometric constraints, and a correlation-coefficient test, which is effective for removing mismatched points in satellite images. After these steps, a large number of high-precision homonymy points remain. Finally, three experiments were conducted to verify the accuracy and efficiency of the presented method. The results show that it outperforms traditional phase correlation matching methods based on surface fitting in both accuracy and efficiency, and the accuracy of the proposed phase correlation matching algorithm reaches 0.1 pixel with higher computational efficiency.
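The integer-pixel core of phase correlation can be sketched with NumPy as below. The paper's contribution, peak calculation on the inverse normalized cross-power spectrum, refines this peak to sub-pixel precision; that refinement is not reproduced here.

```python
import numpy as np

def phase_correlate(a, b):
    """Return the integer (row, col) shift of a relative to b via the
    normalized cross-power spectrum (pure-phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cps = Fa * np.conj(Fb)
    cps /= np.abs(cps) + 1e-12          # normalize -> keep phase only
    corr = np.fft.ifft2(cps).real       # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
shift = phase_correlate(shifted, img)   # recovers (5, -3)
```

Because the normalized spectrum keeps only phase, the correlation surface is a sharp delta-like peak, which is what makes fitting or calculating the true (sub-pixel) peak centre feasible.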

  4. Accelerating Atomic Orbital-based Electronic Structure Calculation via Pole Expansion plus Selected Inversion

    SciTech Connect

    Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin

    2012-02-10

    We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEpSI are modest. This even makes it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes on a single processor. We also show that the use of PEpSI does not lead to loss of the accuracy required in a practical DFT calculation.

  5. Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank

    2008-05-01

    Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve, which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in the extracted effective atomic numbers Z and the relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on parametrization of the linear attenuation coefficient was performed for 17 tissue-equivalent inserts inside a solid water phantom. Scans of the phantom were acquired at 100 kVp and 140 kVp, from which the Z and ρe values of each insert were derived. The mean errors of the Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. Comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers significantly higher accuracy compared to the conventional single-energy segmentation.
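Once (Z, ρe) have been extracted per voxel, a material can be assigned by nearest match in a lookup table, as sketched below. This is a hedged illustration, not the paper's parametrization: the table entries, the scale factor, and the query values are all hypothetical.

```python
# Hypothetical (Z_eff, rho_e) table for a few tissue classes (illustrative only).
MATERIALS = {
    "lung":      (7.6, 0.26),
    "soft":      (7.5, 1.00),
    "soft bone": (10.9, 1.28),
    "hard bone": (13.6, 1.70),
}

def assign_material(z, rho_e, z_scale=0.1):
    """Nearest material in a scaled (Z, rho_e) space; z_scale balances the
    two axes, since Z spans ~7-14 while rho_e spans ~0.2-1.8."""
    return min(MATERIALS,
               key=lambda m: (z_scale * (MATERIALS[m][0] - z)) ** 2
                             + (MATERIALS[m][1] - rho_e) ** 2)

label = assign_material(11.2, 1.30)   # lands on "soft bone"
```

The point of the dual-energy approach is visible here: lung and soft tissue share nearly the same Z but differ in ρe, while soft and bony tissues separate in Z, so the two coordinates together disambiguate media that a single (HU; ρ) curve conflates.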

  6. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Methodology for calculating the per-treatment base... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per.... The methodology for determining the per treatment base rate under the ESRD prospective payment...

  7. GPU-based fast Monte Carlo dose calculation for proton therapy

    PubMed Central

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B

    2015-01-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6–22 s to simulate 10 million source protons to achieve ~1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy. 
PMID:23128424
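The 2%/2 mm gamma criterion quoted above can be illustrated in one dimension. Real evaluations are 3-D; this simplified global-gamma sketch with an invented Gaussian profile is not the gPMC/TOPAS comparison itself.

```python
import numpy as np

def gamma_index(ref, test, dx, dose_tol=0.02, dist_tol=2.0):
    """Per-point 1-D global gamma: ref, test are doses on the same grid with
    spacing dx [mm]; dose criterion is dose_tol * max(ref), distance
    criterion is dist_tol [mm]. Gamma <= 1 means the point passes."""
    x = np.arange(len(ref)) * dx
    dmax = ref.max()
    gam = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (test - ref[i]) / (dose_tol * dmax)   # dose-difference term
        dr = (x - x[i]) / dist_tol                 # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dr**2).min()
    return gam

ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # toy dose profile
test = np.roll(ref, 1)                                    # 1 mm spatial shift
passing = (gamma_index(ref, test, dx=1.0) <= 1.0).mean()
```

A pure 1 mm shift passes everywhere under 2%/2 mm because the distance term alone accounts for it; the metric fails only where neither a nearby point nor a small dose rescaling can explain the discrepancy.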

  8. GPU-based fast Monte Carlo dose calculation for proton therapy.

    PubMed

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy. PMID

  9. A brief comparison between grid based real space algorithms andspectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities it will be very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide which future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computers and of the simulated physical systems increases, there is a renewed interest in using real space grid methods in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirement, compared with spectral methods, where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N³) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectral methods, weighing their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular-grid finite-difference (FD) method and the finite element (FE) method. These are the methods used most in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon.
The author focuses on density functional theory (DFT), which is the
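The contrast drawn above between real-space grids and spectral representations can be made concrete with the Laplacian, the kinetic-energy operator at the heart of such calculations: the finite-difference form touches only nearest neighbors (cheap to parallelize), while the spectral form multiplies by -k² after a global FFT. A minimal 1-D NumPy sketch:

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(3 * x)                     # test function; exact Laplacian is -9 sin(3x)

# Real-space route: 3-point finite-difference stencil, periodic boundaries.
lap_fd = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / h**2

# Spectral route: global FFT, multiply by -k^2, inverse FFT.
k = np.fft.fftfreq(n, d=h) * 2 * np.pi
lap_fft = np.fft.ifft(-(k**2) * np.fft.fft(f)).real
```

The spectral result is exact to machine precision for this single-mode function, while the stencil carries an O(h²) error; the trade-off is that the FFT requires all-to-all communication, whereas the stencil needs only neighbor exchanges, which is precisely the parallelization argument made above.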

  10. GPU-based fast Monte Carlo dose calculation for proton therapy

    NASA Astrophysics Data System (ADS)

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.

  11. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

    2007-02-01

    On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy of using kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both the planning CT (pCT) and the CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion on the pCT and CBCT scanners. The doses computed from the four sets of CT images (pCT and CBCT, with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike in the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle this problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possesses the geometric information of the CBCT but with the electron density distribution mapped from the pCT with the help of B-spline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate for the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of the pCT. No

  12. Modeling of an industrial environment: external dose calculations based on Monte Carlo simulations of photon transport.

    PubMed

    Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz

    2004-02-01

    External gamma exposures from radionuclides deposited on surfaces usually make the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective dose, and of the averted collective dose, due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed using the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper, the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 × 10⁻⁴ pGy per γ m⁻². The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute the largest part (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio is increasingly shifted in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y

  13. SDT: A Virus Classification Tool Based on Pairwise Sequence Alignment and Identity Calculation

    PubMed Central

    Muhire, Brejnev Muhizi; Varsani, Arvind; Martin, Darren Patrick

    2014-01-01

    The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms). PMID:25259891
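The pairwise-identity measure at the heart of such classification can be sketched as below. This is a generic illustration over a single pairwise alignment, not SDT's actual scoring scheme; the toy sequences are invented.

```python
# Hedged sketch: percent identity over one pairwise alignment, ignoring
# columns where both sequences have a gap. How gap columns are counted is
# exactly the methodological choice the abstract flags as a source of
# inconsistency between tools.
def pairwise_identity(a, b):
    """a, b: equal-length aligned sequences, '-' denotes a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(x == y for x, y in pairs)
    return matches / len(pairs)

ident = pairwise_identity("ACG-TACGT", "ACGCTAC-T")   # 7 matches over 9 columns
```

Computing identities from independent pairwise alignments, as here, avoids the score deflation that the abstract describes for identities read off a single multiple sequence alignment.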

  14. Novel Anthropometry-Based Calculation of the Body Heat Capacity in the Korean Population.

    PubMed

    Pham, Duong Duc; Lee, Jeong Hoon; Lee, Young Boum; Park, Eun Seok; Kim, Ka Yul; Song, Ji Yeon; Kim, Ji Eun; Leem, Chae Hun

    2015-01-01

Heat capacity (HC) has an important role in the temperature regulation process, particularly in dealing with the heat load. The actual measurement of the body HC is complicated, so HC is generally estimated from body-composition-specific data. This study compared the previously known HC estimating equations and explored how to estimate HC using simple anthropometric indices such as weight and body surface area (BSA) in the Korean population. Six hundred participants were randomly selected from a pool of 902 healthy volunteers aged 20 to 70 years for the training set. The remaining 302 participants were used for the test set. Body composition analysis using multi-frequency bioelectrical impedance analysis was used to assess body components including body fat, water, protein, and mineral mass. Four different HCs were calculated and compared: a weight-based HC (HC_Eq1), two HCs estimated from fat and fat-free mass (HC_Eq2 and HC_Eq3), and an HC calculated from fat, protein, water, and mineral mass (HC_Eq4). HC_Eq1 generally produced a larger HC than the other HC equations and correlated more poorly with them. The HC equations using body composition data were well correlated with each other. When HC estimated with HC_Eq4 was regarded as the standard, interestingly, BSA and weight contributed independently to the variation of HC. The model composed of weight, BSA, and gender was able to predict more than 99% of the variation in HC_Eq4. Validation analysis on the test set showed that the predictive model performed at a highly satisfactory level. In conclusion, our results suggest that gender, BSA, and weight are independent factors for calculating HC. For the first time, a predictive equation based on anthropometry data was developed; this equation could be useful for estimating HC in the general Korean population without body-composition measurement. PMID:26529594
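
    A composition-based HC calculation in the spirit of HC_Eq4 can be sketched as a mass-weighted sum of component specific heats. The specific heat values below are approximate literature figures, not the coefficients used in the paper:

```python
# Body heat capacity from composition: HC = sum(m_i * c_i), in kJ/degC.
# Specific heats are common literature approximations (kJ per kg per degC),
# NOT the paper's fitted values.
SPECIFIC_HEAT = {
    "water":   4.18,
    "fat":     2.30,
    "protein": 1.70,
    "mineral": 0.80,
}

def heat_capacity(masses_kg):
    """masses_kg: dict of component masses (kg), e.g. from bioimpedance analysis."""
    return sum(SPECIFIC_HEAT[k] * m for k, m in masses_kg.items())

hc = heat_capacity({"water": 42.0, "fat": 15.0, "protein": 11.0, "mineral": 3.5})
print(f"HC = {hc:.1f} kJ/degC")
```

    The paper's contribution is to bypass this composition step entirely, regressing the same quantity on weight, BSA, and gender alone.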

  15. Novel Anthropometry-Based Calculation of the Body Heat Capacity in the Korean Population

    PubMed Central

    Pham, Duong Duc; Lee, Jeong Hoon; Lee, Young Boum; Park, Eun Seok; Kim, Ka Yul; Song, Ji Yeon; Kim, Ji Eun; Leem, Chae Hun

    2015-01-01

Heat capacity (HC) has an important role in the temperature regulation process, particularly in dealing with the heat load. The actual measurement of the body HC is complicated, so HC is generally estimated from body-composition-specific data. This study compared the previously known HC estimating equations and explored how to estimate HC using simple anthropometric indices such as weight and body surface area (BSA) in the Korean population. Six hundred participants were randomly selected from a pool of 902 healthy volunteers aged 20 to 70 years for the training set. The remaining 302 participants were used for the test set. Body composition analysis using multi-frequency bioelectrical impedance analysis was used to assess body components including body fat, water, protein, and mineral mass. Four different HCs were calculated and compared: a weight-based HC (HC_Eq1), two HCs estimated from fat and fat-free mass (HC_Eq2 and HC_Eq3), and an HC calculated from fat, protein, water, and mineral mass (HC_Eq4). HC_Eq1 generally produced a larger HC than the other HC equations and correlated more poorly with them. The HC equations using body composition data were well correlated with each other. When HC estimated with HC_Eq4 was regarded as the standard, interestingly, BSA and weight contributed independently to the variation of HC. The model composed of weight, BSA, and gender was able to predict more than 99% of the variation in HC_Eq4. Validation analysis on the test set showed that the predictive model performed at a highly satisfactory level. In conclusion, our results suggest that gender, BSA, and weight are independent factors for calculating HC. For the first time, a predictive equation based on anthropometry data was developed; this equation could be useful for estimating HC in the general Korean population without body-composition measurement. PMID:26529594

  16. Implementation of a Web-Based Spatial Carbon Calculator for Latin America and the Caribbean

    NASA Astrophysics Data System (ADS)

    Degagne, R. S.; Bachelet, D. M.; Grossman, D.; Lundin, M.; Ward, B. C.

    2013-12-01

    A multi-disciplinary team from the Conservation Biology Institute is creating a web-based tool for the InterAmerican Development Bank (IDB) to assess the impact of potential development projects on carbon stocks in Latin America and the Caribbean. Funded by the German Society for International Cooperation (GIZ), this interactive carbon calculator is an integrated component of the IDB Decision Support toolkit which is currently utilized by the IDB's Environmental Safeguards Group. It is deployed on the Data Basin (www.databasin.org) platform and provides a risk screening function to indicate the potential carbon impact of various types of projects, based on a user-delineated development footprint. The tool framework employs the best available geospatial carbon data to quantify above-ground carbon stocks and highlights potential below-ground and soil carbon hotspots in the proposed project area. Results are displayed in the web mapping interface, as well as summarized in PDF documents generated by the tool.

  17. Electronic structures of halogen-doped Cu2O based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Zhao, Zong-Yan; Yi, Juan; Zhou, Da-Cheng

    2014-01-01

To construct the p-n homojunction needed for Cu2O-based thin-film solar cells with improved conversion efficiency, synthesizing n-type Cu2O with high conductivity is crucial and remains a challenge. In the present work, the effects of halogen doping on the electronic structure of Cu2O have been investigated by density functional theory calculations. Halogen dopants form donor levels below the bottom of the conduction band through gaining or losing electrons, suggesting that halogen doping could give Cu2O n-type conductivity. The lattice distortion, the impurity formation energy, and the position and band width of the donor level of Cu2O1-xXx (X = F, Cl, Br, I) increase with the halogen atomic number. Based on the calculated results, chlorine is an effective n-type dopant for Cu2O, owing to its lower impurity formation energy and suitable donor level.
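
    The impurity-formation-energy comparison the abstract relies on follows the standard defect-thermodynamics expression E_form = E(doped) - E(host) + mu(removed atom) - mu(added atom). A sketch with placeholder total energies and chemical potentials, not the paper's DFT values:

```python
# Formation energy of a substitutional dopant X on an O site:
#   E_form = E(doped cell) - E(host cell) + mu_O - mu_X
# All energies in eV; the numbers below are illustrative placeholders,
# NOT the paper's calculated values.
def formation_energy(e_doped, e_host, mu_removed, mu_added):
    return e_doped - e_host + mu_removed - mu_added

e_f = formation_energy(e_doped=-1202.7, e_host=-1205.1,
                       mu_removed=-4.9, mu_added=-1.8)
print(f"E_form = {e_f:.2f} eV")
```

    A lower (more negative) E_form means the dopant incorporates more readily, which is the basis for the chlorine recommendation above.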

  18. Three dimensional gait analysis using wearable acceleration and gyro sensors based on quaternion calculations.

    PubMed

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
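
    The quaternion-based orientation update described above can be sketched as a generic strapdown integration, assuming constant angular velocity over each sample interval; this is an illustration of the technique, not the authors' exact algorithm:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation q by body-frame angular velocity omega (rad/s) over dt."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    q = quat_mult(q, dq)             # right-multiply: omega is in the body frame
    return q / np.linalg.norm(q)     # re-normalise to suppress numerical drift

# Rotate from identity at 90 deg/s about z for 1 s -> 90 degree yaw
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, omega=np.array([0.0, 0.0, np.deg2rad(90)]), dt=0.01)
print(q)  # approx [0.7071, 0, 0, 0.7071]
```

    In the paper's pipeline the resulting sensor-unit quaternions are further mapped to body-segment orientations through calibration rotation matrices.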

  19. Calculations of helium separation via uniform pores of stanene-based membranes

    PubMed Central

    Gao, Guoping; Jiao, Yan; Jiao, Yalong; Ma, Fengxian; Kou, Liangzhi

    2015-01-01

    Summary The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized, two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (i.e., the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as a superior membrane over traditionally used, porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by application of strain to optimize the He purification properties by taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting new interesting materials for helium separation for future experimental validation. PMID:26885459
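
    The diffusion-versus-selectivity trade-off mentioned above is commonly quantified with an Arrhenius ratio of permeation rates through the pore. A sketch with illustrative barrier heights, not the paper's DFT values for stanene:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def selectivity(e_barrier_he, e_barrier_other, temperature=298.0):
    """Arrhenius-type diffusion selectivity S = exp((E_other - E_He) / kT).

    Barrier heights (eV) passed in are illustrative placeholders, not
    calculated values for 2D Sn membranes.
    """
    return math.exp((e_barrier_other - e_barrier_he) / (K_B * temperature))

# Hypothetical barriers: 0.20 eV for He vs 0.50 eV for a larger noble gas.
print(f"selectivity = {selectivity(0.20, 0.50):.2e}")
```

    Strain engineering, as proposed in the abstract, shifts both barriers; a small change in the barrier difference changes the selectivity exponentially.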

  20. Hydration in discrete water. A mean field, cellular automata based approach to calculating hydration free energies.

    PubMed

    Setny, Piotr; Zacharias, Martin

    2010-07-01

A simple, semiheuristic solvation model based on a discrete, BCC grid of solvent cells has been presented. The model utilizes a mean field approach for the calculation of solute-solvent and solvent-solvent interaction energies and a cellular automata based algorithm for the prediction of solvent distribution in the presence of solute. The construction of the effective Hamiltonian for a solvent cell provides an explicit coupling between orientation-dependent water-solute electrostatic interactions and water-water hydrogen bonding. The water-solute dispersion interaction is also explicitly taken into account. The model does not depend on any arbitrary definition of the solute-solvent interface, nor does it use a microscopic surface tension for the calculation of nonpolar contributions to the hydration free energies. It is demonstrated that the model provides satisfactory predictions of hydration free energies for drug-like molecules and is able to reproduce the distribution of buried water molecules within protein structures. The model is computationally efficient and is applicable to arbitrary molecules described by an atomistic force field. PMID:20552986

  1. Three Dimensional Gait Analysis Using Wearable Acceleration and Gyro Sensors Based on Quaternion Calculations

    PubMed Central

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128

  2. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
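
    Normalizing measured burn rates to a common chamber pressure, as described above, is typically done with the Saint-Robert (Vieille) law r = a * P^n. The sketch below assumes a generic pressure exponent; neither the exponent nor the rates are the study's propellant data:

```python
# Saint-Robert / Vieille burn-rate law: r = a * P**n.
# Normalising each measured rate to a common reference pressure removes
# motor-to-motor chamber-pressure differences before comparison.
def normalize_burn_rate(r_measured, p_measured, p_ref, n=0.35):
    """n is an assumed propellant pressure exponent, not a value from the study."""
    return r_measured * (p_ref / p_measured) ** n

# Hypothetical: 0.40 in/s measured at 6.2 MPa, referenced to 6.9 MPa.
r_norm = normalize_burn_rate(r_measured=0.40, p_measured=6.2, p_ref=6.9)
print(f"{r_norm:.4f} in/s at reference pressure")
```

    With all rates on a common pressure basis, within-mix dispersion can be attributed to process variables rather than to nozzle or pressure differences.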

  3. GPU Based Fast Free-Wake Calculations For Multiple Horizontal Axis Wind Turbine Rotors

    NASA Astrophysics Data System (ADS)

    Türkal, M.; Novikov, Y.; Üşenmez, S.; Sezer-Uzol, N.; Uzol, O.

    2014-06-01

Unsteady free-wake solutions of wind turbine flow fields involve computationally intensive interaction calculations, which generally limit the total amount of simulation time or the number of turbines that can be simulated by the method. This problem, however, can be addressed with a high level of parallelization. Especially when exploited with a GPU (Graphics Processing Unit), this property can provide a significant computational speed-up, rendering the most intensive engineering problems realizable in hours of computation time. This paper presents the results of the simulation of the flow field for the NREL Phase VI turbine using a GPU-based in-house free-wake panel method code. Computational parallelism involved in the free-wake methodology is exploited using a GPU, allowing thousands of similar operations to be performed simultaneously. The results are compared to experimental data as well as to those obtained by running a corresponding CPU-based code. Results show that the GPU-based code is capable of producing wake and load predictions similar to the CPU-based code in a substantially reduced amount of time. This capability could allow free-wake based analysis to be used in design and optimization studies of wind farms, in the prediction of multiple turbine flow fields, and in the investigation of the effects of different vortex core models, core expansion and stretching models on turbine rotor interaction problems in multiple turbine wake flow fields.
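
    The interaction calculations that dominate free-wake methods are pairwise Biot-Savart evaluations between vortex filaments and field points, which is why they map so well onto thousands of GPU threads. A minimal CPU-side sketch (the cut-off core regularisation and all values are illustrative, not the paper's code):

```python
import numpy as np

def induced_velocity(p, a, b, gamma, core_radius=0.05):
    """Velocity induced at point p by a straight vortex filament a->b of
    circulation gamma (Biot-Savart with a simple cut-off core term)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.linalg.norm(cross) ** 2 + (core_radius * np.linalg.norm(b - a)) ** 2
    dot = np.dot(b - a, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * cross / denom * dot

# Sanity check: a point at unit distance from a long filament along x
# should see speed close to gamma / (2*pi).
v = induced_velocity(np.array([0.0, 1.0, 0.0]),
                     np.array([-1000.0, 0.0, 0.0]),
                     np.array([1000.0, 0.0, 0.0]),
                     gamma=1.0, core_radius=0.0)
print(v)  # approx [0, 0, 1/(2*pi)]
```

    In a free-wake solver this kernel is evaluated for every filament-point pair each time step, an O(N^2) workload of identical independent operations, which is precisely the structure a GPU exploits.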

  4. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable for GPU-based MC dose engines to a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  5. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable for GPU-based MC dose engines to a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  6. Model-based dose calculations for ¹²⁵I lung brachytherapy

    SciTech Connect

    Sutherland, J. G. H.; Furutani, K. M.; Garces, Y. I.; Thomson, R. M.

    2012-07-15

Purpose: Model-based dose calculations (MBDCs) are performed using patient computed tomography (CT) data for patients treated with intraoperative ¹²⁵I lung brachytherapy at the Mayo Clinic Rochester. Various metallic artifact correction and tissue assignment schemes are considered and their effects on dose distributions are studied. Dose distributions are compared to those calculated under TG-43 assumptions. Methods: Dose distributions for six patients are calculated using phantoms derived from patient CT data and the EGSnrc user-code BrachyDose. ¹²⁵I (GE Healthcare/Oncura model 6711) seeds are fully modeled. Four metallic artifact correction schemes are applied to the CT data phantoms: (1) no correction, (2) a filtered back-projection on a modified virtual sinogram, (3) the reassignment of CT numbers above a threshold in the vicinity of the seeds, and (4) a combination of (2) and (3). Tissue assignment is based on voxel CT number and mass density is assigned using a CT number to mass density calibration. Three tissue assignment schemes with varying levels of detail (20, 11, and 5 tissues) are applied to metallic artifact corrected phantoms. Simulations are also performed under TG-43 assumptions, i.e., seeds in homogeneous water with no interseed attenuation. Results: Significant dose differences (up to 40% for D₉₀) are observed between uncorrected and metallic artifact corrected phantoms. For phantoms created with metallic artifact correction schemes (3) and (4), dose volume metrics are generally in good agreement (less than 2% differences for all patients) although there are significant local dose differences. The application of the three tissue assignment schemes results in differences of up to 8% for D₉₀; these differences vary between patients. 
Significant dose differences are seen between fully modeled and TG-43 calculations, with TG-43 underestimating the dose (up to 36% in D₉₀) for larger volumes containing higher proportions of
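
    The CT number to mass density calibration mentioned in the Methods is typically a piecewise-linear lookup from Hounsfield units to density. A sketch with illustrative calibration points, not the clinic's measured curve:

```python
import numpy as np

# Piecewise-linear CT-number-to-mass-density calibration.
# Calibration points below are illustrative, not measured clinical data.
CT_NUMBERS = np.array([-1000, -700, 0, 300, 1200])      # Hounsfield units
DENSITIES  = np.array([0.001, 0.30, 1.00, 1.15, 1.80])  # g/cm^3

def ct_to_density(hu):
    """Linearly interpolate mass density for a voxel CT number."""
    return np.interp(hu, CT_NUMBERS, DENSITIES)

print(f"{ct_to_density(-350):.3f}")  # lung-like voxel
print(f"{ct_to_density(150):.3f}")   # soft-tissue-to-bone transition
```

    Metallic seed artifacts matter precisely because they corrupt the CT numbers feeding this lookup, so uncorrected voxels near the seeds are assigned wrong densities and tissues.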

  7. Spinel compounds as multivalent battery cathodes: A systematic evaluation based on ab initio calculations

    DOE PAGES Beta

    Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.

    2014-12-16

In this study, batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity, thermodynamic stability of charged and discharged states, as well as the intercalating ion mobility, and use these properties to evaluate promising directions. Our calculations indicate that the Mg and Ca Mn2O4 spinel phases are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.
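
    The insertion voltages discussed above follow from DFT total-energy differences via the standard average-voltage expression. A sketch with placeholder energies, not the paper's calculated values:

```python
# Average intercalation voltage between two compositions of a host:
#   V = -[E(discharged) - E(charged) - n * E(metal)] / (n * z)
# where n ions of charge z are inserted and E(metal) is the energy per
# atom of the ion's metallic reference. Energies (eV) are placeholders,
# NOT the paper's DFT values.
def average_voltage(e_discharged, e_charged, e_metal, n_ions, z):
    return -(e_discharged - e_charged - n_ions * e_metal) / (n_ions * z)

# Hypothetical Mg insertion into a spinel host (z = 2):
v = average_voltage(e_discharged=-105.8, e_charged=-98.3,
                    e_metal=-1.5, n_ions=1, z=2)
print(f"V = {v:.2f} V")
```

    The divalent (or trivalent) charge z in the denominator is also why multivalent cathodes tend toward lower voltages than Li analogues for a similar energy change, as the abstract notes.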

  8. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    SciTech Connect

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    2012-05-15

Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  9. GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B

    2011-11-21

    Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM. PMID:22016026
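
    The divergence-avoiding scheme described above amounts to grouping particles so that simultaneously transported GPU threads follow similar physics. A CPU-side sketch of the batching idea (not the actual gDPM kernels):

```python
from collections import defaultdict

# Thread-divergence mitigation sketch: photons and electrons follow
# different physics, so they are transported in separate batches; sorting
# by energy keeps adjacent "threads" on similar execution paths.
def batch_by_type(particles):
    """particles: list of (ptype, energy) tuples -> per-type batches,
    each sorted by energy."""
    batches = defaultdict(list)
    for ptype, energy in particles:
        batches[ptype].append(energy)
    return {t: sorted(es) for t, es in batches.items()}

queue = [("photon", 6.0), ("electron", 1.2), ("photon", 0.5), ("electron", 3.1)]
print(batch_by_type(queue))
# {'photon': [0.5, 6.0], 'electron': [1.2, 3.1]}
```

    On a real GPU each batch would be dispatched to its own kernel (photon transport or electron transport), so warps never mix the two physics paths.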

  10. Acceptance and commissioning of a treatment planning system based on Monte Carlo calculations.

    PubMed

    Lopez-Tarjuelo, J; Garcia-Molla, R; Juan-Senabre, X J; Quiros-Higueras, J D; Santos-Serra, A; de Marco-Blancas, N; Calzada-Feliu, S

    2014-04-01

The Monaco Treatment Planning System (TPS), based on a virtual energy fluence model of the photon beam head components of the linac and a dose computation engine using the Monte Carlo (MC) algorithm X-Ray Voxel MC (XVMC), has been tested before being put into clinical use. An Elekta Synergy with 6 MV was characterized using routine equipment. After the machine's model was installed, a set of functionality, geometric, dosimetric and data transfer tests was performed. The dosimetric tests included dose calculations in water, heterogeneous phantoms and Intensity Modulated Radiation Therapy (IMRT) verifications. Data transfer tests were run for every imaging device, TPS and the electronic medical record linked to Monaco. Functionality and geometric tests ran properly. Dose calculations in water were in accordance with measurements: in 95% of cases, differences were at most 1.9%. Dose calculations in heterogeneous media showed the expected results found in the literature. IMRT verification with an ionization chamber led to dose differences lower than 2.5% for points inside a standard gradient. When a 2-D array was used, all the fields passed the γ (3%, 3 mm) test with a percentage of passing points above 90%, the majority of fields falling between 95% and 100%. Data transfer caused problems that had to be solved by changing our workflow. In general, the tests led to satisfactory results. Monaco performance complied with published international recommendations and scored highly in the dosimetric tests. However, the problems detected when the TPS was put to work together with our current equipment showed that this kind of product must be completely commissioned, without neglecting data workflow, before treating the first patient. PMID:23862746

  11. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images captured during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but incur large errors, especially at low SNR, while Gaussian fitting achieves high positioning accuracy at the price of computational complexity. Based on an analysis of the energy distribution in star images, a localization method for star target centroids using a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the candidate centroid area and, within that narrowed area, applies a fixed number of interpolations to subdivide the pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid: assuming the current pixel is the star centroid, it computes the difference between the energy sums on the two symmetric sides of that pixel along a given direction (here the transverse and longitudinal directions) over an equal step length (chosen according to conditions; this paper uses a step length of 9), and takes the position where the minimum difference appears as the centroid coordinate in that direction; the other direction is treated in the same way. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it remains effective at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparison with the known positions of the stars shows that the multi-step minimum energy difference method achieves a better result.
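    The symmetric energy-difference search at the heart of the method can be sketched in a few lines. The image, window handling, and synthetic star below are illustrative assumptions, not the authors' implementation; only the step length of 9 comes from the abstract.

```python
import numpy as np

def symmetric_energy_centroid(img, step=9):
    """Estimate a star centroid as the pixel that minimises the difference
    between summed energies in symmetric windows (sketch of the idea)."""
    rows = img.sum(axis=1)   # energy collapsed onto the vertical axis
    cols = img.sum(axis=0)   # energy collapsed onto the horizontal axis

    def best_index(profile):
        best, best_diff = None, np.inf
        for i in range(step, len(profile) - step):
            left = profile[i - step:i].sum()        # `step` pixels before i
            right = profile[i + 1:i + 1 + step].sum()  # `step` pixels after i
            diff = abs(left - right)
            if diff < best_diff:
                best, best_diff = i, diff
        return best

    return best_index(rows), best_index(cols)

# Synthetic star: a symmetric Gaussian centred at row 20, column 30.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 20) ** 2 + (xx - 30) ** 2) / (2 * 2.0 ** 2))
print(symmetric_energy_centroid(img))   # → (20, 30)
```

The paper's method additionally interpolates sub-pixels inside the narrowed area, which is how it reaches sub-pixel (0.001 pixel) accuracy; the sketch stops at whole-pixel resolution.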

  12. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital

    PubMed Central

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2016-01-01

    Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. Objective: This study aimed to compare the ABC method with the TCS in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data from the accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of activities were allocated to cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with that obtained from the TCS. Results: Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, showing 50.34 USD more unit cost by the ABC method. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
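    The two-phase allocation the abstract describes can be sketched generically. The cost centres, cost factors, and driver volumes below are invented for illustration; they are not the study's data.

```python
# Two-phase activity-based costing sketch (hypothetical numbers).
# Phase 1: assign cost-centre totals to activities via cost factors.
# Phase 2: divide activity costs over cost objects via cost drivers.

cost_centres = {"nursing": 100_000.0, "radiology": 50_000.0}

# Share of each cost centre's total consumed by each activity.
activity_factors = {
    "bed_day":   {"nursing": 0.8, "radiology": 0.1},
    "xray_exam": {"nursing": 0.2, "radiology": 0.9},
}

# Phase 1: total cost of each activity.
activity_costs = {
    act: sum(cost_centres[cc] * share for cc, share in factors.items())
    for act, factors in activity_factors.items()
}

# Phase 2: unit cost = activity cost / driver volume.
driver_volumes = {"bed_day": 8_000, "xray_exam": 1_500}
unit_costs = {act: activity_costs[act] / driver_volumes[act]
              for act in activity_costs}

print(unit_costs)   # bed_day: 85000/8000 = 10.625 per day
```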

  13. Experimentation and Theoretic Calculation of a BODIPY Sensor Based on Photoinduced Electron Transfer for Ions Detection

    NASA Astrophysics Data System (ADS)

    Lu, Hua; Zhang, Shushu; Liu, Hanzhuang; Wang, Yanwei; Shen, Zhen; Liu, Chungen; You, Xiaozeng

    2009-12-01

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH4. The sensing properties of compound 1 toward various metal ions were investigated by fluorometric titration in methanol; the probe shows a highly selective fluorescent turn-on response in the presence of Hg2+ over other metal ions such as Li+, Na+, K+, Ca2+, Mg2+, Pb2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+, Cd2+, Ag+, and Mn2+. A computational study was carried out to investigate why compound 1 gives different fluorescent signals for Hg2+ and the other ions. Theoretical calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn2+ or Cd2+ ion significantly lowers both the HOMO and LUMO energy levels of the receptor, which inhibits the reductive PET process; instead, an oxidative PET from the excited-state fluorophore to the receptor occurs, which also quenches the fluorescence. For the 1-Hg2+ complex, however, both the reductive and oxidative PET pathways are prohibited, so strong fluorescence emission from the fluorophore is observed experimentally. The agreement between the experimental results and the theoretical calculations suggests that our calculation method can serve as guidance for the design of new chemosensors for other metal ions.

  14. Critical comparison of electrode models in density functional theory based quantum transport calculations

    NASA Astrophysics Data System (ADS)

    Jacob, D.; Palacios, J. J.

    2011-01-01

    We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of the implementation in both cases is given. From a systematic study of nanocontacts made of representative metallic elements, we conclude that parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments in which the precise atomic structure of the electrodes is not relevant or not precisely defined. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes of large enough cross-section, while adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of extending the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
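    As a toy illustration of the kind of quantity such electrode models feed into, the Landauer transmission of a short tight-binding chain between idealized wide-band electrodes can be computed directly. The Hamiltonian, couplings, and wide-band approximation below are generic assumptions, unrelated to ALACANT's parametrized Bethe lattices.

```python
import numpy as np

def transmission(E, H, gamma_L, gamma_R, eta=1e-9):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a
    chain coupled to wide-band electrodes at its first and last sites."""
    n = H.shape[0]
    Gamma_L = np.zeros((n, n)); Gamma_L[0, 0] = gamma_L
    Gamma_R = np.zeros((n, n)); Gamma_R[-1, -1] = gamma_R
    Sigma = -0.5j * (Gamma_L + Gamma_R)        # wide-band self-energies
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sigma)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# Hypothetical 3-site tight-binding chain, hopping t = 1.
H = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  0.0, -1.0],
              [ 0.0, -1.0,  0.0]])
print(transmission(0.0, H, 1.0, 1.0))   # ≈ 1.0: resonant transmission at E = 0
```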

  15. A modified W-W interatomic potential based on ab initio calculations

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhou, Y. L.; Li, M.; Hou, Q.

    2014-01-01

    In this paper we develop a Finnis-Sinclair-type interatomic potential for W-W interactions that is based on ab initio calculations. The modified potential reproduces the correct formation energies of self-interstitial atom (SIA) defects in tungsten, offering a significant improvement over the Ackland-Thetford tungsten potential. Using the modified potential, the thermal expansion is calculated over the temperature range from 0 to 3500 K. The results are in reasonable agreement with the experimental data, thus overcoming the spurious negative thermal expansion produced by the Derlet-Nguyen-Manh-Dudarev tungsten potential. The W-W potential presented here is also applied to a detailed study of the diffusion of SIAs in tungsten. We reveal that the initial SIA initiates a sequence of tungsten atom displacements and replacements in the <1 1 1> direction. An Arrhenius fit to the diffusion data at temperatures below 550 K indicates a migration energy of 0.022 eV, which is in reasonable agreement with the experimental data.
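    An Arrhenius fit of the kind used to extract the migration energy can be reproduced on synthetic data: with D = D0 exp(-Em / kB T), the slope of ln D against 1/T is -Em / kB. The prefactor D0 and temperature grid below are arbitrary placeholders; only the 0.022 eV value comes from the abstract.

```python
import numpy as np

K_B = 8.617333e-5   # Boltzmann constant in eV/K

# Synthetic diffusion coefficients generated with Em = 0.022 eV.
Em_true, D0 = 0.022, 1.0e-7
T = np.array([100.0, 200.0, 300.0, 400.0, 500.0])     # K, below ~550 K
D = D0 * np.exp(-Em_true / (K_B * T))

# Arrhenius fit: ln D = ln D0 - Em / (k_B T).
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
print(-slope * K_B)   # recovers the 0.022 eV migration energy
```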

  16. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Kramer, Richard

    2011-08-01

    Health risks attributable to exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to the organs and tissues of interest. The FASH2 (Female Adult meSH) and MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife, Brazil, based on polygon mesh surfaces, using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations, the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  17. Adaptation of GEANT4 to Monte Carlo dose calculations based on CT data.

    PubMed

    Jiang, H; Paganetti, H

    2004-10-01

    The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. However, as a general-purpose Monte Carlo code, not all of its functions were specifically designed and fully optimized for applications in radiation therapy. One of the primary issues is computational efficiency, which is especially critical when patient CT data have to be imported into the simulation model. In this paper we summarize the relevant aspects of the GEANT4 tracking and geometry algorithms and introduce our work on using the code to conduct dose calculations based on CT data. The emphasis is on modifications of the GEANT4 source code to meet the requirements for fast dose calculations. The major features include a quick voxel search algorithm, fast volume optimization, and the dynamic assignment of material density. These features are ready to be used for tracking the primary types of particles employed in radiation therapy, such as photons, electrons, and heavy charged particles. Recalculation of a proton therapy treatment plan generated by a commercial treatment planning program for a paranasal sinus case is presented as an example. PMID:15543788

  18. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
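    The client-side reconstruction reduces to summing the deposited Gaussian hills over a grid of collective-variable values. This 1-D numpy sketch assumes hypothetical hill parameters rather than a real HILLS file, whose columns would supply the centers, widths, and heights.

```python
import numpy as np

def bias_potential(grid, centers, sigmas, heights):
    """Sum of deposited Gaussian hills on a 1-D collective-variable grid,
    as a viewer would reconstruct the metadynamics bias (1-D sketch)."""
    diff = grid[:, None] - centers[None, :]
    return (heights * np.exp(-diff ** 2 / (2.0 * sigmas ** 2))).sum(axis=1)

# Hypothetical hills (a HILLS file would provide these arrays).
centers = np.array([0.0, 0.5, 1.0])
sigmas  = np.array([0.2, 0.2, 0.2])
heights = np.array([1.2, 1.0, 0.8])   # kJ/mol

s = np.linspace(-1.0, 2.0, 301)
V = bias_potential(s, centers, sigmas, heights)
F = -V   # free energy estimate, up to an additive constant
```

In well-tempered variants the heights would additionally be rescaled; this sketch covers only plain metadynamics.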

  19. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  20. A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.

    2008-07-01

    In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.
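    The lateral spreading step can be illustrated in one dimension: the depth-directed energy on a plane is convolved with a normalised exponential scatter kernel. The field width, kernel slope k, grid, and units below are invented for the sketch and are not the algorithm's commissioned parameters.

```python
import numpy as np

def lateral_kernel(x, k):
    """Exponential lateral scatter function, normalised to unit area."""
    return 0.5 * k * np.exp(-k * np.abs(x))

# Hypothetical setup: a 4 cm wide uniform plane-energy profile is spread
# laterally by convolution with the exponential kernel.
x = np.linspace(-10.0, 10.0, 401)               # lateral position, cm
dx = x[1] - x[0]
plane_energy = np.where(np.abs(x) <= 2.0, 1.0, 0.0)
dose_profile = np.convolve(plane_energy, lateral_kernel(x, k=1.5),
                           mode="same") * dx

# On the central axis the analytic result is 1 - exp(-2k) ≈ 0.95 here.
print(dose_profile.max())
```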

  1. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    SciTech Connect

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-03-24

    This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M{sub o}), the moment magnitude (M{sub W}), the rupture duration (T{sub o}) and the focal mechanism. These parameters distinguish tsunamigenic earthquakes from tsunami earthquakes. We calculate them by teleseismic signal processing of the initial P-wave phase, band-pass filtered between 0.001 Hz and 5 Hz, using 84 broadband seismometers at epicentral distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with M{sub W}=7.8 and the 17 July 2006 Pangandaran earthquake with M{sub W}=7.7 meet the criteria for tsunami earthquakes, with a ratio of about Θ=−6.1, long rupture durations To>100 s and high tsunamis H>7 m. The 2 September 2009 Tasikmalaya earthquake, with M{sub W}=7.2, Θ=−5.1 and To=27 s, is characterized as a small tsunamigenic earthquake.
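    The discriminant Θ is simply the base-10 logarithm of the energy-to-moment ratio, and Mw follows from the seismic moment on the standard Kanamori scale. The numbers below are constructed examples, not the measured values for the events in the abstract.

```python
import math

def theta(E, M0):
    """Energy-to-moment discriminant Theta = log10(E / M0)."""
    return math.log10(E / M0)

def moment_magnitude(M0):
    """Moment magnitude from seismic moment M0 in N*m (Kanamori scale)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

# Hypothetical numbers: a slow "tsunami earthquake" radiates unusually
# little energy for its moment, pushing Theta down toward -6.
M0 = 5.0e20                 # N*m, gives Mw close to 7.7
E = M0 * 10.0 ** -6.1       # constructed so Theta = -6.1
print(moment_magnitude(M0)) # ≈ 7.73
print(theta(E, M0))         # ≈ -6.1
```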

  2. Direct calculation of correlation length based on quasi-cumulant method

    NASA Astrophysics Data System (ADS)

    Fukushima, Noboru

    2014-03-01

    We formulate a method of directly obtaining a correlation length, as a high-temperature series, without the full calculation of correlation functions. The method is based on the quasi-cumulant method, formulated by the author in J. Stat. Phys. 111, 1049-1090 (2003) as a complementary method to the high-temperature series expansion, originally for an SU(n) Heisenberg model but applicable to general spin models according to our recent reformulation. A correlation function divided by its lowest-order nonzero contribution has properties very similar to a generating function of some kind of moments, which we call quasi-moments. Their corresponding quasi-cumulants can also be derived, whose generating function is related to the correlation length. In addition, applications to other numerical methods such as the quantum Monte Carlo method are also discussed. JSPS KAKENHI Grant Number 25914008.

  3. Langevin spin dynamics based on ab initio calculations: numerical schemes and applications.

    PubMed

    Rózsa, L; Udvardi, L; Szunyogh, L

    2014-05-28

    A method is proposed to study the finite-temperature behaviour of small magnetic clusters based on solving the stochastic Landau-Lifshitz-Gilbert equations, where the effective magnetic field is calculated directly during the solution of the dynamical equations from first principles instead of relying on an effective spin Hamiltonian. Different numerical solvers are discussed in the case of a one-dimensional Heisenberg chain with nearest-neighbour interactions. We performed detailed investigations for a monatomic chain of ten Co atoms on top of a Au(0 0 1) surface. We found a spiral-like ground state of the spins due to Dzyaloshinsky-Moriya interactions, while the finite-temperature magnetic behaviour of the system was well described by a nearest-neighbour Heisenberg model including easy-axis anisotropy. PMID:24806308

  4. Ionic liquid based lithium battery electrolytes: charge carriers and interactions derived by density functional theory calculations.

    PubMed

    Angenendt, Knut; Johansson, Patrik

    2011-06-23

    The solvation of lithium salts in ionic liquids (ILs) leads to the creation of lithium-ion-carrying species quite different from those found in traditional nonaqueous lithium battery electrolytes. The most striking differences are that these species are composed only of ions and are, in general, negatively charged. In many IL-based electrolytes the dominant species are triplets, and the charge, stability, and size of the triplets have a large impact on the total ionic conductivity, the lithium ion mobility, and also the lithium ion delivery at the electrode. As an inherent advantage, the triplets can be altered by selecting lithium salts and ionic liquids with different anions. Thus, within certain limits, the lithium-ion-carrying species can even be tailored toward distinct properties important for battery application. Here, we show by DFT calculations that the charge-carrying species resulting from combinations of ionic liquids and lithium salts, and also some resulting electrolyte properties, can be predicted. PMID:21591707

  5. Prediction of topological insulators in supercubane-like materials based on first-principles calculations.

    PubMed

    Wang, Guo-Xiang; Dong, Shuai; Hou, Jing-Min

    2016-03-31

    The lattice structures and topological properties of [Formula: see text] (X  =  C, Si, Ge, Sn, Pb) under hydrostatic strain have been investigated based on first-principles calculations. Among these materials, [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] are dynamically stable, with negative formation energies and no imaginary phonon frequencies. We find that hydrostatic strain cannot induce a quantum phase transition between topologically trivial and nontrivial states for either [Formula: see text] or [Formula: see text], while for [Formula: see text] and [Formula: see text] tensile strain can play a unique role in tuning the band topology, leading to a topologically nontrivial state with Z 2 invariants (1;111). Although the topological transition occurs above the Fermi level, the Fermi level can be tuned by applying an electrostatic gating voltage. PMID:26932939

  6. Chemical evolution of dwarf spheroidal galaxies based on model calculations incorporating observed star formation histories

    NASA Astrophysics Data System (ADS)

    Homma, H.; Murayama, T.

    We investigate a chemical evolution model that simultaneously explains the chemical compositions and the star formation histories (SFHs) of dwarf spheroidal galaxies (dSphs). Recently, wide-field imaging photometry and multi-object spectroscopy have provided a large amount of data. We therefore develop a chemical evolution model based on the SFHs given by photometric observations and estimate metallicity distribution functions (MDFs) for comparison with spectroscopic observations. With this new model we calculate the chemical evolution of 4 dSphs (Fornax, Sculptor, Leo II, Sextans) and find that a delay time of 0.1 Gyr for Type Ia SNe is too short to explain the observed [alpha/Fe] vs. [Fe/H] diagrams.

  7. Fast GPU-based calculations in few-body quantum scattering

    NASA Astrophysics Data System (ADS)

    Pomerantsev, V. N.; Kukulin, V. I.; Rubtsova, O. A.; Sakhiev, S. K.

    2016-07-01

    A fundamentally new approach to solving few-particle (many-dimensional) quantum scattering problems is described. The approach is based on a complete discretization of the few-particle continuum and the use of massively parallel GPU computation of the integral kernels of the scattering equations. The discretization of the continuous spectrum of the few-particle Hamiltonian is realized by projecting all scattering operators and wave functions onto a stationary wave-packet basis. This projection procedure replaces singular multidimensional integral equations with linear matrix equations having finite matrix elements. Different aspects of the employment of multithreaded GPU computing for the fast calculation of the matrix kernel of the equation are studied in detail. As a result, a fully realistic three-body scattering problem above the break-up threshold is solved on an ordinary desktop PC with a GPU in a rather short computational time.

  8. A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe

    2015-07-01

    The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
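    The "lost increment" failure mode behind the energy-accumulation errors is easy to reproduce outside any MC code: once a float32 accumulator is large, deposits smaller than half the local ulp are rounded away. The numbers below are generic, and the compensated (Kahan) accumulator is shown as one standard mitigation, not a feature of bGPUMCD.

```python
import numpy as np

# At 2048 the spacing between adjacent float32 values is 2**-12 ~ 2.4e-4,
# so adding a 1e-4 "dose deposit" to a float32 accumulator is a no-op:
total32 = np.float32(2048.0)
print(total32 + np.float32(1e-4) == total32)   # → True: the deposit is lost

# The same update in double precision keeps the increment.
print(2048.0 + 1e-4 == 2048.0)                 # → False

def kahan_sum(values):
    """Compensated (Kahan) summation: recovers low-order bits that a
    plain float32 accumulator throws away."""
    s = np.float32(0.0)
    c = np.float32(0.0)   # running compensation for lost low-order bits
    for v in values:
        y = np.float32(v) - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# 100 deposits of 1e-4 on top of 2048: plain float32 addition would lose
# them all; the compensated sum stays within about one ulp of 2048.01.
print(kahan_sum([2048.0] + [1e-4] * 100))
```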

  9. Inertial sensor-based stride parameter calculation from gait sequences in geriatric patients.

    PubMed

    Rampp, Alexander; Barth, Jens; Schülein, Samuel; Gaßmann, Karl-Günter; Klucken, Jochen; Eskofier, Björn M

    2015-04-01

    A detailed and quantitative gait analysis can provide evidence of various gait impairments in elderly people. To provide an objective decision-making basis for gait analysis, simple, easily applied tests analyzing a high number of strides are required. A mobile gait analysis system mounted on shoes can fulfill these requirements. This paper presents a method for computing clinically relevant temporal and spatial gait parameters. To this end, an accelerometer and a gyroscope were positioned laterally below each ankle joint. Temporal gait events were detected by searching for characteristic features in the signals. To calculate stride length, the gravity-compensated accelerometer signal was double integrated, and sensor drift was modeled using a piece-wise defined linear function. The presented method was validated using GAITRite-based gait parameters from 101 patients (average age 82.1 years). Subjects performed a normal walking test with and without a wheeled walker. The parameters stride length and stride time showed correlations of 0.93 and 0.95 between the two systems. The absolute error of stride length was 6.26 cm in the normal walking test. The developed system, like the GAITRite, showed an increased stride length when a four-wheeled walker was used as a walking aid. However, the walking aid interfered with the automated analysis of the GAITRite system, but not with the inertial sensor-based approach. In summary, an algorithm for the calculation of clinically relevant gait parameters derived from inertial sensors is applicable in the diagnostic workup and also during long-term monitoring approaches in the elderly population. PMID:25389237
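    The double integration with drift removal can be sketched on synthetic data. This is a simplified version of the idea (a single linear segment anchored at two zero-velocity stance samples, a constant bias as the drift source); the authors' piece-wise model and signal processing are more elaborate.

```python
import numpy as np

def stride_length(acc, fs, stance_idx):
    """Double-integrate a gravity-compensated forward acceleration over one
    stride, removing integration drift with a straight line anchored at
    the two stance (zero-velocity) samples."""
    dt = 1.0 / fs
    vel = np.cumsum(acc) * dt
    i0, i1 = stance_idx
    # Velocity should vanish at both stance events; subtract the line
    # through the residual velocities found there.
    drift = np.interp(np.arange(len(vel)), [i0, i1], [vel[i0], vel[i1]])
    vel = vel - drift
    pos = np.cumsum(vel) * dt
    return pos[i1] - pos[i0]

# Synthetic 1 s stride sampled at 100 Hz: true distance is 1.4 m, with a
# constant 0.3 m/s^2 sensor bias playing the role of drift.
fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
vel_true = 2.8 * np.sin(np.pi * t) ** 2        # forward velocity, m/s
acc_meas = np.gradient(vel_true, 1.0 / fs) + 0.3

length = stride_length(acc_meas, fs, (0, len(t) - 1))
print(length)   # close to the true 1.4 m despite the bias
```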

  10. A cultural study of a science classroom and graphing calculator-based technology

    NASA Astrophysics Data System (ADS)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  11. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  12. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    SciTech Connect

    Giuseppe Palmiotti

    2015-05-01

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  13. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation.

    PubMed

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state. PMID:26465580

  14. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation

    NASA Astrophysics Data System (ADS)

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state.

  15. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
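The commissioning idea in this abstract reduces to two pieces: the beam dose as a weighted sum of pre-computed phase-space-let (PSL) doses, and a fit of the weights to a measured dose. The following is a minimal Python sketch of that structure; the function names and the plain gradient-descent fit are illustrative assumptions, not gDPM's implementation (the paper uses an augmented Lagrangian with symmetry and smoothness regularization):

```python
def beam_dose(psl_doses, weights):
    """Total beam dose as the weighted sum of pre-computed per-PSL doses.

    psl_doses: list of per-PSL dose distributions (flat lists of voxel doses).
    weights:   one weighting factor per PSL.
    """
    n_vox = len(psl_doses[0])
    total = [0.0] * n_vox
    for dose, w in zip(psl_doses, weights):
        for i, d in enumerate(dose):
            total[i] += w * d
    return total


def commission(psl_doses, measured, n_iter=2000, lr=0.01):
    """Toy least-squares fit of PSL weights to a measured dose.

    Unconstrained gradient descent on ||sum_k w_k * dose_k - measured||^2;
    only illustrates the objective, not the paper's regularized solver.
    """
    w = [1.0] * len(psl_doses)
    for _ in range(n_iter):
        calc = beam_dose(psl_doses, w)
        resid = [c - m for c, m in zip(calc, measured)]
        for k, dose in enumerate(psl_doses):
            grad = sum(r * d for r, d in zip(resid, dose))
            w[k] -= lr * grad
    return w
```

With synthetic PSL doses and a "measured" dose generated from known weights, the fit recovers those weights, which is the essence of the commissioning step.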

  16. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Methodology for calculating the per-treatment base...-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data sources. The methodology for determining the per treatment base rate under the ESRD prospective payment...

  17. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.

    PubMed

    Demol, Benjamin; Viard, Romain; Reynaert, Nick

    2015-01-01

    The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed, and a distribution centered on zero with a standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is slightly modified, substantial dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using

  18. GIS supported calculations of (137)Cs deposition in Sweden based on precipitation data.

    PubMed

    Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats

    2006-09-15

    It is of interest to know the spatial variation and the amount of (137)Cs deposition, e.g., in the case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly (137)Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of (137)Cs deposition density per unit precipitation at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF (137)Cs deposition density were noted in the north-eastern and eastern parts of Sweden and the highest levels in the western parts. Furthermore, the deposition density of (137)Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl (137)Cs in western Sweden were found in the western parts of the area along the coast and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values and with other studies. PMID:16647743
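The deposition arithmetic described here is simple: multiply the reference-site deposition per unit precipitation by each station's precipitation, and decay-correct each quarterly value to a common date before summing for the cumulative figure. A hedged Python sketch (function names and units are illustrative, not from the paper):

```python
import math

CS137_HALF_LIFE_Y = 30.05  # 137Cs half-life in years


def deposition_density(dep_per_mm, precipitation_mm):
    """Quarterly deposition density (Bq/m2) from the reference-site ratio
    (Bq/m2 per mm of rain) and a station's quarterly precipitation (mm)."""
    return dep_per_mm * precipitation_mm


def cumulative_deposition(quarterly, ref_year):
    """Decay-correct each quarterly deposition to a common reference year
    and sum. `quarterly` is a list of (year, Bq/m2) pairs."""
    lam = math.log(2) / CS137_HALF_LIFE_Y
    return sum(d * math.exp(-lam * (ref_year - y)) for y, d in quarterly)
```

For example, a quarter with 50 mm of rain at a ratio of 2.0 Bq/m2 per mm yields 100 Bq/m2, and that quarter's contribution halves after one 137Cs half-life.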

  19. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    PubMed

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province caused a rapid transition of the coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide the basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The contribution rates of the ecological function values, from high to low, were in the order: nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081, ecological security will reach the bottom line and the ecological system, in which humans are the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone. PMID:27228612

  20. Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn

    PubMed Central

    Gräff, Ingo; Goldschmidt, Bernd; Glien, Procula; Klockner, Sophia; Erdfelder, Felix; Schiefer, Jennifer Lynn; Grigutsch, Daniel

    2016-01-01

    Background To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Material and Methods Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing requirement was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Results Patients classified to the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, for orange MTS category patients (n = 118), nursing staff were required for 85.07 min; for patients in the yellow MTS category (n = 181), 40.95 min; while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. When extrapolating this to 21,899 (2010) emergency patients, 67-123 emergency patients (50th-95th percentile) per month can be seen by one nurse. The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Conclusion Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments. PMID:27138492
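The staffing calculation in this abstract combines a patient-count-weighted mean engagement time with a shortfall correction. A rough Python sketch using the MTS figures quoted above (the annual working-hours figure is an assumed placeholder, not from the study, so the resulting FTE number is only indicative):

```python
def weighted_mean_minutes(categories):
    """Weighted average nursing engagement time per patient in minutes;
    weights are the patient counts per MTS category."""
    total_min = sum(n * m for n, m in categories.values())
    total_pat = sum(n for n, _ in categories.values())
    return total_min / total_pat


def full_time_staff(patients_per_year, mean_minutes,
                    shortfall=0.2087, annual_hours=1600.0):
    """Full-time equivalents: total care hours divided by the net annual
    working hours per nurse, deflated by the sick/vacation shortfall.
    `annual_hours` is an assumed figure, not taken from the study."""
    care_hours = patients_per_year * mean_minutes / 60.0
    return care_hours / ((1.0 - shortfall) * annual_hours)


# Patient counts and mean engagement times per MTS category (from the study).
MTS = {
    "red": (35, 97.93),
    "orange": (118, 85.07),
    "yellow": (181, 40.95),
    "green": (129, 23.18),
    "blue": (40, 14.99),
}
```

With these numbers, the weighted mean comes out near 48.6 minutes per patient, and the toy FTE estimate for 21,899 annual patients lands in the same neighborhood as the lower end of the study's 14.8-27.1 range.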

  1. Impact of Heterogeneity-Based Dose Calculation Using a Deterministic Grid-Based Boltzmann Equation Solver for Intracavitary Brachytherapy

    SciTech Connect

    Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M.N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas

    2012-07-01

    Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received 192Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without the solid model applicator, with and without overriding the patient contour to 1 g/cm3 muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm3 bone. The impact of source and boundary modeling, applicator, tissue heterogeneities, and the sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm3 to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm3 bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary and the applicator account for most of the differences between the GBBS and TG-43 guidelines. The D2cm3 rectum (n = 3), D2cm3 sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. 
Conclusions

  2. Impact of heterogeneity-based dose calculation using a deterministic grid-based Boltzmann equation solver for intracavitary brachytherapy

    PubMed Central

    Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M. N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas

    2014-01-01

    Purpose To investigate the dosimetric impact of the heterogeneity dose calculation Acuros, a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received 192Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance (MR)-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed: with and without the solid model applicator, with and without overriding the patient contour to 1 g/cc muscle, and with and without overriding contrast materials to muscle or 2.25 g/cc bone. The impact of source and boundary modeling, applicator, tissue heterogeneities, and the sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. TG-43 and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, ICRU report #38 rectal and bladder points, three and nine o'clock, and D2cc to the bladder, rectum, and sigmoid. Results Points A, B, D2cc bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary and the applicator account for most of the differences between the GBBS and TG-43. The D2cc rectum (n=3), D2cc sigmoid (n=1), and ICRU rectum (n=6) had differences > 5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. Conclusions The GBBS has minimal impact on clinical parameters for this cohort of GYN patients with unshielded applicators. The incorrect mapping of rectal and balloon contrast does not have a significant impact on clinical parameters. 
Rectal parameters may be

  3. Accuracy of continuum electrostatic calculations based on three common dielectric boundary definitions

    PubMed Central

    Aguilar, Boris

    2015-01-01

    by doubling of the solute dielectric constant. However, the use of the higher interior dielectric does not eliminate the large individual deviations between pairwise interactions computed within the two DB definitions. It is argued that while the MS-based definition of the dielectric boundary is more physically correct in some types of practical calculations, the choice is not so clear in some other common scenarios. PMID:26236064

  4. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    SciTech Connect

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.

    2014-02-15

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom, neglecting the plaque and interseed effects, is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media, is investigated. Each simulation and radionuclide is compared using isodose contours; dose volume histograms for the lens and tumor; maximum, minimum, and average doses to structures of interest; and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model

  5. An anatomically realistic lung model for Monte Carlo-based dose calculations

    SciTech Connect

    Liang Liang; Larsen, Edward W.; Chetty, Indrin J.

    2007-03-15

    Treatment planning for disease sites with large variations of electron density in neighboring tissues requires an accurate description of the geometry. This self-evident statement is especially true for the lung, a highly complex organ having structures with a wide range of sizes, from about 10^-4 to 1 cm. In treatment planning, the lung is commonly modeled by a voxelized geometry obtained using computed tomography (CT) data at various resolutions. The simplest such model, which is often used for QA and validation work, is the atomic mix or mean density model, in which the entire lung is homogenized and given a mean (volume-averaged) density. The purpose of this paper is (i) to describe a new heterogeneous random lung model, which is based on morphological data of the human lung, and (ii) to use this model to assess the differences in dose calculations between an actual lung (as represented by our model) and a mean density (homogenized) lung. Eventually, we plan to use the random lung model to assess the accuracy of CT-based treatment plans of the lung. For this paper, we have used Monte Carlo methods to make accurate comparisons between dose calculations for the random lung model and the mean density model. For four realizations of the random lung model, we used a single photon beam, with two different energies (6 and 18 MV) and four field sizes (1x1, 5x5, 10x10, and 20x20 cm2). We found a maximum difference of 34% of Dmax with the 1x1, 18 MV beam along the central axis (CAX). A "shadow" region distal to the lung, with dose reduction up to 7% of Dmax, exists for the same realization. The dose perturbations decrease for larger field sizes, but the magnitude of the differences in the shadow region is nearly independent of the field size. We also observe that, compared to the mean density model, the random structures inside the heterogeneous lung can alter the shape of the isodose lines, leading to a broadening or shrinking of the

  6. Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.

    PubMed

    Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark

    2013-05-21

    Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including in hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double-layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double-insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. 
The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally

  7. Experimental verification of internal dosimetry calculations: Construction of a heterogeneous phantom based on human organs

    NASA Astrophysics Data System (ADS)

    Lauridsen, Bente; Hedemann Jensen, Per

    1987-03-01

    The basic dosimetric quantity in ICRP Publication 30 is the absorbed fraction AF(T←S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T←S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T←S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells. The same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dosimeters in a matrix so that the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented.

  8. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that negatively affects the overall performance. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
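The core operation being accelerated here is the per-pixel multi-spectral Euclidean distance between a pixel's spectral vector and a reference signature. A plain Python sketch of that operation, the kind of software golden model one might compare an FPGA implementation against (function names and the nearest-signature use case are illustrative assumptions):

```python
import math


def spectral_distance(pixel, reference):
    """Euclidean distance between a pixel's spectral vector and a
    reference signature, one value per spectral band."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, reference)))


def classify(pixel, signatures):
    """Nearest-signature classification: return the label whose
    reference spectrum is closest to the pixel."""
    return min(signatures,
               key=lambda lbl: spectral_distance(pixel, signatures[lbl]))
```

A hardware version would typically pipeline the per-band subtract-square-accumulate and may skip the final square root when only distance ordering matters.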

  9. Fission yield calculation using toy model based on Monte Carlo simulation

    SciTech Connect

    Jubaidah; Kurniadi, Rizal

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly move the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
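The two-Gaussian picture described above can be mimicked with a few lines of Monte Carlo sampling: draw a fragment mass from one of the two Gaussians and assign the complementary mass to the partner fragment. A hedged Python sketch (the parameter values, the equal-probability choice between the two curves, and the parent mass number are illustrative assumptions, not the paper's calibration):

```python
import random


def sample_yields(mu_l, sigma_l, mu_r, sigma_r, a_total=236, n=10000, seed=1):
    """Draw fission-fragment mass yields from a two-Gaussian toy model.

    Each event picks the left or right Gaussian with equal probability;
    the complementary fragment carries the remaining nucleons so that
    each fragment pair sums to the parent mass number `a_total`.
    Returns (light, heavy) lists of fragment masses.
    """
    rng = random.Random(seed)
    light, heavy = [], []
    for _ in range(n):
        if rng.random() < 0.5:
            a1 = rng.gauss(mu_l, sigma_l)   # sample from the left curve
        else:
            a1 = rng.gauss(mu_r, sigma_r)   # sample from the right curve
        a2 = a_total - a1                   # mass conservation
        light.append(min(a1, a2))
        heavy.append(max(a1, a2))
    return light, heavy
```

Histogramming `light` and `heavy` reproduces the familiar double-humped asymmetric yield curve, and shifting μ or σ moves and broadens the humps, which is the sensitivity the abstract reports.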

  10. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition algorithm which estimates the orientation of the robot and recognizes the landmarks placed on the mat. The data fusion strategy is described to detect misclassifications of the landmarks in order to fuse only reliable information. The orientation given by the optical symbol recognition (OSR) algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm. PMID:22254744
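The fusion described above, relative odometry referenced to an absolute frame by OSR orientation and landmark fixes, can be outlined in a few lines. A simplified Python sketch (function names and the blending scheme are illustrative assumptions, not the ArmAssist implementation):

```python
import math


def integrate_odometry(pose, dx, dy, heading):
    """Advance an (x, y) pose by a body-frame displacement (dx, dy),
    rotated into the world frame by the OSR-estimated heading (radians)."""
    x, y = pose
    c, s = math.cos(heading), math.sin(heading)
    return (x + c * dx - s * dy, y + s * dx + c * dy)


def fuse_landmark(pose, landmark_pose, trust=1.0):
    """Blend the dead-reckoned pose toward an absolute landmark fix;
    trust=1 fully resets accumulated drift, trust=0 ignores the landmark."""
    return tuple(p + trust * (l - p) for p, l in zip(pose, landmark_pose))
```

Between landmark sightings the pose drifts with the odometry; each reliably recognized landmark re-anchors it to the mat's absolute coordinate system.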

  11. Stationarity Modeling and Informatics-Based Diagnostics in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.

    2005-01-15

    In Monte Carlo criticality calculations, source error propagation through the stationary (active) cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of fission kernels. For symmetric two-fissile-component systems with the DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is more than 1000. When such a system is made slightly asymmetric, the neutron effective multiplication factor at the inactive cycles does not reflect the convergence to stationary source distribution. To overcome this problem, relative entropy has been applied to a slightly asymmetric two-fissile-component problem with a DR of 0.993. The numerical results are mostly satisfactory but also show the possibility of the occasional occurrence of unnecessarily strict stationarity diagnostics. Therefore, a criterion is defined based on the concept of data compression limit in information theory. Numerical results for a pressurized water reactor fuel storage facility with a DR of 0.994 strongly support the efficacy of relative entropy in both the posterior and progressive stationarity diagnostics.
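The relative-entropy diagnostic amounts to computing the Kullback-Leibler divergence between a cycle's binned fission-source distribution and a reference distribution, and declaring stationarity once it falls below a threshold. A minimal Python sketch (the fixed-tolerance test is an illustrative simplification; the paper derives its criterion from a data-compression limit in information theory):

```python
import math


def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) between two binned fission-source
    distributions (lists of bin probabilities summing to 1);
    `eps` guards against empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)


def first_stationary_cycle(cycle_sources, ref, tol=1e-3):
    """Return the index of the first cycle whose source distribution is
    within `tol` (in KL divergence) of the reference distribution `ref`,
    or None if no cycle qualifies."""
    for i, src in enumerate(cycle_sources):
        if relative_entropy(src, ref) < tol:
            return i
    return None
```

For the symmetric two-fissile-component problem in the abstract, a cycle whose source has drifted heavily into one component shows a large divergence from the symmetric reference, which is exactly what the multiplication factor alone fails to reveal.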

  12. Accelerated materials design of fast oxygen ionic conductors based on first principles calculations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Mo, Yifei

    Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first-principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material with enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic factors limiting increases in the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first-principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order-of-magnitude increase in oxygen ionic conductivity, up to 0.1 S cm-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of the NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first-principles techniques. This computational methodology and workflow can be applied to the materials design of any (e.g. Li+, Na+) fast ion-conducting materials.
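
    Studies of this kind typically convert a computed diffusion coefficient into an ionic conductivity through the Nernst-Einstein relation, sigma = c q^2 D / (kB T). A minimal sketch with purely illustrative numbers (not values from this study):

```python
KB = 1.380649e-23      # Boltzmann constant, J/K
E  = 1.602176634e-19   # elementary charge, C

def nernst_einstein(c, z, D, T):
    """Nernst-Einstein conductivity estimate.
    c: carrier concentration (1/m^3), z: charge number of the carrier,
    D: diffusivity (m^2/s), T: temperature (K) -> conductivity (S/m)."""
    return c * (z * E) ** 2 * D / (KB * T)

# Illustrative O2- carrier: c = 1e28 m^-3, D = 1e-10 m^2/s at 900 K.
sigma = nernst_einstein(1e28, 2, 1e-10, 900.0)
```

    Correlation effects (the Haven ratio) are neglected in this simple form.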

  13. Calculation of Stress Intensity Factors Based on Force-Displacement Curve using Element Free Galerkin Method

    NASA Astrophysics Data System (ADS)

    Parvanova, Sonia

    2012-03-01

    An idea related to the calculation of stress intensity factors based on the standard appearance of the force-displacement curve is developed in this paper. The presented procedure predicts the shape of the curve around the point under consideration, from which the stress intensity factors are indirectly obtained. The numerical implementation of the new approach is achieved using the element free Galerkin method, a variant of the meshless methods that requires only nodal data for domain discretization, without a finite element mesh. A MATLAB software code for two-dimensional elasticity problems has been worked out, along with intrinsic basis enrichment for precise modelling of the singular stress field around the crack tip. A numerical example of a rectangular plate with different lengths of a symmetric edge crack is presented. The stress intensity factors obtained by the present numerical approach are compared with analytical solutions. The errors in the stress intensity factors for opening fracture mode I are less than 1%, although the model mesh is relatively coarse.
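
    The analytical solutions used for such comparisons are commonly the handbook polynomial fits; a sketch for a single edge crack under remote tension (the Gross-Brown polynomial, usually quoted as valid for a/W up to about 0.6 — this is a standard reference formula, not the paper's method):

```python
import math

def k1_edge_crack(sigma, a, W):
    """Mode-I stress intensity factor for a single edge crack of length a in
    a plate of width W under remote tension sigma:
    K_I = sigma * sqrt(pi*a) * f(a/W), with the Gross-Brown geometry factor."""
    r = a / W
    f = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
    return sigma * math.sqrt(math.pi * a) * f
```

    In the short-crack limit the geometry factor tends to the familiar edge-crack value 1.12.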

  14. Photon fluence-to-effective dose conversion coefficients calculated from a Saudi population-based phantom

    NASA Astrophysics Data System (ADS)

    Ma, A. K.; Altaher, K.; Hussein, M. A.; Amer, M.; Farid, K. Y.; Alghamdi, A. A.

    2014-02-01

    In this work we present a new set of photon fluence-to-effective dose conversion coefficients using the Saudi population-based voxel phantom recently developed by our group. The phantom corresponds to an average Saudi male, 173 cm tall and weighing 77 kg. There are over 125 million voxels in the phantom, each of which is 1.37×1.37×1.00 mm3. Of the 27 organs and tissues of radiological interest specified in the recommendations of ICRP Publication 103, all but the oral mucosa, extrathoracic tissue and the lymph nodes were identified in the current version of the phantom. The bone surface (endosteum), at about 10 μm thick, is too thin to be identifiable; the dose to the endosteum was therefore approximated by the dose to the bones. Irradiation geometries included anterior-posterior (AP), left lateral (LLAT) and rotational (ROT). The simulations were carried out with the MCNPX code version 2.5.0. The fluence in free air and the energy depositions in each organ were calculated for monoenergetic photon beams from 10 keV to 10 MeV to obtain the conversion coefficients. The radiation and tissue weighting factors were taken from ICRP Publications 60 and 103. The results from this study will also be compared with the conversion coefficients in ICRP Publication 116.
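
    The final step of such a calculation is the weighted organ-dose sum E = Σ w_T H_T. A sketch using the ICRP 103 tissue weighting factors, with the grouped "remainder" treated as a single tissue for simplicity (a simplification of the full ICRP procedure, not the authors' code):

```python
# ICRP Publication 103 tissue weighting factors (remainder lumped together).
W_T = {
    'red bone marrow': 0.12, 'colon': 0.12, 'lung': 0.12, 'stomach': 0.12,
    'breast': 0.12, 'remainder': 0.12, 'gonads': 0.08,
    'bladder': 0.04, 'oesophagus': 0.04, 'liver': 0.04, 'thyroid': 0.04,
    'bone surface': 0.01, 'brain': 0.01, 'salivary glands': 0.01, 'skin': 0.01,
}

def effective_dose(organ_doses):
    """organ_doses: dict tissue -> equivalent dose H_T (Sv).
    Returns the effective dose E = sum_T w_T * H_T (Sv)."""
    return sum(W_T[t] * h for t, h in organ_doses.items())
```

    Dividing E by the free-air fluence then yields the fluence-to-effective-dose conversion coefficient at each photon energy.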

  15. Neutron capture gamma-ray data and calculations for HPGe detector-based applications

    NASA Astrophysics Data System (ADS)

    McNabb, Dennis P.; Firestone, Richard B.

    2004-10-01

    Recently an IAEA Coordinated Research Project published an evaluation of thermal neutron capture gamma-ray cross sections, measured to 1-5% uncertainty, for over 80 elements [1], and produced the Evaluated Gamma-ray Activation File (EGAF) [2], containing nearly 35,000 primary and secondary gamma rays, which is available from the IAEA Nuclear Data Section. We have begun an effort to model the quasi-continuum gamma-ray cascade following neutron capture using the approach outlined by Becvar et al. [3], while constraining the calculation to reproduce the measured cross sections deexciting low-lying levels. Our goal is to provide complete neutron capture gamma-ray data in ENDF-formatted files for use as accurate event generators in high-resolution HPGe detector-based applications. The results will be benchmarked against experimental spectroscopic data and compared with existing gamma-decay widths and level densities. [1] Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis, IAEA-TECDOC-DRAFT (December, 2003); http://www-nds.iaea.org/pgaa/tecdoc.pdf. [2] Evaluated Gamma-ray Activation File maintained by the International Atomic Energy Agency; http://www-nds.iaea.org/pgaa/. [3] F. Becvar, Nucl. Instr. Meth. A417, 434 (1998).

  16. Research on Structural Safety of the Stratospheric Airship Based on Multi-Physics Coupling Calculation

    NASA Astrophysics Data System (ADS)

    Ma, Z.; Hou, Z.; Zang, X.

    2015-09-01

    As a large-scale flexible inflatable structure with a huge inner lifting-gas volume of several hundred thousand cubic meters, the stratospheric airship owes much of its structural performance to the thermal behaviour of the inner gas. During floating flight, the day-night variation of the combined thermal conditions leads to fluctuations of the flow field inside the airship, which remarkably affect the pressure acting on the skin and hence the structural safety of the airship. According to the multi-physics coupling mechanism described above, a numerical procedure for the structural safety analysis of stratospheric airships is developed, integrating a thermal model, a CFD model, a finite element code and a structural strength criterion. Based on these computational models, the distributions of the deformations and stresses of the skin are calculated as they vary over day-night time. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.
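
    The simplest form of the strength criterion in such a procedure is a membrane stress check: for a cylindrical hull section the differential pressure produces a hoop load per unit width N = dp * r, compared against the fabric's allowable strength. This is a textbook membrane-theory sketch with illustrative numbers, not the paper's coupled thermal/CFD/FE calculation:

```python
def hoop_load(dp, r):
    """Hoop (circumferential) membrane load per unit width of a cylindrical
    hull.  dp: overpressure (Pa), r: hull radius (m) -> N/m.
    (The axial load would be dp * r / 2.)"""
    return dp * r

def safety_factor(dp, r, fabric_strength):
    """fabric_strength: allowable load per unit width (N/m)."""
    return fabric_strength / hoop_load(dp, r)
```

    The day-night pressure fluctuation enters through dp, which is exactly the quantity the coupled thermal/flow models provide.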

  17. A new Fe–He interatomic potential based on ab initio calculations in α-Fe

    SciTech Connect

    Gao, Fei; Deng, Huiqiu; Heinisch, Howard L.; Kurtz, Richard J.

    2011-11-01

    A new interatomic potential for Fe-He interactions has been developed by fitting to results obtained from ab initio calculations. Based on the electronic hybridization between Fe d-electrons and He s-electrons, an s-band model, along with a repulsive pair potential, has been developed to describe the Fe-He interaction. The atomic configurations and formation energies of single He defects and small interstitial He clusters are utilized in the fitting process. The binding properties and relative stabilities of He-vacancy and interstitial He clusters are studied. The present Fe-He potential is also applied to study the emission of self-interstitial atoms from small He clusters in the α-Fe matrix. It is found that the di-He cluster dissociates when the temperature is higher than 400 K, but larger He clusters can create an interstitial Fe atom; the temperature for kicking out an interstitial Fe atom is found to decrease with increasing He cluster size.

  18. GPAW - massively parallel electronic structure calculations with Python-based software.

    SciTech Connect

    Enkovaara, J.; Romero, N.; Shende, S.; Mortensen, J.

    2011-01-01

    Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for this kind of simulation have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages such as Python can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of the Python and C programming languages. While the chosen approach works well on standard workstations and in Unix environments, massively parallel supercomputing systems can present challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.

  19. Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.

    PubMed

    Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P

    2016-06-14

    Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analyses (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets, by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics of up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. PMID:27139005

  20. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

    NASA Astrophysics Data System (ADS)

    Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.

    2007-02-01

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
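
    The S-factor tabulation described above follows the standard MIRD definition: S = Δ·φ/m, the absorbed dose in a target region per nuclear transition in a source region. A minimal sketch for a monoenergetic emission (illustrative values, not entries from the paper's tables):

```python
MEV_TO_J = 1.602176634e-13  # conversion from MeV to joules

def s_factor(energy_mev, absorbed_fraction, target_mass_kg):
    """MIRD-style S-factor (Gy Bq-1 s-1) for a monoenergetic emission:
    S = Delta * phi / m, where Delta is the energy emitted per decay (J),
    phi the fraction absorbed in the target, and m the target mass (kg)."""
    return energy_mev * MEV_TO_J * absorbed_fraction / target_mass_kg
```

    For a real radionuclide the S-factor is the sum of such terms over the full emission spectrum, which is how the monoenergetic Monte Carlo results are combined in studies of this kind.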

  1. Random and bias errors in simple regression-based calculations of sea-level acceleration

    NASA Astrophysics Data System (ADS)

    Howd, P.; Doran, K. J.; Sallenger, A. H.

    2012-12-01

    We examine the random and bias errors associated with three simple regression-based methods used to calculate the acceleration of sea-level elevation (SL). These methods are: (1) using ordinary least-squares regression (OLSR) to fit a single second-order (in time) equation to an entire elevation time series; (2) using a sliding regression window with OLSR 2nd order fits to provide time and window length dependent estimates; and (3) using a sliding regression window with OLSR 1st order fits to provide time and window length dependent estimates of sea level rate differences (SLRD). A Monte Carlo analysis using synthetic elevation time series with 9 different noise formulations (red, AR(1), and white noise at 3 variance levels) is used to examine the error structure associated with the three analysis methods. We show that, as expected, the single-fit method (1), while providing statistically unbiased estimates of the mean acceleration over an interval, by statistical design does not provide estimates of time-varying acceleration. This technique cannot be expected to detect recent changes in SL acceleration, such as those predicted by some climate models. The two sliding window techniques show similar qualitative results for the test time series, but differ dramatically in their statistical significance. Estimates of acceleration based on the 2nd order fits (2) are numerically smaller than the rate differences (3), and in the presence of near-equal residual noise, are more difficult to detect with statistical significance. We show, using the SLRD estimates from tide gauge data, how statistically significant changes in sea level accelerations can be detected at different temporal and spatial scales.
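
    Method (2) can be sketched as a sliding-window quadratic OLS fit, with the acceleration read off as twice the quadratic coefficient. This is a from-scratch illustration (normal equations with centred time), not the authors' analysis code:

```python
def quad_accel(t, y):
    """OLS fit y = c0 + c1*x + c2*x^2 (x = centred time); returns 2*c2,
    the acceleration estimate for the window."""
    n = len(t)
    tm = sum(t) / n
    x = [ti - tm for ti in t]
    # Normal equations A c = b for the monomial basis [1, x, x^2].
    s = [sum(xi**k for xi in x) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(yi * xi**k for xi, yi in zip(x, y)) for k in range(3)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return 2.0 * c[2]

def sliding_accel(t, y, window):
    """Window-length-dependent acceleration estimates (method 2 above)."""
    return [quad_accel(t[i:i + window], y[i:i + window])
            for i in range(len(t) - window + 1)]
```

    With noise added to y, the scatter of these window estimates illustrates exactly the random-error behaviour the study quantifies.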

  2. Accurate all-electron G0W0 quasiparticle energies employing the full-potential augmented plane-wave method

    NASA Astrophysics Data System (ADS)

    Nabok, Dmitrii; Gulans, Andris; Draxl, Claudia

    2016-07-01

    The GW approach of many-body perturbation theory has become a common tool for calculating the electronic structure of materials. However, with an increasing number of published results, discrepancies between the values obtained by different methods and codes become more and more apparent. For a test set of small- and wide-gap semiconductors, we demonstrate how to reach the numerically best electronic structure within the framework of the full-potential linearized augmented plane-wave (FLAPW) method. We first evaluate the impact of local orbitals on the Kohn-Sham eigenvalue spectrum of the underlying starting point. The role of the basis-set quality is then further analyzed when calculating the G0W0 quasiparticle energies. Our results, computed with the exciting code, are compared to those obtained using the projector augmented-wave formalism, finding overall good agreement between both methods. We also provide data produced with a typical FLAPW basis set as a benchmark for other G0W0 implementations.

  3. A Lagrangian parcel based mixing plane method for calculating water based mixed phase particle flows in turbo-machinery

    NASA Astrophysics Data System (ADS)

    Bidwell, Colin S.

    2015-05-01

    A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine. This method allows the prediction of the temperature and phase change of water-based particles along their path, and of the impingement efficiency and particle impact property data on various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the energy efficient engine, which was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m, assuming a standard warm day, for ice particle sizes of 5, 20 and 100 microns and a fixed free stream particle concentration. The impingement efficiency results showed that as particle size increased, average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The particle phase change analysis results showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest 5 micron particles were warmed enough to produce melting, with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6%).

  4. Calculation of vibrational spectra for dioxouranium monochloride monomer and dimers

    NASA Astrophysics Data System (ADS)

    Umreiko, D. S.; Shundalau, M. B.; Zazhogin, A. P.; Komyak, A. I.

    2010-09-01

    Structural models were built and spectral characteristics were calculated, based on ab initio calculations, for the monomer and dimers of dioxouranium monochloride UO2Cl. The DFT (B3LYP) calculations used the LANL2DZ effective core potential approximation for the uranium atom and all-electron cc-pVDZ basis sets for the oxygen and chlorine atoms. The monomer UO2Cl was found to possess an equilibrium planar (close to T-shaped) configuration with C2v symmetry. The obtained spectral characteristics were analyzed and compared with experimental data. The adequacy of the proposed models and the qualitative agreement between calculation and experiment were demonstrated.

  5. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    ERIC Educational Resources Information Center

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  6. An EGS4 based mathematical phantom for radiation protection calculations using standard man

    SciTech Connect

    Wise, K.N.

    1994-11-01

    This note describes an EGS4 (Electron Gamma Shower) Monte Carlo program for calculating radiation transport in adult males and females from internal or external electron and gamma sources, requiring minimal knowledge of organ geometry. Calculations of the dose from planar gamma fields and from computerized tomography illustrate two applications of the package. 25 refs., 5 figs.

  7. New calculation schemes for the "building-base" system in conditions of collapsing loess soils

    SciTech Connect

    Mezherovskii, V.A.

    1994-05-01

    New calculation schemes are suggested for the "building-collapsing loess base" system, with the help of which it is possible to obtain values of the forces and displacements occurring in a building as a result of base collapse that are close to the real ones with respect to the moistening and deformation of the loess strata.

  8. Benchmarking of Monte Carlo based shutdown dose rate calculations for applications to JET.

    PubMed

    Petrizzi, L; Batistoni, P; Fischer, U; Loughlin, M; Pereslavtsev, P; Villari, R

    2005-01-01

    The calculation of dose rates after shutdown is an important issue for operating nuclear reactors. A validated computational tool is needed for reliable dose rate calculations. In fusion reactors neutrons induce high levels of radioactivity and presumably high doses. The complex geometries of the devices require the use of sophisticated geometry modelling and computational tools for transport calculations. Simple rule-of-thumb laws do not always apply well. Two computational procedures have been developed recently and applied to fusion machines. Comparisons between the two methods showed some inherent discrepancies when applied to calculations for ITER, while good agreement was found for a 14 MeV point source neutron benchmark experiment. Further benchmarks were considered necessary to investigate in more detail the reasons for the different results in different cases. In this frame the application to the Joint European Torus (JET) machine has been considered a useful benchmark exercise. In a first calculational benchmark with a representative D-T irradiation history of JET, the two methods differed by no more than 25%. In another, more realistic benchmark exercise, which is the subject of this paper, the real irradiation histories of the D-T and D-D campaigns conducted at JET in 1997-98 were used to calculate the shutdown doses at different locations, irradiation times and decay times. Experimental dose data recorded at JET for the same conditions offer the possibility to check the prediction capability of the calculations and thus show the applicability (and the constraints) of the procedures and data for the rather complex shutdown dose rate analysis of real fusion devices. Calculation results obtained by the two methods are reported below; comparison with experimental results gives discrepancies ranging between factors of 2 and 10. The reasons for this can be ascribed to the high uncertainty of the experimental data and the unsatisfactory JET model used in the calculation.

  9. Volume calculation of subsurface structures and traps in hydrocarbon exploration — a comparison between numerical integration and cell based models

    NASA Astrophysics Data System (ADS)

    Slavinić, Petra; Cvetković, Marko

    2016-01-01

    The volume calculation of geological structures is one of the primary goals of interest when dealing with exploration or production of oil and gas in general. Most of these calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and those obtained from cell-based models. The comparison is illustrated with four models: a dome (half of a sphere), an elongated anticline, a stratigraphic trap due to lateral facies change, and a faulted anticline trap. Results show that Simpson's and the trapezoidal rules give a very accurate volume calculation even with few inputs (isopach areas as ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, with less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
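
    The two quadrature rules compared above can be sketched directly, using the dome (half-sphere) test case, where the slice areas are quadratic in depth so Simpson's rule is exact. The implementation below is a textbook illustration, not the paper's workflow:

```python
import math

def volume_trapezoid(areas, h):
    """Trapezoidal rule over equally spaced isopach areas (ordinates)."""
    return h * (0.5 * (areas[0] + areas[-1]) + sum(areas[1:-1]))

def volume_simpson(areas, h):
    """Simpson's rule; requires an even number of intervals."""
    n = len(areas) - 1
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of intervals")
    return h / 3 * (areas[0] + areas[-1]
                    + 4 * sum(areas[1:-1:2]) + 2 * sum(areas[2:-1:2]))

# Dome test case: half of a sphere of radius R; the slice area at height z
# is A(z) = pi*(R^2 - z^2), and the exact volume is (2/3)*pi*R^3.
R, n = 100.0, 10
h = R / n
areas = [math.pi * (R**2 - (i * h)**2) for i in range(n + 1)]
```

    With only 11 ordinates, Simpson's rule reproduces the exact dome volume while the trapezoidal rule is within a fraction of a percent, consistent with the accuracy reported above.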

  10. [Calculation and analysis of arc temperature field of pulsed TIG welding based on Fowler-Milne method].

    PubMed

    Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang

    2012-09-01

    Pulsed TIG welding is widely used in industry due to its superior properties, and the measurement of arc temperature is important for analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, as was the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature. Arc images at the 794.8 nm spectral line were captured by a high speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding. PMID:23240389
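
    The Abel inversion step recovers radial emission coefficients from the line-of-sight intensities in the camera image. One standard discretization is onion peeling, sketched below under the usual assumptions of axisymmetry and an optically thin arc (the paper does not state which discretization it used):

```python
import math

def chord(j, k, dr):
    """Length of the line of sight at lateral offset y_j = j*dr inside the
    annulus r in [k*dr, (k+1)*dr]; zero when the ray misses the annulus."""
    if k < j:
        return 0.0
    y, ri, ro = j * dr, k * dr, (k + 1) * dr
    return 2.0 * (math.sqrt(ro**2 - y**2) - math.sqrt(max(ri**2 - y**2, 0.0)))

def onion_peel(I, dr):
    """Recover radial emission coefficients eps_k from a half-profile of
    integrated intensities I_j, solving the upper-triangular system
    I_j = sum_k chord(j, k)*eps_k from the outermost ring inwards."""
    n = len(I)
    eps = [0.0] * n
    for j in range(n - 1, -1, -1):
        s = sum(chord(j, k, dr) * eps[k] for k in range(j + 1, n))
        eps[j] = (I[j] - s) / chord(j, j, dr)
    return eps
```

    With the Fowler-Milne method, the recovered eps(r) profile is then mapped to temperature through the known maximum of the emission coefficient versus temperature curve.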

  11. Current in the Protein Nanowires: Quantum Calculations of the Base States.

    PubMed

    Suprun, Anatol D; Shmeleva, Liudmyla V

    2016-12-01

    It is known that the synthesis of adenosine triphosphate in mitochondria can only be completed if the electron pairs created by oxidation processes are transported to the mitochondria. To date, many efforts have been made to understand the processes that occur in the course of donor-acceptor electron transport between cellular organelles (that is, between various proteins and protein structures). However, the problem concerning the mechanisms of electron transport over these organelles still remains understudied. This paper is dedicated to the investigation of these same issues. It has been shown that, regardless of the amino acid inhomogeneity of the primary structure, it is possible to apply a second-quantization (occupation-number) representation to the protein molecule. Based on this representation, it has been established that the primary structure of the protein molecule is effectively a semiconductor nanowire and that its conduction band, into which an electron is injected as the result of donor-acceptor processes, consists of five sub-bands. Three of these sub-bands have normal dispersion laws, while the remaining two have abnormal (reverse) dispersion laws. A test calculation of the current density was made in the complete absence of factors that may be interpreted as external fields. It has been shown that under such conditions the current density is exactly equal to zero. This supports the correctness of the predictive model of the conduction band of the primary structure of the protein molecule (protein nanowire) and makes it possible to apply the obtained results to the actual situation, where factors that may be interpreted as external fields exist. PMID:26858156
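
    The zero-current check can be illustrated with a one-dimensional tight-binding sketch of a single sub-band, E(k) = E0 - 2t cos(ka): the sign of t distinguishes a normal from an abnormal (reverse) dispersion law, and without an external field the ±k states are populated symmetrically, so the summed group velocity vanishes. This is a generic textbook model, not the paper's five-sub-band Hamiltonian; all values are illustrative.

```python
import math

def group_velocity(k, t, a, hbar=1.0):
    """Group velocity v(k) = dE/dk / hbar for E(k) = E0 - 2*t*cos(k*a)."""
    return 2.0 * t * a * math.sin(k * a) / hbar

def total_velocity(t, a, n=101):
    """Sum of group velocities over a symmetric k-grid spanning the Brillouin
    zone [-pi/a, pi/a]; proportional to the current density of a symmetrically
    populated band."""
    ks = [-math.pi / a + 2 * math.pi / a * i / (n - 1) for i in range(n)]
    return sum(group_velocity(k, t, a) for k in ks)
```

    The cancellation holds for either sign of t, i.e. for both the normal and the reverse sub-bands; an external field would shift the occupied k-states and break it.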

  12. Ray tracing based path-length calculations for polarized light tomographic imaging

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Kanhirodan, Rajan

    2015-09-01

    A ray-tracing-based path length calculation for polarized light transport in a pixel space is investigated. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small animal imaging and in turbid media with low scattering. Polarized light transport through a medium can show complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain the initial absorption estimate, which can be used as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, to assist faster convergence of the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence there is a difference in the intensities that reach the detector end. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix, with corrected ray path lengths, and the resultant intensity reaching the detector for each ray are used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms for various noise levels. The refraction errors due to regions of different refractive index are discussed and the difference in intensities with polarization is considered. The improvements in reconstruction using the correction so applied are presented. This is achieved by tracking the path of the ray as well as its intensity as it traverses the medium.
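
    The per-pixel path lengths that populate one row of the weight matrix can be computed with a Siddon-style parametric traversal. The sketch below handles the simpler straight-ray case (the paper's method additionally bends the ray at refractive interfaces); the grid layout and names are illustrative:

```python
import math

def pixel_path_lengths(p0, p1, nx, ny, cell=1.0):
    """Lengths of the segment p0 -> p1 inside each cell of an nx-by-ny grid
    of square cells of side `cell`, anchored at the origin.  Returns a dict
    {(ix, iy): length} - the nonzero entries of one weight-matrix row."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # Parametric values t in [0, 1] where the ray crosses grid lines.
    ts = {0.0, 1.0}
    if dx:
        ts |= {(i * cell - x0) / dx for i in range(nx + 1)}
    if dy:
        ts |= {(j * cell - y0) / dy for j in range(ny + 1)}
    ts = sorted(t for t in ts if 0.0 <= t <= 1.0)
    weights = {}
    for ta, tb in zip(ts, ts[1:]):
        tm = 0.5 * (ta + tb)                 # midpoint identifies the cell
        ix = int((x0 + tm * dx) // cell)
        iy = int((y0 + tm * dy) // cell)
        if 0 <= ix < nx and 0 <= iy < ny:
            weights[(ix, iy)] = weights.get((ix, iy), 0.0) + (tb - ta) * length
    return weights
```

    For refracted rays, the same traversal is applied piecewise to each straight segment between interfaces, which is how the corrected path lengths enter the ART system matrix.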

  13. Full Dimensional Vibrational Calculations for Methane Using an Accurate New Ab Initio Based Potential Energy Surface

    NASA Astrophysics Data System (ADS)

    Majumder, Moumita; Dawes, Richard; Wang, Xiao-Gang; Carrington, Tucker; Li, Jun; Guo, Hua; Manzhos, Sergei

    2014-06-01

    New potential energy surfaces for methane were constructed, represented as analytic fits to about 100,000 individual high-level ab initio data points. Explicitly-correlated multireference data (MRCI-F12(AE)/CVQZ-F12) were computed using Molpro [1] and fit using multiple strategies. Fits with small to negligible errors were obtained using adaptations of the permutation-invariant-polynomials (PIP) approach [2,3] based on neural networks (PIP-NN) [4,5] and the interpolative moving least squares (IMLS) fitting method [6] (PIP-IMLS). The PESs were used in full-dimensional vibrational calculations with an exact kinetic energy operator, by representing the Hamiltonian in a basis of products of contracted bend and stretch functions and using a symmetry-adapted Lanczos method to obtain eigenvalues and eigenvectors. Very close agreement with experiment was obtained from the purely ab initio PESs. References: [1] H.-J. Werner, P. J. Knowles, G. Knizia, MOLPRO, a package of ab initio programs, 2012.1 ed., 2012; see http://www.molpro.net. [2] Z. Xie and J. M. Bowman, J. Chem. Theory Comput. 6, 26 (2010). [3] B. J. Braams and J. M. Bowman, Int. Rev. Phys. Chem. 28, 577 (2009). [4] J. Li, B. Jiang and Hua Guo, J. Chem. Phys. 139, 204103 (2013). [5] S. Manzhos, X. Wang, R. Dawes and T. Carrington, J. Phys. Chem. A 110, 5295 (2006). [6] R. Dawes, X.-G. Wang, A. W. Jasper and T. Carrington Jr., J. Chem. Phys. 133, 134304 (2010).

  14. Calculation procedures for oil free scroll compressors based on mathematical modelling of working process

    NASA Astrophysics Data System (ADS)

    Paranin, Y.; Burmistrov, A.; Salikeev, S.; Fomina, M.

    2015-08-01

    Basic propositions of calculation procedures for the characteristics of oil-free scroll compressors are presented. It is shown that mathematical modelling of the working process in a scroll compressor makes it possible to take into account such factors influencing the working process as heat and mass exchange, mechanical interaction in the working chambers, leakage through slots, etc. The basic mathematical model may be supplemented by taking into account external heat exchange, elastic deformation of the scrolls, inlet and outlet losses, etc. To evaluate the influence of the procedure on the accuracy of the calculated scroll compressor characteristics, different calculations were carried out. Internal adiabatic efficiency, which evaluates the perfection of the internal thermodynamic and gas-dynamic compressor processes, was chosen as the comparative parameter. Calculated characteristics are compared with experimental values obtained for a compressor pilot sample.
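
    The comparison parameter above, internal adiabatic efficiency, is conventionally the ratio of isentropic to actual compression work. A minimal ideal-gas sketch with illustrative values (the paper's model is far more detailed, accounting for leakage, heat exchange, etc.):

```python
def isentropic_work(cp, T1, p_ratio, gamma):
    """Ideal-gas isentropic compression work per unit mass (J/kg):
    w_s = cp * T1 * ((p2/p1)**((gamma-1)/gamma) - 1)."""
    return cp * T1 * (p_ratio ** ((gamma - 1.0) / gamma) - 1.0)

def adiabatic_efficiency(cp, T1, p_ratio, gamma, w_actual):
    """Internal adiabatic efficiency: isentropic work / actual work."""
    return isentropic_work(cp, T1, p_ratio, gamma) / w_actual
```

    Losses modelled in the working process (leakage through slots, heat exchange) increase w_actual and so lower the efficiency, which is why it serves as a sensitive comparison parameter.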

  15. 12 CFR 702.106 - Standard calculation of risk-based net worth requirement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...

  16. An assessment of TRAC-PD2 refill calculations based on Creare countercurrent flow tests

    SciTech Connect

    Bott, T.F.

    1982-04-01

    An important step in computer code development is the assessment of code capabilities through comparison of calculated results with experimental data. A number of Creare countercurrent flow tests were simulated with the Transient Reactor Analysis Code (TRAC)-PD2 to assess its emergency core coolant (ECC) lower plenum penetration and refill predictive capabilities. The tests examined in this study indicate a prediction of complete bypass and delivery at countercurrent steam flows where these phenomena occurred experimentally. Steam flows leading to partial delivery experimentally did not always lead to partial delivery in the calculations, however. A number of parameters can potentially affect TRAC refill calculations. Sensitivity studies indicate the TRAC results are most sensitive to droplet Weber number variations that affect interfacial shear and heat transfer rates. The condensation model also affects calculations with subcooled ECC liquid.

  17. A regression model for calculating the boiling point isobars of tetrachloromethane-based binary solutions

    NASA Astrophysics Data System (ADS)

    Preobrazhenskii, M. P.; Rudakov, O. B.

    2016-01-01

    A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the model proposed were calculated for a series of solutions. The correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter value of the proposed model is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.
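    A minimal sketch of the kind of model described: the boiling point of a binary solution expressed as a mole-fraction-weighted mean of the pure-component boiling points plus a nonadditivity term A·x(1-x), with A estimated by least squares. The co-solvent boiling point and the value of A are synthetic assumptions, not the paper's fitted parameters.

```python
import numpy as np

# T(x) = x*T1 + (1-x)*T2 + A*x*(1-x), A = nonadditivity parameter
T1 = 76.7           # boiling point of CCl4, degC
T2 = 64.7           # boiling point of a hypothetical co-solvent, degC
A_true = -12.0      # synthetic nonadditivity parameter (degC)

x = np.linspace(0.1, 0.9, 9)                 # mole fraction of CCl4
T_obs = x * T1 + (1 - x) * T2 + A_true * x * (1 - x)  # synthetic "data"

# Least-squares estimate of A from residuals over the additive baseline
resid = T_obs - (x * T1 + (1 - x) * T2)
basis = x * (1 - x)
A_fit = float(np.sum(resid * basis) / np.sum(basis ** 2))
```

A strongly negative A (a boiling-point minimum) is the regime in which azeotrope formation would be suspected, which is the predictive use mentioned in the abstract.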

  18. Effectiveness of a computer based medication calculation education and testing programme for nurses.

    PubMed

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne

    2012-01-01

    The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety. PMID:21345550

  19. Calculation of aqueous solubility of crystalline un-ionized organic chemicals and drugs based on structural similarity and physicochemical descriptors.

    PubMed

    Raevsky, Oleg A; Grigor'ev, Veniamin Yu; Polianczyk, Daniel E; Raevskaja, Olga E; Dearden, John C

    2014-02-24

    Solubilities of crystalline organic compounds calculated according to AMP (arithmetic mean property) and LoReP (local one-parameter regression) models based on structural and physicochemical similarities are presented. We used data on the water solubility of 2615 compounds in un-ionized form measured at 25±5 °C. The calculation results were compared with an equation based on experimental data for lipophilicity and melting point. According to statistical criteria, the model based on structural and physicochemical similarities showed a better fit with the experimental data. An additional advantage of this model is that it uses only theoretical descriptors, and this provides a means of calculating water solubility for both existing and not yet synthesized compounds. PMID:24456022

  20. First macro Monte Carlo based commercial dose calculation module for electron beam treatment planning—new issues for clinical consideration

    NASA Astrophysics Data System (ADS)

    Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.

    2006-06-01

    The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning, and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as statistical uncertainties that arise when Monte Carlo based systems are used in patient dose calculations. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) type low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New required experimental input data for commissioning are presented. The validation shows excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water' as done in a conventional system. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. Therefore it is imperative that a consistent approach to dose reporting is used.
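    The dose-reporting difference discussed above follows the Bragg-Gray relation between dose-to-medium and dose-to-water, which scales by the water-to-medium stopping-power ratio. A sketch with placeholder ratios (illustrative values, not tabulated data):

```python
# D_w = D_med * s_w_med, where s_w_med is the unrestricted water-to-medium
# mass collision stopping-power ratio (Bragg-Gray relation).

def dose_to_water(d_med, s_w_med):
    """Convert dose-to-medium (Gy) to dose-to-water (Gy)."""
    return d_med * s_w_med

# Placeholder stopping-power ratios; real values are energy- and
# medium-dependent and come from tabulations.
S_W_MED = {"soft_tissue": 1.00, "lung": 1.00, "cortical_bone": 1.12}

d_med = 2.0  # Gy reported as dose-to-medium
d_w_bone = dose_to_water(d_med, S_W_MED["cortical_bone"])
# the bone ratio above is chosen to mimic the ~12% difference the
# abstract reports; it is not a reference value
```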

  1. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accordance with § 600.010-08(c)(1)(ii). (3) The manufacturer shall supply total model year sales projections...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment....208-08 Calculation of FTP-based and HFET-based fuel economy values for a model type. (a) Fuel...

  2. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... supply total model year sales projections for each car line/vehicle subconfiguration combination. (i...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment... 1977 and Later Model Year Automobiles § 600.208-08 Calculation of FTP-based and HFET-based fuel...

  3. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accordance with § 600.010-08(c)(1)(ii). (3) The manufacturer shall supply total model year sales projections...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment....208-08 Calculation of FTP-based and HFET-based fuel economy values for a model type. (a) Fuel...

  4. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... exists for an electric vehicle configuration, all values for that vehicle configuration are harmonically... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of... Values § 600.206-08 Calculation and use of FTP-based and HFET-based fuel economy values for...

  5. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... exists for an electric vehicle configuration, all values for that vehicle configuration are harmonically... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of... Values § 600.206-08 Calculation and use of FTP-based and HFET-based fuel economy values for...

  6. GPU-based calculation of scattering characteristics of space target in the visible spectrum

    NASA Astrophysics Data System (ADS)

    Cao, YunHua; Wu, Zhensen; Bai, Lu; Song, Zhan; Guo, Xing

    2014-10-01

    Scattering characteristics of a space target in the visible spectrum, which can be used in target detection, target identification and space docking, are calculated in this paper. The algorithm for computing the scattering characteristics of a space target is introduced. In the algorithm, the space target is divided into thousands of triangular facets, and the calculation must be performed for each facet to obtain the scattering characteristics of the target. For each facet, the calculation is executed over the spectrum of 400-760 nm at intervals of 1 nm. Thousands of facets, each with hundreds of spectral bands, make the computation huge and very time-consuming. Taking advantage of the high parallelism of the algorithm, Graphics Processing Units (GPUs) are used to accelerate it. The acceleration reaches a 300x speedup on a single Fermi-generation NVIDIA GTX 590 compared to the single-threaded CPU version of the code on an Intel(R) Xeon(R) CPU E5-2620, and a speedup of 412x is reached when a Kepler-generation NVIDIA K20c is used.
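    The per-facet, per-wavelength independence that makes this algorithm GPU-friendly can be sketched on the CPU with NumPy broadcasting; the Lambertian-like facet model and all inputs below are illustrative assumptions, not the paper's scattering model:

```python
import numpy as np

# Each (facet, wavelength) pair is independent -- the parallelism the GPU
# exploits. Here the same structure is expressed as one broadcasted
# (n_facets x n_wavelengths) array operation.
rng = np.random.default_rng(0)
n_facets = 5000
wavelengths = np.arange(400, 761)               # nm, 1 nm step, as in the abstract

area = rng.uniform(1e-4, 1e-2, n_facets)        # facet areas, m^2 (synthetic)
cos_i = rng.uniform(0.0, 1.0, n_facets)         # incidence cosines (synthetic)
albedo = rng.uniform(0.1, 0.9, (n_facets, 1))   # per-facet albedo (synthetic)
solar = np.ones(wavelengths.size)               # flat solar spectrum placeholder

# Lambertian-like contribution of every facet at every wavelength,
# then a reduction over facets gives the target's scattered spectrum.
contrib = albedo * (area * cos_i)[:, None] * solar[None, :] / np.pi
target_spectrum = contrib.sum(axis=0)
```

On a GPU the same contribution matrix is simply partitioned across threads, one or more (facet, band) cells per thread.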

  7. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  8. Validation of XiO Electron Monte Carlo-based calculations by measurements in a homogeneous phantom and by EGSnrc calculations in a heterogeneous phantom.

    PubMed

    Edimo, P; Kwato Njock, M G; Vynckier, S

    2013-11-01

    The purpose of the present study is to perform a clinical validation of a new commercial Monte Carlo (MC) based treatment planning system (TPS) for electron beams, the XiO 4.60 electron MC (XiO eMC). Firstly, MC models for electron beams (4, 8, 12 and 18 MeV) were simulated using the BEAMnrc user code and validated by measurements in a homogeneous water phantom. Secondly, these BEAMnrc models were set as the reference tool to evaluate the ability of XiO eMC to reproduce dose perturbations in a heterogeneous phantom. In the homogeneous phantom calculations, differences between MC computations (BEAMnrc, XiO eMC) and measurements are less than 2% in the homogeneous dose regions, with less than 1 mm shift in the high dose gradient regions. As for the heterogeneous phantom, the accuracy of XiO eMC was benchmarked against the BEAMnrc models. In lung tissue, the overall agreement between the two schemes lies under 2.5% for most of the tested dose distributions at 8, 12 and 18 MeV, and is better than at 4 MeV. In non-lung tissue, good agreement was found between BEAMnrc simulation and XiO eMC computation for 8, 12 and 18 MeV; results are worse for the 4 MeV calculations (discrepancies of about 4%). XiO eMC can predict dose perturbations induced by high-density heterogeneities for 8, 12 and 18 MeV. However, the significant deviations found at 4 MeV demonstrate that caution is necessary when using XiO eMC at lower electron energies. PMID:23010450

  9. [The calculation of the intraocular lens power based on raytracing methods: a systematic review].

    PubMed

    Steiner, Deborah; Hoffmann, Peter; Goldblum, David

    2013-04-01

    A problem in cataract surgery is the preoperative determination of the appropriate intraocular lens (IOL) power. Different calculation approaches have been developed for this purpose; raytracing methods represent one of the most exact, but also mathematically more challenging, approaches. This article gives a systematic overview of the different raytracing calculations available and described in the literature and compares their results. It has been shown that raytracing incorporates physical measurements and IOL manufacturing data without approximations. The prediction error is close to zero, and an essential advantage is applicability to different conditions without the need for modifications. Compared to the classical formulae, the raytracing methods are more precise overall, but owing to the varying data and reporting situations they are hardly comparable yet. The raytracing calculations represent a good alternative to the third-generation formulae: they minimize refractive errors, are more widely applicable and provide better results overall, particularly in eyes with preconditions. PMID:23629771

  10. Zonal calculation for large scale drought monitoring based on MODIS data

    NASA Astrophysics Data System (ADS)

    Li, Hongjun; Zheng, Li; Li, Chunqiang; Lei, Yuping

    2006-08-01

    Temperature vegetation dryness index (TVDI) is a simple and effective method for drought monitoring. In this study, the statistical characteristics of MODIS-EVI and MODIS-NDVI at two different times were analyzed and compared. NDVI saturates in well-vegetated areas, while EVI has no such shortcoming; MODIS-EVI was therefore used as the vegetation index for TVDI. Analysis of the vegetation index and land surface temperature at different latitudes and times showed a zonal distribution of the land surface parameters, so it is necessary to calculate the TVDI zonally. Compared with TVDI calculated for the whole region, the zonal calculation of TVDI increases the accuracy of the regression equations for the wet and dry edges, improves the correlation between TVDI and measured soil moisture, and improves the effectiveness of large-scale drought monitoring using remote sensing data.
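    A minimal sketch of a TVDI computation of the kind described: the dry and wet edges are linear regressions of land surface temperature on the vegetation index, fitted through per-bin extremes, and zonal calculation amounts to repeating this fit per latitude band. All data below are synthetic.

```python
import numpy as np

def edge_fit(evi, ts, reducer):
    """Fit Ts = a + b*EVI through per-bin extremes (dry edge: max, wet: min)."""
    bins = np.linspace(evi.min(), evi.max(), 11)
    idx = np.digitize(evi, bins[1:-1])          # bin index 0..9 per pixel
    centers, extremes = [], []
    for k in range(10):
        m = idx == k
        if m.any():
            centers.append(evi[m].mean())
            extremes.append(reducer(ts[m]))
    return np.polyfit(centers, extremes, 1)     # slope, intercept

def tvdi(evi, ts):
    """TVDI = (Ts - Ts_wet) / (Ts_dry - Ts_wet), edges fitted from the data."""
    b_dry, a_dry = edge_fit(evi, ts, np.max)
    b_wet, a_wet = edge_fit(evi, ts, np.min)
    ts_dry = a_dry + b_dry * evi
    ts_wet = a_wet + b_wet * evi
    return (ts - ts_wet) / (ts_dry - ts_wet)

# Synthetic zone: hotter surfaces where vegetation is sparse
rng = np.random.default_rng(1)
evi = rng.uniform(0.1, 0.8, 2000)
ts = 320.0 - 30.0 * evi + rng.uniform(0.0, 10.0, 2000)   # K
t = tvdi(evi, ts)
```

Zonal calculation would simply call `tvdi` once per latitude band instead of once for the whole scene.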

  11. Study on the Calculation of Magnetic Force Based on the Equivalent Magnetic Charge Method

    NASA Astrophysics Data System (ADS)

    Li, Jiangang; Tan, Qingchang; Zhang, Yongqi; Zhang, Kuo

    Magnetic drivers are widely used in the pharmaceutical, chemical, petroleum, food and other industries because of their perfect, contactless sealing. Common methods of calculating the magnetic force are the Maxwell equations, empirical formulas and the equivalent magnetic charge method. The Maxwell equations method is the most complicated, while the empirical formulas method is the simplest but of low accuracy; the equivalent magnetic charge method is simpler than the Maxwell equations method and more accurate than the empirical formulas. In this paper, the magnetic force of a reciprocating linear magnetic driver is calculated with the equivalent magnetic charge method and compared with experiment.
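    A sketch of the equivalent magnetic charge method for one simple geometry: two parallel square pole faces carry surface "magnetic charge" densities equal to the magnetic polarization, and surface elements interact Coulomb-like. The geometry, polarization values and discretization below are illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def axial_force(side, gap, sigma1, sigma2, n=20):
    """Axial force (N) between two parallel square 'charged' pole faces.

    Equivalent magnetic charge model: each face carries surface charge
    density sigma equal to the magnetic polarization J (tesla); element
    pairs interact as dF = sigma1*sigma2/(4*pi*mu0) * dA1*dA2 / r^2.
    The n x n discretization per face is an illustrative choice.
    """
    xs = (np.arange(n) + 0.5) * side / n - side / 2.0
    X, Y = np.meshgrid(xs, xs)
    dA = (side / n) ** 2
    p1 = np.column_stack([X.ravel(), Y.ravel(), np.zeros(n * n)])
    p2 = np.column_stack([X.ravel(), Y.ravel(), np.full(n * n, gap)])
    d = p2[None, :, :] - p1[:, None, :]          # all element-pair vectors
    r2 = (d ** 2).sum(-1)
    # z-component of the Coulomb-like force, summed over all pairs
    fz = sigma1 * sigma2 / (4.0 * np.pi * MU0) * dA * dA * d[..., 2] / r2 ** 1.5
    return float(fz.sum())

# Illustrative case: 20 mm faces, 5 mm gap, J = 1.2 T on both faces
f = axial_force(side=0.02, gap=0.005, sigma1=1.2, sigma2=1.2)
```

Opposite-sign charge densities would flip the sign of the result, i.e. attraction versus repulsion.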

  12. Optimum design calculations for detectors based on ZnSe(Te,O) scintillators

    NASA Astrophysics Data System (ADS)

    Katrunov, K.; Ryzhikov, V.; Gavrilyuk, V.; Naydenov, S.; Lysetska, O.; Litichevskyi, V.

    2013-06-01

    Light collection in scintillators ZnSe(X), where X is an isovalent dopant, was studied using Monte Carlo calculations. Optimum design was determined for detectors of "scintillator—Si-photodiode" type, which can involve either one scintillation element or scintillation layers of large area made of small-crystalline grains. The calculations were carried out both for determination of the optimum scintillator shape and for design optimization of light guides, on the surface of which the layer of small-crystalline grains is formed.

  13. Spectral linelist of HD16O molecule based on VTT calculations for atmospheric application

    NASA Astrophysics Data System (ADS)

    Voronin, B. A.

    2014-11-01

    Three versions of a line list of dipole transitions for HD16O, an isotopologue of the water molecule, are presented. The line lists were created on the basis of the VTT calculations (Voronin, Tennyson, Tolchenov et al., MNRAS, 2010) by adding air- and self-broadening coefficients and temperature exponents for the HD16O-air case. Three cut-off values for the line intensities were used: 1e-30, 1e-32 and 1e-35 cm/molecule. The calculated line lists are available at ftp://ftp.iao.ru/pub/VTT/VTT-296/.

  14. Calculation of signal-to-noise ratio (SNR) of infrared detection system based on MODTRAN model

    NASA Astrophysics Data System (ADS)

    Lu, Xue; Li, Chuang; Fan, Xuewu

    2013-09-01

    Signal-to-noise ratio (SNR) is an important parameter of an infrared detection system. It is determined by the target infrared radiation, the atmospheric transmittance, the background infrared radiation and the detector noise. The infrared radiation flux in the atmosphere depends on the selective absorption by gas molecules, the atmospheric environment and the transmission distance of the radiation, so the atmospheric transmittance and infrared radiance flux are intricate parameters. A radiometric model for calculating the SNR of an infrared detection system is developed and used to evaluate the effects of various parameters on the SNR. An atmospheric modeling tool, MODTRAN, is used to model wavelength-dependent atmospheric transmission and sky background radiance, and a new expression for the SNR is deduced. Instead of using constants such as the average atmospheric transmission and average wavelength, as in the traditional method, it uses discrete values of atmospheric transmission and sky background radiance: the integrals in the general expression for the SNR are converted to summations, which improves the accuracy of the computed SNR. Adopting the atmospheric conditions of the 1976 US Standard Atmosphere with no clouds, urban aerosols and fall-winter aerosol profiles, the typical spectral characteristics of sky background radiance and transmittance are computed with MODTRAN. The operating ranges corresponding to the threshold SNR are then calculated with the new method; they are closer to the measured operating range than those calculated with the traditional method.
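    The core of the "new expression" can be sketched directly: keep per-wavelength transmittance and radiance values and sum, instead of pulling band averages out of the integral. All spectra below are synthetic stand-ins for MODTRAN output; when transmittance and target radiance are spectrally correlated, the two estimates differ.

```python
import numpy as np

wl = np.linspace(3.0, 5.0, 201)                   # um, mid-IR band
dwl = wl[1] - wl[0]
tau = 0.6 + 0.2 * np.sin(2 * np.pi * wl)          # transmittance (synthetic)
dL = 4e-4 * (1.0 + 0.5 * np.sin(2 * np.pi * wl))  # target-minus-background
                                                  # spectral radiance (synthetic)
noise = 1e-4                                      # detector noise term (synthetic)

# Traditional method: band-averaged transmittance pulled out of the integral
snr_avg = tau.mean() * dL.sum() * dwl / noise

# Discrete summation: per-wavelength values preserve the correlated structure
snr_sum = np.sum(tau * dL) * dwl / noise
```

Here the synthetic transmittance and radiance vary in phase, so the summation form gives a larger (and, by construction, more faithful) SNR than the band-average form.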

  15. Vibrational and structural study of onopordopicrin based on the FTIR spectrum and DFT calculations.

    PubMed

    Chain, Fernando E; Romano, Elida; Leyton, Patricio; Paipa, Carolina; Catalán, César A N; Fortuna, Mario; Brandán, Silvia Antonia

    2015-11-01

    In the present work, the structural and vibrational properties of the sesquiterpene lactone onopordopicrin (OP) were studied by using infrared spectroscopy and density functional theory (DFT) calculations together with the 6-31G* basis set. The harmonic vibrational wavenumbers for the optimized geometry were calculated at the same level of theory. The complete assignment of the observed bands in the infrared spectrum was performed by combining the DFT calculations with Pulay's scaled quantum mechanical force field (SQMFF) methodology. The comparison between the theoretical and experimental infrared spectra demonstrated good agreement, and the results were then used to predict the Raman spectrum. Additionally, structural properties of OP, such as atomic charges, bond orders, molecular electrostatic potentials, characteristics of electronic delocalization and topological properties of the electronic charge density, were evaluated by natural bond orbital (NBO), atoms in molecules (AIM) and frontier orbital studies. The calculated energy band gap and the chemical potential (μ), electronegativity (χ), global hardness (η), global softness (S) and global electrophilicity index (ω) descriptors predicted low reactivity, higher stability and a lower electrophilicity index for OP as compared with the sesquiterpene lactone cnicin, which contains similar rings. PMID:26057092

  16. Parameterization of brachytherapy source phase space file for Monte Carlo-based clinical brachytherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zou, W.; Chen, T.; Kim, L.; Khan, A.; Haffty, B.; Yue, N. J.

    2014-01-01

    A common approach to implementing the Monte Carlo method for the calculation of brachytherapy radiation dose deposition is to use a phase space file containing information on particles emitted from a brachytherapy source. However, loading the phase space file during the dose calculation consumes a large amount of computer random access memory, imposing a higher requirement on computer hardware. In this study, we propose a method to parameterize the information (e.g., particle location, direction and energy) stored in the phase space file by using several probability distributions. This method was implemented for dose calculations of a commercial Ir-192 high dose rate source. Dose calculation accuracy of the parameterized source was compared to the results obtained using the full phase space file in a simple water phantom and in a clinical breast cancer case. The results showed that the parameterized source, at a size of 200 kB, was as accurate as the source represented by the full 1.1 GB phase space file. By using the parameterized source representation, a compact Monte Carlo job can be designed, which allows an easy setup for parallel computing in brachytherapy planning.
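    The parameterization idea can be sketched as follows: rather than replaying a stored phase space file, particle properties are drawn from a few compact probability distributions. The discrete Ir-192-like line spectrum (line probabilities are illustrative), isotropic angular model and source dimension below are assumptions for the sketch, not the paper's fitted distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

# A few prominent Ir-192 gamma lines (keV); the probabilities are
# illustrative placeholders, not nuclear-data emission intensities.
energies_keV = np.array([296.0, 308.5, 316.5, 468.1])
line_probs = np.array([0.29, 0.30, 0.33, 0.08])
line_probs = line_probs / line_probs.sum()

def sample_particles(n):
    """Draw n emitted photons from compact distributions instead of a file."""
    e = rng.choice(energies_keV, size=n, p=line_probs)   # energy spectrum
    cos_t = rng.uniform(-1.0, 1.0, n)                    # isotropic polar angle
    phi = rng.uniform(0.0, 2.0 * np.pi, n)               # azimuth
    z = rng.uniform(-1.75e-3, 1.75e-3, n)                # position on a
                                                         # 3.5 mm source core (m)
    return e, cos_t, phi, z

e, cos_t, phi, z = sample_particles(100_000)
```

The entire "source" here is a handful of arrays and bounds, which is why such a representation fits in a few hundred kilobytes and parallelizes trivially.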

  17. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and... viable; sales below the cost of production are disregarded; sales outside the ordinary course of...

  18. Structure of amphotericin B aggregates based on calculations of optical spectra

    SciTech Connect

    Hemenger, R.P.; Kaplan, T.; Gray, L.J.

    1983-01-01

    The degenerate ground state approximation was used to calculate the optical absorption and CD spectra for helical polymer models of amphotericin B aggregates in aqueous solution. Comparisons with experimental spectra indicate that a two-molecule/unit cell helical polymer model is a possible structure for aggregates of amphotericin B.

  19. A web-based calculator for estimating the profit potential of grain segregation by protein concentration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    By ignoring spatial variability in grain quality, conventional harvesting systems may increase the likelihood that growers will not capture price premiums for high quality grain found within fields. The Grain Segregation Profit Calculator was developed to demonstrate the profit potential of segregat...

  20. Empirically Based Conversion Factors for Calculating Couple-Years of Protection.

    ERIC Educational Resources Information Center

    Stover, John; Bertrand, Jane T.; Shelton, James D.

    2000-01-01

    Presents conversion factors to be used to translate the quality of the respective contraception methods distributed to a single measure of protection for calculating couple-years of protection in family planning studies. Discusses the implications for the evaluation of family planning programs. (SLD)

  1. Analysis of the Relationship between Estimation Skills Based on Calculation and Number Sense of Prospective Classroom Teachers

    ERIC Educational Resources Information Center

    Senol, Ali; Dündar, Sefa; Gündüz, Nazan

    2015-01-01

    The aim of this study are to examine the relationship between prospective classroom teachers' estimation skills based on calculation and their number sense and to investigate whether their number sense and estimation skills change according to their class level and gender. The participants of the study are 125 prospective classroom teachers…

  2. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    PubMed Central

    Hünemohr, Nora; Paganetti, Harald; Greilich, Steffen; Jäkel, Oliver; Seco, Joao

    2014-01-01

    Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱe and effective atomic number Zeff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in the ϱe that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from the ϱe and the Zeff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic proton and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations to ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4%, SECT) to (0.4±0.3%) with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would profit significantly (up to 2

  3. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    SciTech Connect

    Hünemohr, Nora Greilich, Steffen; Paganetti, Harald; Seco, Joao; Jäkel, Oliver

    2014-06-15

    Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱe and effective atomic number Zeff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in the ϱe that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from the ϱe and the Zeff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic proton and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations to ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4%, SECT) to (0.4±0.3%) with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
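    The first step of the DECT pipeline described above, a single linear fit of mass density against relative electron density across tabulated tissues (lung excluded), can be sketched as follows; the four (ϱe, density) pairs are illustrative stand-ins for the 71 tabulated compositions.

```python
import numpy as np

# Illustrative (relative electron density, mass density) pairs standing in
# for the tabulated tissue compositions (lung excluded, as in the abstract).
rho_e = np.array([0.95, 1.00, 1.07, 1.28])       # relative electron density
density = np.array([0.95, 1.00, 1.09, 1.33])     # mass density, g/cm^3

# One linear fit over the whole tissue range
slope, intercept = np.polyfit(rho_e, density, 1)

def mass_density(rho_e_meas):
    """Predict mass density (g/cm^3) from a DECT-derived rho_e value."""
    return slope * rho_e_meas + intercept

rho_pred = float(mass_density(1.05))
```

The elemental mass fractions are then obtained from ϱe and Zeff jointly; only the density step is sketched here.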

  4. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    SciTech Connect

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD, the distance between the distal positions of the 80% and 20% dose levels, R80-R20) were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  5. Why are diphenylalanine-based peptide nanostructures so rigid? Insights from first principles calculations.

    PubMed

    Azuri, Ido; Adler-Abramovich, Lihi; Gazit, Ehud; Hod, Oded; Kronik, Leeor

    2014-01-22

    The diphenylalanine peptide self-assembles to form nanotubular structures of remarkable mechanical, piezoelectric, electrical, and optical properties. The tubes are unexpectedly stiff, with reported Young's moduli of 19-27 GPa that were extracted using two independent techniques. Yet the physical basis for the remarkable rigidity is not fully understood. Here, we calculate the Young's modulus for bulk diphenylalanine peptide from first principles, using density functional theory with dispersive corrections. The calculation demonstrates that at least half of the stiffness of the material is the result of dispersive interactions. We further quantify the nature of various inter- and intramolecular interactions. We reveal that despite the porous nature of the lattice, there is an array of rigid nanotube backbones with interpenetrating "zipper-like" aromatic interlocks that result in stiffness and robustness. This presents a general strategy for the analysis of bioinspired functional materials and may pave the way for rational design of bionanomaterials. PMID:24368025

  6. Navier-Stokes calculations on multi-element airfoils using a chimera-based solver

    NASA Technical Reports Server (NTRS)

    Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.

    1993-01-01

    A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids, which allow great flexibility of grid arrangement and simplify grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two-element case. Solutions are obtained using the thin-layer form of the Reynolds-averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one-equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good comparison with experimental data, which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.

  7. Acceleration of Monte Carlo Criticality Calculations Using Deterministic-Based Starting Sources

    SciTech Connect

    Ibrahim, A.; Peplow, Douglas E.; Wagner, John C; Mosher, Scott W; Evans, Thomas M

    2012-01-01

    A new automatic approach that uses approximate deterministic solutions to provide the starting fission source for Monte Carlo eigenvalue calculations was evaluated in this analysis. By accelerating Monte Carlo source convergence and decreasing the number of cycles that have to be skipped before tally estimation, this approach was found to increase the efficiency of the overall simulation, even with the inclusion of the extra computational time required by the deterministic calculation. This approach was also found to increase the reliability of Monte Carlo criticality calculations of loosely coupled systems, because the use of a better starting source reduces the likelihood of producing an undersampled k_eff due to inadequate source convergence. The efficiency improvement was demonstrated using two of the standard test problems devised by the OECD/NEA Expert Group on Source Convergence in Criticality-Safety Analysis to measure source convergence in Monte Carlo criticality calculations. For a fixed uncertainty objective, this approach increased the efficiency of the overall simulation by factors between 1.2 and 3, depending on the difficulty of source convergence in these problems. The reliability improvement was demonstrated in a modified version of the 'k_eff of the world' problem that was specifically designed to demonstrate the limitations of current Monte Carlo power iteration techniques. For this problem, the probability of obtaining a clearly undersampled k_eff decreased from 5% with a uniform starting source to zero with a deterministic starting source when batch sizes of more than 15,000 neutrons/cycle were used.
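
    The convergence benefit of a better starting source can be illustrated with a toy power iteration on a two-region fission matrix (the matrix, starting vectors, and tolerance are invented for illustration; a real Monte Carlo eigenvalue solve replaces the matrix-vector product with a stochastic transport cycle):

```python
import numpy as np

def power_iteration(F, start, tol=1e-10, max_iter=100_000):
    """Plain power iteration on a toy fission matrix F; returns the
    dominant eigenvalue (k_eff analogue) and the iteration count."""
    phi = start / np.linalg.norm(start)
    k_old = 0.0
    for n in range(1, max_iter + 1):
        psi = F @ phi
        k = np.linalg.norm(psi)
        phi = psi / k
        if abs(k - k_old) < tol:
            return k, n
        k_old = k
    return k, max_iter

# Toy loosely coupled system: two fissile regions with weak coupling
F = np.array([[1.00, 0.02],
              [0.02, 0.98]])

_, vecs = np.linalg.eigh(F)
uniform_start = np.ones(2)            # analogue of a uniform starting source
informed_start = vecs[:, -1] + 0.05   # crude 'deterministic' mode estimate

k_u, n_u = power_iteration(F, uniform_start)
k_i, n_i = power_iteration(F, informed_start)   # converges in fewer cycles
```

    Both runs reach the same dominant eigenvalue, but the informed start needs markedly fewer iterations, which is the mechanism behind skipping fewer inactive cycles.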

  8. Calculated neutron KERMA factors based on the LLNL ENDL data file. Volume 27

    SciTech Connect

    Howerton, R.J.

    1986-01-01

    Neutron KERMA factors calculated from the LLNL ENDL data file are tabulated for 15 composite materials and for the isotopes or elements in the ENDL file from Z = 1 to Z = 29. The incident neutron energies range from 1.882 × 10^-5 to 20 MeV for the composite materials and from 1.30 × 10^-9 to 20 MeV for the isotopes and elements.

  9. Polynomial regression calculation of the Earth's position based on millisecond pulsar timing

    NASA Astrophysics Data System (ADS)

    Tian, Feng; Tang, Zheng-Hong; Yan, Qing-Zeng; Yu, Yong

    2012-02-01

    Prior to achieving high-precision navigation of a spacecraft using X-ray observations, a pulsar rotation model must be built and the precise position of the Earth should be analyzed using ground pulsar timing observations. We can simulate time-of-arrival ground observation data close to actual observed values before using pulsar timing observation data. Considering the correlation between the Earth's position and a short arc section of its orbit, we use polynomial regression to model the correlation. Regression coefficients can be calculated using the least-squares method, and a coordinate component series can also be obtained; that is, we can calculate the Earth's position in the Barycentric Celestial Reference System from pulse arrival time data and a precise pulsar rotation model. In order to set appropriate parameters before the actual timing observations for Earth positioning, we can calculate by simulation the influence of the spatial distribution of pulsars on errors in the positioning result and the influence of error source variation on positioning. It is significant that threshold values of the observational and systematic errors can be established before an actual observation occurs; namely, we can determine the observation mode with small errors and reject observed data with large errors, thus improving the positioning result.
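
    The least-squares polynomial fit of one coordinate component over a short orbital arc can be sketched as follows (the coefficients, noise level, and time grid are invented for illustration, not taken from the paper):

```python
import numpy as np

# Simulated short-arc fit of one barycentric coordinate component with a
# low-order polynomial in time, solved by least squares.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)                    # normalized time within the arc
true_coef = np.array([1.496e8, 2.5e6, -8.0e5])   # km, km/day, km/day^2 (invented)
x_true = true_coef[0] + true_coef[1] * t + true_coef[2] * t**2
obs = x_true + rng.normal(0.0, 10.0, t.size)     # simulated positions, 10 km noise

A = np.vander(t, 3, increasing=True)             # design matrix [1, t, t^2]
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)   # regression coefficients
rms = np.sqrt(np.mean((A @ coef - x_true) ** 2)) # recovery error vs. truth
```

    The fitted series recovers the underlying arc to well below the single-observation noise, which is the point of pooling a short arc into one regression.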

  10. A new approach to calculating powder diffraction patterns based on the Debye scattering equation.

    PubMed

    Thomas, Noel William

    2010-01-01

    A new method is defined for the calculation of X-ray and neutron powder diffraction patterns from the Debye scattering equation (DSE). Pairwise atomic interactions are split into two contributions, the first from lattice-pair vectors and the second from cell-pair vectors. Since the frequencies of lattice-pair vectors can be directly related to crystallite size, application of the DSE is thereby extended to crystallites of lengths up to approximately 200 nm. The input data correspond to unit-cell parameters, atomic coordinates and displacement factors. The calculated diffraction patterns are characterized by full backgrounds as well as complete reflection profiles. Four illustrative systems are considered: sodium chloride (NaCl), alpha-quartz, monoclinic lead zirconate titanate (PZT) and kaolinite. The effects of varying crystallite size on diffraction patterns are calculated for NaCl, quartz and kaolinite, and a method of modelling static structural disorder is defined for kaolinite. The idea of partial diffraction patterns is introduced and a treatment of atomic displacement parameters is included. Although the method uses pair distribution functions as an intermediate stage, it is anticipated that further progress in reducing computational times will be made by proceeding directly from crystal structure to diffraction pattern. PMID:20029134
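
    The core of the DSE, a double sum over atomic pairs weighted by sin(qr)/(qr), can be sketched as follows (the cluster and scattering factors are illustrative; the lattice-pair/cell-pair splitting introduced by the paper is not reproduced here):

```python
import numpy as np

def debye_intensity(q, positions, f):
    """Powder-averaged intensity from the Debye scattering equation,
    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)."""
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x)/(pi x), so np.sinc(q r / pi) = sin(q r)/(q r),
    # with the i = j (r = 0) terms correctly evaluating to 1
    return float(f @ np.sinc(q * r / np.pi) @ f)

# Four-atom square cluster with unit scattering factors (illustrative)
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
f = np.ones(4)

I_fwd = debye_intensity(0.0, pos, f)   # forward limit: (sum of f)^2 = 16
I_q = debye_intensity(2.0, pos, f)     # interference reduces the intensity
```

    The pairwise double sum is what makes naive DSE evaluation scale poorly with crystallite size, motivating the lattice-pair frequency bookkeeping described above.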

  11. Accurate calculations of the high-pressure elastic constants based on the first-principles

    NASA Astrophysics Data System (ADS)

    Wang, Chen-Ju; Gu, Jian-Bing; Kuang, Xiao-Yu; Yang, Xiang-Dong

    2015-08-01

    The energy term corresponding to the first order of the strain in the Taylor series expansion of the energy with respect to strain is usually ignored when high-pressure elastic constants are calculated. Whether this practice affects the calculated high-pressure elastic constants has remained unresolved. To clarify this question, we calculate the high-pressure elastic constants of tantalum and rhenium with the first-order energy term considered and neglected, respectively. The results show that neglecting the first-order strain term does influence the accuracy of the high-pressure elastic constants, and this influence grows with increasing pressure. Therefore, the energy term corresponding to the first order of the strain should be considered when high-pressure elastic constants are calculated. Project supported by the National Natural Science Foundation of China (Grant No. 11274235), the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 11104190), and the Doctoral Education Fund of Education Ministry of China (Grant Nos. 20100181110086 and 20110181120112).
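
    The effect described above can be reproduced with a toy energy-strain curve: on a one-sided strain grid, omitting the first-order term biases the fitted curvature (all coefficients below are invented for illustration and are not the paper's data):

```python
import numpy as np

# Toy energy-strain curve with a nonzero first-order term, as arises at
# finite pressure: E(delta) = E0 + c1*delta + c2*delta^2.
delta = np.linspace(0.0, 0.02, 9)      # one-sided strain grid
E0, c1, c2 = -10.0, 0.5, 200.0         # arbitrary illustrative values
E = E0 + c1 * delta + c2 * delta**2

# Fit including the first-order term: recovers the true curvature c2
c2_full = np.polyfit(delta, E, 2)[0]

# Fit that neglects the first-order term (model E0 + c2*delta^2): the
# linear contribution leaks into the quadratic coefficient
A = np.column_stack([np.ones_like(delta), delta**2])
c2_even = np.linalg.lstsq(A, E, rcond=None)[0][1]

bias = c2_even - c2                    # nonzero: the curvature is skewed
```

    On a symmetric strain grid the linear term is orthogonal to the even fit and the bias vanishes, which is one reason the omission can go unnoticed at zero pressure.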

  12. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the course of a year: ET ref was generally overestimated at small values and rather underestimated when ET was large, a pattern also supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing deviations.
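
    For reference, a minimal sketch of the standardized ASCE-EWRI equation for daily time steps and a short reference surface (Cn = 900, Cd = 0.34); the sample inputs are invented:

```python
import math

def et_ref_daily(T, Rn, G, u2, es, ea, P=101.3):
    """Standardized ASCE-EWRI reference ET (mm/day), daily time step,
    short reference surface (Cn = 900, Cd = 0.34).
    T: mean air temperature (degC); Rn, G: net radiation and soil heat
    flux (MJ m-2 d-1); u2: 2-m wind speed (m/s); es, ea: saturation and
    actual vapour pressure (kPa); P: air pressure (kPa)."""
    # slope of the saturation vapour pressure curve (kPa/degC)
    delta = 4098.0 * 0.6108 * math.exp(17.27 * T / (T + 237.3)) / (T + 237.3) ** 2
    gamma = 0.000665 * P               # psychrometric constant (kPa/degC)
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Invented example day: 20 degC, Rn = 15 MJ m-2 d-1, moderate wind
et = et_ref_daily(T=20.0, Rn=15.0, G=0.0, u2=2.0, es=2.339, ea=1.40)
```

    Hourly time steps use different Cn/Cd constants in the standardization, so this sketch applies to daily sums only.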

  13. An investigation of voxel geometries for MCNP-based radiation dose calculations.

    PubMed

    Zhang, Juying; Bednarz, Bryan; Xu, X George

    2006-11-01

    Voxelized geometries such as those obtained from medical images are increasingly used in Monte Carlo calculations of absorbed doses. One useful application of calculated absorbed dose is the determination of fluence-to-dose conversion factors for different organs. However, confusion still exists about how such a geometry is defined and how the energy deposition is best computed, especially with a popular code, MCNP5. This study investigated two different types of geometries in the MCNP5 code, cell and lattice definitions. A 10 cm x 10 cm x 10 cm test phantom, which contained an embedded 2 cm x 2 cm x 2 cm target at its center, was considered. A planar source emitting parallel photons was also considered in the study. The results revealed that MCNP5 does not calculate the total target volume for multi-voxel geometries. Therefore, tallies that involve the total target volume must be divided by the total number of voxels by the user to obtain a correct dose result. Also, using planar source areas greater than the phantom size results in the same fluence-to-dose conversion factor. PMID:17023800
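
    The normalization pitfall described above reduces to simple arithmetic; the per-voxel tally values below are invented for illustration:

```python
# Hypothetical per-voxel tally results for the 8 voxels of a
# 2 cm x 2 cm x 2 cm target split into 1 cm voxels (values invented).
per_voxel = [0.124, 0.126, 0.125, 0.125, 0.124, 0.126, 0.125, 0.125]
n_voxels = len(per_voxel)

# A lattice tally summed as if it referred to the total target volume
# overstates the mean by a factor of n_voxels; dividing by the voxel
# count recovers the mean dose-like quantity over the target.
summed = sum(per_voxel)
mean_over_target = summed / n_voxels
```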

  14. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectra of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are therefore important in climate-change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD due to particulates with diameters smaller than 1 μm relative to all particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained from the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find a good correlation for the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method can also effectively correct the underestimation of AMFs in winter. It is suggested that variations of the Angstrom index in the coarse mode have a significant impact on AMF inversions.

  15. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations...

  16. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600.208-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF...

  17. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. 
The significant uncertainty contributions are identified as

  18. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    NASA Astrophysics Data System (ADS)

    Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. 
The significant uncertainty contributions are identified as

  19. Angular-divergence calculation for Experimental Advanced Superconducting Tokamak neutral beam injection ion source based on spectroscopic measurements

    SciTech Connect

    Chi, Yuan; Hu, Chundong; Zhuang, Ge

    2014-02-15

    A calorimetric method has been the primary technique applied over several experimental campaigns to determine the angular divergence of the high-current ion source of the neutral beam injection system on the Experimental Advanced Superconducting Tokamak (EAST). A Doppler shift spectroscopy system has been developed to provide a secondary measurement of the angular divergence, improving the divergence measurement accuracy and enabling real-time, non-perturbing measurement. A modified calculation model based on the W7-AS neutral beam injectors is adopted to accommodate the slot-type accelerating grids used in EAST's ion source. Preliminary spectroscopic results are presented and are comparable to the calorimetrically determined values and theoretical calculations.

  20. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    NASA Astrophysics Data System (ADS)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane at a distance of 1.2 Å. The results have been compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI and CTED, and have generally been found to be in agreement with them. It is therefore proposed that the calculation of the average g-factor, Δg, can be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetic-based aromaticity index.

  1. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-06-01

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (~5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  2. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation.

    PubMed

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B

    2011-06-01

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning. PMID:21558589

  3. Influence of channel base current and varying return stroke speed on the calculated fields of three important return stroke models

    NASA Technical Reports Server (NTRS)

    Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard

    1991-01-01

    Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and the Diendorfer-Uman (DU) models with a channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other hand. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS models. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.

  4. Measurement-based model of a wide-bore CT scanner for Monte Carlo dosimetric calculations with GMCTdospp software.

    PubMed

    Skrzyński, Witold

    2014-11-01

    The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer (HVL) and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root mean square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. PMID:25028213

  5. Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations

    NASA Astrophysics Data System (ADS)

    Stripling, Hayes Franklin

    Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
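
    The adjoint idea, one extra linear solve giving sensitivities to arbitrarily many inputs, can be sketched on a steady linear model (the matrix, response, and parameter dependence below are invented; in depletion problems A stands for the coupled transport/burnup operators):

```python
import numpy as np

# Adjoint-sensitivity sketch for a steady linear model A(p) x = b with
# scalar response J = c^T x: one adjoint solve A^T lam = c yields
# dJ/dp_k = -lam^T (dA/dp_k) x for any number of parameters k.
n = 4
rng = np.random.default_rng(0)
A0 = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
dA = np.zeros((n, n)); dA[0, 0] = 1.0      # A(p) = A0 + p * dA

b = np.ones(n)                             # fixed source
c = np.ones(n)                             # response functional

def J(p):
    return c @ np.linalg.solve(A0 + p * dA, b)

x = np.linalg.solve(A0, b)                 # forward solve at p = 0
lam = np.linalg.solve(A0.T, c)             # single adjoint solve
dJ_adjoint = -lam @ (dA @ x)               # sensitivity, no extra solves

eps = 1e-6
dJ_fd = (J(eps) - J(-eps)) / (2.0 * eps)   # central-difference check
```

    The finite-difference check needs two forward solves per parameter, while the adjoint route reuses one adjoint solution for all parameters, which is the cost advantage the abstract describes.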

  6. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations.

    PubMed

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-01

    Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb in the gas phase (fly ash and flue gas) into metal sulfates displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of the addition of S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic-sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO2, CaO, TiO2, and Al2O3 containing materials function as condensed phase solids in the temperature range of 800-1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si, Ca and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration. PMID:25554470

  7. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    NASA Astrophysics Data System (ADS)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strong environmental dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.

  8. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations

    SciTech Connect

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-15

    Highlights: • A thermodynamic equilibrium calculation was carried out. • Effects of three types of sulfur compounds on Pb distribution were investigated. • The mechanisms by which the three sulfur compounds act on Pb partitioning were proposed. • Lead partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that all three sulfur compounds (S, Na2S and Na2SO4) added to the sludge facilitated the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. Adding Na2SO4 or Na2S promoted Pb volatilization more effectively than adding S. In the bottom ash, metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO2-, CaO-, TiO2-, and Al2O3-containing materials function as condensed-phase solids in the temperature range of 800–1100 K as sorbents to stabilize Pb. However, in the presence of sulfur, chlorine, or both, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si, Ca and Al-containing compounds in the sludge.

  9. A Simplified Soil-Structure Interaction Based Method for Calculating Deflection of Buried Pipe

    NASA Astrophysics Data System (ADS)

    Dhar, Ashutosh Sutra; Kabir, Md. Aynul

    Soil-pipe interaction analysis was performed using the continuum theory solution and the finite element method to develop simplified equations for deflection of buried flexible pipes. The hoop and bending components of pipe deflections were studied extensively to determine the influence of different soil and pipe parameters on deflection calculations. Then, two separate simplified equations were developed for the hoop and bending components of the pipe deflection. Two factors were incorporated in the equation for bending deflection to capture the effects of different parameters. Values of those factors were determined for steel and thermoplastic pipes. The proposed simplified equations logically incorporate the hoop and bending stiffness of the soil-pipe interaction.
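
    The record's simplified equations themselves are not reproduced here, but the classical Spangler Iowa formula, a standard reference point for flexible-pipe deflection that likewise combines pipe bending stiffness (EI) and soil support (E'), sketches how such a calculation looks; all parameter values below are illustrative assumptions, not the paper's.

```python
def iowa_deflection(W_c, r, EI, E_prime, K=0.1, D_L=1.5):
    """Horizontal deflection (m) of a buried flexible pipe via the classical
    Spangler Iowa formula (a standard reference, NOT the paper's equations).
    W_c: vertical load per unit length (N/m); r: mean pipe radius (m);
    EI: pipe wall stiffness per unit length (N*m); E_prime: modulus of
    soil reaction (Pa); K: bedding constant; D_L: deflection lag factor."""
    return D_L * K * W_c * r**3 / (EI + 0.061 * E_prime * r**3)

# stiff soil support (large E') dominates the denominator and limits deflection
dx = iowa_deflection(W_c=10e3, r=0.5, EI=1e3, E_prime=5e6)
```

    The denominator makes the soil-pipe interaction in the record concrete: pipe stiffness (EI) and soil support (0.061 E' r^3) act in parallel to resist the applied load.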

  10. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
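
    The conversion between an earthquake rate and a multi-year probability in such studies is, at its simplest, a Poisson calculation. The sketch below uses a plain Poisson model with made-up rates to show the mechanics; the paper's actual interaction-based, time-dependent calculation is more elaborate.

```python
import math

def prob_at_least_one(rate_per_year, window_years):
    # Poisson probability of at least one event in the window
    return 1.0 - math.exp(-rate_per_year * window_years)

# Illustrative only: a 50% background probability over 30 years implies
# an equivalent annual rate, which a stress-transfer boost then raises.
background_rate = -math.log(1.0 - 0.50) / 30.0
boosted_30yr = prob_at_least_one(1.5 * background_rate, 30.0)
```

    Raising the effective rate by 50% lifts the 30-year odds from 50% to roughly 65%, illustrating how transferred stress increases time-dependent probability estimates.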

  11. Sea breeze analysis based on LES simulations and the particle trace calculations in MM21 district

    NASA Astrophysics Data System (ADS)

    Sugiyama, Toru; Soga, Yuta; Goto, Koji; Sadohara, Satoru; Takahashi, Keiko

    2016-04-01

    We have performed thermal and wind environment LES simulations of the MM21 district in Yokohama. The simulation model used is MSSG (Multi-Scale Simulator for the Geo-environment). The spatial resolution is about 5 m in both the horizontal and vertical directions. We have also performed particle trace calculations in order to investigate the route of the sea breeze. We found that the cool wind is gradually warmed as it flows into the district, after which it rises and is diffused. We will also discuss the contributions of the DHC (District Heating & Cooling) system in the area.

  12. Downwind hazard calculations for space shuttle launches at Kennedy Space Center and Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Susko, M.; Hill, C. K.; Kaufman, J. W.

    1974-01-01

    The quantitative estimates are presented of pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) to the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes studied was a 10-min limit of 4 ppm exceeded.

  13. Calculations of kaonic nuclei based on chiral meson-baryon amplitudes

    NASA Astrophysics Data System (ADS)

    Gazda, Daniel; Mareš, Jiří

    2013-09-01

    In-medium KbarN scattering amplitudes developed within a chirally motivated coupled-channel model are used to construct K- nuclear potentials for calculations of K- nuclear quasi-bound states. Self-consistent evaluations yield K- potential depths -Re VK(ρ0) of order 100 MeV. Dynamical polarization effects and two-nucleon KbarNN→YN absorption modes are discussed. The widths ΓK of all K- nuclear quasi-bound states are comparable to or even larger than the corresponding binding energies BK, considerably exceeding the energy level spacing.

  14. Development of an ab-initio calculation method for 2D layered materials-based optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Kim, Han Seul; Kim, Yong-Hoon

    We report on the development of a novel first-principles method for the calculation of non-equilibrium nanoscale device operation processes. Based on a region-dependent Δ self-consistent field method that goes beyond standard density functional theory (DFT), we will introduce a novel approach to describing non-equilibrium situations such as external bias and simultaneous optical excitations. In particular, we will discuss the limitations of conventional methods and the advantages of our scheme in describing the operation of 2D layered-material-based devices. We then investigate the atomistic mechanisms of optoelectronic effects in 2D layered-material-based devices and suggest the optimal material and architecture for such devices.

  15. The use of density functional theory-based reactivity descriptors in molecular similarity calculations

    NASA Astrophysics Data System (ADS)

    Boon, Greet; De Proft, Frank; Langenaeker, Wilfried; Geerlings, Paul

    1998-10-01

    Molecular similarity is studied via density functional theory-based similarity indices using a numerical integration method. Complementary to the existing similarity indices, we introduce a reactivity-related similarity index based on the local softness. After a study of some test systems, a series of peptide isosteres is studied in view of their importance in pharmacology. The whole of the present work illustrates the importance of the study of molecular similarity based on both shape and reactivity.
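
    Similarity indices of this type are typically ratios of overlap integrals evaluated by numerical quadrature on a grid. The toy sketch below computes a Carbó-style index for two 1-D Gaussian "property functions" (stand-ins for the electron density or the local softness); the grid, functions, and names are illustrative assumptions, not the paper's.

```python
import numpy as np

def similarity_index(f_a, f_b, dV):
    # Carbó-style index: overlap integral of the two property functions,
    # normalized so that identical functions give exactly 1.
    overlap = np.sum(f_a * f_b) * dV
    norm = np.sqrt((np.sum(f_a**2) * dV) * (np.sum(f_b**2) * dV))
    return overlap / norm

# 1-D stand-ins for molecular property functions on a common grid
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
rho_a = np.exp(-x**2)            # "molecule A"
rho_b = np.exp(-(x - 0.5)**2)    # "molecule B", shifted
```

    Replacing the density by the local softness in the same quadrature gives a reactivity-related index of the kind the record introduces.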

  16. Web-based Tsunami Early Warning System with instant Tsunami Propagation Calculations in the GPU Cloud

    NASA Astrophysics Data System (ADS)

    Hammitzsch, M.; Spazier, J.; Reißland, S.

    2014-12-01

    Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems importantly include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing the use of concepts and paradigms, introduced by continuously evolving approaches in information and communications technology (ICT), have to be considered even for early warning systems (EWS). Based on the experiences and the knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype to open up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU accelerated tsunami simulation computations have been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites. 
The current website is an early alpha version for demonstration purposes to give the

  17. Analysis of the initiation of a mesoscale convective system based on heat and moisture budget calculations

    NASA Astrophysics Data System (ADS)

    Kalthoff, Norbert; Adler, Bianca; Gantner, Leonhard

    2010-05-01

    COSMO runs were performed to simulate a mesoscale convective system (MCS) that was observed on 11 June 2006 (pre-onset phase of the monsoon, SOP 1). Three simulation scenarios were investigated: (i) a realistic soil moisture distribution, (ii) increased soil moisture, and (iii) homogeneous soil moisture and soil texture over the whole investigation area. The simulations showed that convection was initiated in all experiments. However, the number of cells and their origins differed. While in experiments (i) and (iii) several cells were initiated and merged into an organized convective system, in experiment (ii) only a small, short-lived cell was simulated. In order to study the conditions that led to the different evolutions, heat and moisture budgets were calculated. The boxes for which budgets were calculated included the whole area where convective cells were initiated, as well as isolated cells only. The different contributions of the components of the budgets, and their differences between the three scenarios, were discussed. Special attention was paid to the impact of the budget components (e.g. heat flux convergence, horizontal advection) on the evolution of convection-related parameters (CAPE, CIN) and thermally induced circulation systems.

  18. Surface energy budget and thermal inertia at Gale Crater: Calculations from ground-based measurements

    PubMed Central

    Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M

    2014-01-01

    The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10^4 m^2 to ∼10^7 m^2. Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10^2 m^2. We analyze three sols representing distinct environmental conditions and soil properties, sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia I = 452 J m^-2 K^-1 s^-1/2 (SI units used throughout this article) is found at YKB followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars. PMID:26213666

  19. Time reversed test particle calculations at Titan, based on CAPS-IMS measurements

    NASA Astrophysics Data System (ADS)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly; Young, David T.

    2013-04-01

    We used the theoretical approach of Kobel and Flückiger (1994) to construct a magnetic environment model in the vicinity of Titan, with the exception of placing the bow shock (which is not present at Titan) at infinity. The model has 4 free parameters to calibrate the shape and orientation of the field. We investigate the CAPS-IMS Singles data to calculate/estimate the location of origin of the detected cold ions at Titan, and we also use the measurements of the onboard Magnetometer to set the parameters of the model magnetic field. A 4th-order Runge-Kutta method is applied to calculate the test particle trajectories in a time-reversed scenario in the curved magnetic environment. Several different ion species can be tracked by the model along their possible trajectories; as a first approach we considered three particle groups (1, 2 and 16 amu ions). In this initial study we show the results for some thoroughly discussed flybys such as TA, TB and T5, but we consider more recent tailside encounters as well. Reference: Kobel, E. and E.O. Flückiger, A model of the steady state magnetic field in the magnetosheath, JGR 99, Issue A12, 23617, 1994.
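
    The time-reversed tracing step can be sketched as a standard 4th-order Runge-Kutta integration of the Lorentz force with a negative time step, starting from the detection point. The field model below is a uniform stand-in, not the Kobel-Flückiger field, and all numerical values are illustrative.

```python
import numpy as np

def lorentz_deriv(y, q_over_m, B_func):
    # state y = (position[3], velocity[3]); electric field neglected
    v = y[3:]
    dvdt = q_over_m * np.cross(v, B_func(y[:3]))
    return np.concatenate([v, dvdt])

def rk4_step(y, dt, q_over_m, B_func):
    # classic 4th-order Runge-Kutta step; dt < 0 runs time backwards
    k1 = lorentz_deriv(y, q_over_m, B_func)
    k2 = lorentz_deriv(y + 0.5 * dt * k1, q_over_m, B_func)
    k3 = lorentz_deriv(y + 0.5 * dt * k2, q_over_m, B_func)
    k4 = lorentz_deriv(y + dt * k3, q_over_m, B_func)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# uniform 5 nT field as a placeholder for the model magnetosheath field
B_uniform = lambda r: np.array([0.0, 0.0, 5e-9])
QM_PROTON = 9.58e7  # q/m of a 1 amu ion, C/kg

y = np.array([0.0, 0.0, 0.0, 1.0e4, 0.0, 0.0])  # start at the detection point
for _ in range(100):
    y = rk4_step(y, -0.01, QM_PROTON, B_uniform)  # time-reversed trace
```

    A magnetic field does no work, so the traced speed should stay constant; checking it is a quick sanity test of the integrator.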

  20. Surface energy budget and thermal inertia at Gale Crater: Calculations from ground-based measurements

    NASA Astrophysics Data System (ADS)

    Martínez, G. M.; Rennó, N.; Fischer, E.; Borlina, C. S.; Hallet, B.; Torre Juárez, M.; Vasavada, A. R.; Ramos, M.; Hamilton, V.; Gomez-Elvira, J.; Haberle, R. M.

    2014-08-01

    The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ~10^4 m^2 to ~10^7 m^2. Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ~10^2 m^2. We analyze three sols representing distinct environmental conditions and soil properties, sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia I = 452 J m^-2 K^-1 s^-1/2 (SI units used throughout this article) is found at YKB followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
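
    Thermal inertia is defined as I = (k ρ c)^1/2, the square root of the product of thermal conductivity, bulk density, and specific heat capacity. A minimal sketch with illustrative regolith-like values (assumptions, not the paper's retrieved properties):

```python
import math

def thermal_inertia(k, rho, c):
    """I = sqrt(k * rho * c) in SI units (J m^-2 K^-1 s^-1/2).
    k: thermal conductivity (W m^-1 K^-1); rho: bulk density (kg m^-3);
    c: specific heat capacity (J kg^-1 K^-1)."""
    return math.sqrt(k * rho * c)

# loose, dry regolith-like values (illustrative assumptions)
I_soil = thermal_inertia(k=0.04, rho=1300.0, c=630.0)
```

    This gives I ≈ 181 for a loose soil; values in the ~295-452 range reported for RCK, PL and YKB therefore imply correspondingly larger k ρ c products.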

  1. A Bayesian-Based EDA Tool for Nano-circuits Reliability Calculations

    NASA Astrophysics Data System (ADS)

    Ibrahim, Walid; Beiu, Valeriu

    As the sizes of (nano-)devices are aggressively scaled deep into the nanometer range, the design and manufacturing of future (nano-)circuits will become extremely complex and inevitably will introduce more defects while their functioning will be adversely affected by transient faults. Therefore, accurately calculating the reliability of future designs will become a very important aspect for (nano-)circuit designers as they investigate several design alternatives to optimize the trade-offs between the conflicting metrics of area-power-energy-delay versus reliability. This paper introduces a novel generic technique for the accurate calculation of the reliability of future nano-circuits. Our aim is to provide both educational and research institutions (as well as the semiconductor industry at a later stage) with an accurate and easy to use tool for closely comparing the reliability of different design alternatives, and for being able to easily select the design that best fits a set of given (design) constraints. Moreover, the reliability model generated by the tool should empower designers with the unique opportunity of understanding the influence individual gates play on the design’s overall reliability, and identifying those (few) gates which impact the design’s reliability most significantly.
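
    As a toy illustration of gate-level reliability propagation (not the paper's Bayesian tool), assume gates fail independently and the output is correct only if every gate on the critical path works; under that naive series model the sensitivity dR/dp_i = R/p_i immediately identifies the gate that most limits overall reliability, mirroring the per-gate insight the record describes.

```python
def chain_reliability(gate_probs):
    # naive series model: the output is correct only if every gate works
    r = 1.0
    for p in gate_probs:
        r *= p
    return r

def most_critical_gate(gate_probs):
    # dR/dp_i = R / p_i, so the least reliable gate has the largest impact
    r = chain_reliability(gate_probs)
    sensitivities = [r / p for p in gate_probs]
    return max(range(len(gate_probs)), key=lambda i: sensitivities[i])

gates = [0.99, 0.90, 0.999]  # illustrative per-gate success probabilities
```

    A Bayesian-network formulation, as in the record, generalizes this to circuits with fan-out and correlated failures, which the series model cannot capture.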

  2. Three dimensional (3D) distribution calculation of chlorophyll in rice based on infrared imaging technique

    NASA Astrophysics Data System (ADS)

    Li, Zong-nan; Xie, Jing; Zhang, Jian

    2014-11-01

    Chlorophyll content and its distribution in the leaf indirectly reflect plant health and nutrient status, so monitoring the 3D distribution of chlorophyll is meaningful in plant science. This can be done by the method in this paper: Firstly, the chlorophyll contents at different points in the leaf are measured with the SPAD-502 chlorophyll meter, and RGN images composed of the R, G and NIR channels are captured with the imaging system. Secondly, the 3D model is built from the RGN images and an RGN texture map containing all the R, G and NIR information is generated. Thirdly, a regression model between chlorophyll content and color characteristics is established. Finally, the 3D distribution of chlorophyll in rice is obtained by mapping the 2D distribution of chlorophyll calculated by the regression model onto the 3D model. This methodology combines phenotype and physiology and can calculate the 3D distribution of chlorophyll in rice well. The color characteristic g is a good indicator of chlorophyll content and can be used to measure the 3D distribution of chlorophyll quickly. Moreover, the methodology can be used for high-throughput analysis of rice.
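
    The regression step above is, at its simplest, a least-squares fit between the color characteristic g and the SPAD reading; the paired values below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# toy paired samples: normalized green characteristic g vs. SPAD reading
g = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
spad = np.array([45.0, 40.5, 35.8, 31.2, 26.1])

# fit SPAD ~ a*g + b by linear least squares
a, b = np.polyfit(g, spad, 1)

def predict_spad(g_value):
    # maps a per-pixel g value to an estimated chlorophyll (SPAD) value,
    # which can then be textured onto the 3D leaf model
    return a * g_value + b
```

    Applying predict_spad pixel-by-pixel to the g channel of the RGN texture map yields the 2D chlorophyll map that is finally projected onto the 3D model.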

  3. Review of Advances in Cobb Angle Calculation and Image-Based Modelling Techniques for Spinal Deformities

    NASA Astrophysics Data System (ADS)

    Giannoglou, V.; Stylianidis, E.

    2016-06-01

    Scoliosis is a 3D deformity of the human spinal column caused by the bending of the latter, leading to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral project, presenting the most important studies that have been carried out in the field of scoliosis, concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
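
    The Cobb angle itself is geometrically simple: it is the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra, computable from the two endplate slopes. A minimal sketch (the hard part the review surveys is detecting those endplates automatically in the X-ray; the slopes below are illustrative):

```python
import math

def cobb_angle_deg(slope_upper, slope_lower):
    # angle between the two endplate lines, in degrees
    return abs(math.degrees(math.atan(slope_upper) - math.atan(slope_lower)))

# endplates tilted roughly 20 degrees each way from horizontal (illustrative)
angle = cobb_angle_deg(0.364, -0.364)
```

    Angles above roughly 10 degrees are the conventional threshold for diagnosing scoliosis, which is why robust automatic endplate detection matters.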

  4. Calculations of K- nuclear quasi-bound states based on chiral meson-baryon amplitudes

    NASA Astrophysics Data System (ADS)

    Gazda, Daniel; Mareš, Jiří

    2012-05-01

    In-medium K¯N scattering amplitudes developed within a new chirally motivated coupled-channel model due to Cieplý and Smejkal, which fits the recent SIDDHARTA kaonic hydrogen 1s level shift and width, are used to construct K- nuclear potentials for calculations of K- nuclear quasi-bound states. The strong energy and density dependence of the scattering amplitudes at and near threshold leads to K- potential depths -Re VK≈80-120 MeV. Self-consistent calculations of all K- nuclear quasi-bound states, including excited states, are reported. Model dependence, polarization effects, the role of p-wave interactions, and two-nucleon K-NN→YN absorption modes are discussed. The K- absorption widths ΓK are comparable to or even larger than the corresponding binding energies BK for all K- nuclear quasi-bound states, considerably exceeding the level spacing. This discourages the search for K- nuclear quasi-bound states in any but the lightest nuclear systems.

  5. Comparison of polynomial approximations to speed up planewave-based quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Parker, William D.; Umrigar, C. J.; Alfè, Dario; Petruzielo, F. R.; Hennig, Richard G.; Wilkins, John W.

    2015-04-01

    The computational cost of quantum Monte Carlo (QMC) calculations of realistic periodic systems depends strongly on the method of storing and evaluating the many-particle wave function. Previous work by Williamson et al. (2001) [35] and Alfè and Gillan (2004) [36] has demonstrated the reduction of the O(N^3) cost of evaluating the Slater determinant with planewaves to O(N^2) using localized basis functions. We compare four polynomial approximations as basis functions: interpolating Lagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines, and basis-form (B-) splines (interpolating and smoothing). All these basis functions provide a similar speedup relative to the planewave basis. The pp-splines have eight times the memory requirement of the other methods. To test the accuracy of the basis functions, we apply them to the ground state structures of Si, Al, and MgO. The polynomial approximations differ in accuracy most strongly for MgO, and smoothing B-splines most closely reproduce the planewave value of the variational Monte Carlo energy. Using separate approximations for the Laplacian of the orbitals increases the accuracy sufficiently to justify the increased memory requirement, making smoothing B-splines, with a separate approximation for the Laplacian, the preferred choice for approximating planewave-represented orbitals in QMC calculations.
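
    The speedup described above comes from replacing a global planewave sum with a local polynomial evaluation: only the coefficients near the evaluation point are touched. The sketch below illustrates the Lagrange-interpolation variant on a toy 1-D function; the node count and test function are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    # evaluate the Lagrange interpolating polynomial through the nodes at x;
    # the cost depends only on the number of local nodes, not on a global
    # basis-set sum as in the planewave representation
    total = 0.0
    n = len(x_nodes)
    for i in range(n):
        term = y_nodes[i]
        for j in range(n):
            if j != i:
                term *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += term
    return total

# toy "orbital": sin(x) tabulated on 9 uniform nodes over [0, pi]
x_nodes = np.linspace(0.0, np.pi, 9)
y_nodes = np.sin(x_nodes)
err = abs(lagrange_eval(x_nodes, y_nodes, 1.0) - np.sin(1.0))
```

    For smooth functions the interpolation error falls rapidly with node count, which is why all four local approximations in the study achieve planewave-level accuracy at far lower evaluation cost.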

  6. Molecular structure, spectroscopic characterization of (S)-2-Oxopyrrolidin-1-yl Butanamide and ab initio, DFT based quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Ramya, T.; Gunasekaran, S.; Ramkumaar, G. R.

    2015-10-01

    The experimental and theoretical spectra of (S)-2-Oxopyrrolidin-1-yl Butanamide (S2OPB) were studied. FT-IR and FT-Raman spectra of S2OPB in the solid phase were recorded and analyzed in the ranges 4000-450 and 5000-50 cm-1 respectively. The structural and spectroscopic analyses of S2OPB were carried out using ab initio Hartree-Fock (HF) and density functional theory calculations (B3PW91, B3LYP) with the 6-31G(d,p) basis set. A complete vibrational interpretation has been made on the basis of the calculated Potential Energy Distribution (PED). NMR calculations based on the HF, B3LYP and B3PW91 methods have been used to assign the 1H NMR and 13C NMR chemical shifts of S2OPB. A comparative UV-Vis spectral analysis between the experimental and theoretical (B3PW91, B3LYP) results was performed, and the global chemical parameters and local reactivity descriptors through the Fukui function were evaluated. Finally, the thermodynamic properties of S2OPB were calculated at different temperatures and the corresponding relations between the properties and temperature were also studied.

  7. Molecular structure, spectroscopic characterization of (S)-2-Oxopyrrolidin-1-yl Butanamide and ab initio, DFT based quantum chemical calculations.

    PubMed

    Ramya, T; Gunasekaran, S; Ramkumaar, G R

    2015-10-01

    The experimental and theoretical spectra of (S)-2-Oxopyrrolidin-1-yl Butanamide (S2OPB) were studied. FT-IR and FT-Raman spectra of S2OPB in the solid phase were recorded and analyzed in the ranges 4000-450 and 5000-50 cm(-1) respectively. The structural and spectroscopic analyses of S2OPB were carried out using ab initio Hartree-Fock (HF) and density functional theory calculations (B3PW91, B3LYP) with the 6-31G(d,p) basis set. A complete vibrational interpretation has been made on the basis of the calculated Potential Energy Distribution (PED). NMR calculations based on the HF, B3LYP and B3PW91 methods have been used to assign the (1)H NMR and (13)C NMR chemical shifts of S2OPB. A comparative UV-Vis spectral analysis between the experimental and theoretical (B3PW91, B3LYP) results was performed, and the global chemical parameters and local reactivity descriptors through the Fukui function were evaluated. Finally, the thermodynamic properties of S2OPB were calculated at different temperatures and the corresponding relations between the properties and temperature were also studied. PMID:25956325

  8. Calculating the detection limits of chamber-based soil greenhouse gas flux measurements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Renewed interest in quantifying greenhouse gas emissions from soil has led to an increase in the application of chamber-based flux measurement techniques. Despite the apparent conceptual simplicity of chamber-based methods, nuances in chamber design, deployment, and data analyses can have marked ef...

  9. AM1 calculation of the nucleic acid bases structure and vibrational spectra

    NASA Astrophysics Data System (ADS)

    Govorun, D. N.; Danchuk, V. D.; Mishchuk, Ya. R.; Kondratyuk, I. V.; Radomsky, N. F.; Zheltovsky, N. V.

    1992-03-01

    The conformational analysis of the canonical nucleotide bases has been performed using the semiempirical quantum-chemical AM1 method. It was determined that uracil and thymine possess a plane of symmetry. Guanine, cytosine and adenine have two mirror-symmetrical nonplanar conformers, and their CNH2 fragments have a pyramidal structure. The results obtained correctly explain the existing microwave spectroscopy data. The discovered mirror-symmetrical conformational states of amino-substituted nucleotide bases might be the source of the fine polymorphism of DNA, and transitions between these states in nucleotide base pairs are considered possible causes of the resonant response of biological objects to microwave radiation.

  10. Calculation of Shuttle Base Heating Environments and Comparison with Flight Data

    NASA Technical Reports Server (NTRS)

    Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.

    1983-01-01

    The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.

  11. Model of the catalytic mechanism of human aldose reductase based on quantum chemical calculations.

    SciTech Connect

    Cachau, R. C.; Howard, E. H.; Barth, P. B.; Mitschler, A. M.; Chevrier, B. C.; Lamour, V.; Joachimiak, A.; Sanishvili, R.; Van Zandt, M.; Sibley, E.; Moras, D.; Podjarny, A.; UPR de Biologie Structurale; National Cancer Inst.; Univ. Louis Pasteur; Inst. for Diabetes Discovery, Inc.

    2000-01-01

    Aldose Reductase is an enzyme involved in diabetic complications, thoroughly studied for the purpose of inhibitor development. The structure of an enzyme-inhibitor complex solved at subatomic resolution has been used to develop a model for the catalytic mechanism. This model has been refined using a combination of Molecular Dynamics and Quantum calculations. It shows that the proton donation, the subject of previous controversies, is the combined effect of three residues: Lys 77, Tyr 48 and His 110. Lys 77 polarises the Tyr 48 OH group, which donates the proton to His 110, which becomes doubly protonated. His 110 then moves and donates the proton to the substrate. The key information from the subatomic resolution structure is the orientation of the ring and the single protonation of His 110 in the enzyme-inhibitor complex. This model is in full agreement with all available experimental data.

  12. Thermal state of SNPS ``Topaz'' units: Calculation basing and experimental confirmation

    NASA Astrophysics Data System (ADS)

    Bogush, Igor P.; Bushinsky, Alexander V.; Galkin, Anatoly Ya.; Serbin, Victor I.; Zhabotinsky, Evgeny E.

    1991-01-01

    Ensuring that the thermal state parameters of thermionic space nuclear power system (SNPS) units remain within required limits in all operating regimes is a factor that determines SNPS lifetime. The thermal state requirements differ markedly from unit to unit, and meeting them requires both an appropriate arrangement of the units in the SNPS power-generating module and the use of definite control algorithms together with special thermal regulation and protection. Computer codes that determine the thermal transient performance of the liquid metal loop and the main units were developed to provide the calculational basis for the required SNPS ``Topaz'' unit thermal state. The conformity of these parameters to the given requirements is confirmed by the results of autonomous unit tests, mock-up tests, power tests of ground SNPS prototypes, and flight tests of two SNPS ``Topaz'' systems.

  13. First-principles calculation of dielectric response in molecule-based materials.

    PubMed

    Heitzer, Henry M; Marks, Tobin J; Ratner, Mark A

    2013-07-01

    The dielectric properties of materials are of fundamental significance to many chemical processes and the functioning of numerous solid-state device technologies. While experimental methods for measuring bulk dielectric constants are well-established, far less is known, either experimentally or theoretically, about the origin of dielectric response at the molecular/multimolecular scale. In this contribution we report the implementation of an accurate first-principles approach to calculating the dielectric response of molecular systems. We assess the accuracy of the method by reproducing the experimental dielectric constants of several bulk π-electron materials and demonstrating the ability of the method to capture dielectric properties as a function of frequency and molecular orientation in representative arrays of substituted aromatic derivatives. The role of molecular alignment and packing density on dielectric response is also examined, showing that the local dielectric behavior of molecular assemblies can diverge significantly from that of the bulk material. PMID:23734640

  14. Development of a non-equilibrium quantum transport calculation method based on constrained density functional

    NASA Astrophysics Data System (ADS)

    Kim, Han Seul; Kim, Yong-Hoon

    2015-03-01

    We report on the development of a novel first-principles method for the calculation of non-equilibrium quantum transport processes. Within this scheme, the non-equilibrium situation and quantum transport under open-boundary conditions are described by the region-dependent Δ self-consistent field method and matrix Green's function theory, respectively. We will discuss our solutions to the technical difficulties in describing bias-dependent electron transport at complicated nanointerfaces and present several application examples. Supported by the Global Frontier Program (2013M3A6B1078881), Basic Science Research Grant (2012R1A1A2044793), EDISON Program (No. 2012M3C1A6035684), and the 2013 Global Ph.D. fellowship program of the National Research Foundation; computing resources provided by the KISTI Supercomputing Center (KSC-2014-C3-021).

  15. Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Yeom, Yeon Soo; Jeong, Jong Hwi; Han, Min Cheol; Kim, Chan Hyeong

    2014-06-01

    Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses and to measure computation speed; the values were then compared with those for the original polygonal surface phantom. It was found that the tetrahedral mesh phantom significantly improved the computation speed, by factors of between 150 and 832, for all of the particles and simulated energies other than the low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively).

  16. Thermal state of SNPS ``Topaz'' units: Calculation basing and experimental confirmation

    SciTech Connect

    Bogush, I.P.; Bushinsky, A.V.; Galkin, A.Y.; Serbin, V.I.; Zhabotinsky, E.E.

    1991-01-01

    Keeping the thermal-state parameters of thermionic space nuclear power system (SNPS) units within required limits in all operating regimes is a factor that determines SNPS lifetime. The thermal-state requirements differ from unit to unit to a marked degree, and meeting them requires both an appropriate arrangement of the units in the SNPS power-generating module and the use of definite control algorithms together with special thermal regulation and protection. Computer codes that determine the thermal transient performance of the liquid-metal loop and main units were developed as the calculation basis for the required SNPS ``Topaz'' unit thermal state. The conformity of these parameters to the given requirements is confirmed by the results of autonomous unit tests, mock-up tests, power tests of ground SNPS prototypes, and flight tests of two SNPS ``Topaz'' systems.

  17. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. PMID:15931680

  18. Improved pKa calculations through flexibility based sampling of a water-dominated interaction scheme

    PubMed Central

    Warwicker, Jim

    2004-01-01

    Ionizable groups play critical roles in biological processes. Computation of pKas is complicated by model approximations and multiple conformations. Calculated and experimental pKas are compared for relatively inflexible active-site side chains, to develop an empirical model for hydration entropy changes upon charge burial. The modification is found to be generally small, but large for cysteine, consistent with small molecule ionization data and with partial charge distributions in ionized and neutral forms. The hydration model predicts significant entropic contributions for ionizable residue burial, demonstrated for components in the pyruvate dehydrogenase complex. Conformational relaxation in a pH-titration is estimated with a mean-field assessment of maximal side chain solvent accessibility. All ionizable residues interact within a low protein dielectric finite difference (FD) scheme, and more flexible groups also access water-mediated Debye-Hückel (DH) interactions. The DH method tends to match overall pH-dependent stability, while FD can be more accurate for active-site groups. Tolerance for side chain rotamer packing is varied, defining access to DH interactions, and the best fit with experimental pKas obtained. The new (FD/DH) method provides a fast computational framework for making the distinction between buried and solvent-accessible groups that has been qualitatively apparent from previous work, and pKa calculations are significantly improved for a mixed set of ionizable residues. Its effectiveness is also demonstrated with computation of the pH-dependence of electrostatic energy, recovering favorable contributions to folded state stability and, in relation to structural genomics, with substantial improvement (reduction of false positives) in active-site identification by electrostatic strain. PMID:15388865

  19. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
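The pressure Poisson solver whose error-propagation behavior the authors analyze can be prototyped in a few lines of NumPy. The sketch below (our illustration, not the paper's analysis code) is a Jacobi iteration for lap(p) = f on a unit square with Dirichlet boundary data; with f = 0 and a constant boundary pressure the exact solution is that constant, so any interior deviation is pure solver error, and perturbing the boundary values is exactly the kind of error injection the paper studies.

```python
import numpy as np

def solve_pressure_poisson(f, p_boundary, iters=5000):
    """Jacobi iteration for the 2-D pressure Poisson equation lap(p) = f
    on the unit square with Dirichlet boundary values taken from p_boundary.
    Illustrative toy solver, not the authors' code."""
    n = f.shape[0]
    h = 1.0 / (n - 1)
    p = p_boundary.copy()
    p[1:-1, 1:-1] = 0.0  # start the interior from scratch
    for _ in range(iters):
        # NumPy evaluates the right-hand side fully before assigning,
        # so this is a true Jacobi (not Gauss-Seidel) sweep.
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * f[1:-1, 1:-1])
    return p

# f = 0 with constant boundary pressure: exact solution is the constant,
# so the maximum interior deviation measures solver error directly.
n = 21
p = solve_pressure_poisson(np.zeros((n, n)), np.ones((n, n)))
print(abs(p - 1.0).max())
```

Replacing the ones on the boundary with noisy values and re-running shows how boundary errors contaminate the interior solution, mirroring the paper's finding that the error bound depends on the boundary conditions and domain geometry.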

  20. Use of thermodynamic data to calculate surface tension and viscosity of Sn-based soldering alloy systems

    NASA Astrophysics Data System (ADS)

    Lee, Jong Ho; Lee, Dong Nyung

    2001-09-01

    A thermodynamic database for the Pb-free soldering alloy systems, which include Sn, Ag, Cu, Bi, and In, has been made using the CALPHAD method. The resulting thermodynamic properties of the Sn-based binary alloy systems were used to determine the surface tensions and viscosities. The surface tensions were calculated using Butler’s monolayer model and the viscosities by Hirai’s and Seetharaman’s models. Butler’s model was also used to determine the surface active element. The results for binary systems were extended to the Sn-based ternary systems (Sn-Ag-Cu, Sn-Ag-Bi). The surface tensions of commercial eutectic Sn-Pb and Sn-Pb-Ag solder alloys were measured by the sessile drop method. The measured values and other researchers’ results were compared with the calculated data.

  1. Prediction of delayed graft function by means of a novel web-based calculator: a single-center experience.

    PubMed

    Rodrigo, E; Miñambres, E; Ruiz, J C; Ballesteros, A; Piñera, C; Quintanar, J; Fernández-Fresnedo, G; Palomar, R; Gómez-Alamillo, C; Arias, M

    2012-01-01

    Renal failure persisting after renal transplant is known as delayed graft function (DGF). DGF predisposes the graft to acute rejection and increases the risk of graft loss. In 2010, Irish et al. developed a new model designed to predict DGF risk. This model was used to program a web-based DGF risk calculator, which can be accessed via http://www.transplantcalculator.com . The predictive performance of this score has not been tested in a different population. We analyzed 342 deceased-donor adult renal transplants performed in our hospital. Individual and population DGF risk was assessed using the web-based calculator. The area under the ROC curve to predict DGF was 0.710 (95% CI 0.653-0.767, p < 0.001). The "goodness-of-fit" test demonstrates that the DGF risk was well calibrated (p = 0.309). Graft survival was significantly better for patients with a lower DGF risk (5-year survival 71.1% vs. 60.1%, log rank p = 0.036). The model performed well with good discrimination ability and good calibration to predict DGF in a single transplant center. Using the web-based DGF calculator, we can predict the risk of developing DGF with a moderate to high degree of certainty only by using information available at the time of transplantation. PMID:22026730
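The 0.710 discrimination figure quoted above is the area under the ROC curve, which equals the probability that a randomly chosen DGF case receives a higher predicted risk than a randomly chosen non-DGF case (ties counted half). A minimal sketch of that statistic, with hypothetical toy scores (not data from the study):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive (DGF) case scores
    higher than a random negative case; ties count one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy risk scores: DGF cases tend to get higher predicted risk
print(round(roc_auc([0.8, 0.6, 0.55], [0.5, 0.4, 0.6]), 3))  # -> 0.833
```

This O(n*m) pairwise form is the most transparent definition; production implementations use rank sums for efficiency.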

  2. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    SciTech Connect

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-12-07

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm{sup 2}). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of MC calculations. MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm{sup 2}. Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12x12 and 6x6 mm{sup 2}) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12x12 and 6x6 mm{sup 2}), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning up to a field size of 18x18 mm{sup 2}. Special care must be taken for smaller fields.
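The 2%/2 mm gamma criterion used for the validation combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail metric per point. A minimal 1-D sketch of the idea (our illustration, not the commissioning software used in the study):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=2.0, dd_frac=0.02):
    """Minimal 1-D global gamma index (2%/2 mm by default): for each
    reference point, take the minimum over evaluated points of the
    combined distance/dose-difference metric. gamma <= 1 is a pass."""
    d_max = d_ref.max()  # global normalization dose
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        term = np.sqrt(((x_eval - xr) / dta_mm) ** 2 +
                       ((d_eval - dr) / (dd_frac * d_max)) ** 2)
        gammas.append(term.min())
    return np.array(gammas)

# Identical reference and evaluated profiles give gamma = 0 everywhere.
x = np.linspace(0.0, 20.0, 41)          # positions in mm
d = np.exp(-((x - 10.0) / 5.0) ** 2)    # a Gaussian "dose" profile
g = gamma_index_1d(x, d, x, d)
print((g <= 1.0).mean())  # pass rate for identical profiles: 1.0
```

Real gamma analyses interpolate the evaluated distribution and work in 2-D or 3-D, but the pass-rate statistic reported in the abstract is computed from per-point gamma values exactly as above.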

  3. Patient reactions to a web-based cardiovascular risk calculator in type 2 diabetes: a qualitative study in primary care

    PubMed Central

    Nolan, Tom; Dack, Charlotte; Pal, Kingshuk; Ross, Jamie; Stevenson, Fiona A; Peacock, Richard; Pearson, Mike; Spiegelhalter, David; Sweeting, Michael; Murray, Elizabeth

    2015-01-01

    Background Use of risk calculators for specific diseases is increasing, with an underlying assumption that they promote risk reduction as users become better informed and motivated to take preventive action. Empirical data to support this are, however, sparse and contradictory. Aim To explore user reactions to a cardiovascular risk calculator for people with type 2 diabetes. Objectives were to identify cognitive and emotional reactions to the presentation of risk, with a view to understanding whether and how such a calculator could help motivate users to adopt healthier behaviours and/or improve adherence to medication. Design and setting Qualitative study combining data from focus groups and individual user experience. Adults with type 2 diabetes were recruited through website advertisements and posters displayed at local GP practices and diabetes groups. Method Participants used a risk calculator that provided individualised estimates of cardiovascular risk. Estimates were based on UK Prospective Diabetes Study (UKPDS) data, supplemented with data from trials and systematic reviews. Risk information was presented using natural frequencies, visual displays, and a range of formats. Data were recorded and transcribed, then analysed by a multidisciplinary group. Results Thirty-six participants contributed data. Users demonstrated a range of complex cognitive and emotional responses, which might explain the lack of change in health behaviours demonstrated in the literature. Conclusion Cardiovascular risk calculators for people with diabetes may best be used in conjunction with health professionals who can guide the user through the calculator and help them use the resulting risk information as a source of motivation and encouragement. PMID:25733436

  4. Pre-drilling calculation of geomechanical parameters for safe geothermal wells based on outcrop analogue samples

    NASA Astrophysics Data System (ADS)

    Reyer, Dorothea; Philipp, Sonja

    2014-05-01

    It is desirable to enlarge the profit margin of geothermal projects by reducing the total drilling costs considerably. Substantiated assumptions on uniaxial compressive strengths and failure criteria are important to avoid borehole instabilities and adapt the drilling plan to rock mechanical conditions to minimise non-productive time. Because core material is rare we aim at predicting in situ rock properties from outcrop analogue samples which are easy and cheap to provide. The comparability of properties determined from analogue samples with samples from depths is analysed by performing physical characterisation (P-wave velocities, densities), conventional triaxial tests, and uniaxial compressive strength tests of both quarry and equivalent core samples. "Equivalent" means that the quarry sample is of the same stratigraphic age and of comparable sedimentary facies and composition as the correspondent core sample. We determined the parameters uniaxial compressive strength (UCS) and Young's modulus for 35 rock samples from quarries and 14 equivalent core samples from the North German Basin. A subgroup of these samples was used for triaxial tests. For UCS versus Young's modulus, density and P-wave velocity, linear- and non-linear regression analyses were performed. We repeated regression separately for clastic rock samples or carbonate rock samples only as well as for quarry samples or core samples only. Empirical relations were used to calculate UCS values from existing logs of sampled wellbore. Calculated UCS values were then compared with measured UCS of core samples of the same wellbore. With triaxial tests we determined linearized Mohr-Coulomb failure criteria, expressed in both principal stresses and shear and normal stresses, for quarry samples. Comparison with samples from larger depths shows that it is possible to apply the obtained principal stress failure criteria to clastic and volcanic rocks, but less so for carbonates. 
Carbonate core samples have higher
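The empirical relations described above are regressions of UCS against physical properties such as P-wave velocity. As a sketch of the simplest (linear) case, with hypothetical sample values rather than the paper's actual data or coefficients:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x, the kind of empirical
    UCS-vs-property relation described above (illustrative only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# hypothetical (P-wave velocity km/s, UCS MPa) pairs for quarry samples
vp  = [2.5, 3.0, 3.5, 4.0, 4.5]
ucs = [40.0, 55.0, 75.0, 90.0, 110.0]
a, b = fit_line(vp, ucs)
print(round(a, 2), round(b, 2))  # -> -48.5 35.0
```

A relation fitted this way on outcrop samples can then be applied to wellbore logs to predict UCS at depth, which is the workflow the abstract validates against core measurements.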

  5. Assumed oxygen consumption based on calculation from dye dilution cardiac output: an improved formula.

    PubMed

    Bergstra, A; van Dijk, R B; Hillege, H L; Lie, K I; Mook, G A

    1995-05-01

    This study was performed because of observed differences between dye dilution cardiac output and the Fick cardiac output calculated from estimated oxygen consumption according to LaFarge and Miettinen, and to find a better formula for assumed oxygen consumption. In 250 patients who underwent left and right heart catheterization, the oxygen consumption VO2 (ml.min-1) was calculated using Fick's principle. Either pulmonary or systemic flow, as measured by dye dilution, was used in combination with the concordant arteriovenous oxygen concentration difference. In 130 patients, who matched the age of the LaFarge and Miettinen population, the obtained values of oxygen consumption VO2(dd) were compared with the estimated oxygen consumption values VO2(lfm) found using the LaFarge and Miettinen formulae. The VO2(lfm) was significantly lower than VO2(dd); -21.8 +/- 29.3 ml.min-1 (mean +/- SD), P < 0.001, 95% confidence interval (95% CI) -26.9 to -16.7, limits of agreement (LA) -80.4 to 36.9. A new regression formula for the assumed oxygen consumption VO2(ass) was derived in 250 patients by stepwise multiple regression analysis. The VO2(dd) was used as the dependent variable, and body surface area BSA (m2), Sex (0 for female, 1 for male), Age (years), Heart rate (min-1) and the presence of a left-to-right shunt as independent variables. The best fitting formula is expressed as: VO2(ass) = (157.3 x BSA + 10.0 x Sex - 10.5 x ln Age + 4.8) ml.min-1, where ln Age = the natural logarithm of the age. This formula was validated prospectively in 60 patients. A non-significant difference between VO2(ass) and VO2(dd) was found; mean 2.0 +/- 23.4 ml.min-1, P = 0.771, 95% CI -4.0 to +8.0, LA -44.7 to +48.7. In conclusion, assumed oxygen consumption values, using our new formula, are in better agreement with the actual values than those found according to LaFarge and Miettinen's formulae. PMID:7588904
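The reported regression formula translates directly into code. A minimal sketch using only the coefficients quoted in the abstract (the example patient values are hypothetical):

```python
import math

def vo2_assumed(bsa_m2, sex, age_years):
    """Assumed oxygen consumption (ml/min) from the abstract's formula:
    VO2(ass) = 157.3*BSA + 10.0*Sex - 10.5*ln(Age) + 4.8,
    with Sex = 0 for female, 1 for male, and Age in years."""
    return 157.3 * bsa_m2 + 10.0 * sex - 10.5 * math.log(age_years) + 4.8

# Example: a hypothetical 50-year-old man with BSA 1.9 m^2
print(round(vo2_assumed(1.9, 1, 50), 1))  # -> 272.6
```

Note that heart rate and shunt presence were considered as candidate predictors in the stepwise regression but do not appear in the best-fitting formula.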

  6. Highly correlated configuration interaction calculations on water with large orbital bases

    SciTech Connect

    Almora-Díaz, César X.

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the extrapolation of the energies to the complete basis set do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).
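The complete-basis-set extrapolation mentioned at the end is commonly done with a two-point inverse-cube formula, E(n) = E_CBS + A/n^3 (Helgaker-style). This sketch shows that standard scheme with hypothetical correlation energies; it is not necessarily the exact extrapolation procedure the authors used:

```python
def cbs_two_point(e_small, n_small, e_large, n_large):
    """Two-point complete-basis-set extrapolation assuming the common
    E(n) = E_CBS + A*n**-3 form for correlation energies.
    Illustrative scheme, not necessarily the paper's procedure."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# hypothetical correlation energies (hartree) for n = 5 and n = 6 bases
print(round(cbs_two_point(-0.370, 5, -0.372, 6), 6))  # -> -0.374747
```

Because the extrapolated value is sensitive to small changes in the input energies (here a 2 mhartree change between n = 5 and 6 moves the limit by nearly 3 mhartree beyond the larger-basis value), dispersion among such estimates limits the accuracy of the full CI energy, as the abstract notes.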

  7. Ray-Based Calculations with DEPLETE of Laser Backscatter in ICF Targets

    SciTech Connect

    Strozzi, D J; Williams, E; Hinkel, D; Froula, D; London, R; Callahan, D

    2008-05-19

    A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code Deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pF3D. Comparisons with Brillouin-scattering experiments at the Omega Laser Facility show that laser speckles greatly enhance the reflectivity over the Deplete results. An approximate upper bound on this enhancement is given by doubling the Deplete coupling coefficient. Analysis with Deplete of an ignition design for the National Ignition Facility (NIF), with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bracket speckle effects suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.

  8. Calculation of room temperature conductivity and mobility in tin-based topological insulator nanoribbons

    SciTech Connect

    Vandenberghe, William G.; Fischetti, Massimo V.

    2014-11-07

    Monolayers of tin (stannanane) functionalized with halogens have been shown to be topological insulators. Using density functional theory (DFT), we study the electronic properties and room-temperature transport of nanoribbons of iodine-functionalized stannanane showing that the overlap integral between the wavefunctions associated to edge-states at opposite ends of the ribbons decreases with increasing width of the ribbons. Obtaining the phonon spectra and the deformation potentials also from DFT, we calculate the conductivity of the ribbons using the Kubo-Greenwood formalism and show that their mobility is limited by inter-edge phonon backscattering. We show that wide stannanane ribbons have a mobility exceeding 10{sup 6} cm{sup 2}/Vs. Contrary to ordinary semiconductors, two-dimensional topological insulators exhibit a high conductivity at low charge density, decreasing with increasing carrier density. Furthermore, the conductivity of iodine-functionalized stannanane ribbons can be modulated over a range of three orders of magnitude, thus rendering this material extremely interesting for classical computing applications.

  9. First-principles based calculation of the macroscopic α/β interface in titanium

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Zhu, Lvqi; Shao, Shouqi; Jiang, Yong

    2016-06-01

    The macroscopic α/β interface in titanium and titanium alloys consists of a ledge interface (112)β/(01-10)α and a side interface (11-1)β/(2-1-10)α in a zig-zag arrangement. Here, we report a first-principles study for predicting the atomic structure and the formation energy of the α/β-Ti interface. Both component interfaces were calculated using supercell models within a restrictive relaxation approach, with various stacking sequences and high-symmetry parallel translations being considered. The ledge interface energy was predicted as 0.098 J/m2 and the side interface energy as 0.811 J/m2. By projecting the zig-zag interface area onto the macroscopic broad face, the macroscopic α/β interface energy was estimated to be as low as ˜0.12 J/m2, which, however, is almost double the ad hoc value used in previous phase-field simulations.

  10. Ab initio calculations and crystal symmetry considerations for novel FeSe-based superconductors

    NASA Astrophysics Data System (ADS)

    Mazin, Igor

    2013-03-01

    Density functional calculations disagree with the ARPES measurements on both K0.3Fe2Se2 superconducting phase and FeSe/SrTiO3 monolayers. Yet they can still be dramatically useful for the reason that they respect full crystallographic symmetry and take good account of electron-ion interaction. Using just symmetry analysis, it is shown that nodeless d-wave superconductivity is not an option in these systems, and a microscopic framework is derived that leads to a novel s-wave sign-reversal state, qualitatively different from the already familiar s+/- state in pnictides and bulk binary selenides. Regarding the FeSe monolayer, bonding and charge transfer between the film and the substrate is analyzed and it is shown that the former is weak and the latter negligible, which sets important restrictions on possible mechanisms of doping and superconductivity in these monolayers. In particular, the role of the so-called ``Se etching,'' necessary for superconductivity in FeSe monolayers, is analyzed in terms of electronic structure and bonding with the substrate.

  11. Electronegativity calculation of bulk modulus and band gap of ternary ZnO-based alloys

    SciTech Connect

    Li, Keyan; Kang, Congying; Xue, Dongfeng; State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022

    2012-10-15

    In this work, the bulk moduli and band gaps of M{sub x}Zn{sub 1−x}O (M = Be, Mg, Ca, Cd) alloys in the whole composition range were quantitatively calculated using electronegativity-related models for the bulk modulus and band gap, respectively. We found that the trends of bulk modulus and band gap with increasing M concentration x are the same for Be{sub x}Zn{sub 1−x}O and Cd{sub x}Zn{sub 1−x}O, while they are reversed for Mg{sub x}Zn{sub 1−x}O and Ca{sub x}Zn{sub 1−x}O. It was revealed that the bulk modulus is related to the valence electron density of atoms, whereas the band gap is strongly influenced by the detailed chemical bonding behaviors of the constituent atoms. The current work provides a useful guide to compositionally designing advanced alloy materials with both good mechanical and optoelectronic properties.

  12. Comparing the Kentucky phosphorus index with P loss calculated with a process-based model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Eutrophication from excess phosphorus (P) loading is widespread among U.S. water bodies with a substantial portion of the P originating from agricultural fields. To reduce the impact agriculture has on water quality, USDA-NRCS includes P-based planning strategies in their 590 Standard to restrict P ...

  13. SPS: A Simulation Tool for Calculating Power of Set-Based Genetic Association Tests.

    PubMed

    Li, Jiang; Sham, Pak Chung; Song, Youqiang; Li, Miaoxin

    2015-07-01

    Set-based association tests, combining a set of single-nucleotide polymorphisms into a unified test, have become important approaches to identify weak-effect or low-frequency risk loci of complex diseases. However, there is no comprehensive and user-friendly tool to estimate power of set-based tests for study design. We developed a simulation tool to estimate statistical power of multiple representative set-based tests (SPS). SPS has a graphic interface to facilitate parameter settings and result visualization. Advanced functions include loading real genotypes to define genetic architecture, set-based meta-analysis for risk loci with or without heterogeneity, and parallel simulations. In proof-of-principle examples, SPS took no more than 3 sec on average to estimate the power in a conventional setting. The SPS has been integrated into a user-friendly software tool (KGG) as an independent functional module and it is freely available at http://statgenpro.psychiatry.hku.hk/limx/kgg/. PMID:25995121
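The power estimation SPS performs is, at heart, Monte Carlo simulation: generate data under an assumed genetic effect many times and count how often the test rejects. As a toy stand-in (our sketch, not SPS's algorithm or a set-based test), here is a power estimate for a single-variant allele-frequency z-test:

```python
import math
import random

def power_allele_test(p_case, p_ctrl, n_case, n_ctrl,
                      sims=2000, seed=1):
    """Monte Carlo power of a two-sided 5% allele-frequency z-test:
    simulate case/control allele counts under the alternative and
    count rejections. Toy single-variant stand-in for set-based tests."""
    random.seed(seed)
    z_crit = 1.959964  # two-sided 5% critical value
    hits = 0
    for _ in range(sims):
        a = sum(random.random() < p_case for _ in range(2 * n_case))
        b = sum(random.random() < p_ctrl for _ in range(2 * n_ctrl))
        p1, p2 = a / (2 * n_case), b / (2 * n_ctrl)
        pooled = (a + b) / (2 * n_case + 2 * n_ctrl)
        se = math.sqrt(pooled * (1 - pooled) *
                       (1 / (2 * n_case) + 1 / (2 * n_ctrl)))
        if se > 0 and abs(p1 - p2) / se > z_crit:
            hits += 1
    return hits / sims

# hypothetical risk-allele frequencies 0.30 (cases) vs 0.20 (controls)
print(power_allele_test(0.30, 0.20, 100, 100))
```

Real set-based power calculations replace the single-variant z-test with burden, SKAT-type, or other combined statistics, but the simulate-and-count loop is the same, which is why parallel simulation (as SPS offers) pays off.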

  14. Calculating the detection limits of chamber-based greenhouse gas flux measurements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chamber-based measurement of greenhouse gas emissions from soil is a common technique. However, when changes in chamber headspace gas concentrations are small over time, determination of the flux can be problematic. Several factors contribute to the reliability of measured fluxes, including: samplin...

  15. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We make a gradient correction to a new local density approximation form of positron-electron correlation. The positron lifetimes and affinities are then probed by using these two approximation forms based on three electronic-structure calculation methods, including the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between calculated lifetimes using the FLAPW and ATSUP methods are clearly interpreted in the view of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement on both the positron lifetimes and affinities with the FLAPW method, and the competitiveness of the ATSUP method against the FLAPW/PAW method is reduced within the best calculations. By comparing with the experimental data, the new introduced gradient corrected correlation form is proved to be competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  16. Verification study of thorium cross section in MVP calculation of thorium based fuel core using experimental data

    SciTech Connect

    Mai, V. T.; Fujii, T.; Wada, K.; Kitada, T.; Takaki, N.; Yamaguchi, A.; Watanabe, H.; Unesaki, H.

    2012-07-01

    Considering the importance of thorium data and concerns about the accuracy of the Th-232 cross section library, a series of thorium critical core experiments carried out at the KUCA facility of the Kyoto University Research Reactor Institute has been analyzed. The core was composed of pure thorium plates and 93% enriched uranium plates, with a solid polyethylene moderator, a hydrogen to U-235 ratio of 140 and a Th-232 to U-235 ratio of 15.2. Calculations of the effective multiplication factor, control rod worth and reactivity worth of the Th plates were conducted with the MVP code using the JENDL-4.0 library [1]. At the experiment site, after achieving the critical state with 51 fuel rods inserted in the reactor, the reactivity worth of the control rods and of a thorium sample was measured. Compared with the experimental data, the calculation overestimates the effective multiplication factor by about 0.90%. The MVP evaluation of control rod reactivity worth is acceptable, with a maximum discrepancy on the order of the statistical error of the measured data. The calculated results agree with the measured ones within 3.1% for the reactivity worth of one Th plate. From this investigation, further experiments and research on the Th-232 cross section library are needed to provide more reliable data for thorium based fuel core design and safety calculations. (authors)
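The comparison quantities here are conventional reactor-physics conversions: a multiplication-factor discrepancy is usually expressed as a reactivity, rho = (k_eff - 1)/k_eff, often in pcm (1e-5). A minimal sketch of that conversion (the 1.009 input is simply the abstract's ~0.90% overestimate applied to a critical core, k = 1):

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k_eff - 1)/k_eff in pcm (units of 1e-5),
    the standard way to express a multiplication-factor discrepancy."""
    return (k_eff - 1.0) / k_eff * 1e5

# a 0.90% overestimate of k_eff for a core that is critical (k = 1)
print(round(reactivity_pcm(1.009), 1))  # -> 892.0
```

So the reported 0.90% overestimate corresponds to roughly 890 pcm of excess calculated reactivity, which is large compared with typical control-rod and sample worths and motivates the call for better Th-232 data.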

  17. How many base-pairs per turn does DNA have in solution and in chromatin? Some theoretical calculations.

    PubMed Central

    Levitt, M

    1978-01-01

    Calculations on a 20-base-pair segment of the DNA double helix using empirical energy functions show that DNA can be bent smoothly and uniformly into a superhelix with a radius small enough (45 Å) to fit the dimensions of chromatin. The variation of energy with the twist of the base pairs about the helix axis shows that straight DNA free in solution is most stable with about 10 1/2 base pairs per turn, rather than 10 as observed in the solid state, whereas superhelical DNA in chromatin is most stable with about 10 base pairs per turn. This result, which has a simple physical interpretation, explains the pattern of nuclease cuts and the linkage-number changes observed for DNA arranged in chromatin. PMID:273227

  18. Effect of Oblique Electromagnetic Ion Cyclotron Waves on Relativistic Electron Scattering: CRRES Based Calculation

    NASA Technical Reports Server (NTRS)

    Gamayunov, K. V.; Khazanov, G. V.

    2007-01-01

    We consider the effect of oblique EMIC waves on relativistic electron scattering in the outer radiation belt using simultaneous observations of plasma and wave parameters from CRRES. The main findings can be summarized as follows: 1. In comparison with field-aligned waves, intermediate and highly oblique distributions decrease the range of pitch-angles subject to diffusion, and reduce the local scattering rate by an order of magnitude at pitch-angles where the principal |n| = 1 resonances operate. Oblique waves allow the |n| > 1 resonances to operate, extending the range of local pitch-angle diffusion down to the loss cone, and increasing the diffusion at lower pitch-angles by orders of magnitude; 2. The local diffusion coefficients derived from CRRES data are qualitatively similar to the local results obtained for prescribed plasma/wave parameters. Consequently, it is likely that the bounce-averaged diffusion coefficients, if estimated from concurrent data, will exhibit dependencies similar to those we found for model calculations; 3. In comparison with field-aligned waves, intermediate and highly oblique waves decrease the bounce-averaged scattering rate near the edge of the equatorial loss cone by orders of magnitude if the electron energy does not exceed a threshold (approximately 2-5 MeV) depending on the specified plasma and/or wave parameters; 4. For greater electron energies, oblique waves operating through the |n| > 1 resonances are more effective and provide the same bounce-averaged diffusion rate near the loss cone as field-aligned waves do.

  19. Calculation of specific, highly excited vibrational states based on a Davidson scheme: application to HFCO.

    PubMed

    Iung, Christophe; Ribeiro, Fabienne

    2005-11-01

    We present the efficiency of a new modified Davidson scheme which selectively yields one high-energy vibrationally excited eigenstate or a series of eigenstates. The calculation of a highly vibrationally excited state psi located in a dense part of the spectrum requires a specific prediagonalization step before the Davidson scheme. It consists of building a small active space P containing the zero-order states that are coupled with the zero-order description of the eigenstate of interest. We propose a general way to define this active space P, which plays a crucial role in the method. The efficiency of the method is illustrated by computing and analyzing the high-energy excited overtones of the out-of-plane mode [formula: see text] in HFCO. These overtone energies correspond to the 234th, 713th, and 1774th energy levels in our reference basis set, which contains roughly 140,000 states. One of the main advantages of this Davidson scheme comes from the fact that eigenstate and eigenvalue convergence can be assessed during the iterations by monitoring the residual [formula: see text]. The maximum value epsilon allowed for this residual constitutes a very sensitive and efficient parameter which sets the accuracy of the eigenvalues and eigenstates, even when the studied states are highly excited and localized in a dense part of the spectrum. The physical analysis of the eigenstates associated with the 5th, 7th, and 9th out-of-plane overtones in HFCO provides some interesting information on the energy localization in this mode and on the role played by the in-plane modes. It also provides some ideas on the numerical methods that should be developed in the future to tackle higher-energy excited states in polyatomics. PMID:16375515
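
The core of such a scheme, subspace diagonalization with a residual-based stopping test and a diagonally preconditioned expansion vector, can be sketched as follows. This is a generic Davidson iteration for the lowest eigenpair of a symmetric matrix, not the authors' prediagonalized variant, and the test matrix is invented:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=200):
    """Davidson-type iteration for the lowest eigenpair of a symmetric matrix.

    Diagonalize A in a growing subspace V, monitor the residual
    r = A x - theta x, and expand V with a diagonally preconditioned
    correction vector until |r| < tol.
    """
    n = A.shape[0]
    D = np.diag(A).copy()
    V = np.zeros((n, 1))
    V[np.argmin(D), 0] = 1.0          # zero-order (diagonal) starting guess
    theta, x = D.min(), V[:, 0]
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)        # keep the subspace orthonormal
        H = V.T @ A @ V               # project A into the active space
        vals, vecs = np.linalg.eigh(H)
        theta, x = vals[0], V @ vecs[:, 0]
        r = A @ x - theta * x         # residual controls the convergence test
        if np.linalg.norm(r) < tol:
            break
        denom = D - theta             # Davidson diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.hstack([V, (r / denom)[:, None]])
    return theta, x

# Invented diagonally dominant test matrix (the regime where the diagonal
# preconditioner works well, as for zero-order-dominated vibrational states).
rng = np.random.default_rng(0)
n = 60
B = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * (B + B.T)
theta, x = davidson_lowest(A)
print(theta, np.linalg.eigvalsh(A)[0])   # the two values agree
```

The residual norm plays exactly the role of the epsilon parameter discussed above: it is the single knob that sets the accuracy of the converged eigenpair.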

  20. Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.

    PubMed

    Brea, Oriana; Mó, Otilia; Yáñez, Manuel

    2015-10-26

    The structure, relative stability, and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine-, and sulfur-containing Lewis bases, have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCA values are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+) complexes and the corresponding Al(+) analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds. PMID:26269224

  1. Analytical Calculation of Sensing Parameters on Carbon Nanotube Based Gas Sensors

    PubMed Central

    Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi.; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh

    2014-01-01

    Carbon nanotubes (CNTs) are nanoscale tubes comprising a network of carbon atoms in a cylindrical arrangement that, compared with their silicon counterparts, present outstanding characteristics such as high mechanical strength, high sensing capability, and a large surface-to-volume ratio. These characteristics, together with the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor, in which the CNT conductance change resulting from the chemical reaction between NH3 and the CNT is employed to model the sensing mechanism with the proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I–V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research. PMID:24658617

  2. Free energy calculations to estimate ligand-binding affinities in structure-based drug design.

    PubMed

    Reddy, M Rami; Reddy, C Ravikumar; Rathore, R S; Erion, Mark D; Aparoy, P; Reddy, R Nageswara; Reddanna, P

    2014-01-01

    The post-genomic era has led to the discovery of several new targets, posing challenges for structure-based drug design efforts to identify lead compounds. Multiple computational methodologies exist to predict high-ranking hit/lead compounds. Among them, free energy methods provide the most accurate estimate of predicted binding affinity. Pathway-based methods such as Free Energy Perturbation (FEP), Thermodynamic Integration (TI) and Slow Growth (SG), as well as less rigorous end-point methods such as Linear Interaction Energy (LIE), Molecular Mechanics-Poisson Boltzmann/Generalized Born Surface Area (MM-PBSA/GBSA) and λ-dynamics, have been applied to a variety of biologically relevant problems. The recent advances in free energy methods and their applications, including the prediction of protein-ligand binding affinity for some important drug targets, are elaborated. Results using a recently developed Quantum Mechanics (QM)/Molecular Mechanics (MM) based Free Energy Perturbation (FEP) method, which has the potential to provide the most accurate estimation of binding affinities to date, are discussed. A case study for the optimization of inhibitors of fructose 1,6-bisphosphatase is described. PMID:23947646
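
As a minimal illustration of the free-energy-perturbation idea, the Zwanzig FEP identity can be checked on a one-dimensional harmonic toy system where the answer is known analytically. Everything below is a textbook sketch, not the QM/MM FEP method of the paper:

```python
import numpy as np

# Zwanzig free energy perturbation:  dA = -kT ln < exp(-dU/kT) >_0,
# applied to switching a harmonic potential U = 0.5*k*x^2 from k0 to k1.
# The analytic answer is dA = (kT/2) ln(k1/k0). Units chosen so kT = 1.

rng = np.random.default_rng(1)
kT, k0, k1 = 1.0, 1.0, 2.0
x = rng.normal(0.0, np.sqrt(kT / k0), size=200_000)   # samples from state 0
dU = 0.5 * (k1 - k0) * x**2                           # perturbation energy
dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))
dA_exact = 0.5 * kT * np.log(k1 / k0)
print(dA_fep, dA_exact)   # both ~0.3466
```

Real pathway methods split the k0 -> k1 switch into many intermediate λ windows precisely because a single-step estimate like this one converges poorly when the two end states overlap weakly.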

  3. Analytical calculation of sensing parameters on carbon nanotube based gas sensors.

    PubMed

    Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh

    2014-01-01

    Carbon nanotubes (CNTs) are nanoscale tubes comprising a network of carbon atoms in a cylindrical arrangement that, compared with their silicon counterparts, present outstanding characteristics such as high mechanical strength, high sensing capability, and a large surface-to-volume ratio. These characteristics, together with the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor, in which the CNT conductance change resulting from the chemical reaction between NH3 and the CNT is employed to model the sensing mechanism with the proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I-V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research. PMID:24658617

  4. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    NASA Astrophysics Data System (ADS)

    Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.

    2006-12-01

    Abacus-based mental calculation is a unique part of Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD-contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using the SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on a virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing, and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, greater involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and lower use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, were noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts show exceptional computational skills compared to non-experts after intensive training.

  5. Generation and use of measurement-based 3-D dose distributions for 3-D dose calculation verification.

    PubMed

    Stern, R L; Fraass, B A; Gerhardsson, A; McShan, D L; Lam, K L

    1992-01-01

    A 3-D radiation therapy treatment planning system calculates dose to an entire volume of points and therefore requires a 3-D distribution of measured dose values for quality assurance and dose calculation verification. To measure such a volumetric distribution with a scanning ion chamber is prohibitively time consuming. A method is presented for the generation of a 3-D grid of dose values based on beam's-eye-view (BEV) film dosimetry. For each field configuration of interest, a set of BEV films at different depths is obtained and digitized, and the optical densities are converted to dose. To reduce inaccuracies associated with film measurement of megavoltage photon depth doses, doses on the different planes are normalized using an ion-chamber measurement of the depth dose. A 3-D grid of dose values is created by interpolation between BEV planes along divergent beam rays. This matrix of measurement-based dose values can then be compared to calculations over the entire volume of interest. This method is demonstrated for three different field configurations. Accuracy of the film-measured dose values is determined by 1-D and 2-D comparisons with ion chamber measurements. Film and ion chamber measurements agree within 2% in the central field regions and within 2.0 mm in the penumbral regions. PMID:1620042

  6. Calculation of Shear Stiffness in Noise Dominated Magnetic Resonance Elastography (MRE) Data Based on Principal Frequency Estimation

    PubMed Central

    McGee, K. P.; Lake, D.; Mariappan, Y; Hubmayr, R. D.; Manduca, A.; Ansell, K.; Ehman, R. L.

    2011-01-01

    Magnetic resonance elastography (MRE) is a noninvasive, phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear wave source and motion-encoding gradient waveforms within the MRE pulse sequence enables visualization of the propagating shear wave throughout the medium under investigation. The encoded shear-wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimation is that the algorithms employed typically calculate shear stiffness from relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low-SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods, but at the expense of a loss of spatial information within the region of interest from which the principal frequency estimate is derived. PMID:21701049
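
The principal-frequency idea can be sketched in a few lines: take the dominant spatial frequency of a (noisy) displacement profile, convert it to a wave speed using the known drive frequency, and form the stiffness as density times wave speed squared. All numbers below are assumed for illustration; this is not the authors' inversion code:

```python
import numpy as np

# Sketch: estimate shear stiffness from the principal spatial frequency of a
# 1-D shear-wave displacement profile, mu = rho * (f_mech / k_peak)^2.

rng = np.random.default_rng(2)
fov = 0.10                 # field of view along one line, m (assumed)
n = 256
x = np.linspace(0.0, fov, n, endpoint=False)
wavelength = 0.02          # true shear wavelength, m (assumed)
f_mech = 200.0             # mechanical drive frequency, Hz (assumed)
rho = 1000.0               # tissue density, kg/m^3
u = np.sin(2 * np.pi * x / wavelength) + 0.5 * rng.standard_normal(n)  # noisy data

spec = np.abs(np.fft.rfft(u))
k = np.fft.rfftfreq(n, d=fov / n)       # spatial frequencies, cycles/m
k_peak = k[np.argmax(spec[1:]) + 1]     # principal frequency (skip the DC bin)
mu = rho * (f_mech / k_peak) ** 2       # shear stiffness, Pa
print(mu)  # ~16 kPa for these assumed numbers
```

Note how the peak of the spectrum survives heavy noise, which is the low-SNR robustness the abstract describes, while the single global estimate discards the spatial variation an inversion method would retain.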

  7. Calculating discharge of phosphorus and nitrogen with groundwater base flow to a small urban stream reach

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Alex; Roy, James W.; Smith, James E.

    2015-09-01

    Elevated levels of nutrients, especially phosphorus, in urban streams can lead to eutrophication and general degradation of stream water quality. Contributions of phosphorus from groundwater have typically been assumed minor, though elevated concentrations have been associated with riparian areas and urban settings. The objective of this study was to investigate the importance of groundwater as a pathway for phosphorus and nitrogen input to a gaining urban stream. The stream at the 28-m study reach was 3-5 m wide and straight, flowing generally eastward, with a relatively smooth bottom of predominantly sand, with some areas of finer sediments and a few boulders. Temperature-based methods were used to estimate the groundwater flux distribution. Detailed concentration distributions in discharging groundwater were mapped using in-stream piezometers and diffusion-based peepers, and showed elevated levels of soluble reactive phosphorus (SRP) and ammonium compared to the stream (while nitrate levels were lower), especially along the south bank, where groundwater fluxes were lower and geochemically reducing conditions dominated. Field evidence suggests the ammonium may originate from nearby landfills, but that local sediments likely contribute the SRP. Ammonium and SRP mass discharges with groundwater were then estimated as the product of the respective concentration distributions and the groundwater flux distribution. These were determined as approximately 9 and 200 g d-1 for SRP and ammonium, respectively, which compares to stream mass discharges over the observed range of base flows of 20-1100 and 270-7600 g d-1, respectively. This suggests that groundwater from this small reach, and any similar areas along Dyment's Creek, has the potential to contribute substantially to the stream nutrient concentrations.
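
The final step of such an estimate is simple arithmetic: mass discharge is the concentration field times the groundwater flux field, summed over the streambed area. A toy version with invented cell values (only the arithmetic, not the field data, is the point):

```python
# Back-of-envelope version of the mass-discharge estimate: groundwater load =
# sum over streambed cells of concentration * Darcy flux * cell area.
# All cell values below are invented for illustration.

cells = [
    # (concentration mg/L, groundwater flux L/(m^2*d), cell area m^2)
    (0.08, 20.0, 15.0),
    (0.15,  5.0, 15.0),   # e.g. a bank zone with higher SRP but lower flux
    (0.02, 30.0, 15.0),
]
load_g_per_d = sum(c * q * a for c, q, a in cells) / 1000.0  # mg -> g
print(load_g_per_d)
```

This product structure is why the low-flux, high-concentration bank zones described above can still contribute meaningfully: concentration and flux enter the load symmetrically.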

  8. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    SciTech Connect

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted waste form. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.

  9. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have heretofore been reported as dose to water, and water may still be preferred as a dose-specification medium. The dose to the tissue medium, Dmed, then needs to be converted into the dose to water in tissue, Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate the photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50 and 300 keV photons and for photons from 125I, 169Yb and 192Ir sources; ratios of mass collision stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing, and this may be the most important water compartment in cells, implying use of ratios of mass collision stopping powers for converting Dmed into Dw,med.
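
The conversion itself can be sketched with the Burlin-type interpolation between the small- and large-cavity limits: for a small water cavity D_w = D_med times a stopping-power ratio, for a large cavity D_w = D_med times a mass energy absorption ratio, and Burlin theory blends the two with a size parameter d. The numeric ratios below are placeholders, not values from the paper:

```python
# Hedged sketch of the cavity-theory conversion. d = 1 gives the small-cavity
# (stopping-power) limit, d = 0 the large-cavity (energy-absorption) limit,
# and intermediate d interpolates as in Burlin theory.

def burlin_dose_to_water(d_med, s_ratio, mu_en_ratio, d):
    """Convert dose-to-medium into dose-to-water for a cavity of size parameter d."""
    return d_med * (d * s_ratio + (1.0 - d) * mu_en_ratio)

d_med = 1.0            # Gy, dose to tissue (placeholder)
s_ratio = 1.03         # water/tissue mass collision stopping power ratio (placeholder)
mu_en_ratio = 1.07     # water/tissue mass energy absorption ratio (placeholder)
for d in (1.0, 0.5, 0.0):
    print(d, burlin_dose_to_water(d_med, s_ratio, mu_en_ratio, d))
```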

  10. Brine release based on structural calculations of damage around an excavation at the Waste Isolation Pilot Plant (WIPP)

    SciTech Connect

    Munson, D.E.; Jensen, A.L.; Webb, S.W.; DeVries, K.L.

    1996-02-01

    In a large in situ experimental circular room, brine inflow was measured over 5 years. After correcting for evaporation losses into the mine ventilation air, the measurements gave data for a period of nearly 3 years. Predicted brine accumulation based on a mechanical "snow plow" model of the volume swept by creep-induced damage, as calculated with the Multimechanism Deformation Coupled Fracture model, was found to agree with experiment. Calculation suggests the damage zone at 5 years effectively extends only some 0.7 m into the salt around the room. Also, because the mechanical model of brine release gives an adequate explanation of the measured data, the hydrological process of brine flow appears to be rapid compared to the mechanical process of brine release.

  11. Calculating the properties of C2H2-C9H16 alkynes, based on the additivity of energy contributions

    NASA Astrophysics Data System (ADS)

    Smolyakov, V. M.; Grebeshkov, V. V.

    2015-05-01

    A ten-constant additive model is obtained for calculating the physicochemical properties of a number of CnH2n-2 alkynes, based on the group additivity method (with allowance for the initial atomic environment), two topological indices that allow for the second atomic environment, and pairwise non-valence interactions (in implicit form) between three atoms, four atoms, and so forth along the chain of a molecule. Two linear dependences are revealed. The obtained formula is used for numerical calculations of the normal heats of vaporization LNBT and normal boiling temperatures Tb of C2H2-C9H16 alkynes, neither of which had been studied experimentally.
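
The backbone of any such additive scheme is a linear least-squares fit of group contributions to measured properties: property_j = sum_i n_ij * c_i, where n_ij counts the structural groups in molecule j and c_i are the fitted constants. A generic sketch with synthetic counts and values (the paper's actual ten-constant model is not reproduced):

```python
import numpy as np

# Generic group-additivity fit: rows are molecules, columns are hypothetical
# structural-group counts; "measurements" are built from known contributions
# so the fit can be checked exactly.

counts = np.array([
    [1, 0, 2],
    [1, 1, 3],
    [2, 1, 4],
    [2, 2, 6],
    [3, 2, 7],
], dtype=float)
true_c = np.array([10.0, 4.0, 2.5])          # synthetic group contributions
props = counts @ true_c                      # exact synthetic "measurements"
c_fit, *_ = np.linalg.lstsq(counts, props, rcond=None)
print(c_fit)   # recovers [10, 4, 2.5]
```

With real data the system is overdetermined and noisy, and extra columns (topological indices, non-valence interaction terms) enter the design matrix exactly like the group counts do.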

  12. Methodologie de conception numerique d'un ventilateur helico-centrifuge basee sur l'emploi du calcul meridien

    NASA Astrophysics Data System (ADS)

    Lallier-Daniels, Dominic

    Fan design is often based on a trial-and-error methodology of improving existing geometries, as well as on the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even in case of success, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the use of meridional (throughflow) calculation for the preliminary design of mixed-flow turbomachines, and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method underlying the proposed design process is presented first. The theoretical framework is developed, and, since meridional calculation remains fundamentally an iterative process, the computational procedure is also presented, including the numerical methods used to solve the fundamental equations. The meridional code written for this master's project is validated against a meridional algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachine design methodology developed in this study is then presented as a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed blade design, and finally a 3D numerical analysis for validation and fine optimization of the geometry.

  13. Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations

    NASA Astrophysics Data System (ADS)

    Orellana, Walter

    The capture and conversion of solar energy into electricity is one of the most important challenges for the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins attract great attention due to their well-known absorption properties in the visible range. However, extended materials such as polymers with similar absorption properties are highly desirable. In this work, we investigate the stability and the electronic and optical properties of polymeric sheets and nanotubes based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of time-dependent density functional perturbation theory. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different from those of the isolated molecules. The H2P and H2Pc nanotubes, however, exhibit wide absorption in the visible and near-UV range, with the largest peaks at 600 and 700 nm, respectively, suggesting good characteristics for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules are also discussed.

  14. Comparison of Traditional and Simultaneous IMRT Boost Technique Basing on Therapeutic Gain Calculation

    SciTech Connect

    Slosarek, Krzysztof; Zajusz, Aleksander; Szlag, Marta

    2008-01-01

    Two different radiotherapy techniques deliver different doses to the treated volumes while increasing the dose in the regions of interest: a traditional technique (CRT), based on consecutively shrinking the irradiation fields during treatment, and intensity-modulated radiation therapy (IMRT) with a concomitant boost. The fractionation schedule differs depending on the applied irradiation technique. The aim of this study was to compare different fractionation schedules with respect to tumor control and normal tissue complications. The analyses of tumor control probability (TCP) and normal tissue complication probability (NTCP) were based on the linear-quadratic (LQ) model of biologically equivalent dose. A therapeutic gain (TG) formula that combines NTCP and TCP for the selected irradiated volumes was introduced to compare the CRT and simultaneous boost (SIB) methods. TG accounts for the different doses per fraction, the overall treatment time (OTT), and selected biological factors such as tumor cell repopulation time. Therapeutic gain increases with the dose per fraction and reaches a maximum at about 3 Gy. A further increase in dose per fraction results in a decrease of TG, mainly because of the escalation of NTCP. The presented TG formula allows the optimization of radiotherapy planning by comparing different treatment plans for individual patients and selecting the optimal fraction dose.
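
The LQ-model trade-off underlying such an analysis can be illustrated with the standard biologically effective dose formula, BED = n*d*(1 + d/(alpha/beta)) for n fractions of size d. Holding the total physical dose fixed, a larger d raises tumour BED (alpha/beta ~ 10 Gy) but raises late normal-tissue BED (alpha/beta ~ 3 Gy) faster, which is why TG eventually turns over. This is a textbook sketch, not the paper's TG formula:

```python
# Biologically effective dose in the linear-quadratic model.
# alpha/beta ~ 10 Gy is a typical tumour value, ~3 Gy a typical late
# normal-tissue value; the 60 Gy total is an assumed example.

def bed(total_dose, d, alpha_beta):
    """BED for total_dose delivered in fractions of size d (Gy)."""
    n = total_dose / d
    return n * d * (1.0 + d / alpha_beta)

total = 60.0   # Gy (assumed)
for d in (2.0, 3.0, 4.0):
    print(d, bed(total, d, 10.0), bed(total, d, 3.0))
```

At d = 2 Gy the tumour/normal-tissue BEDs are 72/100 Gy; at d = 3 Gy they are 78/120 Gy: the normal-tissue term grows faster, so NTCP escalation eventually outweighs the TCP gain.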

  15. Spectral reconstruction of high energy photon beams for kernel based dose calculations.

    PubMed

    Hinson, William H; Bourland, J Daniel

    2002-08-01

    A kernel-based dose computation method with finite-size pencil beams (FSPBs) requires knowledge of the photon spectrum. Published methods of indirect spectral measurement, based on transmission measurements through beam attenuators, use mathematical fits with a large number of parameters and constraints. In this study, we examine a simple strategy for fitting transmission data that models the important physical characteristics of photon beams produced in clinical linear accelerators. The shape of an unattenuated bremsstrahlung spectrum is known, varying linearly from a maximum at zero energy to zero at the maximum energy. This unattenuated spectrum is altered primarily by absorption of low-energy photons in the flattening filter, causing the true spectrum to roll off to zero at low photon energies. A fitting equation models this behavior and has these advantages over previous methods: (1) the equation describes the shape of a bremsstrahlung spectrum based on physical expectations; and (2) only three fit parameters are required, with a single constraint. Results for 4 MV and 6 MV accelerators, for central-axis and off-axis beams, show good agreement with the maximum, average and modal energies of known spectra. Previously published models, representations of beam fluence (energy fluence, dN/dE), experimental methods, and the fitting process are discussed. PMID:12201426
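
A spectrum of the kind described, a triangular bremsstrahlung profile multiplied by a low-energy roll-off, can be written down and its summary energies evaluated numerically. The functional form and parameter values below are assumptions for illustration, not the authors' fitting equation:

```python
import numpy as np

# Assumed spectral shape: phi(E) = A * (1 - E/Emax) * (1 - exp(-E/E0)),
# i.e. the linear bremsstrahlung shape (max at E=0, zero at Emax) times a
# soft low-energy cutoff representing flattening-filter absorption.
# Emax and E0 are illustrative values for a "6 MV" beam.

Emax, E0 = 6.0, 0.8            # MeV (assumed)
E = np.linspace(0.0, Emax, 2001)
dE = E[1] - E[0]
phi = (1.0 - E / Emax) * (1.0 - np.exp(-E / E0))
phi /= (phi * dE).sum()        # normalize to unit area

mean_E = (E * phi * dE).sum()  # average energy of the spectrum
mode_E = E[np.argmax(phi)]     # modal energy
print(mean_E, mode_E)
```

The roll-off pushes the mode well above zero while the long triangular tail keeps the mean higher still, reproducing the qualitative mean/modal ordering of clinical spectra.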

  16. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    PubMed Central

    Parodi, K; Ferrari, A; Sommerer, F; Paganetti, H

    2008-01-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modeling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  17. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. 
Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
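The CT-to-material mapping described above (a discrete set of tissue classes plus a continuous density scaling factor per voxel) can be sketched as follows. This is only an illustration of the idea: the HU bin edges, nominal class densities, and calibration points below are invented placeholders, not the clinical calibration used at MGH.

```python
import numpy as np

# Illustrative HU-to-density calibration points (a real clinical curve is
# fitted to scanner data; these numbers are made up for the sketch).
HU_PTS  = np.array([-1000.0, 0.0, 1000.0, 3000.0])
RHO_PTS = np.array([0.00121, 1.0, 1.6, 2.8])          # g/cm^3

# Coarse tissue classes used by the transport code:
# (HU upper edge, nominal class density) -- assumed values.
CLASSES = [(-150.0, 0.00121), (100.0, 1.0), (3000.0, 1.6)]

def density(hu):
    """Continuous mass density from the piecewise-linear HU calibration."""
    return float(np.interp(hu, HU_PTS, RHO_PTS))

def class_and_scale(hu):
    """Tissue-class index plus the scaling factor that restores the
    continuous HU dependence on top of the class's nominal density."""
    for i, (edge, rho_nom) in enumerate(CLASSES):
        if hu <= edge:
            return i, density(hu) / rho_nom
    return len(CLASSES) - 1, density(hu) / CLASSES[-1][1]

idx, scale = class_and_scale(50.0)   # a soft-tissue voxel
```

The transport code then assigns each voxel the composition of its class while weighting densities (and stopping powers) by the scaling factor, preserving the continuous HU dependence.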

  18. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in the memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantages of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration.
The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU
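The OS-level tuning mentioned above (binding threads to CPUs so that first-touch page allocation lands in local NUMA memory) can be illustrated with the Linux scheduler API; the choice of core here is arbitrary, and in practice a closed-source application such as MATLAB would be launched under an external binding tool rather than modified.

```python
import os

def pin_to_cores(cores):
    """Bind the calling process to the given CPU cores (Linux-only API).
    On a NUMA system one would choose cores from a single socket, so that
    pages allocated under the first-touch policy end up in that socket's
    local memory."""
    os.sched_setaffinity(0, set(cores))   # 0 = the calling process
    return os.sched_getaffinity(0)

allowed = pin_to_cores([0])               # demo: restrict to core 0
```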

  19. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

Nonnegative Tucker3 decomposition (NTD) has attracted considerable attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in the field of mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(nN lg n) in 3D spaces to O(R1R2 n lg n) in 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneously updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more inerratic feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault expression can also be obtained from the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but also extracts sparser and more inerratic bispectrum features of gearbox faults.
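As background to the bispectrum features being decomposed, a minimal direct (FFT-based) bispectrum estimator is sketched below; the Choi-Williams kernel and the Tucker3 machinery of NTD_EDF are omitted, and the test signal, with a quadratically coupled tone triple (f1 + f2 = f3), is synthetic.

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate averaged over segments:
    B(f1, f2) = E[ X(f1) X(f2) conj(X(f1 + f2)) ].
    A minimal sketch only; the paper's NTD_EDF method additionally applies
    a Choi-Williams kernel and nonnegative Tucker3 decomposition."""
    segs = len(x) // nfft
    B = np.zeros((nfft, nfft), dtype=complex)
    # frequency-sum index f1 + f2, with FFT wrap-around
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    for s in range(segs):
        X = np.fft.fft(x[s * nfft:(s + 1) * nfft])
        B += np.outer(X, X) * np.conj(X[idx])
    return B / segs

rng = np.random.default_rng(0)
t = np.arange(4096)
# quadratically coupled tones: 0.125 + 0.1875 = 0.3125 (bins 8 + 12 = 20)
x = (np.cos(2 * np.pi * 0.125 * t) + np.cos(2 * np.pi * 0.1875 * t)
     + np.cos(2 * np.pi * 0.3125 * t) + 0.1 * rng.standard_normal(t.size))
B = np.abs(bispectrum(x))                 # peak expected at B[8, 12]
```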

  20. Modeling of nanosecond-laser ablation: calculations based on a nonstationary averaging technique (spatial moments)

    NASA Astrophysics Data System (ADS)

    Arnold, N. D.; Luk'yanchuk, Boris S.; Bityurin, Nikita M.; Baeuerle, D.

    1998-09-01

dependence in (alpha)g(T). Small vaporization enthalpy results in a sub-linear h((phi)) dependence, which, nevertheless, remains faster than logarithmic. With weakly absorbing materials ablation may proceed in two significantly different regimes -- without or with ablation of the heated subsurface layer. The latter occurs at higher fluences and reveals significantly higher ablation temperatures, but is only weakly reflected in the ablation curves. Calculations are performed in order to study: (1) the influence of the duration and temporal profile of the laser pulse on the threshold fluence (phi)th; this is particularly important for strong absorbers, where heat conduction determines the temperature distribution; (2) the influence of the temperature dependences of material parameters on the ablation curves (ablated depth versus laser fluence) for the regimes (phi) approximately equal to (phi)th and (phi) very much greater than (phi)th; (3) the consequences of shielding of the incoming radiation at high fluences; and (4) the differences in ablation curves for materials with big and small ablation enthalpy (e.g., metals and polymers which ablate thermally). Nanosecond laser ablation has been studied for a large variety of different materials and laser wavelengths. As an illustrative example, the method is applied to the quantitative analysis of single-pulse ablation of polyimide (Kapton TM H).
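For the strongly absorbing, quasi-stationary case discussed above, the single-pulse ablated depth follows an approximately logarithmic dependence on fluence above threshold. A toy version of that ablation curve, with made-up parameter values rather than the paper's fitted ones, is:

```python
import math

def ablated_depth(phi, phi_th=30.0, l_eff=0.05):
    """Single-pulse ablated depth (micrometers) from the logarithmic law
    h = l_eff * ln(phi / phi_th) for phi > phi_th, else 0.
    phi in mJ/cm^2; phi_th and l_eff are illustrative numbers only, not
    values fitted to any material in the paper."""
    if phi <= phi_th:
        return 0.0
    return l_eff * math.log(phi / phi_th)
```

Sub-linear but faster-than-logarithmic h(phi) behaviour, as described for small vaporization enthalpy, would replace the logarithm above with a correspondingly stronger function.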

  1. Transport calculations in support of simulation of nuclear-based explosive detection systems

    SciTech Connect

    Shayer, Z.; Bendahan, J.; Schulze, M.

    1993-12-31

Explosives concealed in trucks or large containers can be detected utilizing a system based on pulsed fast neutron analysis (PFNA) or thermal neutron analysis (TNA). These systems are able to determine the spatial distribution of the various elements in the interrogated volume. In the design of the above systems, the charged and neutral particles are traced from the source through to their arrival at the detectors. On-line analysis of the signals from the detectors is used to identify the materials which constitute the sample, employing statistical and inverse methods. An extensive research program to develop the computational capability to model this process is underway. The results will produce an optimized and cost-effective design of a TNA and PFNA system.

  2. A Novel Multi-objective Genetic Algorithms-Based Calculation of Hill's Coefficients

    NASA Astrophysics Data System (ADS)

    Hariharan, Krishnaswamy; Chakraborti, Nirupam; Barlat, Frédéric; Lee, Myoung-Gyu

    2014-06-01

The anisotropic coefficients of Hill's yield criterion are determined through a novel genetic algorithms-based multi-objective optimization approach. The classical method of determining anisotropic coefficients is sensitive to the effective plastic strain. In the present procedure, that limitation is overcome using a genetically evolved meta-model of the entire stress-strain curve, obtained from uniaxial tension tests conducted in the rolling and transverse directions, and from biaxial tension. Then, an effective strain that causes the least error in terms of two theoretically derived objective functions is chosen. The anisotropic constants evolved through genetic algorithms correlate very well with the classical results. This approach is expected to be successful for more complex constitutive equations as well.
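For reference, the classical single-strain-level method against which the GA approach is benchmarked computes the Hill-1948 coefficients directly from the three Lankford r-values. A sketch under the usual G + H = 1 normalization (the r-values in the example call are illustrative, not from the paper):

```python
def hill48_coefficients(r0, r45, r90):
    """Classical Hill-1948 anisotropy coefficients from Lankford r-values
    measured at 0, 45 and 90 degrees to the rolling direction, with the
    normalization G + H = 1. For r0 = r45 = r90 = 1 the criterion
    reduces to von Mises (F = G = H = 0.5, N = 1.5)."""
    H = r0 / (1.0 + r0)
    G = 1.0 / (1.0 + r0)
    F = r0 / (r90 * (1.0 + r0))
    N = (r0 + r90) * (1.0 + 2.0 * r45) / (2.0 * r90 * (1.0 + r0))
    return F, G, H, N

F, G, H, N = hill48_coefficients(r0=1.8, r45=1.5, r90=2.2)
```

The paper's point is that these r-values, and hence the coefficients, depend on the effective strain at which they are evaluated; the GA meta-model removes that arbitrariness.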

  3. The Venus nitric oxide night airglow - Model calculations based on the Venus Thermospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.

    1990-01-01

The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10^9 /sq cm/sec, corresponding to the dayside net production of N atoms needed for transport.

  4. Use of ground-based remotely sensed data for surface energy balance calculations during Monsoon '90

    NASA Technical Reports Server (NTRS)

    Moran, M. S.; Kustas, William P.; Vidal, Alain; Stannard, David I.; Blanford, James

    1991-01-01

    Surface energy balance was evaluated at a semiarid watershed using direct and indirect measurements of the turbulent fluxes, a remote technique based on measurements of surface reflectance and temperature, and conventional meteorological information. Comparison of remote estimates of net radiant flux and soil heat flux densities with measured values showed errors on the order of +/-40 W/sq m. To account for the effects of sparse vegetation, semi-empirical adjustments to aerodynamic resistance were required for evaluation of sensible heat flux density (H). However, a significant scatter in estimated versus measured latent heat flux density (LE) was still observed, +/-75 W/sq m over a range from 100-400 W/sq m. The errors of H and LE estimates were reduced to +/-50 W/sq m when observations were restricted to clear sky conditions.
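The remote technique described, estimating LE as the residual of the surface energy balance with sensible heat from a bulk resistance formulation, reduces to a few lines. The values below are illustrative only, and the semi-empirical aerodynamic-resistance adjustments for sparse vegetation mentioned in the abstract are not included.

```python
RHO_AIR = 1.15    # kg m^-3, typical semiarid near-surface value (assumed)
CP_AIR  = 1005.0  # J kg^-1 K^-1, specific heat of air

def latent_heat_residual(Rn, G, Ts, Ta, r_ah):
    """Latent heat flux LE (W m^-2) as the residual of the surface energy
    balance, LE = Rn - G - H, with sensible heat from the bulk form
    H = rho * cp * (Ts - Ta) / r_ah (Ts, Ta in K; r_ah in s m^-1).
    A textbook sketch of the approach, with illustrative inputs."""
    H = RHO_AIR * CP_AIR * (Ts - Ta) / r_ah
    return Rn - G - H, H

LE, H = latent_heat_residual(Rn=520.0, G=80.0, Ts=308.0, Ta=303.0, r_ah=30.0)
```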

  5. Application of Model Based Parameter Estimation for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
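The core of MBPE, building a rational (Padé-type) approximant of a response from Taylor/frequency-derivative data, can be sketched generically. This is not the paper's MoM code, just the coefficient-matching step, demonstrated on the [1/1] approximant of exp(x):

```python
import numpy as np

def pade_from_taylor(c, L, M):
    """Coefficients (a, b) of the [L/M] rational function whose Taylor
    expansion matches c[0..L+M]; b[0] is normalized to 1. MBPE applies
    the same idea with frequency derivatives of the integral-equation
    current in place of c (a sketch, not the paper's actual code)."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_j b_j * c[L+k-j] = 0 for k = 1..M (b_0 = 1).
    A = np.zeros((M, M))
    rhs = np.zeros(M)
    for k in range(1, M + 1):
        rhs[k - 1] = -c[L + k]
        for j in range(1, M + 1):
            if L + k - j >= 0:
                A[k - 1, j - 1] = c[L + k - j]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by Cauchy product of b with the Taylor series.
    a = np.array([sum(b[j] * c[i - j] for j in range(0, min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# [1/1] Pade of exp(x) from c = [1, 1, 1/2] gives (1 + x/2)/(1 - x/2)
a, b = pade_from_taylor([1.0, 1.0, 0.5], L=1, M=1)
```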

  6. Yield estimation based on calculated comparisons to particle velocity data recorded at low stress

    SciTech Connect

    Rambo, J.

    1993-05-01

This paper deals with the problem of optimizing the yield estimation process if some of the material properties are known from geophysical measurements and others are inferred from in-situ dynamic measurements. The material models and 2-D simulations of the event are combined to determine the yield. Other methods of yield determination from peak particle velocity data have mostly been based on comparisons of nearby events in similar media at the Nevada Test Site. These methods are largely empirical and are subject to additional error when a new event has different properties than the population being used for a basis of comparison. The effect of material variations can be examined using Lawrence Livermore National Laboratory's KDYNA computer code. The data from the FLAX event provide an instructive example for simulation.

  8. Calculation Of Position And Velocity Of GLONASS Satellite Based On Analytical Theory Of Motion

    NASA Astrophysics Data System (ADS)

    Góral, W.; Skorupa, B.

    2015-09-01

The presented algorithms for computation of orbital elements and positions of GLONASS satellites are based on the asymmetric variant of the generalized problem of two fixed centers. The analytical algorithm embraces the disturbing acceleration due to the second (J2) and third (J3) zonal harmonics, and partially the fourth, in the expansion of the Earth's gravitational potential. The other main disturbing accelerations - due to the attraction of the Moon and the Sun - are also computed analytically, where the geocentric position vectors of the Moon and the Sun are obtained by evaluating known analytical expressions for their motion. The given numerical examples show that the proposed analytical method for computation of position and velocity of GLONASS satellites can be an interesting alternative to presently used numerical methods.
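The full two-fixed-centers analytical theory is beyond a short sketch, but the elementary step common to any elements-to-position conversion, solving Kepler's equation for the eccentric anomaly, looks like this (Newton iteration; the input values are illustrative):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration.
    The paper's analytical theory (two fixed centers, J2/J3 terms,
    luni-solar perturbations) is far richer; this is only the elementary
    orbit-propagation step underlying elements-to-position conversion."""
    E = M if e < 0.8 else math.pi     # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = eccentric_anomaly(M=1.0, e=0.001)   # GLONASS orbits are near-circular
```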

  9. Model calculations of Raman responses for multiband iron-based superconductors

    NASA Astrophysics Data System (ADS)

    Sauer, Christoph

In this thesis I compute Raman responses for a free-electron band structure model based on ARPES measurements on multiband iron-based superconductors. First a constant and then a k-dependent superconducting gap is used. Applying an effective mass approximation leaves A1g and B2g as the only nonvanishing symmetry channels. In the latter only one band contributes and a square-root singularity is observed for a constant gap. The k-dependent gap leads to a threshold/log-singularity structure. The unscreened A1g channel shows the same features but all bands contribute and sum up. The screened single-band A1g response vanishes for both gaps. Two-band responses with the same constant gap are perfectly screened with identical Raman vertices, unscreened with opposite signs and equal mass ratios, and partially screened in all other cases. With two different constant gaps the singularities are removed and a dome-like shape appears, except for the vanishing case of equal vertices. The n-band response consists of a sum of two-band terms normalized by all n bands, and the singularities corresponding to all uniquely present gap values are removed. With the k-dependent gap the singularities are removed and a dome-like shape appears in all combinations of two-band responses and in the response for all bands. The dome in the response for all bands shows a flat continuum in between a threshold and a sharp peak produced by the two-band terms containing bands of opposite signs.

  10. Phase noise calculation and variability analysis of RFCMOS LC oscillator based on physics-based mixed-mode simulation

    NASA Astrophysics Data System (ADS)

    Hong, Sung-Min; Oh, Yongho; Kim, Namhyung; Rieh, Jae-Sung

    2013-01-01

    A mixed-mode technology computer-aided design framework, which can evaluate the periodic steady-state solution of the oscillator efficiently, has been applied to an RFCMOS LC oscillator. Physics-based simulation of active devices makes it possible to link the internal parameters inside the devices and the performance of the oscillator directly. The phase noise of the oscillator is simulated with physics-based device simulation and the results are compared with the experimental data. Moreover, the statistical effect of the random dopant fluctuation on the oscillation frequency is investigated.

  11. Voronoi-cell finite difference method for accurate electronic structure calculation of polyatomic molecules on unstructured grids

    SciTech Connect

    Son, Sang-Kil

    2011-03-01

    We introduce a new numerical grid-based method on unstructured grids in the three-dimensional real-space to investigate the electronic structure of polyatomic molecules. The Voronoi-cell finite difference (VFD) method realizes a discrete Laplacian operator based on Voronoi cells and their natural neighbors, featuring high adaptivity and simplicity. To resolve multicenter Coulomb singularity in all-electron calculations of polyatomic molecules, this method utilizes highly adaptive molecular grids which consist of spherical atomic grids. It provides accurate and efficient solutions for the Schroedinger equation and the Poisson equation with the all-electron Coulomb potentials regardless of the coordinate system and the molecular symmetry. For numerical examples, we assess accuracy of the VFD method for electronic structures of one-electron polyatomic systems, and apply the method to the density-functional theory for many-electron polyatomic molecules.
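The idea of a grid-based discrete Laplacian can be illustrated in its simplest structured 1D form; the VFD method generalizes this stencil to Voronoi cells on unstructured 3D grids. The harmonic-oscillator test problem below has known eigenvalues n + 1/2 in atomic units, so the accuracy of the discretization is directly checkable.

```python
import numpy as np

# 1D finite-difference Hamiltonian: H = -1/2 d^2/dx^2 + x^2/2 (atomic units)
n, L = 800, 16.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
# 3-point discrete Laplacian on a uniform grid (VFD replaces this with a
# Voronoi-cell stencil over natural neighbors)
lap = (np.diag(np.full(n - 1, 1.0), 1) + np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)) / h**2
H = -0.5 * lap + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)[:3]           # lowest three eigenvalues
```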

  12. Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions

    SciTech Connect

    Li, Jun; Yim, Man-Sung; McNelis, David N.

    2007-07-01

explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three of the factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open source literature. (authors)
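A minimal statistical form consistent with the three-factor framing is a logistic model over technology, finance, and motivation scores. The coefficients below are invented for illustration of the functional form only; they are not the paper's fitted values.

```python
import math

def proliferation_probability(tech, finance, motivation,
                              beta=(-6.0, 2.0, 1.5, 3.5)):
    """Logistic model of a proliferation decision from the three factors
    discussed above, each scored on [0, 1]. The beta coefficients are
    placeholders purely to illustrate the functional form; the paper fits
    its model to open-source country data."""
    b0, b1, b2, b3 = beta
    z = b0 + b1 * tech + b2 * finance + b3 * motivation
    return 1.0 / (1.0 + math.exp(-z))

p_low  = proliferation_probability(0.2, 0.3, 0.1)   # weak on all factors
p_high = proliferation_probability(0.9, 0.9, 0.9)   # strong on all factors
```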

  13. Aromatic character of planar boron-based clusters revisited by ring current calculations.

    PubMed

    Pham, Hung Tan; Lim, Kie Zen; Havenith, Remco W A; Nguyen, Minh Tho

    2016-04-28

    The planarity of small boron-based clusters is the result of an interplay between geometry, electron delocalization, covalent bonding and stability. These compounds contain two different bonding patterns involving both σ and π delocalized bonds, and up to now, their aromaticity has been assigned mainly using the classical (4N + 2) electron count for both types of electrons. In the present study, we reexplored the aromatic feature of different types of planar boron-based clusters making use of the ring current approach. B3(+/-), B4(2-), B5(+/-), B6, B7(-), B8(2-), B9(-), B10(2-), B11(-), B12, B13(+), B14(2-) and B16(2-) are characterized by magnetic responses to be doubly σ and π aromatic species in which the π aromaticity can be predicted using the (4N + 2) electron count. The triply aromatic character of B12 and B13(+) is confirmed. The π electrons of B18(2-), B19(-) and B20(2-) obey the disk aromaticity rule with an electronic configuration of [1σ(2)1π(4)1δ(4)2σ(2)] rather than the (4N + 2) count. The double aromaticity feature is observed for boron hydride cycles including B@B5H5(+), Li7B5H5 and M@BnHn(q) clusters from both the (4N + 2) rule and ring current maps. The double π and σ aromaticity in carbon-boron planar cycles B7C(-), B8C, B6C2, B9C(-), B8C2 and B7C3(-) is in conflict with the Hückel electron count. This is also the case for the ions B11C5(+/-) whose ring current indicators suggest that they belong to the class of double aromaticity, in which the π electrons obey the disk aromaticity characteristics. In many clusters, the classical electron count cannot be applied, and the magnetic responses of the electron density expressed in terms of the ring current provide us with a more consistent criterion for determining their aromatic character. PMID:26956732

  14. A comparison of a statistical-mechanics based plasticity model with discrete dislocation plasticity calculations

    NASA Astrophysics Data System (ADS)

    Yefimov, S.; Groma, I.; van der Giessen, E.

    2004-02-01

    A two-dimensional nonlocal version of continuum crystal plasticity theory is proposed, which is based on a statistical-mechanics description of the collective behavior of dislocations coupled to standard small-strain crystal continuum kinematics for single slip. It involves a set of transport equations for the total dislocation density field and for the net-Burgers vector density field, which include a slip system back stress associated to the gradient of the net-Burgers vector density. The theory is applied to the problem of shearing of a two-dimensional composite material with elastic reinforcements in a crystalline matrix. The results are compared to those of discrete dislocation simulations of the same problem. The continuum theory is shown to be able to pick up the distinct dependence on the size of the reinforcing particles for one of the morphologies being studied. Also, its predictions are consistent with the discrete dislocation results during unloading, showing a pronounced Bauschinger effect. None of these features are captured by standard local plasticity theories.

  15. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors.

    PubMed

    Herschtal, A; Te Marvelde, L; Mengersen, K; Hosseinifard, Z; Foroudi, F; Devereux, T; Pham, D; Ball, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T

    2015-03-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement ('random error') than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. PMID:25658193
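A 1D toy version of the model, with patient-specific random-error variances drawn from an inverse-gamma distribution, shows how margin width maps to the fraction of patients meeting a per-patient coverage criterion. The distribution parameters and the "95% of fractions inside the margin" criterion are illustrative simplifications of the paper's dose-based requirement.

```python
import numpy as np

rng = np.random.default_rng(1)

def margin_coverage(margin_mm, n_patients=4000, n_fractions=30,
                    shape=4.0, scale=3.0):
    """Fraction of simulated patients whose displacement stays inside the
    margin in >= 95% of fractions, when each patient's random-error
    variance is inverse-gamma distributed (variance = scale / Gamma(shape)).
    Parameters are invented for the sketch."""
    var = scale / rng.gamma(shape, 1.0, size=n_patients)     # IG variances
    disp = rng.normal(0.0, np.sqrt(var)[:, None],
                      size=(n_patients, n_fractions))        # per-fraction shifts
    inside = np.abs(disp) <= margin_mm
    return float(np.mean(inside.mean(axis=1) >= 0.95))

cov_small = margin_coverage(1.0)   # narrow margin: few patients covered
cov_large = margin_coverage(6.0)   # wide margin: most patients covered
```

The heavy tail of the inverse-gamma variance is what drives the paper's conclusion: a margin sized for the average patient under-covers the high-variability minority.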

  16. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations.

    PubMed

    Attia, Khalid A M; El-Abasawi, Nasr M; Abdel-Azim, Ahmed H

    2016-04-01

A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(p-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10(-2)-1.0 × 10(-5) M with a detection limit of 8.5 × 10(-6) M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulations. Also, the obtained results have been statistically compared to those of a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. PMID:26838908
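The reported near-Nernstian behaviour corresponds to fitting electrode potential against log10 of concentration; a sketch with synthetic readings standing in for the real PAP calibration data (~59 mV/decade is the Nernstian slope for a monovalent cation at room temperature):

```python
import numpy as np

# Synthetic ISE calibration: EMF (mV) vs concentration (mol/L).
# These readings are made up to mimic a 59.5 mV/decade response.
conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])
emf  = np.array([120.0, 179.5, 239.0, 298.5])

# Linear fit of EMF against log10(concentration): slope = mV per decade.
slope, intercept = np.polyfit(np.log10(conc), emf, 1)
```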

  17. First-principle electronic structure calculations for magnetic moment in iron-based superconductors: An LSDA + negative U study

    NASA Astrophysics Data System (ADS)

    Nakamura, H.; Hayashi, N.; Nakai, N.; Okumura, M.; Machida, M.

    2009-10-01

In order to resolve a discrepancy in the magnetic moment on Fe between experimental and calculated results, we perform first-principle electronic structure calculations for iron-based superconductors. LaFeAsO1-x and LiFeAs also show similar SDW. So far, the first-principle calculations on LaFeAsO actually predicted the SDW state as a ground state. However, the predicted magnetic moment (∼2 μB) per Fe atom is much larger than the observed one (∼0.35 μB) in experiments [2,4]. The authors suggested that the discrepancy can be resolved by expanding U into a negative U range within the LSDA + U framework. In this paper, we revisit the discrepancy and clarify why the negative correction is essential in these compounds. See Ref. [5] for the details of calculation data by LSDA + negative U. In first-principle calculations on compounds including transition metals, the total energy is frequently corrected by the "LSDA + U" approach. The effective parameter is theoretically re-expressed as Ueff (≡ U - J), where U is the on-site Coulomb repulsion (Hubbard U) and J is the atomic-orbital intra-exchange energy (Hund's coupling parameter) [6]. The parameter Ueff employed in electronic structure calculations is usually positive. The positivity promotes the localized character of d-electrons and enhances the magnetic moment in the cases of magnetically ordered compounds. Normally, this positive correction successfully works. In choosing the parameter, one can in principle extend the Ueff range to a negative region. The negative case [7] is not popular, but it can occur in the following two cases [8]: (i) the Hubbard U becomes negative and (ii) the intra-exchange J is effectively larger than the Hubbard U. The case (i) has been suggested by many authors based on various theoretical considerations. Here, we note that U should be estimated once screening effects on the long-range Coulomb interaction are taken into account. In fact, a small U has been reported [9]. Thus, when the

  18. A robust cooperative spectrum sensing scheme based on Dempster-Shafer theory and trustworthiness degree calculation in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru

    2014-12-01

    Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS) which benefits from the spatial diversity has been studied extensively. Since CSS is vulnerable to the attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps, which are basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by Dempster-Shafer rule, respectively. Our proposed scheme evaluates the trustworthiness degree of SUs from both current difference aspect and historical behavior aspect and exploits Dempster-Shafer theory's potential to establish a `soft update' approach for the reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also reserve the current difference for each SU to achieve a better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different number of malicious SUs.
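The fusion step of the scheme above uses Dempster's rule of combination. A minimal sketch over the two-hypothesis frame (channel idle H0 / occupied H1) with an ignorance mass 'Theta', using made-up mass values in place of real sensing reports:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    (BPAs) over the frame {'H0', 'H1'}, with 'Theta' carrying the ignorance
    mass. Conflicting mass is renormalized away. A minimal sketch of the
    fusion step; trustworthiness weighting of BPAs is omitted."""
    def meet(a, b):
        if a == 'Theta':
            return b
        if b == 'Theta':
            return a
        return a if a == b else None          # None = total conflict
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += wa * wb
            else:
                combined[c] = combined.get(c, 0.0) + wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two secondary users both lean toward 'occupied' (H1):
m = dempster_combine({'H1': 0.6, 'H0': 0.1, 'Theta': 0.3},
                     {'H1': 0.5, 'H0': 0.2, 'Theta': 0.3})
```

Combining agreeing evidence concentrates mass on H1 while shrinking the ignorance mass, which is the behaviour the trustworthiness weighting then modulates per user.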

  19. Electronic absorption spectra of imidazolium-based ionic liquids studied by far-ultraviolet spectroscopy and quantum chemical calculations.

    PubMed

    Tanabe, Ichiro; Kurawaki, Yuji; Morisawa, Yusuke; Ozaki, Yukihiro

    2016-08-10

    Electronic absorption spectra of imidazolium-based ionic liquids were studied by far- and deep-ultraviolet spectroscopy and quantum chemical calculations. The absorption spectra in the 145-300 nm region of imidazolium-based ionic liquids, [Cnmim](+)[BF4](-) (n = 2, 4, 8) and [C4mim](+)[PF6](-), were recorded using our original attenuated total reflectance (ATR) system spectrometer. The obtained spectra had two definitive peaks at ∼160 and ∼210 nm. Depending on the number of carbon atoms in the alkyl side chain, the peak wavelength around 160 nm changed, while that around 210 nm remained at almost the same wavelength. Quantum chemical calculation results based on the time-dependent density functional theory (TD-DFT) also showed the corresponding peak shifts. In contrast, there was almost no significant difference between [C4mim](+)[BF4](-) and [C4mim](+)[PF6](-), which corresponded with our calculations. Therefore, it can be concluded that the absorption spectra in the 145-300 nm region are mainly determined by the cations when fluorine-containing anions are adopted. In addition, upon addition of organic solvent (acetonitrile) to [C4mim](+)[BF4](-), small peak shifts to the longer wavelength were revealed for both peaks at ∼160 and ∼210 nm. The peak shift in the deep-ultraviolet region (≤200 nm) in the presence of the solvent, which indicates the change of electronic states of the ionic liquid, was experimentally observed for the first time by using the ATR spectrometer. PMID:27471106

  20. Calculating averted caries attributable to school-based sealant programs with a minimal data set

    PubMed Central

    Griffin, Susan O.; Jones, Kari; Crespin, Matthew

    2016-01-01

    Objectives We describe a methodology for school-based sealant programs (SBSP) to estimate averted cavities (i.e., the difference in cavities without and with SBSP) over 9 years using a minimal data set. Methods A Markov model was used to estimate averted cavities. An SBSP would input estimates of its annual attack rate (AR) and 1-year retention rate. The model estimated retention 2+ years after placement with a functional form obtained from the literature. Assuming a constant AR, an SBSP can estimate its AR with child-level data collected prior to sealant placement on sealant presence, number of decayed/filled first molars, and age. We demonstrate the methodology with data from the Wisconsin SBSP. Finally, we examine how sensitive the averted-cavities estimates obtained with this methodology are if an SBSP were to over- or underestimate its AR or 1-year retention. Results Demonstrating the methodology with the estimated AR (= 7 percent) and 1-year retention (= 92 percent) from the Wisconsin SBSP data, we found that placing 31,324 sealants averted 10,718 cavities. Sensitivity analysis indicated that, for any AR, the magnitude of the error (percent) in estimating averted cavities was always less than the magnitude of the error in specifying the AR, and equal to the error in specifying the 1-year retention rate. We also found that estimates of averted cavities were more robust to misspecifications of AR for higher- versus lower-risk children. Conclusions With Excel (Microsoft Corporation, Redmond, WA, USA) spreadsheets available upon request, SBSPs can use this methodology to generate reasonable estimates of their impact with a minimal data set. PMID:24423023
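    The averted-cavities idea can be sketched with a toy per-tooth model; the geometric retention decay below is a hypothetical stand-in for the literature-derived functional form the paper actually uses.

```python
# Minimal sketch of "averted cavities": compare the expected number of
# cavities over 9 years with and without a sealant, for one sound first
# molar. The retention curve is a hypothetical stand-in for the
# literature-derived functional form used in the paper.

def expected_cavities(attack_rate, years, retention=None):
    """Expected cavities for one sound tooth over `years`.
    `retention[t]` = probability the sealant is still retained in year t."""
    p_sound = 1.0       # probability the tooth is still sound
    expected = 0.0
    for t in range(years):
        protected = retention[t] if retention else 0.0
        p_attack = attack_rate * (1.0 - protected)  # sealant blocks attacks while retained
        expected += p_sound * p_attack
        p_sound *= (1.0 - p_attack)
    return expected

ar, years = 0.07, 9                                  # AR and horizon from the abstract
retention = [0.92 * (0.95 ** t) for t in range(years)]  # hypothetical decay from 92%
averted = expected_cavities(ar, years) - expected_cavities(ar, years, retention)
print(f"averted cavities per sealed tooth: {averted:.3f}")
```

    Without a sealant the model reduces to 1 - (1 - AR)^9, so only the AR and the retention curve are needed, which is the "minimal data set" point of the paper.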

  1. POLYANA-A tool for the calculation of molecular radial distribution functions based on Molecular Dynamics trajectories

    NASA Astrophysics Data System (ADS)

    Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios

    2015-12-01

    We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). In designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
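    The quantity POLYANA computes can be illustrated with a toy centre-of-mass RDF under periodic boundary conditions; uniform random "molecules" stand in here for a real trajectory frame, so g(r) should simply fluctuate around 1.

```python
# Toy version of what POLYANA computes: a radial distribution function
# g(r) for molecular centres of mass in a cubic periodic box.
import math, random

def rdf(coms, box, dr, r_max):
    n = len(coms)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = coms[i][a] - coms[j][a]
                dx -= box * round(dx / box)   # minimum-image convention
                d2 += dx * dx
            d = math.sqrt(d2)
            if d < r_max:
                hist[int(d / dr)] += 2        # count both i->j and j->i
    rho = n / box ** 3
    g = []
    for b in range(nbins):
        # volume of the spherical shell [b*dr, (b+1)*dr)
        shell = 4.0 / 3.0 * math.pi * ((b + 1) ** 3 - b ** 3) * dr ** 3
        g.append(hist[b] / (n * rho * shell))
    return g

random.seed(1)
box = 10.0
coms = [[random.uniform(0, box) for _ in range(3)] for _ in range(200)]
g = rdf(coms, box, dr=0.25, r_max=5.0)
print(sum(g[8:]) / len(g[8:]))   # ideal-gas tail: close to 1
```

    A real implementation averages the histogram over many trajectory frames before normalizing; r_max is kept at half the box length so the minimum-image convention stays valid.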

  2. Satellite based calculation of spatially distributed crop water requirements for cotton and wheat cultivation in Fergana Valley, Uzbekistan

    NASA Astrophysics Data System (ADS)

    Conrad, Christopher; Rahmann, Maren; Machwitz, Miriam; Stulina, Galina; Paeth, Heiko; Dech, Stefan

    2013-11-01

    This study focuses on the generation of reliable data for improving land and water use in Central Asia. An object-based remote sensing classification is applied and combined with the CropWat model developed by the Food and Agriculture Organization (FAO) to determine crop distribution and water requirements for irrigation of cotton and winter-wheat in Fergana Valley, Uzbekistan. The crop classification is conducted on RapidEye and Landsat data acquired before the onset of the main summer irrigation phases in July using a random forest algorithm. The ClimWat database of FAO is utilized for calculating crop water requirements (CWR) and crop irrigation requirements (CIR).
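    The CropWat-style water balance reduces to two standard FAO relations: CWR = Kc × ET0 and CIR = max(0, CWR − effective rainfall), accumulated over the growing season. A sketch with illustrative monthly values (not Fergana Valley data):

```python
# The FAO CropWat approach in brief: crop water requirement (CWR) is the
# reference evapotranspiration ET0 scaled by a crop coefficient Kc, and
# the crop irrigation requirement (CIR) is CWR minus effective rainfall.
# All monthly values below are illustrative, not Fergana Valley data.

def cwr_cir(et0, kc, eff_rain):
    cwr = [e * k for e, k in zip(et0, kc)]                 # mm/month
    cir = [max(0.0, c - p) for c, p in zip(cwr, eff_rain)]  # mm/month
    return cwr, cir

et0      = [150.0, 180.0, 200.0, 190.0]  # mm/month, illustrative
kc       = [0.35, 0.75, 1.15, 0.90]      # cotton-like Kc curve (illustrative)
eff_rain = [20.0, 10.0, 5.0, 5.0]        # mm/month

cwr, cir = cwr_cir(et0, kc, eff_rain)
print(f"seasonal CWR = {sum(cwr):.1f} mm, CIR = {sum(cir):.1f} mm")
```

    In the study, the crop map from the remote sensing classification determines which Kc curve applies to each field object, and ClimWat supplies the climate inputs.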

  3. Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.

    2016-01-01

    The dipole excitations of nuclei play an important role in nuclear astrophysics processes, in connection with the photoabsorption and radiative neutron capture that take place in stellar environments. We present here the results of a large-scale, axially symmetric, deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.

  4. Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport.

    PubMed

    Jia, Xun; Gu, Xuejun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B

    2010-06-01

    Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development on a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested with respect to the original sequential DPM code on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source is used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 times have been observed, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor. PMID:20463376

  5. Large-scale screening of metal hydrides for hydrogen storage from first-principles calculations based on equilibrium reaction thermodynamics.

    PubMed

    Kim, Ki Chul; Kulkarni, Anant D; Johnson, J Karl; Sholl, David S

    2011-04-21

    Systematic thermodynamics calculations based on density functional theory-calculated energies for crystalline solids have been a useful complement to experimental studies of hydrogen storage in metal hydrides. We report the most comprehensive set of thermodynamics calculations for mixtures of light metal hydrides to date by performing grand canonical linear programming screening on a database of 359 compounds, including 147 compounds not previously examined by us. This database is used to categorize the reaction thermodynamics of all mixtures containing any four non-H elements among Al, B, C, Ca, K, Li, Mg, N, Na, Sc, Si, Ti, and V. Reactions are categorized according to the amount of H(2) that is released and the reaction's enthalpy. This approach identifies 74 distinct single-step reactions having a storage capacity >6 wt.% and zero-temperature heats of reaction 15 ≤ ΔU(0) ≤ 75 kJ mol(-1) H(2). Many of these reactions, however, are likely to be problematic experimentally because of the role of refractory compounds, B(12)H(12)-containing compounds, or carbon. The single most promising reaction identified in this way involves LiNH(2)/LiH/KBH(4), storing 7.48 wt.% H(2) and having ΔU(0) = 43.6 kJ mol(-1) H(2). We also examined the complete range of reaction mixtures to identify multi-step reactions with useful properties; this yielded 23 multi-step reactions of potential interest. PMID:21409194
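    The final screening step described above is a simple filter on capacity and reaction energy. A sketch, where only the LiNH2/LiH/KBH4 entry uses values quoted in the abstract; the other entries are made up to exercise the filter:

```python
# Sketch of the screening criteria above: keep single-step reactions with
# H2 capacity > 6 wt.% and zero-temperature heat of reaction
# 15 <= dU0 <= 75 kJ/mol H2. Only the first entry uses values from the
# abstract; 'example-A' and 'example-B' are hypothetical.

reactions = [
    {"mixture": "LiNH2/LiH/KBH4", "wt_pct_H2": 7.48, "dU0": 43.6},
    {"mixture": "example-A",      "wt_pct_H2": 9.1,  "dU0": 5.0},   # too weakly bound
    {"mixture": "example-B",      "wt_pct_H2": 4.2,  "dU0": 40.0},  # capacity too low
]

def promising(rxn, min_wt=6.0, u_lo=15.0, u_hi=75.0):
    return rxn["wt_pct_H2"] > min_wt and u_lo <= rxn["dU0"] <= u_hi

hits = [r["mixture"] for r in reactions if promising(r)]
print(hits)  # -> ['LiNH2/LiH/KBH4']
```

    The enthalpy window matters because too small a ΔU(0) means hydrogen is released below room temperature (hard to store), while too large a value requires impractically high desorption temperatures.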

  6. Excitons in poly(para phenylene vinylene): a quantum-chemical perspective based on high-level ab initio calculations.

    PubMed

    Mewes, Stefanie A; Mewes, Jan-Michael; Dreuw, Andreas; Plasser, Felix

    2016-01-28

    Excitonic effects play a fundamental role in the photophysics of organic semiconductors such as poly(para phenylene vinylene) (PPV). The emergence of these effects is examined for PPV oligomers based on high level ab initio excited-state calculations. The computed many-body wavefunctions are subjected to our recently developed exciton analysis protocols to provide a qualitative and quantitative characterization of excitonic effects. The discussion is started by providing high-level benchmark calculations using the algebraic-diagrammatic construction for the polarization propagator in third order of perturbation theory (ADC(3)). These calculations support the general adequacy of the computationally more efficient ADC(2) method in the case of singly excited states but also reveal the existence of low-energy doubly excited states. In a next step, a series of oligomers with chains of two to eight phenyl rings is studied at the ADC(2) level showing that the confinement effects are dominant for small oligomers, while delocalized exciton bands emerge for larger systems. In the case of the largest oligomer, the first twenty singlet and triplet excited states are computed and a detailed analysis in terms of the Wannier and Frenkel models is presented. The presence of different Wannier bands becomes apparent, showing a general trend that exciton sizes are lowered with increasing quasi-momentum within the bands. PMID:26700493

  7. Dynamics study of the OH + NH3 hydrogen abstraction reaction using QCT calculations based on an analytical potential energy surface

    NASA Astrophysics Data System (ADS)

    Monge-Palacios, M.; Corchado, J. C.; Espinosa-Garcia, J.

    2013-06-01

    To understand the reactivity and mechanism of the OH + NH3 → H2O + NH2 gas-phase reaction, which evolves through wells in the entrance and exit channels, a detailed dynamics study was carried out using quasi-classical trajectory calculations. The calculations were performed on an analytical potential energy surface (PES) recently developed by our group, PES-2012 [Monge-Palacios et al. J. Chem. Phys. 138, 084305 (2013)], 10.1063/1.4792719. Most of the available energy appeared as H2O product vibrational energy (54%), reproducing the only experimental evidence, while only 21% of this energy appeared as NH2 co-product vibrational energy. Both products appeared with cold and broad rotational distributions. The excitation function (constant collision energy in the range 1.0-14.0 kcal mol-1) increases smoothly with energy, contrasting with the only theoretical information available (reduced-dimensional quantum scattering calculations based on a simplified PES), which presented a peak at low collision energies, related to quantized states. Analysis of the individual reactive trajectories showed that different mechanisms operate depending on the collision energy. Thus, while at high energies (Ecoll ≥ 6 kcal mol-1) all trajectories are direct, at low energies about 20%-30% of trajectories are indirect, i.e., with the mediation of a trapping complex, mainly in the product well. Finally, the effect of the zero-point energy constraint on the dynamics properties was analyzed.

  8. A new technique for calculating reentry base heating. [analysis of laminar base flow field of two dimensional reentry body

    NASA Technical Reports Server (NTRS)

    Meng, J. C. S.

    1973-01-01

    The laminar base flow field of a two-dimensional reentry body has been studied by Telenin's method. The flow domain was divided into strips along the x-axis, and the flow variations were represented by Lagrange interpolation polynomials in the transformed vertical coordinate. The complete Navier-Stokes equations were used in the near wake region, and the boundary layer equations were applied elsewhere. The boundary conditions consisted of the flat plate thermal boundary layer in the forebody region and the near wake profile in the downstream region. The resulting two-point boundary value problem of 33 ordinary differential equations was then solved by the multiple shooting method. The detailed flow field and thermal environment in the base region are presented in the form of temperature contours, Mach number contours, velocity vectors, pressure distributions, and heat transfer coefficients on the base surface. The maximum heating rate was found on the centerline, and the two-dimensional stagnation point flow solution was adequate to estimate the maximum heating rate so long as the local Reynolds number could be obtained.
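    The two-point boundary value problem above is solved by (multiple) shooting. A single-shooting sketch on a model problem, y'' = −y with y(0) = 0 and y(1) = 1, whose exact initial slope is 1/sin(1), shows the idea: guess the missing initial condition, integrate, and iterate on the boundary mismatch.

```python
# Single-shooting sketch for a two-point BVP: y'' = -y, y(0)=0, y(1)=1.
# The exact solution is y(x) = sin(x)/sin(1), so the unknown initial
# slope is y'(0) = 1/sin(1). The paper's 33-equation system uses the
# same principle, with multiple shooting segments for robustness.
import math

def integrate(slope, steps=200):
    """RK4-integrate y''=-y from x=0 to 1 with y(0)=0, y'(0)=slope; return y(1)."""
    h = 1.0 / steps
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)          # first-order system: y' = v, v' = -y
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

lo, hi = 0.0, 5.0            # bracket for the unknown initial slope
for _ in range(60):          # bisect on the boundary mismatch y(1) - 1
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(f"y'(0) = {lo:.4f}  (exact {1 / math.sin(1):.4f})")
```

    Multiple shooting splits [0, 1] into segments with independent unknowns joined by continuity conditions, which tames the sensitivity that plain shooting suffers from on stiff systems like the one in the paper.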

  9. Investigation of possibility of surface rupture derived from PFDHA and calculation of surface displacement based on dislocation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Irikura, K.

    2013-12-01

    A probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, in a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a compiled earthquake catalog whose events were classified into two categories: with surface rupture or without surface rupture. Logistic regression is performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011), and Petersen et al. (2011) present logistic curves of the probability of surface rupture for normal, reverse, and strike-slip faults, respectively. Takao et al. (2013) show the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range in comparison with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from dislocation calculations. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the area of the asperity and the amount of slip. Strike-slip and reverse faults were considered as source faults. We applied the method of Wang et al. (2003) for the calculations. The surface displacements for the defined source faults were calculated while varying the depth of the fault. A threshold of 5 cm of surface displacement was used to evaluate whether a surface rupture reaches the surface.
We carried out the
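    The logistic model underlying such probability-of-rupture curves can be sketched as follows; the coefficients are illustrative, chosen only so the curve rises between Mj 6.5 and 7.0 as described, and are not fitted values from any of the cited studies.

```python
# Logistic-regression form used in PFDHA for the probability of surface
# rupture as a function of magnitude: P(rupture | M) = 1/(1 + exp(-(a + b*M))).
# The coefficients a, b below are illustrative (midpoint at Mj 6.75),
# not fitted values from Takemura, Takao et al., or the other cited work.
import math

def p_surface_rupture(m, a=-27.0, b=4.0):
    return 1.0 / (1.0 + math.exp(-(a + b * m)))

for m in (6.0, 6.5, 6.75, 7.0, 7.5):
    print(f"Mj {m:.2f}: P = {p_surface_rupture(m):.3f}")
```

    A steeper slope b reproduces the "sharp increase over a narrow magnitude range" reported for the Japan-only curve; fitting a and b to the rupture/no-rupture catalog (or, as in this study, to calculated displacements against the 5 cm threshold) is the regression step.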

  10. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08... GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.207-08 Calculation and use of vehicle-specific 5-cycle-based fuel...

  11. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08... GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.207-08 Calculation and use of vehicle-specific 5-cycle-based fuel...

  12. Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams

    NASA Astrophysics Data System (ADS)

    Vassiliev, Oleg N.; Wareing, Todd A.; McGhee, John; Failla, Gregory; Salehpour, Mohammad R.; Mourtada, Firas

    2010-02-01

    A new grid-based Boltzmann equation solver, Acuros™, was developed specifically for performing accurate and rapid radiotherapy dose calculations. In this study we benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media. Acuros solves the coupled Boltzmann transport equations for neutral and charged particles on a locally adaptive Cartesian grid. The Acuros solver is an optimized rewrite of the general purpose Attila© software, and for comparable accuracy levels, it is roughly an order of magnitude faster than Attila. Comparisons were made between Monte Carlo (EGSnrc) and Acuros for 6 and 18 MV photon beams impinging on a slab phantom comprising tissue, bone and lung materials. To provide an accurate reference solution, Monte Carlo simulations were run to a tight statistical uncertainty (σ ≈ 0.1%) and fine resolution (1-2 mm). Acuros results were output on a 2 mm cubic voxel grid encompassing the entire phantom. Comparisons were also made for a breast treatment plan on an anthropomorphic phantom. For the slab phantom in regions where the dose exceeded 10% of the maximum dose, agreement between Acuros and Monte Carlo was within 2% of the local dose or 1 mm distance to agreement. For the breast case, agreement was within 2% of local dose or 2 mm distance to agreement in 99.9% of voxels where the dose exceeded 10% of the prescription dose. Elsewhere, in low dose regions, agreement for all cases was within 1% of the maximum dose. Since all Acuros calculations required less than 5 min on a dual-core two-processor workstation, it is efficient enough for routine clinical use. Additionally, since Acuros calculation times are only weakly dependent on the number of beams, Acuros may ideally be suited to arc therapies, where current clinical algorithms may incur long calculation times.
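    The agreement criterion quoted above (within 2% of the local dose in the region receiving more than 10% of the maximum dose) can be sketched as a voxel pass-rate check; the distance-to-agreement component is omitted here and the dose arrays are illustrative.

```python
# Sketch of the dose-agreement metric used above: within the region
# receiving > 10% of the maximum dose, count voxels where the test
# calculation agrees with the reference to within 2% of the local dose.
# (The distance-to-agreement fallback is omitted for brevity.)

def pass_rate(dose_ref, dose_test, rel_tol=0.02, threshold_frac=0.10):
    d_max = max(dose_ref)
    region = [(r, t) for r, t in zip(dose_ref, dose_test)
              if r > threshold_frac * d_max]
    passed = sum(1 for r, t in region if abs(t - r) <= rel_tol * r)
    return passed / len(region)

mc     = [0.05, 0.5, 1.0, 0.8, 0.2]      # illustrative Monte Carlo doses (Gy)
acuros = [0.20, 0.505, 0.99, 0.83, 0.201]  # illustrative solver doses (Gy)
print(f"pass rate: {pass_rate(mc, acuros):.0%}")
```

    The first voxel is excluded from scoring because its reference dose falls below the 10% threshold, which is why low-dose regions are instead judged against a fraction of the maximum dose in the paper.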

  13. Transport and optical properties of CH2 plastics: Ab initio calculation and density-of-states-based analysis

    NASA Astrophysics Data System (ADS)

    Knyazev, D. V.; Levashov, P. R.

    2015-11-01

    This work covers an ab initio calculation of the transport and optical properties of plastics of the effective composition CH2 at a density of 0.954 g/cm3 in the temperature range from 5 kK up to 100 kK. The calculation is based on quantum molecular dynamics, density functional theory, and the Kubo-Greenwood formula. The temperature dependence of the static electrical conductivity σ1DC(T) has an unusual shape: σ1DC(T) grows rapidly for 5 kK ≤ T ≤ 10 kK and is almost constant for 20 kK ≤ T ≤ 60 kK. An additional analysis based on the electron density of states (DOS) was performed. The rapid growth of σ1DC(T) at 5 kK ≤ T ≤ 10 kK is connected with the increase of the DOS at the electron energy equal to the chemical potential, ɛ = μ. The frequency dependence of the dynamic electrical conductivity σ1(ω) at 5 kK has a distinctly non-Drude shape, with a peak at ω ≈ 10 eV. This behavior of σ1(ω) is explained by a dip in the electron DOS.

  14. A method for fast evaluation of neutron spectra for BNCT based on in-phantom figure-of-merit calculation.

    PubMed

    Martín, Guido

    2003-03-01

    In this paper a fast method to evaluate neutron spectra for brain BNCT is developed. The method is based on an algorithm that calculates the dose distribution in the brain from a precomputed data matrix containing weighted biological doses per position per incident energy, together with the incident neutron spectrum to be evaluated. To build the matrix, nearly monoenergetic neutrons were transported into a head model using the MCNP 4C code. The doses were scored, and an energy-dependent function was used to biologically weight the doses. To assess beam quality, the dose distribution along the beam centerline was calculated. A neutron importance function for this therapy, for bilaterally treating deep-seated tumors, was constructed in terms of neutron energy. Neutrons in the energy range of a few tens of kilo-electron-volts were found to produce the best dose gain, defined as the dose to tumor divided by the maximum dose to healthy tissue. Various neutron spectra were evaluated with this method. An accelerator-based neutron source was found to be more suitable for this therapy, in terms of therapeutic gain, than reactor-based sources. PMID:12674238
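    The figure of merit used here, the dose gain, is simply the tumor dose divided by the maximum healthy-tissue dose along the beam path. A sketch with illustrative (non-MCNP) numbers that mimics the reported ranking of energy groups:

```python
# Sketch of the BNCT figure of merit above: dose gain = dose to tumour
# divided by the maximum dose to healthy tissue. The depth-dose values
# per energy group are illustrative, not MCNP output; they are chosen so
# that the tens-of-keV group wins, as the abstract reports.

def dose_gain(tumor_dose, healthy_doses):
    return tumor_dose / max(healthy_doses)

beams = {
    "thermal (~0.025 eV)": dose_gain(2.0, [3.5, 2.8, 1.0]),  # attenuates at surface
    "tens of keV":         dose_gain(3.0, [1.5, 1.2, 0.9]),  # moderates at depth
    "fast (~1 MeV)":       dose_gain(2.2, [2.6, 2.0, 1.1]),  # high entrance dose
}
best = max(beams, key=beams.get)
print(best, round(beams[best], 2))
```

    A gain above 1 means the tumor can receive a therapeutic dose without any healthy-tissue voxel exceeding it, which is why the importance function peaks at epithermal energies.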

  15. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    SciTech Connect

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.; Oedegaard-Jensen, A.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k{infinity} and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, where the cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is a first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4. (authors)
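    The LHS strategy can be sketched in a few lines: each input dimension is split into n equal-probability strata, one draw is taken per stratum, and the strata are permuted independently across dimensions.

```python
# Minimal Latin Hypercube Sampling sketch on the unit hypercube: for each
# dimension the [0,1) range is split into n equal strata, one sample is
# drawn per stratum, and the stratum order is randomly permuted so
# dimensions are paired at random.
import random

def lhs(n_samples, n_dims, rng=random):
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # one point per stratum, uniform within the stratum
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

random.seed(0)
pts = lhs(500, 3)   # 500 runs, 3 uncertain inputs (the paper samples many more)
# Dense stratification: exactly one point per 1/500 stratum in each dimension.
col0 = sorted(int(p[0] * 500) for p in pts)
print(col0 == list(range(500)))  # -> True
```

    In the study the uniform draws would be mapped through the inverse normal CDF of each cross-section's JENDL-4 mean and covariance; the 500-run sample size comfortably satisfies the order-statistics requirement for 95%/95% tolerance limits.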

  16. Metal Accretion onto White Dwarfs. II. A Better Approach Based on Time-Dependent Calculations in Static Models

    NASA Astrophysics Data System (ADS)

    Fontaine, G.; Dufour, P.; Chayer, P.; Dupuis, J.; Brassard, P.

    2015-06-01

    The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Inferences on the process based on diffusion timescale arguments make the implicit assumption that the concentration gradient of a given metal at the base of the convection zone is negligible. This assumption is, in fact, not rigorously valid, but it allows the decoupling of the surface abundance from the evolving distribution of a given metal in deeper layers. A better approach is a full time-dependent calculation of the evolution of the abundance profile of an accreting-diffusing element. We used the same approach as that developed by Dupuis et al. to model accretion episodes involving many more elements than those considered by these authors. Our calculations incorporate the improvements to diffusion physics mentioned in Paper I. The basic assumption in the Dupuis et al. approach is that the accreted metals are trace elements, i.e., that they have no effects on the background (DA or non-DA) stellar structure. This allows us to consider an arbitrary number of accreting elements.

  17. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  19. Impact of heterogeneity-corrected dose calculation using a grid-based Boltzmann solver on breast and cervix cancer brachytherapy

    PubMed Central

    Hofbauer, Julia; Kirisits, Christian; Resch, Alexandra; Xu, Yingjie; Sturdza, Alina; Pötter, Richard

    2016-01-01

    Purpose: To analyze the impact of heterogeneity-corrected dose calculation on dosimetric quality parameters in gynecological and breast brachytherapy using Acuros, a grid-based Boltzmann equation solver (GBBS), and to evaluate the shielding effects of different cervix brachytherapy applicators. Material and methods: Calculations with TG-43 and Acuros were based retrospectively on computed tomography (CT) scans for 10 cases of accelerated partial breast irradiation and 9 cervix cancer cases treated with tandem-ring applicators. Phantom CT scans of different applicators (plastic and titanium) were acquired. For the breast cases, the V20Gyαβ3 to lung; the D0.1cm3, D1cm3, and D2cm3 to rib; the D0.1cm3, D1cm3, and D10cm3 to skin; and Dmax for all structures were reported. For the cervix cases, the D0.1cm3 and D2cm3 to bladder, rectum, and sigmoid, and the D50, D90, D98, and V100 for the CTVHR were reported. For the phantom study, surrogates for target and organ at risk were created for a similar dose-volume histogram (DVH) analysis. Absorbed dose and equivalent dose in 2 Gy fractions (EQD2) were used for comparison. Results: Calculations with TG-43 overestimated the dose for all dosimetric indices investigated. For breast, a decrease of ~8% was found for D10cm3 to the skin and 5% for D2cm3 to rib, resulting in a difference of ~ –1.5 Gy EQD2 for the overall treatment. Smaller effects were found for the cervix cases with the plastic applicator, with up to –2% (–0.2 Gy EQD2) per fraction for organs at risk and –0.5% (–0.3 Gy EQD2) per fraction for the CTVHR. The shielding effect of the titanium applicator resulted in a decrease of 2% for D2cm3 to the organ at risk, versus 0.7% for plastic. Conclusions: Lower doses were reported when calculating with Acuros compared to TG-43. Differences in dose parameters were larger in the breast cases. A lower impact on clinical dose parameters was found for the cervix cases. Applicator material causes systematic shielding effects that can be taken into account. PMID
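    The EQD2 conversion used for the comparison is the standard linear-quadratic relation EQD2 = D·(d + α/β)/(2 + α/β), with d the dose per fraction and D the total dose. A sketch with illustrative fractionation numbers (not values from the study):

```python
# Standard linear-quadratic EQD2 conversion, the dose metric the paper
# uses to compare TG-43 and Acuros over a full treatment course:
#   EQD2 = D * (d + a/b) / (2 + a/b)
# where d is dose per fraction, D = n*d the total dose, and a/b the
# tissue's alpha/beta ratio (3 Gy for late-responding tissue, as in the
# paper's V20Gy(ab3) notation). Fractionation numbers are illustrative.

def eqd2(dose_per_fraction, n_fractions, alpha_beta):
    total = dose_per_fraction * n_fractions
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Illustrative: 4 fractions of 7 Gy to an organ at risk, a/b = 3 Gy.
print(round(eqd2(7.0, 4, 3.0), 1))  # -> 56.0
```

    By construction, a schedule already delivered in 2 Gy fractions maps onto itself (EQD2 equals the physical total dose), which makes EQD2 the natural common scale for adding brachytherapy and external-beam contributions.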

  20. Non-equilibrium Green's function calculation of AlGaAs-well-based and GaSb-based terahertz quantum cascade laser structures

    SciTech Connect

    Yasuda, H. Hosako, I.

    2015-03-16

    We investigate the performance of terahertz quantum cascade lasers (THz-QCLs) based on Al{sub x}Ga{sub 1−x}As/Al{sub y}Ga{sub 1−y}As and GaSb/AlGaSb material systems to realize higher-temperature operation. Calculations with the non-equilibrium Green's function method reveal that the AlGaAs-well-based THz-QCLs do not show improved performance, mainly because of alloy scattering in the ternary compound semiconductor. The GaSb-based THz-QCLs offer clear advantages over GaAs-based THz-QCLs. Weaker longitudinal optical phonon–electron interaction in GaSb produces higher peaks in the spectral functions of the lasing levels, which enables more electrons to be accumulated in the upper lasing level.

  1. Spinel compounds as multivalent battery cathodes: A systematic evaluation based on ab initio calculations

    SciTech Connect

    Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.

    2014-12-16

    In this study, batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than is available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition-metal redox-active cations. We estimate the insertion voltage, capacity, and thermodynamic stability of charged and discharged states, as well as the intercalating-ion mobility, and use these properties to evaluate promising directions. Our calculations indicate that the Mg and Ca Mn2O4 spinel phases are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than those of Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV for Al3+ in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Among the transition metals considered, Mn-based spinel structures rank highest when balancing all the considered properties.
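    The screened insertion voltage follows from first-principles total energies via V = −[E(host+xA) − E(host) − x·E(A)]/(x·z). A sketch with hypothetical energies (not the paper's data), where z = 2 for the divalent working ions:

```python
# Average insertion voltage from DFT total energies, the quantity screened
# above:  V = -[E(host + xA) - E(host) - x*E(A)] / (x * z),
# with energies in eV per formula unit, x the ion content, and z the
# charge of the working ion (z = 2 for Mg2+ and Ca2+). All energies below
# are hypothetical, chosen only to reproduce the ~0.2 V Ca-over-Mg trend
# reported in the abstract.

def insertion_voltage(e_discharged, e_host, e_metal, x, z):
    return -(e_discharged - e_host - x * e_metal) / (x * z)

# Hypothetical energies for A + Mn2O4 -> A(Mn2O4), A a divalent ion:
v_mg = insertion_voltage(e_discharged=-60.0, e_host=-52.0, e_metal=-2.0, x=1, z=2)
v_ca = insertion_voltage(e_discharged=-60.4, e_host=-52.0, e_metal=-2.0, x=1, z=2)
print(f"V(Mg) = {v_mg:.2f} V, V(Ca) = {v_ca:.2f} V")
```

    The division by z is why multivalent cathodes tend toward lower voltages than Li analogues: the same reaction energy is shared over two electrons per inserted ion, though each ion also delivers twice the charge.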

  2. PbBr-Based Layered Perovskite Organic-Inorganic Superlattice Having Carbazole Chromophore; Hole-Mobility and Quantum Mechanical Calculation.

    PubMed

    Era, Masanao; Yasuda, Takeshi; Mori, Kento; Tomotsu, Norio; Kawano, Naoki; Koshimizu, Masanori; Asai, Keisuke

    2016-04-01

    We have successfully evaluated hole mobility in a spin-coated film of a lead-bromide-based layered perovskite having carbazole chromophore-linked ammonium molecules as the organic layer by using FET measurements. The values of hole mobility, threshold voltage and on/off ratio at room temperature were evaluated to be 1.7 x 10(-6) cm2 V-1 s-1, 27 V and 28, respectively. However, the spin-coated films on Si substrates were less uniform than those on fused quartz substrates. To improve the film uniformity, we examined the relationship between substrate temperature during spin-coating and film morphology in the layered perovskite spin-coated films. The mean roughness of the spin-coated films on Si substrates was dependent on the substrate temperature. At 353 K, the mean roughness was minimized and the carrier mobility was enhanced by one order of magnitude; the values of hole mobility and threshold voltage were estimated to be 3.4 x 10(-5) cm2 V-1 s-1 and 22 V, respectively, at room temperature in a preliminary FET evaluation. In addition, we determined the crystal structure of the layered perovskite by X-ray diffraction analysis. To gain a better understanding of the observed hole transport, we conducted quantum mechanical calculations using the obtained crystal structure information. The calculated band structure of the layered organic perovskite showed that the valence band is composed of the organic carbazole layer, which confirms that the measured hole mobility is mainly derived from the organic part of the layered perovskite. Band and hopping transport mechanisms were discussed by calculating the effective masses and transfer integrals for the 2D periodic system of the organic layer in isolation. PMID:27451598

  3. Multi-Server Approach for High-Throughput Molecular Descriptors Calculation based on Multi-Linear Algebraic Maps.

    PubMed

    García-Jacas, César R; Aguilera-Mendoza, Longendri; González-Pérez, Reisel; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Avdeenko, Tatiana

    2015-01-01

    The present report introduces a novel module of the QuBiLS-MIDAS software for the distributed computation of the 3D Multi-Linear algebraic molecular indices. The main motivation for developing this module is to deal with the computational complexity experienced during the calculation of the descriptors over large datasets. To accomplish this task, a multi-server computing platform named T-arenal was developed, which is suited for institutions with many workstations interconnected through a local network and without resources particularly destined for computation tasks. This new system was deployed in 337 workstations and was fully integrated with the QuBiLS-MIDAS software. To illustrate the usability of the T-arenal platform, performance tests over a dataset comprised of 15 000 compounds were carried out, yielding 52- and 60-fold reductions in the sequential processing time for the 2-Linear and 3-Linear indices, respectively. Therefore, it can be stated that the T-arenal based distribution of computation tasks constitutes a suitable strategy for performing high-throughput calculations of 3D Multi-Linear descriptors over thousands of chemical structures for posterior QSAR and/or ADME-Tox studies. PMID:27490863

  4. Structure investigation of three hydrazones Schiff's bases by spectroscopic, thermal and molecular orbital calculations and their biological activities

    NASA Astrophysics Data System (ADS)

    Belal, Arafa A. M.; Zayed, M. A.; El-Desawy, M.; Rakha, Sh. M. A. H.

    2015-03-01

    Three Schiff's bases, AI (2-(1-hydrazonoethyl)phenol), AII (2,4-dibromo-6-(hydrazonomethyl)phenol) and AIII (2-(hydrazonomethyl)phenol), were prepared as new hydrazone compounds via condensation reactions with a 1:1 molar ratio of reactants: reaction of 2-hydroxyacetophenone with hydrazine hydrate gives AI, condensation of 3,5-dibromosalicylaldehyde with hydrazine hydrate gives AII, and condensation of salicylaldehyde with hydrazine hydrate gives AIII. The structures of AI-AIII were characterized by elemental analysis (EA), mass spectrometry (MS), FT-IR and 1H NMR spectra, and thermal analyses (TG, DTG, and DTA). The activation thermodynamic parameters ΔE∗, ΔH∗, ΔS∗ and ΔG∗ were calculated from the TG curves using the Coats-Redfern method. It is important to investigate their molecular structures to identify the active groups and weak bonds responsible for their biological activities. Consequently, in the present work, the thermal (TA) and mass (MS) experimental results are confirmed by semi-empirical MO-calculations (MOCS) using the PM3 procedure. Their biological activities have been tested in vitro against Escherichia coli, Proteus vulgaris, Bacillus subtilis and Staphylococcus aureus bacteria in order to assess their anti-microbial potential.
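The Coats-Redfern extraction of activation parameters mentioned above can be sketched as follows. For a first-order process, g(α) = −ln(1−α), and plotting ln[g(α)/T²] against 1/T gives a line of slope −E/R; the TG conversion data below are synthetic, chosen only so that the linearization holds exactly:

```python
# Coats-Redfern linearization for the activation energy E* from TG data
# (first-order case). The conversion curve alpha(T) is synthetic.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def coats_redfern_energy(T, alpha):
    """Fit ln[-ln(1-alpha)/T^2] vs 1/T; the slope equals -E/R."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * R  # activation energy in J/mol

# Synthetic conversion data generated for E* = 120 kJ/mol:
T = np.linspace(500.0, 600.0, 11)          # temperatures in K
E_true = 120e3
alpha = 1.0 - np.exp(-1e6 * T**2 * np.exp(-E_true / (R * T)))
print(f"E* = {coats_redfern_energy(T, alpha) / 1e3:.0f} kJ/mol")  # 120 kJ/mol
```

With real TG data the same fit is repeated for each candidate reaction model g(α), and ΔH∗, ΔS∗ and ΔG∗ follow from E* and the fitted intercept.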

  5. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
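The key property of the complex-step-derivative approximation described above, the absence of subtractive cancellation and hence the freedom to use arbitrarily small perturbations, is easy to demonstrate on a scalar function:

```python
# Complex-step derivative: f'(x) ~ Im[f(x + ih)]/h has no difference of
# nearly equal numbers, so h can be taken far below machine epsilon,
# unlike the forward difference (f(x+h) - f(x))/h.
import cmath

def f(x):
    return cmath.exp(x) * cmath.sin(x)

x0, h = 1.0, 1e-200  # a perturbation vastly smaller than machine epsilon
exact = (cmath.exp(x0) * (cmath.sin(x0) + cmath.cos(x0))).real

cs = f(x0 + 1j * h).imag / h            # complex step: machine-precision accurate
fd = (f(x0 + h).real - f(x0).real) / h  # forward difference: total cancellation

print(abs(cs - exact))  # ~1e-16: accurate to computer precision
print(fd)               # 0.0: x0 + h == x0 in floating point, derivative lost
```

The same mechanism, applied to the perturbed weak forms rather than a scalar function, is what lets the scheme above recover tangent stiffness entries to computer precision.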

  6. Ab-initio calculations for a realistic sensor: A study of CO sensors based on nitrogen-rich carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Souza, A. M.; Rocha, A. R.; Fazzio, A.; da Silva, A. J. R.

    2012-09-01

    The use of nanoscale low-dimensional systems could boost the sensitivity of gas sensors. In this work we simulate a nanoscopic sensor based on carbon nanotubes with a large number of binding sites using ab initio density functional electronic structure calculations coupled to the Non-Equilibrium Green's Function formalism. We present a recipe where the adsorption process is studied first, followed by conductance calculations for a single-defect system and for a more realistic disordered system considering different coverages of molecules, as one would expect experimentally. We found that the sensitivity of the disordered system is enhanced by a factor of 5 when compared to the single-defect one. Finally, our results from the atomistic electronic transport are used as input to a simple model that connects them to experimental parameters such as temperature and partial gas pressure, providing a procedure for simulating a realistic nanoscopic gas sensor. Using this methodology we show that nitrogen-rich carbon nanotubes could work at room temperature with extremely high sensitivity.
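One common way to connect a computed per-site conductance change to temperature and partial pressure, sketched here as an assumption rather than the authors' exact model, is a Langmuir adsorption isotherm, in which the equilibrium coverage of binding sites sets the overall response:

```python
# Minimal sketch (illustrative parameters, not values from the paper) linking
# a per-site conductance change to gas pressure and temperature through a
# Langmuir isotherm: theta = K*p / (1 + K*p), with K = k0 * exp(E_b / kT).
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def coverage(p_atm, T, e_bind=0.4, k0=1e-6):
    """Fractional occupation of binding sites at pressure p_atm and temperature T."""
    K = k0 * math.exp(e_bind / (KB * T))
    return K * p_atm / (1.0 + K * p_atm)

def sensor_response(p_atm, T, delta_g_per_site=0.05):
    """Relative conductance change, assumed linear in the occupied-site fraction."""
    return delta_g_per_site * coverage(p_atm, T)

print(f"theta(1e-3 atm, 300 K) = {coverage(1e-3, 300):.2e}")
```

The stronger the binding energy relative to kT, the larger the coverage at a given partial pressure, which is why room-temperature operation hinges on the adsorption energetics computed ab initio.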

  7. Density functional theory based calculations of the transfer integral in a redox-active single-molecule junction

    NASA Astrophysics Data System (ADS)

    Kastlunger, Georg; Stadler, Robert

    2014-03-01

    There are various quantum chemical approaches for an ab initio description of transfer integrals within the framework of Marcus theory in the context of electron transfer reactions. In our paper, we aim to calculate transfer integrals in redox-active single molecule junctions, where we focus on the coherent tunneling limit with the metal leads taking the position of donor and acceptor and the molecule acting as a transport mediating bridge. This setup allows us to derive a conductance, which can be directly compared with recent results from a nonequilibrium Green's function approach. Compared with purely molecular systems we face additional challenges due to the metallic nature of the leads, which rules out some of the common techniques, and due to their periodicity, which requires k-space integration. We present three different methods, all based on density functional theory, for calculating the transfer integral under these constraints, which we benchmark on molecular test systems from the relevant literature. We also discuss many-body effects and apply all three techniques to a junction with a Ruthenium complex in different oxidation states.
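In the simplest (dimer energy-splitting) picture, which the approaches above generalize to metallic, periodic leads, the transfer integral of a symmetric two-site system is half the splitting of its adiabatic levels:

```python
# Energy-splitting estimate of a transfer integral: diagonalizing the
# resonant two-site model H = [[e0, t], [t, e0]] gives eigenvalues e0 +/- t,
# so t is half the splitting of the two adiabatic levels.
import numpy as np

def transfer_integral_from_splitting(e_lower, e_upper):
    """t = (E_upper - E_lower) / 2 for a resonant two-site model."""
    return 0.5 * (e_upper - e_lower)

# Round trip: build H with a known t, diagonalize, recover t.
e0, t = -5.0, 0.12  # site energy and coupling in eV (illustrative numbers)
H = np.array([[e0, t], [t, e0]])
evals = np.linalg.eigvalsh(H)  # sorted ascending: e0 - t, e0 + t
t_rec = transfer_integral_from_splitting(evals[0], evals[1])
print(f"t = {t_rec:.4f} eV")  # t = 0.1200 eV
```

Off resonance the simple half-splitting formula overestimates the coupling, which is one reason the paper needs the more elaborate DFT-based schemes it benchmarks.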

  8. Structure investigation of three hydrazones Schiff's bases by spectroscopic, thermal and molecular orbital calculations and their biological activities.

    PubMed

    Belal, Arafa A M; Zayed, M A; El-Desawy, M; Rakha, Sh M A H

    2015-03-01

    Three Schiff's bases AI (2(1-hydrazonoethyl)phenol), AII (2, 4-dibromo 6-(hydrazonomethyl)phenol) and AIII (2(hydrazonomethyl)phenol) were prepared as new hydrazone compounds via condensation reactions with molar ratio (1:1) of reactants. Firstly by reaction of 2-hydroxy acetophenone solution and hydrazine hydrate; it gives AI. Secondly condensation between 3,5-dibromo-salicylaldehyde and hydrazine hydrate gives AII. Thirdly condensation between salicylaldehyde and hydrazine hydrate gives AIII. The structures of AI-AIII were characterized by elemental analysis (EA), mass (MS), FT-IR and (1)H NMR spectra, and thermal analyses (TG, DTG, and DTA). The activation thermodynamic parameters, such as, ΔE(∗), ΔH(∗), ΔS(∗) and ΔG(∗) were calculated from the TG curves using Coats-Redfern method. It is important to investigate their molecular structures to know the active groups and weak bond responsible for their biological activities. Consequently in the present work, the obtained thermal (TA) and mass (MS) practical results are confirmed by semi-empirical MO-calculations (MOCS) using PM3 procedure. Their biological activities have been tested in vitro against Escherichia coli, Proteus vulgaris, Bacillissubtilies and Staphylococcus aurous bacteria in order to assess their anti-microbial potential. PMID:25437844

  9. Methods for Calculating the Absolute Entropy and free energy of biological systems based on ideas from Polymer Physics

    PubMed Central

    Meirovitch, Hagai

    2009-01-01

    The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, PiB, while the value of PiB is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ~ -ln PiB, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach led earlier to the “local states” (LS) and the “hypothetical scanning” (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks, and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic α-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently HSMD is being extended for

  10. Methods for calculating the absolute entropy and free energy of biological systems based on ideas from polymer physics.

    PubMed

    Meirovitch, Hagai

    2010-01-01

    The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P(i)(B), while the value of P(i)(B) is not provided directly; therefore, it is difficult to obtain the absolute entropy, S approximately -ln P(i)(B), and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently
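The step-by-step construction idea common to both records above reduces to a one-line identity: once a configuration is grown with known transition probabilities, its construction probability is their product, and the entropy follows directly. A toy sketch:

```python
# Toy illustration (not HSMC itself): a configuration grown with transition
# probabilities p_k has construction probability prod(p_k), so its
# dimensionless entropy is S = -ln prod(p_k) = -sum(ln p_k), in units of k_B.
import math

def entropy_from_tps(tps):
    """S = -sum(ln p_k) from the growth (transition) probabilities."""
    return -sum(math.log(p) for p in tps)

# A 4-step growth of one chain configuration (illustrative probabilities):
tps = [0.5, 0.25, 0.5, 0.5]
print(entropy_from_tps(tps))  # ln(32) = 3.4657..., since prod(tps) = 1/32
```

HSMC(D)'s contribution is to reconstruct such probabilities a posteriori for configurations that were actually generated by MC or MD, with all interactions included.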

  11. Feasibility of MV CBCT-based treatment planning for urgent radiation therapy: dosimetric accuracy of MV CBCT-based dose calculations.

    PubMed

    Held, Mareike; Sneed, Penny K; Fogh, Shannon E; Pouliot, Jean; Morin, Olivier

    2015-01-01

    Unlike scheduled radiotherapy treatments, treatment planning time and resources are limited for emergency treatments. Consequently, plans are often simple 2D image-based treatments that lag behind technical capabilities available for nonurgent radiotherapy. We have developed a novel integrated urgent workflow that uses onboard MV CBCT imaging for patient simulation to improve planning accuracy and reduce the total time for urgent treatments. This study evaluates both MV CBCT dose planning accuracy and novel urgent workflow feasibility for a variety of anatomic sites. We sought to limit local mean dose differences to less than 5% compared to conventional CT simulation. To improve dose calculation accuracy, we created separate Hounsfield unit-to-density calibration curves for regular and extended field-of-view (FOV) MV CBCTs. We evaluated dose calculation accuracy on phantoms and four clinical anatomical sites (brain, thorax/spine, pelvis, and extremities). Plans were created for each case and dose was calculated on both the CT and MV CBCT. All steps (simulation, planning, setup verification, QA, and dose delivery) were performed in one 30 min session using phantoms. The monitor units (MU) for each plan were compared and dose distribution agreement was evaluated using mean dose difference over the entire volume and gamma index on the central 2D axial plane. All whole-brain dose distributions gave gamma passing rates higher than 95% for 2%/2 mm criteria, and pelvic sites ranged between 90% and 98% for 3%/3 mm criteria. However, thoracic spine treatments produced gamma passing rates as low as 47% for 3%/3 mm criteria. Our novel MV CBCT-based dose planning and delivery approach was feasible and time-efficient for the majority of cases. Limited MV CBCT FOV precluded workflow use for pelvic sites of larger patients and resulted in image clearance issues when tumor position was far off midline. The agreement of calculated MU on CT and MV CBCT was acceptable for all
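The gamma-index comparison used above can be illustrated with a minimal 1D version combining the dose-difference and distance-to-agreement criteria (the dose profiles below are synthetic):

```python
# Minimal 1D gamma index: for each reference point, gamma is the minimum over
# evaluated points of sqrt((dose diff / criterion)^2 + (distance / DTA)^2).
# A point passes when gamma <= 1. Profiles below are synthetic.
import numpy as np

def gamma_1d(x, d_ref, d_eval, dd=0.03, dta=3.0):
    """Pointwise gamma for a 3%/3 mm (local dose) criterion by default."""
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dose_term = (d_eval - di) / (dd * di)  # local relative dose difference
        dist_term = (x - xi) / dta             # spatial offset in units of DTA
        g[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return g

x = np.linspace(0.0, 50.0, 101)                  # positions in mm
d_ref = 2.0 * np.exp(-((x - 25.0) / 12.0) ** 2)  # reference dose (Gy)
d_eval = 1.02 * d_ref                            # evaluated dose, +2% everywhere
gamma = gamma_1d(x, d_ref, d_eval)
print(f"pass rate: {100.0 * np.mean(gamma <= 1.0):.0f}%")  # 100%: 2% is within 3%/3 mm
```

Clinical gamma analysis works the same way on 2D planes or 3D volumes, which is computationally heavier but conceptually identical.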

  12. Determination of the hyperfine magnetic field in magnetic carbon-based materials: DFT calculations and NMR experiments

    PubMed Central

    Freitas, Jair C. C.; Scopel, Wanderlã L.; Paz, Wendel S.; Bernardes, Leandro V.; Cunha-Filho, Francisco E.; Speglich, Carlos; Araújo-Moreira, Fernando M.; Pelc, Damjan; Cvitanić, Tonči; Požek, Miroslav

    2015-01-01

    The prospect of carbon-based magnetic materials is of immense fundamental and practical importance, and information on atomic-scale features is required for a better understanding of the mechanisms leading to carbon magnetism. Here we report the first direct detection of the microscopic magnetic field produced at 13C nuclei in a ferromagnetic carbon material by zero-field nuclear magnetic resonance (NMR). Electronic structure calculations carried out in nanosized model systems with different classes of structural defects show a similar range of magnetic field values (18–21 T) for all investigated systems, in agreement with the NMR experiments. Our results are strong evidence of the intrinsic nature of defect-induced magnetism in magnetic carbons and establish the magnitude of the hyperfine magnetic field created in the neighbourhood of the defects that lead to magnetic order in these materials. PMID:26434597
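As a quick consistency check on the zero-field NMR detection, the 13C resonance frequency implied by an 18-21 T hyperfine field follows directly from the nuclear gyromagnetic ratio:

```python
# Zero-field NMR frequency f = gamma_bar * B for 13C, whose gyromagnetic
# ratio gamma/2pi is about 10.708 MHz per tesla.
GAMMA_13C = 10.708  # MHz/T

for B in (18.0, 21.0):
    print(f"B = {B:g} T  ->  f = {GAMMA_13C * B:.0f} MHz")  # 193 and 225 MHz
```

So the 18-21 T field range reported above corresponds to resonances near 190-225 MHz, a band readily accessible to NMR spectrometers without any applied field.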

  13. Thermodynamics of polymer nematics described with a worm-like chain model: particle-based simulations and SCF theory calculations

    NASA Astrophysics Data System (ADS)

    Greco, Cristina; Jiang, Ying; Kremer, Kurt; Chen, Jeff; Daoulas, Kostas

    Polymer liquid crystals, apart from traditional applications as high-strength materials, are important for new technologies, e.g. Organic Electronics. Their studies often invoke mesoscale models, parameterized to reproduce thermodynamic properties of the real material. Such top-down strategies require advanced simulation techniques, predicting accurately the thermodynamics of mesoscale models as a function of characteristic features and parameters. Here a recently developed model describing nematic polymers as worm-like chains interacting with soft directional potentials is considered. We present a special thermodynamic integration scheme delivering free energies in particle-based Monte Carlo simulations of this model, avoiding thermodynamic singularities. Conformational and structural properties, as well as Helmholtz free energies, are reported as a function of interaction strength. They are compared with state-of-the-art SCF calculations invoking a continuum analog of the same model, demonstrating the role of liquid packing and fluctuations.

  14. Interaction of curcumin with Al(III) and its complex structures based on experiments and theoretical calculations

    NASA Astrophysics Data System (ADS)

    Jiang, Teng; Wang, Long; Zhang, Sui; Sun, Ping-Chuan; Ding, Chuan-Fan; Chu, Yan-Qiu; Zhou, Ping

    2011-10-01

    Curcumin has been recognized as a potential natural drug to treat Alzheimer's disease (AD) by chelating baleful metal ions, scavenging radicals and preventing the amyloid β (Aβ) peptides from aggregation. In this paper, Al(III)-curcumin complexes were synthesized and characterized by liquid-state 1H, 13C and 27Al nuclear magnetic resonance (NMR), mass spectrometry (MS), ultraviolet spectroscopy (UV) and generalized 2D UV-UV correlation spectroscopy. In addition, density functional theory (DFT)-based UV and chemical shift calculations were performed to gain insight into the structures and properties of curcumin and its complexes. It was revealed that curcumin interacts strongly with the Al(III) ion and forms three types of complexes under different molar ratios of [Al(III)]/[curcumin], which would restrain the interaction of Al(III) with the Aβ peptide, reducing the toxic effect of Al(III) on the peptide.

  15. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    PubMed

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128

  16. Determination of the hyperfine magnetic field in magnetic carbon-based materials: DFT calculations and NMR experiments

    NASA Astrophysics Data System (ADS)

    Freitas, Jair C. C.; Scopel, Wanderlã L.; Paz, Wendel S.; Bernardes, Leandro V.; Cunha-Filho, Francisco E.; Speglich, Carlos; Araújo-Moreira, Fernando M.; Pelc, Damjan; Cvitanić, Tonči; Požek, Miroslav

    2015-10-01

    The prospect of carbon-based magnetic materials is of immense fundamental and practical importance, and information on atomic-scale features is required for a better understanding of the mechanisms leading to carbon magnetism. Here we report the first direct detection of the microscopic magnetic field produced at 13C nuclei in a ferromagnetic carbon material by zero-field nuclear magnetic resonance (NMR). Electronic structure calculations carried out in nanosized model systems with different classes of structural defects show a similar range of magnetic field values (18-21 T) for all investigated systems, in agreement with the NMR experiments. Our results are strong evidence of the intrinsic nature of defect-induced magnetism in magnetic carbons and establish the magnitude of the hyperfine magnetic field created in the neighbourhood of the defects that lead to magnetic order in these materials.

  17. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H., Jr.

    2015-01-01

    Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
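The sliding-window regression approach favoured above can be sketched in a few lines: fit a quadratic to each window of the record and read the acceleration as twice the quadratic coefficient, giving a time-varying A_SL(t) instead of a single whole-record value (the sea-level record below is synthetic):

```python
# Sliding-window quadratic regression for sea-level acceleration:
# within each window, sl(t) ~ c0 + c1*t + c2*t^2, so A_SL = 2*c2.
import numpy as np

def sliding_acceleration(t, sl, window):
    """Return window-centre times and accelerations (2x quadratic coefficient)."""
    centres, accel = [], []
    for i in range(len(t) - window + 1):
        tw, yw = t[i:i + window], sl[i:i + window]
        c2, c1, c0 = np.polyfit(tw - tw.mean(), yw, 2)  # centred for conditioning
        centres.append(tw.mean())
        accel.append(2.0 * c2)
    return np.array(centres), np.array(accel)

# Synthetic annual record with a constant acceleration of 0.01 mm/yr^2:
t = np.arange(1900.0, 2001.0)
sl = 2.0 * (t - 1900.0) + 0.005 * (t - 1900.0) ** 2  # sea level in mm
tc, a = sliding_acceleration(t, sl, window=31)
print(f"{a.mean():.3f} mm/yr^2")  # 0.010
```

With a genuinely time-varying acceleration, the window-by-window estimates trace its evolution, which is exactly what the single whole-record quadratic fit cannot do.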

  18. Determination of the hyperfine magnetic field in magnetic carbon-based materials: DFT calculations and NMR experiments.

    PubMed

    Freitas, Jair C C; Scopel, Wanderlã L; Paz, Wendel S; Bernardes, Leandro V; Cunha-Filho, Francisco E; Speglich, Carlos; Araújo-Moreira, Fernando M; Pelc, Damjan; Cvitanić, Tonči; Požek, Miroslav

    2015-01-01

    The prospect of carbon-based magnetic materials is of immense fundamental and practical importance, and information on atomic-scale features is required for a better understanding of the mechanisms leading to carbon magnetism. Here we report the first direct detection of the microscopic magnetic field produced at (13)C nuclei in a ferromagnetic carbon material by zero-field nuclear magnetic resonance (NMR). Electronic structure calculations carried out in nanosized model systems with different classes of structural defects show a similar range of magnetic field values (18-21 T) for all investigated systems, in agreement with the NMR experiments. Our results are strong evidence of the intrinsic nature of defect-induced magnetism in magnetic carbons and establish the magnitude of the hyperfine magnetic field created in the neighbourhood of the defects that lead to magnetic order in these materials. PMID:26434597

  19. Vibrational spectra and DFT calculations of the vibrational modes of Schiff base C18H17N3O2

    NASA Astrophysics Data System (ADS)

    Antunes, J. A.; Silva, L. E.; Bento, R. R. F.; Teixeira, A. M. R.; Freire, P. T. C.; Faria, J. L. B.; Ramos, R. J.; Silva, C. B.; Lima, J. A.

    2012-04-01

    The Schiff base 4-{[(1E)-(2-Hydroxyphenyl)methylidene]amino}-1,5-dimethyl-2-phenyl-1,2-dihydro-3H-pyrazol-3-one (C18H17N3O2) is a synthetic compound with a variety of scientific and technological applications in clinical, analytical and pharmacological fields. In this work the FT-Raman and FT-infrared spectra of C18H17N3O2 were investigated at 300 K. Vibrational wavenumbers and wave vectors have been predicted using density functional theory (B3LYP) calculations with the 6-31G(d,p) basis set. The description of the normal modes was performed by means of the potential energy distribution. A comparison with experiment allowed us to assign most of the normal modes of the crystal.
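Each calculated wavenumber ultimately rests on the harmonic relation between a mode's force constant and reduced mass. A quick numerical check of that relation (with an illustrative force constant, not a value from the paper):

```python
# Harmonic wavenumber nu_tilde = (1/(2*pi*c)) * sqrt(k/mu), checked on a
# C=O-like stretch. The force constant k = 1200 N/m is illustrative.
import math

C_CM = 2.99792458e10   # speed of light, cm/s
AMU = 1.66053907e-27   # atomic mass unit, kg

def wavenumber_cm1(k_n_per_m, mu_amu):
    """Harmonic vibrational wavenumber in cm^-1."""
    return math.sqrt(k_n_per_m / (mu_amu * AMU)) / (2.0 * math.pi * C_CM)

mu_co = 12.0 * 15.995 / (12.0 + 15.995)  # reduced mass of a C-O pair, amu
print(f"{wavenumber_cm1(1200.0, mu_co):.0f} cm^-1")
```

The result lands in the familiar carbonyl-stretch region near 1700 cm^-1, which is the kind of sanity check one applies before trusting a full B3LYP normal-mode assignment.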

  20. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    Optical switching is one of the most essential phenomena in the optical domain, and electro-optic switching can be applied to generate effective combinational and sequential logic circuits. Processing digital computation in the optical domain carries considerable advantages of optical communication technology, e.g. immunity to electro-magnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters with respect to performance-affecting quantities, e.g. crosstalk, extinction ratio and signal losses through the curved and straight waveguide sections.

  1. Calculating Individual Resources Variability and Uncertainty Factors Based on Their Contributions to the Overall System Balancing Needs

    SciTech Connect

    Makarov, Yuri V.; Du, Pengwei; Pai, M. A.; McManus, Bart

    2014-01-14

    The variability and uncertainty of wind power production require increased flexibility in power systems, or more operational reserves to maintain a satisfactory level of reliability. The incremental increase in reserve requirement caused by wind power is often studied separately from the effects of loads. Accordingly, the cost of procuring reserves is allocated based on this simplification rather than on a fair and transparent calculation of the different resources' contributions to the reserve requirement. This work proposes a new allocation mechanism for the intermittency and variability of resources regardless of their type. It is based on a new formula, called the grid balancing metric (GBM). The proposed GBM has several distinct features: 1) it is directly linked to control performance standard (CPS) scores and interconnection frequency performance, 2) it provides scientifically defined allocation factors for individual resources, 3) the sum of allocation factors within any group of resources is equal to the group's collective allocation factor (linearity), and 4) it distinguishes helpers and harmers. The paper illustrates and provides results of the new approach based on actual transmission system operator (TSO) data.

  2. Microstructure-based calculations and experimental results for sound absorbing porous layers of randomly packed rigid spherical beads

    NASA Astrophysics Data System (ADS)

    Zieliński, Tomasz G.

    2014-07-01

    Acoustics of stiff porous media with open porosity can be very effectively modelled using the so-called Johnson-Champoux-Allard-Pride-Lafarge model for sound absorbing porous media with rigid frame. It is an advanced semi-phenomenological model with eight parameters, namely, the total porosity, the viscous permeability and its thermal analogue, the tortuosity, two characteristic lengths (one specific for viscous forces, the other for thermal effects), and finally, viscous and thermal tortuosities at the frequency limit of 0 Hz. Most of these parameters can be measured directly, however, to this end specific equipment is required different for various parameters. Moreover, some parameters are difficult to determine. This is one of several reasons for the so-called multiscale approach, where the parameters are computed from specific finite-element analyses based on some realistic geometric representations of the actual microstructure of porous material. Such approach is presented and validated for layers made up of loosely packed small identical rigid spheres. The sound absorption of such layers was measured experimentally in the impedance tube using the so-called two-microphone transfer function method. The layers are characterised by open porosity and semi-regular microstructure: the identical spheres are loosely packed by random pouring and mixing under the gravity force inside the impedance tubes of various size. Therefore, the regular sphere packings were used to generate Representative Volume Elements suitable for calculations at the micro-scale level. These packings involve only one, two, or four spheres so that the three-dimensional finite-element calculations specific for viscous, thermal, and tortuous effects are feasible. In the proposed geometric packings, the spheres were slightly shifted in order to achieve the correct value of total porosity which was precisely estimated for the layers tested experimentally. 
Finally, in this paper some results based on
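The equivalent-fluid model named above is given by closed-form expressions, so its use is easy to illustrate. The sketch below implements only the plain Johnson-Champoux-Allard (JCA) core (five of the eight parameters; the Pride and Lafarge static-tortuosity corrections are omitted) and evaluates the normal-incidence absorption coefficient of a rigid-backed layer. All parameter values are illustrative assumptions, not data from the paper.

```python
import cmath, math

def jca_absorption(f, phi, sigma, alpha_inf, Lam, Lam_p, d,
                   rho0=1.204, eta=1.81e-5, P0=101325.0, gamma=1.4, Pr=0.71):
    """Normal-incidence absorption of a rigid-backed porous layer (JCA model)."""
    w = 2 * math.pi * f
    # dynamic (effective) density -- visco-inertial effects
    G = cmath.sqrt(1 + 4j * alpha_inf**2 * eta * rho0 * w / (sigma**2 * Lam**2 * phi**2))
    rho_eff = alpha_inf * rho0 / phi * (1 + sigma * phi * G / (1j * w * rho0 * alpha_inf))
    # dynamic bulk modulus -- thermal effects
    Gp = cmath.sqrt(1 + 1j * rho0 * w * Pr * Lam_p**2 / (16 * eta))
    K_eff = (gamma * P0 / phi) / (gamma - (gamma - 1) /
            (1 + 8 * eta * Gp / (1j * Lam_p**2 * Pr * w * rho0)))
    Zc = cmath.sqrt(rho_eff * K_eff)        # characteristic impedance
    k = w * cmath.sqrt(rho_eff / K_eff)     # wavenumber in the material
    Zs = -1j * Zc / cmath.tan(k * d)        # surface impedance, rigid backing
    Z0 = rho0 * 343.0                       # characteristic impedance of air
    R = (Zs - Z0) / (Zs + Z0)               # pressure reflection coefficient
    return 1 - abs(R)**2                    # absorption coefficient

# illustrative (assumed) parameters for a loose packing of small spheres
alpha = jca_absorption(f=1000.0, phi=0.4, sigma=20000.0, alpha_inf=1.37,
                       Lam=1e-4, Lam_p=2e-4, d=0.05)
```

The two Lafarge/Pride parameters would enter as low-frequency corrections to the viscous and thermal response functions; they are left out here to keep the sketch short.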

  3. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    SciTech Connect

    Na, Y; Kapp, D; Kim, Y; Xing, L; Suh, T

    2014-06-01

Purpose: To report the first experience on the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization on the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and computational efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors improved up to 14.0-fold, depending on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for the lung case, and 1.0 ≤ PRs ≤ 10.3 for the prostate cancer cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1
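The speedup factors and time reductions quoted above are related by simple arithmetic (a 91.1% reduction corresponds to a roughly 11-fold speedup). A small sketch; the timings below are invented for illustration, not the paper's data:

```python
def speedup(t_pc, t_cc):
    """Speedup factor of the cloud TPS relative to the PC TPS."""
    return t_pc / t_cc

def time_reduction_pct(t_pc, t_cc):
    """Percentage reduction in computation time on the cloud platform."""
    return 100.0 * (1.0 - t_cc / t_pc)

# illustrative timings in seconds (assumed values)
s = speedup(1400.0, 100.0)              # 14-fold speedup
r = time_reduction_pct(1400.0, 100.0)   # ~92.9% reduction
```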

  4. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools

    PubMed Central

    Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte

    2016-01-01

Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm based on the results of the most common type of exam in medical education, the multiple choice test. It included item difficulty and discrimination, reliability, as well as the distribution of grades achieved. Results: This algorithm quantitatively describes the quality of multiple choice exams. However, it can also be applied to exams involving short-answer questions and the OSCE. It thus allows for the quantification of exam quality in the various subjects and, in analogy to impact factors and third-party grants, a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, the reliability of the exam, and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds. PMID:27275509
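The abstract lists the ingredients of the algorithm without giving its formula. The sketch below shows only the standard psychometric definitions of those ingredients for dichotomously scored multiple-choice items (item difficulty, point-biserial discrimination, KR-20 reliability); how the authors combine them into a funding score is not reproduced, and the toy response matrix is invented.

```python
def item_difficulty(col):
    """Proportion of examinees answering the item correctly."""
    return sum(col) / len(col)

def point_biserial(col, totals):
    """Item discrimination: Pearson correlation of item score vs. total score."""
    n = len(col)
    mx, my = sum(col) / n, sum(totals) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(col, totals))
    sxx = sum((x - mx) ** 2 for x in col)
    syy = sum((y - my) ** 2 for y in totals)
    return sxy / (sxx * syy) ** 0.5

def kr20(matrix):
    """KR-20 reliability for a 0/1 response matrix (rows: students)."""
    k = len(matrix[0])
    items = list(zip(*matrix))
    p = [sum(c) / len(c) for c in items]          # per-item difficulty
    totals = [sum(r) for r in matrix]
    n = len(totals)
    mt = sum(totals) / n
    var_t = sum((t - mt) ** 2 for t in totals) / n  # population variance
    return k / (k - 1) * (1 - sum(pi * (1 - pi) for pi in p) / var_t)

# toy 4-student, 3-item exam (invented data)
M = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
totals = [sum(r) for r in M]
diff0 = item_difficulty([r[0] for r in M])
rel = kr20(M)
```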

  5. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

This article presents goal-based anisotropic adaptive methods for the finite element solution of the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to provide the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual, respectively. The Hessian is used as an approximation of the interpolation error in the solution, which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit of representing the flux of each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples, which illustrate the superior accuracy that can be obtained in criticality problems.
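The indicator construction described above (an interpolation-error surrogate from the Hessian of one solution, weighted by the residual of the companion problem, then combined) can be caricatured in 1D. This is a much-simplified illustration, not the paper's discretisation: a finite-difference second derivative stands in for the Hessian, and the residual arrays are toy values.

```python
def second_diff(u, h):
    """Central finite-difference approximation to u'' at interior nodes."""
    return [(u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2 for i in range(1, len(u) - 1)]

def goal_error_indicator(u_fwd, u_dual, res_fwd, res_dual, h):
    """Forward Hessian weighted by the dual residual, and vice versa, combined."""
    Hf = second_diff(u_fwd, h)
    Hd = second_diff(u_dual, h)
    return [abs(hf) * abs(rd) + abs(hd) * abs(rf)
            for hf, hd, rf, rd in zip(Hf, Hd, res_fwd[1:-1], res_dual[1:-1])]

h = 0.1
x = [i * h for i in range(11)]
u = [xi ** 2 for xi in x]    # forward solution surrogate: u'' = 2 everywhere
z = [1.0 for _ in x]         # dual solution surrogate: z'' = 0 everywhere
rf = [0.1] * 11              # toy forward residuals
rd = [0.2] * 11              # toy dual residuals
eta = goal_error_indicator(u, z, rf, rd, h)   # constant 0.4 at interior nodes
```

Elements with large eta would be flagged for refinement; the anisotropy in the paper comes from using the full Hessian tensor rather than a scalar second derivative.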

  6. Theoretical study of dehydration-carbonation reaction on brucite surface based on ab initio quantum mechanic calculations

    NASA Astrophysics Data System (ADS)

    Churakov, S. V.; Parrinello, M.

    2003-04-01

The carbonation of brucite (Mg(OH)2) has been considered as a potential technology for cleaning industrial carbon dioxide waste. The kinetics of the reaction Mg(OH)2 + CO2 -> MgCO3 + H2O have been studied experimentally at 573°C by Bearat et al. [1]. Their experiments suggest that the carbonation of magnesium hydroxide proceeds by the reaction Mg(OH)2 -> MgO + H2O followed by the adsorption of CO2 molecules on the dehydrated brucite surface. Due to the large difference in volume between Mg(OH)2 and MgO, dehydration causes the formation of dislocations and cracks, allowing water molecules to leave the brucite surface and facilitating the advance of the carbonation front into the bulk solid. The detailed mechanism of this process is, however, unknown. We used the Car-Parrinello ab initio molecular dynamics method to study the structure and dynamics of the (0001), (1-100) and (11-20) surfaces of brucite and calculated the enthalpy and activation barrier of H2O nucleation and dehydration on the different surfaces. The results obtained are in agreement with previous studies of brucite dehydration by Masini and Bernasconi [2]. The reactive Car-Parrinello molecular dynamics method [3] has been applied to investigate the detailed mechanism of the dehydration-carbonation reaction at the (1-100) interface of brucite with the gas phase. Based on the results of our MD simulations and the calculated enthalpy of CO2 adsorption on the dehydrated brucite surfaces, we propose a mechanism for the dehydration/carbonation reaction. [1] Bearat H, McKelvy MJ, Chizmeshya AVG, Sharma R, Carpenter RW (2002) J. Amer. Ceram. Soc. 85(4):742 [2] Masini P and Bernasconi M (2001) J. Phys. Cond. Mat. 13: 1-12 [3] Iannuzzi M, Laio A and Parrinello M (2003) Phys. Rev. Lett. (submitted)

  7. Calculating Hillslope Contributions to River Basin Sediment Yield Using Observations in Small Watersheds and an Index-based Model

    NASA Astrophysics Data System (ADS)

    Kinner, D. A.; Kinner, D. A.; Stallard, R. F.

    2001-12-01

Detailed observations of hillslope erosion are generally made in < 1 km2 watersheds to gain a process-level understanding in a given geomorphic setting. In addressing sediment and nutrient source-to-sink questions, a broader, river basin (> 1000 km2) view of erosion and deposition is necessary to incorporate the geographic variability in the factors controlling sediment mobilization and storage. At the river basin scale, floodplain and reservoir storage become significant in sediment budgets. In this study, we used observations from USDA experimental watersheds to constrain an index-based model of hillslope erosion for the 7270 km2 Nishnabotna River Basin in the agricultural, loess-mantled region of southwest Iowa. Spatial and time-series measurements from two watersheds near Treynor, Iowa were used to calibrate the model for the row-cropped fields of the basin. By modeling rainfall events over an 18-year period, model error was quantified. We then applied the model to calculate basin-wide hillslope erosion and colluvial storage. Soil maps and the National Land-Cover Dataset were used to estimate model soil erodibility and land-use factors. By comparing modeled hillslope yields with observed basin sediment yields, we calculated that hillslope contributions to sediment yield were < 50% for the period 1974-1992. A major uncertainty in the modeling is the percentage of basin area that is terraced. We will use the isotopes Cs-137 and Pb-210 to distinguish bank (isotope-poor) and hillslope (isotope-rich) contributions in floodplain deposits. This independent estimate of the relative hillslope contribution to sediment yield will reduce modeling uncertainty.
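The abstract does not name its index model. As a hypothetical stand-in, the classic USLE factor product A = R·K·LS·C·P shows the general shape of an index-based hillslope erosion estimate with erodibility and land-use factors drawn from soil and land-cover maps; all factor values below are invented for illustration.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) as a product of empirical index factors:
    R rainfall erosivity, K soil erodibility, LS slope length/steepness,
    C cover-management, P support practice."""
    return R * K * LS * C * P

def basin_hillslope_yield(cells, cell_area_ha):
    """Sum cell-by-cell soil loss over a raster of (R, K, LS, C, P) tuples (t/yr)."""
    return sum(usle_soil_loss(*c) * cell_area_ha for c in cells)

# two illustrative raster cells: a steep row-cropped cell and a gentler, covered one
cells = [(120.0, 0.35, 1.8, 0.25, 1.0),
         (120.0, 0.35, 0.9, 0.05, 1.0)]
total = basin_hillslope_yield(cells, cell_area_ha=0.09)   # e.g. 30 m grid cells
```

Comparing such a basin-wide hillslope total against the observed sediment yield at the outlet gives the hillslope fraction discussed in the abstract.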

  8. Affinity capillary electrophoresis and quantum mechanical calculations applied to the investigation of hexaarylbenzene-based receptor binding with lithium ion.

    PubMed

    Ehala, Sille; Toman, Petr; Rathore, Rajendra; Makrlík, Emanuel; Kašička, Václav

    2011-09-01

In this study, two complementary approaches, affinity capillary electrophoresis (ACE) and quantum mechanical density functional theory (DFT) calculations, have been employed for the quantitative characterization and structure elucidation of the complex between the hexaarylbenzene (HAB)-based receptor R and the lithium ion Li(+). First, by means of ACE, the apparent binding constant of the LiR(+) complex (K_LiR+) in methanol was determined from the dependence of the effective electrophoretic mobility of the LiR(+) complex on the concentration of lithium ions in the 25 mM Tris/50 mM chloroacetate background electrolyte (BGE) using non-linear regression analysis. Prior to the regression analysis, the effective electrophoretic mobilities of the LiR(+) complex were corrected to a reference temperature of 25 °C and a constant ionic strength of 25 mM. The apparent binding constant of the LiR(+) complex in the above methanolic BGE was evaluated as log K_LiR+ = 1.15 ± 0.09. Second, the most probable structures of the nonhydrated LiR(+) and hydrated LiR(+)·3H(2)O complexes were derived by DFT calculations. The optimized structure of the hydrated LiR(+)·3H(2)O complex was found to be more realistic than that of the nonhydrated LiR(+) complex because of the considerably higher binding energy of the LiR(+)·3H(2)O complex (500.4 kJ/mol) as compared with the LiR(+) complex (427.5 kJ/mol). PMID:21780285
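The ACE determination rests on a standard 1:1 binding isotherm for the effective mobility as a function of metal-ion concentration, fitted by non-linear regression. A hedged sketch with synthetic data (mobility values and the crude grid-search fit are illustrative only; the paper's regression and corrections are more involved):

```python
def mu_eff(c, mu_free, mu_complex, K):
    """1:1 binding isotherm for the effective electrophoretic mobility
    as a function of ligand (here Li+) concentration c."""
    return (mu_free + mu_complex * K * c) / (1 + K * c)

def fit_K(conc, mu_obs, mu_free, mu_complex, K_grid):
    """Least-squares grid search for the apparent binding constant K."""
    def sse(K):
        return sum((mu_eff(c, mu_free, mu_complex, K) - m) ** 2
                   for c, m in zip(conc, mu_obs))
    return min(K_grid, key=sse)

# synthetic data generated with K_true = 14 (i.e. log K ~ 1.15, as in the paper)
K_true, mu_f, mu_c = 14.0, 0.0, 20.0
conc = [0.001, 0.005, 0.01, 0.05, 0.1]          # mol/L, illustrative
obs = [mu_eff(c, mu_f, mu_c, K_true) for c in conc]
K_grid = [k / 10 for k in range(1, 501)]        # 0.1 ... 50.0
K_fit = fit_K(conc, obs, mu_f, mu_c, K_grid)
```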

  9. Advances in binding free energies calculations: QM/MM-based free energy perturbation method for drug design.

    PubMed

    Rathore, R S; Sumakanth, M; Reddy, M Siva; Reddanna, P; Rao, Allam Appa; Erion, Mark D; Reddy, M R

    2013-01-01

Multiple approaches have been devised and evaluated to computationally estimate binding free energies. Results using a recently developed Quantum Mechanics (QM)/Molecular Mechanics (MM) based Free Energy Perturbation (FEP) method suggest that this method has the potential to provide the most accurate estimation of binding affinities to date. The method treats ligands/inhibitors using QM while using MM for the rest of the system. The method has been applied and validated for a structurally diverse set of fructose 1,6-bisphosphatase (FBPase) inhibitors, suggesting that the approach has the potential to be used as an integral part of drug discovery for both lead identification and lead optimization when a structure is available. In addition, this QM/MM-based FEP method was shown to accurately replicate the anomalous hydration behavior exhibited by simple amines and amides, suggesting that the method may also prove useful in predicting physical properties of molecules. While the method is about 5-fold more computationally demanding than conventional FEP, it has the potential to be less demanding on the end user since it avoids the development of MM force field parameters for novel ligands and thereby eliminates a time-consuming step that often contributes significantly to the inaccuracy of binding affinity predictions using conventional FEP methods. The QM/MM-based FEP method has been extensively tested with respect to important considerations such as the length of the simulation required to obtain satisfactory convergence in the calculated relative solvation and binding free energies for both small and large structural changes between ligands. Future automation of the method and parallelization of the code are expected to enhance its speed and increase its use for drug design and lead optimization. PMID:23260025
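At the core of any FEP method, conventional or QM/MM, is the Zwanzig exponential-averaging formula, ΔA = −kT ln⟨exp(−ΔU/kT)⟩₀. The toy sketch below applies it to two harmonic "states" sampled by a simple Metropolis walk; it illustrates only the statistical mechanics, not the authors' QM/MM implementation, and all parameters are invented.

```python
import math, random

def fep_delta_A(dU_samples, kT):
    """Zwanzig free-energy perturbation: dA = -kT ln< exp(-dU/kT) >_0."""
    avg = sum(math.exp(-dU / kT) for dU in dU_samples) / len(dU_samples)
    return -kT * math.log(avg)

def sample_dU(n, kT, k0=1.0, k1=2.0, seed=42):
    """Metropolis sampling of x under U0 = k0 x^2 / 2; record dU = (k1-k0) x^2 / 2."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        xn = x + rng.uniform(-0.5, 0.5)
        # accept with probability min(1, exp(-dE/kT)) for dE = U0(xn) - U0(x)
        if rng.random() < math.exp(-(k0 * xn * xn - k0 * x * x) / (2 * kT)):
            x = xn
        out.append((k1 - k0) * x * x / 2)
    return out

kT = 1.0
dA = fep_delta_A(sample_dU(200000, kT), kT)
# analytic reference for harmonic wells: dA = (kT/2) ln(k1/k0) ~ 0.3466
```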

  10. Single-crystal nickel-based superalloys developed by numerical multi-criteria optimization techniques: design based on thermodynamic calculations and experimental validation

    NASA Astrophysics Data System (ADS)

    Rettig, Ralf; Ritter, Nils C.; Helmer, Harald E.; Neumeier, Steffen; Singer, Robert F.

    2015-04-01

A method is proposed for finding optimum alloy compositions, considering a large number of property requirements and constraints, by systematic exploration of large composition spaces. It is based on a numerical multi-criteria global optimization algorithm (a multistart solver using Sequential Quadratic Programming), which delivers the exact optimum considering all constraints. The CALPHAD method is used to provide the thermodynamic equilibrium properties, and the creep strength of the alloys is predicted with a qualitative numerical model considering the solid-solution strengthening of the matrix by the elements Re, Mo and W and the optimum morphology and fraction of the γ′-phase. The calculated alloy properties required as input for the optimization algorithm are provided via very fast Kriging surrogate models. This greatly reduces the total calculation time of the optimization to the order of minutes on a personal computer. The capability of the multi-criteria optimization method developed was experimentally verified with two new single-crystal superalloys. Their compositions were designed such that the content of expensive elements was reduced. One of the newly designed alloys, termed ERBO/13, is found to possess a creep strength only 14 K below that of CMSX-4 in the high-temperature/low-stress regime, although it is a Re-free alloy.

  11. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    NASA Astrophysics Data System (ADS)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without visiting the affected locations. However, this can be laborious if the polls are not properly automated. Taking into account that the answers given to these polls are subjective, and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, consisting of a classifier based on supervised learning techniques, namely a decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. The model summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), while each node splits the space depending on the possible answers. Its implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, basing it on just one scale, for instance the MMI. Besides, the integration of this automatic seismic intensity methodology, with a low error probability, with a basic georeferencing system will allow the generation of preliminary isoseismal maps.
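The split criterion used by C4.5/J48 is the gain ratio: the information gain of a candidate question, normalised by the split information (the entropy of the question's answer distribution). A minimal sketch over labelled poll answers; the data are invented:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain_ratio(answers, labels):
    """C4.5 gain ratio of splitting `labels` by the categorical `answers`."""
    n = len(labels)
    groups = {}
    for a, y in zip(answers, labels):
        groups.setdefault(a, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - remainder          # information gain of the split
    split_info = entropy(answers)               # penalises many-valued questions
    return gain / split_info if split_info > 0 else 0.0

# toy poll: one yes/no question vs. the assigned intensity class
answers = ['yes', 'yes', 'no', 'no']
labels  = ['V',   'V',   'IV', 'IV']   # this question separates the classes perfectly
gr = gain_ratio(answers, labels)        # 1.0 for a perfect binary split
```

J48 greedily picks the question with the highest gain ratio at each node, then recurses on the resulting answer subsets.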

  12. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    SciTech Connect

Berger, Daniel; Oberhofer, Harald; Reuter, Karsten; Logsdail, Andrew J.; Farrow, Matthew R.; Catlow, C. Richard A.; Sokol, Alexey A.; Sherwood, Paul; Blum, Volker

    2014-07-14

We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and thereby prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second-order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  13. Towards an automated and efficient calculation of resonating vibrational states based on state-averaged multiconfigurational approaches

    SciTech Connect

    Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram

    2015-12-28

Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.

  14. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    SciTech Connect

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-15

Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for the complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets representing an arbitrary field shape need be neither infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with a minimal number of beamlets of different sizes. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Results: The root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with an RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where the RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% for FSPB, respectively, compared with an RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmission without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (
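The core of any FSPB-type algorithm is a superposition of beamlet kernels over the field aperture. In the toy 1D sketch below, an error-function edge profile (a top-hat convolved with a Gaussian) stands in for the modelled beamlet dose; all parameters are illustrative, not the authors' model. Because convolution is linear, one wide adaptive beamlet reproduces the sum of many identical narrow ones exactly, which is the intuition behind AB-FSPB's beamlet reduction.

```python
import math

def beamlet_kernel(x, center, width, sigma=0.3):
    """Toy lateral dose profile of one finite-size beamlet: a top-hat of the
    given width convolved with a unit-area Gaussian of standard deviation sigma."""
    a, b = center - width / 2, center + width / 2
    s = sigma * math.sqrt(2)
    return 0.5 * (math.erf((x - a) / s) - math.erf((x - b) / s))

def field_dose(x, beamlets):
    """Superpose beamlets given as (center, width, weight) tuples."""
    return sum(w * beamlet_kernel(x, c, wd) for c, wd, w in beamlets)

# one 10-unit open field represented either by 10 identical unit beamlets...
fine = [(-4.5 + i, 1.0, 1.0) for i in range(10)]
# ...or, AB-FSPB-style, by a single adaptive 10-unit beamlet
coarse = [(0.0, 10.0, 1.0)]
d_fine = field_dose(0.0, fine)
d_coarse = field_dose(0.0, coarse)   # identical dose at the field centre
```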

  15. Assessment of the ultraviolet radiation field in ocean waters from space-based measurements and full radiative-transfer calculations.

    PubMed

    Vasilkov, Alexander P; Herman, Jay R; Ahmad, Ziauddin; Kahru, Mati; Mitchell, B Greg

    2005-05-10

    Quantitative assessment of the UV effects on aquatic ecosystems requires an estimate of the in-water radiation field. Actual ocean UV reflectances are needed for improving the total ozone retrievals from the total ozone mapping spectrometer (TOMS) and the ozone monitoring instrument (OMI) flown on NASA's Aura satellite. The estimate of underwater UV radiation can be done on the basis of measurements from the TOMS/OMI and full models of radiative transfer (RT) in the atmosphere-ocean system. The Hydrolight code, modified for extension to the UV, is used for the generation of look-up tables for in-water irradiances. A look-up table for surface radiances generated with a full RT code is input for the Hydrolight simulations. A model of seawater inherent optical properties (IOPs) is an extension of the Case 1 water model to the UV. A new element of the IOP model is parameterization of particulate matter absorption based on recent in situ data. A chlorophyll product from ocean color sensors is input for the IOP model. Verification of the in-water computational scheme shows that the calculated diffuse attenuation coefficient Kd is in good agreement with the measured Kd. PMID:15943340
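The diffuse attenuation coefficient Kd used in the validation is defined through the exponential decay of downwelling irradiance with depth, Ed(z) = Ed(0)·exp(−Kd·z), so estimating it from irradiance at two depths is a one-liner. The numbers below are illustrative:

```python
import math

def kd_from_two_depths(E1, z1, E2, z2):
    """Diffuse attenuation coefficient Kd (1/m) from downwelling irradiance
    at two depths, assuming Ed(z) = Ed(0) exp(-Kd z)."""
    return math.log(E1 / E2) / (z2 - z1)

def ed_at_depth(E0, Kd, z):
    """Downwelling irradiance at depth z (m) for surface value E0."""
    return E0 * math.exp(-Kd * z)

# illustrative UV irradiance profile with an assumed Kd of 0.35 1/m
E0, Kd_true = 1.0, 0.35
E1, E2 = ed_at_depth(E0, Kd_true, 2.0), ed_at_depth(E0, Kd_true, 8.0)
Kd_est = kd_from_two_depths(E1, 2.0, E2, 8.0)   # recovers 0.35
```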

  16. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transport of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces a more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. To achieve both dose accuracy and computation speed simultaneously, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transport in tetrahedral-mesh geometry. PMID:25615567
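Interpolating a deformation vector field inside a tetrahedral element reduces to barycentric coordinates. The sketch below shows only this geometric step (pure Python, invented toy data), not the authors' Geant4 implementation:

```python
def barycentric(p, a, b, c, d):
    """Barycentric coordinates of point p in tetrahedron (a, b, c, d),
    via Cramer's rule on the 3x3 edge-vector system p - a = l2 e1 + l3 e2 + l4 e3."""
    def sub(u, v):
        return tuple(ui - vi for ui, vi in zip(u, v))
    def det3(u, v, w):
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0]))
    e1, e2, e3, ep = sub(b, a), sub(c, a), sub(d, a), sub(p, a)
    D = det3(e1, e2, e3)
    l2 = det3(ep, e2, e3) / D
    l3 = det3(e1, ep, e3) / D
    l4 = det3(e1, e2, ep) / D
    return (1 - l2 - l3 - l4, l2, l3, l4)

def interpolate_dvf(p, verts, vectors):
    """Linearly interpolate per-vertex deformation vectors at point p."""
    lam = barycentric(p, *verts)
    return tuple(sum(l * v[k] for l, v in zip(lam, vectors)) for k in range(3))

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # unit tetrahedron
dvf   = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]   # toy deformation vectors
u = interpolate_dvf((0.25, 0.25, 0.25), verts, dvf)
```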

  17. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included. - Highlights: • Estimation of the numerical uncertainty of any integral or local flow quantity. • Least-squares fits to power series expansions to handle noisy data. • Excellent results obtained for manufactured solutions. • Consistent results obtained for practical CFD calculations. • Reduces to the well-known Grid Convergence Index for well-behaved data sets.
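For the simplest well-behaved case (three grids, constant refinement ratio, monotonic convergence, no scatter) the procedure reduces to the classical observed-order estimate and Grid Convergence Index; a sketch under exactly those assumptions, with synthetic second-order data:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of grid convergence from fine (f1), medium (f2) and
    coarse (f3) solutions with constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid Convergence Index on the fine grid, as a fraction of f1."""
    e21 = abs((f2 - f1) / f1)
    return Fs * e21 / (r ** p - 1)

# synthetic data from f(h) = 1 + 0.1 h^2 on grids h = 1, 2, 4 (second order)
f1, f2, f3, r = 1.1, 1.4, 2.6, 2.0
p = observed_order(f1, f2, f3, r)   # recovers p = 2
gci = gci_fine(f1, f2, r, p)
```

The paper's least-squares generalisation exists precisely because real data sets rarely satisfy these three-grid assumptions.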

  18. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, image processing, and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
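A minimal PSO core illustrates the generic optimiser on which the PSO-Snake model builds (the snake-model energy and the solar-image specifics are beyond a sketch; the objective, hyperparameters, and bounds below are illustrative):

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise f over a box via standard particle swarm optimisation."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal best positions
    Pf = [f(x) for x in X]                 # personal best values
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull to personal best + social pull to global best
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

# toy objective: sphere function, optimum at the origin
best, best_val = pso(lambda x: sum(xi * xi for xi in x), [(-5, 5), (-5, 5)])
```

In the PSO-Snake model, each particle would instead encode candidate snake control points and f would be the snake's image-energy functional.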

  19. Cooling Capacity Optimization: Calculation of Hardening Power of Aqueous Solution Based on Poly(N-Vinyl-2-Pyrrolidone)

    NASA Astrophysics Data System (ADS)

    Koudil, Z.; Ikkene, R.; Mouzali, M.

    2013-11-01

Polymer quenchants are becoming increasingly popular as substitutes for traditional quenching media in the hardening of metallic alloys. Water-soluble organic polymers offer a number of environmental, economic, and technical advantages, as well as eliminating the quench-oil fire hazard. Close control of polymer quenchant solutions is essential for their successful application, in order to avoid structural defects in steels such as shrinkage cracks and distortion. The aim of the present paper is to evaluate and optimize the experimental parameters of a polymer quenching bath that give the best quenching behavior and a homogeneous microstructure in the final workpiece. This study was carried out on a water-soluble polymer based on poly(N-vinyl-2-pyrrolidone), PVP K30, which does not exhibit inverse solubility phenomena in water. The studied parameters include polymer concentration, bath temperature, and agitation speed. Cooling power and hardening performance were measured with the IVF SmartQuench apparatus, using the standard ISO Inconel-600 alloy. An original numerical evaluation method was implemented in the computational software SQ Integra. The heat transfer coefficients were used as input data for the calculation of the microstructural constituents and the hardness profile of a cylindrical sample.

  20. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system.

    PubMed

    Wang, Lilie; Ding, George X

    2014-07-01

The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head is modeled, including the multi-leaf collimators. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and MC depends on the depth and is generally less than 1% for comparisons in the water phantom and in CT-based patient dose calculations for static fields and IMRT. In the case of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose. PMID:24925858
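The dose-volume histograms used for the organ-dose comparison are straightforward to compute from a voxel dose array; a minimal sketch of a cumulative DVH (the voxel doses are invented):

```python
def cumulative_dvh(doses, bin_edges):
    """Fraction of structure volume receiving at least each edge dose."""
    n = len(doses)
    return [sum(1 for d in doses if d >= edge) / n for edge in bin_edges]

# toy organ-at-risk voxel doses (Gy), including a low out-of-field tail
organ_doses = [0.1, 0.2, 0.2, 0.5, 1.0, 2.0, 2.0, 3.0]
edges = [0.0, 0.5, 1.0, 2.0, 3.0]
dvh = cumulative_dvh(organ_doses, edges)
```

Comparing such curves for AAA- and MC-computed dose arrays of the same organ shows directly how an out-of-field underestimate shifts the low-dose end of the DVH.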

  2. A hybrid phase-space and histogram source model for GPU-based Monte Carlo radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Zavgorodni, Sergei

    2014-12-01

    In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, a RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics.
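    The agreement metrics quoted above (chi-test pass rates and RMSDs) can be illustrated with a simplified sketch. The stand-in below uses only a dose-difference criterion, whereas the actual chi test additionally credits spatial (distance-to-agreement) proximity; all data are invented.

```python
import numpy as np

def rmsd_percent(calc, ref, norm=None):
    """Root mean square deviation between two dose arrays, as a
    percentage of a normalization dose (default: max of the reference)."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    norm = ref.max() if norm is None else norm
    return float(100.0 * np.sqrt(np.mean((calc - ref) ** 2)) / norm)

def dose_diff_pass_rate(calc, ref, tol=0.01, norm=None):
    """Percentage of points agreeing within tol (fraction of norm dose).
    A dose-difference-only stand-in for the chi test."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    norm = ref.max() if norm is None else norm
    return float(np.mean(np.abs(calc - ref) <= tol * norm) * 100.0)

ref  = [1.0, 0.9, 0.5, 0.2]       # reference dose, arbitrary units
calc = [1.005, 0.895, 0.52, 0.2]  # calculated dose
print(dose_diff_pass_rate(calc, ref))  # 75.0 (% of points within 1%)
print(rmsd_percent(calc, ref))         # ~1.06 (% RMSD)
```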

  3. Phase stability of ScN-based solid solutions for thermoelectric applications from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Kerdsongpanya, Sit; Alling, Björn; Eklund, Per

    2013-08-01

    We have used first-principles calculations to investigate the trends in the mixing thermodynamics of ScN-based solid solutions in the cubic B1 structure. Thirteen different Sc1-xMxN (M = Y, La, Ti, Zr, Hf, V, Nb, Ta, Gd, Lu, Al, Ga, In) and three different ScN1-xAx (A = P, As, Sb) solid solutions are investigated, and their tendencies to form disordered or ordered solid solutions or to phase separate are revealed. The results are used to discuss suitable candidate materials for different strategies to reduce the high thermal conductivity of ScN-based systems, a material otherwise having promising thermoelectric properties for medium and high temperature applications. Our results indicate that at a temperature of T = 800 °C, Sc1-xYxN; Sc1-xLaxN; Sc1-xGdxN, Sc1-xGaxN, and Sc1-xInxN; and ScN1-xPx, ScN1-xAsx, and ScN1-xSbx solid solutions have a phase-separation tendency and can thus be used for forming nano-inclusions or superlattices, as they do not intermix at high temperature. On the other hand, Sc1-xTixN, Sc1-xZrxN, Sc1-xHfxN, and Sc1-xLuxN favor disordered solid solutions at T = 800 °C. Thus, the Sc1-xLuxN system is suggested for a solid-solution strategy for phonon scattering, as Lu has the same valence as Sc and a much larger atomic mass.
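    As a back-of-the-envelope complement to the first-principles results, the phase-separation versus solid-solution distinction can be sketched with a regular-solution model (not the paper's method; the interaction parameter Omega stands in for a calculated mixing enthalpy, and the values used are illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_g_mix(x, omega, T):
    """Regular-solution mixing free energy per mole (J/mol): enthalpy
    term Omega*x*(1-x) plus the ideal configurational entropy term."""
    return omega * x * (1 - x) + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def phase_separates(omega, T):
    """A regular solution develops a miscibility gap (instability at
    x = 0.5) when Omega > 2RT."""
    return omega > 2 * R * T

T = 800 + 273.15  # the abstract's comparison temperature, in K
print(phase_separates(30e3, T))  # Omega = 30 kJ/mol -> True (phase separation)
print(phase_separates(10e3, T))  # Omega = 10 kJ/mol -> False (solid solution)
```

    The same qualitative criterion (mixing enthalpy versus configurational entropy at 800 °C) underlies the grouping of the alloys in the abstract.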

  4. Molecular exciton theory calculations based on experimental results for Solophenyl red 3BL azo dye-surfactants interactions

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, Ali; Zeini-Isfahani, Asghar; Habibi, Mohammad Hossein

    2006-05-01

    The influence of the anionic surfactant sodium dodecyl sulfate (SDS) and the cationic surfactants cetyltrimethylammonium bromide (C16TAB) and cetylpyridinium chloride (CPC) on the electronic spectrum of Solophenyl red 3BL azo dye (C.I. Direct 80) in aqueous solution was studied by means of UV-vis spectroscopy. Since Solophenyl red 3BL is an anionic soluble dye, no interaction was observed between SDS and the 3BL dye. In the case of C16TAB, on the other hand, aggregation was reflected by a hypsochromic shift of the main absorption band, and dye H-aggregation was responsible for the short-wavelength absorption band. UV-vis spectra also showed that micelle formation occurs for the C16TAB surfactant in 3BL dye aqueous solution at a lower concentration than for C16TAB in aqueous solution alone. Micelle formation was indicated by a red shift of the whole spectrum with respect to the monomer location. The importance of hydrophobic interactions was revealed by the dependence of aggregation on the cationic surfactant structure. Further results showed that dye H-aggregation occurred with the cationic surfactant CPC as well, but in this case micelle formation could not occur. Addition of the CPC surfactant to the J-aggregate dye solution in highly acidic aqueous solution also caused complete disaggregation of the dimer molecules, which may be related to an acid-base reaction occurring between them. The applicability of the molecular exciton (Kasha) theory to interpreting the aggregation results and estimating the dimer structure of the 3BL dye upon C16TAB and CPC surfactant addition was very poor, and the data calculated on the basis of this model showed that this simple point-dipole model could not describe our experimental results.
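    For reference, the point-dipole (Kasha) exciton model the authors tested predicts the dimer band splitting from the transition dipole moment, the intermolecular separation, and the geometry. The numbers below are hypothetical, chosen only to show the H- versus J-aggregate sign convention:

```python
import math

# cm^-1 per (Debye^2 / Angstrom^3): 1 D^2/A^3 = 1e-12 erg ~ 5035 cm^-1
HC_FACTOR = 5035.0

def exciton_splitting(mu_debye, r_angstrom, theta_deg):
    """Kasha point-dipole exciton splitting (cm^-1) for a dimer of
    parallel transition dipoles: dE = 2 mu^2 (1 - 3 cos^2 theta) / r^3,
    where theta is the angle between the dipole axis and the
    intermolecular axis.  dE > 0 -> H-aggregate (hypsochromic shift of
    the allowed band); dE < 0 -> J-aggregate (bathochromic shift)."""
    c = math.cos(math.radians(theta_deg))
    return 2 * HC_FACTOR * mu_debye**2 * (1 - 3 * c * c) / r_angstrom**3

# Card-pack stack (theta = 90 deg): positive splitting, H-aggregate
print(exciton_splitting(8.0, 7.0, 90.0) > 0)  # True
# Head-to-tail arrangement (theta = 0): negative splitting, J-aggregate
print(exciton_splitting(8.0, 7.0, 0.0) < 0)   # True
```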

  5. Electronic structure and bonding of the 3d transition metal borides, MB, M =Sc, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu through all electron ab initio calculations

    NASA Astrophysics Data System (ADS)

    Tzeli, Demeter; Mavridis, Aristides

    2008-01-01

    The electronic structure and bonding of the ground and some low-lying states of all first row transition metal borides (MB), ScB, TiB, VB, CrB, MnB, FeB, CoB, NiB, and CuB have been studied by multireference configuration interaction (MRCI) methods employing a correlation consistent basis set of quintuple cardinality (5Z). It should be stressed that for all the above nine molecules, experimental results are essentially absent, whereas with the exception of ScB and CuB the remaining seven species are studied theoretically for the first time. We have constructed full potential energy curves at the MRCI/5Z level for a total of 27 low-lying states, subsequently used to extract binding energies, spectroscopic parameters, and bonding schemes. In addition, some 20 or more states for every MB species have been examined at the MRCI/4Z level of theory. The ground state symmetries and corresponding binding energies (in kcal/mol) are ⁵Σ⁻ (ScB), 76; ⁶Δ (TiB), 65; ⁷Σ⁺ (VB), 55; ⁶Σ⁺ (CrB), 31; ⁵Π (MnB), 20; ⁴Σ⁻ (FeB), 54; ³Δ (CoB), 66; ²Σ⁺ (NiB), 79; and ¹Σ⁺ (CuB), 49.

  6. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
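    The common-volume and dosel-mass steps of the method can be sketched for axis-aligned boxes (a simplified reading of the disclosure; real voxel and dosel geometries may differ):

```python
def common_volume(box_a, box_b):
    """Overlap volume of two axis-aligned boxes, each ((x0,y0,z0),(x1,y1,z1))."""
    lo_a, hi_a = box_a
    lo_b, hi_b = box_b
    v = 1.0
    for i in range(3):
        overlap = min(hi_a[i], hi_b[i]) - max(lo_a[i], lo_b[i])
        if overlap <= 0:
            return 0.0
        v *= overlap
    return v

def dosel_mass(dosel, voxels, densities):
    """Sum of (common volume x voxel mass density) over all voxels,
    i.e. the incremental-dosel-mass accumulation step."""
    return sum(common_volume(dosel, vox) * rho for vox, rho in zip(voxels, densities))

voxels = [((0, 0, 0), (1, 1, 1)), ((1, 0, 0), (2, 1, 1))]
densities = [1.0, 1.8]                # g/cm^3, e.g. soft tissue and bone
dosel = ((0.5, 0, 0), (1.5, 1, 1))    # overlaps half of each voxel
print(dosel_mass(dosel, voxels, densities))  # ~1.4 = 0.5*1.0 + 0.5*1.8
```

    An energy deposit tallied inside the dosel would then be divided by this mass to yield the absorbed dose, as in the final step of the claimed method.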

  7. Calculation of core loss and copper loss in amorphous/nanocrystalline core-based high-frequency transformer

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojing; Wang, Youhua; Zhu, Jianguo; Guo, Youguang; Lei, Gang; Liu, Chengcheng

    2016-05-01

    Amorphous and nanocrystalline alloys are now widely used for the cores of high-frequency transformers, and Litz-wire is commonly used for the windings, although its resistance is difficult to calculate accurately. In order to design a high-frequency transformer, it is important to accurately calculate the core loss and copper loss. To calculate the core loss accurately, the additional core loss caused by the end-stripe effect should be considered. It is impractical to simulate all the stripes in the core because of the computational cost, so a scaled-down model with 5 stripes of amorphous alloy is simulated by the 2D finite element method (FEM). An analytical model is presented to calculate the copper loss in the Litz-wire, and the results are compared with calculations by FEM.
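    The two loss terms can be sketched with textbook formulas (not the paper's FEM or analytical model): a Steinmetz fit for the core loss and a parallel-strand DC resistance for the Litz winding. The material constants and dimensions below are placeholders.

```python
import math

def steinmetz_core_loss(k, alpha, beta, f_hz, b_peak_t, core_mass_kg):
    """Classic Steinmetz fit for specific core loss, p = k f^alpha B^beta
    (W/kg), scaled by core mass. k, alpha, beta come from the material
    datasheet; end-stripe effects would add to this."""
    return k * f_hz**alpha * b_peak_t**beta * core_mass_kg

def litz_dc_resistance(rho, length_m, strand_d_m, n_strands):
    """DC resistance of a Litz bundle: n strands in parallel. AC (skin
    and proximity) effects multiply this by a frequency-dependent factor
    not modeled here."""
    strand_area = math.pi * (strand_d_m / 2) ** 2
    return rho * length_m / (strand_area * n_strands)

RHO_CU = 1.72e-8  # copper resistivity, ohm*m at 20 C
r_dc = litz_dc_resistance(RHO_CU, 10.0, 0.1e-3, 400)
print(r_dc)            # ~0.055 ohm for 10 m of 400 x 0.1 mm strands
print(r_dc * 20.0**2)  # copper loss (W) at 20 A rms, DC approximation
```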

  8. Health Risk Assessment for Uranium in Groundwater - An Integrated Case Study Based on Hydrogeological Characterization and Dose Calculation

    NASA Astrophysics Data System (ADS)

    Franklin, M. R.; Veiga, L. H.; Py, D. A., Jr.; Fernandes, H. M.

    2010-12-01

    The uranium mining and milling facility of Caetité (URA) is the only active uranium production center in Brazil. Operations take place in a very sensitive semi-arid region of the country where water resources are very scarce. Therefore, any contamination of the existing water bodies may have critical consequences for local communities, because their sustainability is closely tied to the availability of groundwater resources. Due to the existence of several uranium anomalies in the region, groundwater can present radionuclide concentrations above the world average. The radiological risk associated with the ingestion of these waters has been questioned by members of the local communities, NGOs, and even regulatory bodies, which suspected that the observed levels of radionuclide concentrations (especially Unat) could be related to the uranium mining and milling operations. Regardless of the origin of these concentrations, the fear that undesired health effects are taking place (e.g. an increase in cancer incidence) remains, despite the fact that no evidence based on epidemiological studies is available. This paper presents the connections between the local hydrogeology and the radiological characterization of groundwater in the areas neighboring the uranium production center, in order to understand the implications for human health risk due to the ingestion of groundwater. The risk assessment was performed taking into account both the radiological and the toxicological risks. Samples from 12 wells have been collected and determinations of Unat, Thnat, 226Ra, 228Ra and 210Pb were performed. The radiation-related risks were estimated for adults and children by calculating the annual effective doses. The potential non-carcinogenic effects due to the ingestion of uranium were evaluated by estimating the hazard index (HI). Monte Carlo simulations were used to calculate the uncertainty associated with these estimates, i.e. the 95% confidence interval
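    The two risk measures used in the assessment follow standard formulas. The sketch below uses illustrative inputs (the ICRP adult ingestion dose coefficient for 238U, about 4.5e-8 Sv/Bq, and an arbitrary concentration and intake), not the paper's measured data:

```python
def annual_effective_dose(conc_bq_per_l, intake_l_per_day, dose_coeff_sv_per_bq):
    """Committed effective dose (Sv/yr) from drinking water: activity
    concentration x annual water intake x ingestion dose coefficient."""
    return conc_bq_per_l * intake_l_per_day * 365 * dose_coeff_sv_per_bq

def hazard_index(intake_mg_per_kg_day, rfd_mg_per_kg_day):
    """Non-carcinogenic hazard quotient: chronic daily intake divided by
    the reference dose; HI > 1 flags potential concern."""
    return intake_mg_per_kg_day / rfd_mg_per_kg_day

# Illustrative numbers only: 1 Bq/l of 238U, 2 l/day drinking water
dose = annual_effective_dose(1.0, 2.0, 4.5e-8)
print(dose * 1e6, "uSv/yr")          # ~33 uSv/yr
print(hazard_index(0.0006, 0.003))   # 0.2 -> below the HI = 1 threshold
```

    In the actual study, Monte Carlo sampling of the input distributions would propagate measurement uncertainty into a confidence interval on these quantities.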

  9. Benchmarking the performance of density functional theory based Green's function formalism utilizing different self-energy models in calculating electronic transmission through molecular systems.

    PubMed

    Prociuk, Alexander; Van Kuiken, Ben; Dunietz, Barry D

    2006-11-28

    Electronic transmission through a metal-molecule-metal system is calculated by employing a Green's function formalism in the scattering based scheme. Self-energy models representing the bulk and the potential bias are used to describe electron transport through the molecular system. Different self-energies can be defined by varying the partition between device and bulk regions of the metal-molecule-metal model system. In addition, the self-energies are calculated with different representations of the bulk through its Green's function. In this work, the dependence of the calculated transmission on varying the self-energy subspaces is benchmarked. The calculated transmission is monitored with respect to the different choices defining the self-energy model. In this report, we focus on one-dimensional model systems with electronic structures calculated at the density functional level of theory. PMID:17144733
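    A minimal, self-contained instance of the scheme described above: the Landauer transmission T(E) = Γ_L |G|² Γ_R through a single-site device, with the analytic self-energy of semi-infinite one-dimensional tight-binding leads. The parameters are illustrative, and the device-lead coupling is taken equal to the lead hopping; the paper's DFT-based Hamiltonians and partitioning choices are of course richer.

```python
import numpy as np

def lead_self_energy(E, t):
    """Retarded self-energy of a semi-infinite 1D tight-binding lead
    (on-site 0, hopping t) coupled with strength t, valid inside the
    band |E| < 2|t|: Sigma = (E - i*sqrt(4 t^2 - E^2)) / 2."""
    return 0.5 * (E - 1j * np.sqrt(4 * t**2 - E**2))

def transmission(E, eps0=0.0, t=1.0):
    """T = Gamma_L |G|^2 Gamma_R for a single device site at energy eps0
    between two identical leads, with G = 1/(E - eps0 - 2*Sigma)."""
    sigma = lead_self_energy(E, t)
    G = 1.0 / (E - eps0 - 2 * sigma)
    gamma = -2 * sigma.imag          # broadening from each lead
    return gamma * abs(G) ** 2 * gamma

# With eps0 = 0 the device is indistinguishable from the leads (a
# perfect chain), so transmission is unity in the band:
print(transmission(0.0))              # 1.0
# Detuning the site reduces the transmission:
print(transmission(0.0, eps0=0.5))    # ~0.94
```

    Moving the device/bulk partition, as benchmarked in the abstract, corresponds here to changing which sites are folded into the self-energy versus kept explicitly in G.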

  10. Calculation of acid-base equilibrium constants at the oxide-electrolyte interface from the dependence of oxide surface charge on pH of the electrolyte

    SciTech Connect

    Gorichev, I.G.; Dorofeev, M.V.; Batrakov, V.V.

    1994-09-01

    The dependences of the catalytic activity of oxides and of their acid-base properties on the pH of the solution are similar. A procedure is developed for calculating acid-base equilibrium constants from the dependence of the oxide surface charge q on pH. The values of q can be determined by potentiometric titration of aqueous suspensions of oxides. The acid-base equilibrium constants for Fe3O4 and CuO were calculated in accordance with the proposed procedure.
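    A sketch of this kind of analysis: generate q(pH) from a two-pK amphoteric-site model and recover the point of zero charge from the titration curve. The model form and pK values are illustrative, not the authors' procedure.

```python
import numpy as np

def surface_charge(pH, pK1, pK2, q_max=1.0):
    """Two-pK amphoteric site model (a common textbook form): fraction
    of protonated sites minus fraction of deprotonated sites."""
    h = 10.0 ** (-np.asarray(pH, float))
    K1, K2 = 10.0 ** -pK1, 10.0 ** -pK2
    denom = 1 + h / K1 + K2 / h
    return q_max * (h / K1 - K2 / h) / denom

def fit_pzc(pH, q):
    """Point of zero charge from titration data by linear interpolation
    at the sign change; in the 2-pK model pH_pzc = (pK1 + pK2) / 2."""
    pH, q = np.asarray(pH, float), np.asarray(q, float)
    i = np.where(np.diff(np.sign(q)) != 0)[0][0]
    return pH[i] - q[i] * (pH[i + 1] - pH[i]) / (q[i + 1] - q[i])

pH = np.linspace(3, 11, 81)              # synthetic titration grid
q = surface_charge(pH, pK1=5.0, pK2=9.0)
print(round(fit_pzc(pH, q), 2))          # ~7.0 = (5 + 9) / 2
```

    Fitting the full q(pH) curve rather than just its zero crossing would return both equilibrium constants, which is the essence of the proposed procedure.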

  11. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Carbon-Related Exhaust Emission Values § 600.207-12 Calculation and use of vehicle-specific 5-cycle-based...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... vehicle-specific 5-cycle city and highway fuel economy and CO2 emission values for each...

  12. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Carbon-Related Exhaust Emission Values § 600.207-12 Calculation and use of vehicle-specific 5-cycle-based...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... vehicle-specific 5-cycle city and highway fuel economy and CO2 emission values for each...

  13. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Carbon-Related Exhaust Emission Values § 600.207-12 Calculation and use of vehicle-specific 5-cycle-based...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... vehicle-specific 5-cycle city and highway fuel economy and CO2 emission values for each...

  14. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and its implementation in a simulation code for space-charge-dominated photoemission processes.
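    For contrast with the O(n) fast multipole method, the brute-force O(n²) pairwise field evaluation it replaces can be written in a few lines (SI units; this is the baseline, not the paper's algorithm):

```python
import numpy as np

def coulomb_fields_direct(positions, charges):
    """Direct O(n^2) electric field at every particle from all others:
    E_i = k * sum_j q_j (r_i - r_j) / |r_i - r_j|^3."""
    k = 8.9875517923e9                          # Coulomb constant, N m^2 / C^2
    r = np.asarray(positions, float)            # shape (n, 3)
    q = np.asarray(charges, float)              # shape (n,)
    d = r[:, None, :] - r[None, :, :]           # pairwise separation vectors
    dist3 = np.sum(d * d, axis=-1) ** 1.5       # |r_i - r_j|^3
    np.fill_diagonal(dist3, np.inf)             # exclude self-interaction
    return k * np.sum(q[None, :, None] * d / dist3[:, :, None], axis=1)

# Two electrons 1 mm apart: equal-magnitude, opposite fields on the axis
e = -1.602176634e-19
E = coulomb_fields_direct([[0.0, 0, 0], [1e-3, 0, 0]], [e, e])
print(E[0, 0], E[1, 0])  # symmetric, opposite signs (N/C)
```

    For the ~10⁶ and more particles of a space-charge-dominated photoemission simulation, the quadratic cost of this evaluation is exactly what motivates the multipole hierarchy.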

  16. Gamma-ray exposure from neutron-induced radionuclides in soil in Hiroshima and Nagasaki based on DS02 calculations.

    PubMed

    Imanaka, Tetsuji; Endo, Satoru; Tanaka, Kenichi; Shizuma, Kiyoshi

    2008-07-01

    As a result of joint efforts by Japanese, US and German scientists, the Dosimetry System 2002 (DS02) was developed as a new dosimetry system to evaluate the individual radiation doses of atomic bomb survivors in Hiroshima and Nagasaki. Although the atomic bomb radiation consisted of initial radiation and residual radiation, only the initial radiation was reevaluated in DS02 because, for most survivors in the life span study group, the residual dose was negligible compared to the initial dose. It was reported, however, that there were individuals who entered the cities at an early stage after the explosions and experienced hemorrhage, diarrhea, etc., which are symptoms of acute radiation syndrome. In this study, external exposure due to radionuclides induced in soil by atomic bomb neutrons was reevaluated based on DS02 calculations, as a function of both the distance from the hypocenters and the elapsed time after the explosions. As a result, exposure rates of 6 and 4 Gy h(-1) were estimated at the hypocenter at 1 min after the explosion in Hiroshima and Nagasaki, respectively. These exposure rates decreased rapidly, by a factor of 1,000 one day later and by a factor of one million one week later. The maximum cumulative exposure from the time of explosion was 1.2 and 0.6 Gy at the hypocenters in Hiroshima and Nagasaki, respectively. Induced radiation also decreased with distance from the hypocenters, by a factor of about 10 at 500 m and a factor of three to four hundred at 1,000 m. Consequently, significant exposure due to induced radiation is considered plausible for those who entered the area within about 1,000 m of the hypocenters within one week after the bombing. PMID:18368418

  17. Thermal Expansion Calculation of Silicate Glasses at 210°C, Based on the Systematic Analysis of Global Databases

    SciTech Connect

    Fluegel, Alex

    2010-10-01

    Thermal expansion data for more than 5500 compositions of silicate glasses were analyzed statistically. These data were gathered from the scientific literature, summarized in SciGlass© 6.5, a new version of the well known glass property database and information system. The analysis resulted in a data reduction from 5500 glasses to a core of 900, where the majority of the published values are located within commercial glass composition ranges and were obtained over the temperature range 20 to 500°C. A multiple regression model for the linear thermal expansivity at 210°C, including an error formula and detailed application limits, was developed based on those 900 core data from over 100 publications. The accuracy of the model predictions is improved by about a factor of two compared to previous work, because systematic errors from certain laboratories were investigated and corrected. The standard model error (precision) was 0.37 ppm/K, with R² = 0.985. The 95% confidence interval for individual predictions largely depends on the glass composition of interest and the composition uncertainty. The model is valid for commercial silicate glasses containing Na2O, CaO, Al2O3, K2O, MgO, B2O3, Li2O, BaO, ZrO2, TiO2, ZnO, PbO, SrO, Fe2O3, CeO2, fining agents, and coloring and de-coloring components. In addition, a special model for ultra-low expansion glasses in the system SiO2-TiO2 is presented. The calculations allow optimizing the time-temperature cooling schedule of glassware, the development of glass sealing materials, and the design of specialty glass products that are exposed to varying temperatures.
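    The model-building step is ordinary multiple linear regression of expansivity on composition. The sketch below fits a toy additive model on made-up data; the paper's 900-glass dataset, error model, and laboratory corrections are not reproduced here.

```python
import numpy as np

# Hypothetical training set: mol% Na2O and CaO (remainder SiO2) versus
# linear thermal expansivity (ppm/K). Illustrative values only.
X = np.array([[16, 10], [14, 9], [12, 8], [16, 6], [10, 10], [13, 7]], float)
y = np.array([9.4, 8.8, 8.1, 8.9, 7.9, 8.3])

# Additive property model alpha = b0 + sum_i b_i * c_i, fitted by
# least squares with an intercept column
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
residual_rms = np.sqrt(np.mean((y - predicted) ** 2))
print(coef)           # b0, b_Na2O, b_CaO
print(residual_rms)   # model precision on the training set (ppm/K)
```

    With real data, the residual RMS plays the role of the 0.37 ppm/K standard model error quoted in the abstract, and the application limits come from the composition ranges spanned by the training set.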

  18. Evaluation of a deterministic grid-based Boltzmann solver (GBBS) for voxel-level absorbed dose calculations in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Mikell, Justin; Cheenu Kappadath, S.; Wareing, Todd; Erwin, William D.; Titt, Uwe; Mourtada, Firas

    2016-06-01

    To evaluate the 3D Grid-based Boltzmann Solver (GBBS) code ATTILA® for coupled electron and photon transport in the nuclear medicine energy regime for electron (beta, Auger and internal conversion electrons) and photon (gamma, x-ray) sources. Codes rewritten based on ATTILA are used clinically for both high-energy photon teletherapy and 192Ir sealed source brachytherapy; little information exists for using the GBBS to calculate voxel-level absorbed doses in nuclear medicine. We compared DOSXYZnrc Monte Carlo (MC) with published voxel-S-values to establish MC as truth. GBBS was investigated for mono-energetic 1.0, 0.1, and 0.01 MeV electron and photon sources as well as 131I and 90Y radionuclides. We investigated convergence of GBBS by analyzing different meshes (M0, M1, M2), energy group structures (E0, E1, E2) for each radionuclide component, angular quadrature orders (S4, S8, S16), and scattering order expansions (P0-P6); higher indices imply finer discretization. We compared GBBS to MC in (1) voxel-S-value geometry for soft tissue, lung, and bone, and (2) a source at the interface between combinations of lung, soft tissue, and bone. Excluding Auger and conversion electrons, MC agreed within ≈5% of published source voxel absorbed doses. For the finest discretization, most GBBS absorbed doses in the source voxel changed by less than 1% compared to the next finest discretization along each phase space variable, indicating sufficient convergence. For the finest discretization, agreement with MC in the source voxel ranged from -3% to -20%, with larger differences at lower energies (-3% for a 1 MeV electron in lung to -20% for a 0.01 MeV photon in bone); similar agreement was found for the interface geometries. Differences between GBBS and MC in the source voxel for 90Y and 131I were -6%. The GBBS ATTILA was benchmarked against MC in the nuclear medicine regime. GBBS can be a

  20. A new approach to account for the medium-dependent effect in model-based dose calculations for kilovoltage x-rays

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jason M.; Ding, George X.

    2011-07-01

    This study presents a new approach to accurately account for the medium-dependent effect in model-based dose calculations for kilovoltage (kV) x-rays. This approach is based on the hypothesis that the correction factors needed to convert dose from model-based dose calculations to absorbed dose-to-medium depend on both the attenuation characteristics of the absorbing media and the changes to the energy spectrum of the incident x-rays as they traverse media with an effective atomic number different than that of water. Using Monte Carlo simulation techniques, we obtained empirical medium-dependent correction factors that take both effects into account. We found that the correction factors can be expressed as a function of a single quantity, called the effective bone depth, which is a measure of the amount of bone that an x-ray beam must penetrate to reach a voxel. Since the effective bone depth can be calculated from volumetric patient CT images, the medium-dependent correction factors can be obtained for model-based dose calculations based on patient CT images. We tested the accuracy of this new approach on 14 patients for the case of calculating imaging dose from kilovoltage cone-beam computed tomography used for patient setup in radiotherapy, and compared it with the Monte Carlo method, which is regarded as the 'gold standard'. For all patients studied, the new approach resulted in mean dose errors of less than 3%. This is in contrast to current available inhomogeneity corrected methods, which have been shown to result in mean errors of up to -103% for bone and 8% for soft tissue. Since there is a huge gain in the calculation speed relative to the Monte Carlo method (~two orders of magnitude) with an acceptable loss of accuracy, this approach provides an alternative accurate dose calculation method for kV x-rays.
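    The key quantity can be sketched as follows; the HU threshold and the functional form of the correction factor are assumptions for illustration (the paper derives its correction factors empirically from Monte Carlo):

```python
import numpy as np

def effective_bone_depth(hu_along_ray, step_cm, bone_hu=150.0):
    """Effective bone depth for one voxel: path length (cm) of bone the
    beam traverses to reach it, estimated by thresholding CT numbers
    sampled along the ray. The 150 HU cutoff is an assumed value."""
    hu = np.asarray(hu_along_ray, float)
    return step_cm * np.count_nonzero(hu >= bone_hu)

def medium_correction(d_bone_cm, c0=1.0, c1=-0.05):
    """Hypothetical dose-to-medium correction factor as a smooth
    function of effective bone depth; the exponential form and the
    coefficients here are placeholders, not the paper's fit."""
    return c0 * np.exp(c1 * d_bone_cm)

ray_hu = [0, 20, 900, 950, 40, 10]  # soft tissue, two bone samples, tissue
d = effective_bone_depth(ray_hu, step_cm=0.25)
print(d)                     # 0.5 cm of bone along the ray
print(medium_correction(d))  # factor applied to the model-based dose
```

    Because the ray tracing uses only the patient CT already available for planning, such per-voxel correction factors can be evaluated far faster than a full Monte Carlo recalculation, which is the speed/accuracy trade-off the abstract describes.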