Mizutani, U; Inukai, M; Sato, H; Zijlstra, E S; Lin, Q
2014-05-16
There are three key electronic parameters in elucidating the physics behind the Hume–Rothery electron concentration rule: the square of the Fermi diameter, (2kF)², the square of the critical reciprocal lattice vector, |G|², and the electron concentration parameter e/a, the number of itinerant electrons per atom. We have reliably determined these three parameters for 10 Rhombic Triacontahedron-type 2/1–2/1–2/1 (N = 680) and 1/1–1/1–1/1 (N = 160–162) approximants by making full use of full-potential linearized augmented plane wave (FLAPW)-Fourier band calculations based on all-electron density-functional theory. We revealed that the 2/1–2/1–2/1 approximants Al13Mg27Zn45 and Na27Au27Ga31 belong to two different sub-groups, classified in terms of |G|² equal to 126 and 109, and could explain why they take different e/a values of 2.13 and 1.76, respectively. Among the eight 1/1–1/1–1/1 approximants Al3Mg4Zn3, Al9Mg8Ag3, Al21Li13Cu6, Ga21Li13Cu6, Na26Au24Ga30, Na26Au37Ge18, Na26Au37Sn18 and Na26Cd40Pb6, the first two, the next two and the last four compounds were classified into three sub-groups with |G|² = 50, 46 and 42, and were claimed to obey the e/a = 2.30, 2.10–2.15 and 1.70–1.80 rules, respectively.
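As a rough numerical check, the free-electron picture links the quoted e/a values to the squared critical reciprocal lattice vector values (the 126 and 109 sub-groups above): for a cubic cell containing N atoms, (2kF)² expressed in units of (2π/a)² depends only on N and e/a. The sketch below assumes this simple free-electron relation, which is far cruder than the paper's FLAPW-Fourier analysis:

```python
import math

def two_kf_squared(n_atoms, e_over_a):
    """Free-electron (2kF)^2 in units of (2*pi/a)^2 for a cubic cell of
    lattice constant a containing n_atoms atoms, each contributing
    e_over_a itinerant electrons (kF = (3*pi^2 * density)**(1/3))."""
    return (3.0 * n_atoms * e_over_a / math.pi) ** (2.0 / 3.0)

# 2/1-2/1-2/1 approximants (N = 680) quoted in the abstract:
print(two_kf_squared(680, 2.13))  # should land near the 126 sub-group
print(two_kf_squared(680, 1.76))  # should land near the 109 sub-group
```

For N = 680 this gives roughly 124 and 109, consistent with the sub-group classification, keeping in mind that the actual e/a values come from the band calculation rather than from this free-electron estimate.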
Ishida, Toyokazu
2008-09-17
To further understand the catalytic role of the protein environment in the enzymatic process, the author has analyzed the reaction mechanism of the Claisen rearrangement catalyzed by Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modeling, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition-state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.
Kuang, Xiang-Jun; Wang, Xin-Qiang; Liu, Gao-Bin
2015-02-01
Within the framework of DFT, an all-electron scalar relativistic calculation on the adsorption of a methanol molecule onto Aun (n = 1-13) clusters has been performed with the generalized gradient approximation at the PW91 level. Our calculation results reveal that the small gold cluster prefers to bond with the oxygen atom of the methanol molecule at the edge of the gold cluster plane. After adsorption, the chemical activities of the hydroxyl and methyl groups are enhanced to some extent. The even-numbered AunCH3OH clusters with closed-shell electronic configurations are relatively more stable than the neighboring odd-numbered AunCH3OH clusters with open-shell electronic configurations. All the AunCH3OH clusters prefer low spin multiplicity (M = 1 for even-numbered and M = 2 for odd-numbered AunCH3OH clusters), and the magnetic moments are contributed mainly by the gold atoms. The odd-even alternations of magnetic moments and electronic configurations can be observed clearly and may be understood simply in terms of the electron pairing effect. PMID:26353643
Norm-conserving pseudopotentials with chemical accuracy compared to all-electron calculations
NASA Astrophysics Data System (ADS)
Willand, Alex; Kvashnin, Yaroslav O.; Genovese, Luigi; Vázquez-Mayagoitia, Álvaro; Deb, Arpan Krishna; Sadeghi, Ali; Deutsch, Thierry; Goedecker, Stefan
2013-03-01
By adding a nonlinear core correction to the well established dual space Gaussian type pseudopotentials for the chemical elements up to the third period, we construct improved pseudopotentials for the Perdew-Burke-Ernzerhof [J. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996), 10.1103/PhysRevLett.77.3865] functional and demonstrate that they exhibit excellent accuracy. Our benchmarks for the G2-1 test set show average atomization energy errors of only half a kcal/mol. The pseudopotentials also remain highly reliable for high pressure phases of crystalline solids. When supplemented by empirical dispersion corrections [S. Grimme, J. Comput. Chem. 27, 1787 (2006), 10.1002/jcc.20495; S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010), 10.1063/1.3382344] the average error in the interaction energy between molecules is also about half a kcal/mol. The accuracy that can be obtained by these pseudopotentials in combination with a systematic basis set is far superior to the accuracy that can be obtained by commonly used medium size Gaussian basis sets in all-electron calculations.
All-electron scalar relativistic calculation of water molecule adsorption onto small gold clusters.
Kuang, Xiang-Jun; Wang, Xin-Qiang; Liu, Gao-Bin
2011-08-01
An all-electron scalar relativistic calculation was performed on AunH2O (n = 1-13) clusters using density functional theory (DFT) with the generalized gradient approximation at the PW91 level. The results reveal that, after adsorption, the small gold cluster prefers to bond with oxygen, and the H2O molecule prefers to occupy the onefold coordination site. Reflecting the strong scalar relativistic effect, the Aun geometries are distorted slightly but still maintain a planar structure. The Au-Au bond is strengthened and the H-O bond is weakened, as manifested by the shortening of the Au-Au bond length and the lengthening of the H-O bond length. The H-O-H bond angle becomes slightly larger, and the enhancement of the reactivity of the H2O molecule is obvious. The Au-O bond lengths, adsorption energies, VIPs, HLGs, HOMO (LUMO) energy levels, charge transfers and the highest vibrational frequencies of the Au-O mode for AunH2O clusters exhibit an obvious odd-even oscillation. The most favorable adsorption takes place when the H2O molecule is adsorbed onto an even-numbered Aun cluster, yielding an AunH2O cluster with an even number of valence electrons. The odd-even alternation of magnetic moments observed in AunH2O clusters suggests that such systems could serve as a material with a tunable "0"/"1" code capacity, switched by adsorbing an H2O molecule onto an odd- or even-numbered small gold cluster. PMID:21140279
Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations
Webster, R.; Harrison, N. M.; Bernasconi, L.
2015-06-07
We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br, based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems, and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange-correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange admixed in the ground state functional, c_HF, and show that there exists one value of c_HF (~0.32) that reproduces at least semi-quantitatively the optical gap of this material.
Potential energy curves of Li2+ from all-electron EA-EOM-CCSD calculations
Musiał, Monika; Medrek, Magdalena; Kucharski, Stanisław A.
2015-10-01
The electron attachment (EA) equation-of-motion coupled-cluster theory provides a description of the states obtained by attaching an electron to the reference system. If the reference is assumed to be a doubly ionised cation, then the EA results relate to the singly ionised ion. In the current work, the above scheme is applied to the calculation of the potential energy curves (PECs) of the Li2+ cation, adopting the doubly ionised Li2(2+) structure as the reference system. The advantage of such a computational strategy relies on the fact that the closed-shell Li2(2+) reference dissociates into closed-shell fragments (Li2(2+) ⇒ Li+ + Li+), hence the RHF (restricted Hartree-Fock) function can be used as the reference over the whole range of interatomic distances. This scheme offers a first-principles method, without any model or effective potential parameters, for the description of bond-breaking processes. In this study, the PECs and selected spectroscopic constants for 18 electronic states of the Li2+ ion were computed and compared with experimental and other theoretical results. † In honour of Professor Sourav Pal on the occasion of an anniversary in his private and scientific life.
All-electron GW+Bethe-Salpeter calculations on small molecules
Hirose, Daichi; Noguchi, Yoshifumi; Sugino, Osamu
2015-05-01
The accuracy of the first-principles GW+Bethe-Salpeter equation (BSE) method is examined for low-energy excited states of small molecules. The standard formalism, which is based on the one-shot GW approximation and the Tamm-Dancoff approximation (TDA), is found to underestimate the optical gap of N2, CO, H2O, C2H4, and CH2O by about 1 eV. Possible origins are investigated separately for the effect of the TDA and for the approximate schemes of the self-energy operator, which are known to cause overbinding of the electron-hole pair and overscreening of the interaction. By applying the known correction formula, we find that the correction is too small to overcome the underestimated excitation energy. This result indicates a need for a fundamental revision of the GW+BSE method rather than an adjustment of the standard one. We expect that this study makes the problems in the current GW+BSE formalism clearer and provides useful information for further development beyond the current framework.
Jorge, F. E.; Martins, L. S. C.; Franco, M. L.
2016-01-01
Segmented all-electron basis sets of valence double zeta quality plus polarization functions (DZP) for the elements from Ce to Lu are generated to be used with the non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. At the B3LYP level, the DZP-DKH atomic ionization energies and the equilibrium bond lengths and atomization energies of the lanthanide trifluorides are evaluated and compared with benchmark theoretical and experimental data reported in the literature. In general, this compact set shows regular, efficient, and reliable performance. It can be particularly useful in molecular property calculations that require explicit treatment of the core electrons.
All-electron mixed basis GW calculations of TiO2 and ZnO crystals
Zhang, Ming; Ono, Shota; Nagatsuka, Naoki; Ohno, Kaoru
2016-04-01
In transition metal oxide systems, there exists a serious discrepancy between theoretical quasiparticle energies and experimental photoemission energies. To improve the accuracy of electronic structure calculations for these systems, we use the all-electron mixed basis GW method, in which single-particle wave functions are accurately described by linear combinations of plane waves and atomic orbitals. We adopt the full ω integration to evaluate the correlation part of the self-energy and compare the results with those obtained by plasmon pole models. We present the quasiparticle energies and band gaps of titanium dioxide (TiO2) and zinc oxide (ZnO) within the one-shot GW approximation. The results are in reasonable agreement with experimental data in the case of TiO2 but are underestimated by about 0.6-1.4 eV relative to experiment in the case of ZnO, although our results are comparable to previous one-shot GW calculations. We also describe a new approach that performs the ω integration very efficiently and accurately.
Rivelino, Roberto; Malaspina, Thaciana; Fileti, Eudes E.
2009-01-01
We have investigated the stability, electronic properties, Rayleigh (elastic) and Raman (inelastic) depolarization ratios, and infrared and Raman vibrational spectra of fullerenols [C60(OH)n] with different degrees of hydroxylation by using all-electron density-functional-theory (DFT) methods. Stable arrangements of these molecules were found by means of full geometry optimizations using Becke's three-parameter exchange functional with the Lee, Yang, and Parr correlation functional. This DFT level has been combined with the 6-31G(d,p) Gaussian-type basis set as a compromise between accuracy and the capability to treat highly hydroxylated fullerenes, e.g., C60(OH)36. The molecular properties of fullerenols were thus systematically analyzed for structures with n = 1, 2, 3, 4, 8, 10, 16, 18, 24, 32, and 36. From the electronic structure analysis of these molecules, we have identified an important effect related to the weak chemical reactivity of a possible C60(OH)24 isomer. To investigate the Raman scattering and vibrational spectra of the different fullerenols, frequency calculations were carried out within the harmonic approximation; in this case, a systematic study was performed only for n = 1-4, 8, 10, 16, 18, and 24. Our results agree well with the expected changes in the spectral absorptions due to the hydroxylation of fullerenes.
Kuwahara, Riichi; Tadokoro, Yoichi; Ohno, Kaoru
2014-08-28
In this paper, we calculate the kinetic and potential energy contributions to the electronic ground-state total energy of several isolated atoms (He, Be, Ne, Mg, Ar, and Ca) by using the local density approximation (LDA) in density functional theory, the Hartree-Fock approximation (HFA), and the self-consistent GW approximation (GWA). To this end, we have implemented self-consistent HFA and GWA routines in our all-electron mixed basis code, TOMBO. We confirm that the virial theorem is fairly well satisfied in all of these approximations, although the resulting eigenvalue of the highest occupied molecular orbital level, i.e., the negative of the ionization potential, is in excellent agreement only in the case of the GWA. We find that the wave function of the lowest unoccupied molecular orbital level of the noble gas atoms is a resonating virtual bound state, and that it spreads wider in the GWA than in the LDA and narrower than in the HFA. PMID:25173006
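The virial-theorem consistency check mentioned above can be illustrated on the simplest possible case. The sketch below verifies ⟨T⟩ = −⟨V⟩/2 for the exact hydrogen 1s orbital on a radial grid (atomic units); it is a textbook illustration of what such a check looks like, not the TOMBO all-electron machinery used in the paper:

```python
import numpy as np

# Radial-grid check of the virial theorem <T> = -<V>/2 for the hydrogen
# 1s orbital psi(r) = exp(-r)/sqrt(pi) in atomic units. The exact values
# are <T> = 0.5 and <V> = -1.0 hartree.
r = np.linspace(1e-6, 40.0, 400_001)
dr = r[1] - r[0]
psi = np.exp(-r) / np.sqrt(np.pi)   # normalized 1s wavefunction
dpsi = -psi                          # radial derivative of exp(-r)/sqrt(pi)

T = 0.5 * np.sum(dpsi**2 * 4.0 * np.pi * r**2) * dr        # kinetic energy
V = np.sum(psi**2 * (-1.0 / r) * 4.0 * np.pi * r**2) * dr  # nuclear attraction

print(T, V)   # close to 0.5 and -1.0, so 2T + V is near zero
```

The r² volume element removes the 1/r singularity at the origin, so a plain rectangle-rule sum on a fine uniform grid is already accurate to well below a millihartree here.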
Ono, Tomoya; Heide, Marcus; Atodiresei, Nicolae; Baumeister, Paul; Tsukamoto, Shigeru; Blügel, Stefan
2010-11-01
We have developed an efficient computational scheme utilizing the real-space finite-difference formalism and the projector augmented-wave (PAW) method to perform precise first-principles electronic-structure simulations based on the density-functional theory for systems containing transition metals with a modest computational effort. By combining the advantages of the time-saving double-grid technique and the Fourier-filtering procedure for the projectors of pseudopotentials, we can overcome the egg box effect in the computations even for first-row elements and transition metals, which is a problem of the real-space finite-difference formalism. In order to demonstrate the potential power in terms of precision and applicability of the present scheme, we have carried out simulations to examine several bulk properties and structural energy differences between different bulk phases of transition metals and have obtained excellent agreement with the results of other precise first-principles methods such as a plane-wave-based PAW method and an all-electron full-potential linearized augmented plane-wave (FLAPW) method.
Witek, Henryk A.; Nakajima, Takahito; Hirao, Kimihiko
2000-11-01
We report relativistic all-electron multireference-based perturbation calculations on the low-lying excited states of gold and silver hydrides. For AuH, we consider all molecular states dissociating to the Au(2S)+H(2S) and Au(2D)+H(2S) atomic limits, and for AgH, the states corresponding to the Ag(2S)+H(2S), Ag(2P)+H(2S), and Ag(2D)+H(2S) dissociation channels. Spin-free relativistic effects and correlation effects are treated on the same footing through the relativistic scheme of eliminating small components (RESC). Spin-orbit effects are included perturbatively. The calculated potential energy curves for AgH are the first reported in the literature. The computed spectroscopic properties agree well with experimental findings; however, the experimental assignment of the states does not correspond to our calculations. Therefore, we give a reinterpretation of the experimentally observed C 1Π, a 3Π, B 1Σ+, b(3Δ1)1, D 1Π, c13Π1, and c0(3Π0) states. The labeling we suggest is a1, C0+, b0-, c2, B3Π0+, d3Π1, e1, f1 and g1, respectively. The spin-orbit states corresponding to Ag(2D)+H(2S) do not have well-defined Λ and S quantum numbers and therefore probably correspond to Hund's coupling case (c). For AuH, we present a comparison of the calculated potential energy curves and spectroscopic parameters with a previous configuration interaction study and with experiment.
Sharkey, Keeper L.; Pavanello, Michele; Bubin, Sergiy; Adamowicz, Ludwik
2009-12-15
A new algorithm for calculating the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions for quantum-mechanical calculations of atoms with two p electrons or a single d electron has been derived and implemented. The Hamiltonian used in the approach was obtained by rigorously separating the center-of-mass motion, and it explicitly depends on the finite mass of the nucleus. The approach was employed to perform test calculations on the isotopes of the carbon atom in their ground electronic states and to determine the finite-nuclear-mass corrections for these states.
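The overall workflow the abstract describes (derive matrix elements in a Gaussian basis, assemble the Hamiltonian and overlap matrices, solve the generalized eigenvalue problem) can be sketched in its simplest one-electron form. The hydrogen-atom example below uses arbitrary even-tempered exponents chosen for illustration; it involves none of the correlated two-p-electron integrals that the paper actually derives:

```python
import numpy as np

# Variational solve for hydrogen (infinite nuclear mass, atomic units)
# in a basis of s-type Gaussians exp(-a*r^2) on the nucleus. The closed-form
# matrix elements are S = (pi/p)^{3/2}, T = 3*ai*aj/p * S, V = -2*pi/p
# with p = ai + aj.
alphas = np.array([0.06, 0.2, 0.7, 2.5, 9.0, 32.0])  # arbitrary even-tempered guess
p = alphas[:, None] + alphas[None, :]                 # a_i + a_j

S = (np.pi / p) ** 1.5                                # overlap    <i|j>
T = 3.0 * np.outer(alphas, alphas) / p * S            # kinetic    <i|-(1/2)grad^2|j>
V = -2.0 * np.pi / p                                  # attraction <i|-1/r|j>

# Solve H c = E S c via canonical orthogonalization X = U diag(w^-1/2).
w, U = np.linalg.eigh(S)
X = U / np.sqrt(w)
E0 = np.linalg.eigvalsh(X.T @ (T + V) @ X)[0]
print(E0)   # variational upper bound to the exact -0.5 hartree
```

The lowest eigenvalue is a rigorous variational upper bound and, with these six exponents, should land within a few millihartree of the exact −0.5 hartree.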
All-electron first principles calculations of the ground and some low-lying excited states of BaI.
Miliordos, Evangelos; Papakondylis, Aristotle; Tsekouras, Athanasios A; Mavridis, Aristides
2007-10-01
The electronic structure of the heavy diatomic molecule BaI has been examined for the first time by ab initio multireference configuration interaction (MRCI) and coupled cluster (RCCSD(T)) methods. The effects of special relativity have been taken into account through the second-order Douglas-Kroll-Hess approximation. The construction of Ω(ω-ω) potential energy curves allows for the estimation of "experimental" dissociation energies (De) of the first few excited states by exploiting the accurately known experimental De value of the X 2Σ+ ground state. All states examined are of ionic character, with a Mulliken charge transfer of 0.5 e− from Ba to I, and this is reflected in large dipole moments ranging from 6 to 11 D. Despite the inherent difficulties of a heavy system like BaI, our results are encouraging. With the exception of bond distances, which on average are calculated 0.05 Å longer than the experimental ones, common spectroscopic parameters are in fair agreement with experiment, whereas De values are on average 10 kcal/mol smaller. PMID:17850123
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1991-01-01
Dirac-Hartree-Fock calculations have been carried out on the ground states of the group IV monoxides GeO, SnO and PbO. Geometries, dipole moments and infrared data are presented. For comparison, nonrelativistic, first-order perturbation and relativistic effective core potential calculations have also been carried out. Where appropriate, the results are compared with the experimental data and previous calculations. Spin-orbit effects are of great importance for PbO, where first-order perturbation theory including only the mass-velocity and Darwin terms is inadequate to predict the relativistic corrections to the properties. The relativistic effective core potential results show a larger deviation from the all-electron values than for the hydrides, and confirm the conclusions drawn on the basis of the hydride calculations.
Sharkey, Keeper L; Kirnosov, Nikita; Adamowicz, Ludwik
2013-03-14
A new algorithm for the quantum-mechanical nonrelativistic calculation of the Hamiltonian matrix elements with all-electron explicitly correlated Gaussian functions for atoms with an arbitrary number of s electrons and with three p electrons, or one p electron and one d electron, or one f electron is developed and implemented. In particular, the implementation concerns atomic states with L = 3 and M = 0. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. The approach is employed to perform test calculations on the lowest 2F state of the two main isotopes of the lithium atom, 7Li and 6Li. PMID:23514465
Evarestov, R A; Losev, M V
2009-12-01
For the first time, the convergence of the phonon frequencies and dispersion curves with respect to the supercell size is studied in ab initio frozen phonon calculations on a LiF crystal. Hellmann-Feynman forces over atomic displacements are found in all-electron calculations with a localized atomic function (LCAO) basis using the CRYSTAL06 program. The Parlinski-Li-Kawazoe method and the FROPHO program are used to calculate the dynamical matrix and phonon frequencies of the supercells. For the fcc lattice, it is demonstrated that use of the full supercell space group (including the supercell inner translations) makes it possible to substantially reduce the number of displacements under consideration. For the Hartree-Fock (HF), PBE and hybrid PBE0, B3LYP, and B3PW exchange-correlation functionals, the atomic basis set optimization is performed. Supercells of up to 216 atoms (3 x 3 x 3 conventional unit cells) are considered. The phonon frequencies obtained using supercells of different size and shape are compared. For the k-points commensurate with the supercell, the best agreement of the theoretical results with the experimental data is found for B3PW exchange-correlation functional calculations with the optimized basis set. The phonon frequencies at the non-commensurate k-points converge for the supercell consisting of 4 x 4 x 4 primitive cells, which ensures an accuracy of 1-2% in the calculated thermodynamic properties (the Helmholtz free energy, entropy, and heat capacity at room temperature). PMID:19382176
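The frozen-phonon procedure (impose a commensurate displacement pattern in a supercell, extract force constants from energy or force differences, then diagonalize) can be illustrated with a toy model: a 1D monatomic chain of harmonic springs, whose zone-boundary frequency is known analytically to be 2·sqrt(k/m). This model is purely illustrative and is unrelated to the LCAO/CRYSTAL06 setup of the paper:

```python
import math

# Frozen-phonon toy model: 1D monatomic chain, nearest-neighbour springs.
# The zone-boundary (q = pi/a) mode has the pattern u_n = u * (-1)^n, and
# its frequency from the analytic dispersion w(q) = 2*sqrt(k/m)*|sin(qa/2)|
# is 2*sqrt(k/m).
k, m, natoms = 1.0, 1.0, 8   # spring constant, mass, atoms in the supercell (even)

def chain_energy(u):
    """Total harmonic energy for the alternating displacement pattern."""
    disp = [u * (-1) ** n for n in range(natoms)]
    e = 0.0
    for n in range(natoms):
        stretch = disp[(n + 1) % natoms] - disp[n]   # periodic boundary
        e += 0.5 * k * stretch ** 2
    return e

# Finite-difference second derivative of the energy w.r.t. the frozen amplitude.
h = 1e-3
d2e_du2 = (chain_energy(h) - 2.0 * chain_energy(0.0) + chain_energy(-h)) / h**2
omega = math.sqrt(d2e_du2 / (natoms * m))   # all natoms atoms carry amplitude u
print(omega, 2.0 * math.sqrt(k / m))        # frozen-phonon vs analytic value
```

Because the model is exactly harmonic, the finite-difference force constant reproduces the analytic zone-boundary frequency to machine precision; in a real ab initio calculation the displacement amplitude must balance anharmonicity against numerical noise.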
Blum, Volker
This talk describes recent advances in a general, efficient, accurate all-electron electronic structure approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be applied efficiently yet accurately using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition-metal-compound-based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
Klüppelberg, Daniel A.; Betzinger, Markus; Blügel, Stefan
2015-01-01
We analyze the accuracy of the atomic force within the all-electron full-potential linearized augmented plane-wave (FLAPW) method using the force formalism of Yu et al. [Phys. Rev. B 43, 6411 (1991), 10.1103/PhysRevB.43.6411]. A refinement of this formalism is presented that explicitly takes into account the tail of high-lying core states leaking out of the muffin-tin sphere and considers the small discontinuities of the LAPW wave function, density, and potential at the muffin-tin sphere boundaries. For MgO and EuTiO3, it is demonstrated that these amendments substantially improve the acoustic sum rule and the symmetry of the force constant matrix. Both are realized with an accuracy of μHtr/aB.
Ching, W. Y.; Aryal, Sitram; Rulis, Paul; Schnick, Wolfgang
2011-04-15
Using density-functional-theory-based ab initio methods, the electronic structure and physical properties of the newly synthesized nitride BeP2N4 with a phenakite-type structure and the predicted high-pressure spinel phase of BeP2N4 are studied in detail. It is shown that both polymorphs are wide band-gap semiconductors with relatively small electron effective masses at the conduction-band minima. The spinel-type phase is more covalently bonded due to the increased number of P-N bonds for P at the octahedral sites. Calculations of mechanical properties indicate that the spinel-type polymorph is a promising superhard material with notably large bulk, shear, and Young's moduli. Also calculated are the Be K, P K, P L3, and N K edges of the electron energy-loss near-edge structure for both phases. They show marked differences because of the different local environments of the atoms in the two crystalline polymorphs. These differences will be very useful for the experimental identification of the products of high-pressure syntheses targeting the predicted spinel-type phase of BeP2N4.
Mitin, Alexander V; van Wüllen, Christoph
2006-02-14
A two-component quasirelativistic Hamiltonian based on spin-dependent effective core potentials is used to calculate ionization energies and electron affinities of the heavy halogen atom bromine through the superheavy element 117 (eka-astatine), as well as spectroscopic constants of the homonuclear dimers of these atoms. We describe a two-component Hartree-Fock and density-functional program that treats spin-orbit coupling self-consistently within the orbital optimization procedure. A comparison with results from high-order Douglas-Kroll calculations (for the superheavy systems, also with zeroth-order regular approximation and four-component Dirac results) demonstrates the validity of the pseudopotential approximation. The density-functional (but not the Hartree-Fock) results show very satisfactory agreement with theoretical coupled cluster as well as experimental data where available, such that the theoretical results can serve as an estimate for the hitherto unknown properties of astatine, element 117, and their dimers. PMID:16483205
Sharkey, Keeper L.; Bubin, Sergiy; Adamowicz, Ludwik
2014-11-01
Accurate variational nonrelativistic quantum-mechanical calculations are performed for the five lowest 1D and four lowest 3D states of the 9Be isotope of the beryllium atom. All-electron explicitly correlated Gaussian (ECG) functions are used in the calculations, and their nonlinear parameters are optimized with the aid of the analytical energy gradient determined with respect to these parameters. The effect of the finite nuclear mass is directly included in the Hamiltonian used in the calculations. The singlet-triplet energy gaps between the corresponding 1D and 3D states are reported.
Sharkey, Keeper L; Adamowicz, Ludwik
2014-05-01
An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in calculations of the ground 4S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. In this way, the effect of the finite nuclear mass on the total ground-state energy is determined. PMID:24811630
Velocity Based Modulus Calculations
Dickson, W. C.
2007-12-01
A new set of equations is derived for the modulus of elasticity E and the bulk modulus K that depend only upon the seismic wave propagation velocities Vp and Vs and the density ρ. The three elastic moduli, E (Young's modulus), the shear modulus μ (Lamé's second parameter) and the bulk modulus K, are found to be simple functions of the density and wave propagation velocities within the material. The shear and elastic moduli are found to equal the density of the material multiplied by the square of their respective wave propagation velocities. The bulk modulus may be calculated from the elastic modulus using Poisson's ratio. These equations and the resultant values are consistent with the published literature in both magnitude and dimension (N/m2) and are applicable to the solid, liquid and gaseous phases. A 3D modulus of elasticity model for the Parkfield segment of the San Andreas Fault is presented using data from the wavespeed model of Thurber et al. [2006]. A sharp modulus gradient is observed across the fault at seismic depths, confirming that "variation in material properties play a key role in fault segmentation and deformation style" [Eberhart-Phillips et al., 1993] [EPM93]. The three elastic moduli E, μ and K may now be calculated directly from seismic pressure- and shear-wave propagation velocities. These velocities may be determined using conventional seismic reflection, refraction or transmission data and techniques, and may in turn be used to estimate the density. This allows velocity-based modulus calculations to be used as a tool for geophysical analysis, modeling, engineering and prospecting.
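The standard isotropic-elasticity relations that express μ, K, E (and Poisson's ratio ν) through Vp, Vs and ρ can be sketched directly. The input values below are generic crustal-rock numbers chosen for illustration, not data from the Parkfield model:

```python
def moduli(vp, vs, rho):
    """Isotropic elastic moduli (Pa) from P/S wavespeeds (m/s) and density (kg/m^3)."""
    mu = rho * vs**2                                        # shear modulus (Lame's second parameter)
    K = rho * (vp**2 - 4.0 * vs**2 / 3.0)                   # bulk modulus
    E = mu * (3.0 * vp**2 - 4.0 * vs**2) / (vp**2 - vs**2)  # Young's modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))    # Poisson's ratio
    return E, mu, K, nu

# Illustrative values for a generic crustal rock:
E, mu, K, nu = moduli(vp=6000.0, vs=3500.0, rho=2700.0)
print(E / 1e9, mu / 1e9, K / 1e9, nu)   # moduli in GPa
```

The three outputs are internally consistent: E = 2μ(1 + ν) and K = E / (3(1 − 2ν)) hold identically, which is a convenient sanity check on any velocity-derived modulus model.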
Dyall, Kenneth G.; Taylor, Peter R.; Faegri, Knut, Jr.; Partridge, Harry
1990-01-01
A basis-set-expansion Dirac-Hartree-Fock program for molecules is described. Bond lengths and harmonic frequencies are presented for the ground states of the group IV tetrahydrides CH4, SiH4, GeH4, SnH4, and PbH4. The results are compared with relativistic effective core potential (RECP) calculations, first-order perturbation theory (PT) calculations and experimental data. The bond lengths are well predicted by first-order perturbation theory for all molecules, but none of the RECPs considered provides a consistent prediction. Perturbation theory overestimates the relativistic correction to the harmonic frequencies; the RECP calculations underestimate the correction.
Increasing the detection speed of an all-electronic real-time biosensor.
Leyden, Matthew R; Messinger, Robert J; Schuman, Canan; Sharf, Tal; Remcho, Vincent T; Squires, Todd M; Minot, Ethan D
2012-03-01
Biosensor response time, which depends sensitively on the transport of biomolecules to the sensor surface, is a critical concern for future biosensor applications. We have fabricated carbon nanotube field-effect transistor biosensors and quantified protein binding rates onto these nanoelectronic sensors. Using this experimental platform we test the effectiveness of a protein-repellent coating designed to enhance protein flux to the all-electronic real-time biosensor. We observe a 2.5-fold increase in the initial protein flux to the sensor when upstream binding sites are blocked. Mass transport modeling is used to calculate the maximal flux enhancement that is possible with this strategy. Our results demonstrate a new methodology for characterizing nanoelectronic biosensor performance, and a mass transport optimization strategy that is applicable to a wide range of microfluidic-based biosensors. PMID:22252647
Upper Subcritical Calculations Based on Correlated Data
Sobes, Vladimir; Rearden, Bradley T; Mueller, Don; Marshall, William BJ J; Scaglione, John M; Dunn, Michael E
2015-01-01
The American National Standards Institute and American Nuclear Society standard for Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations defines the upper subcritical limit (USL) as “a limit on the calculated k-effective value established to ensure that conditions calculated to be subcritical will actually be subcritical.” Often, USL calculations are based on statistical techniques that infer information about a nuclear system of interest from a set of known/well-characterized similar systems. The work in this paper is part of an active area of research to investigate the way traditional trending analysis is used in the nuclear industry, and in particular, the research is assessing the impact of the underlying assumption that the experimental data being analyzed for USL calculations are statistically independent. In contrast, the multiple experiments typically used for USL calculations can be correlated because they are often performed at the same facilities using the same materials and measurement techniques. This paper addresses this issue by providing a set of statistical inference methods to calculate the bias and bias uncertainty based on the underlying assumption that the experimental data are correlated. Methods to quantify these correlations are the subject of a companion paper and will not be discussed here. The newly proposed USL methodology is based on the assumption that the integral experiments selected for use in the establishment of the USL are sufficiently applicable and that experimental correlations are known. Under the assumption of uncorrelated data, the new methods collapse directly to familiar USL equations currently used. We will demonstrate our proposed methods on real data and compare them to calculations of currently used methods such as USLSTATS and NUREG/CR-6698. Lastly, we will also demonstrate the effect experiment correlations can have on USL calculations.
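The effect of correlations on a bias estimate can be illustrated with a textbook generalized-least-squares (GLS) sketch: for benchmark deviations d with covariance C, a common bias b has the estimate b = (1ᵀC⁻¹d)/(1ᵀC⁻¹1) with variance 1/(1ᵀC⁻¹1). This is offered purely as an illustration of correlated inference, not as the paper's actual USL methodology; the function name and data are hypothetical.

```python
def gls_bias(deviations, cov):
    """GLS estimate of a common bias from correlated observations.

    Returns (bias, variance) with bias = (1^T C^-1 d) / (1^T C^-1 1)
    and variance = 1 / (1^T C^-1 1).
    """
    n = len(deviations)

    def solve(mat, rhs):
        # Gaussian elimination without pivoting (adequate for SPD covariances)
        a = [row[:] + [r] for row, r in zip(mat, rhs)]
        for i in range(n):
            for j in range(i + 1, n):
                f = a[j][i] / a[i][i]
                for k in range(i, n + 1):
                    a[j][k] -= f * a[i][k]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
        return x

    sum_inv_d = sum(solve(cov, deviations))   # 1^T C^-1 d
    sum_inv_1 = sum(solve(cov, [1.0] * n))    # 1^T C^-1 1
    return sum_inv_d / sum_inv_1, 1.0 / sum_inv_1
```

With an identity covariance this collapses to the ordinary sample mean, mirroring the abstract's remark that the new methods collapse to the familiar equations for uncorrelated data.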
Exact-exchange-based quasiparticle calculations
Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas
2000-09-15
One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society.
Calculations of NMR chemical shifts with APW-based methods
NASA Astrophysics Data System (ADS)
Laskowski, Robert; Blaha, Peter
2012-01-01
We present a full-potential, all-electron augmented plane wave (APW) implementation of first-principles calculations of NMR chemical shifts. In order to obtain the induced current we follow a perturbation approach [Pickard and Mauri, Phys. Rev. B 63, 245101 (2001)] and extend the common APW + local orbital (LO) basis by several LOs at higher energies. The calculated all-electron current is represented in the traditional APW manner, as a Fourier series in the interstitial region and as a spherical harmonics expansion inside the nonoverlapping atomic spheres. The current is integrated using a "pseudocharge" technique. The implementation is validated by comparison of the computed chemical shifts with "exact" results for spherical atoms and with available published data for a set of solids and molecules.
Label-free all-electronic biosensing in microfluidic systems
NASA Astrophysics Data System (ADS)
Stanton, Michael A.
Label-free, all-electronic detection techniques offer great promise for advancements in medical and biological analysis. Electrical sensing can be used to measure both interfacial and bulk impedance changes in conducting solutions. Electronic sensors produced using standard microfabrication processes are easily integrated into microfluidic systems. Combined with the sensitivity of radiofrequency electrical measurements, this approach offers significant advantages over competing biological sensing methods. Scalable fabrication methods also provide a means of bypassing the prohibitive costs and infrastructure associated with current technologies. We describe the design, development and use of a radiofrequency reflectometer integrated into a microfluidic system toward the specific detection of biologically relevant materials. We developed a detection protocol based on impedimetric changes caused by the binding of antibody/antigen pairs to the sensing region. Here we report the surface chemistry that forms the necessary capture mechanism. Gold-thiol binding was utilized to create an ordered alkane monolayer on the sensor surface. Exposed functional groups target the N-terminus, affixing a protein to the monolayer. The general applicability of this method lends itself to a wide variety of proteins. To demonstrate specificity, commercially available mouse anti-Streptococcus pneumoniae monoclonal antibody was used to target the full-length recombinant pneumococcal surface protein A, type 2 strain D39 expressed by Streptococcus pneumoniae. We demonstrate the RF response of the sensor to both the presence of the surface decoration and bound SPn cells in a 1x phosphate buffered saline solution. The combined microfluidic sensor represents a powerful platform for the analysis and detection of cells and biomolecules.
Numerical inductance calculations based on first principles.
Shatz, Lisa F; Christensen, Craig W
2014-01-01
A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration. PMID:25402467
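The "first principles" route the abstract describes can be illustrated with the Neumann double integral for the mutual inductance of two coaxial circular loops, M = (μ0/4π)∮∮ dl1·dl2/|r12|, evaluated by brute-force numerical quadrature. Python stands in for the paper's Mathematica, and the loop radii and spacing are illustrative values.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (H/m)

def mutual_inductance(a, b, d, n=256):
    """Neumann formula for two coaxial circular loops of radii a, b (m),
    axially separated by d (m), via a midpoint-rule double quadrature."""
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        p1 = (i + 0.5) * dphi
        for j in range(n):
            p2 = (j + 0.5) * dphi
            # dl1 . dl2 = a*b*cos(p1 - p2) dphi1 dphi2
            r = math.sqrt(a * a + b * b - 2.0 * a * b * math.cos(p1 - p2) + d * d)
            total += math.cos(p1 - p2) / r
    return MU0 / (4.0 * math.pi) * a * b * total * dphi * dphi
```

The double loop is O(n²) and deliberately naive: in the spirit of the abstract, the quadrature is handed to the computer so that attention stays on the physics rather than on the integration scheme.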
GPU-based fast gamma index calculation
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Jia, Xun; Jiang, Steve B.
2011-03-01
The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of γ-index requires an exhaustive search of the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving a 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, γ-index calculation time on CPU is proportional to the summation of γ^n over all voxels, while that on GPU is affected by the γ^n distribution and is approximately proportional to the γ^n summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, but a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time.
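The exhaustive search that makes the γ-index expensive can be stated in a few lines. The 1D Python sketch below is the naive CPU baseline, not the authors' GPU implementation; the grid spacing and criteria in the example are assumed values.

```python
import math

def gamma_index(ref, evalu, dx, dose_tol, dist_tol):
    """Naive exhaustive-search 1D gamma index.

    ref, evalu: dose samples on a common grid with spacing dx (mm).
    dose_tol:   dose-difference criterion (same units as dose).
    dist_tol:   distance-to-agreement criterion (mm).
    """
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evalu):
            dist2 = ((i - j) * dx / dist_tol) ** 2   # normalized distance term
            dose2 = ((de - dr) / dose_tol) ** 2      # normalized dose term
            best = min(best, dist2 + dose2)
        gammas.append(math.sqrt(best))
    return gammas
```

A point passes when γ ≤ 1; identical distributions give γ = 0 everywhere, which makes for an easy sanity check of any optimized implementation.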
Rapid Bacterial Detection via an All-Electronic CMOS Biosensor.
Nikkhoo, Nasim; Cumby, Nichole; Gulak, P Glenn; Maxwell, Karen L
2016-01-01
The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185
Grid-based electronic structure calculations: The tensor decomposition approach
NASA Astrophysics Data System (ADS)
Rakhuba, M. V.; Oseledets, I. V.
2016-05-01
We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
GPU-based calculations in digital holography
NASA Astrophysics Data System (ADS)
Madrigal, R.; Acebal, P.; Blaya, S.; Carretero, L.; Fimia, A.; Serrano, F.
2013-05-01
In this work we apply GPUs (Graphical Processing Units) with the CUDA environment to scientific calculations, specifically high-cost computations in the field of digital holography. To this end, we have studied three typical problems in digital holography: Fourier transforms, Fresnel reconstruction of the hologram and the calculation of the vectorial diffraction integral. In all cases the runtimes at different image sizes and the corresponding accuracy were compared with those obtained by traditional calculation systems. The programs were run on a computer with a latest-generation graphics card, an Nvidia GTX 680, which is optimized for integer calculations. As a result, a large reduction in runtime has been obtained: concretely, 15-fold shorter times for Fresnel approximation calculations and 600-fold for the vectorial diffraction integral. These initial results open the possibility of applying such calculations in real-time digital holography.
Lehtovaara, Lauri; Havu, Ville; Puska, Martti
2011-10-21
We present an all-electron method for time-dependent density functional theory which employs hierarchical nonuniform finite-element bases and the time-propagation approach. The method is capable of treating linear and nonlinear response of valence and core electrons to an external field. We also introduce (i) a preconditioner for the propagation equation, (ii) a stable way to implement absorbing boundary conditions, and (iii) a new kind of absorbing boundary condition inspired by perfectly matched layers. PMID:22029294
NASA Astrophysics Data System (ADS)
Betzinger, Markus; Friedrich, Christoph; Görling, Andreas; Blügel, Stefan
2015-12-01
We present a methodology to calculate frequency- and momentum-dependent all-electron response functions within Kohn-Sham density functional theory. It overcomes the main obstacle in calculating response functions in practice, which is the slow convergence with respect to the number of unoccupied states and the basis-set size. In this approach, the usual sum-over-states expression of perturbation theory is complemented by the response of the orbital basis functions, explicitly constructed by radial integrations of frequency-dependent Sternheimer equations. In this way, an effectively infinite number of unoccupied states is included. Furthermore, the response of the core electrons is treated virtually exactly, which is out of reach otherwise. The method is an extension of the recently introduced incomplete-basis-set correction (IBC) [Betzinger et al., Phys. Rev. B 85, 245124 (2012); Phys. Rev. B 88, 075130 (2013)] to the frequency and momentum domain. We have implemented the generalized IBC within the all-electron full-potential linearized augmented-plane-wave method and demonstrate for rocksalt BaO the improved convergence of the dynamical Kohn-Sham polarizability. We apply this technique to compute (a) quasiparticle energies employing the COHSEX approximation for the self-energy of many-body perturbation theory and (b) all-electron RPA correlation energies. It is shown that the favorable convergence of the polarizability is passed on to the COHSEX and RPA calculations.
Spreadsheet Based Scaling Calculations and Membrane Performance
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI) for each solid of interest.
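The final comparison of the effective ion product Q against a temperature-adjusted Ksp reduces to a single line. The sketch below is illustrative only; the gypsum Ksp shown is an assumed order-of-magnitude value, not one taken from the TFSP or USGS databases.

```python
import math

def saturation_index(ion_product, ksp):
    """SI = log10(Q / Ksp): SI > 0 oversaturated (scale-forming),
    SI < 0 undersaturated, SI = 0 at equilibrium."""
    return math.log10(ion_product / ksp)

# Illustrative: effective ion product Q for CaSO4·2H2O compared against an
# assumed Ksp of ~3e-5 (order of magnitude only, not a database value)
si = saturation_index(4.0e-6, 3.0e-5)  # negative: no gypsum scaling expected
```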
NASA Astrophysics Data System (ADS)
Rury, Aaron S.; Mansour, Kamjou; Yu, Nan
2015-07-01
This study examines the capability to significantly suppress the frequency noise of a semiconductor distributed feedback diode laser using a universally applicable approach: a combination of a high-Q crystalline whispering gallery mode microresonator reference and the Pound-Drever-Hall locking scheme using an all-electronic servo loop. An out-of-loop delayed self-heterodyne measurement system demonstrates the ability of this approach to reduce a test laser's absolute line width by nearly a factor of 100. In addition, in-loop characterization of the laser stabilized using this method demonstrates a 1-kHz residual line width with reference to the resonator frequency. Based on these results, we propose that utilization of an all-electronic loop combined with the use of the wide transparency window of crystalline materials enable this approach to be readily applicable to diode lasers emitting in other regions of the electromagnetic spectrum, especially in the UV and mid-IR.
NASA Astrophysics Data System (ADS)
Betzinger, Markus; Friedrich, Christoph; Blügel, Stefan
2013-08-01
In a previous publication [Betzinger, Friedrich, Görling, and Blügel, Phys. Rev. B 85, 245124 (2012)] we presented a technique to compute accurate all-electron response functions, e.g., the density response function, within the full-potential linearized augmented-plane-wave (FLAPW) method. Response contributions that are not captured (completely) within the finite Hilbert space spanned by the LAPW basis are taken into account by an incomplete-basis-set correction (IBC). The latter is based on a formal response of the basis functions themselves, which is derived by exploiting their dependence on the effective potential. Its construction requires the solution of radial differential equations, having the form of Sternheimer equations, by numerical integration. The approach includes a formally exact treatment of the response contribution from the core states. While we restricted the formalism to spherical perturbations in the previous work, we here generalize the formalism to nonspherical perturbations. The improvements are demonstrated with exact-exchange optimized-effective-potential (EXX-OEP) calculations of antiferromagnetic NiO. It is shown that with the generalized IBC a basis-set convergence is realized that is as fast as in density-functional theory calculations using standard local or semilocal functionals. The EXX-OEP band gap, magnetic moment, and spectral function of NiO are in substantially better agreement with experiment than results obtained from calculations with local and semilocal functionals.
Relaxation of Actinide Surfaces: An All Electron Study
NASA Astrophysics Data System (ADS)
Atta-Fynn, Raymond; Dholabhai, Pratik; Ray, Asok
2006-10-01
Fully relativistic full potential density functional calculations with a linearized augmented plane wave plus local orbitals basis (LAPW + lo) have been performed to investigate the relaxations of heavy actinide surfaces, namely the (111) surface of fcc δ-Pu and the (0001) surface of dhcp Am using WIEN2k. This code uses the LAPW + lo method with the unit cell divided into non-overlapping atom-centered spheres and an interstitial region. The APW+lo basis is used to describe all s, p, d, and f states and LAPW basis to describe all higher angular momentum states. Each surface was modeled by a three-layer periodic slab separated by 60 Bohr vacuum with four atoms per surface unit cell. In general, we have found a contraction of the interlayer separations for both Pu and Am. We will report, in detail, the electronic and geometric structures of the relaxed surfaces and comparisons with the respective non-relaxed surfaces.
NASA Astrophysics Data System (ADS)
Zhu, C. G.; Chang, J.; Wang, P. P.; Wang, Q.; Wei, W.; Tian, J. Q.; Chang, H. T.; Liu, X. Z.; Zhang, S. S.
2014-03-01
A single-beam balanced radiometric detection (BRD) system with all-electronic feedback stabilization is proposed for high-reliability water vapor detection under harsh environmental conditions; it is insensitive to fluctuations in the transmission loss of light. The majority of the photocurrent attenuation caused by optical loss can be effectively compensated by automatically adjusting the splitting ratio of the probe photocurrent. Based on the Ebers-Moll model, we present a theoretical analysis showing that the photocurrent attenuation caused by optical loss can be suppressed from 0.5552 dB to 0.0004 dB by the all-electronic feedback stabilization. The deviation of the single-beam BRD system is below 0.29% with a bending loss of 0.31 dB in the fiber, which is markedly lower than that of the dual-beam BRD system (5.96%) and of the subtraction system (11.3%). After averaging and filtering, an absorption sensitivity for water vapor at 1368.597 nm of 7.368×10^-6 has been demonstrated.
Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.
Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F
2016-10-01
Predicting NMR properties is a valuable tool to assist experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable for calculating the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt ligands. The new basis sets, identified as NMR-DKH, were partially contracted as a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose NMR chemical shifts range from -1000 to -6000 ppm. Furthermore, the models were validated using a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a non-relativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt complexes), and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results are comparable with relativistic DFT calculations: 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc. PMID:27510431
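The error metrics used to rank such empirical models are straightforward to reproduce. The following sketch computes MAD and MRD in the usual way; the shift values in the example are hypothetical, not taken from the paper's data sets.

```python
def mad_mrd(predicted, observed):
    """Mean absolute deviation (same units as input) and
    mean relative deviation (percent)."""
    n = len(observed)
    mad = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
    mrd = 100.0 * sum(abs((p - o) / o) for p, o in zip(predicted, observed)) / n
    return mad, mrd

# Hypothetical Pt-195 shifts (ppm): predicted vs. experimental
mad, mrd = mad_mrd([-1100.0, -2050.0], [-1000.0, -2000.0])
```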
All-electron Kohn–Sham density functional theory on hierarchic finite element spaces
Schauer, Volker; Linder, Christian
2013-10-01
In this work, a real-space formulation of the Kohn–Sham equations is developed, making use of a hierarchy of finite element spaces of different polynomial order. The focus is on all-electron calculations, which place the highest demands on the basis set, which must be able to represent the orthogonal eigenfunctions as well as the electrostatic potential. A careful numerical analysis is performed, which points out the numerical intricacies originating from the singularity of the nuclei and the necessity for approximations in the numerical setting, with the aim of enabling solutions within a predefined accuracy. In this context the influence of counter-charges in the Poisson equation, the requirement of a finite domain size, numerical quadratures and mesh refinement are examined, as well as the representation of the electrostatic potential in a high-order finite element space. The performance and accuracy of the method are demonstrated in computations on noble gases. In addition the finite element basis proves its flexibility in the calculation of the bond length as well as the dipole moment of the carbon monoxide molecule.
Proton dose calculation based on in-air fluence measurements.
Schaffner, Barbara
2008-03-21
Proton dose calculation algorithms, as well as photon and electron algorithms, are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and, to a lesser extent, the design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques. PMID:18367787
A basic insight to FEM_based temperature distribution calculation
NASA Astrophysics Data System (ADS)
Purwaningsih, A.; Khairina
2012-06-01
A manual for finite element method (FEM)-based temperature distribution calculation has been prepared. The code is written in Visual Basic and runs under Windows. The calculation of the temperature distribution based on FEM has three steps, namely preprocessing, processing and postprocessing. Therefore, three manuals are produced: a preprocessor manual to prepare the data, a processor manual to solve the problem, and a postprocessor manual to display the result. In these manuals, every step of the general procedure is described in detail. These manuals are expected to make the calculation of temperature distributions better understood and easier to perform.
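The three-step FEM pipeline described above can be sketched for steady 1D heat conduction with linear elements. Python stands in for the manuals' Visual Basic, and the rod length, conductivity and boundary temperatures in the example are illustrative.

```python
def solve_heat_1d(n_elem, length, k, t_left, t_right):
    """Steady 1D conduction with linear elements and fixed end temperatures.
    Returns nodal temperatures. Requires n_elem >= 2."""
    n = n_elem + 1
    c = k * n_elem / length  # element conductance k/h
    # Preprocessor: assemble the symmetric tridiagonal stiffness matrix
    main = [0.0] * n
    off = [0.0] * (n - 1)
    rhs = [0.0] * n
    for e in range(n_elem):
        main[e] += c
        main[e + 1] += c
        off[e] -= c
    # Dirichlet BCs: fix end nodes, move known values to neighbouring RHS entries
    rhs[1] += c * t_left
    rhs[n - 2] += c * t_right
    main[0], off[0], rhs[0] = 1.0, 0.0, t_left
    main[n - 1], off[n - 2], rhs[n - 1] = 1.0, 0.0, t_right
    # Processor: Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        w = off[i - 1] / main[i - 1]
        main[i] -= w * off[i - 1]
        rhs[i] -= w * rhs[i - 1]
    t = [0.0] * n
    t[-1] = rhs[-1] / main[-1]
    for i in range(n - 2, -1, -1):
        t[i] = (rhs[i] - off[i] * t[i + 1]) / main[i]
    return t  # the postprocessor step would display these values
```

With no heat sources the exact solution is linear between the end temperatures, which gives a direct check of the assembly and solve steps.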
Fluorescent color factor calculation using dBASE-II.
King, R L; Carter, H A; Birckbichler, P J
1986-06-01
A software system utilizing dBASE-II operating on a dual-drive Apple II+ computer is described. Color factors and retention times for 15 amino acids and epsilon-(gamma-glutamyl)lysine dipeptide are calculated following high performance liquid chromatography. The software package produces a listing of acceptable limits for these parameters calculated as plus and minus 2 standard deviations of the mean. The code is distributed in source form. PMID:3450360
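The acceptance-limit bookkeeping described above (mean plus and minus 2 standard deviations) is easy to restate outside dBASE-II. The Python sketch below uses made-up retention times; the function name is illustrative.

```python
import math

def control_limits(values):
    """Acceptance limits at mean +/- 2 sample standard deviations."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean - 2.0 * sd, mean + 2.0 * sd

# Made-up retention times (min) for one amino acid across replicate runs
low, high = control_limits([9.0, 10.0, 11.0])
```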
Algorithm for calculating torque base in vehicle traction control system
NASA Astrophysics Data System (ADS)
Li, Hongzhi; Li, Liang; Song, Jian; Wu, Kaihui; Qiao, Yanjuan; Liu, Xingchun; Xia, Yongguang
2012-11-01
Existing research on traction control systems (TCS) mainly focuses on control methods, such as PID control and fuzzy logic control, aimed at achieving an ideal slip rate of the drive wheel over long control periods. The initial output of the TCS (referred to as the torque base in this paper), which has a great impact on the driving performance of the vehicle in the early cycles, remains to be investigated. In order to improve the control performance of the TCS in the first several cycles, an algorithm is proposed to determine the torque base. First, torque bases are calculated by two different methods, one based on state judgment and the other based on vehicle dynamics. The confidence level of the torque base calculated from the vehicle dynamics is also obtained. The final torque base is then determined from the two torque bases and the confidence level. Hardware-in-the-loop (HIL) simulations and vehicle tests emulating sudden starts on low-friction roads have been conducted to verify the proposed algorithm. The control performance of a PID-controlled TCS with and without the proposed torque base algorithm is compared, showing that the proposed algorithm improves the performance of the TCS over the first several cycles and increases vehicle speed by about 5% in comparison. The proposed research provides a more appropriate initial value for TCS control and improves the performance of the first several control cycles of the TCS.
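The abstract does not spell out how the two torque bases and the confidence level are combined. One plausible reading is a confidence-weighted blend, sketched below; this is purely illustrative and is not the authors' actual fusion rule.

```python
def fuse_torque_base(t_state, t_dyn, confidence):
    """Blend a state-judgment torque base and a dynamics-based torque base,
    weighting the dynamics value by its confidence in [0, 1].

    Illustrative assumption only; the paper's fusion rule is not given here.
    """
    c = min(max(confidence, 0.0), 1.0)  # clamp to a valid weight
    return c * t_dyn + (1.0 - c) * t_state
```

At full confidence the dynamics-based estimate is used unchanged; at zero confidence the controller falls back on the state-judgment value.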
Putting Math in Motion with Calculator-Based Labs.
ERIC Educational Resources Information Center
Doerr, Helen M.; Rieff, Cathieann; Tabor, Jason
1999-01-01
Many students have difficulties in interpreting position versus time graphs. Presents an activity involving calculator-based motion labs that allows students to bring these graphs to life by turning their own motion into a graph that can be analyzed, investigated, and interpreted in terms of how they actually moved. (ASK)
Software-Based Visual Loan Calculator For Banking Industry
NASA Astrophysics Data System (ADS)
Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A loan calculator for the banking industry is very necessary in the modern banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET program was implemented and the software proved satisfactory.
Gamma Knife radiosurgery with CT image-based dose calculation.
Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful
2015-01-01
The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, and the minimum, maximum, and mean doses to the treatment targets and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, the location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution
All-electronic biosensing in microfluidics: bulk and surface impedance sensing
NASA Astrophysics Data System (ADS)
Fraikin, Jean-Luc
All-electronic, impedance-based sensing techniques offer promising new routes for probing nanoscale biological processes. The ease with which electrical probes can be fabricated at the nanoscale and integrated into microfluidic systems, combined with the large bandwidth afforded by radiofrequency electrical measurement, gives electrical detection significant advantages over other sensing approaches. We have developed two microfluidic devices for impedance-based biosensing. The first is a novel radiofrequency (rf) field-effect transistor which uses the electrolytic Debye layer as its active element. We demonstrate control of the nm-thick Debye layer using an external gate voltage, with gate modulation at frequencies as high as 5 MHz. We use this sensor to make quantitative measurements of the electric double-layer capacitance, including determining and controlling the potential of zero charge of the electrodes, a quantity of importance for electrochemistry and impedance-based biosensing. The second device is a microfluidic analyzer for high-throughput, label-free measurement of nanoparticles suspended in a fluid. We demonstrate detection and volumetric analysis of individual synthetic nanoparticles (<100 nm dia.) with sufficient throughput to analyze >500,000 particles/second, and are able to distinguish subcomponents of a polydisperse particle mixture with diameters larger than about 30-40 nm. We also demonstrate the rapid (seconds) size and titer analysis of unlabeled bacteriophage T7 (55-65 nm dia.) in both salt solution and mouse blood plasma, using ~1 μL of analyte. Surprisingly, we find that the background of naturally occurring nanoparticles in plasma has a power-law size distribution. The scalable fabrication of these instruments and the simple electronics required for readout make them well suited for practical applications.
NASA Astrophysics Data System (ADS)
Nishioka, Hirotaka; Ando, Koji
2011-05-01
By making use of an ab initio fragment-based electronic structure method, the fragment molecular orbital-linear combination of MOs of the fragments (FMO-LCMO), developed by Tsuneyuki et al. [Chem. Phys. Lett. 476, 104 (2009)], 10.1016/j.cplett.2009.05.069, we propose a novel approach to describe long-distance electron transfer (ET) in large systems. The FMO-LCMO method produces the one-electron Hamiltonian of the whole system using the output of the FMO calculation, with a computational cost much lower than that of conventional all-electron calculations. By diagonalizing the FMO-LCMO Hamiltonian matrix, the molecular orbitals (MOs) of the whole system can be described by the LCMOs. In our approach, the electronic coupling TDA of ET is calculated from the energy splitting of the frontier MOs of the whole system, or by a perturbation method in terms of the FMO-LCMO Hamiltonian matrix. Moreover, by taking into account only the valence MOs of the fragments, we can considerably reduce the computational cost of evaluating TDA. Our approach was tested on four different kinds of model ET systems, with non-covalent stacks of methane, non-covalent stacks of benzene, trans-alkanes, and alanine polypeptides as their bridge molecules, respectively. It reproduced reasonable TDA values for all cases compared to the reference all-electron calculations. Furthermore, the tunneling pathway at fragment-based resolution was obtained from the tunneling current method with the FMO-LCMO Hamiltonian matrix.
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...
Electronic Structure Calculations of delta-Pu Based Alloys
Landa, A; Soderlind, P; Ruban, A
2003-11-13
First-principles methods are employed to study the ground-state properties of δ-Pu-based alloys. The calculations show that an alloy component larger than δ-Pu has a stabilizing effect. Detailed calculations have been performed for the δ-Pu(1-c)Am(c) system. The calculated density of Pu-Am alloys agrees well with the experimental data. The paramagnetic → antiferromagnetic transition temperature (Tc) of δ-Pu(1-c)Am(c) alloys is calculated by a Monte-Carlo technique. By introducing Am into the system, one can lower Tc from 548 K (pure Pu) to 372 K (Pu₇₀Am₃₀). We also found that, contrary to pure Pu, where this transition destabilizes the δ-phase, the Pu₃Am compound remains stable in the antiferromagnetic phase, which correlates with the recent discovery of Curie-Weiss behavior of δ-Pu(1-c)Am(c) at c ≈ 24 at.%.
Calculating track-based observables for the LHC.
Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J
2013-09-01
By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new nonperturbative objects called track functions which absorb infrared divergences. The track function Ti(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading order calculation of the total energy fraction of charged particles in e⁺e⁻ → hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios. PMID:25166657
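In symbols, the object introduced above can be sketched as follows (based on the abstract's description; the field-theoretic definition and the NLO matching structure are given in the paper itself):

```latex
% Track function T_i(x): distribution of the fraction x of the energy
% of a hard parton i carried by its charged hadrons,
x \;=\; \frac{E_{\text{charged}}}{E_i},
\qquad
T_i(x) \ge 0,
\qquad
\int_0^1 \mathrm{d}x\; T_i(x) \;=\; 1 .
```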
Vertical emission profiles for Europe based on plume rise calculations.
Bieser, J; Aulinger, A; Matthias, V; Quante, M; Denier van der Gon, H A C
2011-10-01
The vertical allocation of emissions has a major impact on results of Chemistry Transport Models. However, in Europe it is still common to use fixed vertical profiles based on rough estimates to determine the emission height of point sources. This publication introduces a set of new vertical profiles for the use in chemistry transport modeling that were created from hourly gridded emissions calculated by the SMOKE for Europe emission model. SMOKE uses plume rise calculations to determine effective emission heights. Out of more than 40,000 different vertical emission profiles 73 have been chosen by means of hierarchical cluster analysis. These profiles show large differences to those currently used in many emission models. Emissions from combustion processes are released in much lower altitudes while those from production processes are allocated to higher altitudes. The profiles have a high temporal and spatial variability which is not represented by currently used profiles. PMID:21561695
Helium diffusion in olivine based on first principles calculations
NASA Astrophysics Data System (ADS)
Wang, Kai; Brodholt, John; Lu, Xiancai
2015-05-01
Helium is a key trace element involved in mantle evolution, and its transport properties in the mantle are important for understanding the thermal and chemical evolution of the Earth. However, the mobility of helium in the mantle is still unclear due to the scarcity of measured diffusion data for minerals under mantle conditions. In this study, we used first principles calculations based on density functional theory to calculate the absolute diffusion coefficients of helium in olivine. Using the climbing-image nudged elastic band method, we determined the diffusion pathways, the activation energies (Ea), and the prefactors. Our results demonstrate that the diffusion of helium has moderate anisotropy. The directionally dependent diffusion of helium in olivine can be written in Arrhenius form.
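The Arrhenius expression itself did not survive extraction here; its generic form is shown below (the per-axis prefactors D₀ and activation energies Eₐ are given in the original paper, not here):

```latex
% Generic Arrhenius form of the directional diffusion coefficient;
% the values of D_{0,i} and E_{a,i} for each crystallographic axis
% are reported in the original paper.
D_{i} \;=\; D_{0,i}\, \exp\!\left(-\frac{E_{a,i}}{k_\mathrm{B} T}\right),
\qquad i \in \{[100],\,[010],\,[001]\}
```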
Advancing QCD-based calculations of energy loss
NASA Astrophysics Data System (ADS)
Tywoniuk, Konrad
2013-08-01
We give a brief overview of the basics and current developments of QCD-based calculations of radiative processes in medium. We put an emphasis on the underlying physics concepts and discuss the theoretical uncertainties inherently associated with the fundamental parameters to be extracted from data. An important area of development is the study of the single-gluon emission in medium. Moreover, establishing the correct physical picture of multi-gluon emissions is imperative for comparison with data. We will report on progress made in both directions and discuss perspectives for the future.
Supersampling method for efficient grid-based electronic structure calculations
NASA Astrophysics Data System (ADS)
Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn
2016-03-01
The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the use of the sinc filtering function performs best because, as an ideal low-pass filter, it clearly cuts out the high-frequency region beyond that allowed by a given grid spacing. PMID:26957151
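As a toy one-dimensional illustration of why the sinc function acts as an ideal low-pass filter at a given grid spacing (this sketch is ours, not the paper's three-dimensional implementation):

```python
import math

def sinc(t):
    """Normalized sinc, sin(pi t) / (pi t), with sinc(0) = 1: the
    impulse response of an ideal low-pass filter at the grid
    Nyquist frequency."""
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def sinc_interpolate(samples, h, x, n0=0):
    """Whittaker-Shannon reconstruction of a function from samples
    taken at the grid points (n0 + k) * h; frequencies beyond those
    representable at spacing h are cut out by construction."""
    return sum(s * sinc(x / h - (n0 + k)) for k, s in enumerate(samples))

# sin is band-limited, so sampling it finer than the Nyquist spacing
# (h < pi) lets the sinc filter reproduce it between grid points, up
# to the error from truncating the infinite sample sum to [-40, 40].
h = 0.1
n0 = -400
samples = [math.sin((n0 + k) * h) for k in range(801)]
value = sinc_interpolate(samples, h, 0.35, n0)  # close to math.sin(0.35)
```

A function with frequency content above the grid's Nyquist limit would instead be smoothed by this filter, which is exactly the behavior that suppresses the egg-box oscillation.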
Sensor Based Engine Life Calculation: A Probabilistic Perspective
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei; Chen, Philip
2003-01-01
It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.
Probabilistic Study Conducted on Sensor-Based Engine Life Calculation
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei
2004-01-01
Turbine engine life management is a very complicated process to ensure the safe operation of an engine subjected to complex usage. The challenge of life management is to find a reasonable compromise between the safe operation and the maximum usage of critical parts to reduce maintenance costs. The commonly used "cycle count" approach does not take the engine operation conditions into account, and it oversimplifies the calculation of the life usage. Because of the shortcomings, many engine components are regularly pulled for maintenance before their usable life is over. And, if an engine has been running regularly under more severe conditions, components might not be taken out of service before they exceed their designed risk of failure. The NASA Glenn Research Center and its industrial and academic partners have been using measurable parameters to improve engine life estimation. This study was based on the Monte Carlo simulation of 5000 typical flights under various operating conditions. First a closed-loop engine model was developed to simulate the engine operation across the mission profile and a thermomechanical fatigue (TMF) damage model was used to calculate the actual damage during takeoff, where the maximum TMF accumulates. Next, a Weibull distribution was used to estimate the implied probability of failure for a given accumulated cycle count. Monte Carlo simulations were then employed to find the profiles of the TMF damage under different operating assumptions including parameter uncertainties. Finally, probabilities of failure for different operating conditions were analyzed to demonstrate the importance of a sensor-based damage calculation in order to better manage the risk of failure and on-wing life.
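The cycle-to-risk conversion described above (a Weibull implied probability of failure combined with Monte Carlo scatter in the accumulated damage) can be sketched as follows; the lognormal scatter model, all numerical parameters, and the function names are illustrative assumptions, not values from the study:

```python
import math
import random

def weibull_failure_probability(n_cycles, eta, beta):
    """Implied cumulative probability of failure after accumulating
    n_cycles equivalent cycles, for a Weibull life distribution with
    scale eta (cycles) and shape beta."""
    return 1.0 - math.exp(-((n_cycles / eta) ** beta))

def simulate_fleet_risk(n_flights, mean_damage, eta, beta, rng,
                        sigma=0.2, n_samples=5000):
    """Monte Carlo sketch: scatter the per-flight TMF damage, sum the
    equivalent cycles for each simulated engine, and convert the total
    to a probability of failure.  The lognormal per-flight scatter is
    an illustrative assumption, not the study's damage model."""
    risks = []
    for _ in range(n_samples):
        cycles = sum(mean_damage * math.exp(rng.gauss(0.0, sigma))
                     for _ in range(n_flights))
        risks.append(weibull_failure_probability(cycles, eta, beta))
    return sum(risks) / len(risks)
```

Seeding the random generator (e.g. `random.Random(0)`) makes the simulated risk profile reproducible between runs.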
Bubin, Sergiy; Adamowicz, Ludwik
2014-01-14
Benchmark variational calculations are performed for the seven lowest 1s²2s np (¹P), n = 2...8, states of the beryllium atom. The calculations explicitly include the effect of the finite mass of the ⁹Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12,500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of the 1s²2s np (¹P) → 1s²2s² (¹S) transitions is about 12 cm⁻¹. The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm⁻¹. PMID:24437871
Error propagation in PIV-based Poisson pressure calculations
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2015-11-01
After more than 20 years of development, PIV has become a standard non-invasive velocity field measurement technique and promises to make PIV-based pressure calculations possible. However, the errors inherent in PIV velocity fields propagate through integration and contaminate the calculated pressure field. We propose an analysis that shows how the uncertainties in the velocity field propagate to the pressure field through the Poisson equation. First, we model the dynamics of error propagation using boundary value problems (BVPs). Next, the L2-norm and/or L∞-norm is utilized as the measure of error in the velocity and pressure fields. Finally, using analysis techniques including the maximum principle and the Poincaré inequality, the error in the pressure field can be bounded by the error level of the data by considering the well-posedness of the BVPs. Specifically, we examine if and how the error in the pressure field depends continuously on the BVP data. Factors such as flow field geometry, boundary conditions, and velocity field noise levels are discussed analytically.
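For reference, the boundary value problem in question is the standard pressure Poisson equation of incompressible flow; this generic form is supplied here, and the paper's exact formulation, including its boundary conditions, may differ:

```latex
% Pressure Poisson equation for incompressible flow with density
% \rho and PIV-measured velocity field \mathbf{u}:
\nabla^{2} p \;=\; -\rho\, \nabla \cdot
  \left[ (\mathbf{u} \cdot \nabla)\, \mathbf{u} \right],
% so velocity noise enters the BVP through both the source term
% and the boundary data.
```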
Water on the sun: line assignments based on variational calculations.
Polyansky, O L; Zobov, N F; Viti, S; Tennyson, J; Bernath, P F; Wallace, L
1997-07-18
The infrared spectrum of hot water observed in a sunspot has been assigned. The high temperature of the sunspot (3200 K) gave rise to a highly congested pure rotational spectrum in the 10-micrometer region that involved energy levels at least halfway to dissociation. Traditional spectroscopy, based on perturbation theory, is inadequate for this problem. Instead, accurate variational solutions of the vibration-rotation Schrödinger equation were used to make assignments, revealing unexpected features, including rotational difference bands and fewer degeneracies than anticipated. These results indicate that a shift away from perturbation theory to first principles calculations is necessary in order to assign spectra of hot polyatomic molecules such as water. PMID:9219686
Wavelet-Based DFT calculations on Massively Parallel Hybrid Architectures
NASA Astrophysics Data System (ADS)
Genovese, Luigi
2011-03-01
In this contribution, we present an implementation of a full DFT code that can run on massively parallel hybrid CPU-GPU clusters. Our implementation is based on modern GPU architectures which support double-precision floating-point numbers. This DFT code, named BigDFT, is delivered within the GNU-GPL license either in a stand-alone version or integrated in the ABINIT software package. Hybrid BigDFT routines were initially ported with NVidia's CUDA language, and recently more functionalities have been added with new routines written within Khronos' OpenCL standard. The formalism of this code is based on Daubechies wavelets, which form a systematic real-space basis set. As we will see in the presentation, the properties of this basis set are well suited for an extension to a GPU-accelerated environment. In addition to focusing on the implementation of the operators of the BigDFT code, this presentation also addresses the usage of GPU resources in a complex code with different kinds of operations. The present and expected performance of hybrid-architecture computation in the framework of electronic structure calculations is also discussed.
Bernard, F.; Borovnicar, I.; Ghirlanda, M.
1996-12-01
A Windows-based computer program for gasket calculation is presented; the C++ language was used. On the basis of experimental results and data sets available in the literature, and calculated with the help of the FSA and PVRC methods, the assembly parameters were determined. The result is the DONIT TESNITI Diskette, a smart tool for selecting gaskets on the basis of service conditions and tightness requirements.
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng
2010-06-01
Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using a modern graphics processing unit (GPU) and a programmable color rendering tool is put forward. It devises a representation of element information in accordance with the features of the GPU, converts all the element calculations into a film rendering process, solves the simulation of the internal-force calculations for all elements, and overcomes the low level of parallelism seen previously when running on a single computer. Studies show that this method can improve efficiency and greatly shorten calculation time. The results of an emulation calculation of an elasticity problem with a large number of shell elements in sheet metal proved that the GPU parallel simulation is faster than the CPU's. This approach is useful and efficient for solving engineering problems.
Fast calculation of object infrared spectral scattering based on CUDA
NASA Astrophysics Data System (ADS)
Li, Liang-chao; Niu, Wu-bin; Wu, Zhen-sen
2010-11-01
The computational unified device architecture (CUDA) is used to parallelize the spectral scattering calculation from a non-Lambertian object under sky and earth background irradiation. A five-parameter model of the bidirectional reflectance distribution function (BRDF) is utilized in the surface-element scattering calculation. The calculation is partitioned into many threads running in the GPU kernel; each thread computes the infrared spectral scattering intensity of one visible surface element in a specific incident direction, and the intensities of all visible surface elements are weighted and averaged to obtain the object's surface scattering intensity. A comparison between the CPU calculation and the CUDA parallel calculation for a cylinder shows that the CUDA parallel calculation is more than two hundred times faster while meeting the required accuracy, giving it high engineering value.
Locally Refined Multigrid Solution of the All-Electron Kohn-Sham Equation.
Cohen, Or; Kronik, Leeor; Brandt, Achi
2013-11-12
We present a fully numerical multigrid approach for solving the all-electron Kohn-Sham equation in molecules. The equation is represented on a hierarchy of Cartesian grids, from coarse ones that span the entire molecule to very fine ones that describe only a small volume around each atom. This approach is adaptable to any type of geometry. We demonstrate it for a variety of small molecules and obtain high accuracy agreement with results obtained previously for diatomic molecules using a prolate-spheroidal grid. We provide a detailed presentation of the numerical methodology and discuss possible extensions of this approach. PMID:26583393
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16
Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
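The frequency-table shortcut described in this abstract can be sketched as follows (function names are ours; a minimal illustration, not the authors' implementation):

```python
from collections import Counter

def error_table_sum(original, approx, err):
    """Total approximation error via the frequency-table shortcut:
    evaluate err once per unique (original, approximating) voxel-value
    pair and weight it by the pair's number of occurrences.  For byte
    data there are at most 256 * 256 unique pairs, independent of
    volume size."""
    table = Counter(zip(original, approx))
    return sum(err(o, a) * n for (o, a), n in table.items())

def error_direct_sum(original, approx, err):
    """Reference: evaluate the error function for every voxel."""
    return sum(err(o, a) for o, a in zip(original, approx))
```

The key property is that when the transfer function changes, only `err` changes; the pair table can be cached, so re-computing the total error costs at most 65,536 evaluations rather than one per voxel.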
Trajectory Based Heating and Ablation Calculations for MESUR Pathfinder Aeroshell
NASA Technical Reports Server (NTRS)
Chen, Y. K.; Henline, W. D.; Tauber, M. E.; Arnold, James O. (Technical Monitor)
1994-01-01
Based on the geometry of the Mars Environment Survey (MESUR) Pathfinder aeroshell and an estimated Mars entry trajectory, two-dimensional axisymmetric time-dependent calculations have been obtained using the GIANTS (Gauss-Seidel Implicit Aerothermodynamic Navier-Stokes code with Thermochemical Surface Conditions) code and the CMA (Charring Material Thermal Response and Ablation) program for heating analysis and heat shield material sizing. These two codes are interfaced using a loosely coupled technique. The flowfield and convective heat transfer coefficients are computed by the GIANTS code with a species balance condition for an ablating surface, and the time-dependent in-depth conduction with surface blowing is simulated by the CMA code with a complete surface energy balance condition. In this study, SLA-561V has been selected as the heat shield material. The solutions, including the minimum heat shield thicknesses over the aeroshell forebody, pyrolysis gas blowing rates, surface heat fluxes and temperature distributions, flowfield, and in-depth temperature history of SLA-561V, are presented and discussed in detail.
Nonideal thermoequilibrium calculations using a large product species data base
Hobbs, M.L.; Baer, M.R.
1992-06-01
Thermochemical data fits for approximately 900 gaseous and 600 condensed species found in the JANAF tables (Chase et al., 1985) have been completed for use with the TIGER nonideal thermoequilibrium code (Cowperthwaite and Zwisler, 1973). The TIGER code has been modified to allow systems containing up to 400 gaseous and 100 condensed constituents composed of up to 50 elements. Gaseous covolumes have been estimated following the procedure outlined by Mader (1979) using estimates of van der Waals radii for 48 elements and three-dimensional molecular mechanics. Molecular structures for all gaseous components were explicitly defined in terms of atomic coordinates in Å. The Becker-Kistiakowsky-Wilson equation of state (BKW-EOS) has been calibrated near C-J states using detonation temperatures measured in liquid and solid explosives and a large product species data base. Detonation temperatures for liquid and solid explosives were predicted adequately with a single set of BKW parameters. Values for the empirical BKW constants α, β, k, and θ were 0.5, 0.174, 11.85, and 5160, respectively. Values for the covolume factors, kᵢ, were assumed to be invariant. The liquid explosives included mixtures of hydrazine nitrate with hydrazine, hydrazine hydrate, and water; mixtures of tetranitromethane with nitromethane; the liquid isomers ethyl nitrate and 2-nitroethanol; and nitroglycerine. The solid explosives included HMX, RDX, PETN, Tetryl, and TNT. Color contour plots of HMX equilibrium products as well as thermodynamic variables are shown in pressure and temperature space. Similar plots for a pyrotechnic reaction composed of TiH₂ and KClO₄ are also reported. Calculations for a typical HMX-based propellant are also discussed.
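The BKW-EOS referred to above has the following standard form, reproduced here from the general literature (the abstract's calibrated constants are α = 0.5, β = 0.174, k = 11.85, and θ = 5160):

```latex
% Becker-Kistiakowsky-Wilson equation of state; X_i are product mole
% fractions and k_i their (invariant) covolume factors:
\frac{pV}{RT} \;=\; 1 + x\, e^{\beta x},
\qquad
x \;=\; \frac{k \sum_i X_i k_i}{V\,(T + \theta)^{\alpha}}
```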
Method of characteristics - Based sensitivity calculations for international PWR benchmark
Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.
2013-07-01
A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of the fission intensity for the international PWR benchmark are performed.
An Analysis of Differential Item Functioning Based on Calculator Type.
ERIC Educational Resources Information Center
Schwarz, Richard; Rich, Changhua; Arenson, Ethan; Podrabsky, Tracy; Cook, Gary
The effect of calculator type on student performance on a mathematics examination was studied. Differential item functioning (DIF) methodology was applied to examine group differences (calculator use) on item performance while conditioning on the relevant ability. Other survey questions were developed to ask students the extent to which they used…
NASA Astrophysics Data System (ADS)
Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias
2015-05-01
We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids which allow accurate results to be produced. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include a detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.
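The strain derivative in question is the standard definition of the stress tensor, shown here in generic form (the paper's working expressions additionally spell out the semi-local and hybrid-functional contributions):

```latex
% Stress tensor as the strain derivative of the total energy,
% evaluated at zero strain (V is the unit-cell volume):
\sigma_{\alpha\beta}
  \;=\; \frac{1}{V}
  \left. \frac{\partial E_{\mathrm{tot}}(\boldsymbol{\epsilon})}
              {\partial \epsilon_{\alpha\beta}}
  \right|_{\boldsymbol{\epsilon}=0}
```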
Code of Federal Regulations, 2010 CFR
2010-07-01
31 CFR 370.35 (Money and Finance: Treasury; Public Debt; Electronic Transactions and Funds Transfers Relating to United States Securities): Will the Public Debt accept all electronically signed transaction requests? An electronic signature will not be...
Magnetic susceptibility of semiconductors by an all-electron first-principles approach
Ohno, K.; Mauri, F.; Louie, S. G.
1997-07-01
The magnetic susceptibility (χ) of the semiconductors (diamond, Si, GaAs, and GaP) and of the inert-gas solids (Ne, Ar, and Kr) is evaluated within density-functional theory in the local-density approximation, using a mixed-basis all-electron approach. In Si, GaAs, GaP, Ar, and Kr, the contribution of core electrons to χ is comparable to that of valence electrons. However, our results show that the contribution associated with the core states is independent of the chemical environment and can be computed from the isolated atoms. Moreover, our results indicate that the use of a "scissor operator" does not improve the agreement of the theoretical χ with experiments. © 1997 The American Physical Society
Procedure for calculating general aircraft noise based on ISO 3891
Hediger, J.R.
1982-01-01
The standard ISO 3891 specifies how to present aircraft noise heard on the ground, or the noise exposure from a succession of aircraft, without giving any details on the different parameters required for their calculation. The following study provides some of these parameters, based on acoustic measurements as well as laboratory analyses carried out in cooperation with the Swiss Federal Office for Civil Aviation.
Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine
Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois
2013-01-01
Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of the prescription dose) a difference in mean dose of up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in the XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing
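A minimal 1D sketch of the gamma criterion used in this comparison (3% dose difference, 3 mm distance-to-agreement) might look as follows; it illustrates the metric on toy dose profiles and is not the clinical implementation:

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
    """1D gamma index (global 3%/3 mm by default).

    For each reference point, search all evaluation points for the
    minimum combined dose-difference / distance-to-agreement metric;
    a point passes if gamma <= 1.
    """
    d_max = dose_ref.max()                     # global normalisation dose
    gamma = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        dose_term = (dose_eval - dr) / (dd * d_max)
        dist_term = (x - xr) / dta
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

x = np.linspace(0, 100, 201)                   # positions in mm
ref = 2.0 * np.exp(-((x - 50) / 20) ** 2)      # toy dose profile (Gy)
shifted = 2.0 * np.exp(-((x - 51) / 20) ** 2)  # 1 mm misalignment
g = gamma_index_1d(ref, shifted, x)
pass_rate = np.mean(g <= 1.0) * 100            # % of points passing
```

A 1 mm shift is well inside the 3 mm distance-to-agreement, so every point should pass.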
Storchi, Loriano; Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Quiney, Harry M
2013-12-10
We propose a new complete memory-distributed algorithm, which significantly improves the parallel implementation of the all-electron four-component Dirac-Kohn-Sham (DKS) module of BERTHA (J. Chem. Theory Comput. 2010, 6, 384). We devised an original procedure for mapping the DKS matrix between an efficient integral-driven distribution, guided by the structure of specific G-spinor basis sets and by density fitting algorithms, and the two-dimensional block-cyclic distribution scheme required by the ScaLAPACK library employed for the linear algebra operations. This implementation, because of the efficiency in the memory distribution, represents a leap forward in the applicability of the DKS procedure to arbitrarily large molecular systems and its porting on last-generation massively parallel systems. The performance of the code is illustrated by some test calculations on several gold clusters of increasing size. The DKS self-consistent procedure has been explicitly converged for two representative clusters, namely Au20 and Au34, for which the density of electronic states is reported and discussed. The largest gold cluster uses more than 39k basis functions and DKS matrices of the order of 23 GB. PMID:26592273
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important problems in photogrammetry: it aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
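The RANSAC principle the paper relies on (sample a minimal model, count its consensus set, keep the best, refit on the inliers) can be illustrated on a simple line-fitting problem; this is a generic sketch, not the authors' DLT-based resection code:

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b robustly: sample minimal 2-point models, keep the
    one with the largest consensus set, then least-squares refit on it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                       # degenerate minimal sample
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the consensus set
    A = np.vstack([x[best_inliers], np.ones(best_inliers.sum())]).T
    (a, b), *_ = np.linalg.lstsq(A, y[best_inliers], rcond=None)
    return a, b, best_inliers

# synthetic data: true line y = 2x + 1 with roughly a third gross outliers
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.02, 100)
y[::3] += rng.uniform(5, 10, len(y[::3]))  # inject gross errors
a, b, inl = ransac_line(x, y)
```

Despite the outliers, the refit on the consensus set recovers the true slope and intercept closely.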
A probability-based formula for calculating interobserver agreement
Yelton, Ann R.; Wildman, Beth G.; Erickson, Marilyn T.
1977-01-01
Estimates of observer agreement are necessary to assess the acceptability of interval data. A common method for assessing observer agreement, per cent agreement, includes several major weaknesses and varies as a function of the frequency of behavior recorded and the inclusion or exclusion of agreements on nonoccurrences. Also, agreements that might be expected to occur by chance are not taken into account. An alternative method for assessing observer agreement that determines the exact probability that the obtained number of agreements or better would have occurred by chance is presented and explained. Agreements on both occurrences and nonoccurrences of behavior are considered in the calculation of this probability. PMID:16795541
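The paper's exact formula is not reproduced in the abstract; under the simplifying assumption that the two observers score intervals independently at their observed marginal rates, the chance probability of obtaining at least the observed number of agreements is a binomial tail:

```python
from math import comb

def chance_agreement_tail(n_intervals, n_agree, p1, p2):
    """P(at least n_agree agreements by chance) for two independent
    observers who score an interval as an occurrence with rates p1, p2.

    An agreement is both scoring occurrence or both scoring
    nonoccurrence, so the per-interval chance-agreement probability is
    p = p1*p2 + (1-p1)*(1-p2); the tail is a binomial sum.
    """
    p = p1 * p2 + (1 - p1) * (1 - p2)
    return sum(comb(n_intervals, k) * p**k * (1 - p)**(n_intervals - k)
               for k in range(n_agree, n_intervals + 1))

# 50 intervals, 45 agreements, both observers scoring ~60% occurrences
prob = chance_agreement_tail(50, 45, 0.6, 0.6)
```

A small tail probability (here far below 0.05) indicates the observed agreement is unlikely to be due to chance alone.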
Freeway Travel Speed Calculation Model Based on ETC Transaction Data
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
The real-time traffic flow condition of a freeway is becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample sizes. To ensure a sufficient sample size, ETC data from different enter-leave toll plaza pairs spanning more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speed were introduced into the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated an average relative error of about 6.5%, which means that the proposed model estimates freeway travel speed accurately. The proposed model is helpful for improving freeway operation monitoring and management, as well as for providing useful information to freeway travelers. PMID:25580107
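The abstract does not give the full dual-level model, but the core step (a segment speed from matched entry/exit transactions, discounted by a reduction coefficient) might be sketched as follows; the plaza names, distances, and α value are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    vehicle_id: str
    enter_plaza: str
    leave_plaza: str
    enter_time: float   # seconds since midnight
    leave_time: float

# hypothetical toll-to-toll distances in km (not from the paper)
SEGMENT_KM = {("A", "B"): 12.0, ("B", "C"): 18.0, ("A", "C"): 30.0}

def travel_speeds(transactions, alpha=0.95):
    """Mean travel speed (km/h) per plaza pair.

    alpha plays the role of the paper's reduction coefficient,
    discounting time spent at ramps/plazas (value here is illustrative).
    """
    sums, counts = {}, {}
    for t in transactions:
        pair = (t.enter_plaza, t.leave_plaza)
        hours = (t.leave_time - t.enter_time) / 3600.0
        if pair not in SEGMENT_KM or hours <= 0:
            continue  # unmatched pair or clock error: discard record
        v = SEGMENT_KM[pair] / (alpha * hours)
        sums[pair] = sums.get(pair, 0.0) + v
        counts[pair] = counts.get(pair, 0) + 1
    return {p: sums[p] / counts[p] for p in sums}

records = [
    Transaction("car1", "A", "B", 0.0, 600.0),   # 12 km in 10 min
    Transaction("car2", "A", "B", 0.0, 720.0),   # 12 km in 12 min
]
speeds = travel_speeds(records, alpha=1.0)
```

With α = 1 the two sample speeds are 72 and 60 km/h, so the pair mean is 66 km/h.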
Integral-transport-based deterministic brachytherapy dose calculations
NASA Astrophysics Data System (ADS)
Zhou, Chuanyu; Inanc, Feyzi
2003-01-01
We developed a transport-equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The deterministic algorithm is based on the integral transport equation. The algorithm provides the capability of computing dose distributions for multiple isotropic point and/or volumetric sources in a homogeneous or heterogeneous medium. The algorithm's results have been benchmarked against results from the literature and against MCNP results for isotropic point sources and volumetric sources.
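As a point of reference for what such a solver computes, the primary (unscattered) contribution from an isotropic point source follows an inverse-square law with exponential attenuation; the sketch below superposes two hypothetical sources and omits the scatter build-up that the integral-transport algorithm itself supplies:

```python
import math

def primary_dose(source_strength, mu, r):
    """Primary fluence-like dose from an isotropic point source:
    inverse-square falloff times exponential attenuation exp(-mu*r).
    (Scatter build-up, which the transport solver captures, is omitted.)
    """
    if r <= 0:
        raise ValueError("r must be positive")
    return source_strength * math.exp(-mu * r) / (4 * math.pi * r**2)

# superpose several point sources at a field point (hypothetical layout)
sources = [((0.0, 0.0, 0.0), 1.0), ((1.0, 0.0, 0.0), 1.0)]
point = (0.5, 0.5, 0.0)
mu = 0.1  # 1/cm, illustrative attenuation coefficient
total = sum(primary_dose(s, mu, math.dist(p, point)) for p, s in sources)
```

The chosen field point is equidistant from both sources, so each contributes equally to the total.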
Safety assessment of the conversion of toll plazas to all-electronic toll collection system.
Abuzwidah, Muamer; Abdel-Aty, Mohamed
2015-07-01
The traditional mainline toll plaza (TMTP) is considered the highest-risk location on toll roads. Conversion from a TMTP or hybrid mainline toll plaza (HMTP) to an all-electronic toll collection (AETC) system has demonstrated measurable improvement in traffic operations and environmental impact. However, there is a lack of research quantifying the safety impacts of these new tolling systems. This study evaluated the safety effectiveness of the conversion from TMTP or HMTP to an AETC system. An extensive data collection was conducted that included one hundred mainline toll plazas located on more than 750 miles of toll roads in Florida. Various observational before-after studies, including the empirical Bayes method, were applied. The results indicated that the conversion from TMTP to an AETC system resulted in average crash reductions of 76, 75, and 68% for total, fatal-and-injury, and property-damage-only (PDO) crashes, respectively; for rear-end and lane-change-related (LCR) crashes the average reductions were 80 and 74%, respectively. The conversion from HMTP to an AETC system enhanced traffic safety by reducing total, fatal-and-injury, and PDO crashes by 24, 28, and 20%, respectively; for rear-end and LCR crashes, the average reductions were 15 and 22%, respectively. Overall, this paper provides an up-to-date assessment of the safety impact of different toll collection systems. The results showed that the AETC system significantly improved traffic safety for all crash categories and changed toll plazas from the highest-risk locations on expressways to locations similar to regular segments. PMID:25909391
Formation flying benefits based on vortex lattice calculations
NASA Technical Reports Server (NTRS)
Maskew, B.
1977-01-01
A quadrilateral vortex-lattice method was applied to a formation of three wings to calculate force and moment data for use in estimating potential benefits of flying aircraft in formation on extended range missions, and of anticipating the control problems which may exist. The investigation led to two types of formation having virtually the same overall benefits for the formation as a whole, i.e., a V or echelon formation and a double row formation (with two staggered rows of aircraft). These formations have unequal savings on aircraft within the formation, but this allows large longitudinal spacings between aircraft which is preferable to the small spacing required in formations having equal benefits for all aircraft. A reasonable trade-off between a practical formation size and range benefit seems to lie at about three to five aircraft with corresponding maximum potential range increases of about 46 percent to 67 percent. At this time it is not known what fraction of this potential range increase is achievable in practice.
Coupled-cluster based basis sets for valence correlation calculations.
Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J
2016-03-14
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r(n)⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers. PMID:26979680
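The uniform-scaling step can be made concrete: for a wave function scaled as ψ(r) → λ^(3/2) ψ(λr), the kinetic and potential energies scale as λ²T and λV, so choosing λ = −V/(2T) enforces the virial relation 2T = −V exactly. A sketch (the numerical T, V, and Gaussian exponents below are invented for illustration):

```python
def virial_scale(kinetic, potential):
    """Optimal uniform coordinate scale factor lam for a trial wave
    function: with T(lam) = lam**2 * T and V(lam) = lam * V, minimizing
    E(lam) gives lam = -V / (2*T), at which 2*T(lam) = -V(lam) exactly.
    """
    lam = -potential / (2.0 * kinetic)
    t_new = lam**2 * kinetic
    v_new = lam * potential
    return lam, t_new, v_new

# hypothetical output where the virial theorem is slightly violated
T, V = 0.48, -1.0                 # hartree; -V/T = 2.083, not exactly 2
lam, T2, V2 = virial_scale(T, V)
# Gaussian primitive exponents scale as zeta -> lam**2 * zeta
scaled_exponents = [lam**2 * z for z in (0.5, 2.0, 8.0)]
```

After scaling, 2·T2 + V2 vanishes, i.e. the virial theorem is satisfied by construction.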
Ray-Based Calculations of Backscatter in Laser Fusion Targets
Strozzi, D J; Williams, E A; Hinkel, D E; Froula, D H; London, R A; Callahan, D A
2008-02-26
A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pf3d. Comparisons with Brillouin-scattering experiments at the OMEGA Laser Facility [T. R. Boehly et al., Opt. Commun. 133, p. 495 (1997)] show that laser speckles greatly enhance the reflectivity over the deplete results. An approximate upper bound on this enhancement, motivated by phase conjugation, is given by doubling the deplete coupling coefficient. Analysis with deplete of an ignition design for the National Ignition Facility (NIF) [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, p. 755 (1994)], with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bound the speckle enhancement suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.
UAV-based NDVI calculation over grassland: An alternative approach
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small, light instruments are particularly well suited to be mounted on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and entail high upfront capital costs. Therefore, we propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR that acquires the NIR spectrum, its internal infrared filter having been removed; a mounted optical filter additionally blocks all wavelengths below 700 nm; and (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
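The index itself is simple to compute once the NIR and red reflectance maps are co-registered; a minimal per-pixel sketch (the reflectance values below are toy numbers, not data from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red).

    nir and red are co-registered reflectance arrays in [0, 1];
    eps guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance "images": vegetation is NIR-bright and red-dark
nir = np.array([[0.50, 0.45], [0.20, 0.05]])
red = np.array([[0.08, 0.10], [0.18, 0.05]])
index = ndvi(nir, red)   # dense canopy ~0.7, sparse/bare ground ~0
```

NDVI is bounded in [−1, 1]; dense vegetation typically yields values above ~0.6, bare soil near 0.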
Gulans, Andris; Kontur, Stefan; Meisenbichler, Christian; Nabok, Dmitrii; Pavone, Pasquale; Rigamonti, Santiago; Sagmeister, Stephan; Werner, Ute; Draxl, Claudia
2014-09-10
Linearized augmented planewave methods are known as the most precise numerical schemes for solving the Kohn-Sham equations of density-functional theory (DFT). In this review, we describe how this method is realized in the all-electron full-potential computer package, exciting. We emphasize the variety of different related basis sets, subsumed as (linearized) augmented planewave plus local orbital methods, discussing their pros and cons and we show that extremely high accuracy (microhartrees) can be achieved if the basis is chosen carefully. As the name of the code suggests, exciting is not restricted to ground-state calculations, but has a major focus on excited-state properties. It includes time-dependent DFT in the linear-response regime with various static and dynamical exchange-correlation kernels. These are preferably used to compute optical and electron-loss spectra for metals, molecules and semiconductors with weak electron-hole interactions. exciting makes use of many-body perturbation theory for charged and neutral excitations. To obtain the quasi-particle band structure, the GW approach is implemented in the single-shot approximation, known as G(0)W(0). Optical absorption spectra for valence and core excitations are handled by the solution of the Bethe-Salpeter equation, which allows for the description of strongly bound excitons. Besides these aspects concerning methodology, we demonstrate the broad range of possible applications by prototypical examples, comprising elastic properties, phonons, thermal-expansion coefficients, dielectric tensors and loss functions, magneto-optical Kerr effect, core-level spectra and more. PMID:25135665
Calculation of thermomechanical fatigue life based on isothermal behavior
NASA Technical Reports Server (NTRS)
Halford, Gary R.; Saltsman, James F.
1987-01-01
The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain - time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.
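As one concrete form of a strainrange-versus-cyclic-life relation of the kind assigned to the hypothetical material, a Manson-Coffin style power law can be inverted for life; the constants below are invented for illustration, not taken from the report:

```python
def cycles_to_failure(delta_eps_in, C=0.2, c=-0.6):
    """Invert a Manson-Coffin style relation
        delta_eps_in = C * Nf**c
    for the cyclic life Nf. C and c are hypothetical material
    constants (c is negative: smaller strainrange, longer life)."""
    return (delta_eps_in / C) ** (1.0 / c)

# predicted life for a few inelastic strainranges
lives = {de: cycles_to_failure(de) for de in (0.01, 0.005, 0.002)}
```

Because c < 0, halving the inelastic strainrange lengthens the predicted life by roughly a factor of 2^(1/|c|).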
Glass viscosity calculation based on a global statistical modelling approach
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
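As a toy illustration of the statistical approach (a linear least-squares fit of log viscosity against composition), the sketch below uses synthetic data; the oxide names, coefficient values, and single-temperature setup are invented and far simpler than the published global model:

```python
import numpy as np

# synthetic "measurements": log10(viscosity) assumed linear in the mole
# fractions of two oxides at fixed temperature (purely illustrative)
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.uniform(0.10, 0.20, n),      # Na2O fraction (hypothetical range)
    rng.uniform(0.05, 0.15, n),      # CaO fraction (hypothetical range)
])
true_coef = np.array([12.0, -25.0, -10.0])
y = X @ true_coef + rng.normal(0.0, 0.05, n)   # log10(Pa*s) + noise

# ordinary least squares, as in a basic global regression model
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

With enough data the fitted coefficients recover the generating ones, and the residual RMSE approaches the injected noise level, mirroring the model-standard-error figures quoted in the abstract.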
Validation of KENO-based criticality calculations at Rocky Flats
Felsher, P.D.; McKamy, J.N.; Monahan, S.P.
1992-01-01
In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k_eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.
GYutsis: heuristic based calculation of general recoupling coefficients
NASA Astrophysics Data System (ADS)
Van Dyck, D.; Fack, V.
2003-08-01
General angular momentum recoupling coefficients can be expressed as a summation formula over products of 6-j coefficients. Yutsis, Levinson and Vanagas developed graphical techniques for representing the general recoupling coefficient as a cubic graph, and they describe a set of reduction rules allowing a stepwise generation of the corresponding summation formula. This paper is a follow-up to [Van Dyck and Fack, Comput. Phys. Comm. 151 (2003) 353-368], where we described a heuristic algorithm based on these techniques. In this article we separate the heuristic from the algorithm and describe some new heuristic approaches which can be plugged into the generic algorithm. We show that these new heuristics lead to good results: in many cases we get a more efficient summation formula than with our previous approach, in particular for problems of higher order. In addition, the new features and the use of our program GYutsis, which implements these techniques, are described both for end users and application programmers. Program summary: Title of program: CycleCostAlgorithm, GYutsis. Catalogue number: ADSA. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSA. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Users may obtain the program also by downloading either the compressed tar file gyutsis.tgz (for Unix and Linux) or the zip file gyutsis.zip (for Windows) from our website (http://caagt.rug.ac.be/yutsis/). An applet version of the program is also available on our website and can be run in a web browser from the URL http://caagt.rug.ac.be/yutsis/GYutsisApplet.html. Licensing provisions: none. Computers for which the program is designed: any computer with Sun's Java Runtime Environment 1.4 or higher installed. Programming language used: Java 1.2 (Compiler: Sun's SDK 1.4.0). No. of lines in program: approximately 9400. No. of bytes in distributed program, including test data, etc.: 544 117. Distribution format: tar gzip file. Nature of
All-electron GW quasiparticle band structures of group 14 nitride compounds
NASA Astrophysics Data System (ADS)
Chu, Iek-Heng; Kozhenikov, Anton; Schulthess, Thomas; Cheng, Hai-Ping
2014-03-01
We have investigated the group 14 nitrides (M3N4) in both the spinel phase (with M = C, Si, Ge and Sn) and the beta phase (with M = Si, Ge and Sn) using density functional theory (DFT) with the local density approximation (LDA). The Kohn-Sham energies of these systems are first calculated within the framework of full-potential LAPW and then corrected using single-shot G0W0 calculations, which we have implemented in the Exciting-Plus code. Direct band gaps at the Γ point are found for all spinel-type nitrides. The calculated band gaps of Si3N4, Ge3N4 and Sn3N4 agree with experiment. We also find that for all systems studied, our GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core 3d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications. This work is supported by NSF/DMR-0804407 and DOE/BES-DE-FG02-02ER45995. Computations are performed using facilities at NERSC.
Bubin, Sergiy; Adamowicz, Ludwik
2014-01-14
Benchmark variational calculations are performed for the seven lowest 1s²2s np (¹P), n = 2-8, states of the beryllium atom. The calculations explicitly include the effect of the finite mass of the ⁹Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12 500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of the 1s²2s np (¹P) → 1s²2s² (¹S) transitions is about 12 cm⁻¹. The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm⁻¹.
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-01
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system. PMID:26588521
All-electron GW quasiparticle band structures of group 14 nitride compounds
NASA Astrophysics Data System (ADS)
Chu, Iek-Heng; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping
2014-07-01
We have investigated the group 14 nitrides (M3N4) in the spinel phase (γ-M3N4 with M = C, Si, Ge, and Sn) and β phase (β-M3N4 with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G0W0 calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M3N4 with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.
Operating distance calculation of ground-based and air-based infrared system based on Lowtran7
NASA Astrophysics Data System (ADS)
Ren, Kan; Tian, Jie; Gu, Guohua; Chen, Qian
2016-07-01
In this paper, an infrared-system operating-distance model for point targets based on contrast is used, starting from the target radiance and atmospheric transmission parameters in the operating-distance formula. The radiance of different point targets detected by ground-based and air-based detectors is analyzed, the spectral-division method is used to integrate target and background radiance, and databases of atmospheric spectral radiance and transmittance are established by calling Lowtran7. A new method for solving the operating-distance formula is proposed, and an operating-distance calculation system is established, which improves the efficiency and accuracy of the calculation. Databases of atmospheric spectral radiance and transmittance for five meteorological conditions are generated, and their variations with wavelength and range are given. By calculating the integration over wavelength, the atmospheric radiance over an infinite transmission range can be approximated by the atmospheric radiance at 100 km. Target and detector parameters are then set for simulation using the generated databases. The operating distance at each zenith angle is calculated, and the spatial distribution of operating distance is given for the mid-latitude-summer meteorological condition.
Fast calculation with point-based method to make CGHs of the polygon model
NASA Astrophysics Data System (ADS)
Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji
2014-02-01
Holography is a three-dimensional imaging technology: light waves from an object are recorded and later reconstructed with a hologram. Computer-generated holograms (CGHs), made by simulating light propagation on a computer, can represent virtual objects; however, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using a fast Fourier transform (FFT); the calculation of complex objects composed of multiple polygons requires correspondingly many FFTs, so the computation time becomes enormous. The point-based method, in contrast, expresses complex objects easily, but an enormous calculation time is still required. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and each CGH pixel can be calculated independently. However, expressing a planar object by the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm for expressing planar objects by the point-based method with a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from that of the adjacent point light source with a difference method. Under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions alone. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
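The per-pixel independence that makes the point-based method GPU-friendly is easy to see in a direct implementation: each hologram pixel sums the spherical waves from all object points. A minimal NumPy sketch follows; all parameters (wavelength, pixel pitch, object points) are illustrative choices, not values from the paper, and the difference-method acceleration is omitted for clarity.

```python
import numpy as np

# Illustrative parameters (not from the paper).
wavelength = 633e-9          # He-Ne red, metres
k = 2 * np.pi / wavelength   # wavenumber
pitch = 10e-6                # hologram pixel pitch, metres
N = 256                      # hologram is N x N pixels

# Hologram-plane coordinates (z = 0).
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A few object points (x, y, z, amplitude) behind the hologram plane.
points = [(0.0, 0.0, 0.05, 1.0),
          (200e-6, -100e-6, 0.06, 0.8)]

# Point-based method: superpose a spherical wave from every object point
# at every hologram pixel, then keep the real part as the fringe pattern.
field = np.zeros((N, N), dtype=complex)
for (px, py, pz, amp) in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp / r * np.exp(1j * k * r)

hologram = field.real  # bipolar-intensity amplitude CGH
```

Because each pixel of `field` depends only on its own coordinates, the double loop implicit in the array operations maps directly onto one GPU thread per pixel.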
Hybrid functionals within the all-electron FLAPW method: Implementation and applications of PBE0
NASA Astrophysics Data System (ADS)
Betzinger, Markus; Friedrich, Christoph; Blügel, Stefan
2010-05-01
We present an efficient implementation of the Perdew-Burke-Ernzerhof hybrid functional PBE0 within the full-potential linearized augmented-plane-wave (FLAPW) method. The Hartree-Fock exchange term, which is a central ingredient of hybrid functionals, gives rise to a computationally expensive nonlocal potential in the one-particle Schrödinger equation. The matrix elements of this exchange potential are calculated with the help of an auxiliary basis that is constructed from products of FLAPW basis functions. By representing the Coulomb interaction in this basis the nonlocal exchange term becomes a Brillouin-zone sum over vector-matrix-vector products. The Coulomb matrix is calculated only once at the beginning of a self-consistent-field cycle. We show that it can be made sparse by a suitable unitary transformation of the auxiliary basis, which accelerates the computation of the vector-matrix-vector products considerably. Additionally, we exploit spatial and time-reversal symmetry to identify the nonvanishing exchange matrix elements in advance and to restrict the k summations for the nonlocal potential to an irreducible set of k points. Favorable convergence of the self-consistent-field cycle is achieved by a nested density-only and density-matrix iteration scheme. We discuss the convergence with respect to the parameters of our numerical scheme and show results for a variety of semiconductors and insulators, including the oxides ZnO, EuO, Al2O3, and SrTiO3, where the PBE0 hybrid functional improves the band gaps and the description of localized states in comparison with the PBE functional. Furthermore, we find that in contrast to conventional local exchange-correlation functionals ferromagnetic EuO is correctly predicted to be a semiconductor.
19 CFR 351.405 - Calculation of normal value based on constructed value.
Code of Federal Regulations, 2011 CFR
2011-04-01
Calculation of normal value based on constructed value. Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value,...
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Keall, Paul
2000-12-01
The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
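The noise convergence error can be illustrated with a toy model (an assumption for illustration, not the paper's simulated-annealing IMRT setup): weights selected because they score best under a noisy objective generally look worse when re-evaluated with a noise-free calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose model (an assumption, not the paper's): dose = A @ w for
# 20 beamlet weights over 50 voxels, with a uniform prescribed dose.
A = rng.uniform(0.5, 1.5, size=(50, 20))   # voxel-by-beamlet dose matrix
target = np.ones(50)

def noisy_objective(w, sigma):
    """Sum-of-squares dose error; sigma mimics Monte Carlo noise."""
    noisy_dose = A @ w + rng.normal(0.0, sigma, size=50)
    return np.sum((noisy_dose - target) ** 2)

# Crude random-search "optimization" of the weights under noise.
candidates = rng.uniform(0.0, 0.2, size=(2000, 20))
noisy_scores = np.array([noisy_objective(w, sigma=0.03) for w in candidates])
best_noisy = candidates[np.argmin(noisy_scores)]

# Re-evaluate every candidate with a noise-free calculation: the noisy
# winner's true score is never better than the true optimum's, and is
# typically worse (the noise convergence error).
true_scores = np.array([np.sum((A @ w - target) ** 2) for w in candidates])
best_true = candidates[np.argmin(true_scores)]
```

The gap between the noisy winner's noise-free score and the true optimum's score plays the role of the discrepancy the paper reports when noisy-optimized plans are recalculated without noise.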
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain knowledge about quantitative relations that is necessary for deep analyses and for the production of meaningful reports, and they are largely independent of implementation and organization. However, no automated procedures exist to facilitate their exchange across organization and implementation boundaries: each organization currently has to map its own business calculations to analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies. The approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, time-domain inverse calculations are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. Derivations of the inverse calculation are established separately for the different flow directions in the tidal river. The results of this paper indicate that BOD values calculated with the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models. PMID:25026574
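As a much-simplified illustration of such an inverse calculation (a plain first-order decay model; the paper's model also includes tidal advection and dispersion, which are ignored here), a decay rate can be recovered from a noisy concentration series by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy first-order decay model (an assumption for illustration only):
# BOD(t) = BOD0 * exp(-K t).
def bod_model(t, bod0, k):
    return bod0 * np.exp(-k * t)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 21)                    # days
true_bod0, true_k = 8.0, 0.23                     # mg/L and 1/day (made up)
measured = bod_model(t, true_bod0, true_k) + rng.normal(0.0, 0.1, t.size)

# Inverse calculation: recover the model parameters from the noisy series.
(est_bod0, est_k), _ = curve_fit(bod_model, t, measured, p0=(5.0, 0.1))
```

In the paper's setting the fitted quantities are E(x) and K(x) in a transport equation rather than a single exponential rate, but the calibration-against-measurements principle is the same.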
NASA Astrophysics Data System (ADS)
Betzinger, Markus; Friedrich, Christoph; Görling, Andreas; Blügel, Stefan
2012-06-01
The optimized-effective-potential method is a special technique to construct local Kohn-Sham potentials from general orbital-dependent energy functionals. In a recent publication [M. Betzinger, C. Friedrich, S. Blügel, A. Görling, Phys. Rev. B 83, 045105 (2011)] we showed that uneconomically large basis sets were required to obtain a smooth local potential without spurious oscillations within the full-potential linearized augmented-plane-wave method. This could be attributed to the slow convergence behavior of the density response function. In this paper, we derive an incomplete-basis-set correction for the response, which consists of two terms: (1) a correction that is formally similar to the Pulay correction in atomic-force calculations and (2) a numerically more important basis response term originating from the potential dependence of the basis functions. The basis response term is constructed from the solutions of radial Sternheimer equations in the muffin-tin spheres. With these corrections the local potential converges at much smaller basis sets, at much fewer states, and its construction becomes numerically very stable. We analyze the improvements for rock-salt ScN and report results for BN, AlN, and GaN, as well as the perovskites CaTiO3, SrTiO3, and BaTiO3. The incomplete-basis-set correction can be applied to other electronic-structure methods with potential-dependent basis sets and opens the perspective to investigate a broad spectrum of problems in theoretical solid-state physics that involve response functions.
NASA Astrophysics Data System (ADS)
Levchenko, Sergey V.; Ren, Xinguo; Wieferink, Jürgen; Johanni, Rainer; Rinke, Patrick; Blum, Volker; Scheffler, Matthias
2015-07-01
We describe a framework to evaluate the Hartree-Fock exchange operator for periodic electronic-structure calculations based on general, localized atom-centered basis functions. The functionality is demonstrated by hybrid-functional calculations of properties for several semiconductors. In our implementation of the Fock operator, the Coulomb potential is treated either in reciprocal space or in real space, where the sparsity of the density matrix can be exploited for computational efficiency. Computational aspects, such as the rigorous avoidance of on-the-fly disk storage, and a load-balanced parallel implementation, are also discussed. We demonstrate linear scaling of our implementation with system size by calculating the electronic structure of a bulk semiconductor (GaAs) with up to 1,024 atoms per unit cell without compromising the accuracy.
All-electron topological insulator in InAs double wells
NASA Astrophysics Data System (ADS)
Erlingsson, Sigurdur I.; Egues, J. Carlos
2015-01-01
We show that electrons in ordinary III-V semiconductor double wells with an in-plane modulating periodic potential and interwell spin-orbit interaction are tunable topological insulators (TIs). Here the essential TI ingredients, namely, band inversion and the opening of an overall bulk gap in the spectrum arise, respectively, from (i) the combined effect of the double-well even-odd state splitting ΔSAS together with the superlattice potential and (ii) the interband Rashba spin-orbit coupling η. We corroborate our exact diagonalization results with an analytical nearly-free-electron description that allows us to derive an effective Bernevig-Hughes-Zhang model. Interestingly, the gate-tunable mass gap M drives a topological phase transition featuring a discontinuous Chern number at ΔSAS ≈ 5.4 meV. Finally, we explicitly verify the bulk-edge correspondence by considering a strip configuration and determining not only the bulk bands in the nontopological and topological phases but also the edge states and their Dirac-like spectrum in the topological phase. The edge electronic densities exhibit peculiar spatial oscillations as they decay away into the bulk. For concreteness, we present our results for InAs-based wells with realistic parameters.
Asynchronous electro-optic sampling of all-electronically generated ultrashort voltage pulses
NASA Astrophysics Data System (ADS)
Füser, Heiko; Bieler, Mark; Ahmed, Sajjad; Verbeyst, Frans
2015-02-01
We measure the output of an electrical pulse generator with a repetition rate of 76 MHz employing a laser-based asynchronous sampling technique with an effective sampling frequency of 250 GHz. A best estimate of the resulting 13 ns long waveform is obtained from multiple waveform measurements, which are taken without any trigger event and subsequently aligned in time. This asynchronous sampling scheme can even be adopted in situations where small phase drifts between the electrical pulse generator and the laser occur, making synchronized sampling very difficult. In addition to accurate measurements, the proposed asynchronous measurement scheme allows for the construction of covariance matrices with full rank since a large number of time traces is acquired. Such matrices might reveal correlations which do not appear in low-rank matrices. We believe that the asynchronous sampling technique advocated in this paper will prove to be a valuable characterization tool covering an ultra-broadband frequency range from below 100 MHz to above 100 GHz.
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
A comparison of Monte Carlo and model-based dose calculations in radiotherapy using MCNPTV
NASA Astrophysics Data System (ADS)
Wyatt, Mark S.; Miller, Laurence F.
2006-06-01
Monte Carlo calculations for megavoltage radiotherapy beams represent the next generation of dose calculation in the clinical environment. In this paper, calculations obtained by the MCNP code based on CT data from a human pelvis are compared against those obtained by a commercial radiotherapy treatment system (CMS XiO). The MCNP calculations are automated by the use of MCNPTV (MCNP Treatment Verification), an integrated application developed in Visual Basic that runs on a Windows-based PC. The linear accelerator beam is modeled as a finite point source, and validated by comparing depth dose curves and lateral profiles in a water phantom to measured data. Calculated water phantom PDDs are within 1% of measured data, but the lateral profiles exhibit differences of 2.4, 5.5, and 5.7 mm at the 60%, 40%, and 20% isodose lines, respectively. A MCNP calculation is performed using the CT data and 15 points are selected for comparison with XiO. Results are generally within the uncertainty of the MCNP calculation, although differences up to 13.2% are seen in the presence of large heterogeneities.
Simple atmospheric transmittance calculation based on a Fourier-transformed Voigt profile.
Kobayashi, Hirokazu
2002-11-20
A method of line-by-line transmission calculation for a homogeneous atmospheric layer that uses the Fourier-transformed Voigt profile is presented. The method is based on a pure Voigt function with no approximation and an interference term that takes into account the line-mixing effect. One can use the method to calculate transmittance, considering each line shape as it is affected by temperature and pressure, with a line database with an arbitrary wave-number range and resolution. To show that the method is feasible for practical model development, we compared the calculated transmittance with that obtained with a conventional model, and good consistency was observed. PMID:12463237
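The core idea, that the Voigt profile is the back-transform of the product of the Gaussian and Lorentzian characteristic functions, can be sketched numerically. The parameter values below are arbitrary, and the interference term for line mixing described in the abstract is omitted.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import voigt_profile

# Illustrative line parameters: Gaussian standard deviation and
# Lorentzian half-width at half-maximum (arbitrary values).
sigma, gamma = 0.8, 0.5

# In the Fourier domain the Voigt profile is the product of the
# Gaussian and Lorentzian characteristic functions.
t = np.linspace(0.0, 40.0, 20001)
phi = np.exp(-0.5 * (sigma * t) ** 2 - gamma * t)

# Back-transform (cosine transform) on a grid of wavenumber offsets.
x = np.linspace(-5.0, 5.0, 101)
V = np.array([trapezoid(phi * np.cos(xi * t), t) for xi in x]) / np.pi

# Cross-check against SciPy's closed-form Voigt implementation.
err = np.max(np.abs(V - voigt_profile(x, sigma, gamma)))
```

A line-mixing interference term would enter as an extra factor multiplying `phi` before the back-transform, which is what makes the Fourier-domain formulation convenient.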
A transport based one-dimensional perturbation code for reactivity calculations in metal systems
Wenz, T.R.
1995-02-01
A one-dimensional reactivity calculation code is developed using first order perturbation theory. The reactivity equation is based on the multi-group transport equation using the discrete ordinates method for angular dependence. In addition to the first order perturbation approximations, the reactivity code uses only the isotropic scattering data, but cross section libraries with higher order scattering data can still be used with this code. The reactivity code obtains all the flux, cross section, and geometry data from the standard interface files created by ONEDANT, a discrete ordinates transport code. Comparisons between calculated and experimental reactivities were done with the central reactivity worth data for Lady Godiva, a bare uranium metal assembly. Good agreement is found for isotopes that do not violate the assumptions in the first order approximation. In general for cases where there are large discrepancies, the discretized cross section data is not accurately representing certain resonance regions that coincide with dominant flux groups in the Godiva assembly. Comparing reactivities calculated with first order perturbation theory and a straight Δk/k calculation shows agreement within 10%, indicating the perturbation of the calculated fluxes is small enough for first order perturbation theory to be applicable in the modeled system. Computation time comparisons between reactivities calculated with first order perturbation theory and straight Δk/k calculations indicate considerable time can be saved by performing a calculation with a perturbation code, particularly as the complexity of the modeled problems increases.
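The comparison between a first-order perturbation estimate and a straight Δk/k recalculation can be sketched on a toy one-group, one-dimensional diffusion model (illustrative cross sections, not Lady Godiva data, and diffusion rather than the paper's discrete-ordinates transport):

```python
import numpy as np

# Toy one-group 1D slab: A phi = (1/k) F phi, with
# A = -D d2/dx2 + Sigma_a (leakage + absorption) and F = nu*Sigma_f.
n, dx = 100, 0.2                       # mesh (illustrative)
D, sig_a, nu_sig_f = 1.0, 0.12, 0.15   # made-up one-group constants

lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx ** 2
A = -D * lap + sig_a * np.eye(n)
F = nu_sig_f * np.eye(n)

def keff(op):
    """Fundamental mode of the generalized eigenproblem op phi = (1/k) F phi."""
    vals, vecs = np.linalg.eig(np.linalg.solve(op, F))
    i = np.argmax(vals.real)
    return vals.real[i], vecs[:, i].real

k0, phi = keff(A)

# Perturbation: slightly increase absorption in the central ten nodes.
dA = np.zeros((n, n))
idx = np.arange(45, 55)
dA[idx, idx] = 1.0e-3

# First-order estimate (one-group problem is self-adjoint, so the
# adjoint flux equals the forward flux) vs. straight dk/k recalculation.
drho_pert = -(phi @ dA @ phi) / (phi @ F @ phi)
k1, _ = keff(A + dA)
drho_exact = 1.0 / k0 - 1.0 / k1
```

For a perturbation this small the two reactivity changes agree to well within the 10% band quoted in the abstract; the first-order estimate needs only the unperturbed flux, which is where the computation-time saving comes from.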
NASA Astrophysics Data System (ADS)
Kucuk, Fuat; Goto, Hiroki; Guo, Hai-Jiao; Ichinokura, Osamu
2009-04-01
Feedback of motor torque is required in most switched reluctance (SR) motor applications in order to control torque and its ripple. An SR motor is highly nonlinear, which does not allow torque to be calculated analytically. Torque can be measured directly by a torque sensor, but this inevitably increases the cost, and the sensor has to be properly mounted on the motor shaft. Instead of a torque sensor, finite element analysis (FEA) may be employed for torque calculation; however, motor modeling and calculation take a relatively long time, and FEA results may also differ from actual results. The most convenient way seems to be calculating torque from measured values of rotor position, current, and flux linkage while locking the rotor at definite positions, but this method needs an extra assembly to lock the rotor. In this study, a novel torque calculation based on artificial neural networks (ANNs) is presented. Magnetizing data are collected while a 6/4 SR motor is running; they need to be interpolated for torque calculation, and an ANN is a very strong tool for data interpolation. The ANN-based torque estimation is verified on the 6/4 SR motor and compared with FEA-based torque estimation to show its validity.
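ANN-based interpolation of magnetizing data can be sketched with a small from-scratch network. The torque surface below is synthetic (a smooth made-up function of rotor angle and phase current), not measured 6/4 SR motor data, and the network layout is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "magnetizing data": torque as a smooth function of rotor
# angle and phase current (an assumption standing in for measurements).
theta = rng.uniform(0.0, np.pi / 2, 400)
current = rng.uniform(0.0, 1.0, 400)
torque = np.sin(2 * theta) * current ** 2

X = np.column_stack([theta, current])
y = torque.reshape(-1, 1)

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    pred = H @ W2 + b2                    # torque estimate
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error gradient.
    g_pred = 2.0 * err / len(X)
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    gH = (g_pred @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, evaluating the network at any (angle, current) pair gives an interpolated torque without locking the rotor or rerunning an FEA model.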
A correction-based dose calculation algorithm for kilovoltage x rays
Ding, George X.; Pawlowski, Jason M.; Coffey, Charles W.
2008-12-15
Frequent and repeated imaging procedures such as those performed in image-guided radiotherapy (IGRT) programs may add significant dose to radiosensitive organs of radiotherapy patients. It has been shown that kV-CBCT results in doses to bone that are up to a factor of 3-4 higher than those in surrounding soft tissue. Imaging guidance procedures are necessary due to their potential benefits, but the additional incremental dose per treatment fraction may exceed an individual organ tolerance. Hence it is important to manage and account for this additional dose from imaging for radiotherapy patients. Currently available model-based dose calculation methods in radiation treatment planning (RTP) systems are not suitable for low-energy x rays, and new and fast calculation algorithms are needed in an RTP system for kilovoltage dose computations. This study presents a new dose calculation algorithm, referred to as the medium-dependent-correction (MDC) algorithm, for accurate patient dose calculation resulting from kilovoltage x rays. The accuracy of the new algorithm is validated against Monte Carlo calculations. The new algorithm overcomes the deficiency of existing density-correction-based algorithms in dose calculations for inhomogeneous media, especially for CT-based human volumetric images used in radiotherapy treatment planning.
Huang, Yuanshen; Li, Ting; Xu, Banglian; Hong, Ruijin; Tao, Chunxian; Ling, Jinzhong; Li, Baicheng; Zhang, Dawei; Ni, Zhengji; Zhuang, Songlin
2013-02-10
The Fraunhofer diffraction formula cannot be applied to calculate the diffraction wave energy distribution of concave gratings as it can for plane gratings, because their grooves are distributed on a concave spherical surface. In this paper, a method based on the Kirchhoff diffraction theory is proposed to calculate the diffraction efficiency of concave gratings by considering the curvature of the whole concave spherical surface. In this approach, each groove surface is divided into a number of small planes, on which the Kirchhoff diffraction field distribution is calculated, and the diffraction field of the whole concave grating is then obtained by superposition. Formulas to calculate the diffraction efficiency of Rowland-type and flat-field concave gratings are deduced for practical applications. Experimental results showed strong agreement with theoretical computations. With the proposed method, light energy can be optimized to the expected diffraction wave range while implementing aberration-corrected design of concave gratings, particularly for concave blazed gratings. PMID:23400074
The effects of calculator-based laboratories on standardized test scores
NASA Astrophysics Data System (ADS)
Stevens, Charlotte Bethany Rains
Nationwide, the goal of providing a productive science and math education to our youth is increasingly centered on the technology utilized in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBLs) have become significant devices in the teaching of science and math in many states across the United States. The Texas Instruments graphing calculator and the Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across the country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing-calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 total tenth- and eleventh-grade physical science students, 101 of whom belonged to a control group and 87 of whom belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in middle Tennessee
40 CFR 1066.605 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... media buoyancy as described in 40 CFR 1065.690. (d) Calculate the emission mass of each gaseous... specified in paragraph (c) of this section or in 40 CFR part 1065, subpart G, as applicable. (b) See the... contamination as described in 40 CFR 1065.660(a), including continuous readings, sample bag readings,...
Monte Carlo-based dose calculation engine for minibeam radiation therapy.
Martínez-Rovira, I; Sempau, J; Prezado, Y
2014-02-01
Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue-sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked with experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods allowed acceptable computation times for clinical settings (approximately 2 h) to be achieved. The calculation engine can easily be adapted with little or no programming effort to other synchrotron sources or to dose calculations in the presence of contrast agents. PMID:23597423
Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene
NASA Astrophysics Data System (ADS)
Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.
2012-02-01
We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.
The Effect of Calculator-Based Ranger Activities on Students' Graphing Ability.
ERIC Educational Resources Information Center
Kwon, Oh Nam
2002-01-01
Addresses three issues of Calculator-based Ranger (CBR) activities on graphing abilities: (a) the effect of CBR activities on graphing abilities; (b) the extent to which prior knowledge about graphing skills affects graphing ability; and (c) the influence of instructional styles on students' graphing abilities. Indicates that CBR activities are…
Preliminary result of transport properties calculation molten Ag-based superionics
NASA Astrophysics Data System (ADS)
Oztek, H. O.; Yılmaz, M.; Kavanoz, H. B.
2016-03-01
We studied molten Ag-based superionic conductors (AgI, Ag2S and Ag3SI), which are well described by the Vashishta-Rahman potential. Molecular dynamics simulations were performed with the Moldy code in the isothermal-isobaric (NPT) ensemble. Thermal properties are obtained from the Green-Kubo formalism with equilibrium molecular dynamics (EMD) simulations. The calculated results are compared with experimental results.
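The Green-Kubo route mentioned above relates a transport coefficient to the time integral of an equilibrium flux autocorrelation function. The following is a minimal sketch, assuming a single Cartesian heat-flux component sampled at fixed timestep from an EMD run; the function names and toy inputs are illustrative, not taken from the abstract.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Time-origin-averaged <J(0)J(t)> for lags 0..max_lag-1."""
    x = np.asarray(x, float)
    n = len(x)
    return np.array([np.mean(x[:n - lag] * x[lag:]) for lag in range(max_lag)])

def green_kubo_conductivity(flux, dt, volume, temperature, max_lag,
                            k_B=1.380649e-23):
    """kappa = V / (k_B T^2) * integral of <J(0)J(t)> dt for one flux
    component, with the integral done by the trapezoidal rule and
    truncated at max_lag steps."""
    acf = autocorrelation(flux, max_lag)
    integral = dt * (0.5 * acf[0] + acf[1:-1].sum() + 0.5 * acf[-1])
    return volume / (k_B * temperature ** 2) * integral
```

In practice the truncation lag is chosen where the running integral plateaus; here it is simply an argument.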
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that nurses conduct daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution for the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve students' attitudes toward mathematics. The aim of this article is to discuss common challenges of medication calculation skills in nurse education and to highlight the potential role of digital game-based learning in this area. PMID:24107685
CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals.
Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav
2016-02-14
A comparative study of the lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions calculated using the coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values on lattice energies are used to estimate sublimation enthalpies which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol(-1) on absolute scale) with unbiased distribution of positive to negative deviations. As pair interaction energies present a dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable methods enabling the application of fragment-based methods for larger systems. PMID:26874495
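The additive fragment-based scheme described above builds the lattice energy from individual n-body interaction energies; its dominant two-body term can be sketched as a cutoff-limited sum of dimer energies. In the sketch below a toy Lennard-Jones function stands in for an actual CCSD(T)/CBS dimer calculation; the names and parameters are illustrative only.

```python
import itertools
import math

def pair_energy(r, eps=1.0, sigma=1.0):
    """Toy Lennard-Jones dimer energy, a stand-in for a CCSD(T)/CBS value."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lattice_energy_two_body(positions, cutoff):
    """Two-body term of the additive many-body expansion: the sum of dimer
    interaction energies over all unordered pairs within a cutoff."""
    total = 0.0
    for pi, pj in itertools.combinations(positions, 2):
        r = math.dist(pi, pj)
        if r <= cutoff:
            total += pair_energy(r)
    return total
```

The three- and four-body terms mentioned in the abstract would be added analogously, as sums over trimers and tetramers of molecules.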
A NASTRAN DMAP procedure for calculation of base excitation modal participation factors
NASA Technical Reports Server (NTRS)
Case, W. R.
1983-01-01
This paper presents a technique for calculating the modal participation factors for base excitation problems using a DMAP alter to the NASTRAN real eigenvalue analysis Rigid Format. The DMAP program automates the generation of the seismic mass to add to the degrees of freedom representing the shaker input directions and calculates the modal participation factors. These are shown in the paper to be a good measure of the maximum acceleration expected at any point on the structure when the subsequent frequency response analysis is run.
NASA Astrophysics Data System (ADS)
He, Yuping
2015-03-01
We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models in which all input parameters are derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that, by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values between 1 and 2, which could possibly be increased further by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.
NASA Astrophysics Data System (ADS)
Hasan, Z.; Qiu, Z.; Johnson, Jackie; Homerick, Uwe
2009-02-01
The potential of three erbium-based solid hosts for laser cooling has been investigated. Absorption and emission spectra have been studied for the low-lying IR transitions of erbium that are relevant to recent reports of cooling using the 4I15/2-4I9/2 and 4I15/2-4I13/2 transitions. Experimental studies have been performed for erbium in three hosts: ZBLAN glass and KPb2Cl5 and Cs2NaYCl6 crystals. In order to estimate the cooling efficiencies, theoretical calculations have been performed for the cubic elpasolite (Cs2NaYCl6) crystal. These calculations also provide first-principles insight into the cooling efficiency for non-cubic and glassy hosts, where such calculations are not possible.
NASA Astrophysics Data System (ADS)
Li, Yang; Lian, Fang; Chen, Ning; Hao, Zhen-jia; Chou, Kuo-chih
2015-05-01
A first-principles method is applied to comparatively study the stability of lithium metal oxides with layered or spinel structures to predict the most energetically favorable structure for different compositions. The binding and reaction energies of the real or virtual layered LiMO2 and spinel LiM2O4 (M = Sc-Cu, Y-Ag, Mg-Sr, and Al-In) are calculated. The effect of element M on the structural stability, especially in the case of multiple-cation compounds, is discussed herein. The calculation results indicate that the phase stability depends on both the binding and reaction energies. The oxidation state of element M also plays a role in determining the dominant structure, i.e., layered or spinel phase. Moreover, calculation-based theoretical predictions of the phase stability of the doped materials agree with the previously reported experimental data.
Efficient algorithms for semiclassical instanton calculations based on discretized path integrals
Kawatsu, Tsutomu E-mail: smiura@mail.kanazawa-u.ac.jp; Miura, Shinichi E-mail: smiura@mail.kanazawa-u.ac.jp
2014-07-14
The path integral instanton method is a promising way to calculate the tunneling splitting of energies for degenerate two-state systems. In order to calculate the tunneling splitting, we need to take the zero-temperature limit, or the limit of infinite imaginary time duration. In the method developed by Richardson and Althorpe [J. Chem. Phys. 134, 054109 (2011)], the limit is simply replaced by a sufficiently long imaginary time. In the present study, we have developed a new formula for the tunneling splitting based on discretized path integrals that takes the limit analytically. We have applied the new formula to model systems and found that this approach can significantly reduce the computational cost and improve the numerical accuracy. We then combined the method with electronic structure calculations to obtain the accurate interatomic potential on the fly. We present an application of our ab initio instanton method to the ammonia umbrella flip motion.
GPU-based acceleration of free energy calculations in solid state physics
NASA Astrophysics Data System (ADS)
Januszewski, Michał; Ptok, Andrzej; Crivelli, Dawid; Gardas, Bartłomiej
2015-07-01
Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter (OP). Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19× speedup compared to the CPU (119× compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite-size effects.
NASA Astrophysics Data System (ADS)
Hafiz, Hasnain; Barbiellini, B.; Jia, Q.; Tylus, U.; Strickland, K.; Bansil, A.; Mukerjee, S.
2015-03-01
Catalysts based on Fe/N/C clusters can support the oxygen-reduction reaction (ORR) without the use of expensive metals such as platinum. These systems can also prevent some poisonous species from blocking the active sites from the reactant. We have performed spin-polarized calculations on various Fe/N/C fragments using the Vienna Ab initio Simulation Package (VASP) code. Some results are compared to similar calculations obtained with the Gaussian code. We investigate the partial density of states (PDOS) of the 3d orbitals near the Fermi level and calculate the binding energies of several ligands. Correlations of the binding energies with the 3d electronic PDOS are used to propose electronic descriptors of the ORR associated with the 3d states of Fe. We also suggest a structural model for the most active site, with a ferrous ion (Fe2+) in the high-spin state, the so-called Doublet 3 (D3).
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One of the methods for calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set the calculation parameters needed to obtain loci with satisfying completeness and fineness. In this study, we improve the ray-tracing-based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated in turn with the minimum-traveltime tree ray-tracing algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
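The minimum-traveltime tree at the core of the method above is, in essence, a shortest-path search over model nodes with traveltime as the path cost. A minimal sketch, assuming a generic node graph with per-node slowness (reciprocal velocity); the segment-traveltime rule and all names are illustrative, not the paper's exact scheme.

```python
import heapq

def min_traveltime_tree(slowness, source, neighbors):
    """Minimum-traveltime tree (Dijkstra's algorithm) over model nodes.
    Returns first-arrival traveltimes from `source` and predecessor links,
    so a ray path to any node can be traced back to the source.
    `neighbors(u)` yields (node, distance) pairs; the traveltime of a
    segment is approximated as distance times the mean endpoint slowness."""
    times = {source: 0.0}
    prev = {source: None}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue  # stale heap entry
        for v, dist in neighbors(u):
            tv = t + dist * 0.5 * (slowness[u] + slowness[v])
            if tv < times.get(v, float("inf")):
                times[v], prev[v] = tv, u
                heapq.heappush(heap, (tv, v))
    return times, prev
```

Following the `prev` links from any reference point back to the initial point recovers the ray-path representation of the locus segment.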
Eged, Katalin; Kis, Zoltán; Voigt, Gabriele
2006-01-01
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) calculating the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) describing the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) combining all these elements in a relevant urban model to calculate the resulting doses for the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulation are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and the combination of relative surface contamination with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a brief intercomparison of model features. PMID:16095771
Qiu, Rui; Li, Junli; Zhang, Zhan; Wu, Zhen; Zeng, Zhi; Fan, Jiajin
2008-12-01
The Chinese mathematical phantom (CMP) is a stylized human body model developed based on the methods of the Oak Ridge National Laboratory (ORNL) mathematical phantom series (OMPS) and on data from the Reference Asian Man and the Chinese Reference Man. It is constructed for radiation dose estimation for Mongolians, whose anatomical parameters differ from those of Caucasians to some extent. Specific absorbed fractions (SAF) are useful quantities for the primary estimation of internal radiation dose. In this paper, a general Monte Carlo code, the Monte Carlo N-Particle code (MCNP), is used to transport particles and calculate SAF. A new variance reduction technique, called the "pointing probability with forced collision" method, is implemented in MCNP to reduce the calculation uncertainty, especially for a small-volume target organ. Finally, SAF data for all 31 organs of both sexes of the CMP are calculated. A comparison between SAF based on the male phantoms of CMP and OMPS demonstrates that clear differences exist, and more than 80% of the SAF data based on CMP are larger than those of OMPS. However, the differences are acceptable (they exceed one order of magnitude in less than 3% of situations) considering the differences in physique. Furthermore, trends in the SAF with increasing photon energy based on the two phantoms agree well. This model complements existing phantoms of different age, sex and ethnicity. PMID:19001898
GPU-based ultra-fast dose calculation using a finite size pencil beam model
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.
2009-10-01
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
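The dose deposition coefficients discussed above form a voxel-by-beamlet matrix, and the dose is its product with the beamlet weight vector; each voxel row is independent, which is what makes the problem data-parallel and GPU-friendly. A minimal NumPy sketch, with a toy exponential kernel standing in for a real finite-size pencil beam kernel; all names are illustrative.

```python
import numpy as np

def fspb_dose(voxels, beamlets, weights, mu=0.5):
    """Dose at each voxel as a weighted superposition of per-beamlet
    kernel values, D = K @ w, where K holds the dose deposition
    coefficients. A toy exponential kernel of 3D distance stands in
    for the finite-size pencil beam model."""
    voxels = np.asarray(voxels, float)       # shape (V, 3)
    beamlets = np.asarray(beamlets, float)   # shape (B, 3)
    dist = np.linalg.norm(voxels[:, None, :] - beamlets[None, :, :], axis=2)
    K = np.exp(-mu * dist)                   # (V, B) deposition coefficients
    return K @ np.asarray(weights, float)
```

On a GPU the same structure maps naturally to one thread (or thread block) per voxel row, which is the parallelism the abstract exploits.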
Modelling lateral beam quality variations in pencil kernel based photon dose calculations
NASA Astrophysics Data System (ADS)
Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.
2006-08-01
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
Dose calculation from a D-D-reaction-based BSA for boron neutron capture synovectomy.
Abdalla, Khalid; Naqvi, A A; Maalej, N; Elshahat, B
2010-01-01
Monte Carlo simulations were carried out to calculate the dose in a knee phantom from a D-D-reaction-based beam shaping assembly (BSA) for boron neutron capture synovectomy (BNCS). The BSA consists of a D(d,n)-reaction-based neutron source enclosed inside a polyethylene moderator and a graphite reflector. The polyethylene moderator and graphite reflector sizes were optimized to deliver the highest ratio of thermal to fast neutron yield at the knee phantom. The neutron dose was then calculated at various depths in a knee phantom loaded with boron, and the therapeutic ratios of synovium dose/skin dose and synovium dose/bone dose were determined. Normalized to the same boron loading in the synovium, the therapeutic ratios obtained in the present study are 12-30 times higher than the published values. PMID:19828325
Effect of composition on antiphase boundary energy in Ni3Al based alloys: Ab initio calculations
NASA Astrophysics Data System (ADS)
Gorbatov, O. I.; Lomaev, I. L.; Gornostyrev, Yu. N.; Ruban, A. V.; Furrer, D.; Venkatesh, V.; Novikov, D. L.; Burlatsky, S. F.
2016-06-01
The effect of composition on the antiphase boundary (APB) energy of Ni-based L12-ordered alloys is investigated by ab initio calculations employing the coherent potential approximation. The calculated APB energies for the {111} and {001} planes reproduce experimental values of the APB energy. The APB energies for the nonstoichiometric γ' phase increase with Al concentration, in line with experiment. The magnitude of the alloying effect on the APB energy correlates with the variation of the ordering energy of the alloy according to the alloying element's position in the 3d row. Elements from the left side of the 3d row increase the APB energy of the Ni-based L12-ordered alloys, while those from the right side, except Ni, affect it only slightly. An approach to predicting the effect of an addition on the {111} APB energy in a multicomponent alloy is discussed.
A note on geometric method-based procedures to calculate the Hurst exponent
NASA Astrophysics Data System (ADS)
Trinidad Segovia, J. E.; Fernández-Martínez, M.; Sánchez-Granero, M. A.
2012-03-01
Geometric method-based procedures, which we will call GM algorithms hereafter, were introduced in M.A. Sánchez-Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551, to calculate the Hurst exponent of a time series. The authors proved that GM algorithms, based on a geometrical approach, are more accurate than classical algorithms, especially with short length time series. The main contribution of this paper is to provide a mathematical background for the validity of these two algorithms to calculate the Hurst exponent H of random processes with stationary and self-affine increments. In particular, we show that these procedures are valid not only for exploring long memory in classical processes such as (fractional) Brownian motions, but also for estimating the Hurst exponent of (fractional) Lévy stable motions.
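For context on what such an estimator computes, the sketch below shows a classical scaling estimator of H, not the GM algorithms themselves (which rely on the geometric approach of the cited paper): for a self-affine process, the standard deviation of k-lag increments scales as k^H, so H is a log-log slope.

```python
import numpy as np

def hurst_from_scaling(series, lags=range(2, 20)):
    """Estimate H from the self-affine scaling std(X_{t+k} - X_t) ~ k^H:
    the slope of log(std of k-lag increments) versus log(k)."""
    x = np.asarray(series, float)
    tau = [np.std(x[k:] - x[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(np.asarray(list(lags), float)), np.log(tau), 1)
    return slope
```

For an ordinary Brownian motion this should return a value near 0.5; the point of the paper is that GM-style estimators behave better than such classical ones on short series.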
Stiffness of Diphenylalanine-Based Molecular Solids from First Principles Calculations
NASA Astrophysics Data System (ADS)
Azuri, Ido; Hod, Oded; Gazit, Ehud; Kronik, Leeor
2013-03-01
Diphenylalanine-based peptide nanotubes were found to be unexpectedly stiff, with a Young's modulus of 19 GPa. Here, we calculate the Young's modulus from first principles, using density functional theory with dispersion corrections. This allows us to show that at least half of the stiffness of the material comes from dispersive interactions and to identify the nature of the interactions that contribute most to the stiffness. This presents a general strategy for the analysis of bioinspired functional materials.
An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet
NASA Astrophysics Data System (ADS)
Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon
2015-08-01
The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
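The activity-based methodology above amounts to, per AIS leg: fuel = installed power × engine load × duration × specific fuel consumption, with load taken from a propeller-law speed cube (and a fixed elevated load while towing gear). A minimal sketch; the load-while-towing value, SFOC, and carbon factor below are hypothetical placeholders, not the paper's calibrated numbers.

```python
def engine_load(speed_knots, design_speed_knots, towing=False):
    """Propeller-law load estimate: load ~ (v / v_design)^3, capped at 1.
    A fixed elevated load is assumed while towing gear (hypothetical value)."""
    if towing:
        return 0.75
    return min((speed_knots / design_speed_knots) ** 3, 1.0)

def leg_emissions_kg(power_kw, speed_knots, design_speed_knots, hours,
                     sfoc_g_per_kwh=200.0, co2_per_fuel=3.206, towing=False):
    """Fuel burn and CO2 for one AIS leg: P * load * t * SFOC, then a
    carbon factor of ~3.206 kg CO2 per kg of marine diesel."""
    load = engine_load(speed_knots, design_speed_knots, towing)
    fuel_kg = power_kw * load * hours * sfoc_g_per_kwh / 1000.0
    return fuel_kg, fuel_kg * co2_per_fuel
```

Summing such legs over a year, with positions taken from the AIS messages, yields the temporally and spatially resolved inventory the abstract describes.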
Iterative diagonalization in augmented plane wave based methods in electronic structure calculations
Blaha, P.; Laskowski, R.; Schwarz, K.
2010-01-20
Due to the increased computer power and advanced algorithms, quantum mechanical calculations based on density functional theory are more and more widely used to solve real materials science problems. In this context, large nonlinear generalized eigenvalue problems must be solved repeatedly to calculate the electronic ground state of a solid or molecule. Due to the nonlinear nature of this problem, an iterative solution of the eigenvalue problem can be more efficient, provided it does not disturb the convergence of the self-consistent-field problem. The blocked Davidson method is one of the widely used and efficient schemes for this purpose, but its performance depends critically on the preconditioning, i.e. the procedure for improving the search space towards an accurate solution. For more diagonally dominant problems, which appear typically in plane wave based pseudopotential calculations, the inverse of the diagonal of (H - ES) is used. However, for the more efficient 'augmented plane wave + local orbitals' basis set this preconditioning is not sufficient due to large off-diagonal terms caused by the local orbitals. We propose a new preconditioner based on the inverse of (H - λS) and demonstrate its efficiency for real applications using both a sequential and a parallel implementation of this algorithm in our WIEN2k code.
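The two preconditioners contrasted above can be sketched in one Davidson correction step: the residual of the current eigenpair is scaled either by the diagonal of (H - ES), or by a full solve with a slightly shifted (H - λS) so the matrix stays nonsingular. This is a minimal dense-matrix sketch under those assumptions, not the WIEN2k implementation; the shift and clamping values are illustrative.

```python
import numpy as np

def davidson_correction(H, S, eigval, eigvec, diag_precond=True):
    """One Davidson expansion vector for the generalized problem H x = E S x.
    Residual r = (H - eigval*S) @ eigvec is preconditioned either by the
    diagonal of (H - eigval*S) (the plane-wave-style choice) or by a full,
    slightly shifted (H - mu*S) solve. eigvec is assumed normalized."""
    r = (H - eigval * S) @ eigvec
    if diag_precond:
        d = np.diag(H) - eigval * np.diag(S)
        d = np.where(np.abs(d) < 1e-8, 1e-8, d)  # guard against zeros
        t = r / d
    else:
        mu = eigval + 1e-3  # small shift keeps (H - mu*S) nonsingular
        t = np.linalg.solve(H - mu * S, r)
    t = t - (eigvec @ t) * eigvec  # keep the new direction orthogonal
    return t / np.linalg.norm(t)
```

In a real APW+lo code the "full" variant is of course not a dense solve; the point is only which operator is (approximately) inverted.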
NASA Astrophysics Data System (ADS)
Amadon, B.; Lechermann, F.; Georges, A.; Jollet, F.; Wehling, T. O.; Lichtenstein, A. I.
2008-05-01
The description of realistic strongly correlated systems has recently advanced through the combination of density functional theory in the local density approximation (LDA) and dynamical mean field theory (DMFT). This LDA+DMFT method is able to treat both strongly correlated insulators and metals. Several interfaces between LDA and DMFT have been used, such as (Nth-order) linear muffin-tin orbitals or maximally localized Wannier functions. Such schemes are, however, either complex in use, or additional simplifications are often performed (e.g., the atomic sphere approximation). We present an alternative implementation of LDA+DMFT which keeps the precision of the Wannier implementation but is lighter. It relies on the projection of localized orbitals onto a restricted set of Kohn-Sham states to define the correlated subspace. The method is implemented within the projector augmented wave and the mixed-basis pseudopotential frameworks. This opens the way to electronic structure calculations within LDA+DMFT for more complex structures with the precision of an all-electron method. We present an application to two correlated systems, namely SrVO3 and β-NiS (a charge-transfer material), including ligand states in the basis set. The results are compared to calculations done with maximally localized Wannier functions, and the physical features appearing in the orbitally resolved spectral functions are discussed.
Pipek, János; Nagy, Szilvia
2013-03-01
The wave function of a many electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective algorithm for predicting the next-resolution-level wavelet coefficients, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows the unnecessary wavelets to be sorted out with great reliability. PMID:23115109
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with 3-dimensional (3D) deformable image registration (DIR), and the registration was fine-tuned on a slice-by-slice basis using 2D DIR. Based on the intensities of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using the reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On the simulated brain and head-and-neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU, respectively, after correction. Correspondingly, absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
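The core of the correction, learning an MRI-intensity-to-HU mapping on an artifact-free slice and applying it to the corrupted one, can be sketched as follows. This is only an illustration with a binned-median lookup; the paper's "comprehensive analysis" of the paired intensities is more elaborate, and the function and variable names are ours.

```python
import numpy as np

def correct_artifact_slice(mri_bad, mri_ref, ct_ref, n_bins=64):
    """Predict HU for a corrupted CT slice from its MRI, using the MRI-HU
    pairing observed on a nearby artifact-free slice (mri_ref, ct_ref)."""
    edges = np.linspace(mri_ref.min(), mri_ref.max(), n_bins + 1)
    ref_bin = np.clip(np.digitize(mri_ref, edges) - 1, 0, n_bins - 1)
    # lookup table: median HU observed for each MRI-intensity bin
    lut = np.array([np.median(ct_ref[ref_bin == b]) if np.any(ref_bin == b) else 0.0
                    for b in range(n_bins)])
    bad_bin = np.clip(np.digitize(mri_bad, edges) - 1, 0, n_bins - 1)
    return lut[bad_bin]
```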
A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.
Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y
2016-01-01
This study introduces the design of a DICOM-RT-based tool box to facilitate 4D dose calculation based on deformable voxel-dose registration. The computational structure and calculation algorithm of the tool box are discussed explicitly. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of the tool box for clinical application was verified retrospectively with nine clinical cases. The logistics and robustness of the tool box were tested with 27 applications, all of which completed successfully with no computational errors. The accumulated dose coverage was assessed as a function of the planning CT taken at end-inhale, end-exhale, and mean tumor position. The results indicated that the majority of cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. Results comparable to the literature imply that the tool box is reliable for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatment of moving targets. PMID:27074476
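The 4D-dose step (functions c and e above) amounts to pulling each phase dose back onto reference voxels through the DIR maps and weighting by the time spent in each phase. A minimal sketch, assuming nearest-voxel index maps from the registration step (names are ours):

```python
import numpy as np

def accumulate_4d_dose(phase_doses, voxel_maps, weights):
    """Accumulate per-phase doses onto reference voxels.

    phase_doses : list of 1D dose arrays, one per breathing phase
    voxel_maps  : list of index arrays; voxel_maps[p][i] is the voxel in
                  phase p that reference voxel i maps to (from the DIR step)
    weights     : fraction of the breathing cycle spent in each phase
    """
    total = np.zeros_like(phase_doses[0], dtype=float)
    for dose, vmap, w in zip(phase_doses, voxel_maps, weights):
        total += w * dose[vmap]      # pull phase dose back to reference voxels
    return total
```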
NASA Astrophysics Data System (ADS)
Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal management technology Team
A facile property-calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are promising systems for using heat energy efficiently because they can generate cooling energy from relatively low-temperature heat. Their properties are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer and adsorption/desorption rates. In our model, the dependence of adsorption chiller properties on heat source temperatures is represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of the heat source temperatures; these represent the differences between equilibrium cycles and real cycles that stem from kinetic adsorption processes. We found that the present approximated equilibrium model could calculate the properties of adsorption chillers (driving energy, cooling energy, COP, etc.) under various driving conditions quickly and accurately, within an average error of 6% compared to experimental data.
NASA Astrophysics Data System (ADS)
Pennec, Fabienne; Alzina, Arnaud; Tessier-Doyen, Nicolas; Naitali, Benoit; Smith, David S.
2012-11-01
This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes, or the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the volume element and the finite element method to calculate the homogenized properties. A 3D optical scanner was used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the representative volume element (RVE) is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The effective thermal conductivity of the heterogeneous volume is then calculated using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements were performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compare satisfactorily with a batch of numerical simulations.
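The settling step uses the standard velocity Verlet integrator. A minimal sketch of one update, where `force_fn` is a placeholder for the gravity-plus-contact force model (which the abstract does not detail); the integrator is exact for a constant force such as pure gravity:

```python
import numpy as np

def velocity_verlet_step(pos, vel, force, mass, dt, force_fn):
    """One velocity-Verlet update for a particle of mass `mass`."""
    acc = force / mass
    pos_new = pos + vel * dt + 0.5 * acc * dt ** 2        # position drift
    force_new = force_fn(pos_new)                         # e.g. gravity + contacts
    vel_new = vel + 0.5 * (acc + force_new / mass) * dt   # kick with averaged force
    return pos_new, vel_new, force_new

# free fall from rest under gravity (mass = 1 kg, dt = 0.01 s, 100 steps -> t = 1 s)
g = np.array([0.0, 0.0, -9.81])
pos, vel, force = np.zeros(3), np.zeros(3), g.copy()
for _ in range(100):
    pos, vel, force = velocity_verlet_step(pos, vel, force, 1.0, 0.01, lambda p: g)
print(round(pos[2], 6))   # -> -4.905, i.e. -0.5 * 9.81 * 1.0**2
```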
Automated Calculation of Water-equivalent Diameter (DW) Based on AAPM Task Group 220.
Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam; Dougherty, Geoff
2016-01-01
The purpose of this study is to accurately and effectively automate the calculation of the water-equivalent diameter (DW), the metric that characterizes patient size and attenuation, from 3D CT images for estimating the size-specific dose. In this study, DW was calculated for standard CTDI phantoms and patient images. Two types of phantom were used, one representing the head with a diameter of 16 cm and the other representing the body with a diameter of 32 cm. Images of 63 patients were also used: 32 who had undergone a CT head examination and 31 who had undergone a CT thorax examination. Our algorithm for automated DW calculation has three main parts. The first part reads the 3D images and converts the CT data into Hounsfield units (HU). The second part finds the contour of the phantom or patient automatically. The third part automates the calculation of DW based on the automated contouring for every slice (DW,all). The results show that the automated and manual calculations of DW are in good agreement for phantoms and patients, with differences of less than 0.5%. The results also show that estimating DW,all using DW,n=1 (the central slice along the longitudinal axis) produces percentage differences of -0.92% ± 3.37% and 6.75% ± 1.92%, while estimating DW,all using DW,n=9 produces percentage differences of 0.23% ± 0.16% and 0.87% ± 0.36%, for thorax and head examinations, respectively. The percentage differences between the normalized size-specific dose estimate for every slice (nSSDEall) and nSSDEn=1 are 0.74% ± 2.82% and -4.35% ± 1.18% for thorax and head examinations, respectively; between nSSDEall and nSSDEn=9 they are 0.00% ± 0.46% and -0.60% ± 0.24%, respectively. PMID:27455491
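Per AAPM TG-220, the per-slice computation behind the second and third parts reduces to a few lines: scale the contoured area by the mean CT number relative to water, then convert the water-equivalent area to a diameter. A minimal sketch (function and variable names are ours), assuming `mask` is the automated patient contour:

```python
import numpy as np

def water_equivalent_diameter(ct_slice, mask, pixel_area_mm2):
    """Water-equivalent diameter of one axial slice per AAPM TG-220.

    ct_slice       : 2D array of Hounsfield units
    mask           : boolean array marking the patient/phantom contour
    pixel_area_mm2 : area of one pixel in mm^2
    """
    mean_hu = ct_slice[mask].mean()
    area_roi = mask.sum() * pixel_area_mm2            # contoured area, mm^2
    # water-equivalent area: scale ROI area by mean attenuation relative to water
    area_w = (mean_hu / 1000.0 + 1.0) * area_roi
    return 2.0 * np.sqrt(area_w / np.pi)              # D_W in mm
```

For a water cylinder (HU = 0 inside the contour) this reduces, as it should, to the geometric diameter.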
Wang, Junmei; Hou, Tingjun
2012-05-25
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether buried or exposed. Each atom has two types of surface area: solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface area are weighted to estimate the contribution of an atom to S. Atoms of the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface area. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was always set to 298.15 K throughout. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-processing entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
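The additive bookkeeping of the WSAS model is just a per-atom weighted sum over the two surface-area types. The sketch below shows that bookkeeping only; the functional form is illustrative and the fitted atom-type weights are in the paper, not here:

```python
def wsas_entropy(atoms, weights, k):
    """Schematic WSAS-style sum: each atom contributes through its exposed
    (SAS) and buried (BSAS) surface areas; atoms of one atom type share a
    weight, and the global parameter k balances the two area types.

    atoms   : iterable of (atom_type, sas, bsas) tuples
    weights : dict mapping atom_type -> weight
    k       : global balance parameter
    """
    return sum(weights[t] * (sas + k * bsas) for t, sas, bsas in atoms)
```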
Monte Carlo-based dose calculation for 32P patch source for superficial brachytherapy applications
Sahoo, Sridhar; Palani, Selvam T.; Saxena, S. K.; Babu, D. A. R.; Dash, A.
2015-01-01
Skin cancer treatment with a 32P source is an easy, inexpensive method of treatment limited to small, superficial lesions approximately 1 mm deep. Bhabha Atomic Research Centre (BARC) has indigenously developed a 32P Nafion-based patch source (1 cm × 1 cm) for treating skin cancer. For this source, the dose per unit activity at different depths, including dose profiles in water, was calculated using the EGSnrc-based Monte Carlo code system. For an initial activity of 1 Bq distributed over the 1 cm^{2} surface area of the source, the calculated central-axis depth dose values are 3.62 × 10^{-10} Gy Bq^{-1} and 8.41 × 10^{-11} Gy Bq^{-1} at 0.0125 and 1 mm depths in water, respectively. Hence, the treatment time calculated for delivering a therapeutic dose of 30 Gy at 1 mm depth along the central axis of the source with 37 MBq of activity is about 2.7 hrs. PMID:26150682
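The quoted treatment time follows directly from the 1 mm depth-dose value, reading the Monte Carlo result as a dose rate per unit activity and neglecting 32P decay over ~3 h (its half-life is 14.3 days):

```python
# Reproducing the treatment-time estimate in the abstract.
dose_rate_per_bq = 8.41e-11      # Gy s^-1 Bq^-1 at 1 mm depth (MC result above)
activity = 37.0e6                # Bq
prescribed_dose = 30.0           # Gy

dose_rate = dose_rate_per_bq * activity              # Gy/s at 1 mm depth
treatment_time_h = prescribed_dose / dose_rate / 3600.0
print(round(treatment_time_h, 1))                    # -> 2.7
```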
Miliordos, Evangelos; Xantheas, Sotiris S.
2013-08-15
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C_{1} symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm^{–1} from those obtained from Cartesian coordinates.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.
2013-06-01
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
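The layout the PSL method relies on, grouping same-type, similar-energy particles contiguously so GPU threads transport like particles together, can be sketched with a lexicographic sort. The binning scheme below is an illustrative choice, not gDPM's actual file format:

```python
import numpy as np

def sort_phase_space(ptype, energy, n_bins, e_min, e_max):
    """Return an ordering of phase-space particles grouped by particle
    type and then by energy bin, the arrangement a GPU batch scheduler
    wants so that concurrently transported particles are alike."""
    e_bin = np.clip(((energy - e_min) / (e_max - e_min) * n_bins).astype(int),
                    0, n_bins - 1)
    # np.lexsort: last key is the primary sort key
    return np.lexsort((e_bin, ptype))
```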
SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT
Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K
2014-06-01
Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) images using histograms of pixel values in the simulation CT (sim-CT) and CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images acquired immediately before treatment of 10 prostate cancer patients were used. Because the pixel values in CBCT are insufficiently calibrated, they are difficult to use directly for dose calculation. The pixel values in the CBCT images were therefore converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images, and the dose distributions were recalculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: After pixel value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78 ± 34.60, 11.78 ± 41.06, 29.49 ± 36.99 and 0.14 ± 31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13 ± 0.95%, 0.34 ± 0.86%, −0.05 ± 0.55%, 1.35 ± 0.98%, 1.77 ± 0.56%, 0.89 ± 0.69% and 1.69 ± 0.71%, respectively; as a whole, the difference in prescription dose was 1.54 ± 0.4%. Conclusion: Dose calculation on CBCT images achieves an accuracy of <2% using this pixel value conversion program. This may enable implementation of efficient adaptive radiotherapy.
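A histogram-based conversion of the kind described can be sketched as standard histogram matching: each CBCT value is replaced by the sim-CT value at the same cumulative rank. This is a generic stand-in, as the abstract does not specify the in-house program's exact mapping:

```python
import numpy as np

def match_histogram(cbct, sim_ct):
    """Map CBCT pixel values onto the sim-CT intensity distribution by
    matching cumulative histograms (CDFs)."""
    src_vals, src_idx, src_cnt = np.unique(cbct.ravel(),
                                           return_inverse=True,
                                           return_counts=True)
    tmpl_vals, tmpl_cnt = np.unique(sim_ct.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_cnt) / cbct.size
    tmpl_cdf = np.cumsum(tmpl_cnt) / sim_ct.size
    # for each source rank, pick the template value with the same rank
    mapped = np.interp(src_cdf, tmpl_cdf, tmpl_vals)
    return mapped[src_idx].reshape(cbct.shape)
```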
A Brief User's Guide to the Excel^{®} -Based DF Calculator
Jubin, Robert T.
2015-09-30
To understand the importance of capturing penetrating forms of iodine as well as the other volatile radionuclides, a calculation tool was developed in the form of an Excel^{®} spreadsheet to estimate the overall plant decontamination factor (DF). The tool requires the user to estimate the splits of the volatile radionuclides among the major portions of the reprocessing plant, the speciation of iodine, and individual DFs for each off-gas stream within the used nuclear fuel reprocessing plant. The impact on the overall plant DF for each volatile radionuclide is then calculated by the tool based on these user choices. The Excel^{®} spreadsheet tracks elemental and penetrating forms of iodine separately and allows changes in iodine speciation at each processing step. It also tracks ^{3}H, ^{14}C and ^{85}Kr. This document provides a basic user's guide to the manipulation of this tool.
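The bookkeeping behind an overall plant DF can be illustrated for a single nuclide: if a fraction f_i of the nuclide is routed to off-gas stream i, which abates it by DF_i, the released fraction is the sum of f_i/DF_i, and the overall DF is its reciprocal. A sketch of that arithmetic only (not the spreadsheet's actual layout; names are ours):

```python
def overall_df(splits, stream_dfs):
    """Overall plant DF for one volatile radionuclide, assuming the feed
    splits into parallel off-gas streams, each abated by its own DF.

    splits     : fraction of the nuclide routed to each stream (sums to 1)
    stream_dfs : decontamination factor achieved on each stream
    """
    assert abs(sum(splits) - 1.0) < 1e-9, "splits must sum to 1"
    released = sum(f / df for f, df in zip(splits, stream_dfs))
    return 1.0 / released
```

For example, routing 90% of a nuclide through a stream with DF 1000 while 10% bypasses abatement (DF 1) caps the overall DF near 10, which is precisely why penetrating forms that escape capture dominate the plant DF.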
Gong, Jian; Kim, Chang-Jin “CJ”
2009-01-01
Electrowetting-on-dielectric (EWOD) actuation enables digital (or droplet) microfluidics, where small packets of liquid are manipulated on a two-dimensional surface. Due to its mechanical simplicity and low energy consumption, EWOD holds particular promise for portable systems. To improve the volume precision of the droplets, which is desired for quantitative applications such as biochemical assays, existing practices would require near-perfect device fabrication and operating conditions unless the droplets are generated under feedback control by an extra pump setup off the chip. In this paper, we develop an all-electronic (i.e., no ancillary pumping) real-time feedback control of on-chip droplet generation. Fast voltage modulation, capacitance sensing, and a discrete-time PID feedback controller are integrated on the operating electronic board. A significant improvement is obtained in droplet volume uniformity, compared with open-loop control as well as the previous feedback control employing an external pump. Furthermore, this new capability empowers users to prescribe droplet volumes even below the previously considered minimum, allowing, for example, 1:x (x < 1) mixing, in comparison to the previously considered n:m mixing (i.e., n and m unit droplets). PMID:18497909
NASA Astrophysics Data System (ADS)
Espel, Federico Puente
The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The research comprises the development of models to include thermal-hydraulic feedback in the Monte Carlo method and of speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel-based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of light water reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of one, and at best over small zones consisting of a pin cell), functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs, which present increased geometric complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computing technology and couple neutron transport solutions with thermal-hydraulic feedback models at the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited to solving such core environments with a detailed representation of the complicated 3-D problem. Its major advantages over deterministic methods are the continuous-energy treatment and the exact 3-D geometry modeling; however, it involves vast computational time. Interest in Monte Carlo methods has increased thanks to improvements in the capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
NASA Astrophysics Data System (ADS)
Li, S.-Y.; Niklasson, G. A.; Granqvist, C. G.
2011-06-01
Composites including VO2-based thermochromic nanoparticles are able to combine high luminous transmittance Tlum with a significant modulation of the solar energy transmittance ΔTsol at a "critical" temperature in the vicinity of room temperature. Thus nanothermochromics is of much interest for energy efficient fenestration and offers advantages over thermochromic VO2-based thin films. This paper presents calculations based on effective medium theory applied to dilute suspensions of core-shell nanoparticles and demonstrates that, in particular, moderately thin-walled hollow spherical VO2 nanoshells can give significantly higher values of ΔTsol than solid nanoparticles at the expense of a somewhat lowered Tlum. This paper is a sequel to a recent publication [S.-Y. Li, G. A. Niklasson, and C. G. Granqvist, J. Appl. Phys. 108, 063525 (2010)].
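The dilute-suspension limit such calculations build on is the Maxwell Garnett effective-medium formula. For solid spheres it takes the textbook form below; the core-shell (hollow-sphere) polarizability the paper actually uses is more involved, so this is only the baseline case:

```python
def maxwell_garnett(eps_p, eps_h, f):
    """Maxwell Garnett effective permittivity of a dilute suspension of
    solid spheres with permittivity eps_p in a host eps_h at volume
    fraction f (works for complex permittivities too)."""
    beta = (eps_p - eps_h) / (eps_p + 2.0 * eps_h)   # sphere polarizability factor
    return eps_h * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)
```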
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Huang, Qunying; Wu, Yican
2005-04-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray and electron beams to the proportions of elements and the mass densities of the materials used to represent the patient's anatomical structure. The human body can be well described by air, lung, adipose, muscle, soft bone and hard bone for calculating dose distributions with the Monte Carlo method. Based on our investigation, the effects of the calibration curves established with various CT scanners are not clinically significant. The deviation in the cumulative dose-volume histogram values derived from the CT-based voxel phantoms is less than 1% for the given target.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte Carlo (MC) simulations are considered the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. To overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We demonstrate that PhiMC delivers dose distributions in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
Improving iterative surface energy balance convergence for remote sensing based flux calculation
NASA Astrophysics Data System (ADS)
Dhungel, Ramesh; Allen, Richard G.; Trezza, Ricardo
2016-04-01
A modification of the iterative procedure for the surface energy balance was proposed to expedite the convergence of the Monin-Obukhov stability correction used in remote sensing based flux calculations. This was demonstrated using ground-based weather stations as well as gridded weather data (North American Regional Reanalysis) and remote sensing (Landsat 5, 7) images. The study was conducted for different land-use classes in southern Idaho and northern California for multiple satellite overpasses. The convergence behavior of a selected Landsat pixel, as well as of all Landsat pixels within the area of interest, was analyzed. The modified version needed several times fewer iterations than the current iterative technique. At low wind speeds (~1.3 m/s), the current iterative technique could not find a solution of the surface energy balance for all Landsat pixels, while the modified version achieved one in a few iterations. The study will help many operational evapotranspiration models avoid non-convergence at low wind speeds, which increases the accuracy of flux calculations.
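The abstract does not give the modification itself, but the payoff of reformulating a fixed-point iteration can be illustrated generically: plain successive substitution versus Aitken Δ² (Steffensen) acceleration on a scalar fixed-point problem x = cos x. This is an illustration of the convergence-speed principle, not the authors' surface-energy-balance scheme:

```python
import math

def picard(f, x0, tol=1e-10, max_iter=500):
    """Plain successive substitution x <- f(x)."""
    x, n = x0, 0
    while n < max_iter:
        x_new = f(x)
        n += 1
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, n

def steffensen(f, x0, tol=1e-10, max_iter=500):
    """Aitken delta-squared acceleration of the same fixed-point iteration."""
    x, n = x0, 0
    while n < max_iter:
        x1, x2 = f(x), f(f(x))
        n += 1
        denom = x2 - 2.0 * x1 + x
        if denom == 0:
            return x2, n
        x_new = x - (x1 - x) ** 2 / denom   # accelerated update
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, n

root_p, n_p = picard(math.cos, 1.0)
root_s, n_s = steffensen(math.cos, 1.0)
# both find the fixed point of cos, but the accelerated form needs far fewer sweeps
```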
A method for calculating strain energy release rate based on beam theory
NASA Technical Reports Server (NTRS)
Sun, C. T.; Pandey, R. K.
1993-01-01
The Timoshenko beam theory was used to model cracked beams and to calculate the total strain energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate 2D elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain energy release rate obtained using beam theory agrees very well with the 2D finite element solution. Numerical examples are given for various beam geometries and loading conditions, and comparisons with existing beam models are also provided.
Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations
Jo, Y.; Yun, S.; Cho, N. Z.
2013-07-01
In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two respects. One is the consistent use of estimators to generate homogenized scattering cross sections. The other is that the incident or exiting angular interval is divided into multiple angular bins to modulate the albedo boundary conditions for local problems. Numerical tests show that, compared to the one-angle-bin case of a previous study, the four-angle-bin case gives significantly improved results. (authors)
NASA Astrophysics Data System (ADS)
Arroudj, S.; Bouchouit, M.; Bouchouit, K.; Bouraiou, A.; Messaadia, L.; Kulyk, B.; Figa, V.; Bouacida, S.; Sofiani, Z.; Taboukhat, S.
2016-06-01
This paper describes the synthesis, structural characterization and optical properties of two new Schiff bases. These compounds were obtained by condensation of o-tolidine with salicylaldehyde and cinnamaldehyde. The resulting ligands were characterized by UV and ¹H NMR spectroscopy. Their third-order NLO properties were measured using the third-harmonic generation technique on thin films at 1064 nm. The electric dipole moment (μ), the polarizability (α) and the first hyperpolarizability (β) were calculated using the density functional B3LYP method with the LANL2DZ basis set. The title compounds show nonzero β values, revealing second-order NLO behaviour.
NASA Astrophysics Data System (ADS)
Grebeshkov, V. V.; Smolyakov, V. M.
2012-05-01
A 16-constant additive scheme was derived for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O by decomposing the triangular numbers of Pascal's triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. Using this scheme for the properties of saturated monoalcohols as an example, it was shown that each coefficient of the scheme (in other words, the number of ways to impose a chain of a definite length i1, i2, … on a molecular graph) results from the decomposition of the triangular numbers of Pascal's triangle. A linear dependence was found within the adopted classification of structural elements. The sixteen parameters of the scheme were recorded as linear combinations of 17 parameters. The enthalpies of vaporization L^0_{298 K} of the saturated monoalcohols CH4O-C9H20O for which no experimental data were available were calculated. It was shown that the parameters are not chosen randomly when this procedure is used to construct an additive scheme by decomposing the triangular numbers of Pascal's triangle.
a Novel Sub-Pixel Matching Algorithm Based on Phase Correlation Using Peak Calculation
NASA Astrophysics Data System (ADS)
Xie, Junfeng; Mo, Fan; Yang, Chao; Li, Pin; Tian, Shiqiang
2016-06-01
The matching accuracy of homonymy (corresponding) points in stereo images is a key issue in photogrammetry, as it influences the geometric accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve matching accuracy. The theoretical peak centre, which corresponds to the sub-pixel deviation, is acquired by Peak Calculation (PC) according to an inherent geometrical relationship in the inverse normalized cross-power spectrum. Mismatching points are rejected by two strategies: a window constraint, designed from the matching window and a geometric constraint, and a correlation-coefficient threshold, which is effective for removing mismatched points in satellite images. After this, many high-precision homonymy points remain. Finally, three experiments verify the accuracy and efficiency of the presented method. The results show that the presented method outperforms traditional phase correlation matching based on surface fitting in both accuracy and efficiency, and that the accuracy of the proposed phase correlation matching algorithm can reach 0.1 pixel with higher computational efficiency.
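The pipeline the abstract describes (normalized cross-power spectrum, inverse transform, peak refinement) can be sketched with NumPy. The 3-point parabolic fit below is a generic stand-in for the authors' peak-calculation formula:

```python
import numpy as np

def phase_correlate(a, b):
    """Sub-pixel shift between images a and b via phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                     # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(F))             # correlation surface, peak at the shift
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c, p, n):
        # 3-point parabolic fit around the integer peak (stand-in for the
        # paper's peak-calculation formula)
        l, r = c[(p - 1) % n], c[(p + 1) % n]
        denom = 2.0 * c[p] - l - r
        return p + 0.5 * (r - l) / denom if denom != 0 else float(p)

    dy = refine(corr[:, px], py, corr.shape[0])
    dx = refine(corr[py, :], px, corr.shape[1])
    # map wrapped peak positions to signed shifts
    if dy > corr.shape[0] / 2: dy -= corr.shape[0]
    if dx > corr.shape[1] / 2: dx -= corr.shape[1]
    return dy, dx
```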
Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin
2012-02-10
We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEpSI are modest, making it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes on a single processor. We also show that the use of PEpSI does not lead to loss of the accuracy required in a practical DFT calculation.
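The structural idea behind pole-expansion methods, building the Fermi-Dirac density matrix from a sum of resolvents (matrix inverses) instead of an eigendecomposition, can be sketched with the slowly converging Matsubara expansion; this toy uses dense inverses in place of selected inversion and is not PEpSI's optimized pole choice:

```python
import numpy as np

def density_matrix_poles(H, mu, beta, npoles):
    """Fermi-Dirac density matrix f(H) from resolvents only.

    Uses the Matsubara identity f(e) = 1/2 + (2/beta) * sum_l
    Re[1/(z_l - e)] with poles z_l = mu + i*(2l-1)*pi/beta.  No
    eigenvalues or eigenvectors of H are needed; each term costs one
    (here dense) matrix inverse, which selected inversion would replace.
    """
    n = H.shape[0]
    P = 0.5 * np.eye(n)
    for l in range(1, npoles + 1):
        z = mu + 1j * (2 * l - 1) * np.pi / beta   # Matsubara pole
        P += (2.0 / beta) * np.linalg.inv(z * np.eye(n) - H).real
    return P
```

The electron count then follows from the trace of P, which is how the chemical potential can be updated without Kohn-Sham eigenvalues; a practical pole expansion uses far fewer, optimally placed poles.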
Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations
NASA Astrophysics Data System (ADS)
Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank
2008-05-01
Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve, which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in the extracted effective atomic numbers Z and relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on a parametrization of the linear attenuation coefficient was performed for 17 tissue-equivalent inserts inside a solid water phantom. Scans of the phantom were acquired at 100 kVp and 140 kVp, from which the Z and ρe values of each insert were derived. The mean errors of the Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. A comparable improvement in dose calculation accuracy is expected for human tissues. Dual-energy tissue segmentation thus offers significantly higher accuracy than conventional single-energy segmentation.
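A minimal sketch of the two-energy inversion idea, assuming the common parametrization mu(E) = rho_e * (a(E) + b(E) * Z^m); the exponent and all coefficient values below are illustrative placeholders, not the paper's calibration:

```python
# Hypothetical two-energy attenuation model: a(E) is a Compton-like term,
# b(E)*Z^M a photoelectric-like term.  Coefficients are made up for the demo.
M = 3.3                       # assumed effective Z-exponent
A = {100: 0.20, 140: 0.18}    # hypothetical a(E) coefficients
B = {100: 4e-4, 140: 1e-4}    # hypothetical b(E) coefficients

def extract(mu_lo, mu_hi, e_lo=100, e_hi=140):
    """Recover (Z, rho_e) from attenuation measured at two tube energies.

    Solving mu_lo = rho_e*(a1 + b1*t) and mu_hi = rho_e*(a2 + b2*t) for
    t = Z^M eliminates rho_e and yields a closed form.
    """
    a1, b1, a2, b2 = A[e_lo], B[e_lo], A[e_hi], B[e_hi]
    t = (mu_hi * a1 - mu_lo * a2) / (mu_lo * b2 - mu_hi * b1)  # t = Z^M
    z = t ** (1.0 / M)
    rho_e = mu_lo / (a1 + b1 * t)
    return z, rho_e
```

In the noiseless forward model the inversion is exact, which is why the paper's residual errors (2.8% on Z, 1.8% on ρe) come from measurement and parametrization imperfections rather than from the algebra.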
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Methodology for calculating the per-treatment base... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per.... The methodology for determining the per treatment base rate under the ESRD prospective payment...
GPU-based fast Monte Carlo dose calculation for proton therapy
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B
2015-01-01
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6–22 s to simulate 10 million source protons to achieve ~1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy. PMID:23128424
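The gamma test used for the gPMC vs TOPAS/Geant4 comparison can be illustrated in one dimension; this simplified global 2%/2 mm evaluation with a 10% dose threshold mirrors the reported criterion but is not the validation code itself:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0,
                    threshold=0.10):
    """1D global gamma analysis (2%/2 mm by default).

    x is the position grid in mm; points below `threshold` of the maximum
    reference dose are excluded, matching the paper's reporting region.
    Returns the fraction of evaluated points with gamma <= 1.
    """
    d_max = dose_ref.max()
    mask = dose_ref >= threshold * d_max
    gammas = []
    for i in np.where(mask)[0]:
        dd = (dose_eval - dose_ref[i]) / (dose_tol * d_max)  # dose term
        dx = (x - x[i]) / dist_tol                           # distance term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())      # min over search
    return np.mean(np.array(gammas) <= 1.0)
```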
Wang, Lin-Wang
2006-12-01
Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, a prediction of the future trends of ab initio calculations in these areas is very useful for serving these communities: it can help decide which future computer architectures will be most useful and what should be emphasized in future supercomputer procurements. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using real-space grid methods in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real-space grid method is more suitable for parallel computation because of its limited communication requirement, compared with spectral methods where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real-space basis. In this report, the author compares the real-space methods with the traditional plane wave (PW) spectral methods, discussing their technical pros and cons and possible future trends. For the real-space methods, the author focuses on the regular-grid finite-difference (FD) method and the finite element (FE) method, the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the
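The finite-difference (FD) discretization discussed in the report can be illustrated with the smallest possible example, a 1D particle-in-a-box Hamiltonian on a regular grid (atomic units), diagonalized by the direct dense method whose O(N^3) cost motivates the O(N) alternatives:

```python
import numpy as np

# 1D box of length L with n interior grid points; H = -1/2 d^2/dx^2
# discretized with the standard second-order central-difference stencil.
n, L = 200, 1.0
h = L / (n + 1)
main = np.full(n, 1.0 / h ** 2)       # -1/2 * (-2/h^2) on the diagonal
off = np.full(n - 1, -0.5 / h ** 2)   # -1/2 * (1/h^2) off the diagonal
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

e_fd = np.linalg.eigvalsh(H)[0]       # direct O(n^3) diagonalization
e_exact = np.pi ** 2 / (2 * L ** 2)   # analytic ground-state energy
```

The FD eigenvalue converges quadratically in the grid spacing h, which is the trade-off against the spectral accuracy of plane-wave methods mentioned above.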
Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation
NASA Astrophysics Data System (ADS)
Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei
2007-02-01
On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy of using kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both the planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The stability of the CBCT calibration was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of the motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT, with and without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike in the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle this problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possess the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of B-spline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate for the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of the pCT. No
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposure from radionuclides deposited on surfaces usually makes the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective dose and the averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed using the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, the paper summarizes the time dependence of the source strengths relative to a reference surface and gives a short overview of the mechanical and chemical intervention techniques that can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper, the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 x 10^-4 pGy per gamma m^-2. The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute the largest part (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio increasingly shifts in favor of the roof. The decontamination of the roof and the paved area yields about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y
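The decay part of the time dependence of the source strength follows directly from the 137Cs half-life (30.17 y); the sketch below covers only radioactive decay and ignores the weathering and migration effects that the paper's relative source-strength functions also include:

```python
import numpy as np

HALF_LIFE_Y = 30.17                 # 137Cs half-life in years
LAM = np.log(2) / HALF_LIFE_Y       # decay constant, 1/y

def integrated_fraction(t_years):
    """Fraction of the infinite-time (decay-only) dose accrued by time t.

    The dose rate is proportional to exp(-lam*t), so the time-integrated
    dose up to t is (1 - exp(-lam*t))/lam; dividing by the t -> infinity
    limit 1/lam gives the fraction below.
    """
    return 1.0 - np.exp(-LAM * t_years)
```

Evaluating this at 1, 10 and 50 y shows why the long-integration periods shift the dose balance: most of the decay-only dose is still outstanding after the first year.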
SDT: A Virus Classification Tool Based on Pairwise Sequence Alignment and Identity Calculation
Muhire, Brejnev Muhizi; Varsani, Arvind; Martin, Darren Patrick
2014-01-01
The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms). PMID:25259891
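The gap-handling sensitivity the authors highlight can be demonstrated with a toy pairwise identity calculation; the two gap modes below are illustrative methodological choices, not SDT's exact scoring:

```python
def pairwise_identity(a, b, gap_mode="count"):
    """Percent identity of two aligned sequences ('-' marks gaps).

    gap_mode illustrates the methodological variation discussed above:
      'count'  - gapped columns count as mismatches,
      'ignore' - columns where either sequence has a gap are skipped.
    """
    assert len(a) == len(b), "sequences must be pre-aligned"
    matches = total = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            if gap_mode == "ignore":
                continue
            total += 1          # gap column counted as a mismatch
        else:
            total += 1
            matches += x == y
    return 100.0 * matches / total
```

Even on this five-column toy alignment the two conventions disagree by 20 percentage points, which is exactly the kind of inconsistency that can push a pair of sequences across a taxonomic demarcation threshold.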
Novel Anthropometry-Based Calculation of the Body Heat Capacity in the Korean Population.
Pham, Duong Duc; Lee, Jeong Hoon; Lee, Young Boum; Park, Eun Seok; Kim, Ka Yul; Song, Ji Yeon; Kim, Ji Eun; Leem, Chae Hun
2015-01-01
Heat capacity (HC) has an important role in the temperature regulation process, particularly in dealing with heat load. Actual measurement of the body HC is complicated, so HC is generally estimated from body-composition-specific data. This study compared previously known HC-estimating equations and sought a way to define HC using simple anthropometric indices such as weight and body surface area (BSA) in the Korean population. Six hundred participants were randomly selected from a pool of 902 healthy volunteers aged 20 to 70 years for the training set. The remaining 302 participants were used for the test set. Body composition analysis using multi-frequency bioelectrical impedance analysis was used to assess body components including body fat, water, protein, and mineral mass. Four different HCs were calculated and compared: a weight-based HC (HC_Eq1), two HCs estimated from fat and fat-free mass (HC_Eq2 and HC_Eq3), and an HC calculated from fat, protein, water, and mineral mass (HC_Eq4). HC_Eq1 generally produced a larger HC than the other equations and correlated more poorly with them. The HC equations using body composition data correlated well with each other. If the HC estimated with HC_Eq4 was regarded as a standard, interestingly, BSA and weight contributed independently to the variation of HC. A model composed of weight, BSA, and gender was able to predict more than 99% of the variation of HC_Eq4. Validation analysis on the test set showed a highly satisfactory level of performance for the predictive model. In conclusion, our results suggest that gender, BSA, and weight are independent factors for calculating HC. For the first time, a predictive equation based on anthropometry data was developed, and this equation could be useful for estimating HC in the general Korean population without body-composition measurement. PMID:26529594
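A hedged sketch of a composition-weighted heat capacity (in the spirit of HC_Eq4) together with the classical Du Bois body-surface-area formula; the specific-heat values below are illustrative literature-style numbers, not the constants used in the paper:

```python
# Assumed component specific heats in kJ/(kg*K); illustrative values only.
C_SPEC = {"water": 4.19, "protein": 1.6, "fat": 2.3, "mineral": 0.9}

def heat_capacity(masses_kg):
    """Whole-body HC (kJ/K) as a mass-weighted sum of component specific
    heats, the structure of an HC_Eq4-style composition-based estimate."""
    return sum(C_SPEC[k] * m for k, m in masses_kg.items())

def bsa_dubois(weight_kg, height_cm):
    """Du Bois & Du Bois body surface area (m^2)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725
```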
Implementation of a Web-Based Spatial Carbon Calculator for Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Degagne, R. S.; Bachelet, D. M.; Grossman, D.; Lundin, M.; Ward, B. C.
2013-12-01
A multi-disciplinary team from the Conservation Biology Institute is creating a web-based tool for the InterAmerican Development Bank (IDB) to assess the impact of potential development projects on carbon stocks in Latin America and the Caribbean. Funded by the German Society for International Cooperation (GIZ), this interactive carbon calculator is an integrated component of the IDB Decision Support toolkit which is currently utilized by the IDB's Environmental Safeguards Group. It is deployed on the Data Basin (www.databasin.org) platform and provides a risk screening function to indicate the potential carbon impact of various types of projects, based on a user-delineated development footprint. The tool framework employs the best available geospatial carbon data to quantify above-ground carbon stocks and highlights potential below-ground and soil carbon hotspots in the proposed project area. Results are displayed in the web mapping interface, as well as summarized in PDF documents generated by the tool.
Electronic structures of halogen-doped Cu2O based on DFT calculations
NASA Astrophysics Data System (ADS)
Zhao, Zong-Yan; Yi, Juan; Zhou, Da-Cheng
2014-01-01
In order to construct p-n homojunctions in Cu2O-based thin-film solar cells, which may increase their conversion efficiency, synthesizing n-type Cu2O with high conductivity is extremely crucial and is considered a challenge for the near future. The doping effects of halogens on the electronic structure of Cu2O have been investigated by density functional theory calculations in the present work. Halogen dopants form donor levels below the bottom of the conduction band through gaining or losing electrons, suggesting that halogen doping could give Cu2O n-type conductivity. The lattice distortion, the impurity formation energy, and the position and band width of the donor level of Cu2O1-xHx (H = F, Cl, Br, I) increase with the halogen atomic number. Based on the calculated results, chlorine is an effective n-type dopant for Cu2O, owing to its lower impurity formation energy and suitable donor level.
Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki
2013-01-01
This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were then estimated using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used to construct a three-dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
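The quaternion-based orientation update at the heart of such methods can be sketched as a first-order integration of body-frame angular velocity; the step size and per-step renormalization below are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(omega_body, dt, q0=np.array([1.0, 0.0, 0.0, 0.0])):
    """Integrate body-frame angular velocity samples (rad/s) into an
    orientation quaternion via q' = q + 0.5*dt*(q x omega), renormalizing
    after each step to stay on the unit sphere."""
    q = q0.copy()
    for w in omega_body:
        q = q + 0.5 * dt * quat_mul(q, np.array([0.0, *w]))
        q /= np.linalg.norm(q)
    return q
```

Integrating a constant rotation of pi/2 rad/s about the body z-axis for one second recovers, as expected, the quaternion of a 90-degree rotation.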
Calculations of helium separation via uniform pores of stanene-based membranes
Gao, Guoping; Jiao, Yan; Jiao, Yalong; Ma, Fengxian; Kou, Liangzhi
2015-01-01
The development of low-energy-cost membranes to separate He from noble gas mixtures is highly desirable. In this work, we studied He purification using recently experimentally realized two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as superior membranes compared with traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be tuned significantly by applying strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting interesting new materials for helium separation for future experimental validation. PMID:26885459
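Diffusion barriers from calculations of this kind translate into membrane selectivities through an Arrhenius rate ratio; the barrier values used in the example are hypothetical placeholders, not results from this work:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def selectivity(e_barrier_he, e_barrier_x, T=298.0):
    """He/X selectivity as the ratio of Arrhenius diffusion rates
    r ~ exp(-E_b / kT) through the same pore at temperature T (kelvin).
    Barriers are in eV."""
    return np.exp(-(e_barrier_he - e_barrier_x) / (KB * T))
```

Because the barrier difference sits in an exponent, even a modest 0.2 eV gap between He and a larger noble gas yields a selectivity of a few thousand at room temperature.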
Setny, Piotr; Zacharias, Martin
2010-07-01
A simple, semiheuristic solvation model based on a discrete, BCC grid of solvent cells has been presented. The model utilizes a mean field approach for the calculation of solute-solvent and solvent-solvent interaction energies and a cellular automata based algorithm for the prediction of solvent distribution in the presence of solute. The construction of the effective Hamiltonian for a solvent cell provides an explicit coupling between orientation-dependent water-solute electrostatic interactions and water-water hydrogen bonding. The water-solute dispersion interaction is also explicitly taken into account. The model does not depend on any arbitrary definition of the solute-solvent interface nor does it use a microscopic surface tension for the calculation of nonpolar contributions to the hydration free energies. It is demonstrated that the model provides satisfactory predictions of hydration free energies for drug-like molecules and is able to reproduce the distribution of buried water molecules within protein structures. The model is computationally efficient and is applicable to arbitrary molecules described by atomistic force field. PMID:20552986
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
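Pressure-based normalization of burn rate rests on Saint-Robert's law r = a*P^n; the sketch below shows only the scaling step used to put rates measured at different chamber pressures on a common footing, with an assumed illustrative exponent:

```python
def normalize_burn_rate(r_meas, p_meas, p_ref, n=0.35):
    """Scale a burn rate measured at pressure p_meas to a reference
    pressure p_ref, assuming r = a*P^n so that r_ref/r_meas =
    (p_ref/p_meas)^n.  The exponent n = 0.35 is an illustrative value,
    not the propellant ballistic exponent from the study."""
    return r_meas * (p_ref / p_meas) ** n
```

Normalizing all motors in a mix to a common pressure in this way removes the pressure contribution to apparent burn-rate scatter, which is what lets within-mix dispersion be attributed to the processing variables.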
GPU Based Fast Free-Wake Calculations For Multiple Horizontal Axis Wind Turbine Rotors
NASA Astrophysics Data System (ADS)
Türkal, M.; Novikov, Y.; Üşenmez, S.; Sezer-Uzol, N.; Uzol, O.
2014-06-01
Unsteady free-wake solutions of wind turbine flow fields involve computationally intensive interaction calculations, which generally limit the total simulation time or the number of turbines that can be simulated by the method. This problem, however, can be addressed with a high level of parallelization. In particular, a Graphics Processing Unit (GPU) can provide a significant computational speed-up, rendering the most intensive engineering problems realizable in hours of computation time. This paper presents the results of the simulation of the flow field for the NREL Phase VI turbine using a GPU-based in-house free-wake panel method code. The computational parallelism inherent in the free-wake methodology is exploited using a GPU, allowing thousands of similar operations to be performed simultaneously. The results are compared to experimental data as well as to those obtained by running a corresponding CPU-based code. Results show that the GPU-based code is capable of producing wake and load predictions similar to those of the CPU-based code in a substantially reduced amount of time. This capability could allow free-wake-based analysis to be used in design and optimization studies of wind farms, in the prediction of multiple-turbine flow fields, and in the investigation of the effects of different vortex core, core expansion, and stretching models on rotor interaction problems in multiple-turbine wake flows.
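The interaction calculations that dominate free-wake cost are Biot-Savart evaluations of segment-induced velocities at many field points, which is exactly the data-parallel structure a GPU exploits. A minimal vectorized sketch (not the authors' code; the cut-off regularization is an illustrative choice) is:

```python
import numpy as np

def induced_velocity(points, seg_start, seg_end, gamma, core=1e-4):
    """Biot-Savart induced velocity at `points` (N,3) from straight vortex
    segments with endpoints seg_start/seg_end (M,3) and circulations
    gamma (M,). Fully vectorized over the N x M interaction pairs, the
    same structure a GPU kernel would parallelize."""
    r1 = points[:, None, :] - seg_start[None, :, :]   # (N, M, 3)
    r2 = points[:, None, :] - seg_end[None, :, :]
    cross = np.cross(r1, r2)
    norm1 = np.linalg.norm(r1, axis=2)
    norm2 = np.linalg.norm(r2, axis=2)
    dot = np.einsum('nmk,nmk->nm', r1, r2)
    denom = norm1 * norm2 * (norm1 * norm2 + dot) + core
    k = gamma / (4 * np.pi) * (norm1 + norm2) / denom  # (N, M)
    return np.einsum('nm,nmk->nk', k, cross)

# Example: one point beside a finite vortex segment on the z-axis.
pts = np.array([[1.0, 0.0, 0.0]])
v = induced_velocity(pts,
                     np.array([[0.0, 0.0, -1.0]]),
                     np.array([[0.0, 0.0, 1.0]]),
                     np.array([1.0]))
```

Each of the N x M pairs is independent, so mapping one pair (or one target point) per GPU thread gives the speed-ups reported in the paper.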
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, since data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Model-based dose calculations for {sup 125}I lung brachytherapy
Sutherland, J. G. H.; Furutani, K. M.; Garces, Y. I.; Thomson, R. M.
2012-07-15
Purpose: Model-based dose calculations (MBDCs) are performed using patient computed tomography (CT) data for patients treated with intraoperative {sup 125}I lung brachytherapy at the Mayo Clinic Rochester. Various metallic artifact correction and tissue assignment schemes are considered and their effects on dose distributions are studied. Dose distributions are compared to those calculated under TG-43 assumptions. Methods: Dose distributions for six patients are calculated using phantoms derived from patient CT data and the EGSnrc user-code BrachyDose. {sup 125}I (GE Healthcare/Oncura model 6711) seeds are fully modeled. Four metallic artifact correction schemes are applied to the CT data phantoms: (1) no correction, (2) a filtered back-projection on a modified virtual sinogram, (3) the reassignment of CT numbers above a threshold in the vicinity of the seeds, and (4) a combination of (2) and (3). Tissue assignment is based on voxel CT number and mass density is assigned using a CT number to mass density calibration. Three tissue assignment schemes with varying levels of detail (20, 11, and 5 tissues) are applied to metallic artifact corrected phantoms. Simulations are also performed under TG-43 assumptions, i.e., seeds in homogeneous water with no interseed attenuation. Results: Significant dose differences (up to 40% for D{sub 90}) are observed between uncorrected and metallic artifact corrected phantoms. For phantoms created with metallic artifact correction schemes (3) and (4), dose volume metrics are generally in good agreement (less than 2% differences for all patients) although there are significant local dose differences. The application of the three tissue assignment schemes results in differences of up to 8% for D{sub 90}; these differences vary between patients. Significant dose differences are seen between fully modeled and TG-43 calculations with TG-43 underestimating the dose (up to 36% in D{sub 90}) for larger volumes containing higher proportions of
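The tissue assignment step can be sketched as a simple mapping from voxel CT number to tissue and mass density. The thresholds and linear calibration below are hypothetical illustrations of the idea, not the schemes used in the paper.

```python
# Hypothetical 5-tissue scheme: bin edges (HU) and the CT-number-to-density
# calibration below are illustrative values only.
TISSUE_BINS = [
    (-1000, -850, "air"),
    (-850,  -200, "lung"),
    (-200,   100, "soft tissue"),
    (100,    300, "cartilage"),
    (300,   3000, "bone"),
]

def assign_tissue(hu):
    # Map a voxel CT number (HU) to a tissue label.
    for lo, hi, name in TISSUE_BINS:
        if lo <= hu < hi:
            return name
    return "metal artifact"  # CT numbers above threshold need correction

def mass_density(hu):
    # Illustrative piecewise-linear CT number to mass density (g/cm^3).
    return 1.0 + hu / 1000.0 if hu > -1000 else 0.001

tissue = assign_tissue(-500.0)   # "lung"
rho = mass_density(-500.0)       # 0.5 g/cm^3
```

Schemes with more tissues simply use finer bin edges; the metallic artifact corrections in the paper act on the CT numbers before this mapping is applied.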
Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.
2014-12-16
Batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than is available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven redox-active transition metal cations. We estimate the insertion voltage, capacity, and thermodynamic stability of charged and discharged states, as well as the intercalating-ion mobility, and use these properties to identify promising directions. Our calculations indicate that the Mn2O4 spinel phases based on Mg and Ca are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages than Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than those of Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities among all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Among the transition metals considered, Mn-based spinel structures rank highest when balancing all the considered properties.
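The insertion voltages referred to above follow from DFT total energies via the standard average intercalation voltage expression; the energies in this sketch are made-up numbers for illustration only.

```python
def average_voltage(e_discharged, e_charged, n_ions, e_metal, z):
    """Average intercalation voltage (V) from DFT total energies (eV):
    V = -[E(A_n B) - E(B) - n * E(A_metal)] / (n * z),
    where z is the charge of the intercalating ion (2 for Mg2+ or Ca2+)
    and voltages are referenced to the corresponding metal anode."""
    return -(e_discharged - e_charged - n_ions * e_metal) / (n_ions * z)

# Illustrative (made-up) energies per formula unit for a divalent ion.
v = average_voltage(e_discharged=-105.0, e_charged=-98.0,
                    n_ions=1, e_metal=-1.5, z=2)
```

The factor z in the denominator is why, at comparable reaction energies, divalent-ion cathodes show lower voltages than Li cathodes, as the paper observes.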
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Martinez-Rovira, I.; Sempau, J.; Prezado, Y.
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.
Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-11-21
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM. PMID:22016026
Acceptance and commissioning of a treatment planning system based on Monte Carlo calculations.
Lopez-Tarjuelo, J; Garcia-Molla, R; Juan-Senabre, X J; Quiros-Higueras, J D; Santos-Serra, A; de Marco-Blancas, N; Calzada-Feliu, S
2014-04-01
The Monaco Treatment Planning System (TPS), based on a virtual energy fluence model of the photon beam head components of the linac and a dose computation engine using the Monte Carlo (MC) algorithm X-Ray Voxel MC (XVMC), has been tested before being put into clinical use. An Elekta Synergy with a 6 MV beam was characterized using routine equipment. After the machine model was installed, a set of functionality, geometric, dosimetric and data transfer tests was performed. The dosimetric tests included dose calculations in water, heterogeneous phantoms and Intensity Modulated Radiation Therapy (IMRT) verifications. Data transfer tests were run for every imaging device, TPS and electronic medical record linked to Monaco. Functionality and geometric tests ran properly. Dose calculations in water were in accordance with measurements: in 95% of cases, differences were within 1.9%. Dose calculations in heterogeneous media showed the results expected from the literature. IMRT verification with an ionization chamber led to dose differences lower than 2.5% for points inside a standard gradient. When a 2-D array was used, all the fields passed the γ(3%, 3 mm) test with a passing rate above 90%, and the majority of fields had passing rates between 95% and 100%. Data transfer caused problems that had to be solved by changing our workflow. In general, the tests led to satisfactory results. Monaco performance complied with published international recommendations and scored highly in the dosimetric tests. However, the problems detected when the TPS was put to work with our existing equipment showed that this kind of product must be completely commissioned, without neglecting the data workflow, before treating the first patient. PMID:23862746
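The γ(3%, 3 mm) criterion used in the IMRT verification can be illustrated with a simplified 1-D global gamma analysis; this is a sketch of the concept, not a clinical implementation.

```python
import numpy as np

def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma analysis (3%, 3 mm by default).
    ref and meas are dose arrays sampled at positions x (mm); dose_tol
    is relative to the reference maximum. Returns the fraction of
    measured points with gamma <= 1."""
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    dmax = ref.max()
    passed = 0
    for xi, di in zip(x, meas):
        dx = (x - xi) / dist_tol                  # distance term
        dd = (ref - di) / (dose_tol * dmax)       # dose-difference term
        gamma = np.sqrt(dx**2 + dd**2).min()      # minimize over ref points
        passed += gamma <= 1.0
    return passed / len(x)

# A 1% global dose offset is well inside the 3% tolerance everywhere.
x = np.linspace(0, 50, 51)
d = np.exp(-((x - 25) / 10) ** 2)
rate = gamma_pass_rate(d, d * 1.01, x)
```

Real 2-D array comparisons minimize over a 2-D search region, but the per-point combination of dose and distance tolerances is the same.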
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in it, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; the Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a centroid localization method for star targets based on multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area, subdivides the pixels in that area with a fixed number of interpolation points, and then exploits the symmetry of the stellar energy distribution to locate the centroid: assuming that the current pixel is the star centroid, it computes the difference between the sums of energy in symmetric directions (here, the transverse and longitudinal directions) at an equal step length from the current pixel (the step length can be chosen according to conditions; 9 is used in this paper), and takes the centroid position in each direction to be where the minimum difference occurs. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the stars shows that the multi-step minimum energy difference method achieves a better
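For contrast with the proposed method, the weighted centroid baseline mentioned above can be sketched in a few lines; the synthetic star parameters below are arbitrary.

```python
import numpy as np

def weighted_centroid(img, threshold=0.0):
    """Background-thresholded intensity-weighted centroid (row, col).
    The simple sub-pixel baseline the paper compares against; it is
    cheap but noisy at low SNR."""
    w = np.clip(img.astype(float) - threshold, 0.0, None)
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# Synthetic Gaussian star centered at (12.3, 8.7) on a 25 x 25 frame.
r, c = np.indices((25, 25))
star = np.exp(-(((r - 12.3) ** 2 + (c - 8.7) ** 2) / (2 * 2.0 ** 2)))
cy, cx = weighted_centroid(star)
```

On a noise-free Gaussian the estimate is essentially exact; the paper's point is that once strong background noise is added, this estimator degrades while the minimum-energy-difference search does not.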
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2016-01-01
Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals; the activity-based costing (ABC) method is a newer and more effective cost system. Objective: This study aimed to compare the ABC and TCS methods in calculating the unit cost of medical services and to assess the applicability of ABC in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data from the accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers, and five cost categories were defined: wage, equipment, space, material, and overhead costs. Activity centers were then defined. The ABC method was performed in two phases: first, the total costs of the cost centers were assigned to activities using related cost factors; then, the costs of the activities were allocated to cost objects using cost drivers. After determining the cost of the objects, the cost price of medical services was calculated and compared with that obtained from the TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, i.e., the unit cost was 50.34 USD higher with the ABC method. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides true insight into the organizational costs of their department. PMID:26234974
NASA Astrophysics Data System (ADS)
Lu, Hua; Zhang, Shushu; Liu, Hanzhuang; Wang, Yanwei; Shen, Zhen; Liu, Chungen; You, Xiaozeng
2009-12-01
A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene followed by reduction with NaBH4. The sensing properties of compound 1 toward various metal ions were investigated by fluorometric titration in methanol and show a highly selective fluorescent turn-on response in the presence of Hg2+ over other metal ions such as Li+, Na+, K+, Ca2+, Mg2+, Pb2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+, Cd2+, Ag+, and Mn2+. A computational approach was used to investigate why compound 1 gives different fluorescent signals for Hg2+ and the other ions. Calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding of a Zn2+ or Cd2+ ion significantly lowers both the HOMO and LUMO energy levels of the receptor, which inhibits the reductive PET process but instead allows an oxidative PET from the excited fluorophore to the receptor, which also quenches the fluorescence. For the 1-Hg2+ complex, however, both the reductive and oxidative PET channels are prohibited; therefore, strong fluorescence emission from the fluorophore is observed experimentally. The agreement between the experimental results and the calculations suggests that this computational method can serve as guidance for the design of new chemosensors for other metal ions.
NASA Astrophysics Data System (ADS)
Jacob, D.; Palacios, J. J.
2011-01-01
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of the implementation details is given in both cases. From a systematic study of nanocontacts made of representative metallic elements, we conclude that parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments in which the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes of large enough cross-section, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but have the advantage of extending the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
A modified W-W interatomic potential based on ab initio calculations
NASA Astrophysics Data System (ADS)
Wang, J.; Zhou, Y. L.; Li, M.; Hou, Q.
2014-01-01
In this paper we have developed a Finnis-Sinclair-type interatomic potential for W-W interactions that is based on ab initio calculations. The modified potential reproduces the correct formation energies of self-interstitial atom (SIA) defects in tungsten, offering a significant improvement over the Ackland-Thetford tungsten potential. Using the modified potential, the thermal expansion is calculated over a temperature range from 0 to 3500 K. The results are in reasonable agreement with the experimental data, thus overcoming the spurious negative thermal expansion obtained with the Derlet-Nguyen-Manh-Dudarev tungsten potential. The W-W potential presented here is also applied to study in detail the diffusion of SIAs in tungsten. We reveal that the initial SIA initiates a sequence of tungsten atom displacements and replacements in the <1 1 1> direction. An Arrhenius fit to the diffusion data at temperatures below 550 K indicates a migration energy of 0.022 eV, which is in reasonable agreement with the experimental data.
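The Arrhenius analysis of the diffusion data amounts to a linear fit of ln D versus 1/T. The synthetic data below are generated with the paper's fitted migration energy of 0.022 eV and an arbitrary prefactor, purely to illustrate the procedure.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def arrhenius_fit(temps, diffusivities):
    """Fit D = D0 * exp(-Em / (kB * T)) by linear regression of
    ln D against 1/T; returns (D0, Em) with Em in eV."""
    x = 1.0 / np.asarray(temps, float)
    y = np.log(np.asarray(diffusivities, float))
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), -slope * K_B

# Synthetic diffusion data with Em = 0.022 eV and an arbitrary prefactor.
T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
D = 1e-7 * np.exp(-0.022 / (K_B * T))
d0, em = arrhenius_fit(T, D)
```

On noise-free data the fit recovers Em = 0.022 eV and the prefactor exactly; with simulation data the scatter of ln D about the line indicates how well a single activation energy describes the migration.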
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Adaptation of GEANT4 to Monte Carlo dose calculations based on CT data.
Jiang, H; Paganetti, H
2004-10-01
The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. However, as a general purpose Monte Carlo code, not all the functions were specifically designed and fully optimized for applications in radiation therapy. One of the primary issues is the computational efficiency, which is especially critical when patient CT data have to be imported into the simulation model. In this paper we summarize the relevant aspects of the GEANT4 tracking and geometry algorithms and introduce our work on using the code to conduct dose calculations based on CT data. The emphasis is focused on modifications of the GEANT4 source code to meet the requirements for fast dose calculations. The major features include a quick voxel search algorithm, fast volume optimization, and the dynamic assignment of material density. These features are ready to be used for tracking the primary types of particles employed in radiation therapy such as photons, electrons, and heavy charged particles. Recalculation of a proton therapy treatment plan generated by a commercial treatment planning program for a paranasal sinus case is presented as an example. PMID:15543788
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
NASA Astrophysics Data System (ADS)
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional software. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
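The client-side computation such a viewer performs amounts to summing the deposited Gaussian hills on a grid and negating the result to estimate the free energy. A minimal 1-D sketch (with made-up hill data rather than a real HILLS file) is:

```python
import numpy as np

def bias_potential(grid, centers, sigmas, heights):
    """Metadynamics bias on a 1-D collective-variable grid: the sum of
    deposited Gaussian hills (centers, widths, heights, as stored per
    deposition step in a HILLS file). The free energy estimate is the
    negative of this bias."""
    g = grid[:, None]
    return (heights * np.exp(-(g - centers) ** 2 / (2 * sigmas ** 2))).sum(axis=1)

# Toy hill data: hills piled up around cv = 1.0, i.e. a free energy minimum.
grid = np.linspace(-2, 4, 121)
centers = np.array([0.8, 1.0, 1.2])
sigmas = np.full(3, 0.3)
heights = np.full(3, 1.2)  # e.g. kJ/mol
fes = -bias_potential(grid, centers, sigmas, heights)
free_energy_difference = fes.max() - fes.min()
```

Because the hills were deposited around cv = 1.0, the estimated free energy surface has its minimum there; free energy differences between basins are read off as differences between local minima of `fes`.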
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media
NASA Astrophysics Data System (ADS)
Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.
2008-07-01
In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
Pribadi, Sugeng; Afnimar; Puspito, Nanang T.; Ibrahim, Gunawan
2014-03-24
This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M0), the moment magnitude (MW), the rupture duration (To) and the focal mechanism; together these distinguish tsunamigenic earthquakes from tsunami earthquakes. We calculate these parameters from teleseismic signals, processing the initial P-wave phase with a 0.001−5 Hz bandpass filter, using 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake (MW = 7.8) and the 17 July 2006 Pangandaran earthquake (MW = 7.7) meet the criteria for tsunami earthquakes, with Θ = −6.1, long rupture durations (To > 100 s) and high tsunamis (H > 7 m). The 2 September 2009 Tasikmalaya earthquake (MW = 7.2, Θ = −5.1, To = 27 s) is characterized as a small tsunamigenic earthquake.
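The discriminants used above reduce to elementary formulas. The Mw relation below is the standard Hanks–Kanamori form; the Θ cutoff used for classification is an assumption for illustration (slow tsunami earthquakes show anomalously low Θ), not a threshold stated in the abstract:

```python
import math

def theta(radiated_energy_j, seismic_moment_nm):
    """Energy-to-moment ratio Θ = log10(E / M0); low values flag slow ruptures."""
    return math.log10(radiated_energy_j / seismic_moment_nm)

def moment_magnitude(m0_nm):
    """Mw from seismic moment M0 in N·m (Hanks-Kanamori relation)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def is_tsunami_earthquake(th, rupture_duration_s, threshold=-5.5):
    """Heuristic: energy-deficient (assumed cutoff) AND slow (To > 100 s)."""
    return th <= threshold and rupture_duration_s > 100.0
```

With the abstract's values, Θ = −6.1 with To > 100 s classifies as a tsunami earthquake, while Θ = −5.1 with To = 27 s does not.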
Direct calculation of correlation length based on quasi-cumulant method
NASA Astrophysics Data System (ADS)
Fukushima, Noboru
2014-03-01
We formulate a method of obtaining a correlation length directly, as a high-temperature series, without full calculation of correlation functions. The method is based on the quasi-cumulant approach formulated by the author in J. Stat. Phys. 111, 1049-1090 (2003) as a complement to the high-temperature series expansion, originally for an SU(n) Heisenberg model; following our recent reformulation, it is applicable to general spin models. A correlation function divided by its lowest-order nonzero contribution behaves very much like a generating function of a kind of moments, which we call quasi-moments. The corresponding quasi-cumulants can also be derived, and their generating function is related to the correlation length. Applications to other numerical methods, such as the quantum Monte Carlo method, are also discussed. JSPS KAKENHI Grant Number 25914008.
Langevin spin dynamics based on ab initio calculations: numerical schemes and applications.
Rózsa, L; Udvardi, L; Szunyogh, L
2014-05-28
A method is proposed to study the finite-temperature behaviour of small magnetic clusters based on solving the stochastic Landau-Lifshitz-Gilbert equations, where the effective magnetic field is calculated directly during the solution of the dynamical equations from first principles instead of relying on an effective spin Hamiltonian. Different numerical solvers are discussed in the case of a one-dimensional Heisenberg chain with nearest-neighbour interactions. We performed detailed investigations for a monatomic chain of ten Co atoms on top of a Au(0 0 1) surface. We found a spiral-like ground state of the spins due to Dzyaloshinsky-Moriya interactions, while the finite-temperature magnetic behaviour of the system was well described by a nearest-neighbour Heisenberg model including easy-axis anisotropy. PMID:24806308
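The core of such a simulation is a stable integrator for the Landau-Lifshitz-Gilbert equation. Below is a minimal Heun predictor-corrector sketch for a single spin in a fixed effective field, covering only the deterministic part (the stochastic thermal field of the Langevin approach is omitted); the gyromagnetic ratio and damping constant are illustrative, not values from the paper:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def llg_rhs(s, b_eff, gamma=1.0, alpha=0.1):
    """Deterministic LLG right-hand side: precession -gamma s x B
    plus damping -alpha s x (s x B), for a unit spin s."""
    p = cross(s, b_eff)
    d = cross(s, p)
    return tuple(-gamma * pi - alpha * di for pi, di in zip(p, d))

def heun_step(s, b_eff, dt):
    """One Heun predictor-corrector step, then renormalize |s| = 1."""
    k1 = llg_rhs(s, b_eff)
    pred = tuple(si + dt * ki for si, ki in zip(s, k1))
    k2 = llg_rhs(pred, b_eff)
    new = tuple(si + 0.5 * dt * (a + c) for si, a, c in zip(s, k1, k2))
    norm = math.sqrt(sum(x * x for x in new))
    return tuple(x / norm for x in new)
```

Starting from a spin perpendicular to the field, the damping term relaxes it toward the field direction while the renormalization keeps the spin length fixed; the Heun scheme is a standard choice for the stochastic version because it converges in the Stratonovich sense.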
Angenendt, Knut; Johansson, Patrik
2011-06-23
The solvation of lithium salts in ionic liquids (ILs) leads to the creation of lithium-ion-carrying species quite different from those found in traditional nonaqueous lithium battery electrolytes. The most striking differences are that these species are composed only of ions and are, in general, negatively charged. In many IL-based electrolytes, the dominant species are triplets, and the charge, stability, and size of the triplets have a large impact on the total ionic conductivity, the lithium ion mobility, and the lithium ion delivery at the electrode. As an inherent advantage, the triplets can be altered by selecting lithium salts and ionic liquids with different anions. Thus, within certain limits, the lithium-ion-carrying species can even be tailored toward distinct properties important for battery applications. Here, we show by DFT calculations that the resulting charge-carrying species from combinations of ionic liquids and lithium salts, and also some resulting electrolyte properties, can be predicted. PMID:21591707
Wang, Guo-Xiang; Dong, Shuai; Hou, Jing-Min
2016-03-31
The lattice structures and topological properties of [Formula: see text] (X = C, Si, Ge, Sn, Pb) under hydrostatic strain have been investigated based on first-principles calculations. Among these materials, [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] are dynamically stable, with negative formation energy and no imaginary phonon frequency. We find that hydrostatic strain cannot induce a quantum phase transition between topologically trivial and nontrivial states for either [Formula: see text] or [Formula: see text], while for [Formula: see text] and [Formula: see text] tensile strain can play a unique role in tuning the band topology, leading to a topologically nontrivial state with Z2 invariants (1;111). Although the topological transition occurs above the Fermi level, the Fermi level can be tuned by applying an electrostatic gating voltage. PMID:26932939
NASA Astrophysics Data System (ADS)
Homma, H.; Murayama, T.
We investigate a chemical evolution model that simultaneously explains the chemical compositions and star formation histories (SFHs) of dwarf spheroidal galaxies (dSphs). Recently, wide-field imaging photometry and multi-object spectroscopy have provided large data sets. We therefore develop a chemical evolution model based on an SFH derived from photometric observations and estimate a metallicity distribution function (MDF) for comparison with spectroscopic observations. With this new model we calculate the chemical evolution of four dSphs (Fornax, Sculptor, Leo II, Sextans) and find that a delay time of 0.1 Gyr for type Ia SNe is too short to explain the observed [alpha/Fe] vs. [Fe/H] diagrams.
Fast GPU-based calculations in few-body quantum scattering
NASA Astrophysics Data System (ADS)
Pomerantsev, V. N.; Kukulin, V. I.; Rubtsova, O. A.; Sakhiev, S. K.
2016-07-01
A fundamentally new approach to solving few-particle (many-dimensional) quantum scattering problems is described. The approach is based on a complete discretization of the few-particle continuum and on massively parallel GPU computation of the integral kernels of the scattering equations. The discretization of the continuous spectrum of the few-particle Hamiltonian is realized by projecting all scattering operators and wave functions onto a stationary wave-packet basis. This projection replaces singular multidimensional integral equations with linear matrix equations having finite matrix elements. Different aspects of employing multithreaded GPU computing for fast calculation of the matrix kernel of the equation are studied in detail. As a result, a fully realistic three-body scattering problem above the break-up threshold is solved on an ordinary desktop PC with a GPU in a rather short computational time.
A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation
NASA Astrophysics Data System (ADS)
Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe
2015-07-01
The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
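The energy-accumulation pitfall described above is easy to reproduce: once a voxel's running float32 total is large, individual dose deposits smaller than half a unit in the last place are silently dropped. A minimal sketch, simulating single-precision rounding with the standard library (the magnitudes are illustrative, not values from bGPUMCD):

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def accumulate_f32(total, deposits):
    """Energy accumulation as a float32 running sum (the failure mode)."""
    for d in deposits:
        total = f32(total + f32(d))
    return total

def accumulate_f64(total, deposits):
    """The same accumulation in double precision (one standard remedy)."""
    for d in deposits:
        total += d
    return total
```

At a total of 1e7 the float32 spacing is 1.0, so every 0.25 deposit rounds away and the voxel dose never grows; double-precision accumulation (or compensated summation) retains the deposits.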
Inertial sensor-based stride parameter calculation from gait sequences in geriatric patients.
Rampp, Alexander; Barth, Jens; Schülein, Samuel; Gaßmann, Karl-Günter; Klucken, Jochen; Eskofier, Björn M
2015-04-01
A detailed, quantitative gait analysis can provide evidence of various gait impairments in elderly people. To provide an objective basis for clinical decision-making, simple, applicable tests analyzing a high number of strides are required. A mobile gait analysis system mounted on shoes can fulfill these requirements. This paper presents a method for computing clinically relevant temporal and spatial gait parameters. To this end, an accelerometer and a gyroscope were positioned laterally below each ankle joint. Temporal gait events were detected by searching for characteristic features in the signals. To calculate stride length, the gravity-compensated accelerometer signal was double-integrated, and sensor drift was modeled using a piecewise-defined linear function. The presented method was validated against GAITRite-based gait parameters from 101 patients (average age 82.1 years). Subjects performed a normal walking test with and without a wheeled walker. Stride length and stride time showed correlations of 0.93 and 0.95, respectively, between the two systems. The absolute error of stride length was 6.26 cm in the normal walking test. Both the developed system and the GAITRite showed an increased stride length when a four-wheeled walker was used as a walking aid. However, the walking aid interfered with the automated analysis of the GAITRite system, but not with the inertial sensor-based approach. In summary, the algorithm for calculating clinically relevant gait parameters from inertial sensors is applicable in the diagnostic workup and also during long-term monitoring in the elderly population. PMID:25389237
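The double-integration-with-drift-correction idea can be sketched compactly. This is a simplified illustration, not the authors' algorithm: it uses a single linear ramp per stride (a zero-velocity update at stance) rather than their full piecewise-defined drift model, and assumes the forward acceleration is already gravity-compensated:

```python
def integrate(samples, dt):
    """Cumulative trapezoidal integration of a uniformly sampled signal."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

def stride_length(forward_accel, dt):
    """Double-integrate acceleration over one stride; sensor drift is removed
    by subtracting a linear ramp so velocity returns to zero at stride end."""
    vel = integrate(forward_accel, dt)
    n = len(vel) - 1
    drift = vel[-1]                       # residual velocity = accumulated drift
    vel = [v - drift * i / n for i, v in enumerate(vel)]
    return integrate(vel, dt)[-1]
```

Because a constant accelerometer bias produces a velocity drift that is exactly linear in time, the ramp subtraction makes the computed stride length insensitive to such a bias.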
A cultural study of a science classroom and graphing calculator-based technology
NASA Astrophysics Data System (ADS)
Casey, Dennis Alan
Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
Palmiotti, Giuseppe
2015-05-01
In this work, the implementation of a collision-history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji
2015-09-01
We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state. PMID:26465580
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL from which it originated. The dose for each PSL in water was pre-computed, so the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights so as to minimize the difference between the calculated and measured doses. Symmetry and smoothness regularizations were utilized to uniquely determine the solution, and an augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
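The commissioning step reduces to fitting the PSL weights so that the weighted sum of precomputed per-PSL doses matches measurement. Below is an illustrative simplification: plain gradient descent on a least-squares objective with a smoothness penalty between neighboring weights (the paper uses symmetry regularization and an augmented Lagrangian solver, neither of which is reproduced here):

```python
def commission_weights(psl_doses, measured, iters=500, lr=0.1, smooth=0.01):
    """Fit weights w so that sum_j w[j] * psl_doses[j][i] matches measured[i].
    Gradient descent on least squares plus a smoothness penalty between
    neighboring PSL weights; weights are clamped non-negative each iteration."""
    n = len(psl_doses)
    w = [1.0] * n
    for _ in range(iters):
        residual = [sum(w[j] * psl_doses[j][i] for j in range(n)) - m
                    for i, m in enumerate(measured)]
        for j in range(n):
            grad = sum(r * psl_doses[j][i] for i, r in enumerate(residual))
            if j > 0:
                grad += smooth * (w[j] - w[j - 1])
            if j < n - 1:
                grad += smooth * (w[j] - w[j + 1])
            w[j] -= lr * grad
        w = [max(x, 0.0) for x in w]
    return w
```

With orthogonal per-PSL dose vectors the fit recovers the measurement-matching weights, slightly pulled together by the smoothness term.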
Code of Federal Regulations, 2010 CFR
2010-10-01
42 Public Health — Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data sources. The methodology for determining the per-treatment base rate under the ESRD prospective payment...
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion may have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived with correct hydrogen content, complemented by oxygen (all elements other than hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images, and the results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers rises to 5% in hard bone but remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans show no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed; their distribution was centered on zero with a standard deviation below 2% (3σ). In contrast, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is thus fully achievable using tissue density and hydrogen content derived from the CT images.
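The elemental mixture rule underlying the argument, and the hydrogen-plus-oxygen surrogate scheme, can be sketched directly. The mass attenuation coefficients below are approximate values near 1 MeV (in cm²/g), quoted for illustration only; the point is that at megavoltage energies all elements except hydrogen have nearly the same coefficient, so only the hydrogen fraction matters:

```python
def mixture_coefficient(weights, coeffs):
    """Elemental mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i,
    with w_i the elemental mass fractions (must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    return sum(w * coeffs[el] for el, w in weights.items())

def hydrogen_oxygen_surrogate(weights):
    """The paper's simplification: keep the true hydrogen mass fraction,
    replace every other element by oxygen."""
    h = weights.get('H', 0.0)
    return {'H': h, 'O': 1.0 - h}
```

For a soft-tissue-like composition the surrogate reproduces the full mixture coefficient to well under 1%, mirroring the paper's finding in the Compton-dominated regime.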
GIS supported calculations of (137)Cs deposition in Sweden based on precipitation data.
Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats
2006-09-15
It is of interest to know the spatial variation and the amount of 137Cs, e.g., in the case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly 137Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of 137Cs deposition density per unit precipitation at three reference sites and quarterly precipitation at 62 weather stations distributed over Sweden were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF 137Cs deposition density were noted in north-eastern and eastern Sweden and the highest levels in western Sweden. Furthermore, the deposition density of 137Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl 137Cs in western Sweden were found in the western parts of the area along the coast and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with measured values and with other studies. PMID:16647743
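The deposition model reduces to multiplying precipitation by a measured deposition density per unit precipitation, with optional decay correction to a reference date. A minimal sketch (the 137Cs half-life is the standard literature value; variable names and the quarterly time-stepping are our illustrative choices):

```python
import math

CS137_HALF_LIFE_Y = 30.17  # years, approximate literature value

def decay_corrected_deposition(quarterly_precip_mm, k_per_mm,
                               target_year, start_year=1962.0):
    """Sum quarterly deposition (k x precipitation), each quarter decay-corrected
    from its deposition date to target_year."""
    lam = math.log(2.0) / CS137_HALF_LIFE_Y
    total = 0.0
    for q, precip in enumerate(quarterly_precip_mm):
        t_dep = start_year + q / 4.0
        total += k_per_mm * precip * math.exp(-lam * (target_year - t_dep))
    return total
```

Setting the target date to the deposition date gives the integrated (total) deposition; a later target date gives the cumulative decay-corrected value.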
He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao
2016-01-01
The rapid development of the coastal economy in Hebei Province has caused a rapid transition in coastal land use structure, which threatens land ecological security. Calculating the ecosystem service value of land use and exploring the ecological security baseline can therefore provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on ecosystem service value and the food safety standard. The results showed that per-unit-area ecosystem service values, from highest to lowest, were in the order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline-alkaline land, construction land. The contribution rates of the ecological function values, from high to low, were nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. By 2081, ecological security will reach the bottom line and the human-centred ecological system will be on the verge of collapse. According to ecological security status, Huanghua can be divided into 4 zones: ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone. PMID:27228612
Gräff, Ingo; Goldschmidt, Bernd; Glien, Procula; Klockner, Sophia; Erdfelder, Felix; Schiefer, Jennifer Lynn; Grigutsch, Daniel
2016-01-01
Background: To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Material and Methods: Staff requirement calculations were performed using state-of-the-art procedures that take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Results: Patients classified in the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, orange MTS category patients (n = 118) required nursing staff for 85.07 min and patients in the yellow MTS category (n = 181) for 40.95 min, while the two least acute MTS categories, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min of engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of total working hours. Extrapolating this to 21,899 emergency patients (2010), one nurse can see 67-123 emergency patients per month (50th-95th percentile). The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Conclusion: Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments. PMID:27138492
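The staffing arithmetic described above reduces to demand minutes divided by the effective supply minutes of one full-time equivalent, where the supply side is deflated by the shortfall rate. A minimal sketch; the example inputs are hypothetical, not the study's percentile data:

```python
def required_fte(engagement_min_per_patient, patients_per_year,
                 work_min_per_fte_year, shortfall_rate):
    """Full-time equivalents needed: total demanded engagement minutes divided
    by the minutes one FTE actually works after sick/vacation shortfall."""
    demand = engagement_min_per_patient * patients_per_year
    effective_supply = work_min_per_fte_year * (1.0 - shortfall_rate)
    return demand / effective_supply
```

With the study's shortfall rate of 20.87%, the requirement is inflated by a factor 1/(1 − 0.2087) ≈ 1.26 relative to a naive calculation that ignores absences.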
Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M. N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas
2014-01-01
Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros, a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received 192Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance (MR)-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed: with and without the solid model applicator, with and without overriding the patient contour to 1 g/cc muscle, and with and without overriding contrast materials to muscle or 2.25 g/cc bone. The impact of source and boundary modeling, the applicator, tissue heterogeneities, and the sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. TG-43 and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, ICRU report 38 rectal and bladder points, three and nine o'clock, and D2cc to the bladder, rectum, and sigmoid. Results: Points A and B, D2cc bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary modeling and the applicator account for most of the differences between the GBBS and TG-43. The D2cc rectum (n = 3), D2cc sigmoid (n = 1), and ICRU rectum (n = 6) had differences > 5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. Conclusions: The GBBS has minimal impact on clinical parameters for this cohort of GYN patients with unshielded applicators. The incorrect mapping of rectal and balloon contrast does not have a significant impact on clinical parameters.
Aguilar, Boris
2015-01-01
by doubling of the solute dielectric constant. However, the use of the higher interior dielectric does not eliminate the large individual deviations between pairwise interactions computed within the two DB definitions. It is argued that while the MS-based definition of the dielectric boundary is more physically correct in some types of practical calculations, the choice is less clear in some other common scenarios. PMID:26236064
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-02-15
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with {sup 125}I, {sup 103}Pd, or {sup 131}Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours; dose volume histograms for the lens and tumor; maximum, minimum, and average doses to structures of interest; and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model
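The dose volume histograms used for such comparisons are typically cumulative: each point gives the fraction of a structure's volume receiving at least a given dose. A minimal sketch, illustrative only and not the BrachyDose implementation, assuming equal-volume voxels:

```python
def cumulative_dvh(voxel_doses, thresholds):
    """Fraction of structure volume receiving at least each dose threshold.

    voxel_doses: dose in each voxel of the structure (equal-volume voxels).
    thresholds: dose levels at which to evaluate the cumulative DVH.
    """
    n = len(voxel_doses)
    # For each threshold, count voxels at or above it and normalize by volume
    return [sum(d >= t for d in voxel_doses) / n for t in thresholds]
```

For example, a structure with voxel doses [1, 2, 3, 4] Gy has 50% of its volume at or above 2.5 Gy.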
An anatomically realistic lung model for Monte Carlo-based dose calculations
Liang Liang; Larsen, Edward W.; Chetty, Indrin J.
2007-03-15
Treatment planning for disease sites with large variations of electron density in neighboring tissues requires an accurate description of the geometry. This self-evident statement is especially true for the lung, a highly complex organ whose structures span sizes from about 10{sup -4} to 1 cm. In treatment planning, the lung is commonly modeled by a voxelized geometry obtained using computed tomography (CT) data at various resolutions. The simplest such model, which is often used for QA and validation work, is the atomic mix or mean density model, in which the entire lung is homogenized and given a mean (volume-averaged) density. The purpose of this paper is (i) to describe a new heterogeneous random lung model, which is based on morphological data of the human lung, and (ii) to use this model to assess the differences in dose calculations between an actual lung (as represented by our model) and a mean density (homogenized) lung. Eventually, we plan to use the random lung model to assess the accuracy of CT-based treatment plans of the lung. For this paper, we have used Monte Carlo methods to make accurate comparisons between dose calculations for the random lung model and the mean density model. For four realizations of the random lung model, we used a single photon beam, with two different energies (6 and 18 MV) and four field sizes (1x1, 5x5, 10x10, and 20x20 cm{sup 2}). We found a maximum difference of 34% of D{sub max} with the 1x1, 18 MV beam along the central axis (CAX). A ''shadow'' region distal to the lung, with dose reduction up to 7% of D{sub max}, exists for the same realization. The dose perturbations decrease for larger field sizes, but the magnitude of the differences in the shadow region is nearly independent of the field size. We also observe that, compared to the mean density model, the random structures inside the heterogeneous lung can alter the shape of the isodose lines, leading to a broadening or shrinking of the
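The qualitative difference between a heterogeneous medium and its homogenized (atomic-mix) counterpart can be shown with a toy attenuation calculation: by Jensen's inequality, transmission averaged over random realizations of a binary medium exceeds transmission through the mean-density medium. The coefficients, voxel counts, and volume fraction below are arbitrary illustrative values, not those of the paper's morphology-based model.

```python
import math
import random

random.seed(0)

# Hypothetical binary lung-like medium (illustrative values only)
MU_TISSUE = 1.0   # linear attenuation coefficient, tissue-like voxels (1/cm)
MU_AIR = 0.01     # linear attenuation coefficient, air-like voxels (1/cm)
P_TISSUE = 0.3    # volume fraction of tissue-like voxels
N_VOXELS = 50     # voxels along one ray
DX = 0.1          # voxel thickness (cm)

def ray_transmission():
    """Photon transmission along one ray through a random binary voxel sequence."""
    tau = sum((MU_TISSUE if random.random() < P_TISSUE else MU_AIR) * DX
              for _ in range(N_VOXELS))
    return math.exp(-tau)

# Ensemble average over many realizations of the heterogeneous medium
n_rays = 20000
mean_transmission = sum(ray_transmission() for _ in range(n_rays)) / n_rays

# Atomic-mix (mean density) model: a single homogenized coefficient
mu_bar = P_TISSUE * MU_TISSUE + (1 - P_TISSUE) * MU_AIR
homogenized_transmission = math.exp(-mu_bar * N_VOXELS * DX)
```

The ensemble-averaged transmission comes out larger than the homogenized one, which is one reason atomic-mix dose calculations can deviate from heterogeneous ones.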
Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.
Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark
2013-05-21
Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including in hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double-layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double-insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally
NASA Astrophysics Data System (ADS)
Lauridsen, Bente; Hedemann Jensen, Per
1987-03-01
The basic dosimetric quantity in ICRP publication no. 30 is the absorbed fraction AF(T←S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T←S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T←S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells. The same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dosimeters in a matrix so the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented.
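In ICRP-30 terms, SEE(T←S) folds each emission's yield per decay, energy, and quality factor together with its absorbed fraction and the target organ mass. A hedged sketch of that bookkeeping; the function and variable names are illustrative, not from the publication:

```python
def specific_effective_energy(emissions, absorbed_fractions, target_mass_g):
    """SEE(T<-S): effective energy (MeV/g) deposited in target organ T
    per nuclear transformation in source organ S.

    emissions: list of (yield_per_decay, energy_MeV, quality_factor)
               tuples, one per radiation type emitted by the nuclide.
    absorbed_fractions: AF(T<-S) for each emission, in the same order.
    target_mass_g: mass of the target organ in grams.
    """
    return sum(y * e * q * af
               for (y, e, q), af in zip(emissions, absorbed_fractions)
               ) / target_mass_g
```

For a single 0.5 MeV photon per decay with Q = 1 and AF = 0.2 into a 100 g organ, this gives 0.001 MeV/g per transformation.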
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high number of operations that spaceborne processing systems must carry out, new methodologies and techniques are being presented as good alternatives to offload work from the main processor and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same FPGA model used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that negatively affects overall performance. To avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections, and Data Burst Transfers have been used.
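The operation being accelerated is simple to state in software: the Euclidean distance between two pixels taken across all spectral bands. A reference sketch for clarity only; the paper's contribution is the FPGA architecture, not this formula:

```python
import math

def spectral_euclidean_distance(pixel_a, pixel_b):
    """Euclidean distance between two multi-spectral pixels.

    Each pixel is a sequence of per-band intensity values; the distance
    is the square root of the sum of squared per-band differences.
    """
    if len(pixel_a) != len(pixel_b):
        raise ValueError("pixels must have the same number of bands")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pixel_a, pixel_b)))
```

A hardware pipeline computes the same subtract-square-accumulate chain in parallel per band, which is why offloading it to the FPGA pays off when the embedded-processor link is not the bottleneck.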
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
Toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
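A minimal Monte Carlo sketch of the two-Gaussian picture, with made-up parameter values (the μ, σ, and mass number here are illustrative assumptions, not the ones fitted in the paper): pick one of the two peaks, sample a fragment mass from its Gaussian, and let nucleon conservation fix the partner fragment.

```python
import random

random.seed(1)

# Hypothetical Gaussian parameters (illustrative, not the paper's values)
MU_L, SIGMA_L = 95.0, 5.0    # light-fragment peak: mean and deviation
MU_R, SIGMA_R = 140.0, 5.0   # heavy-fragment peak: mean and deviation
A_TOTAL = 235                 # mass number of the fissioning toy nucleus

def sample_fission_pair():
    """Draw one fission event: sample a fragment mass from either the
    light or heavy Gaussian peak, then conserve the total nucleon number."""
    if random.random() < 0.5:
        a_light = round(random.gauss(MU_L, SIGMA_L))
    else:
        a_light = A_TOTAL - round(random.gauss(MU_R, SIGMA_R))
    return a_light, A_TOTAL - a_light

# Accumulate many events and look at the average light-fragment mass
events = [sample_fission_pair() for _ in range(10000)]
avg_light = sum(min(pair) for pair in events) / len(events)
```

Widening σ or shifting μ in this sketch spreads or moves the sampled yield distribution, mirroring the parameter sensitivity described in the abstract.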
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors, which enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm, which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm. PMID:22254744
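Two-sensor mouse odometry can be sketched with a simple differential model: the two sensors sit a fixed baseline apart on the rigid body, the difference of their forward displacements gives the rotation increment, and their average gives the translation. The baseline value and the small-angle update below are illustrative assumptions, not the ArmAssist calibration:

```python
import math

SENSOR_BASELINE_MM = 100.0  # hypothetical distance between the two mouse sensors

def odometry_step(dx_left, dy_left, dx_right, dy_right, x, y, theta):
    """Update robot pose (x, y in mm, theta in rad) from the body-frame
    displacements reported by two optical mouse sensors mounted a fixed
    baseline apart (small-angle differential model)."""
    # Rotation: difference of forward displacements across the baseline
    dtheta = (dy_right - dy_left) / SENSOR_BASELINE_MM
    # Translation: average sensor displacement, rotated into the world
    # frame using the current heading
    dx_body = (dx_left + dx_right) / 2.0
    dy_body = (dy_left + dy_right) / 2.0
    x += dx_body * math.cos(theta) - dy_body * math.sin(theta)
    y += dx_body * math.sin(theta) + dy_body * math.cos(theta)
    return x, y, theta + dtheta
```

Because the heading enters every translation update, an absolute orientation fix (here, from the OSR algorithm) keeps the accumulated odometry error bounded.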