Science.gov

Sample records for all-electron calculations based

  1. Fundamental High-Pressure Calibration from All-Electron Quantum Monte Carlo Calculations

    SciTech Connect

    Esler, K. P.; Cohen, R. E.; Militzer, B.; Kim, Jeongnim; Needs, R. J.; Towler, M. D.

    2010-05-07

    We develop an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and use it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We compute the static contribution to the free energy with the QMC method and obtain the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We compute the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy.
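    At its core, a primary calibration like the one described maps a measured compression directly to pressure through a fitted equation of state. The sketch below uses the Vinet EOS form with made-up, roughly cBN-like parameters; these are illustrative assumptions, not the paper's actual QMC fit.

```python
import math

def vinet_pressure(v_over_v0, b0, b0_prime):
    """Vinet equation of state: pressure (in the units of b0) at
    compression V/V0, given the zero-pressure bulk modulus b0 and
    its pressure derivative b0_prime."""
    x = v_over_v0 ** (1.0 / 3.0)
    return 3.0 * b0 * (1.0 - x) / x ** 2 * math.exp(1.5 * (b0_prime - 1.0) * (1.0 - x))

# Illustrative, roughly cBN-like parameters (GPa); NOT the paper's fit.
p = vinet_pressure(0.80, 380.0, 3.6)  # pressure at 20% compression
```

    In an experiment, one would invert this relation: the lattice volume measured by x-ray diffraction gives V/V0, and the calibrated EOS returns the pressure without extrapolation.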

  2. Calculation of all-electron wavefunction of hemoprotein cytochrome c by density functional theory

    NASA Astrophysics Data System (ADS)

    Sato, Fumitoshi; Yoshihiro, Tamotsu; Era, Makoto; Kashiwagi, Hiroshi

    2001-06-01

    An all-electron wavefunction of horse heart d6 low-spin ferrocytochrome c (ferrocyt. c) was calculated with our Gaussian-based density functional theory (DFT) molecular orbital (MO) program, ProteinDF, on a workstation cluster. It may be the first full-scale DFT calculation of a metalloprotein; the numbers of orbitals and auxiliary functions are 9600 and 17,578, respectively. We show that the highest occupied MO (HOMO) derives from the 3d orbitals of the heme Fe and is unexpectedly delocalized while preserving its essential atomic character, which opens room for consideration of electron-transfer processes between proteins. The potential of MO calculations on larger proteins is also discussed using the computational data for cytochrome c (cyt. c).

  3. Toward Accurate Modelling of Enzymatic Reactions: All Electron Quantum Chemical Analysis combined with QM/MM Calculation of Chorismate Mutase

    SciTech Connect

    Ishida, Toyokazu

    2008-09-17

    To further understand the catalytic role of the protein environment in the enzymatic process, the author analyzed the reaction mechanism of the Claisen rearrangement catalyzed by Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modeling, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition-state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.

  4. Large-scale All-electron Density Functional Theory Calculations using Enriched Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kanungo, Bikash; Gavini, Vikram

    We present a computationally efficient method to perform large-scale all-electron density functional theory calculations by enriching the Lagrange polynomial basis in classical finite element (FE) discretization with atom-centered numerical basis functions, which are obtained from the solutions of the Kohn-Sham (KS) problem for single atoms. We term these atom-centered numerical basis functions enrichment functions. The integrals involved in the construction of the discrete KS Hamiltonian and overlap matrix are computed using an adaptive quadrature grid based on gradients in the enrichment functions. Further, we propose an efficient scheme to invert the overlap matrix by exploiting its LDL factorization and employing spectral finite elements along with Gauss-Lobatto quadrature rules. Finally, we use a Chebyshev polynomial based acceleration technique to compute the occupied eigenspace in each self-consistent iteration. We demonstrate the accuracy, efficiency, and scalability of the proposed method on various metallic and insulating benchmark systems, with system sizes on the order of 10,000 electrons. We observe a 50-100 fold reduction in overall computational time compared to classical FE calculations while maintaining the desired chemical accuracy. We acknowledge the support of NSF (Grant No. 1053145) and ARO (Grant No. W911NF-15-1-0158) in conducting this work.
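    The Chebyshev acceleration step mentioned above can be sketched compactly. The following is a generic NumPy illustration of polynomial filtering on a small dense matrix, not the authors' finite-element implementation: eigencomponents inside a damped interval [a, b] are suppressed by the Chebyshev recurrence, so applying the filter steers a trial block toward the low-energy (occupied) eigenspace.

```python
import numpy as np

def chebyshev_filter(h, x, degree, a, b):
    """Apply a degree-m Chebyshev filter to the block x: eigenvalues
    of h inside [a, b] are damped, while components below a are
    amplified. Uses the standard three-term Chebyshev recurrence on
    the linearly mapped operator (h - c) / e."""
    e = (b - a) / 2.0  # half-width of the damped interval
    c = (b + a) / 2.0  # its center
    y = (h @ x - c * x) / e  # degree-1 term
    for _ in range(2, degree + 1):
        y_new = 2.0 * (h @ y - c * y) / e - x
        x, y = y, y_new
    return y

# Toy usage: push a random block toward the low eigenspace of a
# small symmetric matrix (seeded for reproducibility).
rng = np.random.default_rng(0)
h = rng.standard_normal((50, 50))
h = (h + h.T) / 2.0
x = rng.standard_normal((50, 4))
y = chebyshev_filter(h, x, degree=8, a=0.0, b=np.linalg.eigvalsh(h)[-1])
```

    In a self-consistent cycle this replaces an explicit diagonalization: a few filtered iterations per SCF step keep the subspace locked onto the occupied states.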

  5. Norm-conserving pseudopotentials with chemical accuracy compared to all-electron calculations

    NASA Astrophysics Data System (ADS)

    Willand, Alex; Kvashnin, Yaroslav O.; Genovese, Luigi; Vázquez-Mayagoitia, Álvaro; Deb, Arpan Krishna; Sadeghi, Ali; Deutsch, Thierry; Goedecker, Stefan

    2013-03-01

    By adding a nonlinear core correction to the well established dual space Gaussian type pseudopotentials for the chemical elements up to the third period, we construct improved pseudopotentials for the Perdew-Burke-Ernzerhof [J. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996), 10.1103/PhysRevLett.77.3865] functional and demonstrate that they exhibit excellent accuracy. Our benchmarks for the G2-1 test set show average atomization energy errors of only half a kcal/mol. The pseudopotentials also remain highly reliable for high pressure phases of crystalline solids. When supplemented by empirical dispersion corrections [S. Grimme, J. Comput. Chem. 27, 1787 (2006), 10.1002/jcc.20495; S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010), 10.1063/1.3382344] the average error in the interaction energy between molecules is also about half a kcal/mol. The accuracy that can be obtained with these pseudopotentials in combination with a systematic basis set is far superior to the accuracy obtainable with commonly used medium-size Gaussian basis sets in all-electron calculations.

  6. Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations

    SciTech Connect

    Webster, R.; Harrison, N. M.; Bernasconi, L.

    2015-06-07

    We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange admixed in the ground state functional cHF and show that there exists one value of cHF (~0.32) that reproduces at least semi-quantitatively the optical gap of this material.

  8. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    SciTech Connect

    Havu, V.; Blum, V.; Havu, P.; Scheffler, M.

    2009-12-01

    We consider the problem of developing O(N)-scaling grid-based operations needed in many central steps of electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
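    For a concrete picture of the top-down idea, here is a generic recursive-bisection sketch (an assumption for illustration, not one of the specific partitioning schemes benchmarked in the paper): the integration points are split along the widest coordinate until every batch is small enough to process locally.

```python
import numpy as np

def make_batches(points, max_size):
    """Top-down grid partitioning: recursively bisect a point cloud
    along its widest coordinate axis until every batch holds at most
    max_size points. Returns a list of point arrays (batches)."""
    if len(points) <= max_size:
        return [points]
    spans = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(spans))              # widest direction
    order = np.argsort(points[:, axis])       # sort along that axis
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    return make_batches(left, max_size) + make_batches(right, max_size)

rng = np.random.default_rng(1)
pts = rng.random((1000, 3))                   # toy integration grid
batches = make_batches(pts, max_size=100)
```

    Each batch is spatially compact, so only the few basis functions that are nonzero over it need to be evaluated there; this locality is what makes the grid operations O(N).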

  9. Potential energy curves of Li2+ from all-electron EA-EOM-CCSD calculations

    NASA Astrophysics Data System (ADS)

    Musiał, Monika; Medrek, Magdalena; Kucharski, Stanisław A.

    2015-10-01

    The electron attachment (EA) equation-of-motion coupled-cluster theory provides a description of the states obtained by the attachment of an electron to the reference system. If the reference is assumed to be a doubly ionised cation, then the EA results relate to the singly ionised ion. In the current work, the above scheme is applied to the calculation of the potential energy curves (PECs) of the Li2+ cation, adopting the doubly ionised Li2^2+ structure as the reference system. The advantage of this computational strategy relies on the fact that the closed-shell Li2^2+ reference dissociates into closed-shell fragments (Li2^2+ ⇒ Li+ + Li+), hence the RHF (restricted Hartree-Fock) function can be used as the reference over the whole range of interatomic distances. This scheme offers a first-principles method, without any model or effective-potential parameters, for the description of bond-breaking processes. In this study, the PECs and selected spectroscopic constants for 18 electronic states of the Li2+ ion were computed and compared with experimental and other theoretical results. †In honour of Professor Sourav Pal on the occasion of an anniversary in his private and scientific life.

  10. All-electron GW+Bethe-Salpeter calculations on small molecules

    NASA Astrophysics Data System (ADS)

    Hirose, Daichi; Noguchi, Yoshifumi; Sugino, Osamu

    2015-05-01

    Accuracy of the first-principles GW+Bethe-Salpeter equation (BSE) method is examined for low-energy excited states of small molecules. The standard formalism, which is based on the one-shot GW approximation and the Tamm-Dancoff approximation (TDA), is found to underestimate the optical gap of N2, CO, H2O, C2H4, and CH2O by about 1 eV. Possible origins are investigated separately for the effect of TDA and for the approximate schemes of the self-energy operator, which are known to cause overbinding of the electron-hole pair and overscreening of the interaction. By applying the known correction formula, we find the amount of the correction is too small to overcome the underestimated excitation energy. This result indicates a need for fundamental revision of the GW+BSE method rather than adjustment of the standard one. We expect that this study makes the problems in the current GW+BSE formalism clearer and provides useful information for further intrinsic development beyond the current framework.

  11. All-electron double zeta basis sets for the lanthanides: Application in atomic and molecular property calculations

    NASA Astrophysics Data System (ADS)

    Jorge, F. E.; Martins, L. S. C.; Franco, M. L.

    2016-01-01

    Segmented all-electron basis sets of valence double zeta quality plus polarization functions (DZP) for the elements from Ce to Lu are generated to be used with the non-relativistic and Douglas-Kroll-Hess (DKH) Hamiltonians. At the B3LYP level, the DZP-DKH atomic ionization energies and equilibrium bond lengths and atomization energies of the lanthanide trifluorides are evaluated and compared with benchmark theoretical and experimental data reported in the literature. In general, this compact set shows regular, efficient, and reliable performance. It can be particularly useful in molecular property calculations that require explicit treatment of the core electrons.

  12. Validity of virial theorem in all-electron mixed basis density functional, Hartree-Fock, and GW calculations.

    PubMed

    Kuwahara, Riichi; Tadokoro, Yoichi; Ohno, Kaoru

    2014-08-28

    In this paper, we calculate kinetic and potential energy contributions to the electronic ground-state total energy of several isolated atoms (He, Be, Ne, Mg, Ar, and Ca) by using the local density approximation (LDA) in density functional theory, the Hartree-Fock approximation (HFA), and the self-consistent GW approximation (GWA). To this end, we have implemented self-consistent HFA and GWA routines in our all-electron mixed basis code, TOMBO. We confirm that the virial theorem is fairly well satisfied in all of these approximations, although the resulting eigenvalue of the highest occupied molecular orbital level, i.e., the negative of the ionization potential, is in excellent agreement only in the case of the GWA. We find that the wave function of the lowest unoccupied molecular orbital level of noble gas atoms is a resonating virtual bound state, and that the GWA wave function spreads wider than that of the LDA and narrower than that of the HFA.
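    The virial theorem being tested here states that, for Coulombic systems at a variational optimum, the expectation values satisfy <V> = -2<T>. A minimal self-contained check (a textbook one-electron example, unrelated to the TOMBO code): for a hydrogenic trial orbital exp(-ζr), the energy E(ζ) = ζ²/2 - ζ is minimized at ζ = 1, where the virial ratio holds exactly.

```python
def slater_energy(zeta):
    """Hydrogen-atom expectation values (hartree) for the trial
    orbital exp(-zeta * r): kinetic, potential, and total energy."""
    t = 0.5 * zeta ** 2   # <T> for the Slater 1s orbital
    v = -zeta             # <V> (nuclear attraction)
    return t, v, t + v

# Scan for the variational minimum; at the optimum zeta = 1 the
# virial ratio -<V>/<T> equals 2 and E = -0.5 hartree.
zetas = [i / 1000.0 for i in range(500, 1501)]
zeta_opt = min(zetas, key=lambda z: slater_energy(z)[2])
t, v, e = slater_energy(zeta_opt)
```

    Away from the optimum the ratio deviates from 2, which is why the degree to which -<V>/<T> = 2 holds is a useful internal consistency check on a self-consistent calculation.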

  13. All-electron first-principles GW+Bethe-Salpeter calculation for optical absorption spectra of sodium clusters

    SciTech Connect

    Noguchi, Yoshifumi; Ohno, Kaoru

    2010-04-15

    The optical absorption spectra of sodium clusters (Na2n, n ≤ 4) are calculated by using an all-electron first-principles GW+Bethe-Salpeter method with the mixed-basis approach within the Tamm-Dancoff approximation. In these small systems, the excitonic effect strongly affects the optical properties due to the confinement of the exciton within the small system size. The present state-of-the-art method treats the electron-hole two-particle Green's function by incorporating the ladder diagrams up to infinite order and therefore accounts for the excitonic effect in a good approximation. We check the accuracy of the present method by comparing the resulting spectra with experiments. In addition, the effect of delocalization, in particular in the lowest unoccupied molecular orbital of the GW quasiparticle wave function, is also discussed by rediagonalizing the Dyson equation.

  16. Real-space electronic structure calculations with full-potential all-electron precision for transition metals

    NASA Astrophysics Data System (ADS)

    Ono, Tomoya; Heide, Marcus; Atodiresei, Nicolae; Baumeister, Paul; Tsukamoto, Shigeru; Blügel, Stefan

    2010-11-01

    We have developed an efficient computational scheme utilizing the real-space finite-difference formalism and the projector augmented-wave (PAW) method to perform precise first-principles electronic-structure simulations based on the density-functional theory for systems containing transition metals with a modest computational effort. By combining the advantages of the time-saving double-grid technique and the Fourier-filtering procedure for the projectors of pseudopotentials, we can overcome the egg box effect in the computations even for first-row elements and transition metals, which is a problem of the real-space finite-difference formalism. In order to demonstrate the potential power in terms of precision and applicability of the present scheme, we have carried out simulations to examine several bulk properties and structural energy differences between different bulk phases of transition metals and have obtained excellent agreement with the results of other precise first-principles methods such as a plane-wave-based PAW method and an all-electron full-potential linearized augmented plane-wave (FLAPW) method.

  17. All-electron first principles calculations of the ground and some low-lying excited states of BaI.

    PubMed

    Miliordos, Evangelos; Papakondylis, Aristotle; Tsekouras, Athanasios A; Mavridis, Aristides

    2007-10-01

    The electronic structure of the heavy diatomic molecule BaI has been examined for the first time by ab initio multireference configuration interaction (MRCI) and coupled cluster (RCCSD(T)) methods. The effects of special relativity have been taken into account through the second-order Douglas-Kroll-Hess approximation. The construction of Omega (omega-omega) potential energy curves allows for the estimation of "experimental" dissociation energies (De) of the first few excited states by exploiting the accurately known experimental De value of the X²Σ⁺ ground state. All states examined are of ionic character, with a Mulliken charge transfer of 0.5 e− from Ba to I; this is reflected in large dipole moments ranging from 6 to 11 D. Despite the inherent difficulties of a heavy system like BaI, our results are encouraging. With the exception of bond distances, which on average are calculated 0.05 Å longer than the experimental ones, common spectroscopic parameters are in fair agreement with experiment, whereas De values are on average 10 kcal/mol smaller. PMID:17850123

  18. All-electron molecular Dirac-Hartree-Fock calculations: Properties of the group IV monoxides GeO, SnO and PbO

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.

    1991-01-01

    Dirac-Hartree-Fock calculations have been carried out on the ground states of the group IV monoxides GeO, SnO and PbO. Geometries, dipole moments and infrared data are presented. For comparison, nonrelativistic, first-order perturbation and relativistic effective core potential calculations have also been carried out. Where appropriate the results are compared with the experimental data and previous calculations. Spin-orbit effects are of great importance for PbO, where first-order perturbation theory including only the mass-velocity and Darwin terms is inadequate to predict the relativistic corrections to the properties. The relativistic effective core potential results show a larger deviation from the all-electron values than for the hydrides, and confirm the conclusions drawn on the basis of the hydride calculations.

  19. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  20. Two-component relativistic density-functional calculations of the dimers of the halogens from bromine through element 117 using effective core potential and all-electron methods.

    PubMed

    Mitin, Alexander V; van Wüllen, Christoph

    2006-02-14

    A two-component quasirelativistic Hamiltonian based on spin-dependent effective core potentials is used to calculate ionization energies and electron affinities of the heavy halogen atom bromine through the superheavy element 117 (eka-astatine) as well as spectroscopic constants of the homonuclear dimers of these atoms. We describe a two-component Hartree-Fock and density-functional program that treats spin-orbit coupling self-consistently within the orbital optimization procedure. A comparison with results from high-order Douglas-Kroll calculations (for the superheavy systems, also with zeroth-order regular approximation and four-component Dirac results) demonstrates the validity of the pseudopotential approximation. The density-functional (but not the Hartree-Fock) results show very satisfactory agreement with theoretical coupled cluster as well as experimental data where available, such that the theoretical results can serve as an estimate for the hitherto unknown properties of astatine, element 117, and their dimers. PMID:16483205

  1. Electronic structure and physical properties of the spinel-type phase of BeP2N4 from all-electron density functional calculations

    SciTech Connect

    Ching, W. Y.; Aryal, Sitram; Rulis, Paul; Schnick, Wolfgang

    2011-04-15

    Using density-functional-theory-based ab initio methods, the electronic structure and physical properties of the newly synthesized nitride BeP2N4 with a phenakite-type structure and the predicted high-pressure spinel phase of BeP2N4 are studied in detail. It is shown that both polymorphs are wide band-gap semiconductors with relatively small electron effective masses at the conduction-band minima. The spinel-type phase is more covalently bonded due to the increased number of P-N bonds for P at the octahedral sites. Calculations of mechanical properties indicate that the spinel-type polymorph is a promising superhard material with notably large bulk, shear, and Young's moduli. Also calculated are the Be K, P K, P L3, and N K edges of the electron energy-loss near-edge structure for both phases. They show marked differences because of the different local environments of the atoms in the two crystalline polymorphs. These differences will be very useful for the experimental identification of the products of high-pressure syntheses targeting the predicted spinel-type phase of BeP2N4.
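    The bulk, shear, and Young's moduli quoted above are linked, for an isotropic polycrystalline average, by the standard elasticity relation E = 9BG / (3B + G). A tiny hedged helper with illustrative inputs (these numbers are placeholders, not the paper's computed values for BeP2N4):

```python
def youngs_modulus(b, g):
    """Isotropic-average Young's modulus E from bulk modulus b and
    shear modulus g (all in the same units, e.g. GPa):
    E = 9*B*G / (3*B + G)."""
    return 9.0 * b * g / (3.0 * b + g)

# Placeholder moduli in GPa, chosen only to exercise the formula.
e = youngs_modulus(300.0, 200.0)
```

    In practice B and G come from the computed elastic tensor (Voigt-Reuss-Hill averaging), and E then follows without a separate calculation.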

  2. Singlet-triplet energy splitting between ¹D and ³D (1s²2s nd), n = 3, 4, 5, and 6, Rydberg states of the beryllium atom (⁹Be) calculated with all-electron explicitly correlated Gaussian functions

    NASA Astrophysics Data System (ADS)

    Sharkey, Keeper L.; Bubin, Sergiy; Adamowicz, Ludwik

    2014-11-01

    Accurate variational nonrelativistic quantum-mechanical calculations are performed for the five lowest ¹D and four lowest ³D states of the ⁹Be isotope of the beryllium atom. All-electron explicitly correlated Gaussian (ECG) functions are used in the calculations, and their nonlinear parameters are optimized with the aid of the analytical energy gradient determined with respect to these parameters. The effect of the finite nuclear mass is directly included in the Hamiltonian used in the calculations. The singlet-triplet energy gaps between the corresponding ¹D and ³D states are reported.

  3. A wavelet-based Projector Augmented-Wave (PAW) method: Reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    NASA Astrophysics Data System (ADS)

    Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.

    2016-11-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation in prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged and special boundary condition systems with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  4. X-ray absorption spectra of hexagonal ice and liquid water by all-electron Gaussian and augmented plane wave calculations.

    PubMed

    Iannuzzi, Marcella

    2008-05-28

    Full potential x-ray spectroscopy simulations of hexagonal ice and liquid water are performed by means of the newly implemented methodology based on the Gaussian augmented plane waves formalism. The computed spectra, obtained within the supercell approach, are compared to experimental data. The variations of the spectral distribution determined by the quality of the basis set, the size of the sample, and the choice of the core-hole potential are extensively discussed. The second part of this work focuses on understanding the connections between specific configurations of the hydrogen bond network and the corresponding contributions to the x-ray absorption spectrum in liquid water. Our results confirm that asymmetrically coordinated molecules, in particular those donating only one or no hydrogen bond, are associated with well identified spectral signatures that differ significantly from the ice spectral profile. However, transient local structures, with half-formed hydrogen bonds, may still give rise to spectra with dominant postedge contributions and relatively weaker oscillator strengths at lower energy. This explains why, by averaging the spectra over all the O atoms of instantaneous liquid configurations extracted from ab initio molecular dynamics trajectories, the spectral features indicating the presence of weak or broken hydrogen bonds turn out to be attenuated and sometimes not clearly distinguishable.

  6. How localized is "local"? Efficiency vs. accuracy of O(N) domain decomposition in local orbital based all-electron electronic structure theory

    NASA Astrophysics Data System (ADS)

    Havu, Vile; Blum, Volker; Scheffler, Matthias

    2007-03-01

    Numeric atom-centered local orbitals (NAOs) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [alanine-based polypeptide chains (Ala)n], and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius rc, and exploited by appropriately shaped integration domains (subsets of integration points). Conventional tight rc <= 3 Å have no measurable accuracy impact in (Ala)n, but introduce inaccuracies of 20-30 meV/atom in Cun. The domain shape impacts the computational effort by only 10-20% for reasonable rc. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).

  7. An algorithm for nonrelativistic quantum-mechanical finite-nuclear-mass variational calculations of nitrogen atom in L = 0, M = 0 states using all-electrons explicitly correlated Gaussian basis functions

    SciTech Connect

    Sharkey, Keeper L.; Adamowicz, Ludwik

    2014-05-07

    An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in calculations of the ground {sup 4}S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined.

  8. Frozen core potential scheme with a relativistic electronic Hamiltonian: Theoretical connection between the model potential and all-electron treatments

    NASA Astrophysics Data System (ADS)

    Seino, Junji; Tarumi, Moto; Nakai, Hiromi

    2014-01-01

    This Letter proposes an accurate scheme using frozen core orbitals, called the frozen core potential (FCP) method, to theoretically connect model potential calculations to all-electron (AE) ones. The present scheme is based on the Huzinaga-Cantu equation combined with spin-free relativistic Douglas-Kroll-Hess Hamiltonians. The local unitary transformation scheme for efficiently constructing the Hamiltonian produces a seamless extension to the FCP method in a relativistic framework. Numerical applications to coinage diatomic molecules illustrate the high accuracy of this FCP method, as compared to AE calculations. Furthermore, the efficiency of the FCP method is also confirmed by these calculations.

  9. Data base to compare calculations and observations

    SciTech Connect

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)

  10. Upper Subcritical Calculations Based on Correlated Data

    SciTech Connect

    Sobes, Vladimir; Rearden, Bradley T; Mueller, Don; Marshall, William BJ J; Scaglione, John M; Dunn, Michael E

    2015-01-01

    The American National Standards Institute and American Nuclear Society standard for Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations defines the upper subcritical limit (USL) as “a limit on the calculated k-effective value established to ensure that conditions calculated to be subcritical will actually be subcritical.” Often, USL calculations are based on statistical techniques that infer information about a nuclear system of interest from a set of known/well-characterized similar systems. The work in this paper is part of an active area of research to investigate the way traditional trending analysis is used in the nuclear industry, and in particular, the research is assessing the impact of the underlying assumption that the experimental data being analyzed for USL calculations are statistically independent. In contrast, the multiple experiments typically used for USL calculations can be correlated because they are often performed at the same facilities using the same materials and measurement techniques. This paper addresses this issue by providing a set of statistical inference methods to calculate the bias and bias uncertainty based on the underlying assumption that the experimental data are correlated. Methods to quantify these correlations are the subject of a companion paper and will not be discussed here. The newly proposed USL methodology is based on the assumption that the integral experiments selected for use in the establishment of the USL are sufficiently applicable and that experimental correlations are known. Under the assumption of uncorrelated data, the new methods collapse directly to familiar USL equations currently used. We will demonstrate our proposed methods on real data and compare them to calculations of currently used methods such as USLSTATS and NUREG/CR-6698. Lastly, we will also demonstrate the effect experiment correlations can have on USL calculations.
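    The move from independent to correlated benchmark data described above can be made concrete with a generalized-least-squares estimate of the mean bias. The sketch below is illustrative only, with hypothetical numbers; it is not the USLSTATS or NUREG/CR-6698 procedure, and `gls_bias` is an assumed helper name:

    ```python
    import numpy as np

    def gls_bias(k_eff, sigma):
        """Generalized-least-squares mean and variance of calculated k-eff
        values whose experimental uncertainties are correlated.

        sigma is the full covariance matrix of the benchmarks; for a
        diagonal sigma this collapses to the familiar inverse-variance
        weighted mean used under the independence assumption."""
        k_eff = np.asarray(k_eff, dtype=float)
        ones = np.ones(len(k_eff))
        w = np.linalg.solve(sigma, ones)   # Sigma^-1 * 1
        mean = w @ k_eff / (w @ ones)      # (1' Sigma^-1 k) / (1' Sigma^-1 1)
        var = 1.0 / (w @ ones)             # variance of the GLS mean
        return mean, var
    ```

    With off-diagonal terms in `sigma`, the effective number of independent benchmarks drops and `var` grows, which is the kind of effect on the bias uncertainty that the paper sets out to quantify.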

  11. Numerical inductance calculations based on first principles.

    PubMed

    Shatz, Lisa F; Christensen, Craig W

    2014-01-01

    A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
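    As a rough illustration of the first-principles approach the abstract describes (fundamental formulas evaluated by brute-force numerical integration, with no discretization of the inductor), the sketch below evaluates the Neumann double integral for the mutual inductance of two coaxial circular loops. The symmetry reduction is standard, but the function name and parameters are our own, and this is not the authors' Mathematica code:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

    def mutual_inductance(r1, r2, d, n=2000):
        """Mutual inductance of two coaxial circular loops of radii r1, r2
        separated by axial distance d, via the Neumann double integral.

        By symmetry the integrand depends only on phi = phi1 - phi2, so the
        double integral reduces to a single periodic integral times 2*pi."""
        phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        num = r1 * r2 * np.cos(phi)                        # dl1 . dl2 factor
        den = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(phi) + d**2)
        # rectangle rule on a periodic grid (spectrally accurate here)
        integral = 2.0 * np.pi * (num / den).sum() * (2.0 * np.pi / n)
        return MU0 / (4.0 * np.pi) * integral
    ```

    For two small loops far apart the result approaches the magnetic-dipole limit mu0*pi*r1^2*r2^2/(2*d^3), which makes a convenient sanity check.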

  12. Label-free all-electronic biosensing in microfluidic systems

    NASA Astrophysics Data System (ADS)

    Stanton, Michael A.

    Label-free, all-electronic detection techniques offer great promise for advancements in medical and biological analysis. Electrical sensing can be used to measure both interfacial and bulk impedance changes in conducting solutions. Electronic sensors produced using standard microfabrication processes are easily integrated into microfluidic systems. Combined with the sensitivity of radiofrequency electrical measurements, this approach offers significant advantages over competing biological sensing methods. Scalable fabrication methods also provide a means of bypassing the prohibitive costs and infrastructure associated with current technologies. We describe the design, development and use of a radiofrequency reflectometer integrated into a microfluidic system towards the specific detection of biologically relevant materials. We developed a detection protocol based on impedimetric changes caused by the binding of antibody/antigen pairs to the sensing region. Here we report the surface chemistry that forms the necessary capture mechanism. Gold-thiol binding was utilized to create an ordered alkane monolayer on the sensor surface. Exposed functional groups target the N-terminus, affixing a protein to the monolayer. The general applicability of this method lends itself to a wide variety of proteins. To demonstrate specificity, commercially available mouse anti-Streptococcus pneumoniae monoclonal antibody was used to target the full-length recombinant pneumococcal surface protein A, type 2 strain D39, expressed by Streptococcus pneumoniae. We demonstrate the RF response of the sensor to both the presence of the surface decoration and bound SPn cells in a 1x phosphate buffered saline solution. The combined microfluidic sensor represents a powerful platform for the analysis and detection of cells and biomolecules.

  13. GPU-based fast gamma index calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jia, Xun; Jiang, Steve B.

    2011-03-01

    The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of γ-index requires an exhaustive search of the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, γ-index calculation time on CPU is proportional to the summation of γ^n over all voxels, while that on GPU is affected by the γ^n distribution and is approximately proportional to the γ^n summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, but a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time.
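    The exhaustive search that makes the γ-index expensive is easy to see in brute-force form. The minimal 1D sketch below (our own simplification, not the geometric or pre-sorting algorithm of the paper) computes, for each reference point, the minimum generalized distance over all evaluated points:

    ```python
    import numpy as np

    def gamma_index_1d(dose_ref, dose_eval, dx, dose_crit, dist_crit):
        """Brute-force 1D gamma index on a uniform grid of spacing dx.

        For each reference point, search every evaluated point for the
        minimum of sqrt((dose diff / dose_crit)^2 + (distance / dist_crit)^2)."""
        n = len(dose_ref)
        x = np.arange(n) * dx
        gamma = np.empty(n)
        for i in range(n):
            dd = (dose_eval - dose_ref[i]) / dose_crit   # normalized dose diff
            dr = (x - x[i]) / dist_crit                  # normalized distance
            gamma[i] = np.sqrt(dd**2 + dr**2).min()
        return gamma
    ```

    Each reference point scans all evaluated points, so the cost grows quadratically with resolution per dimension; the pre-sorting and GPU strategies in the paper attack exactly this search.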

  14. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations, based on low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
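    The flavor of the low-rank idea can be shown with a truncated higher-order SVD of a 3D array: a separable (rank-1) "orbital" sampled on an n^3 grid compresses to three length-n factors plus a tiny core. This is a generic HOSVD sketch, not the authors' solver or their specific tensor format:

    ```python
    import numpy as np

    def mode_product(t, u, mode):
        """Multiply tensor t by matrix u along the given mode."""
        t = np.moveaxis(t, mode, 0)
        shape = t.shape
        t = (u @ t.reshape(shape[0], -1)).reshape((u.shape[0],) + shape[1:])
        return np.moveaxis(t, 0, mode)

    def hosvd(t, ranks):
        """Truncated higher-order SVD: factor matrices come from the SVD of
        each mode unfolding; the core is t projected onto the factors."""
        factors = []
        for mode in range(t.ndim):
            unf = np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
            u, _, _ = np.linalg.svd(unf, full_matrices=False)
            factors.append(u[:, :ranks[mode]])
        core = t
        for mode, u in enumerate(factors):
            core = mode_product(core, u.T, mode)
        return core, factors

    def hosvd_reconstruct(core, factors):
        t = core
        for mode, u in enumerate(factors):
            t = mode_product(t, u, mode)
        return t
    ```

    For a 32^3 grid a rank-1 representation stores roughly 3·32 numbers instead of 32768; savings of this kind are what make grids as fine as 8192^3 tractable.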

  15. GPU-based calculations in digital holography

    NASA Astrophysics Data System (ADS)

    Madrigal, R.; Acebal, P.; Blaya, S.; Carretero, L.; Fimia, A.; Serrano, F.

    2013-05-01

    In this work we apply GPUs (Graphics Processing Units) with the CUDA environment to scientific calculations, specifically high-cost computations in the field of digital holography. For this, we have studied three typical problems in digital holography: Fourier transforms, Fresnel reconstruction of the hologram, and the calculation of the vectorial diffraction integral. In all cases the runtime at different image sizes and the corresponding accuracy were compared to those obtained with traditional computing systems. The programs were run on a computer with a last-generation graphics card, an Nvidia GTX 680, which is optimized for integer calculations. As a result, a large reduction in runtime has been obtained, which allows a significant improvement: concretely, 15-fold shorter times for Fresnel approximation calculations and 600-fold for the vectorial diffraction integral. These initial results open the possibility of applying such calculations to real-time digital holography.

  16. Rapid Bacterial Detection via an All-Electronic CMOS Biosensor

    PubMed Central

    Nikkhoo, Nasim; Cumby, Nichole; Gulak, P. Glenn; Maxwell, Karen L.

    2016-01-01

    The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185

  17. Broadband all-electronically tunable MEMS terahertz quantum cascade lasers.

    PubMed

    Han, Ningren; de Geofroy, Alexander; Burghoff, David P; Chan, Chun Wang I; Lee, Alan Wei Min; Reno, John L; Hu, Qing

    2014-06-15

    In this work, we demonstrate all-electronically tunable terahertz quantum cascade lasers (THz QCLs) with MEMS tuner structures. A two-stage MEMS tuner device is fabricated by a commercial open-foundry process performed by the company MEMSCAP. This provides an inexpensive, rapid, and reliable approach for MEMS tuner fabrication for THz QCLs with a high-precision alignment scheme. In order to electronically actuate the MEMS tuner device, an open-loop cryogenic piezo nanopositioning stage is integrated with the device chip. Our experimental result shows that at least 240 GHz of single-mode continuous electronic tuning can be achieved in cryogenic environments (∼4  K) without mode hopping. This provides an important step toward realizing turn-key bench-top tunable THz coherent sources for spectroscopic and coherent tomography applications.

  18. Rapid Bacterial Detection via an All-Electronic CMOS Biosensor.

    PubMed

    Nikkhoo, Nasim; Cumby, Nichole; Gulak, P Glenn; Maxwell, Karen L

    2016-01-01

    The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185

  19. Spreadsheet Based Scaling Calculations and Membrane Performance

    SciTech Connect

    Wolfe, T D; Bourcier, W L; Speth, T F

    2000-12-28

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO{sub 4}{times}2H{sub 2}O), BaSO{sub 4}, SrSO{sub 4}, SiO{sub 2}, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product ''Q.'' The effective ion product is then compared to temperature adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI) for each solid of
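    The saturation-index step described above reduces to comparing an effective ion product Q against a temperature-adjusted solubility product. A minimal sketch with hypothetical activity values (no speciation or iterative complexation, unlike TFSP; `saturation_index` is our own helper name):

    ```python
    import math

    def saturation_index(ion_activities, ksp):
        """SI = log10(Q / Ksp), with Q the product of the ion activities.

        SI > 0: supersaturated (scaling likely); SI < 0: undersaturated."""
        q = math.prod(ion_activities)
        return math.log10(q / ksp)
    ```

    In a full scaling calculation Q is built from the iteratively computed activities of free ions after complexes are accounted for; the comparison against Ksp is the same.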

  20. Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.

    PubMed

    Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F

    2016-10-01

    Predicting NMR properties is a valuable tool to assist experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable for calculating the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt ligands. The new basis sets, identified as NMR-DKH, were partially contracted as a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose NMR signals range from -1000 to -6000 ppm. Furthermore, the models were validated on a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a non-relativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt complexes), and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results are comparable with relativistic DFT calculation, 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc. PMID:27510431

  1. Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.

    PubMed

    Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F

    2016-10-01

    Predicting NMR properties is a valuable tool to assist experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable for calculating the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt ligands. The new basis sets, identified as NMR-DKH, were partially contracted as a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose NMR signals range from -1000 to -6000 ppm. Furthermore, the models were validated on a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a non-relativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt complexes), and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results are comparable with relativistic DFT calculation, 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc.

  2. Proton dose calculation based on in-air fluence measurements.

    PubMed

    Schaffner, Barbara

    2008-03-21

    Proton dose calculation algorithms--as well as photon and electron algorithms--are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and, to a lesser extent, the design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques. PMID:18367787

  3. A basic insight to FEM_based temperature distribution calculation

    NASA Astrophysics Data System (ADS)

    Purwaningsih, A.; Khairina

    2012-06-01

    A manual for finite element method (FEM)-based temperature distribution calculation has been produced. The code is written in Visual Basic and runs under Windows. The calculation of temperature distribution based on FEM has three steps, namely preprocessing, processing, and postprocessing. Therefore, three manuals are produced: a preprocessor manual to prepare the data, a processor manual to solve the problem, and a postprocessor manual to display the result. In these manuals, every step of the general procedure is described in detail. These manuals are expected to make the calculation of temperature distributions better understood and easier.
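    The three-step structure (preprocess, solve, postprocess) is visible in even the smallest FEM example. The sketch below solves steady 1D heat conduction -k u'' = q with linear elements and fixed end temperatures; it is a generic illustration in Python, not the Visual Basic code the manual documents:

    ```python
    import numpy as np

    def fem_heat_1d(n_el, length, k, q):
        """Linear-element FEM for -k u'' = q on [0, length], u = 0 at both
        ends. Returns node coordinates and nodal temperatures."""
        n = n_el + 1
        h = length / n_el
        # preprocess: element stiffness and load for uniform linear elements
        ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fe = q * h / 2.0 * np.ones(2)
        # assemble the global system
        K = np.zeros((n, n))
        F = np.zeros(n)
        for e in range(n_el):
            K[e:e + 2, e:e + 2] += ke
            F[e:e + 2] += fe
        # process: solve on interior nodes (Dirichlet u = 0 at both ends)
        u = np.zeros(n)
        u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
        # postprocess: return nodal values for plotting/inspection
        return np.linspace(0.0, length, n), u
    ```

    For constant k and q the linear-element solution is nodally exact, matching u(x) = q x (L - x) / (2k).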

  4. All-electron Kohn–Sham density functional theory on hierarchic finite element spaces

    SciTech Connect

    Schauer, Volker; Linder, Christian

    2013-10-01

    In this work, a real space formulation of the Kohn–Sham equations is developed, making use of a hierarchy of finite element spaces of different polynomial order. The focus is on all-electron calculations, which place the highest requirements on the basis set, which must be able to represent the orthogonal eigenfunctions as well as the electrostatic potential. A careful numerical analysis is performed, which points out the numerical intricacies originating from the singularity of the nuclei and the necessity for approximations in the numerical setting, with the ambition to enable solutions within a predefined accuracy. In this context the influence of counter-charges in the Poisson equation, the requirement of a finite domain size, numerical quadratures, and mesh refinement are examined, as well as the representation of the electrostatic potential in a high-order finite element space. The performance and accuracy of the method are demonstrated in computations on noble gases. In addition, the finite element basis proves its flexibility in the calculation of the bond length as well as the dipole moment of the carbon monoxide molecule.

  5. Cluster size dependence of double ionization energy spectra of spin-polarized aluminum and sodium clusters: All-electron spin-polarized GW+T -matrix method

    NASA Astrophysics Data System (ADS)

    Noguchi, Yoshifumi; Ohno, Kaoru; Solovyev, Igor; Sasaki, Taizo

    2010-04-01

    The double ionization energy (DIE) spectra are calculated for spin-polarized aluminum and sodium clusters by means of the all-electron spin-polarized GW+T-matrix method based on many-body perturbation theory. Our method, using the one- and two-particle Green’s functions, enables us to determine the whole spectrum at once in a single calculation. The smaller the cluster, the larger the difference between the minimal double ionization energy and twice the ionization potential, because the strong Coulomb repulsion between two holes becomes dominant in a small confined geometry. Due to Pauli’s exclusion principle, the parallel-spin DIE is close to or smaller than the antiparallel-spin DIE, except for Na4, which has well-separated highest and second-highest occupied molecular-orbital levels as calculated by the spin-dependent GW calculation. In this paper, we compare the results calculated for aluminum and sodium clusters and discuss the spin-polarized effect and the cluster size dependence of the resulting spectra in detail.

  6. Software-Based Visual Loan Calculator For Banking Industry

    NASA Astrophysics Data System (ADS)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    industry is very necessary in the modern-day banking system, using many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET programming was done and implemented, and the software proved satisfactory.

  7. Electronic coupling calculation and pathway analysis of electron transfer reaction using ab initio fragment-based method. I. FMO-LCMO approach

    NASA Astrophysics Data System (ADS)

    Nishioka, Hirotaka; Ando, Koji

    2011-05-01

    By making use of an ab initio fragment-based electronic structure method, the fragment molecular orbital-linear combination of MOs of the fragments (FMO-LCMO) method developed by Tsuneyuki et al. [Chem. Phys. Lett. 476, 104 (2009)], 10.1016/j.cplett.2009.05.069, we propose a novel approach to describe long-distance electron transfer (ET) in large systems. The FMO-LCMO method produces the one-electron Hamiltonian of the whole system using the output of the FMO calculation, with computational cost much lower than conventional all-electron calculations. Diagonalizing the FMO-LCMO Hamiltonian matrix, the molecular orbitals (MOs) of the whole system can be described as LCMOs. In our approach, the electronic coupling TDA of ET is calculated from the energy splitting of the frontier MOs of the whole system, or by a perturbation method in terms of the FMO-LCMO Hamiltonian matrix. Moreover, taking into account only the valence MOs of the fragments, we can considerably reduce the computational cost of evaluating TDA. Our approach was tested on four different kinds of model ET systems, with non-covalent stacks of methane, non-covalent stacks of benzene, trans-alkanes, and alanine polypeptides as their bridge molecules, respectively. It reproduced reasonable TDA in all cases compared to the reference all-electron calculations. Furthermore, the tunneling pathway at fragment-based resolution was obtained from the tunneling current method with the FMO-LCMO Hamiltonian matrix.
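    The energy-splitting route to the coupling TDA mentioned in the abstract is easiest to see for a symmetric two-site model: the coupling is half the splitting of the two frontier eigenvalues. The numbers below are hypothetical, and the 2×2 Hamiltonian is a toy stand-in for the FMO-LCMO matrix:

    ```python
    import numpy as np

    def coupling_from_splitting(h):
        """|T_DA| from the splitting of the two frontier eigenvalues of an
        effective two-site (donor/acceptor) Hamiltonian."""
        e = np.linalg.eigvalsh(h)
        return 0.5 * abs(e[1] - e[0])

    # symmetric donor/acceptor: site energies -5.0 eV, direct coupling 0.02 eV
    h = np.array([[-5.0, 0.02],
                  [0.02, -5.0]])
    ```

    For the symmetric case the eigenvalues are the site energy plus or minus the coupling, so the half-splitting recovers the off-diagonal element exactly; for asymmetric sites the perturbative route the abstract mentions is used instead.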

  8. Calculation of electromagnetic parameter based on interpolation algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-11-01

    Wave-absorbing material is an important functional material for electromagnetic protection. Its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two different interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the interpolated electromagnetic parameters is on the whole consistent with that obtained through experiment.
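    The comparison in the abstract hinges on the fact that Hermite interpolation matches derivative data where Lagrange matches values only. A self-contained sketch of both (generic textbook formulas, not the authors' fitting code; the sampled function and node placement below are our own choices):

    ```python
    import numpy as np

    def lagrange_eval(xs, ys, x):
        """Evaluate the Lagrange interpolating polynomial through (xs, ys)."""
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            li = np.ones_like(total)
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial
            total += yi * li
        return total

    def hermite_eval(xs, ys, dys, x):
        """Piecewise cubic Hermite interpolation using derivatives dys."""
        xs, ys, dys = (np.asarray(v, dtype=float) for v in (xs, ys, dys))
        x = np.asarray(x, dtype=float)
        idx = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
        x0, x1 = xs[idx], xs[idx + 1]
        h = x1 - x0
        t = (x - x0) / h
        # standard cubic Hermite basis functions
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return (h00 * ys[idx] + h10 * h * dys[idx]
                + h01 * ys[idx + 1] + h11 * h * dys[idx + 1])
    ```

    With derivative information available at the measured frequency points, the piecewise Hermite form also avoids the oscillations that a single high-degree Lagrange polynomial can develop over many nodes.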

  9. Electronic Structure Calculations of delta-Pu Based Alloys

    SciTech Connect

    Landa, A; Soderlind, P; Ruban, A

    2003-11-13

    First-principles methods are employed to study the ground-state properties of {delta}-Pu-based alloys. The calculations show that an alloy component larger than {delta}-Pu has a stabilizing effect. Detailed calculations have been performed for the {delta}-Pu{sub 1-c}Am{sub c} system. The calculated density of Pu-Am alloys agrees well with the experimental data. The paramagnetic {yields} antiferromagnetic transition temperature (T{sub c}) of {delta}-Pu{sub 1-c}Am{sub c} alloys is calculated by a Monte-Carlo technique. By introducing Am into the system, one can lower T{sub c} from 548 K (pure Pu) to 372 K (Pu{sub 70}Am{sub 30}). We also found that, contrary to pure Pu, where this transition destabilizes the {delta}-phase, the Pu{sub 3}Am compound remains stable in the antiferromagnetic phase, which correlates with the recent discovery of Curie-Weiss behavior in {delta}-Pu{sub 1-c}Am{sub c} at c {approx} 24 at. %.

  10. Calculating track-based observables for the LHC.

    PubMed

    Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J

    2013-09-01

    By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new nonperturbative objects called track functions which absorb infrared divergences. The track function Ti(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading order calculation of the total energy fraction of charged particles in e+ e-→ hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.

  11. Calculating Track-Based Observables for the LHC

    NASA Astrophysics Data System (ADS)

    Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J.

    2013-09-01

    By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new nonperturbative objects called track functions which absorb infrared divergences. The track function Ti(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading order calculation of the total energy fraction of charged particles in e+e- → hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.
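
    Numerically, a track function Ti(x) is just a normalized distribution of the charged energy fraction x per fragmenting parton. A toy sketch of how one would estimate it by histogramming (a Beta distribution is an assumed stand-in for the nonperturbative input a real extraction would take from data or a parton shower):

```python
import numpy as np

def toy_track_function(n_events=100_000, seed=2):
    """Histogram estimate of a track function T(x): the distribution of
    the fraction x of a parton's energy carried by charged hadrons.
    The Beta(4, 2.5) shape is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    x = rng.beta(4.0, 2.5, n_events)
    density, edges = np.histogram(x, bins=50, range=(0.0, 1.0), density=True)
    return density, edges
```

    T(x) is normalized so its integral over x in [0, 1] is unity; its first moment is the average charged energy fraction entering the next-to-leading order calculation described above.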

  12. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...

  13. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specified in 40 CFR 86.144 or 40 CFR part 1065, subpart G. (b) For composite emission calculations over... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and...

  14. All-electronic biosensing in microfluidics: bulk and surface impedance sensing

    NASA Astrophysics Data System (ADS)

    Fraikin, Jean-Luc

    All-electronic, impedance-based sensing techniques offer promising new routes for probing nanoscale biological processes. The ease with which electrical probes can be fabricated at the nanoscale and integrated into microfluidic systems, combined with the large bandwidth afforded by radiofrequency electrical measurement, gives electrical detection significant advantages over other sensing approaches. We have developed two microfluidic devices for impedance-based biosensing. The first is a novel radiofrequency (rf) field-effect transistor which uses the electrolytic Debye layer as its active element. We demonstrate control of the nm-thick Debye layer using an external gate voltage, with gate modulation at frequencies as high as 5 MHz. We use this sensor to make quantitative measurements of the electric double-layer capacitance, including determining and controlling the potential of zero charge of the electrodes, a quantity of importance for electrochemistry and impedance-based biosensing. The second device is a microfluidic analyzer for high-throughput, label-free measurement of nanoparticles suspended in a fluid. We demonstrate detection and volumetric analysis of individual synthetic nanoparticles (<100 nm dia.) with sufficient throughput to analyze >500,000 particles/second, and are able to distinguish subcomponents of a polydisperse particle mixture with diameters larger than about 30-40 nm. We also demonstrate the rapid (seconds) size and titer analysis of unlabeled bacteriophage T7 (55-65 nm dia.) in both salt solution and mouse blood plasma, using ~1 μL of analyte. Surprisingly, we find that the background of naturally occurring nanoparticles in plasma has a power-law size distribution. The scalable fabrication of these instruments and the simple electronics required for readout make them well-suited for practical applications.
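
    The volumetric sensitivity of such a resistive-pulse analyzer follows from the small-sphere (Maxwell) limit, in which an insulating particle of diameter d in a pore of diameter D changes the resistance by roughly 4ρd³/(πD⁴). This is a textbook scaling sketch, not the device's actual calibrated transfer function:

```python
import math

def pulse_height(d_particle_nm, d_pore_nm, resistivity_ohm_m=0.7):
    """Estimated resistance change (ohms) when an insulating sphere of
    diameter d transits an electrolyte-filled pore of diameter D:
    DeltaR ~ 4*rho*d^3 / (pi*D^4), the small-sphere Maxwell limit.
    The 0.7 ohm*m resistivity is an assumed electrolyte value."""
    d = d_particle_nm * 1e-9
    D = d_pore_nm * 1e-9
    return 4.0 * resistivity_ohm_m * d**3 / (math.pi * D**4)
```

    Because the pulse scales with the cube of the diameter, a 60 nm particle produces an 8x larger signal than a 30 nm one, which is why resolution degrades sharply below the 30-40 nm range quoted above.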

  15. Advancing QCD-based calculations of energy loss

    NASA Astrophysics Data System (ADS)

    Tywoniuk, Konrad

    2013-08-01

    We give a brief overview of the basics and current developments of QCD-based calculations of radiative processes in medium. We put an emphasis on the underlying physics concepts and discuss the theoretical uncertainties inherently associated with the fundamental parameters to be extracted from data. An important area of development is the study of the single-gluon emission in medium. Moreover, establishing the correct physical picture of multi-gluon emissions is imperative for comparison with data. We will report on progress made in both directions and discuss perspectives for the future.

  16. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.
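
    A minimal one-dimensional sketch of the sinc-filtering idea (illustrative grid spacing and cutoff, not the paper's radial/angular treatment of atomic functions): convolving grid samples with a truncated sinc kernel removes Fourier components above a chosen cutoff while passing the smooth part essentially unchanged.

```python
import numpy as np

def sinc_lowpass(f, h, k_c, half_width=50):
    """Convolve samples f (grid spacing h) with a truncated sinc kernel,
    the real-space form of an ideal low-pass filter with cutoff k_c."""
    n = np.arange(-half_width, half_width + 1)
    # h * sin(k_c * n * h) / (pi * n): discretized ideal low-pass response
    kernel = (k_c * h / np.pi) * np.sinc(k_c * h * n / np.pi)
    return np.convolve(f, kernel, mode="same")

# A smooth mode plus a fast mode that the filter should remove (h = 0.005):
t = np.arange(0.0, 5.0, 0.005)
f = np.sin(2 * np.pi * t) + np.sin(40 * np.pi * t)
smooth = sinc_lowpass(f, 0.005, k_c=10 * np.pi)
```

    Away from the boundaries, `smooth` closely follows sin(2πt): the 40π component lies above the cutoff k_c = 10π and is suppressed, just as the rapidly varying part of the density is suppressed between grid points.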

  17. Sensor Based Engine Life Calculation: A Probabilistic Perspective

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Chen, Philip

    2003-01-01

    It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.

  18. Wannier-based calculation of the orbital magnetization in crystals

    NASA Astrophysics Data System (ADS)

    Lopez, M. G.; Vanderbilt, David; Thonhauser, T.; Souza, Ivo

    2012-01-01

    We present a first-principles scheme that allows the orbital magnetization of a magnetic crystal to be evaluated accurately and efficiently even in the presence of complex Fermi surfaces. Starting from an initial electronic-structure calculation with a coarse ab initio k-point mesh, maximally localized Wannier functions are constructed and used to interpolate the necessary k-space quantities on a fine mesh, in parallel to a previously developed formalism for the anomalous Hall conductivity [X. Wang, J. Yates, I. Souza, and D. Vanderbilt, Phys. Rev. B 74, 195118 (2006)]. We formulate our new approach in a manifestly gauge-invariant manner, expressing the orbital magnetization in terms of traces over matrices in Wannier space. Since only a few (e.g., of the order of 20) Wannier functions are typically needed to describe the occupied and partially occupied bands, these Wannier matrices are small, which makes the interpolation itself very efficient. The method has been used to calculate the orbital magnetization of bcc Fe, hcp Co, and fcc Ni. Unlike an approximate calculation based on integrating orbital currents inside atomic spheres, our results nicely reproduce the experimentally measured ordering of the orbital magnetization in these three materials.

  19. Probabilistic Study Conducted on Sensor-Based Engine Life Calculation

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    2004-01-01

    Turbine engine life management is a very complicated process to ensure the safe operation of an engine subjected to complex usage. The challenge of life management is to find a reasonable compromise between the safe operation and the maximum usage of critical parts to reduce maintenance costs. The commonly used "cycle count" approach does not take the engine operation conditions into account, and it oversimplifies the calculation of the life usage. Because of the shortcomings, many engine components are regularly pulled for maintenance before their usable life is over. And, if an engine has been running regularly under more severe conditions, components might not be taken out of service before they exceed their designed risk of failure. The NASA Glenn Research Center and its industrial and academic partners have been using measurable parameters to improve engine life estimation. This study was based on the Monte Carlo simulation of 5000 typical flights under various operating conditions. First a closed-loop engine model was developed to simulate the engine operation across the mission profile and a thermomechanical fatigue (TMF) damage model was used to calculate the actual damage during takeoff, where the maximum TMF accumulates. Next, a Weibull distribution was used to estimate the implied probability of failure for a given accumulated cycle count. Monte Carlo simulations were then employed to find the profiles of the TMF damage under different operating assumptions including parameter uncertainties. Finally, probabilities of failure for different operating conditions were analyzed to demonstrate the importance of a sensor-based damage calculation in order to better manage the risk of failure and on-wing life.
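
    The pipeline described above can be compressed into a few lines (all distributions and constants below are invented placeholders, not the NASA Glenn engine or damage models): sample a takeoff-severity factor per flight, accumulate TMF damage, and map accumulated damage to an implied probability of failure through a Weibull distribution.

```python
import numpy as np

def weibull_failure_prob(damage, eta=1.0, beta=3.0):
    """Weibull CDF: implied probability of failure at a given
    accumulated damage (eta = scale, beta = shape; assumed values)."""
    return 1.0 - np.exp(-(damage / eta) ** beta)

def simulate_fleet(n_flights=5000, seed=1):
    """Monte Carlo over flights: per-flight damage grows steeply with a
    sampled operating-severity factor (placeholder severity^3 law)."""
    rng = np.random.default_rng(seed)
    severity = rng.normal(1.0, 0.15, n_flights)
    damage = 1e-4 * np.clip(severity, 0.0, None) ** 3
    return np.cumsum(damage)

cumulative = simulate_fleet()
risk = weibull_failure_prob(cumulative)
```

    A fixed cycle count would assign every flight the same damage increment; the sensor-informed spread in `severity` is exactly what shifts the risk profile and justifies pulling parts later (or earlier) than the cycle count alone would suggest.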

  20. Children Base Their Investment on Calculated Pay-Off

    PubMed Central

    Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard

    2012-01-01

    To investigate the rise of economic abilities during development, we studied children aged between 3 and 10 in an exchange situation requiring them to calculate their investment based on different offers. One experimenter gave back a reward twice the amount given by the children, and a second always gave back the same quantity regardless of the amount received. To maximize pay-offs, children had to invest a maximal amount with the first, and a minimal amount with the second. About one third of the 5-year-olds and most 7- and 10-year-olds were able to adjust their investment according to the partner, while all 3-year-olds failed. Such performances should be related to the rise of cognitive and social skills after 4 years. PMID:22413006
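
    The optimal policy the older children discovered can be written out directly (the item counts below are arbitrary illustrative values, not the stimuli used in the experiment):

```python
def net_payoff(invest, partner, flat_return=1):
    """Net gain for investing `invest` items: the doubling partner
    returns 2*invest; the flat partner returns a fixed amount
    (assumed 1 item here) regardless of the investment."""
    returned = 2 * invest if partner == "doubling" else flat_return
    return returned - invest

# Maximize the net pay-off over possible investments of 0..4 items:
best_with_doubler = max(range(5), key=lambda k: net_payoff(k, "doubling"))
best_with_flat = max(range(5), key=lambda k: net_payoff(k, "flat"))
```

    The pay-off is maximized by investing everything with the doubling partner (the net gain equals the stake) and nothing with the flat partner (every extra item invested is simply lost), which is the calculation the 7- and 10-year-olds performed.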

  1. All-electron segmented contraction basis sets of triple zeta valence quality for the fifth-row elements

    NASA Astrophysics Data System (ADS)

    Martins, L. S. C.; Jorge, F. E.; Machado, S. F.

    2015-11-01

    An all-electron contracted Gaussian basis set of triple zeta valence quality plus polarisation functions (TZP) for the elements Cs, Ba, La, and from Hf to Rn is presented. A Douglas-Kroll-Hess (DKH) basis set for the fifth-row elements is also reported: we have recontracted the original TZP basis set, i.e., the values of the contraction coefficients are re-optimised using the second-order DKH Hamiltonian. By addition of diffuse functions (s, p, d, f, and g symmetries), which are optimised for the anion ground states, an augmented TZP basis set is constructed. Using the B3LYP hybrid functional, the performance of the TZP-DKH basis set is assessed for predicting atomic ionisation energies as well as spectroscopic constants of some compounds. Despite its compact size, this set demonstrates consistent, efficient, and reliable performance and will be especially useful in calculations of molecular properties that require explicit treatment of the core electrons.

  2. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using a modern graphics processing unit (GPU) and a programmable rendering pipeline is put forward. Element information is represented in accordance with the features of the GPU, all element calculations are converted into a rendering process, the internal-force calculation for every element is carried out in this way, and the low degree of parallelism of earlier single-computer implementations is overcome. Studies show that this method can greatly improve efficiency and shorten computing time. Results for an example problem, the elastic response of a sheet-metal model with a large number of shell elements, show that GPU-parallel simulation is faster than the CPU equivalent. This approach is useful and efficient for solving practical engineering problems.

  3. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value is computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
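
    For byte data the pair-table construction fits in a few lines. A sketch of the idea (with `np.unique` standing in for the paper's table-building step):

```python
import numpy as np

def error_from_pair_table(original, approx, err_fn):
    """Sum err_fn over all voxels by tabulating the unique (original,
    approximated) byte-value pairs and their frequencies, so err_fn is
    evaluated once per unique pair instead of once per voxel."""
    codes = original.astype(np.int64) * 256 + approx.astype(np.int64)
    unique, counts = np.unique(codes, return_counts=True)
    o, a = unique // 256, unique % 256
    # Total error = sum over unique pairs of frequency * per-pair error
    return np.sum(counts * err_fn(o, a))
```

    Since there are at most 256 x 256 = 65,536 unique byte pairs regardless of volume size, changing the transfer function only changes `err_fn`; the pair table is built once per approximation level and reused, which is the source of the speedup described above.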

  4. Trajectory Based Heating and Ablation Calculations for MESUR Pathfinder Aeroshell

    NASA Technical Reports Server (NTRS)

    Chen, Y. K.; Henline, W. D.; Tauber, M. E.; Arnold, James O. (Technical Monitor)

    1994-01-01

    Based on the geometry of Mars Environment Survey (MESUR) Pathfinder aeroshell and an estimated Mars entry trajectory, two-dimensional axisymmetric time dependent calculations have been obtained using GIANTS (Gauss-Siedel Implicit Aerothermodynamic Navier-Stokes code with Thermochemical Surface Conditions) code and CMA (Charring Material Thermal Response and Ablation) Program for heating analysis and heat shield material sizing. These two codes are interfaced using a loosely coupled technique. The flowfield and convective heat transfer coefficients are computed by the GIANTS code with a species balance condition for an ablating surface, and the time dependent in-depth conduction with surface blowing is simulated by the CMA code with a complete surface energy balance condition. In this study, SLA-561V has been selected as heat shield material. The solutions, including the minimum heat shield thicknesses over aeroshell forebody, pyrolysis gas blowing rates, surface heat fluxes and temperature distributions, flowfield, and in-depth temperature history of SLA-561V, are presented and discussed in detail.

  5. A divide and conquer real-space approach for all-electron molecular electrostatic potentials and interaction energies.

    PubMed

    Losilla, S A; Sundholm, D

    2012-06-01

    A computational scheme to perform accurate numerical calculations of electrostatic potentials and interaction energies for molecular systems has been developed and implemented. Molecular electron and energy densities are divided into overlapping atom-centered atomic contributions and a three-dimensional molecular remainder. The steep nuclear cusps are included in the atom-centered functions, making the three-dimensional remainder smooth enough to be accurately represented with a tractable amount of grid points. The one-dimensional radial functions of the atom-centered contributions as well as the three-dimensional remainder are expanded using finite element functions. The electrostatic potential is calculated by integrating the Coulomb potential for each separate density contribution, using our tensorial finite element method for the three-dimensional remainder. We also provide algorithms to compute accurate electron-electron and electron-nuclear interactions numerically using the proposed partitioning. The methods have been tested on all-electron densities of 18 reasonably large molecules containing elements up to Zn. The accuracy of the calculated Coulomb interaction energies is in the range of 10⁻³ to 10⁻⁶ E_h when using an equidistant grid with a step length of 0.05 a_0.

  6. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine

    SciTech Connect

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6 MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3 mm criteria. The mean and standard deviation of pixels passing
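
    The 3%/3 mm gamma comparison used above can be sketched in one dimension. This is a simplified globally normalized version for illustration, not the vendors' clinical implementation:

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dose_tol=0.03, dta_mm=3.0):
    """Global gamma index per reference point: minimum over evaluated
    points of sqrt((dose diff / 3% of max dose)^2 + (distance / 3 mm)^2).
    A reference point passes when gamma <= 1."""
    x = np.arange(dose_ref.size) * dx_mm
    d_crit = dose_tol * dose_ref.max()
    gamma = np.empty(dose_ref.size)
    for i in range(dose_ref.size):
        dd = (dose_eval - dose_ref[i]) / d_crit
        rr = (x - x[i]) / dta_mm
        gamma[i] = np.sqrt(dd * dd + rr * rr).min()
    return gamma

# A Gaussian dose profile compared with a copy shifted by one 1 mm pixel:
x = np.arange(100) * 1.0
ref = np.exp(-((x - 50.0) / 12.0) ** 2)
shifted = np.roll(ref, 1)
pass_rate = np.mean(gamma_1d(ref, shifted, dx_mm=1.0) <= 1.0)
```

    A 1 mm spatial shift sits well inside the 3 mm distance-to-agreement criterion, so every point passes; a shift beyond 3 mm, or a dose error beyond 3% of the maximum, would start to fail points.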

  7. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

    We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation including the numerical corrections for sparse integrations grids which allow to produce accurate results. We validate the implementation for a variety of test cases by comparing to strain derivatives performed via finite differences. Additionally, we include the detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.

  8. Calculation of dehydration absorbers based on improved phase equilibrium data

    SciTech Connect

    Oi, L.E.

    1999-07-01

    Dehydration using triethylene glycol (TEG) as an absorbent is a standard process for natural gas treating. New and more accurate TEG/water equilibrium data were measured between 1980 and 1990. However, this has had little influence on the design methods for dehydration absorbers, and inaccurate equilibrium data have been extensively used in design calculations. When using data from a common source like Worley, an overall bubble cap tray efficiency between 25-40% has normally been recommended. This has resulted in a quite satisfactory and consistent design method. It is obvious that the newer equilibrium data (Herskowitz, Parrish, Bestani) are more accurate. However, to achieve an improved design method, column efficiencies consistent with the new equilibrium data must be recommended. New equilibrium data have been correlated to an activity coefficient model for the liquid phase and combined with an equation of state for the gas phase. Performance data from the North Sea offshore platform Gullfaks C (drying 4-5 MMscmd) have been measured. The bubble cap column has been simulated, and the tray efficiency has been adjusted to fit the performance data. Tray efficiencies calculated with the new equilibrium data are higher than 50%. Calculated tray efficiency values depend on the equilibrium data used, and there are still uncertainties in the equilibrium data for the TEG/water/natural gas system. When using accurate equilibrium data, an overall bubble cap tray efficiency of 40-50% and a Murphree efficiency of 55-70% can be expected at normal absorption conditions.
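
    The link between the two efficiency figures quoted above is the standard Lewis relation for a linear equilibrium line; the stripping-factor value in the example is an assumption for illustration, not taken from the Gullfaks C data:

```python
import math

def overall_from_murphree(e_mv, lam):
    """Lewis relation: overall column efficiency from the Murphree
    vapor efficiency E_MV and the stripping factor lambda = m*V/L
    (valid for a linear equilibrium line)."""
    if abs(lam - 1.0) < 1e-12:
        return e_mv  # the relation reduces to E_O = E_MV at lambda = 1
    return math.log(1.0 + e_mv * (lam - 1.0)) / math.log(lam)
```

    With an assumed lambda of 0.3 (m*V/L well below 1, as in a strongly absorbing TEG contactor), a 60% Murphree efficiency maps to roughly a 45% overall efficiency, consistent with the 55-70% Murphree vs. 40-50% overall ranges stated above.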

  9. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value based on constructed... ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and Normal Value § 351.405 Calculation of normal value based on constructed value. (a) Introduction....

  10. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 3 2011-04-01 2011-04-01 false Calculation of normal value based on constructed... ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and Normal Value § 351.405 Calculation of normal value based on constructed value. (a) Introduction....

  11. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
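
    A minimal version of the consensus loop, demonstrated on a 2D line (the paper's minimal sample is a DLT camera model rather than a line; this sketch shows only the RANSAC skeleton):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=0):
    """Fit y = a*x + b robustly: repeatedly fit a minimal 2-point
    sample, count inliers within tol, keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

    In practice a final least-squares fit over the winning inlier set follows; the gross errors never enter that fit, which is what removes the need for good initial values.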

  12. Freeway travel speed calculation model based on ETC transaction data.

    PubMed

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    Real-time traffic operating conditions on freeways are becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on a freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzes the structure of ETC transaction data and presents the data preprocessing procedure. Then, a dual-level travel speed calculation model is established for different levels of sample size. In order to ensure a sufficient sample size, ETC data from different entry-exit toll plaza pairs that span more than one road segment are used to calculate the travel speed of every road segment, and a reduction coefficient α and a reliability weight θ for the sample vehicle speeds are introduced in the model. Finally, the model is verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrate an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for improving freeway operation monitoring and management, as well as for providing useful information to freeway travelers. PMID:25580107
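
    The core of the estimate is simple: speed is segment length over the difference between exit and entry timestamps. This sketch uses a plain mean and omits the paper's reduction coefficient α and reliability weight θ:

```python
from datetime import datetime, timedelta

def mean_travel_speed(records, distance_km):
    """Average speed (km/h) for one entry-exit toll-plaza pair, from
    per-vehicle ETC entry and exit timestamps."""
    speeds = []
    for t_in, t_out in records:
        hours = (t_out - t_in).total_seconds() / 3600.0
        if hours > 0:  # guard against clock glitches / bad records
            speeds.append(distance_km / hours)
    return sum(speeds) / len(speeds)

t0 = datetime(2014, 5, 1, 8, 0)
records = [(t0, t0 + timedelta(minutes=15)),   # 120 km/h over 30 km
           (t0, t0 + timedelta(minutes=18))]   # 100 km/h over 30 km
v = mean_travel_speed(records, 30.0)           # about 110 km/h
```

    For plaza pairs spanning several road segments, the same per-vehicle speed is apportioned to each segment, which is where the paper's weighting scheme comes in.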

  13. Efficient Parallel All-Electron Four-Component Dirac-Kohn-Sham Program Using a Distributed Matrix Approach II.

    PubMed

    Storchi, Loriano; Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Quiney, Harry M

    2013-12-10

    We propose a new complete memory-distributed algorithm, which significantly improves the parallel implementation of the all-electron four-component Dirac-Kohn-Sham (DKS) module of BERTHA (J. Chem. Theory Comput. 2010, 6, 384). We devised an original procedure for mapping the DKS matrix between an efficient integral-driven distribution, guided by the structure of specific G-spinor basis sets and by density fitting algorithms, and the two-dimensional block-cyclic distribution scheme required by the ScaLAPACK library employed for the linear algebra operations. This implementation, because of the efficiency in the memory distribution, represents a leap forward in the applicability of the DKS procedure to arbitrarily large molecular systems and its porting on last-generation massively parallel systems. The performance of the code is illustrated by some test calculations on several gold clusters of increasing size. The DKS self-consistent procedure has been explicitly converged for two representative clusters, namely Au20 and Au34, for which the density of electronic states is reported and discussed. The largest gold cluster uses more than 39k basis functions and DKS matrices of the order of 23 GB. PMID:26592273

  14. Analytic calculation of physiological acid-base parameters in plasma.

    PubMed

    Wooten, E W

    1999-01-01

    Analytic expressions for plasma total titratable base, base excess (ΔCB), strong-ion difference, change in strong-ion difference (ΔSID), change in Van Slyke standard bicarbonate (ΔVSSB), anion gap, and change in anion gap are derived as functions of pH, total buffer ion concentration, and conditional molar equilibrium constants. The behavior of these various parameters under respiratory and metabolic acid-base disturbances for constant and variable buffer ion concentrations is considered. For constant noncarbonate buffer concentrations, ΔSID = ΔCB = ΔVSSB, whereas these equalities no longer hold under changes in noncarbonate buffer concentration. The equivalence is restored if the reference state is changed to include the new buffer concentrations.
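
    The simplest of these derived quantities can be computed directly from routine electrolyte measurements. The reference values in the note below are illustrative, and the SID expression is a reduced subset of the full Stewart formulation:

```python
def anion_gap(na, k, cl, hco3):
    """Anion gap (mEq/L), potassium included: measured cations minus
    measured anions."""
    return (na + k) - (cl + hco3)

def apparent_sid(na, k, cl, lactate=0.0):
    """Apparent strong-ion difference (mEq/L): strong cations minus
    strong anions, using only the ions listed here."""
    return (na + k) - (cl + lactate)
```

    For typical normal plasma values (Na 140, K 4, Cl 105, HCO3 24 mEq/L) these give an anion gap of 15 mEq/L and an apparent SID of 39 mEq/L.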

  15. Coupled-cluster based basis sets for valence correlation calculations

    NASA Astrophysics Data System (ADS)

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.

    2016-03-01

    Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨rⁿ⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.

  16. Coupled-cluster based basis sets for valence correlation calculations.

    PubMed

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J

    2016-03-14

    Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨rⁿ⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.

  17. Ray-based calculations of backscatter in laser fusion targets

    NASA Astrophysics Data System (ADS)

    Strozzi, D. J.; Williams, E. A.; Hinkel, D. E.; Froula, D. H.; London, R. A.; Callahan, D. A.

    2008-10-01

    A one-dimensional, steady-state model for Brillouin and Raman backscatter from an inhomogeneous plasma is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code DEPLETE, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as "plane-wave" simulations with the paraxial propagation code PF3D. Comparisons with Brillouin-scattering experiments at the OMEGA Laser Facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] show that laser speckles greatly enhance the reflectivity over the DEPLETE results. An approximate upper bound on this enhancement, motivated by phase conjugation, is given by doubling the DEPLETE coupling coefficient. Analysis with DEPLETE of an ignition design for the National Ignition Facility (NIF) [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, 755 (1994)], with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Re-absorption of Raman light is seen to be significant in this design.

  18. Ray-Based Calculations of Backscatter in Laser Fusion Targets

    SciTech Connect

    Strozzi, D J; Williams, E A; Hinkel, D E; Froula, D H; London, R A; Callahan, D A

    2008-02-26

    A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pf3d. Comparisons with Brillouin-scattering experiments at the OMEGA Laser Facility [T. R. Boehly et al., Opt. Commun. 133, p. 495 (1997)] show that laser speckles greatly enhance the reflectivity over the deplete results. An approximate upper bound on this enhancement, motivated by phase conjugation, is given by doubling the deplete coupling coefficient. Analysis with deplete of an ignition design for the National Ignition Facility (NIF) [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, p. 755 (1994)], with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bound the speckle enhancement suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.
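
    The DEPLETE model is far richer than anything reproducible here, but the steady-state, strongly damped backscatter it describes has a classic toy analogue: counter-propagating pump and scattered waves obeying dI_p/dz = dI_s/dz = -g·I_p·I_s, with the pump injected at z = 0 and a seed at z = L, solved by shooting on the reflectivity. A sketch under those assumptions (all symbols and values hypothetical):

```python
def reflectivity(g, L, seed=1e-4, n=4000):
    """Steady-state backscatter reflectivity of a toy two-wave model.

    Pump I_p enters at z=0 with unit intensity; the backscattered wave
    I_s enters at z=L with intensity `seed` and grows toward z=0.  Both
    satisfy dI/dz = -g*I_p*I_s, so I_p - I_s is an invariant.
    """
    dz = L / n

    def scattered_at_exit(R):          # integrate with guess I_s(0) = R
        Ip, Is = 1.0, R
        for _ in range(n):
            d = -g * Ip * Is * dz      # shared derivative preserves invariant
            Ip += d
            Is += d
        return Is                      # value at z = L

    lo, hi = 0.0, 1.0                  # scattered_at_exit is increasing in R
    for _ in range(60):                # bisect for scattered_at_exit(R) = seed
        mid = 0.5 * (lo + hi)
        if scattered_at_exit(mid) < seed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R = reflectivity(g=1.0, L=1.0)         # weak coupling: R ~ seed * exp(g*L)
```

For small gain this recovers the undepleted linear-gain result R ≈ seed·e^(gL); at large gain the invariant enforces pump depletion and saturates R below unity.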

  19. Coupled-cluster based basis sets for valence correlation calculations.

    PubMed

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J

    2016-03-14

    Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r(n)⟩ (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers. PMID:26979680

  20. UAV-based NDVI calculation over grassland: An alternative approach

    NASA Astrophysics Data System (ADS)

    Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc

    2016-04-01

    The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with a moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small, light instruments are particularly well suited for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, considerably cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum through removal of the internal infrared filter; a mounted optical filter additionally blocks all wavelengths below 700 nm; (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: first, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
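
    The index itself is a simple band ratio, NDVI = (NIR - Red)/(NIR + Red), applied pixel-wise to co-registered reflectance maps. A minimal sketch of that arithmetic (the arrays are illustrative; the paper's processing chain via PIX4D is not reproduced):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Pixel-wise NDVI from co-registered NIR and red reflectance bands.

    Inputs are reflectances in [0, 1]; eps guards against division by
    zero over dark pixels.  Output lies in [-1, 1].
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# dense vegetation reflects strongly in NIR and absorbs red light
v = ndvi([0.50, 0.20], [0.08, 0.18])   # vegetated vs. sparse pixel
```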

  1. Calculation of thermomechanical fatigue life based on isothermal behavior

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Saltsman, J. F.

    1987-01-01

    The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain-time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.
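
    The inelastic-strainrange damage parameter singled out above is typically embodied in a Manson-Coffin-type power law, Δε_in = C·N_f^(-β). A sketch of inverting such a law for life (the constants are made up; the study's hypothetical material is not specified numerically):

```python
def cycles_to_failure(delta_eps_in, C=0.5, beta=0.6):
    """Invert a Manson-Coffin-type law delta_eps_in = C * Nf**(-beta).

    C and beta are illustrative material constants only, not values
    from the study.
    """
    return (delta_eps_in / C) ** (-1.0 / beta)

# halving the inelastic strainrange lengthens life by a factor 2**(1/beta)
N1, N2 = cycles_to_failure(0.010), cycles_to_failure(0.005)
```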

  2. Calculation of thermomechanical fatigue life based on isothermal behavior

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Saltsman, James F.

    1987-01-01

    The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain-time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.

  3. Validation of KENO based criticality calculations at Rocky Flats

    SciTech Connect

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper we discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats.

  4. Aeroelastic Calculations Based on Three-Dimensional Euler Analysis

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.

    1998-01-01

    This paper presents representative results from an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic code (TURBO). Unsteady pressure, lift, and moment distributions are presented for a helical fan test configuration, which is used to verify the code by comparison to two-dimensional linear potential (flat plate) theory. The results are for pitching and plunging motions over a range of phase angles. Good agreement with linear theory is seen for all phase angles except those near acoustic resonances. The agreement is better for pitching motions than for plunging motions. The reason for this difference is not understood at present. Numerical checks have been performed to ensure that solutions are independent of time step, converged to periodicity, and linearly dependent on amplitude of blade motion. The paper concludes with an evaluation of the current state of development of the TURBO-AE code and presents some plans for further development and validation of the TURBO-AE code.

  5. Validation of KENO-based criticality calculations at Rocky Flats

    SciTech Connect

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P. )

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG&G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k{sub eff} limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.

  6. GYutsis: heuristic based calculation of general recoupling coefficients

    NASA Astrophysics Data System (ADS)

    Van Dyck, D.; Fack, V.

    2003-08-01

    General angular momentum recoupling coefficients can be expressed as a summation formula over products of 6-j coefficients. Yutsis, Levinson and Vanagas developed graphical techniques for representing the general recoupling coefficient as a cubic graph, and they describe a set of reduction rules allowing a stepwise generation of the corresponding summation formula. This paper is a follow-up to [Van Dyck and Fack, Comput. Phys. Comm. 151 (2003) 353-368], where we described a heuristic algorithm based on these techniques. In this article we separate the heuristic from the algorithm and describe some new heuristic approaches which can be plugged into the generic algorithm. We show that these new heuristics lead to good results: in many cases we get a more efficient summation formula than with our previous approach, in particular for problems of higher order. In addition, the new features and the use of our program GYutsis, which implements these techniques, are described both for end users and application programmers. Program summary: Title of program: CycleCostAlgorithm, GYutsis. Catalogue number: ADSA. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSA. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; users may also obtain the program by downloading either the compressed tar file gyutsis.tgz (for Unix and Linux) or the zip file gyutsis.zip (for Windows) from our website (http://caagt.rug.ac.be/yutsis/). An applet version of the program is also available on our website and can be run in a web browser from the URL http://caagt.rug.ac.be/yutsis/GYutsisApplet.html. Licensing provisions: none. Computers for which the program is designed: any computer with Sun's Java Runtime Environment 1.4 or higher installed. Programming language used: Java 1.2 (Compiler: Sun's SDK 1.4.0). No. of lines in program: approximately 9400. No. of bytes in distributed program, including test data, etc.: 544 117. Distribution format: tar gzip file. Nature of
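
    Each 6-j coefficient appearing in such a summation formula can be evaluated directly from the Racah single-sum formula. A compact stdlib-only sketch (this is the textbook formula, not the GYutsis reduction machinery):

```python
from math import factorial, sqrt

def _f(x):
    """Factorial of a quantity known to be a non-negative integer."""
    n = int(round(x))
    if n < 0:
        raise ValueError("triangle condition violated")
    return factorial(n)

def sixj(j1, j2, j3, j4, j5, j6):
    """Wigner 6-j symbol {j1 j2 j3; j4 j5 j6} via the Racah formula."""
    def tri(a, b, c):               # triangle coefficient Delta(a, b, c)
        return _f(a + b - c) * _f(a - b + c) * _f(-a + b + c) / _f(a + b + c + 1)

    pref = sqrt(tri(j1, j2, j3) * tri(j1, j5, j6) * tri(j4, j2, j6) * tri(j4, j5, j3))
    t_min = int(round(max(j1 + j2 + j3, j1 + j5 + j6, j4 + j2 + j6, j4 + j5 + j3)))
    t_max = int(round(min(j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4)))
    total = 0.0
    for t in range(t_min, t_max + 1):
        total += (-1) ** t * factorial(t + 1) / (
            _f(t - j1 - j2 - j3) * _f(t - j1 - j5 - j6)
            * _f(t - j4 - j2 - j6) * _f(t - j4 - j5 - j3)
            * _f(j1 + j2 + j4 + j5 - t) * _f(j2 + j3 + j5 + j6 - t)
            * _f(j3 + j1 + j6 + j4 - t))
    return pref * total
```

The symbol's permutation symmetries (e.g. invariance under exchange of any two columns) provide a cheap consistency check on an implementation like this.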

  7. exciting: a full-potential all-electron package implementing density-functional theory and many-body perturbation theory

    NASA Astrophysics Data System (ADS)

    Gulans, Andris; Kontur, Stefan; Meisenbichler, Christian; Nabok, Dmitrii; Pavone, Pasquale; Rigamonti, Santiago; Sagmeister, Stephan; Werner, Ute; Draxl, Claudia

    2014-09-01

    Linearized augmented planewave methods are known as the most precise numerical schemes for solving the Kohn-Sham equations of density-functional theory (DFT). In this review, we describe how this method is realized in the all-electron full-potential computer package, exciting. We emphasize the variety of different related basis sets, subsumed as (linearized) augmented planewave plus local orbital methods, discussing their pros and cons and we show that extremely high accuracy (microhartrees) can be achieved if the basis is chosen carefully. As the name of the code suggests, exciting is not restricted to ground-state calculations, but has a major focus on excited-state properties. It includes time-dependent DFT in the linear-response regime with various static and dynamical exchange-correlation kernels. These are preferably used to compute optical and electron-loss spectra for metals, molecules and semiconductors with weak electron-hole interactions. exciting makes use of many-body perturbation theory for charged and neutral excitations. To obtain the quasi-particle band structure, the GW approach is implemented in the single-shot approximation, known as G0W0. Optical absorption spectra for valence and core excitations are handled by the solution of the Bethe-Salpeter equation, which allows for the description of strongly bound excitons. Besides these aspects concerning methodology, we demonstrate the broad range of possible applications by prototypical examples, comprising elastic properties, phonons, thermal-expansion coefficients, dielectric tensors and loss functions, magneto-optical Kerr effect, core-level spectra and more.

  8. Exciting: a full-potential all-electron package implementing density-functional theory and many-body perturbation theory.

    PubMed

    Gulans, Andris; Kontur, Stefan; Meisenbichler, Christian; Nabok, Dmitrii; Pavone, Pasquale; Rigamonti, Santiago; Sagmeister, Stephan; Werner, Ute; Draxl, Claudia

    2014-09-10

    Linearized augmented planewave methods are known as the most precise numerical schemes for solving the Kohn-Sham equations of density-functional theory (DFT). In this review, we describe how this method is realized in the all-electron full-potential computer package, exciting. We emphasize the variety of different related basis sets, subsumed as (linearized) augmented planewave plus local orbital methods, discussing their pros and cons and we show that extremely high accuracy (microhartrees) can be achieved if the basis is chosen carefully. As the name of the code suggests, exciting is not restricted to ground-state calculations, but has a major focus on excited-state properties. It includes time-dependent DFT in the linear-response regime with various static and dynamical exchange-correlation kernels. These are preferably used to compute optical and electron-loss spectra for metals, molecules and semiconductors with weak electron-hole interactions. exciting makes use of many-body perturbation theory for charged and neutral excitations. To obtain the quasi-particle band structure, the GW approach is implemented in the single-shot approximation, known as G(0)W(0). Optical absorption spectra for valence and core excitations are handled by the solution of the Bethe-Salpeter equation, which allows for the description of strongly bound excitons. Besides these aspects concerning methodology, we demonstrate the broad range of possible applications by prototypical examples, comprising elastic properties, phonons, thermal-expansion coefficients, dielectric tensors and loss functions, magneto-optical Kerr effect, core-level spectra and more. PMID:25135665

  9. Prediction of {sup 1}P Rydberg energy levels of beryllium based on calculations with explicitly correlated Gaussians

    SciTech Connect

    Bubin, Sergiy; Adamowicz, Ludwik

    2014-01-14

    Benchmark variational calculations are performed for the seven lowest 1s{sup 2}2s np ({sup 1}P), n = 2…8, states of the beryllium atom. The calculations explicitly include the effect of finite mass of the {sup 9}Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12 500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of the 1s{sup 2}2s np ({sup 1}P) →1s{sup 2}2s{sup 2} ({sup 1}S) transition is about 12 cm{sup −1}. The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm{sup −1}.

  10. First-Principle Calculations of Large Fullerenes.

    PubMed

    Calaminici, Patrizia; Geudtner, Gerald; Köster, Andreas M

    2009-01-13

    State-of-the-art density functional theory calculations have been performed for the large fullerenes C180, C240, C320, and C540 using the linear combination of Gaussian-type orbitals density functional theory (LCGTO-DFT) approach. All-electron basis sets were employed for the calculations. All fullerene structures were fully optimized without symmetry constraints. The analysis of the obtained structures as well as a study on the evolution of the bond lengths and calculated binding energies are presented. The fullerene results are compared to diamond and graphene, which were calculated at the same level of theory. This represents the first systematic study of these large fullerenes based on non-symmetry-adapted first-principles calculations, and it demonstrates the capability of DFT calculations for energy and structure computations of large-scale structures without any symmetry constraint.

  11. Development and Validation of XML-based Calculations within Order Sets

    PubMed Central

    Hulse, Nathan C.; Del Fiol, Guilherme; Rocha, Roberto A.

    2005-01-01

    We have developed two XML Schemas to support the implementation of calculations within XML-based order sets for use within a physician order entry system. The models support the representation of variable-based algorithms and include data elements designed to support ancillary functions such as input range checking, rounding, and minimum/maximum value constraints. Two clinicians successfully authored 57 unique calculated orders derived from a set of 11 calculations using the models within our authoring environment. The resultant knowledge base content was subsequently tested and found to produce the desired results within the electronic physician order entry environment. PMID:16779062
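
    As a flavor of what a variable-based calculation definition with range checking and rounding might look like, here is a hypothetical XML snippet and evaluator (the element names and formula syntax are invented for illustration, not the authors' published schemas):

```python
import xml.etree.ElementTree as ET

# Hypothetical order-set calculation; attributes and formula syntax
# are illustrative only, not the schemas described in the paper.
CALC_XML = """
<calculation id="weight-based-dose">
  <variable name="weight_kg" min="0.5" max="300"/>
  <formula>weight_kg * 15</formula>
  <round digits="0"/>
</calculation>
"""

def run_calculation(xml_text, inputs):
    root = ET.fromstring(xml_text)
    for var in root.findall("variable"):          # input range checking
        value = inputs[var.get("name")]
        if not float(var.get("min")) <= value <= float(var.get("max")):
            raise ValueError(f"{var.get('name')} out of range")
    formula = root.findtext("formula")
    result = eval(formula, {"__builtins__": {}}, dict(inputs))  # toy evaluator
    digits = int(root.find("round").get("digits"))
    return round(result, digits)

dose = run_calculation(CALC_XML, {"weight_kg": 70.0})
```

A production system would replace the bare `eval` with a vetted expression parser; the point here is only the separation of formula, constraints, and rounding into declarative markup.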

  12. Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.

    ERIC Educational Resources Information Center

    Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick

    1999-01-01

    Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…

  13. Fast calculation method for computer-generated cylindrical hologram based on wave propagation in spectral domain.

    PubMed

    Jackin, Boaz Jessie; Yatagai, Toyohiko

    2010-12-01

    A fast calculation method for computer generation of cylindrical holograms is proposed. The calculation method is based on wave propagation in the spectral domain and in cylindrical co-ordinates, and is otherwise similar to the angular spectrum of plane waves in Cartesian co-ordinates. The calculation requires only two FFT operations and hence is much faster. The theoretical background of the calculation method, sampling conditions, and simulation results are presented. The generated cylindrical hologram has been tested for reconstruction at different view angles and also on plane surfaces.
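
    The Cartesian angular-spectrum method that the cylindrical scheme parallels also needs exactly two FFTs per propagation step: transform the field, multiply by the free-space transfer function exp(i·k_z·Δz), and transform back. A minimal sketch under that Cartesian assumption (grid and wavelength are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a sampled complex field by dz using two FFTs.

    Transfer function H = exp(i * kz * dz) with
    kz = sqrt(k**2 - kx**2 - ky**2); evanescent modes are clamped
    to kz = 0 in this simple sketch.
    """
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))
    H = np.exp(1j * kz * dz)                      # unit-modulus filter
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Gaussian test field: with all sampled modes propagating, |H| = 1
# and the total power is conserved exactly
n, dx = 128, 1.0
y, x = np.mgrid[:n, :n] - n // 2
u0 = np.exp(-(x**2 + y**2) / (2.0 * 10.0**2)).astype(complex)
u1 = angular_spectrum_propagate(u0, wavelength=0.5, dx=dx, dz=50.0)
```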

  14. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-01

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.

  15. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-01

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system. PMID:26588521

  16. All-electron GW quasiparticle band structures of group 14 nitride compounds

    SciTech Connect

    Chu, Iek-Heng; Cheng, Hai-Ping; Kozhevnikov, Anton; Schulthess, Thomas C.

    2014-07-28

    We have investigated the group 14 nitrides (M{sub 3}N{sub 4}) in the spinel phase (γ-M{sub 3}N{sub 4} with M = C, Si, Ge, and Sn) and β phase (β-M{sub 3}N{sub 4} with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G{sub 0}W{sub 0} calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M{sub 3}N{sub 4} with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.

  17. All-electron GW quasiparticle band structures of group 14 nitride compounds

    NASA Astrophysics Data System (ADS)

    Chu, Iek-Heng; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping

    2014-07-01

    We have investigated the group 14 nitrides (M3N4) in the spinel phase (γ-M3N4 with M = C, Si, Ge, and Sn) and β phase (β-M3N4 with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G0W0 calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M3N4 with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.

  18. An Intuitive and General Approach to Acid-Base Equilibrium Calculations.

    ERIC Educational Resources Information Center

    Felty, Wayne L.

    1978-01-01

    Describes the intuitive approach used in general chemistry and points out its pedagogical advantages. Explains how to extend it to acid-base equilibrium calculations without the need to introduce additional sophisticated concepts. (GA)
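
    The kind of calculation at issue: for a weak monoprotic acid HA at concentration C, the equilibrium condition reduces to the quadratic x²/(C - x) = Ka in x = [H⁺]. A sketch with textbook acetic-acid values (an assumption; the article gives no numerical example here):

```python
import math

def weak_acid_ph(C, Ka):
    """pH of a weak monoprotic acid, solving x**2/(C - x) = Ka exactly.

    Rearranged: x**2 + Ka*x - Ka*C = 0; take the positive root of the
    quadratic (water autoionization neglected).
    """
    h = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0
    return -math.log10(h)

ph = weak_acid_ph(0.10, 1.8e-5)   # 0.10 M acetic acid, Ka = 1.8e-5
```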

  19. Inverse calculation of biochemical oxygen demand models based on time domain for the tidal Foshan River.

    PubMed

    Er, Li; Xiangying, Zeng

    2014-01-01

    To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, time-domain-based inverse calculations are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivations of the inverse calculation have been established separately for the different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models. PMID:25026574
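
    A far simpler analogue of such an inverse calculation: recovering a first-order decay rate K from BOD(t) = BOD₀·e^(-Kt) by log-linear least squares (synthetic data; the paper's tidal model additionally carries the dispersion coefficient E(x) and flow-direction dependence):

```python
import numpy as np

# synthetic "measurements" from a known decay rate (illustrative only)
K_true, bod0 = 0.23, 12.0          # 1/day, mg/L
t = np.linspace(0.0, 10.0, 21)     # days
bod = bod0 * np.exp(-K_true * t)

# inverse calculation: linear regression on the log-transformed data
slope, intercept = np.polyfit(t, np.log(bod), 1)
K_est, bod0_est = -slope, np.exp(intercept)
```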

  20. 31 CFR 370.35 - Does the Bureau of the Public Debt accept all electronically signed transaction requests?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Does the Bureau of the Public Debt accept all electronically signed transaction requests? 370.35 Section 370.35 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF...

  1. Implications to Postsecondary Faculty of Alternative Calculation Methods of Gender-Based Wage Differentials.

    ERIC Educational Resources Information Center

    Hagedorn, Linda Serra

    1998-01-01

    A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…

  2. Double-exposure phase calculation method in electronic speckle pattern interferometry based on holographic object illumination

    NASA Astrophysics Data System (ADS)

    Séfel, Richárd; Kornis, János

    2011-08-01

    Multiple-exposure phase calculation procedures are widely used in electronic speckle pattern interferometry to calculate phase maps of displacements. We developed a double-exposure process, based on holographic illumination of the object and on the idea of the spatial carrier phase-shifting method, to examine transient displacements. In our work, computer-generated holograms and a spatial light modulator were used to generate proper coherent illuminating masks. With this arrangement, all phase-shifted states were available from a single recorded speckle image for the phase calculation. This technique can be used in a wide range of transient measurements. In this paper we illustrate the principle through several examples.
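
    For background, the classical four-step phase-shifting arithmetic that such methods build on recovers the wrapped phase from four π/2-shifted interferograms as φ = atan2(I₄ - I₂, I₁ - I₃); the paper's contribution is obtaining the shifted states from a single exposure. A sketch of the classical recovery on synthetic fringes:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four pi/2-shifted interferograms."""
    return np.arctan2(I4 - I2, I1 - I3)

# simulate a known phase profile and recover it
phi = np.linspace(-3.0, 3.0, 101)      # radians, inside (-pi, pi)
frames = [1.0 + np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*frames)
```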

  3. Shell model based Coulomb excitation γ-ray intensity calculations in 107Sn

    NASA Astrophysics Data System (ADS)

    DiJulio, D. D.; Cederkall, J.; Ekström, A.; Fahlander, C.; Hjorth-Jensen, M.

    2012-10-01

    In this work, we present recent shell model calculations, based on a realistic nucleon-nucleon interaction, for the light 107, 109Sn nuclei. By combining the calculations with the semi-classical Coulomb excitation code GOSIA, a set of γ-ray intensities has been generated. The calculated intensities are compared with the data from recent Coulomb excitation studies in inverse kinematics at the REX-ISOLDE facility with the nucleus 107Sn. The results are discussed in the context of the ordering of the single-particle orbits relative to 100Sn.

  4. Simple atmospheric transmittance calculation based on a Fourier-transformed Voigt profile.

    PubMed

    Kobayashi, Hirokazu

    2002-11-20

    A method of line-by-line transmission calculation for a homogeneous atmospheric layer that uses the Fourier-transformed Voigt profile is presented. The method is based on a pure Voigt function with no approximation and an interference term that takes into account the line-mixing effect. One can use the method to calculate transmittance, considering each line shape as it is affected by temperature and pressure, with a line database with an arbitrary wave-number range and resolution. To show that the method is feasible for practical model development, we compared the calculated transmittance with that obtained with a conventional model, and good consistency was observed. PMID:12463237
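The key identity the abstract relies on is that the Fourier transform of a Voigt profile factorizes into the transforms of its Gaussian and Lorentzian parts, so the line shape can be evaluated as a single cosine transform with no profile approximation. A hedged numerical sketch (grid sizes and names are illustrative, not from the paper):

```python
import numpy as np

def voigt_ft(x, sigma, gamma, t_max=60.0, n=60001):
    """Voigt profile evaluated via its Fourier transform.

    The FT of a Gaussian (std sigma) convolved with a Lorentzian
    (HWHM gamma) is exp(-sigma^2 t^2 / 2 - gamma t) for t >= 0, so
    V(x) = (1/pi) * int_0^inf exp(-sigma^2 t^2 / 2 - gamma t) cos(x t) dt.
    The integral is evaluated with the trapezoidal rule.
    """
    t = np.linspace(0.0, t_max, n)
    kernel = np.exp(-0.5 * sigma**2 * t**2 - gamma * t)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    f = kernel * np.cos(np.outer(x, t))
    dt = t[1] - t[0]
    # Trapezoidal rule along the t axis.
    integral = dt * (f[:, 1:-1].sum(axis=1) + 0.5 * (f[:, 0] + f[:, -1]))
    return integral / np.pi

# Limiting cases recover the pure Gaussian and pure Lorentzian peak heights.
g0 = voigt_ft(0.0, sigma=1.0, gamma=0.0)[0]   # expect 1/sqrt(2*pi)
l0 = voigt_ft(0.0, sigma=0.0, gamma=1.0)[0]   # expect 1/pi
```

The two limiting checks confirm the transform-pair bookkeeping before the mixed Gaussian-Lorentzian case is used.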

  5. A transport based one-dimensional perturbation code for reactivity calculations in metal systems

    SciTech Connect

    Wenz, T.R.

    1995-02-01

    A one-dimensional reactivity calculation code is developed using first-order perturbation theory. The reactivity equation is based on the multi-group transport equation using the discrete ordinates method for angular dependence. In addition to the first-order perturbation approximations, the reactivity code uses only the isotropic scattering data, but cross section libraries with higher-order scattering data can still be used with this code. The reactivity code obtains all the flux, cross section, and geometry data from the standard interface files created by ONEDANT, a discrete ordinates transport code. Comparisons between calculated and experimental reactivities were made with the central reactivity worth data for Lady Godiva, a bare uranium metal assembly. Good agreement is found for isotopes that do not violate the assumptions of the first-order approximation. In general, for cases with large discrepancies, the discretized cross section data does not accurately represent certain resonance regions that coincide with dominant flux groups in the Godiva assembly. Comparing reactivities calculated with first-order perturbation theory and a straight Δk/k calculation shows agreement within 10%, indicating the perturbation of the calculated fluxes is small enough for first-order perturbation theory to be applicable in the modeled system. Computation time comparisons between reactivities calculated with first-order perturbation theory and straight Δk/k calculations indicate considerable time can be saved by performing the calculation with a perturbation code, particularly as the complexity of the modeled problems increases.
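The comparison between first-order perturbation theory and a straight Δk/k calculation can be illustrated with a one-group infinite-medium toy model (not the multigroup discrete-ordinates code described above): with k = νΣf/Σa, reactivity is ρ = 1 − Σa/(νΣf), and the first-order estimate keeps only terms linear in the cross-section perturbations.

```python
def rho(nu_sig_f, sig_a):
    """Infinite-medium reactivity: rho = 1 - 1/k with k = nu*sig_f / sig_a."""
    return 1.0 - sig_a / nu_sig_f

def drho_first_order(nu_sig_f, sig_a, d_nu_sig_f, d_sig_a):
    """First-order (linearized) reactivity change for the same model."""
    return sig_a * d_nu_sig_f / nu_sig_f**2 - d_sig_a / nu_sig_f

# Illustrative one-group constants (cm^-1), with a 2% perturbation of nu*sig_f.
nu_sig_f, sig_a = 0.100, 0.098
d_nu_sig_f, d_sig_a = 0.002, 0.0

exact = rho(nu_sig_f + d_nu_sig_f, sig_a + d_sig_a) - rho(nu_sig_f, sig_a)
approx = drho_first_order(nu_sig_f, sig_a, d_nu_sig_f, d_sig_a)
rel_diff = abs(approx - exact) / abs(exact)
```

For this small perturbation the two estimates agree to within a few percent, consistent with the within-10% agreement the abstract reports for its much more detailed model.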

  6. Artificial neural network based torque calculation of switched reluctance motor without locking the rotor

    NASA Astrophysics Data System (ADS)

    Kucuk, Fuat; Goto, Hiroki; Guo, Hai-Jiao; Ichinokura, Osamu

    2009-04-01

    Feedback of motor torque is required in most switched reluctance (SR) motor applications in order to control torque and its ripple. An SR motor exhibits highly nonlinear behavior, which does not allow torque to be calculated analytically. Torque can be measured directly by a torque sensor, but this inevitably increases cost, and the sensor has to be properly mounted on the motor shaft. Instead of a torque sensor, finite element analysis (FEA) may be employed for torque calculation. However, motor modeling and calculation take relatively long, and the results of FEA may also differ from actual results. The most convenient approach is to calculate torque from measured values of rotor position, current, and flux linkage while locking the rotor at definite positions; however, this method needs an extra assembly to lock the rotor. In this study, a novel torque calculation based on artificial neural networks (ANNs) is presented. Magnetizing data are collected while a 6/4 SR motor is running, and they must be interpolated for torque calculation. An ANN is a very strong tool for data interpolation. ANN-based torque estimation is verified on the 6/4 SR motor and compared with FEA-based torque estimation to show its validity.
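As a sketch of the interpolation role the ANN plays here, the toy below fits a small one-hidden-layer network (plain NumPy, full-batch gradient descent) to a smooth nonlinear curve standing in for the magnetizing/torque data. All sizes, rates, and the target function are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for magnetizing/torque data: a smooth nonlinear
# 1-D curve (the real map is torque vs. rotor position and current).
x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = np.sin(3.0 * x)

# One hidden layer of tanh units trained by full-batch gradient descent.
H, lr, epochs = 16, 0.02, 20_000
W1 = rng.normal(0.0, 2.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

losses = []
for _ in range(epochs):
    h = np.tanh(x @ W1 + b1)                  # hidden activations
    y_hat = h @ W2 + b2                       # network prediction
    err = y_hat - y
    losses.append(float(np.mean(err**2)))     # mean squared error
    g_out = 2.0 * err / len(x)                # d(loss)/d(y_hat)
    gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h**2)       # back through tanh
    gW1 = x.T @ g_h;  gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained on measured samples, such a network can be queried at arbitrary intermediate operating points, which is exactly the interpolation task described above.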

  7. 4-Component correlated all-electron study on Eka-actinium Fluoride (E121F) including Gaunt interaction: Accurate analytical form, bonding and influence on rovibrational spectra

    NASA Astrophysics Data System (ADS)

    Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.

    2016-10-01

    A prolapse-free basis set for eka-actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants, and an accurate analytical form for the potential energy curve of diatomic E121F obtained at the 4-component all-electron CCSD(T) level including the Gaunt interaction are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F, that the outermost frontier molecular orbitals of E121F should be fairly similar to those of AcF, and that there is no evidence of a break in periodic trends. Moreover, the Gaunt interaction, although small, is expected to considerably influence the overall rovibrational spectra.

  8. Inhibitor Ranking through QM Based Chelation Calculations for Virtual Screening of HIV-1 RNase H Inhibition

    PubMed Central

    Poongavanam, Vasanthanathan; Steinmann, Casper; Kongsted, Jacob

    2014-01-01

    Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT associated RNase H (RNH). The QM based chelation calculations show improved binding affinity prediction for the inhibitors compared to using an empirical scoring function. Furthermore, full protein fragment molecular orbital (FMO) calculations were conducted and subsequently analysed for individual residue stabilization/destabilization energy contributions to the overall binding affinity in order to better understand the true and false predictions. After a successful assessment of the methods based on a training set of molecules, QM based chelation calculations were used as a filter in virtual screening of compounds in the ZINC database. Compared to regular docking, we find that QM based chelation calculations significantly reduce the large number of false positives. Thus, the computational models tested in this study could be useful as high-throughput filters for searching HIV-1 RNase H active-site molecules in the virtual screening process. PMID:24897431

  9. Calculating super efficiency of DMUs for ranking units in data envelopment analysis based on SBM model.

    PubMed

    Zanboori, E; Rostamy-Malkhalifeh, M; Jahanshahloo, G R; Shoja, N

    2014-01-01

    There are a number of methods for ranking decision making units (DMUs), among which calculating super efficiency and then ranking the units based on the obtained amount of super efficiency is both valid and efficient. Since most of the proposed models do not provide the Pareto-efficient projection, a model is developed and presented in this paper by which the Pareto-efficient projection is obtained in addition to the amount of super efficiency. Moreover, the model is unit invariant, is always feasible, and makes the amount of inefficiency effective in ranking.
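To make the super-efficiency idea concrete, here is a hedged sketch of the simpler input-oriented radial (CCR) super-efficiency model, not the SBM variant the paper develops: DMU k is removed from the reference set, so efficient units can score above one. It uses `scipy.optimize.linprog`; the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency of DMU k.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). DMU k is excluded
    from the reference set, so efficient units can score above 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    # Decision vector: [theta, lambda_j for j != k]; minimize theta.
    c = np.zeros(1 + len(others))
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):      # inputs: sum_j lambda_j x_ij <= theta * x_ik
        row = np.concatenate(([-X[k, i]], [X[j, i] for j in others]))
        A_ub.append(row); b_ub.append(0.0)
    for r in range(s):      # outputs: sum_j lambda_j y_rj >= y_rk
        row = np.concatenate(([0.0], [-Y[j, r] for j in others]))
        A_ub.append(row); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + len(others)), method="highs")
    return res.fun

# Tiny example: one input, one output; DMU 0 dominates DMU 1.
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
```

For this data the efficient DMU 0 scores 2.0 (above one) and DMU 1 scores 0.5, which is the ranking behavior the abstract exploits.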

  10. Consistency analysis of plastic samples based on similarity calculation from limited range of the Raman spectra

    NASA Astrophysics Data System (ADS)

    Lai, B. W.; Wu, Z. X.; Dong, X. P.; Lu, D.; Tao, S. C.

    2016-07-01

    We propose a novel method to calculate the similarity between samples with only small differences at unknown and specific positions in their Raman spectra, using an interval window moving across the whole spectrum. Two ABS plastic samples, one with and the other without flame retardant, were tested in the experiment. Unlike the traditional method, in which the similarity is calculated from the whole spectrum, we do the calculation on a segment cut out of the Raman spectra by a window, one segment at a time as the window moves across the entire spectral range. By our method, a curve of similarity versus wavenumber is obtained, and the curve shows a large change where the partial spectra of the two samples differ. Thus, the new similarity calculation method better identifies samples with tiny differences in their Raman spectra.
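The moving-window idea can be sketched directly: slide a fixed-width window over both spectra and compute a similarity inside each window (cosine similarity below; the abstract does not specify the exact metric). Synthetic spectra differing by one extra band illustrate the localized dip:

```python
import numpy as np

def windowed_similarity(spec_a, spec_b, width, step=1):
    """Cosine similarity between two spectra inside a moving window.

    Returns window start indices and the similarity curve; dips in the
    curve localize where the two spectra differ.
    """
    starts = range(0, len(spec_a) - width + 1, step)
    sims = []
    for s0 in starts:
        a = spec_a[s0:s0 + width]
        b = spec_b[s0:s0 + width]
        sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return np.array(list(starts)), np.array(sims)

# Two synthetic 'spectra', identical except for one extra band near x = 50
# (standing in for, e.g., a flame-retardant band).
x = np.linspace(0, 100, 1000)
base = np.exp(-(x - 30)**2 / 4) + np.exp(-(x - 70)**2 / 4) + 0.1
extra = base + 0.8 * np.exp(-(x - 50)**2 / 2)
starts, sims = windowed_similarity(base, extra, width=50)
```

The similarity curve stays at 1 where the spectra coincide and dips sharply only in windows covering the extra band, which is the localization behavior described above.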

  11. Calculation of the diffraction efficiency on concave gratings based on Fresnel-Kirchhoff's diffraction formula.

    PubMed

    Huang, Yuanshen; Li, Ting; Xu, Banglian; Hong, Ruijin; Tao, Chunxian; Ling, Jinzhong; Li, Baicheng; Zhang, Dawei; Ni, Zhengji; Zhuang, Songlin

    2013-02-10

    The Fraunhofer diffraction formula cannot be applied to calculate the diffraction wave energy distribution of concave gratings as it is for plane gratings, because their grooves are distributed on a concave spherical surface. In this paper, a method based on the Kirchhoff diffraction theory is proposed to calculate the diffraction efficiency of concave gratings by considering the curvature of the whole concave spherical surface. In this approach, each groove surface is divided into several small planes, on which the Kirchhoff diffraction field distribution is calculated, and the diffraction field of the whole concave grating is then obtained by superposition. Formulas to calculate the diffraction efficiency of Rowland-type and flat-field concave gratings are deduced for practical applications. Experimental results showed strong agreement with theoretical computations. With the proposed method, light energy can be optimized toward the expected diffraction wave range while implementing aberration-corrected design of concave gratings, particularly for concave blazed gratings.
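The superposition idea can be illustrated in its simplest far-field limit: treat each groove as a point source and sum the complex phase contributions directly (a toy Huygens sum, not the paper's finite-plane Kirchhoff integrals over a concave surface). The intensity maximum falls at the grating-equation angle sin θ = mλ/d:

```python
import numpy as np

def far_field_intensity(theta, d, n_grooves, wavelength):
    """Far-field intensity of n point 'facets' spaced d apart, obtained by
    direct superposition of their complex phase contributions."""
    k = 2.0 * np.pi / wavelength
    positions = np.arange(n_grooves) * d
    field = np.exp(1j * k * np.sin(theta)[:, None] * positions[None, :]).sum(axis=1)
    return np.abs(field)**2

lam, d, n = 0.5, 2.0, 40          # wavelength, groove spacing, groove count
theta = np.linspace(0.01, 0.5, 2000)   # angles excluding the zero order
inten = far_field_intensity(theta, d, n, lam)
# Grating equation: the first-order peak sits at sin(theta) = lam / d = 0.25.
```

The peak intensity approaches n² at the diffraction order, which is the coherent-superposition signature the facet-sum method generalizes to curved surfaces.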

  12. The effects of calculator-based laboratories on standardized test scores

    NASA Astrophysics Data System (ADS)

    Stevens, Charlotte Bethany Rains

    Nationwide, the goal of providing a productive science and math education in today's educational institutions centers on the technology utilized in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBLs) have become significant devices in the teaching of science and math in many states across the United States. The Texas Instruments graphing calculator and Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across this country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 tenth and eleventh grade physical science students, 101 of whom belonged to a control group and 87 to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in the middle Tennessee

  13. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services

    PubMed Central

    Rajabi, A; Dabiri, A

    2012-01-01

    Background Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to diagnostic and operational departments based on the cost drivers. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
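The allocation step can be sketched in a few lines: administrative activity-center costs are distributed to operational centers in proportion to a cost driver, and each center's cost price is its total cost divided by service volume. All numbers, center names, and the single driver below are invented for illustration:

```python
# Hypothetical ABC-style allocation: one administrative cost pool is
# allocated by a driver (e.g. staff hours), then unit cost price is
# (direct cost + allocated cost) / number of services delivered.
admin_cost = 120_000.0
driver = {"radiology": 300, "laboratory": 200, "ward": 500}
direct_cost = {"radiology": 400_000.0, "laboratory": 250_000.0, "ward": 900_000.0}
services = {"radiology": 10_000, "laboratory": 25_000, "ward": 8_000}

total_driver = sum(driver.values())
cost_price = {}
for center in driver:
    allocated = admin_cost * driver[center] / total_driver
    cost_price[center] = (direct_cost[center] + allocated) / services[center]
```

A tariff-based figure, by contrast, would be a fixed price per service that ignores how the indirect pool is actually consumed, which is the discrepancy the study reports.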

  14. All-electron topological insulator in InAs double wells

    NASA Astrophysics Data System (ADS)

    Erlingsson, Sigurdur I.; Egues, J. Carlos

    2015-01-01

    We show that electrons in ordinary III-V semiconductor double wells with an in-plane modulating periodic potential and interwell spin-orbit interaction are tunable topological insulators (TIs). Here the essential TI ingredients, namely, band inversion and the opening of an overall bulk gap in the spectrum arise, respectively, from (i) the combined effect of the double-well even-odd state splitting ΔSAS together with the superlattice potential and (ii) the interband Rashba spin-orbit coupling η. We corroborate our exact diagonalization results with an analytical nearly-free-electron description that allows us to derive an effective Bernevig-Hughes-Zhang model. Interestingly, the gate-tunable mass gap M drives a topological phase transition featuring a discontinuous Chern number at ΔSAS ∼ 5.4 meV. Finally, we explicitly verify the bulk-edge correspondence by considering a strip configuration and determining not only the bulk bands in the nontopological and topological phases but also the edge states and their Dirac-like spectrum in the topological phase. The edge electronic densities exhibit peculiar spatial oscillations as they decay away into the bulk. For concreteness, we present our results for InAs-based wells with realistic parameters.

  15. Calculation of thermal expansion coefficient of glasses based on topological constraint theory

    NASA Astrophysics Data System (ADS)

    Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi

    2016-10-01

    In this work, the thermal expansion behavior and structural configuration evolution of glasses were studied. The degree of freedom based on topological constraint theory is correlated with configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degree of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach is energetically favorable for glass materials and revealed the corresponding underlying essence from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that lack periodic order.

  16. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    NASA Astrophysics Data System (ADS)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  17. Computer-based ST/HR slope calculation on Marquette CASE 12: development and technical considerations.

    PubMed

    Kligfield, P; Okin, P M; Stumpf, T; Zachman, D

    1988-01-01

    Computer-based implementation of the ST/HR slope on Marquette CASE 12 is described. ST-segment measurement is performed with improved software for QRS detection and incremental signal updating, and on-line test calculation results from automated linear regression, leading to graphic display of the maximum ST/HR slope at the end of exercise. PMID:3216168

  18. The Effect of Calculator-Based Ranger Activities on Students' Graphing Ability.

    ERIC Educational Resources Information Center

    Kwon, Oh Nam

    2002-01-01

    Addresses three issues of Calculator-based Ranger (CBR) activities on graphing abilities: (a) the effect of CBR activities on graphing abilities; (b) the extent to which prior knowledge about graphing skills affects graphing ability; and (c) the influence of instructional styles on students' graphing abilities. Indicates that CBR activities are…

  19. Preliminary results of transport property calculations for molten Ag-based superionics

    NASA Astrophysics Data System (ADS)

    Oztek, H. O.; Yılmaz, M.; Kavanoz, H. B.

    2016-03-01

    We studied molten Ag-based superionics (AgI, Ag2S, and Ag3SI), which are well described by the Vashishta-Rahman potential. Molecular dynamics simulations were performed with the Moldy code in the NPT ensemble. Thermal properties are obtained from the Green-Kubo formalism with equilibrium molecular dynamics (EMD) simulations. These calculated results are compared with experimental results.

  20. 40 CFR 1066.605 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... meter inlet, measured directly or calculated as the sum of atmospheric pressure plus a differential pressure referenced to atmospheric pressure. T std = standard temperature. p std = standard pressure. T in... specified in paragraph (c) of this section or in 40 CFR part 1065, subpart G, as applicable. (b) See...

  1. Quantum calculation of protein NMR chemical shifts based on the automated fragmentation method.

    PubMed

    Zhu, Tong; Zhang, John Z H; He, Xiao

    2015-01-01

    The performance of quantum mechanical methods for the calculation of protein NMR chemical shifts is reviewed based on the recently developed automated fragmentation quantum mechanics/molecular mechanics (AF-QM/MM) approach. By using the Poisson-Boltzmann (PB) model and first-solvation-shell water molecules, the influence of solvent effects is also discussed. Benefiting from the fragmentation algorithm, the AF-QM/MM approach is computationally efficient and linear-scaling with a low pre-factor, and thus can be applied to routinely calculate ab initio NMR chemical shifts for proteins of any size. The results calculated using Density Functional Theory (DFT) show that when the solvent effect is included, this method can accurately reproduce the experimental ¹H NMR chemical shifts, while the ¹³C NMR chemical shifts are less affected by the solvent. However, although the inclusion of solvent effects shows significant improvement for ¹⁵N chemical shifts, the calculated values still deviate substantially from the experimental observations. Our study further demonstrates that the AF-QM/MM results accurately reflect the dependence of ¹³C(α) NMR chemical shifts on the secondary structure of proteins, and the calculated ¹H chemical shifts can be utilized to discriminate the native structure of proteins from decoys.

  2. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals

    NASA Astrophysics Data System (ADS)

    Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav

    2016-02-01

    A comparative study of the lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions calculated using coupled-cluster theory with iterative treatment of single and double excitations and a perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol⁻¹ on an absolute scale) with an unbiased distribution of positive to negative deviations. As pair interaction energies present a dominant contribution to the lattice energy and CCSD(T)/CBS calculations remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable, enabling the application of fragment-based methods for larger systems.
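The additive scheme's dominant two-body term can be sketched generically: sum pair interaction energies between a reference molecule and its neighbors, with a half factor to avoid double counting. The sketch below uses a Lennard-Jones pair energy on a simple cubic lattice purely as a stand-in for the CCSD(T)/CBS dimer energies (all parameters invented):

```python
import itertools
import numpy as np

def lj_pair(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy, standing in for an ab initio dimer energy."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def lattice_energy_pairwise(a, n_cells=6):
    """Two-body term of an additive lattice-energy scheme: half the sum of
    pair energies between a reference site and its images on a simple cubic
    lattice of spacing a, truncated at n_cells shells in each direction."""
    energy = 0.0
    rng = range(-n_cells, n_cells + 1)
    for i, j, k in itertools.product(rng, rng, rng):
        if (i, j, k) == (0, 0, 0):
            continue
        r = a * np.sqrt(i * i + j * j + k * k)
        energy += 0.5 * lj_pair(r)
    return energy
```

In the real scheme each `lj_pair` call is replaced by an expensive CCSD(T)/CBS dimer calculation at the crystal geometry, which is why the benchmarking of cheaper pair-energy methods matters.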

  4. A novel lateral disequilibrium inclusive (LDI) pencil-beam based dose calculation algorithm: Evaluation in inhomogeneous phantoms and comparison with Monte Carlo calculations

    SciTech Connect

    Wertz, Hansjoerg; Jahnke, Lennart; Schneider, Frank; Polednik, Martin; Fleckenstein, Jens; Lohr, Frank; Wenz, Frederik

    2011-03-15

    Purpose: Pencil-beam (PB) based dose calculation for treatment planning is limited by inaccuracies in regions of tissue inhomogeneities, particularly in situations with lateral electron disequilibrium as is present at tissue/lung interfaces. To overcome these limitations, a new "lateral disequilibrium inclusive" (LDI) PB based calculation algorithm was introduced. In this study, the authors evaluated the accuracy of the new model by film and ionization chamber measurements and Monte Carlo simulations. Methods: To validate the performance of the new LDI algorithm implemented in Corvus 09, eight test plans were generated on inhomogeneous thorax and pelvis phantoms. In addition, three plans were calculated with a simple effective path length (EPL) algorithm on the inhomogeneous thorax phantom. To simulate homogeneous tissues, four test plans were evaluated in homogeneous phantoms (homogeneous dose calculation). Results: The mean pixel pass rates and standard deviations of the gamma 4%/4 mm test for the film measurements were (96±3)% for the plans calculated with LDI, (70±5)% for the plans calculated with EPL, and (99±1)% for the homogeneous plans. Ionization chamber measurements and Monte Carlo simulations confirmed the high accuracy of the new algorithm (dose deviations ≤4%; gamma 3%/3 mm ≥96%). Conclusions: LDI represents an accurate and fast dose calculation algorithm for treatment planning.

  5. Holographic three-dimensional display and hologram calculation based on liquid crystal on silicon device [invited].

    PubMed

    Li, Junchang; Tu, Han-Yen; Yeh, Wei-Chieh; Gui, Jinbin; Cheng, Chau-Jern

    2014-09-20

    Based on scalar diffraction theory and the geometric structure of liquid crystal on silicon (LCoS), we study the impulse responses and image depth of focus in a holographic three-dimensional (3D) display system. Theoretical expressions of the impulse response and the depth of focus of reconstructed 3D images are obtained, and experimental verifications of the imaging properties are performed. The results indicated that the images formed by holographic display based on the LCoS device were periodic image fields surrounding optical axes. The widths of the image fields were directly proportional to the wavelength and diffraction distance, and inversely proportional to the pixel size of the LCoS device. Based on the features of holographic 3D imaging and focal depth, we enhance currently popular hologram calculation methods of 3D objects to improve the computing speed of hologram calculation.

  6. [Risk factor calculator for medical underwriting of life insurers based on the PROCAM study].

    PubMed

    Geritse, A; Müller, G; Trompetter, T; Schulte, H; Assmann, G

    2008-06-01

    For its electronic manual GEM, used to perform medical risk assessment in life insurance, SCOR Global Life Germany has developed an innovative and evidence-based calculator of the mortality risk depending on cardiovascular risk factors. The calculator contains several new findings regarding medical underwriting, which were gained from the analysis of the PROCAM (Prospective Cardiovascular Münster) study. For instance, in the overall consideration of all risk factors of a medically examined applicant, BMI is not an independent risk factor. Further, given sufficient information, the total extra mortality of a person no longer results from adding up the ratings for the single risk factors. In fact, this new approach of risk assessment considers the interdependencies between the different risk factors. The new calculator is expected to improve risk selection and standard acceptances will probably increase.

  7. Efficient algorithms for semiclassical instanton calculations based on discretized path integrals

    SciTech Connect

    Kawatsu, Tsutomu E-mail: smiura@mail.kanazawa-u.ac.jp; Miura, Shinichi E-mail: smiura@mail.kanazawa-u.ac.jp

    2014-07-14

    The path-integral instanton method is a promising way to calculate the tunneling splitting of energies for degenerate two-state systems. In order to calculate the tunneling splitting, we need to take the zero-temperature limit, i.e., the limit of infinite imaginary-time duration. In the method developed by Richardson and Althorpe [J. Chem. Phys. 134, 054109 (2011)], the limit is simply replaced by a sufficiently long imaginary time. In the present study, we have developed a new formula for the tunneling splitting based on discretized path integrals that takes the limit analytically. We have applied our new formula to model systems and found that this approach can significantly reduce the computational cost and improve the numerical accuracy. We then combined the method with electronic structure calculations to obtain an accurate interatomic potential on the fly. We present an application of our ab initio instanton method to the ammonia umbrella flip motion.

  8. Structural predictions based on the compositions of cathodic materials by first-principles calculations

    NASA Astrophysics Data System (ADS)

    Li, Yang; Lian, Fang; Chen, Ning; Hao, Zhen-jia; Chou, Kuo-chih

    2015-05-01

    A first-principles method is applied to comparatively study the stability of lithium metal oxides with layered or spinel structures to predict the most energetically favorable structure for different compositions. The binding and reaction energies of the real or virtual layered LiMO2 and spinel LiM2O4 (M = Sc-Cu, Y-Ag, Mg-Sr, and Al-In) are calculated. The effect of element M on the structural stability, especially in the case of multiple-cation compounds, is discussed herein. The calculation results indicate that the phase stability depends on both the binding and reaction energies. The oxidation state of element M also plays a role in determining the dominant structure, i.e., layered or spinel phase. Moreover, calculation-based theoretical predictions of the phase stability of the doped materials agree with the previously reported experimental data.

  9. Extending fragment-based free energy calculations with library Monte Carlo simulation: annealing in interaction space.

    PubMed

    Lettieri, Steven; Mamonov, Artem B; Zuckerman, Daniel M

    2011-04-30

    Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via library-based Monte Carlo) and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric solvent and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor time is required. The combined approach is formally equivalent to the annealed importance sampling algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is grown. We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.

  10. An automated Monte-Carlo based method for the calculation of cascade summing factors

    NASA Astrophysics Data System (ADS)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
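For the simplest case such an algorithm must handle, a textbook two-gamma prompt cascade, the summing-out loss from a full-energy peak depends only on the total detection efficiency of the coincident gamma. A minimal sketch of that single correction factor (not the general ENSDF-driven algorithm of the paper):

```python
def summing_out_correction(total_eff_coincident: float) -> float:
    """Multiplicative correction for the full-energy peak of a gamma
    that is always in prompt coincidence with one other gamma whose
    total detection efficiency is `total_eff_coincident`.  The peak
    loses counts whenever the coincident gamma deposits any energy,
    so the apparent peak area is scaled by (1 - eps_total)."""
    if not 0.0 <= total_eff_coincident < 1.0:
        raise ValueError("total efficiency must be in [0, 1)")
    return 1.0 / (1.0 - total_eff_coincident)

# Example: a close counting geometry with 15% total efficiency for the
# coincident gamma requires a ~17.6% upward peak-area correction.
print(round(summing_out_correction(0.15), 4))  # 1.1765
```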

  11. Understanding Iron-based catalysts with efficient Oxygen reduction activity from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Hafiz, Hasnain; Barbiellini, B.; Jia, Q.; Tylus, U.; Strickland, K.; Bansil, A.; Mukerjee, S.

    2015-03-01

    Catalysts based on Fe/N/C clusters can support the oxygen-reduction reaction (ORR) without the use of expensive metals such as platinum. These systems can also prevent some poisonous species from blocking the active sites. We have performed spin-polarized calculations on various Fe/N/C fragments using the Vienna Ab initio Simulation Package (VASP) code. Some results are compared to similar calculations obtained with the Gaussian code. We investigate the partial density of states (PDOS) of the 3d orbitals near the Fermi level and calculate the binding energies of several ligands. Correlations of the binding energies with the 3d electronic PDOSs are used to propose electronic descriptors of the ORR associated with the 3d states of Fe. We also suggest a structural model for the most active site with a ferrous ion (Fe2+) in the high-spin state, the so-called Doublet 3 (D3).

  12. GPU-based acceleration of free energy calculations in solid state physics

    NASA Astrophysics Data System (ADS)

    Januszewski, Michał; Ptok, Andrzej; Crivelli, Dawid; Gardas, Bartłomiej

    2015-07-01

    Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter (OP). Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19× speedup compared to the CPU (119× compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite size effects.

  13. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One of the methods for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus, so that all segments are calculated in turn with the minimum traveltime tree algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
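The minimum traveltime tree underlying the method can be sketched as a shortest-path computation over grid nodes. The Dijkstra-style version below is an illustrative analogue only; the neighbour stencil, node spacing, and edge-slowness averaging are all simplifying assumptions:

```python
import heapq

def traveltimes(vel, src, h=1.0):
    """Minimum-traveltime tree on a 2D grid: vel[i][j] is the velocity
    at node (i, j), h the node spacing.  Returns first-arrival times
    from `src` to every node, computed Dijkstra-style over an
    8-neighbour stencil."""
    n, m = len(vel), len(vel[0])
    t = [[float("inf")] * m for _ in range(n)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while pq:
        ti, (i, j) = heapq.heappop(pq)
        if ti > t[i][j]:
            continue                      # stale queue entry
        for di, dj in steps:
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                d = h * (2 ** 0.5 if di and dj else 1.0)
                # Edge time: distance times mean slowness of its ends.
                dt = d * 0.5 * (1.0 / vel[i][j] + 1.0 / vel[a][b])
                if ti + dt < t[a][b]:
                    t[a][b] = ti + dt
                    heapq.heappush(pq, (t[a][b], (a, b)))
    return t

# Uniform 2 km/s model: time to a node 10 spacings away along an axis.
t = traveltimes([[2.0] * 11 for _ in range(11)], (0, 0))
print(t[0][10])  # 10 * 1.0 / 2.0 = 5.0
```

A residual field for locus tracing would then be formed by combining such traveltime tables from several stations.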

  14. Review of dynamical models for external dose calculations based on Monte Carlo simulations in urbanised areas.

    PubMed

    Eged, Katalin; Kis, Zoltán; Voigt, Gabriele

    2006-01-01

    After an accidental release of radionuclides to the inhabited environment the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) calculation of the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) a description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) a relevant urban model combining all these elements to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches of photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted together with a short intercomparison of model features.

  15. Calculating the detection limits of chamber-based soil greenhouse gas flux measurements.

    PubMed

    Parkin, T B; Venterea, R T; Hargreaves, S K

    2012-01-01

    Renewed interest in quantifying greenhouse gas emissions from soil has led to an increase in the application of chamber-based flux measurement techniques. Despite the apparent conceptual simplicity of chamber-based methods, nuances in chamber design, deployment, and data analyses can have marked effects on the quality of the flux data derived. In many cases, fluxes are calculated from chamber headspace vs. time series consisting of three or four data points. Several mathematical techniques have been used to calculate a soil gas flux from time course data. This paper explores the influences of sampling and analytical variability associated with trace gas concentration quantification on the flux estimated by linear and nonlinear models. We used Monte Carlo simulation to calculate the minimum detectable fluxes (α = 0.05) of linear regression (LR), the Hutchinson/Mosier (H/M) method, the quadratic method (Quad), the revised H/M (HMR) model, and restricted versions of the Quad and H/M methods over a range of analytical precisions and chamber deployment times (DT) for data sets consisting of three or four time points. We found that LR had the smallest detection limit thresholds and was the least sensitive to analytical precision and chamber deployment time. The HMR model had the highest detection limits and was most sensitive to analytical precision and chamber deployment time. Equations were developed that enable the calculation of flux detection limits of any gas species if analytical precision, chamber deployment time, and ambient concentration of the gas species are known.
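The Monte Carlo estimation of a minimum detectable flux for the linear-regression estimator can be sketched as follows; the sampling times, noise levels, and simulation count are illustrative, not the paper's settings:

```python
import random

def mdf_linear(sigma, times, n_sim=4000, seed=7):
    """Monte Carlo minimum detectable flux (alpha = 0.05) for the
    linear-regression flux estimator: simulate noise-only concentration
    series with analytical s.d. `sigma`, fit slopes by least squares,
    and take the 95th percentile of the absolute slope."""
    rng = random.Random(seed)
    n = len(times)
    tbar = sum(times) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    slopes = []
    for _ in range(n_sim):
        y = [rng.gauss(0.0, sigma) for _ in times]
        ybar = sum(y) / n
        slope = sum((t - tbar) * (yi - ybar)
                    for t, yi in zip(times, y)) / sxx
        slopes.append(abs(slope))
    slopes.sort()
    return slopes[int(0.95 * n_sim)]

# Four sampling times over a 45-minute deployment: the detection limit
# grows with the analytical noise level, as the paper's equations imply.
times = [0.0, 15.0, 30.0, 45.0]
print(mdf_linear(5.0, times), mdf_linear(10.0, times))
```

A real detection-limit calculation would convert the concentration slope to a flux using chamber volume, area, and ambient conditions.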

  16. GPU-based ultra-fast dose calculation using a finite size pencil beam model

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.

    2009-10-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
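The core superposition step of a pencil-beam dose engine, dose as a weighted sum of per-beamlet kernels, can be sketched in a few lines. The Gaussian kernel and 1D lateral grid below are simplifying assumptions standing in for measured FSPB kernels and a 3D voxel grid; on a GPU each grid point would be evaluated by one thread, while NumPy vectorizes it here:

```python
import numpy as np

def fspb_dose(weights, centers, grid, sigma=2.5):
    """Finite-size pencil beam (FSPB) superposition sketch: dose on a
    lateral grid is the weighted sum of per-beamlet kernels, each
    normalized to unit integral on the grid."""
    x = np.asarray(grid, dtype=float)
    dose = np.zeros_like(x)
    for w, c in zip(weights, centers):
        k = np.exp(-0.5 * ((x - c) / sigma) ** 2)
        dose += w * k / k.sum()          # kernel normalized on the grid
    return dose

grid = np.linspace(-20, 20, 401)
dose = fspb_dose([1.0, 0.5, 0.25], [-5.0, 0.0, 5.0], grid)
print(float(dose.sum()))  # equals the sum of beamlet weights: 1.75
```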

  17. Modelling lateral beam quality variations in pencil kernel based photon dose calculations.

    PubMed

    Nyholm, T; Olofsson, J; Ahnesjö, A; Karlsson, M

    2006-08-21

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. 
The maximum relative error

  18. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    NASA Astrophysics Data System (ADS)

    Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.

    2006-08-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. 
The maximum relative error

  19. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations.

    PubMed

    Baxa, Michael C; Haddadian, Esmael J; Jumper, John M; Freed, Karl F; Sosnick, Tobin R

    2014-10-28

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol(-1) per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix-sheet = 0.5 kcal⋅mol(-1)), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR.
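A crude illustration of extracting conformational entropy from simulation ensembles: rotamer-bin probabilities from a dihedral sample give S = -R Σ p ln p, and the folded/unfolded difference recovers R ln 3 for a three-rotamer toy case. The paper's correlated, all-degree-of-freedom estimate is far more involved; everything below is a made-up illustration:

```python
import math, random

R = 1.987e-3  # gas constant, kcal/(mol*K)

def ts_conformational(angles_deg, n_bins=3, T=298.15):
    """T*S from a 1D dihedral sample using rotamer-bin probabilities,
    S = -R * sum p ln p; a crude stand-in for the full correlated
    estimate used in the paper."""
    counts = [0] * n_bins
    for a in angles_deg:
        counts[int((a % 360.0) / (360.0 / n_bins))] += 1
    n = len(angles_deg)
    s = -R * sum(c / n * math.log(c / n) for c in counts if c)
    return T * s

rng = random.Random(0)
unfolded = [rng.uniform(0, 360) for _ in range(30000)]   # all 3 rotamers
folded = [60 + rng.gauss(0, 10) for _ in range(30000)]   # one rotamer
dTS = ts_conformational(unfolded) - ts_conformational(folded)
print(round(dTS, 2))  # ~ R*T*ln(3) ≈ 0.65 kcal/mol lost on folding
```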

  20. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    PubMed Central

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol−1 per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix−sheet = 0.5 kcal⋅mol−1), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  1. Overcoming misconceptions of graph interpretation of kinematics motion using calculator based rangers

    NASA Astrophysics Data System (ADS)

    Olson, John R.

    This is a quasi-experimental study of 261 first-year high school students that analyzes learning gains made through the use of calculator-based rangers (CBRs) attached to calculators. The study has qualitative components but is based on quantitative tests; Beichner's TUG-K test was used for the pretest, posttest, and post-posttest. The population was divided into one group that predicted the results before using the CBRs and another that completed the same activities without predicting first. The data for the groups were further disaggregated by learning style (based on Kolb's Learning Styles Inventory), type of class (advanced vs. general physics), and gender. Four instructors used the labs developed by the author for this study, and significant differences between groups by instructor were found based on interviews, participant observation, and one-way ANOVA. No significant differences were found between learning styles based on MANOVA. No significant differences were found between the predict and non-predict groups in the one-way ANOVAs or MANOVA; however, some differences do exist as measured by a survey and participant observation. Significant differences do exist by gender and type of class (advanced/general) based on one-way ANOVA and MANOVA. The males outscored the females on all tests, and the advanced physics students scored higher than the general physics students on all tests. The advanced physics students scoring higher was expected, but the difference between genders was not.

  2. An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet

    NASA Astrophysics Data System (ADS)

    Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon

    2015-08-01

    The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
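The activity-based core of such a methodology, per-record fuel consumption from an engine load estimated via the cubic propeller law, can be sketched as below. The engine power, SFOC, and CO2 emission factor are illustrative defaults, not the study's vessel-specific values:

```python
def voyage_emissions(records, mcr_kw, sfoc_g_per_kwh=200.0,
                     ef_co2=3.206, service_speed=10.0):
    """Activity-based fuel and CO2 estimate from AIS-like records of
    (speed_knots, hours).  Engine load follows the cubic propeller law
    capped at 100% MCR; SFOC (g/kWh) and the CO2 emission factor
    (t CO2 per t fuel) are illustrative defaults."""
    fuel_t = 0.0
    for speed, hours in records:
        load = min((speed / service_speed) ** 3, 1.0)
        fuel_t += mcr_kw * load * sfoc_g_per_kwh * hours / 1e6  # g -> t
    return fuel_t, fuel_t * ef_co2

# Two steaming legs and one slow trawling leg for a 1500 kW vessel.
fuel, co2 = voyage_emissions([(10.0, 5.0), (8.0, 3.0), (3.0, 6.0)], 1500.0)
print(round(fuel, 3), round(co2, 3))  # 2.009 t fuel, 6.442 t CO2
```

The paper's point about towing gear corresponds to replacing the cubic-law load with a fixed towing load for records flagged as trawling or dredging.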

  3. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    NASA Astrophysics Data System (ADS)

    Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.

    2016-08-01

    In this paper we present the results of band-structure computer simulations of GaSe-based nanostructures using the empirical pseudopotential method. The calculations were performed with specially developed software that supports cluster computing. This approach significantly reduces the demands on computing resources compared with traditional ab initio techniques while producing comparably adequate results. The use of cluster computing makes it possible to treat structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars.

  4. Automated Calculation of Water-equivalent Diameter (DW) Based on AAPM Task Group 220.

    PubMed

    Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam; Dougherty, Geoff

    2016-01-01

    The purpose of this study is to accurately and effectively automate the calculation of the water-equivalent diameter (DW) from 3D CT images for estimating the size-specific dose. DW is the metric that characterizes the patient size and attenuation. In this study, DW was calculated for standard CTDI phantoms and patient images. Two types of phantom were used, one representing the head with a diameter of 16 cm and the other representing the body with a diameter of 32 cm. Images of 63 patients were also taken, 32 who had undergone a CT head examination and 31 who had undergone a CT thorax examination. There are three main parts to our algorithm for automated DW calculation. The first part reads the 3D images and converts the CT data into Hounsfield units (HU). The second part finds the contour of the phantom or patient automatically. The third part automates the calculation of DW based on the automated contouring for every slice (DW,all). The results of this study show that the automated and manual calculations of DW are in good agreement for phantoms and patients, with differences of less than 0.5%. The results also show that estimating DW,all using DW,n=1 (the central slice along the longitudinal axis) produces percentage differences of -0.92% ± 3.37% and 6.75% ± 1.92%, whereas estimating DW,all using DW,n=9 produces percentage differences of 0.23% ± 0.16% and 0.87% ± 0.36%, for thorax and head examinations, respectively. From this study, the percentage differences between the normalized size-specific dose estimate for every slice (nSSDEall) and nSSDEn=1 are 0.74% ± 2.82% and -4.35% ± 1.18% for thorax and head examinations, respectively; between nSSDEall and nSSDEn=9 they are 0.00% ± 0.46% and -0.60% ± 0.24%, respectively. PMID:27455491
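Per AAPM TG-220, the water-equivalent diameter of a slice follows from the mean HU inside the patient contour: the contour area is scaled by (mean HU/1000 + 1) and converted to the diameter of an equal-area circle. A sketch with a simple threshold contour (the paper's automated contouring is more careful):

```python
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_area_mm2):
    """Dw for one axial slice following AAPM TG-220: a simple -300 HU
    threshold stands in for the paper's automated contouring, the mean
    HU inside the mask converts the contour area to a water-equivalent
    area, and Dw is the diameter of the equal-area circle (mm)."""
    mask = hu_slice > -300
    a_roi = mask.sum() * pixel_area_mm2
    mean_hu = hu_slice[mask].mean()
    a_w = (mean_hu / 1000.0 + 1.0) * a_roi
    return 2.0 * np.sqrt(a_w / np.pi)

# Synthetic water cylinder (0 HU) of radius 80 mm in air (-1000 HU):
yy, xx = np.mgrid[-128:128, -128:128]
img = np.where(xx**2 + yy**2 <= 80**2, 0.0, -1000.0)
dw = water_equivalent_diameter(img, pixel_area_mm2=1.0)
print(round(float(dw), 1))  # close to the physical 160 mm diameter
```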

  5. A Monte Carlo-based procedure for independent monitor unit calculation in IMRT treatment plans.

    PubMed

    Pisaturo, O; Moeckli, R; Mirimanoff, R-O; Bochud, F O

    2009-07-01

    Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires having access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step and shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. Several treatment plan localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA and further development could allow this technique to be used for other radiotherapy techniques like tomotherapy or volumetric modulated arc therapy.
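The MU fit described above reduces to a non-negative least-squares problem d ≈ A·MU, where columns of A hold per-beamlet dose per MU. Below is a dependency-free sketch using projected gradient descent in place of the Lawson-Hanson NNLS solver; the matrix and ground truth are made-up test data:

```python
import numpy as np

def nnls_mu(A, d, iters=5000):
    """Solve d ~ A @ mu subject to mu >= 0 by projected gradient
    descent; a simple stand-in for a Lawson-Hanson NNLS solver.
    Columns of A: per-beamlet dose-to-water per MU at each voxel;
    d: the target dose."""
    A, d = np.asarray(A, float), np.asarray(d, float)
    mu = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz step size
    for _ in range(iters):
        mu = np.clip(mu - step * A.T @ (A @ mu - d), 0.0, None)
    return mu

# Three beamlets, four dose voxels, known non-negative ground truth.
A = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.1, 0.0, 0.5]])
mu_true = np.array([120.0, 0.0, 80.0])
mu = nnls_mu(A, A @ mu_true)
print(np.round(mu, 2))  # recovers [120, 0, 80]
```

The non-negativity constraint is what makes the fit physical: beamlets cannot deliver negative monitor units.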

  6. Development of facile property calculation model for adsorption chillers based on equilibrium adsorption cycle

    NASA Astrophysics Data System (ADS)

    Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal management technology Team

    A facile property calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are a promising technology for using heat efficiently, because they can generate cooling energy from relatively low-temperature heat sources. Their properties are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer rate and the adsorption/desorption rate. In our model, the dependence of adsorption chiller properties on the heat source temperatures is represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of the heat source temperatures, which represent the differences between equilibrium cycles and real cycles stemming from kinetic adsorption processes. We found that the present approximated equilibrium model can calculate the properties of adsorption chillers (driving energies, cooling energies, COP, etc.) under various driving conditions quickly and accurately, within average errors of 6% compared to experimental data.

  7. Brittleness index calculation and evaluation for CBM reservoirs based on AVO simultaneous inversion

    NASA Astrophysics Data System (ADS)

    Wu, Haibo; Dong, Shouhua; Huang, Yaping; Wang, Haolong; Chen, Guiwu

    2016-11-01

    In this paper, a new approach is proposed for coalbed methane (CBM) reservoir brittleness index (BI) calculations. The BI, as a guide for fracture area selection, is calculated from dynamic elastic parameters (dynamic Young's modulus Ed and dynamic Poisson's ratio υd) obtained from an amplitude versus offset (AVO) simultaneous inversion. Among the three classes of CBM reservoirs distinguished on the basis of brittleness in the theoretical part of this study, class I reservoirs with high BI values are identified as preferential target areas for fracturing. Therefore, we first derive the AVO approximation equation expressed in terms of Ed and υd. This allows the direct inversion of the dynamic elastic parameters through the pre-stack AVO simultaneous inversion, which is based on Bayes' theorem. Thereafter, a test model with Gaussian white noise and a through-well seismic profile inversion are used to demonstrate the high reliability of the inverted parameters. Accordingly, the BI of a CBM reservoir section from the Qinshui Basin is calculated using the proposed method, and a class I reservoir section is identified through brittleness evaluation. From the outcome of this study, we believe the adoption of this new approach could act as a guide and reference for BI calculations and evaluations of CBM reservoirs.
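A common way to turn elastic parameters into a normalized brittleness index is Rickman-style min/max scaling of Young's modulus and Poisson's ratio. The sketch below uses that generic normalization with illustrative bounds; the paper's exact BI definition from the dynamic (Ed, υd) may differ:

```python
def brittleness_index(E, nu, E_min=10.0, E_max=45.0,
                      nu_min=0.15, nu_max=0.40):
    """Normalized brittleness index (percent) from Young's modulus E
    (GPa) and Poisson's ratio nu, using Rickman-style min/max scaling.
    High E and low nu -> brittle.  The bounds here are illustrative;
    in the paper's workflow the dynamic Ed and nu_d from the AVO
    simultaneous inversion would be supplied."""
    e_term = (E - E_min) / (E_max - E_min)
    nu_term = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (e_term + nu_term)     # average of the two, in percent

# A stiff, low-Poisson's-ratio (brittle) interval vs a ductile one:
print(brittleness_index(40.0, 0.20), brittleness_index(15.0, 0.35))
```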

  8. Thermal conductivity calculation of bio-aggregates based materials using finite and discrete element methods

    NASA Astrophysics Data System (ADS)

    Pennec, Fabienne; Alzina, Arnaud; Tessier-Doyen, Nicolas; Naitali, Benoit; Smith, David S.

    2012-11-01

    This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes, and the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the volume element and the finite element method to calculate the homogenized properties. A 3D optical scanner has been used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the representative volume element (RVE) is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The effective thermal conductivity of the heterogeneous volume is then calculated using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements have been performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compare satisfactorily with a batch of numerical simulations.
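A quick consistency check for any homogenized conductivity is that it must fall between the series and parallel (Wiener) bounds of the constituents. The sketch below computes that envelope for a binder + plant-particle composite; the conductivities and volume fraction are illustrative values, not the paper's data:

```python
def conductivity_bounds(k_matrix, k_particle, phi):
    """Wiener (series/parallel) bounds on the effective thermal
    conductivity of a two-phase composite with particle volume
    fraction `phi`; any FEM-homogenized value must lie inside this
    envelope."""
    k_parallel = (1 - phi) * k_matrix + phi * k_particle   # upper bound
    k_series = 1.0 / ((1 - phi) / k_matrix + phi / k_particle)  # lower
    return k_series, k_parallel

# Illustrative binder (0.7 W/m.K) with 60 vol% plant particles (0.05):
lo, hi = conductivity_bounds(k_matrix=0.7, k_particle=0.05, phi=0.6)
print(round(lo, 4), round(hi, 4))  # the effective k lies between these
```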

  9. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
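The HU-replacement idea, learning a mapping from MRI intensity to HU on an artifact-free slice and applying it to the corrupted one, can be sketched with a simple 1D polynomial regression. The paper's comprehensive per-pixel analysis (and the preceding 3D/2D deformable registration) is much more involved; the data below are synthetic:

```python
import numpy as np

def hu_from_mri(mri_corrupted, mri_clean, hu_clean, deg=2):
    """Predict HU for a corrupted CT slice from a coregistered MRI:
    fit a polynomial mapping on paired (MRI intensity, HU) pixels from
    a nearby artifact-free slice, then evaluate it on the corrupted
    slice's MRI intensities.  A 1D-regression stand-in for the paper's
    per-pixel analysis."""
    coeffs = np.polyfit(mri_clean.ravel(), hu_clean.ravel(), deg)
    return np.polyval(coeffs, mri_corrupted)

# Synthetic check: if HU is quadratic in MRI intensity on the clean
# slice, the fitted mapping reproduces the artifact-free HU values.
rng = np.random.default_rng(0)
mri_clean = rng.uniform(0, 1, (64, 64))
hu_clean = 1500 * mri_clean**2 - 900
mri_bad = rng.uniform(0, 1, (64, 64))
hu_pred = hu_from_mri(mri_bad, mri_clean, hu_clean)
print(bool(np.allclose(hu_pred, 1500 * mri_bad**2 - 900)))  # True
```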

  10. Develop and test a solvent accessible surface area-based model in conformational entropy calculations.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2012-05-25

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules whose conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
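    The WSAS model reduces to a weighted sum over atoms. A minimal sketch under one plausible reading of the scheme (one weight per atom type applied to both areas, with k scaling the buried contribution); the atom types, areas, and weights below are illustrative, not the paper's fitted parameters:

```python
def wsas_entropy(atoms, weights, k):
    """Conformational entropy as a weighted per-atom surface-area sum:
    atoms of the same type share one weight, and the global parameter k
    balances the buried-area (BSAS) contribution against the exposed one."""
    return sum(weights[t] * (sas + k * bsas) for t, sas, bsas in atoms)

# (atom type, SAS, BSAS) in squared angstroms -- made-up values.
atoms = [("C.3", 12.0, 8.0), ("O.2", 20.0, 2.0), ("C.3", 0.0, 18.0)]
weights = {"C.3": 0.05, "O.2": 0.03}     # hypothetical per-type weights
ts = 298.15 * wsas_entropy(atoms, weights, k=0.5)   # TS at T = 298.15 K
```

Note that the buried atom (SAS = 0) still contributes through its BSAS term, matching the statement above that all atoms are summed whether buried or exposed.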

  11. Efficient Procedure for the Numerical Calculation of Harmonic Vibrational Frequencies Based on Internal Coordinates

    SciTech Connect

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2013-08-15

    We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm–1 from those obtained from Cartesian coordinates.
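    The quoted savings formula can be evaluated directly. A small sketch for the hydrogen-bonded clusters mentioned in the abstract:

```python
def cartesian_savings(n_atoms):
    """Single-point energy calculations saved by the internal-coordinate
    scheme relative to Cartesian double differentiation for a C1-symmetry
    molecule, per the 36N - 30 expression quoted in the abstract."""
    return 36 * n_atoms - 30

# Water dimer (N = 6) and water trimer (N = 9):
print(cartesian_savings(6))   # 186
print(cartesian_savings(9))   # 294
```

The savings grow linearly with system size, which is why the reduction is most valuable for medium-size molecules where each single-point energy is expensive.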

  12. Efficient procedure for the numerical calculation of harmonic vibrational frequencies based on internal coordinates.

    PubMed

    Miliordos, Evangelos; Xantheas, Sotiris S

    2013-08-15

    We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. In all cases the frequencies based on internal coordinates differ on average by <1 cm(-1) from those obtained from Cartesian coordinates.

  13. Performance of IRI-based ionospheric critical frequency calculations with reference to forecasting

    NASA Astrophysics Data System (ADS)

    Ünal, İbrahim; Şenalp, Erdem Türker; Yeşil, Ali; Tulunay, Ersin; Tulunay, Yurdanur

    2011-01-01

    Ionospheric critical frequency (foF2) is an important ionospheric parameter in telecommunication. Ionospheric processes are highly nonlinear and time varying. Thus, mathematical modeling based on physical principles is extremely difficult if not impossible. The authors forecast foF2 values by using neural networks and, in parallel, they calculate foF2 values based on the IRI model. The foF2 values were forecast 1 h in advance by using the Middle East Technical University Neural Network model (METU-NN) and the work was reported previously. Since then, the METU-NN has been improved. In this paper, 1 h in advance forecast foF2 values and the calculated foF2 values have been compared with the observed values considering the Slough (51.5°N, 0.6°W), Uppsala (59.8°N, 17.6°E), and Rome (41.8°N, 12.5°E) station foF2 data. The authors have considered the models alternative to each other. The performance results of the models are promising. The METU-NN foF2 forecast errors are smaller than the calculated foF2 errors. The models may be used in parallel employing the METU-NN as the primary source for the foF2 forecasting.

  14. Performance of IRI-based ionospheric critical frequency calculations with reference to forecasting

    NASA Astrophysics Data System (ADS)

    Ünal, İbrahim; Şenalp, Erdem Türker; Yeşil, Ali; Tulunay, Ersin; Tulunay, Yurdanur

    2011-02-01

    Ionospheric critical frequency (foF2) is an important ionospheric parameter in telecommunication. Ionospheric processes are highly nonlinear and time varying. Thus, mathematical modeling based on physical principles is extremely difficult if not impossible. The authors forecast foF2 values by using neural networks and, in parallel, they calculate foF2 values based on the IRI model. The foF2 values were forecast 1 h in advance by using the Middle East Technical University Neural Network model (METU-NN) and the work was reported previously. Since then, the METU-NN has been improved. In this paper, 1 h in advance forecast foF2 values and the calculated foF2 values have been compared with the observed values considering the Slough (51.5°N, 0.6°W), Uppsala (59.8°N, 17.6°E), and Rome (41.8°N, 12.5°E) station foF2 data. The authors have considered the models alternative to each other. The performance results of the models are promising. The METU-NN foF2 forecast errors are smaller than the calculated foF2 errors. The models may be used in parallel employing the METU-NN as the primary source for the foF2 forecasting.

  15. Monte Carlo-based dose calculation for 32P patch source for superficial brachytherapy applications

    PubMed Central

    Sahoo, Sridhar; Palani, Selvam T.; Saxena, S. K.; Babu, D. A. R.; Dash, A.

    2015-01-01

    Skin cancer treatment with a 32P source is a simple, inexpensive method limited to small, superficial lesions approximately 1 mm deep. Bhabha Atomic Research Centre (BARC) has indigenously developed a 32P nafion-based patch source (1 cm × 1 cm) for treating skin cancer. For this source, the values of dose per unit activity at different depths, including dose profiles in water, are calculated using the EGSnrc-based Monte Carlo code system. For an initial activity of 1 Bq distributed in the 1 cm² surface area of the source, the calculated central-axis depth dose values are 3.62 × 10⁻¹⁰ Gy·Bq⁻¹ and 8.41 × 10⁻¹¹ Gy·Bq⁻¹ at 0.0125 and 1 mm depths in water, respectively. Hence, the treatment time calculated for delivering a therapeutic dose of 30 Gy at 1 mm depth along the central axis of the source with 37 MBq activity is about 2.7 h. PMID:26150682
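    The quoted treatment time follows from the tabulated depth-dose value if it is read as a per-second dose rate per becquerel of initial activity (an assumed interpretation of the unit; decay of 32P, half-life about 14.3 d, is negligible over a few hours):

```python
# Central-axis dose at 1 mm depth per unit activity, from the abstract.
dose_per_bq_s = 8.41e-11      # Gy per Bq per second (assumed interpretation)
activity = 37e6               # Bq
target_dose = 30.0            # Gy

dose_rate = activity * dose_per_bq_s          # Gy/s
time_h = target_dose / dose_rate / 3600.0
print(round(time_h, 1))       # 2.7 -- consistent with the quoted ~2.7 h
```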

  16. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.

    PubMed

    Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B

    2013-06-21

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
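    The pre-sorting idea behind the PSL method can be illustrated on a toy phase-space. Everything below is made up for illustration (random particle arrays, a hypothetical ROI half-width, 0.5 MeV energy bins); the point is only the grouping of similar particles for parallel transport:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ptype = rng.integers(0, 2, n)           # 0 = photon, 1 = electron (toy labels)
energy = rng.uniform(0.0, 6.0, n)       # MeV
x = rng.uniform(-20.0, 20.0, n)         # cm, lateral position on scoring plane

# Discard particles outside a rectangular region of interest around the
# field, then sort by (type, energy bin) so that parallel threads transport
# particles of the same type and similar energy together.
roi = np.abs(x) <= 7.0                  # hypothetical ROI half-width
ptype, energy, x = ptype[roi], energy[roi], x[roi]
ebin = np.digitize(energy, np.arange(0.5, 6.0, 0.5))
order = np.lexsort((ebin, ptype))       # last key (type) is the primary key
ptype, energy, x, ebin = ptype[order], energy[order], x[order], ebin[order]
```

In the actual PSL scheme this sorting is done once, offline, on patient-independent phase-spaces, so the per-plan cost of reading and grouping particles largely disappears.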

  17. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT

    SciTech Connect

    Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K

    2014-06-01

    Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and the CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images immediately before the treatment of 10 prostate cancer patients were acquired. Because of insufficient calibration of the pixel values in the CBCT, it is difficult to use them directly for dose calculation. The pixel values in the CBCT images were therefore converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: After pixel-value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively; as a whole, the difference of prescription dose was 1.54±0.4%. Conclusion: The dose calculation on the CBCT images achieves an accuracy of <2% by using this pixel-value conversion program. This may enable implementation of efficient adaptive radiotherapy.
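    One plausible realization of such a histogram-based pixel-value conversion is cumulative-histogram matching; the authors' in-house program may differ in detail. A sketch with a toy check that a constant-offset CBCT maps back onto the sim-CT values:

```python
import numpy as np

def match_histogram(cbct, simct):
    """Map each CBCT grey level to the sim-CT value at the same quantile
    of the cumulative histogram (one common way to transfer intensity
    distributions between imaging modalities)."""
    values, counts = np.unique(cbct, return_counts=True)
    cdf = np.cumsum(counts) / cbct.size
    ref = np.sort(simct.ravel())
    idx = np.clip((cdf * ref.size).astype(int) - 1, 0, ref.size - 1)
    lookup = dict(zip(values, ref[idx]))
    return np.vectorize(lookup.get)(cbct)

# Toy data: a "CBCT" that is the sim-CT shifted by a constant offset.
simct = np.tile(np.array([-50, 0, 40, 900]), 25)
cbct = simct + 120
corrected = match_histogram(cbct, simct)   # recovers the sim-CT values
```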

  18. A theoretical study of blue phosphorene nanoribbons based on first-principles calculations

    SciTech Connect

    Xie, Jiafeng; Si, M. S. Yang, D. Z.; Zhang, Z. Y.; Xue, D. S.

    2014-08-21

    Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon's width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters describing quantum confinement are obtained by fitting the calculated band gaps with respect to the ribbon widths. The results show that quantum confinement is stronger in armchair nanoribbons than in zigzag ones. This study provides an efficient approach to tuning the band gap in BPNRs.
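    Extracting confinement parameters by fitting gaps against widths can be sketched as follows. The functional form (exponent fixed at 1) and the data are toy assumptions, not the paper's calculated DFT gaps:

```python
import numpy as np

# Confinement ansatz E_g(w) = E_sheet + C / w: the gap approaches the 2D
# sheet value E_sheet as the width w grows, with confinement strength C.
# The model is linear in 1/w, so a least-squares line recovers both.
w = np.array([10.0, 15.0, 20.0, 30.0, 40.0])   # ribbon widths in angstroms (toy)
gaps = 2.0 + 10.0 / w                          # synthetic gaps in eV
C, E_sheet = np.polyfit(1.0 / w, gaps, 1)      # slope = C, intercept = E_sheet
```

A stronger fitted C for armchair than for zigzag ribbons would express, in this parametrization, the stronger confinement the abstract reports.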

  19. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Xinyi; Xia, Jun

    2016-09-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that our proposed method gives higher quality reconstruction than the traditional method. Project supported by the National Basic Research Program of China (Grant No. 2013CB328803) and the National High Technology Research and Development Program of China (Grant Nos. 2013AA013904 and 2015AA016301).
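    The GS iteration at the heart of the method can be sketched as a minimal phase-only retrieval loop. A Fourier-plane geometry and a toy square target are assumed; the paper's stereoscopic view handling is omitted:

```python
import numpy as np

def gs_phase_hologram(target, iterations=50, seed=0):
    """Gerchberg-Saxton iteration: alternately enforce the target amplitude
    in the image plane and unit amplitude (phase-only) in the hologram
    plane, keeping only the phase in each plane."""
    rng = np.random.default_rng(seed)
    field = target * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, target.shape))
    holo = np.ones_like(field)
    for _ in range(iterations):
        holo = np.exp(1j * np.angle(np.fft.ifft2(field)))          # phase-only
        field = target * np.exp(1j * np.angle(np.fft.fft2(holo)))  # amplitude
    return np.angle(holo)

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0          # toy square target image
phase = gs_phase_hologram(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))   # reconstructed amplitude
```

After a few dozen iterations the reconstruction concentrates most of its energy inside the target square, which is the image-quality gain over a single non-iterative phase encoding.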

  20. POF misalignment model based on the calculation of the radiation pattern using the Hankel transform.

    PubMed

    Mateo, J; Losada, M A; López, A

    2015-03-23

    Here, we propose a method to estimate misalignment losses that is based on the calculation of the radiated angular power distribution as light propagates through space using the fiber far field pattern (FFP) and simplifying and speeding calculations with the Hankel transform. This method gives good estimates for combined transversal and longitudinal losses at short, intermediate and long offset distances. In addition, the same methodology can be adapted to describe not only scalar loss but also its angular dependence caused by misalignments. We show that this approach can be applied to upgrade a connector matrix included in a propagation model that is integrated into simulation software. This way, we assess the effects of misalignments at different points in the link and are able to predict the performance of different layouts at system level.
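    The core numerical ingredient, a zero-order Hankel transform of a circularly symmetric profile such as an FFP, can be sketched by direct quadrature. A Gaussian profile is used here because its transform is known in closed form, which provides a built-in sanity check; the paper's propagation model itself is not reproduced:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0

def hankel0(f_r, r, k):
    """Zero-order Hankel transform by direct quadrature:
    H(k) = integral of f(r) * J0(k r) * r dr over r >= 0,
    adequate for smooth circularly symmetric profiles."""
    return np.array([trapezoid(f_r * j0(ki * r) * r, r) for ki in k])

# Sanity check: exp(-r^2/2) is self-reciprocal under the zero-order
# Hankel transform, i.e. it maps to exp(-k^2/2).
r = np.linspace(0.0, 12.0, 4000)
k = np.linspace(0.0, 3.0, 16)
num = hankel0(np.exp(-r**2 / 2.0), r, k)
```

Fast Hankel-transform algorithms replace this O(N²) quadrature with O(N log N) evaluations, which is what makes the repeated propagation steps in a link-level simulation affordable.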

  1. Phase-only stereoscopic hologram calculation based on Gerchberg-Saxton iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Xinyi; Xia, Jun

    2016-09-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by the computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated by using the Gerchberg-Saxton (GS) iterative algorithm. Comparing with the non-iterative algorithm in the conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that our proposed method can give higher quality reconstruction comparing with the traditional method. Project supported by the National Basic Research Program of China (Grant No. 2013CB328803) and the National High Technology Research and Development Program of China (Grant Nos. 2013AA013904 and 2015AA016301).

  2. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    NASA Astrophysics Data System (ADS)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  3. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. To overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated. PMID:26216484

  4. Calculation of grey level co-occurrence matrix-based seismic attributes in three dimensions

    NASA Astrophysics Data System (ADS)

    Eichkitz, Christoph Georg; Amtmann, Johannes; Schreilechner, Marcellus Gregor

    2013-10-01

    Seismic interpretation can be supported by seismic attribute analysis. Common seismic attributes use mathematical relationships based on the geometry and the physical properties of the subsurface to reveal features of interest, but they are mostly not capable of describing the spatial arrangement of depositional facies or reservoir properties. Textural attributes such as the grey level co-occurrence matrix (GLCM) and its derived attributes are able to describe the spatial dependencies of seismic facies. The GLCM, primarily used for 2D data, is a measure of how often different combinations of pixel brightness values occur in an image. We present in this paper a workflow for full three-dimensional calculation of GLCM-based seismic attributes that also considers the structural dip of the seismic data. In our GLCM workflow we consider all 13 possible space directions to determine GLCM-based attributes. The developed workflow is applied to various seismic datasets and the results of GLCM calculation are compared to common seismic attributes such as coherence.
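    A sketch of a single-offset 3D GLCM and of the 13 space directions; dip steering, windowing, and the derived attributes of the full workflow are omitted:

```python
import numpy as np
from itertools import product

# The 13 unique space directions of a 26-neighbourhood: opposite offsets
# give transposed matrices, so only half of the 26 need to be computed.
directions = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]

def glcm(volume, offset, levels):
    """Co-occurrence counts of grey-level pairs (i, j) separated by
    `offset` in a quantized 3D volume (window = whole volume here)."""
    dz, dy, dx = offset
    nz, ny, nx = volume.shape
    src = volume[max(0, -dz):nz - max(0, dz),
                 max(0, -dy):ny - max(0, dy),
                 max(0, -dx):nx - max(0, dx)]
    dst = volume[max(0, dz):nz - max(0, -dz),
                 max(0, dy):ny - max(0, -dy),
                 max(0, dx):nx - max(0, -dx)]
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (src.ravel(), dst.ravel()), 1)   # count each voxel pair
    return m

vol = np.random.default_rng(1).integers(0, 8, (20, 20, 20))  # quantized cube
m = glcm(vol, directions[0], levels=8)
```

Texture attributes (energy, entropy, contrast, homogeneity, and so on) are then simple functions of the normalized matrix, computed per direction or averaged over all 13.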

  5. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. To overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  6. Improving iterative surface energy balance convergence for remote sensing based flux calculation

    NASA Astrophysics Data System (ADS)

    Dhungel, Ramesh; Allen, Richard G.; Trezza, Ricardo

    2016-04-01

    A modification of the iterative procedure of the surface energy balance was proposed to expedite the convergence of the Monin-Obukhov stability correction utilized by remote sensing based flux calculation. This was demonstrated using ground-based weather stations as well as gridded weather data (North American Regional Reanalysis) and remote sensing based (Landsat 5, 7) images. The study was conducted for different land-use classes in southern Idaho and northern California for multiple satellite overpasses. The convergence behavior of a selected Landsat pixel, as well as of all Landsat pixels within the area of interest, was analyzed. The modified version needed several times fewer iterations than the current iterative technique. At times of low wind speed (˜1.3 m/s), the current iterative technique was not able to find a solution of the surface energy balance for all Landsat pixels, while the modified version achieved it in a few iterations. The study will help many operational evapotranspiration models avoid nonconvergence at low wind speeds, which increases the accuracy of flux calculations.

  7. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. To overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  8. Synthesis, spectral, optical properties and theoretical calculations on Schiff base ligands containing o-tolidine

    NASA Astrophysics Data System (ADS)

    Arroudj, S.; Bouchouit, M.; Bouchouit, K.; Bouraiou, A.; Messaadia, L.; Kulyk, B.; Figa, V.; Bouacida, S.; Sofiani, Z.; Taboukhat, S.

    2016-06-01

    This paper explores the synthesis, structural characterization and optical properties of two new Schiff bases. These compounds were obtained by condensation of o-tolidine with salicylaldehyde and cinnamaldehyde. The resulting ligands were characterized by UV and ¹H NMR spectroscopy. Their third-order NLO properties were measured using the third harmonic generation technique on thin films at 1064 nm. The electric dipole moment (μ), the polarizability (α) and the first hyperpolarizability (β) were calculated using the density functional B3LYP method with the lanl2dz basis set. The title compounds show nonzero β values, revealing second-order NLO behaviour.

  9. Fast calculation of computer-generated hologram using run-length encoding based recurrence relation.

    PubMed

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2015-04-20

    Computer-generated holograms (CGHs) can be generated by superimposing zoneplates. A zoneplate is a grating that concentrates incident light into a point. Since a zoneplate has circular symmetry, we previously reported an algorithm that rapidly generates a zoneplate by drawing concentric circles using computer graphics techniques. However, that algorithm required random memory access, which degraded its computational efficiency. In this study, we propose a fast CGH generation algorithm without random memory access, using a run-length encoding (RLE) based recurrence relation. As a result, we succeeded in reducing the calculation time by 88% compared with that of the previous work.
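    The superposition of zoneplates that the algorithm accelerates can be written down directly (without the RLE optimization). A sketch in the Fresnel approximation, where each object point contributes a spherical-wave phase; the point cloud and display parameters below are toy values:

```python
import numpy as np

def cgh_from_points(points, nx, ny, pitch, wavelength):
    """CGH as a superposition of zoneplates: each object point (x0, y0, z0)
    contributes the Fresnel phase pi*((X-x0)^2 + (Y-y0)^2) / (wavelength*z0)
    across the hologram plane."""
    ys, xs = np.mgrid[0:ny, 0:nx]
    X, Y = xs * pitch, ys * pitch
    field = np.zeros((ny, nx), dtype=complex)
    for x0, y0, z0 in points:
        r2 = (X - x0)**2 + (Y - y0)**2
        field += np.exp(1j * np.pi * r2 / (wavelength * z0))
    return field.real   # amplitude CGH: interference with a plane reference

points = [(200e-6, 200e-6, 0.05), (400e-6, 300e-6, 0.05)]  # toy point cloud
h = cgh_from_points(points, nx=64, ny=64, pitch=10e-6, wavelength=633e-9)
```

Because each zoneplate is circularly symmetric, its rows repeat long runs of identical values between circle crossings; exploiting that redundancy with an RLE-based recurrence, rather than evaluating every pixel, is what yields the reported speed-up.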

  10. Analytical calculation of intrinsic shielding effectiveness for isotropic and anisotropic materials based on measured electrical parameters

    NASA Astrophysics Data System (ADS)

    Kühn, M.; John, W.; Weigel, R.

    2014-11-01

    This contribution presents mechanisms for calculating the magnetic shielding effectiveness of material samples based on measured electrical parameters. Measurement systems for the electrical conductivity of high- and low-conductivity material samples with respect to the direction of current flow are presented and discussed, together with a definition of isotropic and anisotropic materials in terms of electrical circuit diagrams. Several analytical models for predicting the shielding effectiveness of isotropic and anisotropic materials are presented, along with adaptations that yield a near-field solution. All analytical models are validated with an adequate measurement system.
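    As a minimal illustration of the kind of analytical model involved, the absorption term of the shielding effectiveness for an isotropic conductive sheet follows from the skin depth alone. This is only the absorption part; the reflection and near-field terms, and the anisotropic extensions the paper treats, are omitted:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def skin_depth(f, sigma, mu_r=1.0):
    """Skin depth delta = 1 / sqrt(pi * f * mu * sigma) of a conductor."""
    return 1.0 / math.sqrt(math.pi * f * mu_r * MU0 * sigma)

def absorption_loss_db(t, f, sigma, mu_r=1.0):
    """Absorption part of the shielding effectiveness of a sheet of
    thickness t: A = 8.686 * t / delta, in dB."""
    return 8.686 * t / skin_depth(f, sigma, mu_r)

# Example: a 1 mm copper sheet at 1 MHz (sigma of copper ~ 5.8e7 S/m).
delta = skin_depth(1e6, 5.8e7)                 # about 66 micrometres
a_db = absorption_loss_db(1e-3, 1e6, 5.8e7)    # about 131 dB
```

The measured conductivity (per current-flow direction, for anisotropic samples) is exactly the input such formulas need, which is why the paper pairs the conductivity measurement systems with the analytical models.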

  11. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)

  12. Variational Calculation of K⁻pp with Chiral SU(3)-Based K̄N Interaction

    NASA Astrophysics Data System (ADS)

    Doté, A.; Hyodo, T.; Weise, W.

    The prototype of a K̄ nuclear cluster, K⁻pp, has been investigated using effective K̄N potentials based on chiral SU(3) dynamics. Variational calculation shows a bound-state solution with shallow binding energy B(K⁻pp) = 20 ± 3 MeV and broad mesonic decay width Γ(K̄NN → πYN) = 40-70 MeV. The K̄N(I = 0) pair in the K⁻pp system exhibits a structure similar to the Λ(1405). We have also estimated the dispersive correction, the p-wave K̄N interaction, and the two-nucleon absorption width.

  13. Variational Calculation of K⁻pp with Chiral SU(3)-Based K̄N Interaction

    NASA Astrophysics Data System (ADS)

    Doté, A.; Hyodo, T.; Weise, W.

    2010-10-01

    The prototype of a K̄ nuclear cluster, K⁻pp, has been investigated using effective K̄N potentials based on chiral SU(3) dynamics. Variational calculation shows a bound-state solution with shallow binding energy B(K⁻pp) = 20 ± 3 MeV and broad mesonic decay width Γ(K̄NN → πYN) = 40-70 MeV. The K̄N(I = 0) pair in the K⁻pp system exhibits a structure similar to the Λ(1405). We have also estimated the dispersive correction, the p-wave K̄N interaction, and the two-nucleon absorption width.

  14. A method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1993-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate 2D elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain energy release rate obtained using beam theory agrees very well with the 2D finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.

  15. Improved method for calculating strain energy release rate based on beam theory

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Pandey, R. K.

    1994-01-01

    The Timoshenko beam theory was used to model cracked beams and to calculate the total strain-energy release rate. The root rotations of the beam segments at the crack tip were estimated based on an approximate two-dimensional elasticity solution. By including the strain energy released due to the root rotations of the beams during crack extension, the strain-energy release rate obtained using beam theory agrees very well with the two-dimensional finite element solution. Numerical examples were given for various beam geometries and loading conditions. Comparisons with existing beam models were also given.
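    For a double cantilever beam (DCB), the beam-theory route described above follows from G = (P²/2b) dC/da. The sketch below uses the Euler-Bernoulli arm compliance C = 2a³/(3EI) plus the Timoshenko shear term 2a/(k G_s A); the paper's root-rotation correction is not reproduced here, and the input values are illustrative.

```python
# Mode-I energy release rate of a DCB from beam theory (hedged sketch).
# P: load per arm [N], a: crack length [m], b: width, h: arm thickness,
# E: Young's modulus, G_s: shear modulus, k: shear correction factor.
def dcb_energy_release_rate(P, a, b, h, E, G_s, k=5.0 / 6.0):
    I = b * h**3 / 12.0                      # second moment of area, one arm
    A = b * h                                # cross-section area, one arm
    G_bending = P**2 * a**2 / (b * E * I)    # from d/da [2a^3/(3EI)]
    G_shear = P**2 / (b * k * G_s * A)       # from d/da [2a/(k G_s A)]
    return G_bending + G_shear

G1 = dcb_energy_release_rate(P=100.0, a=0.05, b=0.02, h=0.003, E=70e9, G_s=26e9)
print(G1)  # J/m^2
```

The bending term grows as a², so beam theory predicts G increasing rapidly with crack length under fixed load, which is the behavior the finite element comparison checks.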

  16. Graph model for calculating the properties of saturated monoalcohols based on the additivity of energy terms

    NASA Astrophysics Data System (ADS)

    Grebeshkov, V. V.; Smolyakov, V. M.

    2012-05-01

    A 16-constant additive scheme for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O was derived by decomposing the triangular numbers of Pascal's triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. Using this scheme for the calculation of properties of saturated monoalcohols as an example, it was shown that each coefficient of the scheme (in other words, the number of ways to impose a chain of a definite length i1, i2, … on a molecular graph) is the result of the decomposition of the triangular numbers of Pascal's triangle. A linear dependence was found within the adopted classification of structural elements. The sixteen parameters of the scheme were recorded as linear combinations of 17 parameters. The enthalpies of vaporization L⁰(298 K) of the saturated monoalcohols CH4O-C9H20O for which there were no experimental data were calculated. It was shown that the parameters are not chosen randomly when the given procedure for constructing an additive scheme by decomposing the triangular numbers of Pascal's triangle is used.
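    The additive-scheme idea can be illustrated generically: a property is modeled as a linear combination of structural-fragment counts, and the constants are obtained by least squares. The fragment counts and property values below are toy numbers chosen for the illustration, not the alcohol dataset or the 16-constant scheme of the paper.

```python
import numpy as np

# Rows: molecules; columns: occurrence counts of three structural fragments.
counts = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [1, 2, 0],
                   [1, 2, 1],
                   [1, 3, 1]], dtype=float)
prop = np.array([37.4, 42.3, 47.2, 49.9, 54.8])   # toy property values

# Fit the per-fragment contributions and predict back.
coef, *_ = np.linalg.lstsq(counts, prop, rcond=None)
pred = counts @ coef
print(coef, np.max(np.abs(pred - prop)))
```

Once fitted, the same contributions predict members of the series lacking experimental data, which is how the missing vaporization enthalpies are filled in above.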

  17. Pcetk: A pDynamo-based Toolkit for Protonation State Calculations in Proteins.

    PubMed

    Feliks, Mikolaj; Field, Martin J

    2015-10-26

    Pcetk (a pDynamo-based continuum electrostatic toolkit) is an open-source, object-oriented toolkit for the calculation of proton binding energetics in proteins. The toolkit is a module of the pDynamo software library, combining the versatility of the Python scripting language and the efficiency of the compiled languages C and Cython. In the toolkit, we have connected pDynamo to the external Poisson-Boltzmann solver, extended-MEAD. Our goal was to provide a modern and extensible environment for the calculation of protonation states, electrostatic energies, titration curves, and other electrostatic-dependent properties of proteins. Pcetk is freely available under the CeCILL license, which is compatible with the GNU General Public License. The toolkit can be found on the Web at the address http://github.com/mfx9/pcetk. The calculation of protonation states in proteins requires knowledge of the pKa values of protonatable groups in aqueous solution. However, for some groups, such as protonatable ligands bound to a protein, the pKa,aq values are often difficult to obtain from experiment. As a complement to Pcetk, we revisit an earlier computational method for the estimation of pKa,aq values that has an accuracy of ±0.5 pKa units or better. Finally, we verify the Pcetk module and the method for estimating pKa,aq values with different model cases.
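    For orientation only (this is not the Pcetk API), the protonation probability of a single, independent titratable site with a known aqueous pKa follows the Henderson-Hasselbalch relation; toolkits like the one above go further by coupling many sites through Poisson-Boltzmann electrostatics.

```python
# Fraction of molecules with the site protonated at a given pH.
def protonated_fraction(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

print(protonated_fraction(7.0, 4.0))  # aspartate-like site at pH 7: ~0.001
```

At pH = pKa the site is half-protonated; three pH units above the pKa the protonated fraction drops to about one part in a thousand.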

  18. Accelerating Atomic Orbital-based Electronic Structure Calculation via Pole Expansion plus Selected Inversion

    SciTech Connect

    Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin

    2012-02-10

    We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEpSI are modest. This even makes it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes on a single processor. We also show that the use of PEpSI does not lead to loss of the accuracy required in a practical DFT calculation.

  19. Accelerating atomic orbital-based electronic structure calculation via pole expansion and selected inversion

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin

    2013-07-01

    We describe how to apply the recently developed pole expansion and selected inversion (PEXSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces (including both the Hellmann-Feynman force and the Pulay force) without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEXSI is that it has a computational complexity much lower than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEXSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEXSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEXSI are modest. This even makes it possible to perform Kohn-Sham DFT calculations for 10 000-atom nanotubes with a sequential implementation of the selected inversion algorithm. We also perform an accurate geometry optimization calculation on a truncated (8, 0) boron nitride nanotube system containing 1024 atoms. Numerical results indicate that the use of PEXSI does not lead to loss of the accuracy required in a practical DFT calculation.
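    The diagonalization baseline that PEXSI is designed to avoid can be sketched in a few lines: build the density matrix as the Fermi-Dirac function of H and bisect the chemical potential so that the trace matches the electron count. (PEXSI obtains the same quantities from selected elements of matrix inverses at a set of poles, without eigenvalues; the code below is only the reference method, on a small random symmetric matrix.)

```python
import numpy as np

# Density matrix D = f_FD(H - mu) via diagonalization (the O(N^3) baseline).
# The tanh form of the Fermi function avoids exp overflow at low temperature.
def density_matrix(H, mu, beta=40.0):
    w, V = np.linalg.eigh(H)
    occ = 0.5 * (1.0 - np.tanh(0.5 * beta * (w - mu)))
    return (V * occ) @ V.T

# Bisect mu so that Tr(D) equals the target electron count (Tr(D) is
# monotonically increasing in mu).
def find_mu(H, n_electrons, lo=-10.0, hi=10.0):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.trace(density_matrix(H, mid)) < n_electrons:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
H = 0.5 * (A + A.T)            # toy symmetric "Hamiltonian"
mu = find_mu(H, 4.0)
print(np.trace(density_matrix(H, mu)))  # ~4.0
```

Charge density, energies and forces are then traces against D, which is why a method that delivers the needed entries of D directly can skip diagonalization entirely.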

  20. A brief comparison between grid-based real-space algorithms and spectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, a prediction of the future trends of ab initio calculations in these areas would be very useful for serving these communities better. Such a prediction can help us decide which future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirements, compared with the spectral method, where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on localized orbitals in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectral methods, examining their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular-grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant method is still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon.
The author focuses on the density functional theory (DFT), which is the
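    The grid-versus-spectral accuracy trade-off discussed above can be sketched in a few lines: the second derivative of a smooth periodic function computed with a 3-point finite-difference stencil versus the plane-wave (FFT) method on the same grid.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x)
exact = -np.sin(x)

# Real-space route: 3-point central-difference stencil, O(h^2) accurate.
h = x[1] - x[0]
fd = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / h**2

# Spectral route: multiply by -k^2 in Fourier space, exact for resolved modes.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
spectral = np.fft.ifft(-(k**2) * np.fft.fft(f)).real

fd_err = np.max(np.abs(fd - exact))
sp_err = np.max(np.abs(spectral - exact))
print(fd_err, sp_err)
```

On this grid the stencil error is about h²/12 ≈ 8e-4 while the spectral result is exact to machine precision; the finite-difference stencil, however, needs only nearest-neighbor communication, which is the parallelization argument made above.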

  1. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    PubMed

    Muhire, Brejnev Muhizi; Varsani, Arvind; Martin, Darren Patrick

    2014-01-01

    The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms). PMID:25259891
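    The gap-handling inconsistency described above is easy to demonstrate. The sketch below (not SDT's exact algorithm) computes pairwise identity from one aligned pair under two common conventions: counting gapped columns as mismatches versus excluding them.

```python
# Pairwise identity over an aligned pair under two gap conventions.
def identity(seq_a, seq_b, count_gaps=True):
    assert len(seq_a) == len(seq_b)
    same = cols = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" and b == "-":
            continue                      # all-gap columns are always skipped
        if not count_gaps and "-" in (a, b):
            continue                      # convention 2: ignore gapped columns
        cols += 1
        same += (a == b)
    return same / cols

a, b = "ACGT-ACGT", "ACGTTAC-T"
print(identity(a, b, True), identity(a, b, False))
```

The same alignment scores roughly 0.78 under one convention and 1.0 under the other, which is exactly the kind of method-dependent spread that can reshuffle sequences around a taxonomic demarcation threshold.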

  2. Novel Anthropometry-Based Calculation of the Body Heat Capacity in the Korean Population.

    PubMed

    Pham, Duong Duc; Lee, Jeong Hoon; Lee, Young Boum; Park, Eun Seok; Kim, Ka Yul; Song, Ji Yeon; Kim, Ji Eun; Leem, Chae Hun

    2015-01-01

    Heat capacity (HC) has an important role in the temperature regulation process, particularly in dealing with the heat load. The actual measurement of the body HC is complicated, so HC is generally estimated from body-composition-specific data. This study compared the previously known HC estimating equations and sought to define HC using simple anthropometric indices such as weight and body surface area (BSA) in the Korean population. Six hundred participants were randomly selected from a pool of 902 healthy volunteers aged 20 to 70 years for the training set. The remaining 302 participants were used for the test set. Body composition analysis using multi-frequency bioelectrical impedance analysis was used to assess body components including body fat, water, protein, and mineral mass. Four different HCs were calculated and compared using a weight-based HC (HC_Eq1), two HCs estimated from fat and fat-free mass (HC_Eq2 and HC_Eq3), and an HC calculated from fat, protein, water, and mineral mass (HC_Eq4). HC_Eq1 generally produced a larger HC than the other HC equations and had a poorer correlation with the other HC equations. HC equations using body composition data were well correlated to each other. If the HC estimated with HC_Eq4 was regarded as a standard, interestingly, the BSA and weight independently contributed to the variation of HC. The model composed of weight, BSA, and gender was able to predict more than 99% of the variation of HC_Eq4. Validation analysis on the test set showed a very satisfactory level of predictive performance. In conclusion, our results suggest that gender, BSA, and weight are independent factors for calculating HC. For the first time, a predictive equation based on anthropometry data was developed, and this equation could be useful for estimating HC in the general Korean population without body-composition measurement.
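    The two estimation routes being compared can be sketched as follows. The specific-heat coefficients below are placeholder values for illustration (only water's 4.18 kJ/kg/°C is a standard constant); they are not the paper's fitted equations.

```python
# Route 1: weight-based HC, using an assumed mean body specific heat [kJ/degC].
def hc_weight_based(weight_kg, c_body=3.47):
    return c_body * weight_kg

# Route 2: composition-based HC, summing assumed per-compartment specific heats.
def hc_composition_based(fat_kg, water_kg, protein_kg, mineral_kg):
    return 2.3 * fat_kg + 4.18 * water_kg + 1.5 * protein_kg + 0.8 * mineral_kg

w = hc_weight_based(70.0)
c = hc_composition_based(fat_kg=15.0, water_kg=42.0, protein_kg=11.0, mineral_kg=2.0)
print(w, c)
```

Even with these placeholder inputs the weight-based estimate comes out larger than the composition-based one, the same direction of disagreement reported for HC_Eq1 above.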

  3. Novel Anthropometry-Based Calculation of the Body Heat Capacity in the Korean Population

    PubMed Central

    Pham, Duong Duc; Lee, Jeong Hoon; Lee, Young Boum; Park, Eun Seok; Kim, Ka Yul; Song, Ji Yeon; Kim, Ji Eun; Leem, Chae Hun

    2015-01-01

    Heat capacity (HC) has an important role in the temperature regulation process, particularly in dealing with the heat load. The actual measurement of the body HC is complicated, so HC is generally estimated from body-composition-specific data. This study compared the previously known HC estimating equations and sought to define HC using simple anthropometric indices such as weight and body surface area (BSA) in the Korean population. Six hundred participants were randomly selected from a pool of 902 healthy volunteers aged 20 to 70 years for the training set. The remaining 302 participants were used for the test set. Body composition analysis using multi-frequency bioelectrical impedance analysis was used to assess body components including body fat, water, protein, and mineral mass. Four different HCs were calculated and compared using a weight-based HC (HC_Eq1), two HCs estimated from fat and fat-free mass (HC_Eq2 and HC_Eq3), and an HC calculated from fat, protein, water, and mineral mass (HC_Eq4). HC_Eq1 generally produced a larger HC than the other HC equations and had a poorer correlation with the other HC equations. HC equations using body composition data were well correlated to each other. If the HC estimated with HC_Eq4 was regarded as a standard, interestingly, the BSA and weight independently contributed to the variation of HC. The model composed of weight, BSA, and gender was able to predict more than 99% of the variation of HC_Eq4. Validation analysis on the test set showed a very satisfactory level of predictive performance. In conclusion, our results suggest that gender, BSA, and weight are independent factors for calculating HC. For the first time, a predictive equation based on anthropometry data was developed, and this equation could be useful for estimating HC in the general Korean population without body-composition measurement. PMID:26529594

  6. Coordination chemistry, thermodynamics and DFT calculations of copper(II) NNOS Schiff base complexes.

    PubMed

    Esmaielzadeh, Sheida; Azimian, Leila; Shekoohi, Khadijeh; Mohammadi, Khosro

    2014-12-10

    The synthesis, magnetic properties, and spectroscopic characterization are described for five copper(II) complexes of tetradentate Schiff bases synthesized from methyl-2-(N-2'-aminoethane), (1-methyl-2'-aminoethane), and (3-aminopropylamino)cyclopentenedithiocarboxylate. Molar conductance and infrared spectral evidence indicate that the complexes are four-coordinate, with the Schiff bases coordinated as NNOS ligands. Room-temperature μeff values for the complexes are 1.71-1.80 B.M., corresponding to one unpaired electron. The formation constants and free energies were measured spectrophotometrically at a constant ionic strength of 0.1 M (NaClO4) at 25 °C in DMF. DFT calculations were also carried out to determine the structural and geometrical properties of the complexes. The DFT results are further supported by the experimental formation constants of these complexes.
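    The free energies reported alongside formation constants typically follow ΔG = −RT ln K. A quick check with an assumed formation constant (the value below is illustrative, not one of the paper's measurements):

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 298.15       # K (25 degC)
K_f = 1.0e5      # hypothetical formation constant

dG = -R * T * math.log(K_f)   # J/mol
print(dG / 1000.0)            # about -28.5 kJ/mol
```

A larger formation constant thus maps logarithmically onto a more negative free energy, which is why modest differences in measured K across the five complexes translate into only a few kJ/mol.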

  7. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
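    The normalization step described above, referring burn rates measured at different chamber pressures to a common pressure, is commonly done with St. Robert's law, r = a·Pⁿ. The sketch below assumes that form; the exponent n = 0.35 is a typical solid-propellant value, not the report's.

```python
# Refer a burn rate r measured at pressure P to a reference pressure P_ref,
# assuming St. Robert's law r = a * P**n (the prefactor a cancels).
def normalize_burn_rate(r, P, P_ref, n=0.35):
    return r * (P_ref / P) ** n

print(normalize_burn_rate(r=8.0, P=6.2e6, P_ref=6.9e6))  # mm/s at P_ref
```

Because the prefactor cancels in the ratio, only the pressure exponent needs to be known (or assumed) to compare mixes at a common pressure.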

  8. Calculations of helium separation via uniform pores of stanene-based membranes

    PubMed Central

    Gao, Guoping; Jiao, Yan; Jiao, Yalong; Ma, Fengxian; Kou, Liangzhi

    2015-01-01

    The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized, two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (i.e., the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as a superior membrane over traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by the application of strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting new interesting materials for helium separation for future experimental validation. PMID:26885459
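    The selectivity argument behind such membrane studies can be put in back-of-envelope form: treating passage through a pore as barrier crossing, the He-over-Ne selectivity scales as exp(−(E_He − E_Ne)/kT). The barrier values below are hypothetical placeholders, not the paper's DFT numbers.

```python
import math

kB = 8.617e-5   # Boltzmann constant, eV/K

# Arrhenius-type selectivity of gas A over gas B given diffusion barriers [eV].
def selectivity(E_a, E_b, T=298.0):
    return math.exp(-(E_a - E_b) / (kB * T))

print(selectivity(0.10, 0.35))   # hypothetical He barrier 0.10 eV vs Ne 0.35 eV
```

A 0.25 eV barrier difference already yields a selectivity of order 10⁴ at room temperature, which is why strain-tuning the pore (and hence the barriers) changes separation performance so strongly.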

  9. Calculations of helium separation via uniform pores of stanene-based membranes.

    PubMed

    Gao, Guoping; Jiao, Yan; Jiao, Yalong; Ma, Fengxian; Kou, Liangzhi; Du, Aijun

    2015-01-01

    The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized, two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (i.e., the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as a superior membrane over traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by the application of strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting new interesting materials for helium separation for future experimental validation. PMID:26885459

  10. Three Dimensional Gait Analysis Using Wearable Acceleration and Gyro Sensors Based on Quaternion Calculations

    PubMed Central

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

    This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
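    A minimal sketch of the quaternion-based orientation update used in such methods (not the paper's full pipeline): integrate gyro angular velocity by composing incremental rotation quaternions, then rotate a body-frame vector into the global frame.

```python
import numpy as np

# Hamilton product of two quaternions (w, x, y, z).
def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Compose incremental rotations for a constant angular velocity omega [rad/s].
def integrate_gyro(omega, dt, steps, q0=np.array([1.0, 0.0, 0.0, 0.0])):
    q = q0
    theta = np.linalg.norm(omega) * dt
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    for _ in range(steps):
        q = quat_mul(q, dq)
    return q / np.linalg.norm(q)

# Rotate vector v by unit quaternion q: q * (0, v) * q_conjugate.
def rotate(q, v):
    w, x, y, z = q
    q_conj = np.array([w, -x, -y, -z])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# 90 deg/s about z for 1 s: the x-axis should map to the y-axis.
q = integrate_gyro(np.array([0.0, 0.0, np.pi / 2]), 0.01, 100)
print(rotate(q, np.array([1.0, 0.0, 0.0])))  # ~[0, 1, 0]
```

In a real gait pipeline the angular velocity changes every sample, so the incremental quaternion is recomputed per time step and the accelerometer supplies the initial (standing) orientation, as described above.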

  11. Electronic structures of halogen-doped Cu2O based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Zhao, Zong-Yan; Yi, Juan; Zhou, Da-Cheng

    2014-01-01

    In order to construct the p-n homojunction of Cu2O-based thin-film solar cells, which may increase their conversion efficiency, synthesizing n-type Cu2O with high conductivity is extremely crucial and is considered a challenge for the near future. The doping effects of halogens on the electronic structure of Cu2O have been investigated by density functional theory calculations in the present work. Halogen dopants form donor levels below the bottom of the conduction band through gaining or losing electrons, suggesting that halogen doping could give Cu2O n-type conductivity. The lattice distortion, the impurity formation energy, and the position and band width of the donor level of Cu2O1-xHx (H = F, Cl, Br, I) increase with the halogen atomic number. Based on the calculated results, chlorine is an effective n-type dopant for Cu2O, owing to its lower impurity formation energy and suitable donor level.
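    The impurity formation energies compared above follow the standard supercell bookkeeping E_f = E(doped) − E(host) − μ_X + μ_O for a halogen X substituting an O site. All total energies and chemical potentials below are placeholder numbers purely to show the arithmetic, not DFT results.

```python
# Formation energy of a substitutional halogen-on-oxygen defect [eV].
# E_doped, E_host: supercell total energies; mu_X, mu_O: chemical potentials
# of the added halogen atom and the removed oxygen atom.
def formation_energy(E_doped, E_host, mu_X, mu_O):
    return E_doped - E_host - mu_X + mu_O

print(formation_energy(E_doped=-1021.7, E_host=-1024.3, mu_X=-1.6, mu_O=-4.9))
```

Ranking this quantity across F, Cl, Br and I (at consistent chemical-potential references) is what identifies the most easily incorporated dopant.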

  12. Three dimensional gait analysis using wearable acceleration and gyro sensors based on quaternion calculations.

    PubMed

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

    This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128

  13. GPU-Based Fast Free-Wake Calculations For Multiple Horizontal Axis Wind Turbine Rotors

    NASA Astrophysics Data System (ADS)

    Türkal, M.; Novikov, Y.; Üşenmez, S.; Sezer-Uzol, N.; Uzol, O.

    2014-06-01

    Unsteady free-wake solutions of wind turbine flow fields involve computationally intensive interaction calculations, which generally limit the total amount of simulation time or the number of turbines that can be simulated by the method. This problem, however, can be addressed with a high level of parallelization. Especially when exploited with a GPU, a Graphics Processing Unit, this property can provide a significant computational speed-up, rendering the most intensive engineering problems realizable in hours of computation time. This paper presents the results of the simulation of the flow field for the NREL Phase VI turbine using a GPU-based in-house free-wake panel method code. The computational parallelism in the free-wake methodology is exploited using a GPU, allowing thousands of similar operations to be performed simultaneously. The results are compared to experimental data as well as to those obtained by running a corresponding CPU-based code. Results show that the GPU-based code is capable of producing wake and load predictions similar to the CPU-based code in a substantially reduced amount of time. This capability could allow free-wake-based analysis to be used in design and optimization studies of wind farms, as well as in the prediction of multiple-turbine flow fields and the investigation of the effects of different vortex core models, core expansion and stretching models on turbine rotor interaction problems in multiple-turbine wake flow fields.
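    The all-pairs interaction kernel that the GPU parallelizes can be sketched with a 2D point-vortex analogue of the Biot-Savart evaluation (the real code works with 3D vortex filaments and panels; the smoothing core below stands in for a vortex core model).

```python
import numpy as np

# Induced velocity at each vortex from all others: u_theta = Gamma / (2 pi r),
# vectorized over all N^2 pairs. `core` regularizes the singular self-term.
def induced_velocities(pos, gamma, core=1e-3):
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j for all pairs
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + core**2
    u = np.sum(-gamma[None, :] * dy / (2.0 * np.pi * r2), axis=1)
    v = np.sum(gamma[None, :] * dx / (2.0 * np.pi * r2), axis=1)
    return np.stack([u, v], axis=1)

# One unit vortex at the origin; probe the velocity it induces at (1, 0).
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
gamma = np.array([1.0, 0.0])
vel = induced_velocities(pos, gamma)
print(vel[1])  # ~[0, 1/(2*pi)]
```

Every element of the N×N pairwise arrays is independent, which is exactly the "thousands of similar operations" structure a GPU exploits; the cost is still O(N²) per time step, hence the interest in hardware speed-up.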

  14. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model should be preferable for GPU-based MC dose engines to a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
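    A hedged sketch of the sampling idea (not the paper's actual parameterization): within one phase-space ring, particle energy is drawn from a tabulated histogram and direction from a fitted 2D Gaussian, so that the particles handled together are of the same type and similar energy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw n particles for one PSR: energies from a binned pdf (flat within a bin),
# directions from a 2D Gaussian fitted to the reference phase-space data.
def sample_psr(n, energy_edges, energy_pdf, dir_mean, dir_cov):
    p = np.asarray(energy_pdf, dtype=float)
    p /= p.sum()
    bins = rng.choice(len(p), size=n, p=p)
    energies = rng.uniform(energy_edges[bins], energy_edges[bins + 1])
    directions = rng.multivariate_normal(dir_mean, dir_cov, size=n)
    return energies, directions

E, D = sample_psr(
    100000,
    energy_edges=np.array([0.0, 2.0, 4.0, 6.0]),  # MeV bin edges (illustrative)
    energy_pdf=[0.5, 0.3, 0.2],
    dir_mean=[0.0, 0.0],
    dir_cov=[[0.01, 0.0], [0.0, 0.01]],
)
print(E.mean(), D.mean(axis=0))
```

Because every particle in the batch comes from the same PSR, neighboring GPU threads follow nearly identical transport branches, which is the thread-divergence argument made above.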

  15. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been considerable research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, since data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) is proposed. Each PSR contains a group of particles that are of the same type, close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. Either a single 2D Gaussian distribution or a mixture of multiple Gaussian components is employed to represent the particle direction distribution of each PSR. A method is developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we propose a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, alleviating GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high-dose-gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low-dose-gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
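The ring-batched sampling idea described above can be sketched in a few lines. The representation below is invented for illustration (a ring index standing in for "same particle type, narrow energy band"); the real model samples location, direction, and energy per PSR on the GPU:

```python
import random

def sample_particles(n, n_rings, seed=0):
    # Each particle is (ring_index, energy). The ring index stands in for a
    # phase-space ring (same type, narrow energy band), so energies within
    # one ring are close together. All distributions here are invented.
    rng = random.Random(seed)
    particles = []
    for _ in range(n):
        ring = rng.randrange(n_rings)
        particles.append((ring, 1.0 + ring + 0.1 * rng.random()))
    return particles

def batch_by_ring(particles):
    # Sorting by ring index groups same-type, similar-energy particles into
    # contiguous batches -- the property that lets GPU threads in a batch
    # follow similar code paths and so alleviates thread divergence.
    return sorted(particles, key=lambda p: p[0])

batched = batch_by_ring(sample_particles(1000, n_rings=8))
```

On an actual GPU the sort would be replaced by sampling each ring's particles as one kernel batch, but the invariant is the same: particles transported together share a ring.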

  16. All-electronic droplet generation on-chip with real-time feedback control for EWOD digital microfluidics.

    PubMed

    Gong, Jian; Kim, Chang-Jin C J

    2008-06-01

    Electrowetting-on-dielectric (EWOD) actuation enables digital (or droplet) microfluidics where small packets of liquids are manipulated on a two-dimensional surface. Due to its mechanical simplicity and low energy consumption, EWOD holds particular promise for portable systems. To improve volume precision of the droplets, which is desired for quantitative applications such as biochemical assays, existing practices would require near-perfect device fabrication and operation conditions unless the droplets are generated under feedback control by an extra pump setup off of the chip. In this paper, we develop an all-electronic (i.e., no ancillary pumping) real-time feedback control of on-chip droplet generation. A fast voltage modulation, capacitance sensing, and discrete-time PID feedback controller are integrated on the operating electronic board. A significant improvement is obtained in the droplet volume uniformity, compared with an open loop control as well as the previous feedback control employing an external pump. Furthermore, this new capability empowers users to prescribe the droplet volume even below the previously considered minimum, allowing, for example, 1 : x (x < 1) mixing, in comparison to the previously considered n : m mixing (i.e., n and m unit droplets).
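The discrete-time PID loop named above can be sketched as follows. This is a generic textbook controller driving a toy first-order plant, not the authors' board firmware; all gains and the plant model are invented:

```python
class DiscretePID:
    """Minimal discrete-time PID controller (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the control signal nudges the sensed droplet "volume"
# (arbitrary units, e.g. inferred from capacitance) toward a target of 1.0.
pid = DiscretePID(kp=2.0, ki=2.0, kd=0.05, dt=0.01)
volume = 0.0
for _ in range(1000):
    volume += 0.01 * pid.update(1.0, volume)
```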

  17. ALL-ELECTRONIC DROPLET GENERATION ON-CHIP WITH REAL-TIME FEEDBACK CONTROL FOR EWOD DIGITAL MICROFLUIDICS

    PubMed Central

    Gong, Jian; Kim, Chang-Jin “CJ”

    2009-01-01

    Electrowetting-on-dielectric (EWOD) actuation enables digital (or droplet) microfluidics where small packets of liquids are manipulated on a two-dimensional surface. Due to its mechanical simplicity and low energy consumption, EWOD holds particular promise for portable systems. To improve volume precision of the droplets, which is desired for quantitative applications such as biochemical assays, existing practices would require near-perfect device fabrication and operation conditions unless the droplets are generated under feedback control by an extra pump setup off of the chip. In this paper, we develop an all-electronic (i.e., no ancillary pumping) real-time feedback control of on-chip droplet generation. A fast voltage modulation, capacitance sensing, and discrete-time PID feedback controller are integrated on the operating electronic board. A significant improvement is obtained in the droplet volume uniformity, compared with an open loop control as well as the previous feedback control employing an external pump. Furthermore, this new capability empowers users to prescribe the droplet volume even below the previously considered minimum, allowing, for example, 1:x (x < 1) mixing, in comparison to the previously considered n:m mixing (i.e., n and m unit droplets). PMID:18497909

  18. Develop and Test a Solvent Accessible Surface Area-Based Model in Conformational Entropy Calculations

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2012-01-01

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate absolute binding free energies. Unfortunately, NMA is computationally demanding and has become a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas: solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms of the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules whose conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS, the product of the temperature T and the conformational entropy S, was calculated in those tests; T was always set to 298.15 K throughout the text. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-processing entropy calculations): the mean squared correlation coefficient (R2) was 0.56. 
As to the 20 complexes, the TS changes
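The per-atom weighting scheme described in the abstract (type-specific weights plus a global balance parameter k) can be sketched as below. The weights, k value, and units are invented for illustration, not the paper's fitted parameters:

```python
# Hypothetical per-atom-type weights (illustrative numbers only) and a
# global parameter k balancing exposed (SAS) vs buried (BSAS) areas.
WEIGHTS = {"C": 0.05, "N": 0.04, "O": 0.03, "H": 0.02}
K_BALANCE = 0.3  # hypothetical balance parameter

def wsas_entropy(atoms):
    """atoms: list of (atom_type, sas, bsas) tuples, areas in A^2.
    Returns an entropy estimate in arbitrary illustrative units."""
    return sum(WEIGHTS[t] * (sas + K_BALANCE * bsas) for t, sas, bsas in atoms)

mol = [("C", 10.0, 5.0), ("O", 8.0, 2.0), ("H", 12.0, 0.0)]
S = wsas_entropy(mol)
TS = 298.15 * S / 1000.0  # T*S at 298.15 K, as in the tests above
```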

  19. Spinel compounds as multivalent battery cathodes: A systematic evaluation based on ab initio calculations

    DOE PAGES

    Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.

    2014-12-16

    In this study, batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity, and thermodynamic stability of charged and discharged states, as well as the intercalating ion mobility, and use these properties to evaluate promising directions. Our calculations indicate that the Mn2O4 spinel phases based on Mg and Ca are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than those of Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.
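The insertion voltages screened above are conventionally obtained from total energies via the standard ab initio average-voltage expression V = -[E(discharged) - E(charged) - n*E(metal)] / (n*z), with z the cation charge. A minimal sketch under that assumption (all energies below are invented, not the paper's values):

```python
def average_voltage(e_discharged, e_charged, n_ions, e_metal, z):
    """Average insertion voltage in volts from total energies in eV per
    formula unit: n_ions cations of charge z inserted into the host."""
    return -(e_discharged - e_charged - n_ions * e_metal) / (n_ions * z)

# Hypothetical Mg insertion into a spinel host (z = 2); numbers are made up.
v_mg = average_voltage(e_discharged=-106.0, e_charged=-100.0,
                       n_ions=1, e_metal=-1.5, z=2)
```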

  20. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    SciTech Connect

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the dose bin grid, which has micrometer dimensions in the direction transverse to the microbeams, was decoupled from the CT image voxel grid (with voxels of a few cubic millimeters). Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  1. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital

    PubMed Central

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2016-01-01

    Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a newer and more effective cost system. Objective: This study aimed to compare the ABC and TCS methods in calculating the unit cost of medical services and to assess the applicability of ABC in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data from the accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers, and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities using related cost factors. Then the costs of the activities were allocated to cost objects using cost drivers. After determining the costs of the objects, the cost price of medical services was calculated and compared with that obtained from the TCS. Results: Kashani Hospital had 81 physicians, 306 nurses, and 328 beds, with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, a 50.34 USD higher unit cost with the ABC method. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
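The two-phase ABC allocation described in the Methods section can be sketched as below. The cost centers, activities, cost-factor shares, and driver counts are invented for illustration:

```python
# Phase 1: allocate each cost center's total cost to activities via cost
# factors (fractional shares). All numbers below are hypothetical.
cost_centers = {"nursing": 1000.0, "lab": 400.0}
cost_factors = {
    "nursing": {"patient_care": 0.7, "admin": 0.3},
    "lab": {"testing": 0.8, "admin": 0.2},
}
activity_costs = {}
for center, total in cost_centers.items():
    for activity, share in cost_factors[center].items():
        activity_costs[activity] = activity_costs.get(activity, 0.0) + total * share

# Phase 2: divide each activity's cost among cost objects via cost drivers
# (e.g. bed-days for patient care, number of tests for the lab).
drivers = {"patient_care": 350, "testing": 160, "admin": 100}
unit_costs = {a: c / drivers[a] for a, c in activity_costs.items()}
```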

  2. Experimentation and theoretic calculation of a BODIPY sensor based on photoinduced electron transfer for ions detection.

    PubMed

    Lu, Hua; Zhang, ShuShu; Liu, HanZhuang; Wang, YanWei; Shen, Zhen; Liu, ChunGen; You, XiaoZeng

    2009-12-24

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH(4). The sensing properties of compound 1 toward various metal ions were investigated via fluorometric titration in methanol; the compound shows a highly selective fluorescent turn-on response in the presence of Hg(2+) over other metal ions, such as Li(+), Na(+), K(+), Ca(2+), Mg(2+), Pb(2+), Fe(2+), Co(2+), Ni(2+), Cu(2+), Zn(2+), Cd(2+), Ag(+), and Mn(2+). A computational approach was employed to investigate why compound 1 gives different fluorescent signals for Hg(2+) and the other ions. Theoretic calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn(2+) or Cd(2+) ion significantly lowers both the HOMO and LUMO energy levels of the receptor, thus inhibiting the reductive PET process; instead, an oxidative PET from the excited-state fluorophore to the receptor occurs, which also quenches the fluorescence. For the 1-Hg(2+) complex, however, both the reductive and oxidative PET pathways are prohibited; therefore, strong fluorescence emission from the fluorophore is observed experimentally. The agreement between the experimental results and the theoretic calculations suggests that our calculation method can serve as guidance for the design of new chemosensors for other metal ions. PMID:19950967

  3. Experimentation and Theoretic Calculation of a BODIPY Sensor Based on Photoinduced Electron Transfer for Ions Detection

    NASA Astrophysics Data System (ADS)

    Lu, Hua; Zhang, Shushu; Liu, Hanzhuang; Wang, Yanwei; Shen, Zhen; Liu, Chungen; You, Xiaozeng

    2009-12-01

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH4. The sensing properties of compound 1 toward various metal ions were investigated via fluorometric titration in methanol; the compound shows a highly selective fluorescent turn-on response in the presence of Hg2+ over other metal ions, such as Li+, Na+, K+, Ca2+, Mg2+, Pb2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+, Cd2+, Ag+, and Mn2+. A computational approach was employed to investigate why compound 1 gives different fluorescent signals for Hg2+ and the other ions. Theoretic calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn2+ or Cd2+ ion significantly lowers both the HOMO and LUMO energy levels of the receptor, thus inhibiting the reductive PET process; instead, an oxidative PET from the excited-state fluorophore to the receptor occurs, which also quenches the fluorescence. For the 1-Hg2+ complex, however, both the reductive and oxidative PET pathways are prohibited; therefore, strong fluorescence emission from the fluorophore is observed experimentally. The agreement between the experimental results and the theoretic calculations suggests that our calculation method can serve as guidance for the design of new chemosensors for other metal ions.

  4. Experimental verification of a Monte Carlo-based MLC simulation model for IMRT dose calculation

    SciTech Connect

    Tyagi, Neelam; Moran, Jean M.; Litzenberg, Dale W.; Bielajew, Alex F.; Fraass, Benedick A.; Chetty, Indrin J.

    2007-02-15

    Inter- and intra-leaf transmission and head scatter can play significant roles in intensity modulated radiation therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head, be accurately modeled. In this paper, we have used the Monte Carlo (MC) method to develop a comprehensive model of the Varian 120 leaf MLC and have compared it against measurements in homogeneous phantom geometries under different IMRT delivery circumstances. We have developed a geometry module within the DPM MC code to simulate the detailed MLC design and the collimating jaws. Tests consisting of leakage, leaf positioning and static MLC shapes were performed to verify the accuracy of transport within the MLC model. The calculations show agreement within 2% in the high dose region for both film and ion-chamber measurements for these static shapes. Clinical IMRT treatment plans for the breast [both segmental MLC (SMLC) and dynamic MLC (DMLC)], prostate (SMLC) and head and neck split fields (SMLC) were also calculated and compared with film measurements. Such a range of cases was chosen to investigate the accuracy of the model as a function of modulation in the beamlet pattern, beamlet width, and field size. The overall agreement is within 2%/2 mm of the film data for all IMRT beams except the head and neck split field, which showed differences up to 5% in the high dose regions. Various sources of uncertainties in these comparisons are discussed.

  5. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  6. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    NASA Astrophysics Data System (ADS)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View, a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional software. Moreover, it includes tools for the measurement of free energies and free energy differences, and for data/image export.
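What such a viewer computes from a HILLS-style record can be sketched simply: the metadynamics bias is a sum of deposited Gaussian hills, and in the simplest (non-well-tempered) case the free energy estimate is its negative. The hill centers, widths, and heights below are invented:

```python
import math

# Hypothetical 1D hills: (center, sigma, height), as might be read from a
# HILLS file (real files also carry times and, often, well-tempered factors).
hills = [(-1.0, 0.3, 1.2), (0.2, 0.3, 1.0), (1.1, 0.3, 0.8)]

def bias(s):
    # Bias potential at collective-variable value s: sum of Gaussians.
    return sum(h * math.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
               for c, sigma, h in hills)

def free_energy(s):
    # Simplest estimator: F(s) ~ -V_bias(s) (up to an additive constant).
    return -bias(s)
```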

  7. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    SciTech Connect

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-03-24

    This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M{sub o}), the moment magnitude (M{sub W}), the rupture duration (T{sub o}), and the focal mechanism. These determine whether an event is a tsunamigenic earthquake or a tsunami earthquake. We calculate these quantities from teleseismic signals, processing the initial P-wave phase with a bandpass filter of 0.001 Hz to 5 Hz. Records from 84 broadband seismometers at teleseismic distances of 30° to 90° were used. The 2 June 1994 Banyuwangi earthquake (M{sub W}=7.8) and the 17 July 2006 Pangandaran earthquake (M{sub W}=7.7) meet the criteria of tsunami earthquakes, with ratios around Θ=−6.1, long rupture durations (T{sub o}>100 s), and high tsunamis (H>7 m). The 2 September 2009 Tasikmalaya earthquake (M{sub W}=7.2, Θ=−5.1, T{sub o}=27 s) is characterized as a small tsunamigenic earthquake.
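The slowness discriminant above can be sketched assuming the common Newman-Okal-style definition Θ = log10(E/M0); the −5.5 cutoff below is an illustrative threshold, not a value taken from this abstract:

```python
import math

def theta(radiated_energy_j, seismic_moment_nm):
    # Energy-to-moment ratio parameter, Theta = log10(E / M0),
    # with E in joules and M0 in newton-meters.
    return math.log10(radiated_energy_j / seismic_moment_nm)

def is_slow_tsunami_earthquake(th, rupture_duration_s):
    # Illustrative classification: energy-deficient (slow) rupture with a
    # long duration, the combination flagged for tsunami earthquakes above.
    return th <= -5.5 and rupture_duration_s > 100.0

# Values in the style of the 1994/2006 events (Theta ~ -6.1, To > 100 s):
th = theta(10 ** 14.0, 10 ** 20.1)
```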

  8. [Calculation method of absolute quantum yields in photocatalytic slurry reactor based on cylindrical light].

    PubMed

    Shen, Xun-wei; Yuan, Chun-wei

    2005-01-01

    Heterogeneous photocatalysis in slurry reactors has the particular characteristic that the catalyst particles not only absorb but also scatter photons, so radiation scattering cannot be neglected. However, it is mathematically very difficult to obtain a rigorous solution of the radiative transfer equation. Consequently, present methods, in which apparent quantum yields are calculated from the incident radiation intensity, systematically underestimate the quantum yield. In this paper, a method is developed to produce absolute values of photocatalytic quantum yields in a slurry reactor based on a cylindrical UV light source. For a typical laboratory reactor (diameter 5.6 cm, length 10 cm), values for the photocatalytic degradation of phenol are reported under precisely defined conditions. The true value of the local volumetric rate of photon absorption (LVRPA) can be obtained. It was shown that apparent quantum yields differ from true quantum yields by 7.08% and that, for the same geometric arrangement, the vanishing fraction accounts for 1.1% of the incident radiation. The method can be used to compare the reactivity of different catalysts or, for a given catalyst, the reactivity with different model compounds, and as a principle for reactor design.

  9. Ionic liquid based lithium battery electrolytes: charge carriers and interactions derived by density functional theory calculations.

    PubMed

    Angenendt, Knut; Johansson, Patrik

    2011-06-23

    The solvation of lithium salts in ionic liquids (ILs) leads to the creation of lithium-ion-carrying species quite different from those found in traditional nonaqueous lithium battery electrolytes. The most striking differences are that these species are composed only of ions and are, in general, negatively charged. In many IL-based electrolytes the dominant species are triplets, and the charge, stability, and size of the triplets have a large impact on the total ionic conductivity, the lithium ion mobility, and the lithium ion delivery at the electrode. As an inherent advantage, the triplets can be altered by selecting lithium salts and ionic liquids with different anions. Thus, within certain limits, the lithium-ion-carrying species can even be tailored toward distinct properties important for battery applications. Here, we show by DFT calculations that the resulting charge-carrying species from combinations of ionic liquids and lithium salts, and also some resulting electrolyte properties, can be predicted. PMID:21591707

  10. Convolution based method for calculating inputs from dendritic fields in a continuum model of the retina.

    PubMed

    Al Abed, Amr; Yin, Shijie; Suaning, Gregg J; Lovell, Nigel H; Dokos, Socrates

    2012-01-01

    Computational models are valuable tools that can be used to aid the design and test the efficacy of electrical stimulation strategies in prosthetic vision devices. In continuum models of retinal electrophysiology, the effective extracellular potential can be considered as an approximate measure of the electrotonic loading a neuron's dendritic tree exerts on the soma. A convolution based method is presented to calculate the local spatial average of the effective extracellular loading in retinal ganglion cells (RGCs) in a continuum model of the retina which includes an active RGC tissue layer. The method can be used to study the effect of the dendritic tree size on the activation of RGCs by electrical stimulation using a hexagonal arrangement of electrodes (hexpolar) placed in the suprachoroidal space.
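The local spatial average described above is a 2D convolution of the potential field with a normalized kernel. A pure-Python sketch, where a uniform square kernel stands in for the hexagon-like dendritic-field footprint (this is not the authors' implementation, and the field values are invented):

```python
def local_average(field, radius):
    """Convolve a 2D grid with a uniform (2*radius+1)^2 averaging kernel,
    clipping the neighborhood at the grid boundary. Approximates the local
    average of the effective extracellular potential over a dendritic
    footprint of the given radius (in grid cells)."""
    ny, nx = len(field), len(field[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            vals = [field[ii][jj]
                    for ii in range(max(0, i - radius), min(ny, i + radius + 1))
                    for jj in range(max(0, j - radius), min(nx, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

# Toy 3x3 potential field (arbitrary units):
phi = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
avg = local_average(phi, radius=1)
```

Increasing `radius` mimics a larger dendritic tree, smoothing the loading a cell experiences from a hexpolar electrode pattern.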

  11. Direct calculation of correlation length based on quasi-cumulant method

    NASA Astrophysics Data System (ADS)

    Fukushima, Noboru

    2014-03-01

    We formulate a method of directly obtaining a correlation length, as a high-temperature series, without the full calculation of correlation functions. The method is based on the quasi-cumulant method, formulated by the author in J. Stat. Phys. 111, 1049-1090 (2003) as a complement to the high-temperature series expansion, originally for an SU(n) Heisenberg model but applicable to general spin models after our recent reformulation. A correlation function divided by its lowest-order nonzero contribution has properties very similar to a generating function of some kind of moments, which we call quasi-moments. Their corresponding quasi-cumulants can also be derived, whose generating function is related to the correlation length. In addition, applications to other numerical methods, such as the quantum Monte Carlo method, are also discussed. JSPS KAKENHI Grant Number 25914008.

  12. A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe

    2015-07-01

    The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
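The energy-accumulation pitfall described above can be demonstrated deterministically: once a single-precision total is large enough, small deposits are rounded away, while compensated (Kahan) summation recovers them. This is a generic illustration, not bGPUMCD's actual code; the `f32` helper emulates GPU single-precision rounding:

```python
import struct

def f32(x):
    # Round a Python double to IEEE single precision, mimicking a GPU float.
    return struct.unpack('f', struct.pack('f', x))[0]

def naive_sum_f32(values):
    total = 0.0
    for v in values:
        total = f32(total + v)  # every accumulation rounded to 32 bits
    return total

def kahan_sum_f32(values):
    # Kahan compensated summation: carry the rounding error forward.
    total, comp = 0.0, 0.0
    for v in values:
        y = f32(v - comp)
        t = f32(total + y)
        comp = f32(f32(t - total) - y)
        total = t
    return total

# At 2**24 the float32 spacing reaches 2.0, so unit deposits vanish.
deposits = [16777216.0, 1.0, 1.0]
naive = naive_sum_f32(deposits)   # both 1.0 deposits are rounded away
kahan = kahan_sum_f32(deposits)   # compensation recovers them
```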

  13. A cultural study of a science classroom and graphing calculator-based technology

    NASA Astrophysics Data System (ADS)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  14. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
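
    The response-matching idea described above can be sketched for a linear system, where the response PSD is |H(f)|² times the input force PSD, so the amplitude of a wide-band random input can be scaled until the predicted response matches a measured level. The single-mode FRF and all numbers below are illustrative, not the MSFC facility pseudo-model.

    ```python
    import numpy as np

    def sdof_frf(f, fn=200.0, zeta=0.02, m=1.0):
        """Magnitude of a single-mode accelerance FRF (all parameters illustrative)."""
        r = f / fn
        wn = 2.0 * np.pi * fn
        return (2.0 * np.pi * f) ** 2 / (
            m * wn**2 * np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2))

    def match_input_amplitude(f, target_rms, frf):
        """Scale a flat wide-band force PSD so the predicted response RMS hits the target."""
        df = f[1] - f[0]
        h2 = frf(f) ** 2
        unit_rms = np.sqrt(np.sum(h2) * df)   # response RMS for a unit-amplitude flat input PSD
        return (target_rms / unit_rms) ** 2   # PSDs scale linearly, RMS with the square root

    f = np.linspace(1.0, 500.0, 2000)
    amp = match_input_amplitude(f, target_rms=5.0, frf=sdof_frf)
    resp_rms = np.sqrt(np.sum(amp * sdof_frf(f) ** 2) * (f[1] - f[0]))
    print(round(resp_rms, 3))   # 5.0 by construction
    ```

    The actual procedure iterates both amplitude and frequency content against a measured response PSD; this sketch shows only the amplitude-scaling step.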

  15. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    SciTech Connect

    Giuseppe Palmiotti

    2015-05-01

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  16. Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn

    PubMed Central

    Gräff, Ingo; Goldschmidt, Bernd; Glien, Procula; Klockner, Sophia; Erdfelder, Felix; Schiefer, Jennifer Lynn; Grigutsch, Daniel

    2016-01-01

    Background To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Material and Methods Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Results Patients classified to the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, orange MTS category patients (n = 118) required nursing staff for 85.07 min and patients in the yellow MTS category (n = 181) for 40.95 min, while the two least acute MTS categories, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. Extrapolating to the 21,899 emergency patients treated in 2010, one nurse can see 67-123 emergency patients per month (50th-95th percentile). The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Conclusion Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments. PMID:27138492
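
    The weighted-average arithmetic behind this staffing calculation can be reproduced directly from the per-category patient counts and engagement times quoted in the abstract; only the figures in the abstract are used here, and the annual workload line is a back-of-envelope illustration rather than the paper's full FTE derivation.

    ```python
    # MTS category: (patients observed, minutes of nursing time per patient),
    # all values taken from the abstract above.
    mts = {
        "red":    (35, 97.93),
        "orange": (118, 85.07),
        "yellow": (181, 40.95),
        "green":  (129, 23.18),
        "blue":   (40, 14.99),
    }

    n_total = sum(n for n, _ in mts.values())                 # 503 observed patients
    weighted_min = sum(n * t for n, t in mts.values()) / n_total
    annual_hours = 21_899 * weighted_min / 60                 # 2010 patient volume
    print(round(weighted_min, 2))   # ~48.64 min weighted average per patient
    ```

    The paper then inflates this demand by the 20.87% staff shortfall and divides by the net annual hours of one full-time nurse to reach the 14.8-27.1 FTE range.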

  17. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
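
    The commissioning step reduces to a linear fit: if the dose from each phase-space-let (PSL) is pre-computed, the total beam dose is a weighted sum D @ w, and the weights are adjusted against measured dose with a smoothness regularizer. The sketch below uses plain Tikhonov-regularized least squares rather than the authors' augmented Lagrangian scheme, and all data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_psl = 50, 8
    D = rng.random((n_vox, n_psl))          # pre-computed per-PSL dose in water
    w_true = np.linspace(1.0, 2.0, n_psl)   # "true" beam weights (smooth profile)
    d_meas = D @ w_true                     # stand-in for measured dose

    # Second-difference matrix penalizes non-smooth weight profiles.
    L = np.diff(np.eye(n_psl), n=2, axis=0)
    lam = 1e-3
    A = D.T @ D + lam * (L.T @ L)
    w_fit = np.linalg.solve(A, D.T @ d_meas)
    print(np.allclose(w_fit, w_true, atol=1e-2))  # True: weights recovered
    ```

    With noiseless synthetic data and a linear (hence already smooth) weight profile, the regularizer introduces no bias and the fit recovers the true weights; real commissioning trades fidelity to measurement against smoothness via lam.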

  18. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation

    NASA Astrophysics Data System (ADS)

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001) Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001) Ge-GeO2 model is insensitive to the dangling-bond state.

  19. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.

    PubMed

    Demol, Benjamin; Viard, Romain; Reynaert, Nick

    2015-09-08

    The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. 
Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
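
    The hydrogen-dominance argument can be checked with the standard elemental mixture rule for mass attenuation coefficients, (μ/ρ)_mix = Σᵢ wᵢ (μ/ρ)ᵢ. The coefficients below are approximate NIST values at 1 MeV (cm²/g), and the soft-tissue mass fractions are a simplified four-element stand-in for the ICRU composition, so treat the numbers as illustrative.

    ```python
    # Approximate mass attenuation coefficients at 1 MeV, cm^2/g (NIST-like values).
    mu_rho_1mev = {"H": 0.1263, "C": 0.0636, "N": 0.0636, "O": 0.0637}

    soft_tissue = {"H": 0.102, "C": 0.143, "N": 0.034, "O": 0.721}  # simplified ICRU-like
    h_plus_o    = {"H": 0.102, "O": 0.898}  # same hydrogen, all other elements -> oxygen

    def mixture(mu_rho, fractions):
        """Mass attenuation coefficient of a mixture via mass-fraction weighting."""
        return sum(w * mu_rho[el] for el, w in fractions.items())

    orig = mixture(mu_rho_1mev, soft_tissue)
    subs = mixture(mu_rho_1mev, h_plus_o)
    rel_diff = abs(subs - orig) / orig
    print(rel_diff < 0.01)   # True: under 1%, consistent with the abstract's claim
    ```

    In the Compton-dominated megavoltage range, μ/ρ scales with Z/A, which is roughly 0.5 for every element except hydrogen (roughly 1.0), which is why only the hydrogen fraction matters.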

  20. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    PubMed

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province caused a rapid transition of coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide the basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, construction land. The order of contribution rates of each ecological function value, from high to low, was nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. By 2081, ecological security will reach the bottom line and the human-dominated ecological system will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone. PMID:27228612

  1. The Venus nitric oxide night airglow: Model calculations based on the Venus thermospheric general circulation model

    SciTech Connect

    Bougher, S.W. ); Gerard, J.C. ); Stewart, A.I.F.; Fesen, C.G. )

    1990-05-01

    Pioneer Venus (PV) orbiter ultraviolet spectrometer (OUVS) images of the nightside airglow in the (0, 1) δ band of nitric oxide showed a maximum whose average location was at 0200 local solar time just south of the equator. The average airglow brightness calculated over a portion of the nightside for 35 early orbits during the Pioneer Venus mission was a factor of 4 lower than this maximum. Recent recalibration of the PV OUVS instrument and reanalysis of the data yield new values for this statistical maximum (1.9 ± 0.6 kR) and the nightside average (400-460 ± 120 R) nightglow. This emission is produced by radiative recombination of N and O atoms transported from their source on the dayside to the nightside by the Venus thermospheric circulation. The Venus Thermospheric General Circulation Model (VTGCM) has been extended to incorporate odd nitrogen chemistry in order to examine the dynamical and chemical processes required to give rise to this emission. Its predictions of dayside N atom densities are also compared with empirical models based on Pioneer Venus measurements. Calculations are presented corresponding to OUVS data taken during solar maximum. The average production of nitrogen atoms on the dayside is about 9.0 × 10⁹ atoms cm⁻² s⁻¹. Approximately 30% of this dayside source is required for transport to the nightside to yield the observed dark-disk nightglow features. The statistical location and intensity of the bright spot are well reproduced, as well as the altitude of the airglow layer. The importance of the large-scale transport and eddy diffusion on the global N(⁴S) distribution is also evaluated.

  3. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...

  4. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...

  5. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...

  6. Calculation of vibrational spectra for dioxouranium monochloride monomer and dimers

    NASA Astrophysics Data System (ADS)

    Umreiko, D. S.; Shundalau, M. B.; Zazhogin, A. P.; Komyak, A. I.

    2010-09-01

    Structural models were built and spectral characteristics were calculated based on ab initio calculations for the monomer and dimers of dioxouranium monochloride UO2Cl. The calculations were carried out using DFT methods (B3LYP), with the effective core potential LANL2DZ approximation for the uranium atom and all-electron cc-pVDZ basis sets for the oxygen and chlorine atoms. The monomer UO2Cl was found to possess an equilibrium planar (close to T-shaped) configuration with C2v symmetry. The obtained spectral characteristics were analyzed and compared with experimental data. The adequacy of the proposed models and the qualitative agreement between calculation and experiment were demonstrated.

  7. Impact of Heterogeneity-Based Dose Calculation Using a Deterministic Grid-Based Boltzmann Equation Solver for Intracavitary Brachytherapy

    SciTech Connect

    Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M.N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas

    2012-07-01

    Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received ¹⁹²Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without the solid model applicator, with and without overriding the patient contour to 1 g/cm³ muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm³ bone. The impact of source and boundary modeling, applicator, tissue heterogeneities, and sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm³ to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm³ bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary modeling and the applicator account for most of the differences between the GBBS and TG-43 guidelines. The D2cm³ rectum (n = 3), D2cm³ sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. 
Conclusions

  8. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    SciTech Connect

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.

    2014-02-15

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with ¹²⁵I, ¹⁰³Pd, or ¹³¹Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model

  9. Stationarity Modeling and Informatics-Based Diagnostics in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.

    2005-01-15

    In Monte Carlo criticality calculations, source error propagation through the stationary (active) cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of fission kernels. For symmetric two-fissile-component systems with the DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is more than 1000. When such a system is made slightly asymmetric, the neutron effective multiplication factor at the inactive cycles does not reflect the convergence to stationary source distribution. To overcome this problem, relative entropy has been applied to a slightly asymmetric two-fissile-component problem with a DR of 0.993. The numerical results are mostly satisfactory but also show the possibility of the occasional occurrence of unnecessarily strict stationarity diagnostics. Therefore, a criterion is defined based on the concept of data compression limit in information theory. Numerical results for a pressurized water reactor fuel storage facility with a DR of 0.994 strongly support the efficacy of relative entropy in both the posterior and progressive stationarity diagnostics.
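
    The relative-entropy diagnostic amounts to binning the fission source on a spatial mesh each cycle and tracking the relative entropy (Kullback-Leibler divergence) of the cycle distribution against a reference distribution, which decays toward zero as the source converges. The "transport" below is a fake two-site iteration, purely for illustration of the diagnostic quantity.

    ```python
    import numpy as np

    def relative_entropy(p, q):
        """D(p||q) = sum_i p_i * log(p_i / q_i), over bins where p_i > 0."""
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    q = np.array([0.5, 0.5])      # stationary source of a symmetric two-component system
    p = np.array([0.9, 0.1])      # badly converged initial source guess
    history = []
    for _ in range(50):
        p = 0.8 * p + 0.2 * q     # toy iteration pulling the source toward stationarity
        history.append(relative_entropy(p, q))

    print(history[0] > history[-1])   # True: the divergence decays cycle by cycle
    ```

    In the paper's setting, a criterion derived from the data compression limit decides when this decay is "close enough to zero" to declare stationarity, avoiding both premature and unnecessarily strict diagnostics.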

  10. Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.

    PubMed

    Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P

    2016-06-14

    Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates.

  11. Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems

    NASA Technical Reports Server (NTRS)

    Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.

    2012-01-01

    Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that negatively affects overall performance. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
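
    The operation being accelerated is the per-pixel Euclidean distance between an N-band spectrum and a reference spectrum; a software reference model like the sketch below is the kind of golden model an FPGA datapath would be verified against. Band count and pixel values are illustrative.

    ```python
    import numpy as np

    def msed(cube, ref):
        """Multi-Spectral Euclidean Distance of every pixel in a (rows, cols, bands)
        cube against a reference spectrum of length bands."""
        diff = cube - ref                      # broadcast over the spatial axes
        return np.sqrt(np.sum(diff * diff, axis=-1))

    cube = np.zeros((2, 2, 4))                 # tiny 2x2 image with 4 spectral bands
    cube[0, 0] = [3.0, 0.0, 4.0, 0.0]
    ref = np.zeros(4)
    print(msed(cube, ref)[0, 0])               # 5.0 (3-4-5 triangle across bands)
    ```

    In hardware, the subtract/square/accumulate chain maps naturally onto DSP slices, and the square root is often deferred or replaced by the squared distance when only ordering matters.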

  12. Formation of a 6FDA-based ring polyimide with nanoscale cavity evaluated by DFT calculations

    NASA Astrophysics Data System (ADS)

    Fukuda, Mitsuhiro; Takao, Yoshimi; Tamai, Yoshinori

    2005-04-01

    The computer-aided molecular design of a rigid ring molecule has been performed. As a candidate molecule, the polyimide derived from 2,2-bis(3,4-carboxylphenyl) hexafluoropropane dianhydride (6FDA) with m-phenylenediamine (MDA) has been used. The optimized structures of the 6FDA-MDA model compounds including a precursor type amic acid model were investigated using the density functional theory (DFT) at the B3LYP/6-311G(d,p) level. Using the optimized structures of the model compounds, the probable combinations to form a flat ring polyimide are considered by taking the spatial angles between the respective aromatic groups into consideration. We selected several combinations with different conformations and the number of monomer units. We showed that the dimer, trimer and tetramer of not only the 6FDA-based ring imide but also the corresponding ring amic acid can have a stable geometry. Each of them contains a cavity of sub-nanometer size and characteristic shape. Among them, the interaction energy with some guest molecules are evaluated for the smallest ring imide constructed from two units of 6FDA-MDA using the DFT calculations.

  13. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    PubMed Central

    Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors that accompany exhaustive manual calculation. PMID:19641642
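
    The matrix described above amounts to evaluating the indirect Fick equation once per predicted oxygen consumption, yielding a range instead of a point estimate: flow Q = VO₂ / (C_out − C_in), with oxygen contents computed from hemoglobin and saturations. The VO₂ predictions below are made-up stand-ins for the five published prediction models, not the models themselves, and dissolved oxygen is ignored.

    ```python
    def o2_content(hb_g_dl, sat):
        """O2 content in mL/L: 1.36 mL O2 per g Hb, times 10 dL/L (dissolved O2 ignored)."""
        return 1.36 * hb_g_dl * sat * 10

    def fick_flow(vo2_ml_min, c_in, c_out):
        """Blood flow in L/min from the Fick principle."""
        return vo2_ml_min / (c_out - c_in)

    hb = 12.0                                   # hemoglobin, g/dL (illustrative)
    c_pa = o2_content(hb, 0.75)                 # pulmonary artery (mixed venous) content
    c_pv = o2_content(hb, 0.98)                 # pulmonary vein content
    vo2_predictions = [110, 120, 130, 140, 150] # hypothetical predicted VO2 values, mL/min
    flows = [round(fick_flow(v, c_pa, c_pv), 2) for v in vo2_predictions]
    print(min(flows), max(flows))               # a likely range rather than a single value
    ```

    Reporting the minimum and maximum across the prediction models is exactly the "upper and lower limits of a likely range" that the abstract argues is more realistic than a single-VO₂ estimate.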

  14. Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.

    PubMed

    Bandura, A V; Evarestov, R A; Lukyanov, S I

    2014-07-28

    A new method of theoretical modelling of polyhedral single-walled nanotubes based on the consolidation of walls in the rolled-up multi-walled nanotubes is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of two methods allows us to simulate the structures which are difficult to find only by ab initio calculations. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than the symmetry of initial coaxial cylindrical double- or triple-walled nanotubes. These merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that the merged nanotubes can integrate the two different crystalline phases in one and the same wall structure.

  15. Fission yield calculation using toy model based on Monte Carlo simulation

    SciTech Connect

    Jubaidah; Kurniadi, Rizal

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L, μ_R), and the deviations of the left and right curves (σ_L, σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of light fission yield is in the range of 90
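
    The double-Gaussian sampling at the heart of this picture can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the mass number and the Gaussian means and widths are assumed values chosen only to demonstrate the sampling scheme.

```python
# Illustrative sketch: sample an asymmetric fission mass yield as a sum of two
# Gaussians. MU/SIGMA values are assumptions, not the paper's fitted parameters.
import random

A = 236                      # mass number of the fissioning toy nucleus (assumed)
MU_L, MU_R = 95.0, 141.0     # means of the left and right Gaussians (assumed)
SIGMA_L, SIGMA_R = 5.0, 5.0  # widths of the two Gaussians (assumed)

def sample_fragment_pair(rng):
    """Draw one fragment mass from the double Gaussian; the partner takes the rest."""
    if rng.random() < 0.5:
        a_frag = rng.gauss(MU_L, SIGMA_L)
    else:
        a_frag = A - rng.gauss(MU_R, SIGMA_R)
    return a_frag, A - a_frag  # nucleon number is conserved by construction

rng = random.Random(0)
light = [min(pair) for pair in (sample_fragment_pair(rng) for _ in range(10000))]
mean_light = sum(light) / len(light)
print(f"mean light-fragment mass ~ {mean_light:.1f}")
```

    Histogramming many such pairs reproduces the two intersecting Gaussian humps of an asymmetric yield distribution.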

  16. Neutron capture gamma-ray data and calculations for HPGe detector-based applications

    NASA Astrophysics Data System (ADS)

    McNabb, Dennis P.; Firestone, Richard B.

    2004-10-01

    Recently an IAEA Coordinated Research Project published an evaluation of thermal neutron capture gamma-ray cross sections, measured to 1-5% uncertainty for over 80 elements [1], and produced the Evaluated Gamma-ray Activation File (EGAF) [2], containing nearly 35,000 primary and secondary gamma-rays, which is available from the IAEA Nuclear Data Section. We have begun an effort to model the quasi-continuum gamma-ray cascade following neutron capture using the approach outlined by Becvar et al. [3], while constraining the calculation to reproduce the measured cross sections deexciting low-lying levels. Our goal is to provide complete neutron capture gamma-ray data in ENDF-formatted files for use as accurate event generators for high-resolution HPGe detector based applications. The results will be benchmarked against experimental spectroscopic data and compared with existing gamma-decay widths and level densities. [1] Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis, IAEA-TECDOC-DRAFT (December, 2003); http://www-nds.iaea.org/pgaa/tecdoc.pdf. [2] Evaluated Gamma-ray Activation File, maintained by the International Atomic Energy Agency; http://www-nds.iaea.org/pgaa/. [3] F. Becvar, Nucl. Instr. Meth. A417, 434 (1998).

  17. Accelerated materials design of fast oxygen ionic conductors based on first principles calculations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Mo, Yifei

    Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first-principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material with enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic factors limiting the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order-of-magnitude increase in oxygen ionic conductivity, up to 0.1 S cm-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first principles techniques. This computational methodology and workflow can be applied to the materials design of any fast ion-conducting material (e.g. Li+, Na+).

  18. Research on Structural Safety of the Stratospheric Airship Based on Multi-Physics Coupling Calculation

    NASA Astrophysics Data System (ADS)

    Ma, Z.; Hou, Z.; Zang, X.

    2015-09-01

    As a large-scale flexible inflatable structure with a huge inner lifting gas volume of several hundred thousand cubic meters, the stratospheric airship's thermal characteristics of the inner gas play an important role in its structural performance. During floating flight, the day-night variation of the combined thermal condition leads to fluctuation of the flow field inside the airship, which remarkably affects the pressure acting on the skin and the structural safety of the stratospheric airship. According to the multi-physics coupling mechanism mentioned above, a numerical procedure for structural safety analysis of stratospheric airships is developed, integrating the thermal model, CFD model, finite element code and a criterion of structural strength. Based on the computational models, the distributions of the deformations and stresses of the skin are calculated over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can be referenced for the structural design of stratospheric airships.

  19. GPAW - massively parallel electronic structure calculations with Python-based software.

    SciTech Connect

    Enkovaara, J.; Romero, N.; Shende, S.; Mortensen, J.

    2011-01-01

    Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages such as Python can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of the Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
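
    The two-language pattern the paper describes can be illustrated with the standard-library ctypes module: Python drives the program while a compiled C routine does the numerics. This is a generic sketch unrelated to GPAW's actual C extension API; the system C math library stands in for custom C kernels.

```python
# Generic sketch of the interpreted+compiled split: Python logic calling a
# routine from a compiled C library via ctypes (not GPAW's actual mechanism).
import ctypes
import ctypes.util
import math

# locate and load the C math library (fallback soname is a common Linux default)
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.cos.restype = ctypes.c_double       # declare the C signature explicitly
libm.cos.argtypes = [ctypes.c_double]

x = 0.5
c_result = libm.cos(x)                   # executed in compiled C
py_result = math.cos(x)                  # Python's reference value
print(f"cos({x}) from libm: {c_result:.12f}")
```

    In a real code like GPAW the compiled side would hold the performance-critical kernels (stencils, BLAS calls), while Python retains the high-level program flow, exactly the division of labour the abstract argues for.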

  1. Experimental verification of internal dosimetry calculations: Construction of a heterogeneous phantom based on human organs

    NASA Astrophysics Data System (ADS)

    Lauridsen, Bente; Hedemann Jensen, Per

    1987-03-01

    The basic dosimetric quantity in ICRP publication no. 30 is the absorbed fraction AF(T←S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T←S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T←S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells, and the same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dosimeters in a matrix so the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented.

  2. Random and bias errors in simple regression-based calculations of sea-level acceleration

    NASA Astrophysics Data System (ADS)

    Howd, P.; Doran, K. J.; Sallenger, A. H.

    2012-12-01

    We examine the random and bias errors associated with three simple regression-based methods used to calculate the acceleration of sea-level elevation (SL). These methods are: (1) using ordinary least-squares regression (OLSR) to fit a single second-order (in time) equation to an entire elevation time series; (2) using a sliding regression window with OLSR 2nd-order fits to provide time- and window-length-dependent estimates; and (3) using a sliding regression window with OLSR 1st-order fits to provide time- and window-length-dependent estimates of sea level rate differences (SLRD). A Monte Carlo analysis using synthetic elevation time series with 9 different noise formulations (red, AR(1), and white noise at 3 variance levels) is used to examine the error structure associated with the three analysis methods. We show that, as expected, the single-fit method (1), while providing statistically unbiased estimates of the mean acceleration over an interval, by statistical design does not provide estimates of time-varying acceleration. This technique cannot be expected to detect recent changes in SL acceleration, such as those predicted by some climate models. The two sliding-window techniques show similar qualitative results for the test time series, but differ dramatically in their statistical significance. Estimates of acceleration based on the 2nd-order fits (2) are numerically smaller than the rate differences (3), and in the presence of near-equal residual noise, are more difficult to detect with statistical significance. We show, using the SLRD estimates from tide gauge data, how statistically significant changes in sea level acceleration can be detected at different temporal and spatial scales.
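
    Method (2), the sliding-window quadratic fit, can be sketched with ordinary least squares via normal equations. This is a stdlib-only illustration of the technique, not the authors' code; the window length and the noiseless synthetic series are assumptions for the demo.

```python
# Sliding-window OLSR estimate of acceleration: fit h(t) = a + b*t + (c/2)*t^2
# in each window; the coefficient c is the acceleration estimate.

def gauss_solve(A, b):
    """Solve A x = b (small dense system) by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def quadratic_accel(t, h):
    """OLS fit of h = a + b*t + (c/2)*t^2 over one window; returns c."""
    X = [[1.0, ti, 0.5 * ti * ti] for ti in t]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xth = [sum(row[i] * hv for row, hv in zip(X, h)) for i in range(3)]
    return gauss_solve(XtX, Xth)[2]

def sliding_acceleration(t, h, window):
    """Acceleration estimate at each window position (method 2 in the abstract)."""
    return [quadratic_accel(t[i:i + window], h[i:i + window])
            for i in range(len(t) - window + 1)]

# synthetic demo: constant acceleration of 0.02 (arbitrary units), no noise
t = [float(i) for i in range(30)]
h = [2.0 + 0.3 * ti + 0.5 * 0.02 * ti * ti for ti in t]
accels = sliding_acceleration(t, h, 11)
```

    Method (3) would replace the quadratic fit with 1st-order fits and difference the resulting rates between windows.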

  3. CHEMEOS: a new chemical-picture-based model for plasma equation-of-state calculations

    SciTech Connect

    Hakel, P.; Kilcrease, D. P.

    2004-01-01

    We present the results of a new plasma equation-of-state (EOS) model currently under development at the Atomic and Optical Theory Group (T-4) in Los Alamos. This model is based on the chemical picture of the plasma and uses the free-energy-minimization technique and the occupation-probability formalism. The model is constructed as a combination of ideal and non-ideal contributions to the total Helmholtz free energy of the plasma, including the effects of plasma microfields, strong coupling, and the hard-sphere description of the finite sizes of atomic species with bound electrons. These types of models have been recognized as a convenient and computationally inexpensive tool for modeling local-thermal-equilibrium (LTE) plasmas over a broad range of temperatures and densities. We calculate the thermodynamic characteristics of the plasma (such as pressure and internal energy), and populations and occupation probabilities of atomic bound states. In addition to a smooth truncation of partition functions, necessary for extracting ion populations from the system of Saha-type equations, the occupation probabilities can also be used for merging Rydberg line series into their associated bound-free edges. In the low-density, high-temperature regimes the plasma effects are adequately described by the Debye-Hückel model and its corresponding contribution to the total Helmholtz free energy of the plasma. In strongly-coupled plasmas, however, the Debye-Hückel approximation is no longer appropriate. In order to extend the validity of our EOS model to strongly-coupled plasmas while maintaining its analytic nature, we adopt fits to the plasma free energy based on hypernetted-chain and Monte Carlo simulations. Our results for hydrogen are compared to other theoretical models. Hydrogen has been selected as a test case on which improvements in EOS physics are benchmarked before analogous upgrades are included for any element in the EOS part of the new Los Alamos

  4. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    ERIC Educational Resources Information Center

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  5. An EGS4 based mathematical phantom for radiation protection calculations using standard man

    SciTech Connect

    Wise, K.N.

    1994-11-01

    This note describes an Electron Gamma Shower code (EGS4) Monte Carlo program for calculating radiation transport in adult males and females from internal or external electron and gamma sources which requires minimal knowledge of organ geometry. Calculations of the dose from planar gamma fields and from computerized tomography illustrate two applications of the package. 25 refs., 5 figs.

  6. A Lagrangian parcel based mixing plane method for calculating water based mixed phase particle flows in turbo-machinery

    NASA Astrophysics Data System (ADS)

    Bidwell, Colin S.

    2015-05-01

    A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine. This method allows prediction of the temperature and phase change of water-based particles along their path, and of the impingement efficiency and particle impact property data on various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the engine, which was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m assuming a standard warm day for ice particle sizes of 5, 20 and 100 microns and a free stream particle concentration of . The impingement efficiency results showed that as particle size increased, average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size, because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The particle phase change analysis showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest 5 micron particles were warmed enough to produce melting, with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6%).

  7. New calculation schemes for the "building-base" system in conditions of collapsing loess soils

    SciTech Connect

    Mezherovskii, V.A.

    1994-05-01

    New calculation schemes are suggested for the "building-loess collapsing base" system, with the help of which it is possible to obtain values of the forces and movements occurring in a building as a result of collapses of bases that are close to the real ones in the nature of moistening and deformations of loess strata.

  8. Volume calculation of subsurface structures and traps in hydrocarbon exploration — a comparison between numerical integration and cell based models

    NASA Astrophysics Data System (ADS)

    Slavinić, Petra; Cvetković, Marko

    2016-01-01

    The volume calculation of geological structures is one of the primary goals when dealing with exploration or production of oil and gas. Most of these calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and those obtained from cell-based models. The comparison is illustrated with four models: a dome (half-sphere), an elongated anticline, a stratigraphic trap due to lateral facies change, and a faulted anticline trap. Results show that Simpson's and the trapezoidal rules give a very accurate volume calculation even with few inputs (isopach areas, i.e. ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
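
    Both quadrature rules are straightforward to state in code. The sketch below mirrors the paper's dome (half-sphere) case as a check, since the exact volume is known; the radius and number of ordinates are arbitrary demo choices.

```python
# Bulk volume from equally spaced isopach areas by the trapezoidal and
# Simpson's rules, checked against an exact half-sphere volume.
import math

def trapezoid_volume(areas, dz):
    """Composite trapezoidal rule over equally spaced isopach areas."""
    return dz * (0.5 * areas[0] + sum(areas[1:-1]) + 0.5 * areas[-1])

def simpson_volume(areas, dz):
    """Composite Simpson's rule; requires an odd number of ordinates."""
    if len(areas) % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of ordinates")
    s = areas[0] + areas[-1] + 4 * sum(areas[1:-1:2]) + 2 * sum(areas[2:-1:2])
    return dz / 3.0 * s

# dome (half-sphere) check: cross-section area at height z is pi*(R^2 - z^2)
R, n = 100.0, 11
dz = R / (n - 1)
areas = [math.pi * (R * R - (i * dz) ** 2) for i in range(n)]
exact = 2.0 / 3.0 * math.pi * R ** 3
```

    Because the half-sphere's isopach areas vary quadratically with depth, Simpson's rule integrates them exactly, while the trapezoidal rule carries a small systematic error, consistent with the accuracy ranking the paper reports.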

  9. Benchmarking of Monte Carlo based shutdown dose rate calculations for applications to JET.

    PubMed

    Petrizzi, L; Batistoni, P; Fischer, U; Loughlin, M; Pereslavtsev, P; Villari, R

    2005-01-01

    The calculation of dose rates after shutdown is an important issue for operating nuclear reactors, and a validated computational tool is needed for reliable dose rate calculations. In fusion reactors neutrons induce high levels of radioactivity and presumably high doses. The complex geometries of the devices require the use of sophisticated geometry modelling and computational tools for transport calculations; simple rule-of-thumb laws do not always apply well. Two computational procedures have been developed recently and applied to fusion machines. Comparisons between the two methods showed some inherent discrepancies when applied to calculations for ITER, while good agreement was found for a 14 MeV point-source neutron benchmark experiment. Further benchmarks were considered necessary to investigate in more detail the reasons for the different results in different cases. In this frame the application to the Joint European Torus (JET) machine has been considered a useful benchmark exercise. In a first calculational benchmark with a representative D-T irradiation history of JET, the two methods differed by no more than 25%. In another, more realistic benchmark exercise, which is the subject of this paper, the real irradiation histories of the D-T and D-D campaigns conducted at JET in 1997-98 were used to calculate the shutdown doses at different locations, irradiation times and decay times. Experimental dose data recorded at JET for the same conditions offer the possibility to check the prediction capability of the calculations and thus show the applicability (and the constraints) of the procedures and data for the rather complex shutdown dose rate analysis of real fusion devices. Calculation results obtained by the two methods are reported below; comparison with experimental results gives discrepancies ranging between 2 and 10. The reasons for this can be ascribed to the high uncertainty of the experimental data and the unsatisfactory JET model used in the calculation. A new

  10. [Calculation and analysis of arc temperature field of pulsed TIG welding based on Fowler-Milne method].

    PubMed

    Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang

    2012-09-01

    Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. The relationship between particle densities of Ar and temperature was calculated based on spectral theory, as was the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature. An arc image at the 794.8 nm line was captured by a high speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding. PMID:23240389
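
    A minimal discrete Abel inversion can be sketched by onion peeling: the axisymmetric arc is divided into annular shells, and shell emissivities are recovered from the line-of-sight projections starting at the outermost shell. This illustrates the general inversion step only; it is not the study's implementation, and the shell geometry and test values are invented. The Fowler-Milne step, converting emission coefficients to temperature, is not shown.

```python
# Onion-peeling Abel inversion for an axisymmetric source sampled at
# offsets y = j*d; shells are annuli [i*d, (i+1)*d]. Illustrative only.
import math

def _chord(j, i, d):
    """Chord length of the sight line at offset y=j*d through shell [i*d, (i+1)*d]."""
    y, r_in, r_out = j * d, i * d, (i + 1) * d
    return 2.0 * (math.sqrt(r_out * r_out - y * y)
                  - math.sqrt(max(r_in, y) ** 2 - y * y))

def project(emiss, d):
    """Forward model: line-of-sight integrals of per-shell emissivities."""
    n = len(emiss)
    return [sum(_chord(j, i, d) * emiss[i] for i in range(j, n)) for j in range(n)]

def abel_invert(proj, d):
    """Recover shell emissivities from projections, outermost shell first."""
    n = len(proj)
    e = [0.0] * n
    for j in range(n - 1, -1, -1):  # peel inward
        outer = sum(_chord(j, i, d) * e[i] for i in range(j + 1, n))
        e[j] = (proj[j] - outer) / _chord(j, j, d)
    return e
```

    With the radial emission coefficient in hand, the Fowler-Milne method would then assign temperatures by comparison with the line's known emission maximum.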

  11. Current in the Protein Nanowires: Quantum Calculations of the Base States.

    PubMed

    Suprun, Anatol D; Shmeleva, Liudmyla V

    2016-12-01

    It is known that synthesis of adenosine triphosphoric acid in mitochondria may only be completed on the condition of transport of the electron pairs, created by oxidation processes, to the mitochondria. To date, many efforts have been made to understand the processes that occur in the course of donor-acceptor electron transport between cellular organelles (that is, between various proteins and protein structures). However, the problem concerning the mechanisms of electron transport over these organelles remains understudied. This paper is dedicated to the investigation of these same issues. It has been shown that, regardless of the amino acid inhomogeneity of the primary structure, it is possible to apply a representation of second quantization to the protein molecule (hereinafter the "numbers of filling" representation). Based on this representation, it has been established that the primary structure of the protein molecule is actually a semiconductor nanowire. Its conduction band, into which an electron is injected as the result of donor-acceptor processes, consists of five sub-bands. Three of these sub-bands have normal dispersion laws, while the other two have abnormal (reverse) dispersion laws. A test calculation of the current density was made under conditions of the complete absence of factors that may be interpreted as external fields. It has been shown that under such conditions the current density is exactly equal to zero. This is evidence of the correctness of the predicted model of the conduction band of the primary structure of the protein molecule (protein nanowire). At the same time, it makes it possible to apply the obtained results to the actual situation, where factors that may be interpreted as external fields exist. PMID:26858156

  13. Full Dimensional Vibrational Calculations for Methane Using an Accurate New Ab Initio Based Potential Energy Surface

    NASA Astrophysics Data System (ADS)

    Majumder, Moumita; Dawes, Richard; Wang, Xiao-Gang; Carrington, Tucker; Li, Jun; Guo, Hua; Manzhos, Sergei

    2014-06-01

    New potential energy surfaces for methane were constructed, represented as analytic fits to about 100,000 individual high-level ab initio data points. Explicitly-correlated multireference data (MRCI-F12(AE)/CVQZ-F12) were computed using Molpro [1] and fit using multiple strategies. Fits with small to negligible errors were obtained using adaptations of the permutation-invariant-polynomials (PIP) approach [2,3] based on neural networks (PIP-NN) [4,5] and the interpolative moving least squares (IMLS) fitting method [6] (PIP-IMLS). The PESs were used in full-dimensional vibrational calculations with an exact kinetic energy operator by representing the Hamiltonian in a basis of products of contracted bend and stretch functions and using a symmetry-adapted Lanczos method to obtain eigenvalues and eigenvectors. Very close agreement with experiment was produced from the purely ab initio PESs. References: [1] H.-J. Werner, P. J. Knowles, G. Knizia, MOLPRO version 2012.1, a package of ab initio programs; see http://www.molpro.net. [2] Z. Xie and J. M. Bowman, J. Chem. Theory Comput. 6, 26 (2010). [3] B. J. Braams and J. M. Bowman, Int. Rev. Phys. Chem. 28, 577 (2009). [4] J. Li, B. Jiang and H. Guo, J. Chem. Phys. 139, 204103 (2013). [5] S. Manzhos, X. Wang, R. Dawes and T. Carrington, J. Phys. Chem. A 110, 5295 (2006). [6] R. Dawes, X.-G. Wang, A. W. Jasper and T. Carrington Jr., J. Chem. Phys. 133, 134304 (2010).

  14. [Amikacin pharmacokinetics in adults: a variability that question the dose calculation based on weight].

    PubMed

    Bourguignon, Laurent; Goutelle, Sylvain; Gérard, Cécile; Guillermet, Anne; Burdin de Saint Martin, Julie; Maire, Pascal; Ducher, Michel

    2009-01-01

    The use of amikacin is difficult because of its toxicity and its pharmacokinetic variability. This variability is almost ignored in adult standard dosage regimens, since only the weight is used in the dose calculation. Our objective was to test whether the pharmacokinetics of amikacin can be regarded as homogeneous, and whether the method of calculating the dose according to the patient's weight is appropriate. From a cohort of 580 patients, five groups of patients were created by statistical data partitioning. A population pharmacokinetic analysis was performed in each group. The adult population is not homogeneous in terms of pharmacokinetics. The doses required to achieve a maximum concentration of 60 mg/L differ strongly (585 to 1507 mg) between groups. The exclusive use of weight to calculate the dose of amikacin appears inappropriate for 80% of patients, showing the limits of the formulae for calculating doses of aminoglycosides.

  15. 12 CFR 702.106 - Standard calculation of risk-based net worth requirement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...

  16. Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling

    NASA Astrophysics Data System (ADS)

    Xu, Wei-Xin; Li, Yang; Zhang, John Z. H.

    2012-06-01

    Potential of mean force (PMF) with respect to localized reaction coordinates (RCs) such as distance is often applied to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculation of PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method and the umbrella sampling method for the calculation of PMF as a function of a global RC from the coarse-grained Langevin dynamics simulations for a model protein. The method yields the folding free energy profile projected onto a global RC, which is in accord with benchmark results. With this method rare global events would be sufficiently sampled because the biased potential can be used for restricting the global conformation to specific regions during free energy calculations. The strategy presented can also be utilized in calculating the global intra- and intermolecular PMF at more detailed levels.
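
    The WHAM self-consistency equations referenced above can be sketched in a few lines for a one-dimensional binned RC. This is a minimal illustration of the technique, not the study's coarse-grained implementation; the window centers, bias strengths, and bin layout would come from the actual umbrella sampling runs.

```python
# Minimal 1D WHAM: combine biased histograms from umbrella windows into one
# unbiased probability profile (PMF = -kT * ln p). Illustrative sketch only.
import math

def wham(hists, counts, biases, kT=1.0, n_iter=5000):
    """Self-consistent WHAM iteration for binned umbrella-sampling data.

    hists[k][i]  -- counts from window k in RC bin i
    counts[k]    -- total samples in window k
    biases[k][i] -- umbrella bias energy of window k evaluated at bin i
    Returns the unbiased probability per bin, normalized over bins.
    """
    K, M = len(hists), len(hists[0])
    f = [0.0] * K  # per-window free energy shifts
    for _ in range(n_iter):
        # unbiased probability estimate per bin from all windows
        p = [sum(h[i] for h in hists) /
             sum(counts[k] * math.exp((f[k] - biases[k][i]) / kT) for k in range(K))
             for i in range(M)]
        # update window free energies from the current estimate
        f = [-kT * math.log(sum(p[i] * math.exp(-biases[k][i] / kT)
                                for i in range(M))) for k in range(K)]
        f = [fk - f[0] for fk in f]  # fix the arbitrary gauge
    z = sum(p)
    return [pi / z for pi in p]
```

    For a global RC the same machinery applies unchanged; only the definition of the coordinate being histogrammed differs, which is the point the paper exploits.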

  17. Calculator-Based Laboratory probeware implementation and integration by middle school science teachers

    NASA Astrophysics Data System (ADS)

    Wetzel, David Ronald

    The purpose of this study was to investigate the factors that influenced five middle school science teachers as they implemented and integrated Calculator-Based Laboratory (CBL) probeware in their curricula. Additionally, the research explored how the implementation and integration process influenced the teachers' pedagogy and curricula. The findings of this study were the result of a multiple case study design using both qualitative and quantitative data. Each teacher's data were analyzed in an individual case study, with a cross-case analysis of the case studies to determine the common factors that influenced the teachers. The framework for the cross-case analysis consisted of four research questions and sub-questions. These questions focused on how the teachers used CBL probeware with students, their concerns regarding the implementation and integration process, factors that influenced their level of use, and changes they made in their teaching strategies and techniques. This study found that four of the five teachers successfully underwent short-term pedagogical and curricular transformation to include CBL probeware technology, and that one teacher was so strongly influenced by the Virginia Standards of Learning tests that she remained a nonuser of CBL probeware at the end of the study. The collaborative efforts of the teachers contributed to an 80% success rate of CBL probeware integration. Pre-service and in-service knowledge and experience with computer-based instructional technology did not provide these teachers with adequate preparation for CBL probeware integration. At the end of the study, there was potential for long-term pedagogical and curricular transformation by the teachers, but the limitation of having only one class set of CBL probeware reduced their ability to fully integrate this technology in all curricula. Four of the five teachers' views and beliefs shifted regarding the use of CBL probeware in their curricula as they discovered that

  18. A regression model for calculating the boiling point isobars of tetrachloromethane-based binary solutions

    NASA Astrophysics Data System (ADS)

    Preobrazhenskii, M. P.; Rudakov, O. B.

    2016-01-01

    A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the proposed model were calculated for a series of solutions. A correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter values of the proposed model are shown to allow prediction of the potential formation of azeotropic mixtures of the solvents with tetrachloromethane.
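    The abstract does not give the model's functional form; a common minimal choice for a binary-mixture boiling isobar is additive mixing of the pure-component boiling points plus a single nonadditivity term. The sketch below (all names and numerical values are hypothetical, for illustration only) fits that parameter by least squares:

```python
import numpy as np

def isobar_model(x, t1, t2, a):
    """Boiling temperature of a binary mixture at mole fraction x of
    component 1: linear (additive) mixing of the pure-component boiling
    points plus a nonadditivity term a*x*(1-x), which vanishes at both
    pure-component limits."""
    return x * t1 + (1.0 - x) * t2 + a * x * (1.0 - x)

def fit_nonadditivity(x, t, t1, t2):
    """Least-squares estimate of the nonadditivity parameter a from
    measured boiling points t at compositions x, given the pure-component
    boiling points t1 and t2."""
    basis = x * (1.0 - x)
    residual = t - (x * t1 + (1.0 - x) * t2)
    return float(basis @ residual / (basis @ basis))

# Synthetic check: recover a known nonadditivity parameter.
x = np.linspace(0.1, 0.9, 9)
t = isobar_model(x, 349.9, 337.7, -12.0)  # invented pure-component values (K)
a = fit_nonadditivity(x, t, 349.9, 337.7)
```

A negative fitted `a` (a boiling-point depression below the additive line) is the kind of signature that can indicate azeotrope formation, which is the use the abstract describes.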

  19. Effectiveness of a computer based medication calculation education and testing programme for nurses.

    PubMed

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne

    2012-01-01

    The aim of the study was to evaluate the effect of an online medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at the first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time, and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety.

  20. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... HFET-based fuel economy, CO2 emissions, and carbon-related exhaust emission values for vehicle... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.206-12 Calculation and use of...

  1. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... HFET-based fuel economy, CO2 emissions, and carbon-related exhaust emission values for vehicle... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.206-12 Calculation and use of...

  2. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-12 Calculation of...

  3. Calculation of aqueous solubility of crystalline un-ionized organic chemicals and drugs based on structural similarity and physicochemical descriptors.

    PubMed

    Raevsky, Oleg A; Grigor'ev, Veniamin Yu; Polianczyk, Daniel E; Raevskaja, Olga E; Dearden, John C

    2014-02-24

    Solubilities of crystalline organic compounds calculated according to the AMP (arithmetic mean property) and LoReP (local one-parameter regression) models, based on structural and physicochemical similarities, are presented. We used data on the water solubility of 2615 compounds in un-ionized form measured at 25±5 °C. The calculation results were compared with an equation based on experimental data for lipophilicity and melting point. According to statistical criteria, the model based on structural and physicochemical similarities showed a better fit to the experimental data. An additional advantage of this model is that it uses only theoretical descriptors, which provides a means of calculating the water solubility of both existing and not yet synthesized compounds.

  4. Accurate all-electron G0W0 quasiparticle energies employing the full-potential augmented plane-wave method

    NASA Astrophysics Data System (ADS)

    Nabok, Dmitrii; Gulans, Andris; Draxl, Claudia

    2016-07-01

    The GW approach of many-body perturbation theory has become a common tool for calculating the electronic structure of materials. However, with an increasing number of published results, discrepancies between the values obtained by different methods and codes become more and more apparent. For a test set of small- and wide-gap semiconductors, we demonstrate how to reach the numerically best electronic structure within the framework of the full-potential linearized augmented plane-wave (FLAPW) method. We first evaluate the impact of local orbitals on the Kohn-Sham eigenvalue spectrum of the underlying starting point. The role of the basis-set quality is then further analyzed when calculating the G0W0 quasiparticle energies. Our results, computed with the exciting code, are compared to those obtained using the projector-augmented plane-wave formalism, finding overall good agreement between both methods. We also provide data produced with a typical FLAPW basis set as a benchmark for other G0W0 implementations.
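    For context, the perturbative G0W0 scheme referenced here obtains quasiparticle energies as first-order corrections to the Kohn-Sham eigenvalues; a standard textbook form of the correction (general, not specific to the exciting code) is:

```latex
E^{\mathrm{QP}}_{n\mathbf{k}} \;=\; \epsilon_{n\mathbf{k}}
  \;+\; Z_{n\mathbf{k}}\,
  \langle \psi_{n\mathbf{k}} \,\vert\,
    \Sigma(\epsilon_{n\mathbf{k}}) - v_{\mathrm{xc}}
  \,\vert\, \psi_{n\mathbf{k}} \rangle,
\qquad
Z_{n\mathbf{k}} \;=\;
  \left[\, 1 -
    \left.\frac{\partial \Sigma}{\partial \omega}\right|_{\omega=\epsilon_{n\mathbf{k}}}
  \right]^{-1}
```

Here Σ = iGW is the self-energy built from the Kohn-Sham starting point, v_xc the exchange-correlation potential, and Z the renormalization factor; basis-set quality enters through both the eigenvalues and the states used to construct Σ.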

  5. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, which has taken place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3-10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  6. Effects of Dispersion in Density Functional Based Quantum Mechanical/Molecular Mechanical Calculations on Cytochrome P450 Catalyzed Reactions.

    PubMed

    Lonsdale, Richard; Harvey, Jeremy N; Mulholland, Adrian J

    2012-11-13

    Density functional theory (DFT) based quantum mechanical/molecular mechanical (QM/MM) calculations have provided valuable insight into the reactivity of the cytochrome P450 family of enzymes (P450s). A failure of commonly used DFT methods, such as B3LYP, is the neglect of dispersion interactions. An empirical dispersion correction has been shown to improve the accuracy of gas-phase DFT calculations on P450s. The current work examines the effect of the dispersion correction in QM/MM calculations on P450s. The hydrogen abstraction from camphor, the hydrogen abstraction and C-O addition of cyclohexene and propene by P450cam, and the addition of benzene to Compound I in CYP2C9 have been modeled at the B3LYP-D2/CHARMM27 level of theory. Single-point energy calculations were also performed at the B3LYP-D3//B3LYP-D2/CHARMM27 level. The dispersion corrections lower activation energy barriers significantly (by ∼5 kcal/mol), as seen for gas-phase calculations, but have a small effect on optimized geometries. These effects are likely to be important in modeling reactions catalyzed by other enzymes as well. Given the low computational cost of including such dispersion corrections, we recommend doing so in all B3LYP-based QM/MM calculations.
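    A D2-type empirical dispersion correction of the kind discussed above is a pairwise sum over atoms. The sketch below shows the standard functional form (damped -C6/R^6 terms); the parameter and coordinate values are illustrative toy numbers, not the fitted B3LYP-D2 set:

```python
import numpy as np

def d2_dispersion(coords, c6, r_vdw, s6=1.05, d=20.0):
    """Grimme-D2-style pairwise dispersion energy (atomic units):
    E = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij),
    with C6_ij the geometric mean of atomic C6 coefficients and a
    Fermi-type damping function that switches the term off at short range.
    coords: (N,3) positions; c6, r_vdw: per-atom coefficients and radii."""
    n = len(coords)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])
            rr = r_vdw[i] + r_vdw[j]            # sum of van der Waals radii
            fdamp = 1.0 / (1.0 + np.exp(-d * (rij / rr - 1.0)))
            energy -= s6 * c6ij / rij**6 * fdamp
    return energy

# Two atoms 5 bohr apart, toy C6 and radii
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
e_disp = d2_dispersion(coords, c6=[10.0, 10.0], r_vdw=[1.5, 1.5])
```

Because every pair contributes a strictly negative term, adding the correction can only lower energies; barrier heights change when reactant and transition-state geometries gain different amounts of dispersion stabilization.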

  7. Calculation of utmost parameters of active vision system based on nonscanning thermal imager

    NASA Astrophysics Data System (ADS)

    Sviridov, A. N.

    2003-09-01

    An active vision system (AVS) based on a non scanning thermal imager (TI) and CO2 - quantum amplifier of the image is offered. AVS mathematical model within which investigation of utmost signal / noise values and other system parameters depending on the distances to the scene - the area of observation (AO), an illumination impulse energy (W), an amplification factor (K) of a quantum amplifier, objective lens characteristics, spectral band width of a cooled filter of the thermal imager as well as object and scene characteristics is developed. Calculations were carried out for the following possible operating modes of a discussed vision system: - an active mode of a thermal imager with a cooled wideband filter; an active mode of a thermal imager with a cooled narrowband filter; - passive mode (W = 0, K = 1) of a thermal imager with a cooled wideband filter. As a result of carried out researches the opportunity and expediency of designing AVS, having a nonscanning thermal imager, impulse CO2 - quantum image amplifier and impulse CO2 - illumination laser are shown. It is shown that AVS have advantages over thermal imaging at observation of objects, temperature and reflection factors of which differ slightly from similar parameters of the scene. AVS depending on the W-K product can detect at a distance of up to 3000..5000m practically any local changes (you are interested in ) of a reflection factor. AVS not replacing the thermal imaging allow to receive additional information about observation objects. The images obtained with the help of AVS are more natural and more easy identified than thermal images received at the expense of the object own radiation. For quantitative determination of utmost values of AVS sensitivity it is offered to introduce a new parameter - NERD - 'radiation nose equivalent reflection factors difference'. IR active vision systems of vision, as well as a human vision and vision systems in the near IR - range on the basis image intensifiers

  8. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules. PMID:24622472
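    The low-rank pivoted CD step can be illustrated on a small matrix. The sketch below is a generic pivoted Cholesky algorithm (not the ProteinDF implementation); it adds one Cholesky vector per iteration and stops once the largest residual diagonal element falls below a threshold, so a rank-deficient matrix yields fewer vectors than its dimension:

```python
import numpy as np

def pivoted_cholesky(a, tol=1e-10):
    """Low-rank pivoted Cholesky decomposition of a symmetric positive
    semidefinite matrix `a`: returns L with a ≈ L @ L.T. Each column of L
    is one 'Cholesky vector'; iteration stops when the largest diagonal
    element of the residual a - L @ L.T drops below `tol`."""
    d = np.diag(a).astype(float).copy()   # residual diagonal
    cols = []
    while d.max() > tol:
        i = int(np.argmax(d))             # pivot: largest residual diagonal
        resid_col = a[:, i].astype(float)
        for l in cols:                    # subtract contributions of
            resid_col = resid_col - l * l[i]  # previously built vectors
        l_new = resid_col / np.sqrt(d[i])
        d = d - l_new**2
        cols.append(l_new)
    return np.column_stack(cols) if cols else np.zeros((a.shape[0], 0))

# Rank-deficient PSD example: a 4x4 matrix of rank 2.
b = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.1], [0.3, 0.9]])
a = b @ b.T
L = pivoted_cholesky(a)
# L has only rank(a) = 2 columns yet reconstructs a to machine precision.
```

In the paper's setting `a` would be a two-electron integral matrix; storing only the (few) Cholesky vectors, distributed across nodes, is what lets the Coulomb and exchange terms be formed by vector multiplications instead of repeated integral evaluation.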

  10. [The calculation of the intraocular lens power based on raytracing methods: a systematic review].

    PubMed

    Steiner, Deborah; Hoffmann, Peter; Goldblum, David

    2013-04-01

    A problem in cataract surgery is the preoperative determination of the appropriate intraocular lens (IOL) power. Different calculation approaches have been developed for this purpose; raytracing methods are among the most exact but also mathematically more demanding. This article gives a systematic overview of the different raytracing calculations available and described in the literature and compares their results. Raytracing incorporates physical measurements and IOL manufacturing data but no approximations. The prediction error is close to zero, and an essential advantage is the applicability to different conditions without the need for modifications. Compared to the classical formulae, the raytracing methods are more precise overall, but because of the varied data and reporting situations they are as yet hard to compare directly. Raytracing calculations represent a good alternative to the third-generation formulae. They minimize refractive errors, are more widely applicable and provide better results overall, particularly in eyes with preexisting conditions. PMID:23629771

  11. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, FDTD calculations run slowly on conventional hardware. We therefore focus on general-purpose computing on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with that of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with a conventional CPU, even for a naive GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
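    The FDTD update that GPU implementations accelerate is a nearest-neighbor stencil: each grid point's new field value depends only on adjacent points, so every point maps naturally to one GPU thread. A minimal 1-D NumPy sketch (the paper itself treats the 3-D case in CUDA; grid size, step count, and source here are arbitrary illustrative choices) shows the leapfrog structure:

```python
import numpy as np

def fdtd_1d(steps=400, n=200, source_pos=50):
    """Minimal 1-D FDTD leapfrog update in free space, normalized units,
    Courant factor 0.5. E and H live on staggered (Yee) grid points and
    are advanced alternately from each other's spatial differences."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        # H update from the spatial derivative of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # E update from the spatial derivative of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # soft Gaussian source injected at one grid point
        ez[source_pos] += np.exp(-((t - 30) ** 2) / 100.0)
    return ez

ez = fdtd_1d()
```

In 3-D the same pattern repeats for all six field components; the thread-block-size dependence mentioned in the abstract comes from how these stencil updates are tiled onto the GPU's multiprocessors.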

  12. Calculation of surface enthalpy of solids from an ab initio electronegativity based model: case of ice.

    PubMed

    Douillard, J M; Henry, M

    2003-07-15

    A very simple route to calculation of the surface energy of solids is proposed because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained with ice, if one compares with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.
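    The final step described, comparing the lattice energy of the infinite crystal with that of semi-infinite layers, reduces to a simple energy balance per unit of exposed area. A sketch with hypothetical numbers (the factor 2 accounts for the two faces created when the crystal is cleaved):

```python
def surface_energy(e_slab, e_bulk_per_unit, n_units, area):
    """Surface energy per unit area from the difference between the total
    energy of a slab (semi-infinite layer model) and the energy of the
    same number of formula units in the infinite bulk crystal; divided by
    2*area because the slab exposes two surfaces."""
    return (e_slab - n_units * e_bulk_per_unit) / (2.0 * area)

# Hypothetical values: a 10-unit slab at -99.0 energy units versus
# -10.0 per unit in the bulk, over a 2.5 area-unit cross-section.
gamma = surface_energy(-99.0, -10.0, 10, 2.5)
```

The slab is always less stable than the equivalent bulk (fewer cohesive contacts), so the numerator is positive and gamma > 0; for ice this number is what the authors compare against the surface energy of liquid water.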

  13. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... economy or carbon-related exhaust emission value for the base level. (7) For alcohol dual fuel automobiles... basic engine (i.e., they are not included in the calculation of the original base level fuel economy...) If only one vehicle configuration within a base level has been tested, the fuel economy and...

  14. An improved fragment-based quantum mechanical method for calculation of electrostatic solvation energy of proteins.

    PubMed

    Jia, Xiangyu; Wang, Xianwei; Liu, Jinfeng; Zhang, John Z H; Mei, Ye; He, Xiao

    2013-12-01

    An efficient approach that combines the electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method with conductor-like polarizable continuum model (CPCM), termed EE-GMFCC-CPCM, is developed for ab initio calculation of the electrostatic solvation energy of proteins. Compared with the previous MFCC-CPCM study [Y. Mei, C. G. Ji, and J. Z. H. Zhang, J. Chem. Phys. 125, 094906 (2006)], quantum mechanical (QM) calculation is applied to deal with short-range non-neighboring interactions replacing the classical treatment. Numerical studies are carried out for proteins up to 3837 atoms at the HF/6-31G* level. As compared to standard full system CPCM calculations, EE-GMFCC-CPCM shows clear improvement over the MFCC-CPCM method for both the total electrostatic solvation energy and its components (the polarized solute-solvent reaction field energy and wavefunction distortion energy of the solute). For large proteins with 1000-4000 atoms, where the standard full system ab initio CPCM calculations are not affordable, the EE-GMFCC-CPCM gives larger relative wavefunction distortion energies and weaker relative electrostatic solvation energies for proteins, as compared to the corresponding energies calculated by the Divide-and-Conquer Poisson-Boltzmann (D&C-PB) method. Notwithstanding, a high correlation between EE-GMFCC-CPCM and D&C-PB is observed. This study demonstrates that the linear-scaling EE-GMFCC-CPCM approach is an accurate and also efficient method for the calculation of electrostatic solvation energy of proteins.

  15. Spectral linelist of HD16O molecule based on VTT calculations for atmospheric application

    NASA Astrophysics Data System (ADS)

    Voronin, B. A.

    2014-11-01

    Three versions of a line list of dipole transitions for the isotopic modification HD16O of the water molecule are presented. The line lists were created on the basis of the VTT calculations (Voronin, Tennyson, Tolchenov et al., MNRAS, 2010) by adding air- and self-broadening coefficients and temperature exponents for the HD16O-air case. Three cut-off values for the line intensities were used: 1e-30, 1e-32 and 1e-35 cm/molecule. The calculated line lists are available at ftp://ftp.iao.ru/pub/VTT/VTT-296/.
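    Applying an intensity cut-off to a line list is a simple filter over (wavenumber, intensity) records; a sketch with made-up lines showing how the three thresholds select different subsets:

```python
def apply_cutoff(lines, cutoff):
    """Keep only transitions whose intensity (cm/molecule) is at or above
    the cut-off; `lines` is a list of (wavenumber, intensity) pairs."""
    return [(nu, s) for nu, s in lines if s >= cutoff]

# Invented example lines, not values from the VTT list.
lines = [(3500.1, 2e-29), (3600.4, 5e-33), (3700.9, 8e-36)]
strong = apply_cutoff(lines, 1e-30)   # survives only the 1e-30 cut
medium = apply_cutoff(lines, 1e-35)   # two lines pass the loosest cut
```

A looser cut-off (1e-35) keeps many weak lines, which matters for long atmospheric paths, at the cost of a much larger file than the 1e-30 version.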

  16. Interpretation of the resonance Raman spectra of linear tetrapyrroles based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Kneip, Christa; Hildebrandt, Peter; Németh, Károly; Mark, Franz; Schaffner, Kurt

    1999-10-01

    Raman spectra of linear methine-bridged tetrapyrroles in different conformational and protonation states were calculated on the basis of scaled force fields obtained by density functional theory. Results are reported for protonated phycocyanobilin in the extended ZZZasa configuration, as it is found in C-phycocyanin of cyanobacteria. The calculated spectra are in good agreement with experimental spectra of the protein-bound chromophore in the α-subunit of C-phycocyanin and allow a plausible and consistent assignment of most of the observed resonance Raman bands in the region between 1000 and 1700 cm^-1.

  17. Optimum design calculations for detectors based on ZnSe(Te,O) scintillators

    NASA Astrophysics Data System (ADS)

    Katrunov, K.; Ryzhikov, V.; Gavrilyuk, V.; Naydenov, S.; Lysetska, O.; Litichevskyi, V.

    2013-06-01

    Light collection in scintillators ZnSe(X), where X is an isovalent dopant, was studied using Monte Carlo calculations. Optimum design was determined for detectors of "scintillator—Si-photodiode" type, which can involve either one scintillation element or scintillation layers of large area made of small-crystalline grains. The calculations were carried out both for determination of the optimum scintillator shape and for design optimization of light guides, on the surface of which the layer of small-crystalline grains is formed.

  18. An accurate potential energy curve for helium based on ab initio calculations

    NASA Astrophysics Data System (ADS)

    Janzen, A. R.; Aziz, R. A.

    1997-07-01

    Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.

  19. Evaluation of monitor unit calculation based on measurement and calculation with a simplified Monte Carlo method for passive beam delivery system in proton beam therapy.

    PubMed

    Hotta, Kenji; Kohno, Ryosuke; Nagafuchi, Kohsuke; Yamaguchi, Hidenori; Tansho, Ryohei; Takada, Yoshihisa; Akimoto, Tetsuo

    2015-01-01

    Calibrating the dose per monitor unit (DMU) for individual patients is important for delivering the prescribed dose in radiation therapy. We have developed a DMU calculation method combining measurement data with a simplified Monte Carlo calculation for the double scattering system in proton beam therapy at the National Cancer Center Hospital East in Japan. The method determines the clinical DMU as the product of three factors: a beam spreading device factor F_BSD, a patient-specific device factor F_PSD, and a field-size correction factor F_FS(A). We compared the calculated and measured DMU for 75 dose fields in clinical cases. The calculated DMUs agreed with measurements within ±1.5% for all 25 fields in prostate cancer cases, and within ±3% for 94% of 50 fields in head and neck (H&N) and lung cancer cases, including irregularly shaped fields and small fields. Although F_BSD is, as expected, the dominant factor in the DMU calculation, we found that the patient-specific device factor and the field-size correction also contribute significantly. This DMU calculation method should be able to substitute for conventional DMU measurement in the majority of clinical cases, with a calculation time reasonable for clinical use. PMID:26699303
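    The three-factor DMU model stated in the abstract is a plain product; a sketch with invented factor values (the factor names follow the abstract, the numbers are illustrative only):

```python
def clinical_dmu(f_bsd, f_psd, f_fs):
    """Dose per monitor unit as the product of the three factors described
    in the abstract: beam spreading device factor (F_BSD), patient-specific
    device factor (F_PSD), and field-size correction factor (F_FS(A))."""
    return f_bsd * f_psd * f_fs

# Hypothetical factor values for one field.
dmu = clinical_dmu(0.95, 0.98, 1.02)
```

Because the model is multiplicative, a relative error in any one factor propagates directly into the same relative error in the DMU, which is why the smaller patient-specific and field-size factors still matter at the ±1.5-3% agreement level reported.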

  20. Vibrational and structural study of onopordopicrin based on the FTIR spectrum and DFT calculations.

    PubMed

    Chain, Fernando E; Romano, Elida; Leyton, Patricio; Paipa, Carolina; Catalán, César A N; Fortuna, Mario; Brandán, Silvia Antonia

    2015-11-01

    In the present work, the structural and vibrational properties of the sesquiterpene lactone onopordopicrin (OP) were studied by using infrared spectroscopy and density functional theory (DFT) calculations together with the 6-31G(∗) basis set. The harmonic vibrational wavenumbers for the optimized geometry were calculated at the same level of theory. The complete assignment of the observed bands in the infrared spectrum was performed by combining the DFT calculations with Pulay's scaled quantum mechanical force field (SQMFF) methodology. The comparison between the theoretical and experimental infrared spectrum demonstrated good agreement. Then, the results were used to predict the Raman spectrum. Additionally, the structural properties of OP, such as atomic charges, bond orders, molecular electrostatic potentials, characteristics of electronic delocalization and topological properties of the electronic charge density were evaluated by natural bond orbital (NBO), atoms in molecules (AIM) and frontier orbitals studies. The calculated energy band gap and the chemical potential (μ), electronegativity (χ), global hardness (η), global softness (S) and global electrophilicity index (ω) descriptors predicted for OP low reactivity, higher stability and lower electrophilicity index as compared with the sesquiterpene lactone cnicin containing similar rings. PMID:26057092

  1. Structure of amphotericin B aggregates based on calculations of optical spectra

    SciTech Connect

    Hemenger, R.P.; Kaplan, T.; Gray, L.J.

    1983-01-01

    The degenerate ground state approximation was used to calculate the optical absorption and CD spectra for helical polymer models of amphotericin B aggregates in aqueous solution. Comparisons with experimental spectra indicate that a two-molecule/unit cell helical polymer model is a possible structure for aggregates of amphotericin B.

  2. Autoradiography-based, three-dimensional calculation of dose rate for murine, human-tumor xenografts.

    PubMed

    Koral, K F; Kwok, C S; Yang, F E; Brown, R S; Sisson, J C; Wahl, R L

    1993-11-01

    A Fast Fourier Transform method for calculating the three-dimensional dose rate distribution for murine, human-tumor xenografts is outlined. The required input includes evenly-spaced activity slices which span the tumor. Numerical values in these slices are determined by quantitative 125I autoradiography. For the absorbed dose-rate calculation, we assume the activity from both 131I- and 90Y-labeled radiopharmaceuticals would be distributed as is measured with the 125I label. Two example cases are presented: an ovarian-carcinoma xenograft with an IgG 2ak monoclonal antibody and a neuroblastoma xenograft with meta-iodobenzylguanidine (MIBG). Considering all the volume elements in a tumor, we show, by comparison of histograms and also relative standard deviations, that the measured 125I activity and the calculated 131I dose-rate distributions, are similarly non-uniform and that they are more non-uniform than the calculated 90Y dose-rate distribution. However, the maximum-to-minimum ratio, another measure of non-uniformity, decreases by roughly an order of magnitude from one distribution to the next in the order given above. PMID:8298569
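    The FFT-based dose-rate calculation described is a 3-D convolution of the activity map with a dose point kernel. A minimal NumPy sketch (with a toy kernel, not a measured 131I or 90Y kernel, and periodic boundaries for simplicity):

```python
import numpy as np

def dose_rate_fft(activity, kernel):
    """3-D dose-rate distribution as the circular convolution of an
    activity map with a dose point kernel, evaluated with FFTs.
    Both arrays share the same grid shape; the kernel is stored with its
    origin at index (0, 0, 0), wrap-around style."""
    return np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kernel)))

# Sanity check: uniform activity convolved with any kernel is uniform,
# scaled by the kernel's total (here 1.0 + 0.5 + 0.5 = 2.0).
activity = np.ones((8, 8, 8))
kernel = np.zeros((8, 8, 8))
kernel[0, 0, 0] = 1.0
kernel[1, 0, 0] = kernel[-1, 0, 0] = 0.5
dose = dose_rate_fft(activity, kernel)
```

The FFT route makes the cost O(N log N) in the number of voxels, which is what makes voxel-by-voxel dose-rate maps of whole xenografts practical; the non-uniformity statistics in the abstract are then computed over the resulting 3-D dose array.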

  3. A web-based calculator for estimating the profit potential of grain segregation by protein concentration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    By ignoring spatial variability in grain quality, conventional harvesting systems may increase the likelihood that growers will not capture price premiums for high quality grain found within fields. The Grain Segregation Profit Calculator was developed to demonstrate the profit potential of segregat...

  4. Protein NMR chemical shift calculations based on the automated fragmentation QM/MM approach.

    PubMed

    He, Xiao; Wang, Bing; Merz, Kenneth M

    2009-07-30

    An automated fragmentation quantum mechanics/molecular mechanics (AF-QM/MM) approach has been developed to routinely calculate ab initio protein NMR chemical shielding constants. The AF-QM/MM method is linear-scaling and trivially parallel. A general fragmentation scheme is employed to generate each residue-centric region which is treated by quantum mechanics, and the environmental electrostatic field is described with molecular mechanics. The AF-QM/MM method shows good agreement with standard self-consistent field (SCF) calculations of the NMR chemical shieldings for the mini-protein Trp cage. The root-mean-square errors (RMSEs) for 1H, 13C, and 15N NMR chemical shieldings are equal to or less than 0.09, 0.32, and 0.78 ppm, respectively, for all Hartree-Fock (HF) and density functional theory (DFT) calculations reported in this work. The environmental electrostatic potential is necessary to accurately reproduce the NMR chemical shieldings using the AF-QM/MM approach. The point-charge models provided by AMBER, AM1/CM2, PM3/CM1, and PM3/CM2 all effectively model the electrostatic field. The latter three point-charge models are generated via semiempirical linear-scaling SCF calculations of the entire protein system. The correlations between experimental 1H NMR chemical shifts and theoretical predictions are >0.95 for AF-QM/MM calculations using B3LYP with the 6-31G**, 6-311G**, and 6-311++G** basis sets. Our study, not unexpectedly, finds that conformational changes within a protein structure play an important role in the accurate prediction of experimental NMR chemical shifts from theory.

  5. Application of perturbation theory to lattice calculations based on method of cyclic characteristics

    NASA Astrophysics Data System (ADS)

    Assawaroongruengchot, Monchai

    computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that the scheme can be used for the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have selected this parameter for optimization studies. We consider optimization and adjoint sensitivity techniques for the adjustment of CVR at the beginning of the burnup cycle (BOC) and of k_eff at the end of the burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and k_eff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but with the objective of obtaining a negative checkerboard CVR at the beginning of the cycle (CBCVR-BOC). To approximate the sensitivity coefficients at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and the eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and k_eff-EOC adjustment of ACR lattices with Gadolinium in the central pin.
Finally we apply these techniques to the CVR

  6. A GIS-based procedure for automatically calculating soil loss from the Universal Soil Loss Equation: GISus-M

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The integration of methods for calculating soil loss caused by water erosion using a geoprocessing system is important to enable investigations of soil erosion over large areas. GIS-based procedures have been used in soil erosion studies; however, in most cases it is difficult to integrate the functi...
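
    The soil-loss model underlying a procedure like GISus-M is the Universal Soil Loss Equation, the simple product A = R * K * LS * C * P evaluated per raster cell. A minimal sketch follows; the factor values are illustrative, not taken from the study.

```python
# Sketch of the Universal Soil Loss Equation (USLE): A = R * K * LS * C * P.
# Factor values below are hypothetical examples for a single raster cell.

def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A from rainfall erosivity R, soil erodibility K,
    combined slope length-steepness factor LS, cover-management factor C,
    and support-practice factor P (units depend on the R/K convention)."""
    return R * K * LS * C * P

# Hypothetical factors for one cell:
A = usle_soil_loss(R=1200.0, K=0.03, LS=1.5, C=0.2, P=1.0)
```

    In a GIS setting the same product is applied cell-by-cell over co-registered raster layers of each factor.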

  7. Analysis of the Relationship between Estimation Skills Based on Calculation and Number Sense of Prospective Classroom Teachers

    ERIC Educational Resources Information Center

    Senol, Ali; Dündar, Sefa; Gündüz, Nazan

    2015-01-01

    The aims of this study are to examine the relationship between prospective classroom teachers' estimation skills based on calculation and their number sense, and to investigate whether their number sense and estimation skills change according to their class level and gender. The participants of the study are 125 prospective classroom teachers…

  8. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data...

  9. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data...

  10. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data...

  11. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    SciTech Connect

    Hünemohr, Nora Greilich, Steffen; Paganetti, Harald; Seco, Joao; Jäkel, Oliver

    2014-06-15

    Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic proton and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations from ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
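
    The core idea described above, a single linear fit of mass density against relative electron density over tabulated tissues, can be sketched as follows. The calibration points are made up for illustration; they are not the authors' tissue data or published fit coefficients.

```python
# Hedged sketch of the DECT mass-density step: one linear fit over the
# tissue range, rho_m = a * rho_e + b. Calibration points are hypothetical.
import numpy as np

# (relative electron density, mass density in g/cm^3) for fictitious tissues
rho_e = np.array([0.95, 1.00, 1.05, 1.10])
rho_m = np.array([0.96, 1.00, 1.05, 1.10])

a, b = np.polyfit(rho_e, rho_m, 1)  # slope and intercept of the single fit

def mass_density(rho_e_measured):
    """Predict mass density from a measured relative electron density."""
    return a * rho_e_measured + b
```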

  12. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    SciTech Connect

    Beres, D.A.; Hull, A.P.

    1991-12-01

    DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section. These selections are available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to provide the user easy access for reviewing data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer.
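
    The basic calculation such a program performs is a decay-weighted integral of deposition times a dose conversion factor. The sketch below shows that single-nuclide core; the nuclide data and conversion factor are placeholders, not values from DEPDOSE's libraries.

```python
# Minimal sketch of a committed-dose calculation from ground deposition:
# dose = integral of deposition * DCF * exp(-lambda * t) dt over [0, T].
# The DCF and half-life below are hypothetical placeholder values.
import math

def committed_dose(deposition, dcf, half_life_d, integration_d):
    """Dose from surface deposition (Bq/m^2) over an integration period
    (days), with radioactive decay; dcf is the dose rate per unit
    deposition, e.g. (mSv/d) per (Bq/m^2)."""
    lam = math.log(2) / half_life_d
    return deposition * dcf * (1.0 - math.exp(-lam * integration_d)) / lam

# Hypothetical example: 1000 Bq/m^2, 8-day half-life, 50-day integration.
dose = committed_dose(1000.0, 1e-6, 8.0, 50.0)
```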

  13. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    SciTech Connect

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
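
    The distal range metrics named above (R90, R50, and the R80-R20 falloff distance) can be extracted from a one-dimensional depth-dose curve by interpolating on the distal edge of the peak. The sketch below uses a synthetic peaked curve, not patient or TOPAS data.

```python
# Sketch of distal-range metric extraction (R90, R50, R80-R20) from a 1D
# depth-dose curve. The Gaussian-shaped test curve is synthetic.
import numpy as np

def distal_range(depth, dose, level):
    """Depth at which dose falls to `level` (fraction of the maximum)
    on the distal side of the peak, by linear interpolation."""
    dose = dose / dose.max()
    i_peak = int(np.argmax(dose))
    d, z = dose[i_peak:], depth[i_peak:]
    # dose decreases distally, so reverse to give np.interp ascending x
    return float(np.interp(level, d[::-1], z[::-1]))

depth = np.linspace(0.0, 20.0, 2001)            # cm, synthetic grid
dose = np.exp(-((depth - 10.0) / 2.0) ** 2)     # synthetic peaked curve

r90 = distal_range(depth, dose, 0.90)
r80_r20 = distal_range(depth, dose, 0.20) - distal_range(depth, dose, 0.80)
```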

  14. Effect of hydroxyl group position on adsorption behavior and corrosion inhibition of hydroxybenzaldehyde Schiff bases: Electrochemical and quantum calculations

    NASA Astrophysics Data System (ADS)

    Danaee, I.; Ghasemi, O.; Rashed, G. R.; Rashvand Avei, M.; Maddahy, M. H.

    2013-03-01

    The corrosion inhibition and adsorption of N,N'-bis(n-hydroxybenzaldehyde)-1,3-propandiimine (n-HBP) Schiff bases has been investigated on steel electrode in 1 M HCl by using electrochemical techniques. The experimental results suggest that the highest inhibition efficiency was obtained for 3-HBP. Polarization curves reveal that all studied inhibitors are mixed type. Density functional theory (DFT) at the B3LYP/6-31G(d,p) and B3LYP/3-21G basis set levels and ab initio calculations using HF/6-31G(d,p) and HF/3-21G methods were performed on three Schiff bases. By studying the effects of hydroxyl groups in ortho-, meta-, para- positions, the best one as inhibitor was found to be meta-position of OH in Schiff base (i.e., 3-HBP). The order of inhibition efficiency obtained was corresponded with the order of most of the calculated quantum chemical parameters. Quantitative structure activity relationship (QSAR) approach has been used and a correlation of the composite index of some of the quantum chemical parameters was performed to characterize the inhibition performance of the Schiff bases studied. The results showed that %IE of the Schiff bases was closely related to some of the quantum chemical parameters but with varying degrees/order. The calculated %IE of the Schiff base studied was found to be close to their experimental corrosion inhibition efficiencies.

  15. Navier-Stokes calculations on multi-element airfoils using a chimera-based solver

    NASA Technical Reports Server (NTRS)

    Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.

    1993-01-01

    A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids which allow great flexibility of grid arrangement and simplify grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two-element case. Solutions are obtained using the thin-layer form of the Reynolds-averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one-equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good agreement with experimental data, which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.

  16. Model-based calculations of off-axis ratio of conic beams for a dedicated 6 MV radiosurgery unit

    SciTech Connect

    Yang, J. N.; Ding, X.; Du, W.; Pino, R.

    2010-10-15

    Purpose: Because the small-radius photon beams shaped by cones in stereotactic radiosurgery (SRS) lack lateral electronic equilibrium, and because of a detector's finite cross section, direct experimental measurement of dosimetric data for these beams can be subject to large uncertainties. As the dose calculation accuracy of a treatment planning system largely depends on how well the dosimetric data are measured during the machine's commissioning, there is a critical need for an independent method to validate measured results. Therefore, the authors studied model-based calculation as an approach to validate measured off-axis ratios (OARs). Methods: The authors previously used a two-component analytical model to calculate central axis dose and associated dosimetric data (e.g., scatter factors and tissue-maximum ratio) in a water phantom and found excellent agreement between the calculated and the measured central axis doses for small 6 MV SRS conic beams. The model was based on that of Nizin and Mooij [''An approximation of central-axis absorbed dose in narrow photon beams,'' Med. Phys. 24, 1775-1780 (1997)] but was extended to account for apparent attenuation, spectral differences between broad and narrow beams, and the need for stricter scatter dose calculations for clinical beams. In this study, the authors applied Clarkson integration to this model to calculate OARs for conic beams. OARs were calculated for selected cones with radii from 0.2 to 1.0 cm. To allow comparisons, the authors also directly measured OARs using stereotactic diode (SFD), microchamber, and film dosimetry techniques. The calculated results were machine-specific and independent of direct measurement data for these beams. Results: For these conic beams, the calculated OARs were in excellent agreement with the data measured using an SFD. The discrepancies in radii and in the 80%-20% penumbra were each within 0.01 cm. Using SFD-measured OARs as the reference data, the authors found that the
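
    Clarkson integration, mentioned above, builds the scatter contribution at an off-axis point by summing a radius-dependent scatter function over angular sectors, each evaluated at the distance from the point to the field edge. The sketch below assumes a circular field and a made-up scatter function; it is not the authors' model.

```python
# Hedged sketch of Clarkson-style sector integration for a point displaced
# by `offset` from the centre of a circular field of radius `field_radius`.
# `scatter_of_r` is a placeholder scatter-to-point function (assumption).
import math

def clarkson_scatter(field_radius, offset, scatter_of_r, n_sectors=72):
    total = 0.0
    for i in range(n_sectors):
        theta = 2.0 * math.pi * (i + 0.5) / n_sectors
        # distance from the calculation point to the circular field edge
        # along direction theta (solves the ray/circle intersection)
        r_edge = (-offset * math.cos(theta)
                  + math.sqrt(field_radius**2 - (offset * math.sin(theta))**2))
        total += scatter_of_r(r_edge)
    return total / n_sectors

# Hypothetical saturating scatter function for illustration:
sp = clarkson_scatter(1.0, 0.4, lambda r: 1.0 - math.exp(-r))
```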

  17. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many modern-day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al. 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption of no change in the active set of constraints. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.

  18. Nonequilibrium molecular dynamics calculation of the thermal conductivity based on an improved relaxation scheme.

    PubMed

    Cao, Bing-Yang

    2008-08-21

    A nonequilibrium molecular dynamics (NEMD) method using stochastic energy injection and removal as uniform heat sources and sinks is developed to calculate the thermal conductivity. The stochastic energy is generated by a Maxwell function generator and is imposed on only a few individual molecules each time step. The relaxation of the thermal perturbation is improved compared to other NEMD algorithms because there are no localized heat-source and heat-sink slab regions in the system. The heat sources are uniformly distributed in the right half of the system while the sinks are in the left half, which leads to a periodically quadratic temperature distribution that is almost sinusoidal. The thermal conductivity is then easily calculated from the mean temperatures of the right and left half systems rather than by fitting the temperature profiles. This improved relaxation NEMD scheme is used to calculate the thermal conductivities of liquid and solid argon. It shows that the present algorithm gives accurate results with fast convergence and small size effects. Other stochastic energy perturbations, e.g., thermal noise, can be used to replace the Maxwell-type perturbation used in this paper to make the improved relaxation scheme more effective. PMID:19044759
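
    The final step described above, obtaining k from the two half-system mean temperatures, has a simple closed form for the idealized continuum limit: with uniform volumetric heating +q in one half of a periodic box of length L and -q in the other, the steady piecewise-quadratic profile gives mean(T_hot) - mean(T_cold) = q L^2 / (24 k). This expression is derived here for the idealized profile and is not quoted from the paper.

```python
# Hedged sketch: thermal conductivity from the hot/cold half-system mean
# temperature difference, using the idealized piecewise-quadratic-profile
# result dT_means = q * L**2 / (24 * k) (derived for the continuum limit,
# an assumption, not the paper's exact working formula).

def conductivity_from_half_means(q_vol, box_length, dT_means):
    """k (W/m/K) from volumetric source density q_vol (W/m^3), periodic
    box length (m), and hot-minus-cold half mean temperature difference (K)."""
    return q_vol * box_length**2 / (24.0 * dT_means)

# Hypothetical numbers for illustration:
k_est = conductivity_from_half_means(1.0e8, 1.0e-8, 2.08e-9)
```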

  19. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectra of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are therefore important in climate change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD due to particulates with diameters smaller than 1 μm relative to total particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant truncation radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method can also effectively correct the AMF underestimation in winter. It is suggested that variations of the coarse-mode Angstrom exponent have significant impacts on AMF inversions.

  20. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly during the year: ET ref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is supported also by other studies. In our case, advection of sensible heat proved to have an impact, but it could not exclusively explain the differences. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and to improve knowledge of the dynamics of the influencing factors causing deviations.
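
    The ASCE-EWRI standardized equation for daily time steps has the closed form sketched below (short reference surface, with the published constants Cn = 900 and Cd = 0.34). The meteorological inputs in the example are illustrative values, not data from the Austrian site.

```python
# ASCE-EWRI standardized reference evapotranspiration, daily time step,
# short reference crop (Cn = 900, Cd = 0.34). Example inputs are made up.

def asce_et_ref(delta, Rn, G, gamma, T, u2, es, ea, Cn=900.0, Cd=0.34):
    """Daily reference ET (mm/day).
    delta: slope of the saturation vapour pressure curve (kPa/degC)
    Rn, G: net radiation and soil heat flux (MJ/m^2/day)
    gamma: psychrometric constant (kPa/degC)
    T: mean daily air temperature (degC); u2: wind speed at 2 m (m/s)
    es, ea: saturation and actual vapour pressure (kPa)."""
    num = 0.408 * delta * (Rn - G) + gamma * (Cn / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + Cd * u2)
    return num / den

# Illustrative mid-latitude summer day:
et0 = asce_et_ref(delta=0.12, Rn=15.0, G=0.0, gamma=0.066,
                  T=20.0, u2=2.0, es=2.34, ea=1.40)
```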

  1. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general-purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e., benchmarks without normalization, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined, and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code, and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by first estimating the sensitivity coefficients of the various input quantities. Second, standard uncertainties known from the experiment, e.g. uncertainties for geometric dimensions, are assigned to each quantity. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from the literature. 
The significant uncertainty contributions are identified as
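
    The GUM-style combination described in this abstract (each sensitivity coefficient multiplied by its standard uncertainty, summed in quadrature for uncorrelated inputs) can be written generically as follows; the example budget is illustrative, not the study's actual one.

```python
# Sketch of a GUM combined standard uncertainty for uncorrelated inputs:
# u_c = sqrt( sum_i (c_i * u_i)^2 ), with c_i the sensitivity coefficient
# and u_i the standard uncertainty of input i. Entries below are made up.
import math

def combined_uncertainty(contributions):
    """contributions: iterable of (sensitivity_coefficient, std_uncertainty)
    pairs; returns the combined standard uncertainty."""
    return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

u_c = combined_uncertainty([
    (1.0, 0.002),   # e.g. a geometric dimension (hypothetical)
    (0.5, 0.004),   # e.g. a source parameter (hypothetical)
    (2.0, 0.001),   # e.g. a cross-section scaling (hypothetical)
])
```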

  3. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    NASA Astrophysics Data System (ADS)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane at a distance of 1.2 Å. The results were compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI, and CTED, and were generally found to be in agreement with them. It was therefore proposed that the calculation of the average g-factor, Δg, can be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetic-based aromaticity index.

  4. Angular-divergence calculation for Experimental Advanced Superconducting Tokamak neutral beam injection ion source based on spectroscopic measurements

    SciTech Connect

    Chi, Yuan; Hu, Chundong; Zhuang, Ge

    2014-02-15

    A calorimetric method has been primarily applied over several experimental campaigns to determine the angular divergence of the high-current ion source for the neutral beam injection system on the Experimental Advanced Superconducting Tokamak (EAST). A Doppler shift spectroscopy system has been developed to provide a secondary measurement of the angular divergence, improving the divergence measurement accuracy and allowing real-time, non-perturbing measurement. A modified calculation model based on the W7AS neutral beam injectors is adopted to accommodate the slot-type accelerating grids used in the EAST ion source. Preliminary spectroscopic experimental results are presented that are comparable to the calorimetrically determined and theoretically calculated values.

  5. Ground- and excited-state properties of DNA base molecules from plane-wave calculations using ultrasoft pseudopotentials.

    PubMed

    Preuss, M; Schmidt, W G; Seino, K; Furthmüller, J; Bechstedt, F

    2004-01-15

    We present equilibrium geometries, vibrational modes, dipole moments, ionization energies, electron affinities, and optical absorption spectra of the DNA base molecules adenine, thymine, guanine, and cytosine calculated from first principles. The comparison of our results with experimental data and with results obtained using quantum chemistry methods shows that in specific cases gradient-corrected density-functional theory (DFT-GGA) calculations using ultrasoft pseudopotentials and a plane-wave basis may be a numerically efficient and accurate alternative to methods employing localized orbitals for the expansion of the electron wave functions.

  6. Formation and dissociation of protonated cytosine—cytosine base pairs in i-motifs by ab initio quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hu; Li, Ming; Wang, Yan-Ting; Ouyang, Zhong-Can

    2014-02-01

    Formation and dissociation mechanisms of C—C+ base pairs in acidic and alkaline environments are investigated, employing ab initio quantum chemical calculations. Our calculations suggest that, in an acidic environment, a cytosine monomer is first protonated and then dimerized with an unprotonated cytosine monomer to form a C—C+ base pair; in an alkaline environment, a protonated cytosine dimer is first deprotonated and then dissociated into two cytosine monomers. In addition, the force for detaching a C—C+ base pair was found to be inversely proportional to the distance between the two cytosine monomers. These results provide a microscopic mechanism that qualitatively explains the experimentally observed reversible formation and dissociation of i-motifs.

  7. Signal processing method based on group delay calculation for distributed Bragg wavelength shift in optical frequency domain reflectometry.

    PubMed

    Wada, Daichi; Igawa, Hirotaka; Murayama, Hideaki; Kasai, Tokio

    2014-03-24

    A signal processing method based on group delay calculation is introduced for distributed measurements of long-length fiber Bragg gratings (FBGs) based on optical frequency domain reflectometry (OFDR). Bragg wavelength shifts in the interfered signals of OFDR are regarded as group delay. By calculating the group delay, the distribution of Bragg wavelength shifts is obtained with high computational efficiency. We introduce a weighted-averaging process for noise reduction. This method required only 3.5% of the signal processing time needed by the conventional equivalent processing based on the short-time Fourier transform. The method also showed high sensitivity to experimental signals in which non-uniform strain distributions existed in a long-length FBG.
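
    Group delay is the derivative of unwrapped spectral phase with respect to angular frequency, tau_g = -d(phi)/d(omega). The sketch below applies this to a synthetic linear-phase signal whose group delay is known by construction; it is not the authors' OFDR processing chain.

```python
# Minimal sketch of a group-delay computation: numerically differentiate
# the unwrapped phase with respect to angular frequency.
import numpy as np

def group_delay(phase, omega):
    """Group delay tau_g = -d(phase)/d(omega) for sampled phase (rad)
    on an angular-frequency grid omega (rad/s)."""
    return -np.gradient(np.unwrap(phase), omega)

omega = np.linspace(1.0e6, 1.1e6, 1000)   # rad/s, synthetic grid
tau_true = 2.0e-6                          # seconds, by construction
phase = -tau_true * omega                  # linear phase -> constant delay
tau = group_delay(phase, omega)
```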

  8. Influence of channel base current and varying return stroke speed on the calculated fields of three important return stroke models

    NASA Technical Reports Server (NTRS)

    Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard

    1991-01-01

    Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and Diendorfer-Uman (DU) models, with the channel base current assumed in Nucci et al. on the one hand and the channel base current assumed in Diendorfer and Uman on the other. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models and the magnetic hump after the initial peak at close range for the TCS model. Also, the DU model is theoretically extended to include an arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.

  9. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calculate the fuel economy for the base level. (7) For alcohol dual fuel automobiles and natural gas dual... fuel economy values for the model type. (5) For alcohol dual fuel automobiles and natural gas dual fuel... the original base level fuel economy values); and (iii) All subconfigurations within the new...

  10. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations.

    PubMed

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-01

    Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of adding S. In the bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO2-, CaO-, TiO2-, and Al2O3-containing materials function as condensed-phase solid sorbents to stabilize Pb in the temperature range of 800-1100 K. However, in the presence of sulfur or chlorine, or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si-, Ca- and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration.

  11. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    NASA Astrophysics Data System (ADS)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is dealt with very effectively by a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2 and yielded strongly environment-dependent behavior in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation on GW-BSE and the calculation of non-radiative exciton lifetimes, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by the Department of Energy under Contract No. DE-AC02-05CH11231 and by the National Science Foundation under Grant No. DMR10-1006184.

  12. Towards black-box calculations of tunneling splittings obtained from vibrational structure methods based on normal coordinates.

    PubMed

    Neff, Michael; Rauhut, Guntram

    2014-02-01

    Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, with further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions, were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes the remaining errors in truncated potential expansions that arise from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application.

  13. Sea breeze analysis based on LES simulations and the particle trace calculations in MM21 district

    NASA Astrophysics Data System (ADS)

    Sugiyama, Toru; Soga, Yuta; Goto, Koji; Sadohara, Satoru; Takahashi, Keiko

    2016-04-01

    We have performed thermal and wind environment LES simulations of the MM21 district in Yokohama. The simulation model used is MSSG (Multi-Scale Simulator for the Geo-environment). The spatial resolution is about 5 m in both the horizontal and vertical directions. We have also performed particle trace calculations in order to investigate the route of the sea breeze. We found that the cool wind is gradually heated as it flows into the district, after which it rises and is diffused. We will also discuss the contributions of the DHC (District Heating & Cooling) system in the area.

  14. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
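The headline numbers can be contrasted with a simple time-independent model. The following is an illustrative sketch, not the paper's interaction-based calculation: assuming a Poisson process, the reported 30-year probability fixes an equivalent constant annual rate, from which a 10-year probability follows; the paper's higher decade estimate reflects the transient rate increase after the 1999 Izmit stress transfer.

```python
import math

def poisson_rate_from_prob(p, years):
    """Equivalent constant annual rate lambda such that
    P(at least one event in `years`) = p for a Poisson process."""
    return -math.log(1.0 - p) / years

def prob_in_window(rate, years):
    """P(at least one event) over `years` for a Poisson process."""
    return 1.0 - math.exp(-rate * years)

# The paper reports a 62% 30-year probability of strong shaking.
lam = poisson_rate_from_prob(0.62, 30.0)
p10_poisson = prob_in_window(lam, 10.0)
# A time-independent model implies roughly 28% for the next decade;
# the paper's interaction-based estimate is higher (32%).
print(round(lam, 4), round(p10_poisson, 3))  # prints: 0.0323 0.276
```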

  15. Effects of sulfur on lead partitioning during sludge incineration based on experiments and thermodynamic calculations

    SciTech Connect

    Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-jie; Zhuo, Zhong-xu; Fu, Jie-wen

    2015-04-15

    Highlights: • A thermodynamic equilibrium calculation was carried out. • The effects of three types of sulfur compounds on Pb distribution were investigated. • Mechanisms by which the three sulfur compounds act on Pb partitioning are proposed. • Pb partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge, with and without added sulfur compounds, were combusted at 850 °C, and the partitioning of Pb between the solid phase (bottom ash) and the gas phase (fly ash and flue gas) was quantified. The results indicate that three sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. Na2SO4 and Na2S promoted Pb volatilization more strongly than elemental S. In the bottom ash, metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium predictions also suggested that materials containing SiO2, CaO, TiO2, and Al2O3 function as condensed-phase sorbents that stabilize Pb in the temperature range of 800-1100 K. However, in the presence of sulfur, chlorine, or both, these sorbents were inactive. The effect of sulfur on Pb partitioning during sludge incineration depended mainly on the gas-phase reactions, the surface reactions, the volatilization of products, and the concentrations of Si-, Ca- and Al-containing compounds in the sludge.

  16. Downwind hazard calculations for space shuttle launches at Kennedy Space Center and Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Susko, M.; Hill, C. K.; Kaufman, J. W.

    1974-01-01

    Quantitative estimates are presented of the pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) into the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the typical meteorological regimes studied was the 10-min limit of 4 ppm exceeded.

  17. A Microsoft Excel® 2010 based tool for calculating interobserver agreement.

    PubMed

    Reed, Derek D; Azulay, Richard L

    2011-01-01

    This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
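The agreement algorithms listed above are simple enough to express outside a spreadsheet. A minimal sketch in Python (not the authors' Excel tool; the function names are ours) of two of the listed metrics:

```python
def interval_by_interval_ioa(obs1, obs2):
    """Interval-by-interval IOA: percentage of intervals in which both
    observers scored the behavior identically (occurrence or nonoccurrence)."""
    if len(obs1) != len(obs2) or not obs1:
        raise ValueError("records must be non-empty and of equal length")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

def scored_interval_ioa(obs1, obs2):
    """Scored-interval IOA: agreement is computed only over intervals in
    which at least one observer scored an occurrence."""
    scored = [(a, b) for a, b in zip(obs1, obs2) if a or b]
    if not scored:
        return 100.0
    return 100.0 * sum(a == b for a, b in scored) / len(scored)

# 1 = behavior scored in the interval, 0 = not scored
o1 = [1, 0, 1, 1, 0, 0, 1, 0]
o2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(interval_by_interval_ioa(o1, o2))  # 75.0
print(scored_interval_ioa(o1, o2))       # 60.0
```

The scored-interval variant is stricter for low-rate behavior because the many jointly unscored intervals no longer inflate agreement.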

  18. Development of an ab-initio calculation method for 2D layered materials-based optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Kim, Han Seul; Kim, Yong-Hoon

    We report on the development of a novel first-principles method for the calculation of non-equilibrium nanoscale device operation. Based on a region-dependent Δ self-consistent field method that goes beyond standard density functional theory (DFT), we introduce a scheme to describe non-equilibrium situations such as external bias and simultaneous optical excitations. In particular, we discuss the limitations of conventional methods and the advantages of our scheme in describing the operation of 2D layered materials-based devices. We then investigate the atomistic mechanisms of optoelectronic effects in 2D layered materials-based devices and suggest the optimal material and architecture for such devices.

  19. Mind the gap between both hands: evidence for internal finger-based number representations in children's mental calculation.

    PubMed

    Domahs, Frank; Krinzinger, Helga; Willmes, Klaus

    2008-04-01

    At a certain stage of development, virtually all children use some kind of external finger-based number representation. However, little is known about how internal traces of this early external representation may still influence calculation even when finger calculation ceases to be an efficient tool in mental calculation. In the present study, we provide evidence for a disproportionate number of split-five errors (i.e., errors with a difference of +/-5 from the correct result) in mental addition and subtraction (e.g., 18 - 7 = 6). We will argue that such errors may have different origins. For complex problems, and initially also for simple problems, they are due to failure to keep track of 'full hands' in counting or calculation procedures. However, for simple addition problems, split-five errors may later also be caused by mistakes in directly retrieving the result from declarative memory. In general, the present results are interpreted in terms of a transient use of mental finger patterns - in particular the whole-hand pattern - in children's mental calculation. PMID:18387566
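The split-five criterion itself is simple arithmetic. A hypothetical sketch (our own illustration, not the authors' analysis code) for flagging such errors in response data:

```python
def is_split_five_error(correct, response):
    """An error whose absolute deviation from the correct result is exactly 5,
    e.g. 18 - 7 -> 6 instead of 11 (one 'full hand' lost or gained)."""
    return response != correct and abs(response - correct) == 5

# (correct answer, child's response) pairs
trials = [(11, 6), (11, 10), (13, 18), (9, 9)]
flags = [is_split_five_error(c, r) for c, r in trials]
print(flags)  # [True, False, True, False]
```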

  20. Calculation of the magnetic gradient tensor from total magnetic anomaly field based on regularized method in frequency domain

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Mi, Songlin; Fan, Hongbo; Li, Zhining

    2016-11-01

    To obtain accurate magnetic gradient tensor data, a fast and robust calculation method based on regularization in the frequency domain is proposed. Using potential field theory, the frequency-domain transform formulas are derived in order to calculate the magnetic gradient tensor from pre-existing total magnetic anomaly data. By analyzing the filter characteristics of the vertical vector transform operator (VVTO) and the gradient tensor transform operator (GTTO), we show that the conventional transform process is unstable because it amplifies the high-frequency part of the data, where measurement noise is concentrated. Because this instability leads to a low signal-to-noise ratio (SNR) in the calculated result, a regularization method is introduced. By selecting optimum regularization parameters for the different transform stages using the C-norm approach, the high-frequency noise is suppressed and the SNR is improved effectively. Numerical analysis demonstrates that the values and characteristics of the data calculated by the proposed method compare favorably with reference magnetic gradient tensor data. In addition, magnetic gradient tensor components calculated from a real aeromagnetic survey provided better resolution of the magnetic sources than the original profile.
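The instability and its regularized fix can be illustrated in one dimension. This is a sketch under simplifying assumptions: the paper's 2D tensor operators and C-norm parameter selection are replaced by a 1D spectral derivative with a fixed Tikhonov-style damping factor.

```python
import numpy as np

def derivative_fft(signal, dx, alpha=0.0):
    """Frequency-domain derivative of a potential-field profile.
    The ideal operator i*k amplifies high-frequency noise; a Tikhonov-style
    factor 1/(1 + alpha*k^2) damps it at the cost of a small bias."""
    n = len(signal)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    op = 1j * k / (1.0 + alpha * k**2)      # regularized transform operator
    return np.real(np.fft.ifft(op * np.fft.fft(signal)))

# Smooth anomaly plus measurement noise
x = np.linspace(-10, 10, 512)
dx = x[1] - x[0]
true_deriv = -2.0 * x * np.exp(-x**2)
rng = np.random.default_rng(0)
noisy = np.exp(-x**2) + 0.01 * rng.standard_normal(x.size)

err_raw = np.abs(derivative_fft(noisy, dx) - true_deriv).max()
err_reg = np.abs(derivative_fft(noisy, dx, alpha=0.05) - true_deriv).max()
print(err_reg < err_raw)  # regularization suppresses the amplified noise
```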

  1. Time reversed test particle calculations at Titan, based on CAPS-IMS measurements

    NASA Astrophysics Data System (ADS)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly; Young, David T.

    2013-04-01

    We used the theoretical approach of Kobel and Flückiger (1994) to construct a magnetic environment model in the vicinity of Titan, with the exception that the bow shock (which is not present at Titan) is placed at infinity. The model has 4 free parameters to calibrate the shape and orientation of the field. We investigate the CAPS-IMS Singles data to estimate the location of origin of the detected cold ions at Titan, and we also use the measurements of the onboard magnetometer to set the parameters of the model magnetic field. A 4th-order Runge-Kutta method is applied to calculate the test particle trajectories in a time-reversed scenario in the curved magnetic environment. Several different ion species can be tracked by the model along their possible trajectories; as a first approach we considered three particle groups (1, 2 and 16 amu ions). In this initial study we show the results for some thoroughly discussed flybys such as TA, TB and T5, but we consider more recent tailside encounters as well. Reference: Kobel, E. and E.O. Flückiger, A model of the steady state magnetic field in the magnetosheath, JGR 99, Issue A12, 23617, 1994
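A 4th-order Runge-Kutta test-particle tracer of the kind described can be sketched as follows. This is an illustration, not the authors' code: the Kobel-Flückiger draped field is replaced by a uniform field supplied through `B_func`, and a negative time step follows the detected particle backward toward its region of origin.

```python
import numpy as np

Q_OVER_M = 9.58e7   # proton charge-to-mass ratio [C/kg] (1 amu ion)

def lorentz_accel(v, B):
    """a = (q/m) v x B; the electric field is neglected in this sketch."""
    return Q_OVER_M * np.cross(v, B)

def rk4_trace(r0, v0, B_func, dt, n_steps):
    """Classic RK4 integration of the equations of motion; dt < 0 gives
    the time-reversed scenario used to find where a detected ion came from."""
    r, v = np.array(r0, float), np.array(v0, float)
    for _ in range(n_steps):
        k1v = lorentz_accel(v, B_func(r)); k1r = v
        k2v = lorentz_accel(v + 0.5 * dt * k1v, B_func(r + 0.5 * dt * k1r))
        k2r = v + 0.5 * dt * k1v
        k3v = lorentz_accel(v + 0.5 * dt * k2v, B_func(r + 0.5 * dt * k2r))
        k3r = v + 0.5 * dt * k2v
        k4v = lorentz_accel(v + dt * k3v, B_func(r + dt * k3r))
        k4r = v + dt * k3v
        v = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
        r = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
    return r, v

uniform_B = lambda r: np.array([0.0, 0.0, 5e-9])  # 5 nT, Titan-like magnitude

# Trace forward, then time-reverse: we should return to the start point.
r1, v1 = rk4_trace([0, 0, 0], [1e4, 0, 0], uniform_B, dt=1e-3, n_steps=2000)
r0, _ = rk4_trace(r1, v1, uniform_B, dt=-1e-3, n_steps=2000)
print(np.allclose(r0, [0, 0, 0], atol=1e-3))
```

In the study itself, `B_func` would evaluate the calibrated 4-parameter draped-field model, and the backward trace would start from the spacecraft position with the measured ion velocity.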

  2. Review of Advances in Cobb Angle Calculation and Image-Based Modelling Techniques for Spinal Deformities

    NASA Astrophysics Data System (ADS)

    Giannoglou, V.; Stylianidis, E.

    2016-06-01

    Scoliosis is a 3D deformity of the human spinal column caused by bending of the latter, leading to pain and to aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important studies that have been carried out in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.

  3. A robust force field based method for calculating conformational energies of charged drug-like molecules.

    PubMed

    Poehlsgaard, Jacob; Harpsøe, Kasper; Jørgensen, Flemming Steen; Olsen, Lars

    2012-02-27

    The binding affinity of a drug-like molecule depends, among other things, on the availability of the bioactive conformation. If the bioactive conformation has a significantly higher energy than the global minimum energy conformation, then the molecule is unlikely to bind to its target. Determination of the global minimum energy conformation and calculation of the conformational penalty of binding are prerequisites for prediction of reliable binding affinities. Here, we present a simple and computationally efficient procedure to estimate the global energy minimum for a wide variety of structurally diverse molecules, including polar and charged compounds. Identifying global energy minimum conformations of such compounds with force field methods is problematic due to the exaggeration of intramolecular electrostatic interactions. We demonstrate that the global energy minimum conformations of zwitterionic compounds generated by conformational analysis with modified electrostatics are good approximations of the conformational distributions predicted by experimental data and by molecular dynamics performed in explicit solvent. Finally, the method is used to calculate conformational penalties for zwitterionic GluA2 agonists and to filter false positives from a docking study. PMID:21985436

  4. Accurate and fast stray radiation calculation based on improved backward ray tracing.

    PubMed

    Yang, Liu; XiaoQiang, An; Qian, Wang

    2013-02-01

    An improved method of backward ray tracing is proposed based on geometrical optics and the theory of thermal radiation heat transfer. Its accuracy is substantially higher than that of traditional backward ray tracing because ray orders and weight factors are taken into account and the process is designed as sequential, recurring steps that trace and calculate stray light of different orders. Meanwhile, it requires far less computation than forward ray tracing because irrelevant surfaces and rays are excluded from the tracing. The effectiveness was verified in the stray radiation analysis of a cryogenic infrared (IR) imaging system, as the results coincided with the actual stray radiation irradiance distributions in the real images. The computational cost was compared with that of forward ray tracing in the narcissus calculation for another cryogenic IR imaging system; to produce a result of the same accuracy, the improved backward ray tracing requires at least two orders of magnitude less computation than forward ray tracing.

  5. Calculation of a nonlinear eigenvalue problem based on the MMP method for analyzing photonic crystals

    NASA Astrophysics Data System (ADS)

    Jalali, Tahmineh

    2014-12-01

    The multiple multipole (MMP) method is used to solve a nonlinear eigenvalue problem for the analysis of 2D metallic and dielectric photonic crystals. The simulation space is implemented in the first Brillouin zone to obtain the band structure and modal fields, and in a supercell to calculate waveguide modes. The Bloch theorem is used to impose fictitious periodic boundary conditions on the first Brillouin zone and the supercell. The method successfully computes the transmission and reflection coefficients of a photonic crystal waveguide without significant error from truncation of the computational space. To validate our code, the band structure of a cubic lattice was simulated and the results compared with those of the plane wave expansion method. The proposed method is shown to be applicable to photonic crystals of irregular shape, to frequency-dependent or frequency-independent materials such as dielectric or dispersive media, and to experimental data for different lattice structures. Numerical calculations show that the MMP method is stable, accurate and fast and can be used on personal computers.
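The plane wave expansion method used above for validation is itself compact enough to sketch. Below is a minimal 1D analogue (our own illustration; the paper validates against a cubic lattice) that diagonalizes the Fourier-space Helmholtz operator for a layered dielectric stack.

```python
import numpy as np

def pwe_bands_1d(eps_a, eps_b, f, k, n_g=15):
    """Lowest photonic bands of a 1D layered crystal (period a=1) via plane
    wave expansion: (k+G)(k+G') eta(G-G') h_G' = (w/c)^2 h_G, where eta is
    the Fourier series of 1/eps for a layer of width f centered in the cell."""
    m = np.arange(-n_g, n_g + 1)
    eta0 = f / eps_a + (1 - f) / eps_b
    deta = 1.0 / eps_a - 1.0 / eps_b
    def eta(d):            # Fourier coefficient eta_d of 1/eps(x)
        return eta0 if d == 0 else deta * f * np.sinc(d * f)
    G = 2.0 * np.pi * m
    M = np.array([[(k + gi) * (k + gj) * eta(i - j)
                   for j, gj in enumerate(G)] for i, gi in enumerate(G)])
    w2 = np.linalg.eigvalsh(M)          # eigenvalues (omega/c)^2, ascending
    return np.sqrt(np.abs(w2))          # omega*a/c

# Band gap at the Brillouin-zone edge (k = pi/a) of a high-contrast stack
bands = pwe_bands_1d(eps_a=13.0, eps_b=1.0, f=0.5, k=np.pi)
gap = bands[1] - bands[0]
print(gap > 0.1)  # a clear first band gap opens at the zone edge
```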

  6. Density-based partitioning methods for ground-state molecular calculations.

    PubMed

    Nafziger, Jonathan; Wasserman, Adam

    2014-09-11

    With the growing complexity of systems that can be treated with modern electronic-structure methods, it is critical to develop accurate and efficient strategies to partition the systems into smaller, more tractable fragments. We review some of the various recent formalisms that have been proposed to achieve this goal using fragment (ground-state) electron densities as the main variables, with an emphasis on partition density-functional theory (PDFT), which the authors have been developing. To expose the subtle but important differences between alternative approaches and to highlight the challenges involved with density partitioning, we focus on the simplest possible systems where the various methods can be transparently compared. We provide benchmark PDFT calculations on homonuclear diatomic molecules and analyze the associated partition potentials. We derive a new exact condition determining the strength of the singularities of the partition potentials at the nuclei, establish the connection between charge-transfer and electronegativity equalization between fragments, test different ways of dealing with fractional fragment charges and spins, and finally outline a general strategy for overcoming delocalization and static-correlation errors in density-functional calculations.

  7. Practical calculation of the beam scintillation index based on the rigorous asymptotic propagation theory

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail; Baker, Gary J.

    2011-06-01

    The asymptotic theory of finite beam scintillations (Charnotskii, WRM, 1994; JOSA A, 2010) provides an exhaustive description of the dependence of the beam scintillation index on the propagation conditions, beam size and focusing. However, the complexity of the asymptotic configuration makes it difficult to apply these results to practical calculations of the scintillation index (SI). We propose an estimation technique and demonstrate some examples of the calculation of the scintillation index dependence on the propagation path length, initial beam size, wavelength and turbulence strength for beam geometries and propagation scenarios that are typical for applications. We suggest simple analytic bridging approximations that connect the specific asymptotes with accuracy sufficient for engineering estimates. The proposed technique covers propagation of wide, narrow, collimated and focused beams under weak and strong scintillation conditions. Direct numerical simulation of beam-wave propagation through turbulence usefully complements the asymptotic theory, being most efficient when the difference between the governing scales is not very large. We performed numerical simulations of beam-wave propagation through turbulence for conditions that partially overlap with the major parameter-space domains of the asymptotic theory. The results of the numerical simulations are used to confirm the asymptotic theory and to estimate the accuracy of the bridging approximations.
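For orientation, the weak-fluctuation limit of the scintillation index is given by the standard plane-wave Rytov variance; the asymptotic theory discussed above refines this for finite beams and strong fluctuations. A sketch with illustrative parameter values:

```python
import math

def rytov_variance(cn2, wavelength, path_length):
    """Plane-wave Rytov variance sigma_R^2 = 1.23 Cn^2 k^(7/6) L^(11/6),
    the weak-fluctuation scintillation index for horizontal propagation
    through homogeneous turbulence."""
    k = 2.0 * math.pi / wavelength   # optical wavenumber
    return 1.23 * cn2 * k**(7.0 / 6.0) * path_length**(11.0 / 6.0)

# Moderate turbulence, 1.55 um beam over a 2 km horizontal path
si_weak = rytov_variance(cn2=1e-15, wavelength=1.55e-6, path_length=2000.0)
print(si_weak < 1.0)  # weak-scintillation regime for these parameters
```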

  8. Surface energy budget and thermal inertia at Gale Crater: Calculations from ground-based measurements

    PubMed Central

    Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M

    2014-01-01

    The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10^4 m^2 to ∼10^7 m^2. Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10^2 m^2. We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m^-2 K^-1 s^-1/2 (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars. PMID:26213666
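Thermal inertia combines three soil properties, I = sqrt(k·ρ·c). A sketch with illustrative parameter values (our own assumptions, not mission-derived numbers), chosen to bracket the reported range:

```python
import math

def thermal_inertia(conductivity, density, heat_capacity):
    """I = sqrt(k * rho * c), in J m^-2 K^-1 s^-1/2 (the units used in
    the article). k in W/(m K), rho in kg/m^3, c in J/(kg K)."""
    return math.sqrt(conductivity * density * heat_capacity)

# Illustrative soil parameters: a loose, sand-like regolith versus a
# more consolidated/cemented material.
loose = thermal_inertia(conductivity=0.04, density=1300.0, heat_capacity=650.0)
cemented = thermal_inertia(conductivity=0.12, density=1700.0, heat_capacity=800.0)
print(round(loose), round(cemented))  # prints: 184 404
```

The two values bracket the reported RCK/PL (295-306) and YKB (452) results: higher conductivity and density, as expected for more cemented terrain, push I upward.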

  9. Web-based Tsunami Early Warning System with instant Tsunami Propagation Calculations in the GPU Cloud

    NASA Astrophysics Data System (ADS)

    Hammitzsch, M.; Spazier, J.; Reißland, S.

    2014-12-01

    Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server infrastructure. Most notably, such systems include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, the use of concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) has to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype that opens up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites.
The current website is an early alpha version for demonstration purposes to give the

  10. Search for a reliable nucleic acid force field using neutron inelastic scattering and quantum mechanical calculations: Bases, nucleosides and nucleotides

    SciTech Connect

    Leulliot, Nicolas; Ghomi, Mahmoud; Jobic, Herve

    1999-06-15

    Neutron inelastic scattering (NIS), IR and Raman spectra of the RNA constituents (bases, nucleosides and nucleotides) have been analyzed. The complementary aspects of these different experimental techniques make them especially powerful for assigning the vibrational modes of the molecules of interest. Geometry optimization and harmonic force field calculations of these molecules have been undertaken by quantum mechanical calculations at several theoretical levels: Hartree-Fock (HF), Møller-Plesset second-order perturbation theory (MP2) and density functional theory (DFT). In all cases, it has been shown that HF calculations give insufficient results for accurately assigning the intramolecular vibrational modes. In the case of the nucleic bases, these discrepancies could be satisfactorily removed by introducing correlation effects at the MP2 level. However, applying the MP2 procedure to molecules as large as nucleosides and nucleotides is impractical, given the prohibitive computational time required. On the basis of our results, calculations at the DFT level using the B3LYP exchange-correlation functional appear to be a cost-effective alternative for obtaining a reliable force field for the whole set of nucleic acid constituents.

  11. Model of the catalytic mechanism of human aldose reductase based on quantum chemical calculations.

    SciTech Connect

    Cachau, R. C.; Howard, E. H.; Barth, P. B.; Mitschler, A. M.; Chevrier, B. C.; Lamour, V.; Joachimiak, A.; Sanishvili, R.; Van Zandt, M.; Sibley, E.; Moras, D.; Podjarny, A.; UPR de Biologie Structurale; National Cancer Inst.; Univ. Louis Pasteur; Inst. for Diabetes Discovery, Inc.

    2000-01-01

    Aldose reductase is an enzyme involved in diabetic complications, thoroughly studied for the purpose of inhibitor development. The structure of an enzyme-inhibitor complex solved at subatomic resolution has been used to develop a model of the catalytic mechanism. This model has been refined using a combination of molecular dynamics and quantum calculations. It shows that the proton donation, the subject of previous controversies, is the combined effect of three residues: Lys 77, Tyr 48 and His 110. Lys 77 polarises the Tyr 48 OH group, which donates the proton to His 110, which becomes doubly protonated. His 110 then moves and donates the proton to the substrate. The key information from the subatomic-resolution structure is the orientation of the ring and the single protonation of His 110 in the enzyme-inhibitor complex. This model is in full agreement with all available experimental data.

  12. Thermal state of SNPS 'Topaz' units: Calculation basis and experimental confirmation

    SciTech Connect

    Bogush, I.P.; Bushinsky, A.V.; Galkin, A.Y.; Serbin, V.I.; Zhabotinsky, E.E. )

    1991-01-01

    Keeping the thermal state parameters of thermionic space nuclear power system (SNPS) units within required limits in all operating regimes is a factor that determines SNPS lifetime. The thermal state requirements differ markedly between units, and meeting them requires both an appropriate arrangement of the units in the SNPS power-generating module and the use of dedicated control algorithms together with special thermal regulation and protection. Computer codes that determine the transient thermal performance of the liquid-metal loop and the main units were developed to provide the calculational basis for the required thermal state of SNPS 'Topaz' units. Conformity of these parameters to the requirements is confirmed by the results of autonomous unit tests, mock-up tests, power tests of ground SNPS prototypes, and flight tests of two SNPS 'Topaz' systems.

  13. A calculation of feedbacks based on climate variations over the last decade

    NASA Astrophysics Data System (ADS)

    Dessler, A. E.

    2011-12-01

    I have calculated the strength of the temperature, water vapor, cloud, and albedo feedbacks in response to climate variations over the last decade. In general, the global average feedbacks agree well with those in comparable climate model runs. For the temperature and water vapor feedbacks, the average of the ensemble of climate models faithfully reproduces the spatial pattern of the feedbacks. The model ensemble average does a reasonable job simulating the spatial pattern of longwave and shortwave cloud feedbacks, but there are important differences in the net cloud feedback. The models predict a positive feedback in the tropics and a near-zero feedback in the northern hemisphere extratropics, while the observations show a negative feedback in the tropics and a strong positive feedback in the northern hemisphere extratropics. These disagreements tend to cancel in the global average, leading to better agreement there.

  14. Calculation of Shuttle Base Heating Environments and Comparison with Flight Data

    NASA Technical Reports Server (NTRS)

    Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.

    1983-01-01

    The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.

  15. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
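
    The error-propagation mechanism discussed above can be illustrated with a minimal, hypothetical 1D analogue: solve a discrete Poisson equation with and without noise in the source term and compare the recovered fields. The manufactured fields, Dirichlet boundaries, and 1% noise level are assumptions for illustration, not the authors' setup.

```python
import numpy as np

# Hypothetical 1D illustration: solve d2p/dx2 = f with Dirichlet BCs
# and observe how noise in f propagates into the recovered "pressure".
n = 101
L = 1.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

p_true = np.sin(np.pi * x)            # manufactured pressure field
f = -np.pi**2 * np.sin(np.pi * x)     # corresponding source term

# Second-difference matrix on interior points, p(0) = p(L) = 0
A = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) +
     np.diag(np.ones(n - 3), -1)) / h**2

rng = np.random.default_rng(0)
noise = 0.01 * np.abs(f).max() * rng.standard_normal(n - 2)

p_clean = np.linalg.solve(A, f[1:-1])          # noise-free solve
p_noisy = np.linalg.solve(A, f[1:-1] + noise)  # solve with noisy data

err = np.abs(p_noisy - p_clean).max()
print(f"max pressure error from 1% data noise: {err:.2e}")
```

    The inverse Laplacian damps high-frequency noise, so the recovered field error stays well below the injected data noise; the paper's point is that the attainable bound depends on boundary-condition type and domain geometry.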

  16. Patient reactions to a web-based cardiovascular risk calculator in type 2 diabetes: a qualitative study in primary care

    PubMed Central

    Nolan, Tom; Dack, Charlotte; Pal, Kingshuk; Ross, Jamie; Stevenson, Fiona A; Peacock, Richard; Pearson, Mike; Spiegelhalter, David; Sweeting, Michael; Murray, Elizabeth

    2015-01-01

    Background Use of risk calculators for specific diseases is increasing, with an underlying assumption that they promote risk reduction as users become better informed and motivated to take preventive action. Empirical data to support this are, however, sparse and contradictory. Aim To explore user reactions to a cardiovascular risk calculator for people with type 2 diabetes. Objectives were to identify cognitive and emotional reactions to the presentation of risk, with a view to understanding whether and how such a calculator could help motivate users to adopt healthier behaviours and/or improve adherence to medication. Design and setting Qualitative study combining data from focus groups and individual user experience. Adults with type 2 diabetes were recruited through website advertisements and posters displayed at local GP practices and diabetes groups. Method Participants used a risk calculator that provided individualised estimates of cardiovascular risk. Estimates were based on UK Prospective Diabetes Study (UKPDS) data, supplemented with data from trials and systematic reviews. Risk information was presented using natural frequencies, visual displays, and a range of formats. Data were recorded and transcribed, then analysed by a multidisciplinary group. Results Thirty-six participants contributed data. Users demonstrated a range of complex cognitive and emotional responses, which might explain the lack of change in health behaviours demonstrated in the literature. Conclusion Cardiovascular risk calculators for people with diabetes may best be used in conjunction with health professionals who can guide the user through the calculator and help them use the resulting risk information as a source of motivation and encouragement. PMID:25733436

  17. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    SciTech Connect

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-12-07

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of MC calculations. MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm². Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12×12 and 6×6 mm²) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12×12 and 6×6 mm²), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm². Special care must be taken for smaller fields.
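
    As a rough illustration of the 2%/2 mm gamma-index test used above, the following sketch evaluates a 1D gamma index between two mock dose profiles. The profiles, grid, and global-normalisation choice are assumptions for illustration, not the commissioning data.

```python
import numpy as np

# Minimal 1D gamma-index sketch (2%/2 mm, global dose normalisation).
def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol=2.0):
    """Return the gamma value at each reference point."""
    d_norm = dose_tol * d_ref.max()   # global dose criterion
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        # squared gamma over all evaluation points; keep the minimum
        term = ((x_eval - xr) / dist_tol) ** 2 + ((d_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(term.min()))
    return np.array(gammas)

x = np.linspace(-30.0, 30.0, 121)                      # position in mm
measured = np.exp(-(x / 15.0) ** 2)                    # mock measured profile
calculated = 1.01 * np.exp(-((x - 0.5) / 15.0) ** 2)   # 1% scaled, 0.5 mm shifted

g = gamma_1d(x, measured, x, calculated)
pass_rate = 100.0 * np.mean(g <= 1.0)
print(f"gamma pass rate: {pass_rate:.1f}%")
```

    A point passes when gamma <= 1, i.e. some evaluation point lies within the combined 2%/2 mm ellipse; a 1% scaling plus 0.5 mm shift sits comfortably inside both tolerances.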

  18. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    NASA Astrophysics Data System (ADS)

    Lárraga-Gutiérrez, J. M.; García-Garduño, O. A.; de la Cruz, O. O. Galván; Hernández-Bojórquez, M.; Ballesteros-Zebadúa, P.

    2010-12-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of MC calculations. MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm². Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12×12 and 6×6 mm²) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12×12 and 6×6 mm²), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm². Special care must be taken for smaller fields.

  19. Spectral properties of protonated Schiff base porphyrins and chlorins. INDO-CI calculations and resonance raman studies

    SciTech Connect

    Hanson, L.K.; Chang, C.K.; Ward, B.; Callahan, P.M.; Babcock, G.T.; Head, J.D.

    1984-07-11

    INDO-CI calculations successfully reproduce the striking changes in optical spectra that occur upon protonation of mono- and disubstituted porphyrin, chlorin, and bacteriochlorin Schiff base complexes. They ascribe the changes to Schiff base C=N π* orbitals which drop in energy upon protonation and mix with and perturb the π* orbitals of the macrocycle, a result consistent with resonance Raman data. The perturbation is predicted to affect not only transition energies and intensities but also dipole moment directions. The symmetry of the porphyrin and the substitution site of the chlorin are shown to play an important role, especially in governing whether the lowest energy transition will red shift or blue shift. Blue shifts are calculated for protonation of ketimine and enamine isomers of pyrochlorophyll a (PChl). Comparison with reported optical spectra suggests that PChl a Schiff base may undergo isomerization upon protonation. Resonance Raman data on CHO, CHNR, CHNHR⁺, and pyrrolidine adducts of chlorin demonstrate the isolation of the peripheral C=O and C=N groups from the macrocycle π system, intramolecular hydrogen bonding, and selective enhancement of ν(C=N) for those species with a split Soret band. ν(C=N) is observed with 488.0-nm excitation into the lower-energy Soret band and is absent for 406.7-nm excitation into the higher-energy Soret band, a result predicted by the calculations. 44 references, 10 figures, 2 tables.

  20. Evaluation of uncertainty predictions and dose output for model-based dose calculations for megavoltage photon beams

    SciTech Connect

    Olofsson, Joergen; Nyholm, Tufve; Georg, Dietmar; Ahnesjoe, Anders; Karlsson, Mikael

    2006-07-15

    In many radiotherapy clinics an independent verification of the number of monitor units (MU) used to deliver the prescribed dose to the target volume is performed prior to the treatment start. Traditionally this has been done by using methods mainly based on empirical factors which, at least to some extent, try to separate the influence from input parameters such as field size, depth, distance, etc. The growing complexity of modern treatment techniques does, however, make this approach increasingly difficult, both in terms of practical application and in terms of the reliability of the results. In the present work the performance of a model-based approach, describing the influence from different input parameters through actual modeling of the physical effects, has been investigated in detail. The investigated model is based on two components related to megavoltage photon beams; one describing the exiting energy fluence per delivered MU, and a second component describing the dose deposition through a pencil kernel algorithm solely based on a measured beam quality index. Together with the output calculations, the basis of a method aiming to predict the inherent calculation uncertainties in individual treatment setups has been developed. This has all emerged from the intention of creating a clinical dose/MU verification tool that requires an absolute minimum of commissioned input data. This evaluation was focused on irregular field shapes and performed through comparison with output factors measured at 5, 10, and 20 cm depth in ten multileaf collimated fields on four different linear accelerators with varying multileaf collimator designs. The measurements were performed both in air and in water and the results of the two components of the model were evaluated separately and combined. When compared with the corresponding measurements the resulting deviations in the calculated output factors were in most cases smaller than 1% and in all cases smaller than 1.7%.

  1. Evaluation of uncertainty predictions and dose output for model-based dose calculations for megavoltage photon beams.

    PubMed

    Olofsson, Jörgen; Nyholm, Tufve; Georg, Dietmar; Ahnesjö, Anders; Karlsson, Mikael

    2006-07-01

    In many radiotherapy clinics an independent verification of the number of monitor units (MU) used to deliver the prescribed dose to the target volume is performed prior to the treatment start. Traditionally this has been done by using methods mainly based on empirical factors which, at least to some extent, try to separate the influence from input parameters such as field size, depth, distance, etc. The growing complexity of modern treatment techniques does, however, make this approach increasingly difficult, both in terms of practical application and in terms of the reliability of the results. In the present work the performance of a model-based approach, describing the influence from different input parameters through actual modeling of the physical effects, has been investigated in detail. The investigated model is based on two components related to megavoltage photon beams; one describing the exiting energy fluence per delivered MU, and a second component describing the dose deposition through a pencil kernel algorithm solely based on a measured beam quality index. Together with the output calculations, the basis of a method aiming to predict the inherent calculation uncertainties in individual treatment setups has been developed. This has all emerged from the intention of creating a clinical dose/MU verification tool that requires an absolute minimum of commissioned input data. This evaluation was focused on irregular field shapes and performed through comparison with output factors measured at 5, 10, and 20 cm depth in ten multileaf collimated fields on four different linear accelerators with varying multileaf collimator designs. The measurements were performed both in air and in water and the results of the two components of the model were evaluated separately and combined. When compared with the corresponding measurements the resulting deviations in the calculated output factors were in most cases smaller than 1% and in all cases smaller than 1.7%.

  2. Pre-drilling calculation of geomechanical parameters for safe geothermal wells based on outcrop analogue samples

    NASA Astrophysics Data System (ADS)

    Reyer, Dorothea; Philipp, Sonja

    2014-05-01

    It is desirable to enlarge the profit margin of geothermal projects by reducing the total drilling costs considerably. Substantiated assumptions on uniaxial compressive strengths and failure criteria are important to avoid borehole instabilities and adapt the drilling plan to rock mechanical conditions to minimise non-productive time. Because core material is rare we aim at predicting in situ rock properties from outcrop analogue samples which are easy and cheap to provide. The comparability of properties determined from analogue samples with samples from depths is analysed by performing physical characterisation (P-wave velocities, densities), conventional triaxial tests, and uniaxial compressive strength tests of both quarry and equivalent core samples. "Equivalent" means that the quarry sample is of the same stratigraphic age and of comparable sedimentary facies and composition as the correspondent core sample. We determined the parameters uniaxial compressive strength (UCS) and Young's modulus for 35 rock samples from quarries and 14 equivalent core samples from the North German Basin. A subgroup of these samples was used for triaxial tests. For UCS versus Young's modulus, density and P-wave velocity, linear- and non-linear regression analyses were performed. We repeated regression separately for clastic rock samples or carbonate rock samples only as well as for quarry samples or core samples only. Empirical relations were used to calculate UCS values from existing logs of sampled wellbore. Calculated UCS values were then compared with measured UCS of core samples of the same wellbore. With triaxial tests we determined linearized Mohr-Coulomb failure criteria, expressed in both principal stresses and shear and normal stresses, for quarry samples. Comparison with samples from larger depths shows that it is possible to apply the obtained principal stress failure criteria to clastic and volcanic rocks, but less so for carbonates. 
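
    A minimal sketch of the regression step described above: fit UCS against P-wave velocity for analogue samples, then predict UCS from a logged velocity. The sample values below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical analogue-sample data: P-wave velocity vs. UCS
vp = np.array([2500.0, 3100.0, 3600.0, 4200.0, 4800.0])   # m/s
ucs = np.array([40.0, 62.0, 78.0, 101.0, 125.0])          # MPa

# Linear regression UCS = slope * Vp + intercept
slope, intercept = np.polyfit(vp, ucs, 1)

def ucs_from_vp(v):
    """Predict UCS (MPa) from P-wave velocity (m/s) via the fitted line."""
    return slope * v + intercept

predicted = ucs_from_vp(4000.0)           # velocity taken from a sonic log
r = np.corrcoef(vp, ucs)[0, 1]            # correlation of the fit
print(f"UCS(4000 m/s) ~ {predicted:.1f} MPa, r = {r:.3f}")
```

    In the study such empirical relations, fitted separately for clastic and carbonate rocks, are applied to existing wellbore logs and checked against UCS measured on core samples from the same well.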

  3. Ab initio calculations and crystal symmetry considerations for novel FeSe-based superconductors

    NASA Astrophysics Data System (ADS)

    Mazin, Igor

    2013-03-01

    Density functional calculations disagree with the ARPES measurements on both K0.3Fe2Se2 superconducting phase and FeSe/SrTiO3 monolayers. Yet they can still be dramatically useful for the reason that they respect full crystallographic symmetry and take good account of electron-ion interaction. Using just symmetry analysis, it is shown that nodeless d-wave superconductivity is not an option in these systems, and a microscopic framework is derived that leads to a novel s-wave sign-reversal state, qualitatively different from the already familiar s+/- state in pnictides and bulk binary selenides. Regarding the FeSe monolayer, bonding and charge transfer between the film and the substrate is analyzed and it is shown that the former is weak and the latter negligible, which sets important restrictions on possible mechanisms of doping and superconductivity in these monolayers. In particular, the role of the so-called ``Se etching,'' necessary for superconductivity in FeSe monolayers, is analyzed in terms of electronic structure and bonding with the substrate.

  4. Ray-Based Calculations with DEPLETE of Laser Backscatter in ICF Targets

    SciTech Connect

    Strozzi, D J; Williams, E; Hinkel, D; Froula, D; London, R; Callahan, D

    2008-05-19

    A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code Deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pF3D. Comparisons with Brillouin-scattering experiments at the Omega Laser Facility show that laser speckles greatly enhance the reflectivity over the Deplete results. An approximate upper bound on this enhancement is given by doubling the Deplete coupling coefficient. Analysis with Deplete of an ignition design for the National Ignition Facility (NIF), with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bracket speckle effects suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.

  5. Calculation of room temperature conductivity and mobility in tin-based topological insulator nanoribbons

    SciTech Connect

    Vandenberghe, William G.; Fischetti, Massimo V.

    2014-11-07

    Monolayers of tin (stannanane) functionalized with halogens have been shown to be topological insulators. Using density functional theory (DFT), we study the electronic properties and room-temperature transport of nanoribbons of iodine-functionalized stannanane, showing that the overlap integral between the wavefunctions associated with edge states at opposite ends of the ribbons decreases with increasing width of the ribbons. Obtaining the phonon spectra and the deformation potentials also from DFT, we calculate the conductivity of the ribbons using the Kubo-Greenwood formalism and show that their mobility is limited by inter-edge phonon backscattering. We show that wide stannanane ribbons have a mobility exceeding 10⁶ cm²/Vs. Contrary to ordinary semiconductors, two-dimensional topological insulators exhibit a high conductivity at low charge density, decreasing with increasing carrier density. Furthermore, the conductivity of iodine-functionalized stannanane ribbons can be modulated over a range of three orders of magnitude, thus rendering this material extremely interesting for classical computing applications.

  6. Performance of SOPPA-based methods in the calculation of vertical excitation energies and oscillator strengths

    NASA Astrophysics Data System (ADS)

    Sauer, Stephan P. A.; Pitzner-Frydendahl, Henrik F.; Buse, Mogens; Jensen, Hans Jørgen Aa.; Thiel, Walter

    2015-07-01

    We present two new modifications of the second-order polarization propagator approximation (SOPPA), SOPPA(SCS-MP2) and SOPPA(SOS-MP2), which employ either spin-component-scaled or scaled opposite-spin MP2 correlation coefficients instead of the regular MP2 coefficients. The performance of these two methods, the original SOPPA method as well as SOPPA(CCSD) and RPA(D) in the calculation of vertical electronic excitation energies and oscillator strengths is investigated for a large benchmark set of 28 medium-sized molecules with 139 singlet and 71 triplet excited states. The results are compared with the corresponding CC3 and CASPT2 results from the literature for both the TZVP set and the larger and more diffuse aug-cc-pVTZ basis set. In addition, the results with the aug-cc-pVTZ basis set are compared with the theoretical best estimates for this benchmark set. We find that the original SOPPA method gives overall the smallest mean deviations from the reference values and the most consistent results.

  7. Highly correlated configuration interaction calculations on water with large orbital bases

    SciTech Connect

    Almora-Díaz, César X.

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the extrapolation to the complete basis set do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).
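
    Complete-basis-set extrapolations of the kind mentioned above are commonly done with an E(n) = E_CBS + A/n³ ansatz over the cardinal number n of the basis; a two-point sketch assuming that form, with placeholder correlation energies rather than the paper's values:

```python
# Two-point CBS extrapolation sketch: solve E(n) = E_CBS + A/n^3
# exactly from results at two cardinal numbers n and m.
def cbs_extrapolate(e_n, n, e_m, m):
    """Eliminate A between the two equations and return E_CBS."""
    wn, wm = n ** 3, m ** 3
    return (wn * e_n - wm * e_m) / (wn - wm)

# hypothetical correlation energies (hartree) at quintuple/sextuple zeta
e5, e6 = -0.3650, -0.3685
e_cbs = cbs_extrapolate(e6, 6, e5, 5)
print(f"E_CBS ~ {e_cbs:.4f} hartree")
```

    The extrapolated value lies below both finite-basis energies, which is why the scatter ("dispersion") of such fits limits the accuracy of the full-CI estimate quoted in the abstract.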

  8. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We make a gradient correction to a new local density approximation form of the positron-electron correlation. The positron lifetimes and affinities are then probed by using these two approximation forms based on three electronic-structure calculation methods: the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between lifetimes calculated with the FLAPW and ATSUP methods are clearly interpreted in terms of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement with the FLAPW method on both positron lifetimes and affinities, and that the competitiveness of the ATSUP method against the FLAPW/PAW methods is reduced within the best calculations. By comparison with experimental data, the newly introduced gradient-corrected correlation form proves competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  9. Verification study of thorium cross section in MVP calculation of thorium based fuel core using experimental data

    SciTech Connect

    Mai, V. T.; Fujii, T.; Wada, K.; Kitada, T.; Takaki, N.; Yamaguchi, A.; Watanabe, H.; Unesaki, H.

    2012-07-01

    Considering the importance of thorium data and concerns about the accuracy of the Th-232 cross section library, a series of thorium critical core experiments carried out at the KUCA facility of the Kyoto University Research Reactor Institute has been analyzed. The core was composed of pure thorium plates and 93% enriched uranium plates with a solid polyethylene moderator, a hydrogen to U-235 ratio of 140, and a Th-232 to U-235 ratio of 15.2. Calculations of the effective multiplication factor, control rod worth, and reactivity worth of the Th plates were conducted with the MVP code using the JENDL-4.0 library [1]. At the experiment site, after achieving the critical state with 51 fuel rods inserted in the reactor, the reactivity worths of the control rods and of a thorium sample were measured. Compared with the experimental data, the calculation overestimates the effective multiplication factor by about 0.90%. The MVP evaluation of control rod reactivity worth is acceptable, with a maximum discrepancy on the order of the statistical error of the measured data. The calculated results agree with the measured ones to within 3.1% for the reactivity worth of one Th plate. From this investigation, further experiments and research on the Th-232 cross section library are needed to provide more reliable data for thorium based fuel core design and safety calculations. (authors)
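
    The comparison above can be made concrete by converting the quoted 0.90% overestimate of the effective multiplication factor into reactivity units. A minimal sketch, where taking the experimental critical state as exactly k = 1 is an assumption for illustration:

```python
# Convert a multiplication-factor discrepancy into reactivity (pcm).
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

k_exp = 1.0000            # critical core: k = 1 by definition (assumed exact)
k_calc = k_exp * 1.0090   # calculation overestimates k-eff by 0.90%

delta_rho = reactivity_pcm(k_calc) - reactivity_pcm(k_exp)
print(f"calculation-experiment discrepancy: {delta_rho:.0f} pcm")
```

    A ~0.9% bias in k-eff thus corresponds to roughly 900 pcm of reactivity, a substantial margin for criticality safety, which motivates the call for better Th-232 cross section data.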

  10. ApH calculator based on linear transformations of the Henderson-Hasselbalch equation.

    PubMed

    JOSEPH, N R

    1958-11-14

    The four classical methods of determining the mass of the moon are noted, and a new use of an artificial earth satellite is proposed. The procedure, based on Kepler's law, is outlined, but at present the uncertainties in the observed data preclude an improved estimate of the lunar mass.
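
    The title of this record concerns the Henderson-Hasselbalch equation; for reference, a minimal sketch of a pH calculation based on that equation, pH = pKa + log10([A⁻]/[HA]), with illustrative buffer values not taken from this record:

```python
import math

def ph_henderson_hasselbalch(pka, base_conc, acid_conc):
    """pH of a buffer from pKa and conjugate base/acid concentrations (M)."""
    return pka + math.log10(base_conc / acid_conc)

# acetate buffer example: pKa 4.76, equal concentrations -> pH equals pKa
ph = ph_henderson_hasselbalch(4.76, 0.10, 0.10)
print(f"pH = {ph:.2f}")
```

    Because the equation is linear in log10 of the concentration ratio, a slide-rule or nomogram-style calculator can be built from straight-line transformations of it, which is the idea the title alludes to.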

  11. SPS: A Simulation Tool for Calculating Power of Set-Based Genetic Association Tests.

    PubMed

    Li, Jiang; Sham, Pak Chung; Song, Youqiang; Li, Miaoxin

    2015-07-01

    Set-based association tests, combining a set of single-nucleotide polymorphisms into a unified test, have become important approaches to identify weak-effect or low-frequency risk loci of complex diseases. However, there is no comprehensive and user-friendly tool to estimate power of set-based tests for study design. We developed a simulation tool to estimate statistical power of multiple representative set-based tests (SPS). SPS has a graphic interface to facilitate parameter settings and result visualization. Advanced functions include loading real genotypes to define genetic architecture, set-based meta-analysis for risk loci with or without heterogeneity, and parallel simulations. In proof-of-principle examples, SPS took no more than 3 sec on average to estimate the power in a conventional setting. The SPS has been integrated into a user-friendly software tool (KGG) as an independent functional module and it is freely available at http://statgenpro.psychiatry.hku.hk/limx/kgg/. PMID:25995121

  13. Effect of Oblique Electromagnetic Ion Cyclotron Waves on Relativistic Electron Scattering: CRRES Based Calculation

    NASA Technical Reports Server (NTRS)

    Gamayunov, K. V.; Khazanov, G. V.

    2007-01-01

    We consider the effect of oblique EMIC waves on relativistic electron scattering in the outer radiation belt using simultaneous observations of plasma and wave parameters from CRRES. The main findings can be summarized as follows: 1. In comparison with field-aligned waves, intermediate and highly oblique distributions decrease the range of pitch-angles subject to diffusion, and reduce the local scattering rate by an order of magnitude at pitch-angles where the principal |n| = 1 resonances operate. Oblique waves allow the |n| > 1 resonances to operate, extending the range of local pitch-angle diffusion down to the loss cone, and increasing the diffusion at lower pitch angles by orders of magnitude; 2. The local diffusion coefficients derived from CRRES data are qualitatively similar to the local results obtained for prescribed plasma/wave parameters. Consequently, it is likely that the bounce-averaged diffusion coefficients, if estimated from concurrent data, will exhibit dependencies similar to those we found for model calculations; 3. In comparison with field-aligned waves, intermediate and highly oblique waves decrease the bounce-averaged scattering rate near the edge of the equatorial loss cone by orders of magnitude if the electron energy does not exceed a threshold (approximately 2-5 MeV) depending on specified plasma and/or wave parameters; 4. For greater electron energies, oblique waves operating the |n| > 1 resonances are more effective and provide the same bounce-averaged diffusion rate near the loss cone as field-aligned waves do.

  14. Analytical calculation of sensing parameters on carbon nanotube based gas sensors.

    PubMed

    Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh

    2014-01-01

    Carbon Nanotubes (CNTs) are generally nano-scale tubes comprising a network of carbon atoms in a cylindrical setting that compared with silicon counterparts present outstanding characteristics such as high mechanical strength, high sensing capability and large surface-to-volume ratio. These characteristics, in addition to the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for use in sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor in which the CNT conductance change resulting from the chemical reaction between NH3 and CNT has been employed to model the sensing mechanism with proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I-V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research.

  15. Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.

    PubMed

    Brea, Oriana; Mó, Otilia; Yáñez, Manuel

    2015-10-26

    The structure, relative stability, and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine-, and sulfur-containing Lewis bases, have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors a charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCA values are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+) complexes and the corresponding Al(+) analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds.

  16. Calculation of Shear Stiffness in Noise Dominated Magnetic Resonance Elastography (MRE) Data Based on Principal Frequency Estimation

    PubMed Central

    McGee, K. P.; Lake, D.; Mariappan, Y; Hubmayr, R. D.; Manduca, A.; Ansell, K.; Ehman, R. L.

    2011-01-01

    Magnetic resonance elastography (MRE) is a noninvasive, phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear-wave source and motion-encoding gradient waveforms within the MRE pulse sequence enables visualization of the propagating shear wave throughout the medium under investigation. Encoded shear-wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimation is that the algorithms employed typically calculate shear stiffness from relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear-wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low-SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods, at the expense of a loss of spatial information within the region of interest from which the principal frequency estimate is derived. PMID:21701049
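    The principal-frequency idea can be sketched in a few lines: take the spatial spectrum of the displacement map, pick the dominant spatial frequency k, and convert the wave speed c = f/k into stiffness via μ = ρc². The Python sketch below is an illustrative 1-D reduction with made-up parameter values, not the authors' pipeline:

```python
import numpy as np

def principal_frequency_stiffness(displacement, dx, f_mech, rho=1000.0):
    """Estimate shear stiffness from the principal spatial frequency of a
    1-D shear-wave displacement profile (illustrative sketch only)."""
    n = displacement.size
    spec = np.abs(np.fft.rfft(displacement * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequencies [1/m]
    k_p = freqs[np.argmax(spec[1:]) + 1]      # dominant bin, skipping DC
    c = f_mech / k_p                          # wave speed = f * wavelength
    return rho * c ** 2                       # mu = rho * c^2  [Pa]

# synthetic wave: 100 Hz excitation, 2 m/s speed -> 0.02 m wavelength
x = np.arange(0, 0.2, 1e-4)
u = np.sin(2 * np.pi * x / 0.02)
mu = principal_frequency_stiffness(u, 1e-4, 100.0)   # ~ 1000 * 2^2 = 4000 Pa
```

    Because only one number (the dominant spatial frequency) is extracted per region of interest, the estimate is robust to noise but, as the abstract notes, discards spatial variation inside the region.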

  17. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    NASA Astrophysics Data System (ADS)

    Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.

    2006-12-01

    Abacus-based mental calculation is a unique part of Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD-contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All acquired data were analyzed using SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on a virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, greater involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and lower-level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, were noted in the abacus expert group than in the non-expert group. We suggest that these findings may explain why abacus experts can exhibit exceptional computational skills compared to non-experts after intensive training.

  18. Primary pressure standard based on piston-cylinder assemblies. Calculation of effective cross sectional area based on rarefied gas dynamics

    NASA Astrophysics Data System (ADS)

    Sharipov, Felix; Yang, Yuanchao; Ricker, Jacob E.; Hendricks, Jay H.

    2016-10-01

    Currently, the piston-cylinder assembly known as PG39 is used as a primary pressure standard at the National Institute of Standards and Technology (NIST) in the range of 20 kPa to 1 MPa, with a standard uncertainty of 3 × 10⁻⁶ as evaluated in 2006. An approximate model of gas flow through the crevice between the piston and sleeve contributed significantly to this uncertainty. The aim of this work is to revise the previous effective cross-sectional area of PG39 and its uncertainty by carrying out more exact calculations that account for the effects of rarefied gas flow. The effective cross-sectional area is completely determined by the pressure distribution in the crevice. Once the pressure distribution is known, the elastic deformations of both piston and sleeve are calculated by finite element analysis. Then the pressure distribution is recalculated iteratively for the new crevice dimension. As a result, a new value of the effective area is obtained, with a relative difference of 3 × 10⁻⁶ from the previous one. Moreover, this approach significantly reduces the standard uncertainty related to the gas flow model, so that the total uncertainty is decreased by a factor of three.

  19. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...

  20. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust...

  1. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...

  2. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust...

  3. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206... POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values §...

  4. Calculation of Waveform-based Differential Times with both Cross-correlation and Bispectrum Methods

    NASA Astrophysics Data System (ADS)

    Du, W.; Thurber, C. H.; Eberhart-Phillips, D.

    2003-12-01

    Cross-correlation (CC) determined relative time delays, or related differential times, between pairs of seismic events at the same station are often used as input data to improve earthquake relocation results. Researchers generally select those time delays whose associated CC coefficients are larger than a chosen threshold. When two similar time series are contaminated by correlated noise sources, the relative time delay between them calculated with the CC technique is sometimes not reliable. Noise at a station for different events is expected to be partially correlated due to a combination of constant noise sources with time-varying amplitudes (microseisms, wind or cultural noise) and site response effects. The bispectrum (BS) method performs better with such data by eliminating the effect of correlated Gaussian noise in the third-order spectral domain. In this work, we use both the CC and BS methods to compute the relative time delay between two windowed waveforms of an event pair recorded at the same station. CC is performed only on the band-pass-filtered data, while the BS method is applied to both the raw (unfiltered) and filtered waveforms. Because the characteristics of the noise terms in the raw and filtered data are different, the two BS time delay estimates may not always agree with each other. We therefore use both of them to verify (select or reject) the computed CC time delay, i.e., to check whether the differences between the CC estimate and the two BS estimates are both within a specified limit. The exact verification process for an event pair varies depending on the size of the maximum CC coefficient across all the common stations. This BS verification process can provide quality control over the chosen CC time delays and potentially more differential times for close event pairs. We apply this technique to obtain bispectrum-verified CC differential times for 822 New Zealand earthquakes in the Wellington region. We find that the bispectrum-verified CC time delays
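    For the CC side of the scheme, the basic delay estimate is the lag that maximizes the cross-correlation of the two windowed waveforms. A minimal numpy sketch (the bispectrum verification step is not shown, and the waveforms here are synthetic):

```python
import numpy as np

def cc_delay(a, b, dt):
    """Relative time delay of waveform b with respect to a, and the
    normalized CC coefficient at that lag (illustrative sketch)."""
    a = a - a.mean()
    b = b - b.mean()
    cc = np.correlate(b, a, mode="full")
    lag = np.argmax(cc) - (len(a) - 1)            # samples b lags behind a
    coeff = cc.max() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return lag * dt, coeff

dt = 0.01
t = np.arange(0, 4, dt)
sig = np.exp(-(t - 1.0) ** 2 / 0.01)              # synthetic pulse
shifted = np.roll(sig, 25)                        # delayed copy, 0.25 s
delay, coeff = cc_delay(sig, shifted, dt)
```

    In practice a delay would only be accepted if `coeff` exceeds the chosen threshold and, per the paper's scheme, if it agrees with both bispectrum estimates within a specified limit.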

  5. Numerical design methodology for a mixed-flow (helico-centrifugal) fan based on the meridional calculation

    NASA Astrophysics Data System (ADS)

    Lallier-Daniels, Dominic

    Fan design is often based on a trial-and-error methodology of improving existing geometries, together with the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even in case of success, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the meridional (through-flow) calculation for the preliminary design of mixed-flow (helico-centrifugal) turbomachines, and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method at the core of the proposed design process is presented first. The theoretical framework is developed; since the meridional calculation remains fundamentally an iterative process, the computational procedure is also presented, including the numerical methods employed to solve the governing equations. The meridional code written during this master's project is validated against a meridional algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachinery design methodology developed in this study is then presented in the form of a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed design of the blades, and finally a 3D numerical analysis for validation and fine optimization of the geometry. The results of the meridional calculation

  6. A spreadsheet calculator for estimating biogas production and economic measures for UK-based farm-fed anaerobic digesters.

    PubMed

    Wu, Anthony; Lovett, David; McEwan, Matthew; Cecelja, Franjo; Chen, Tao

    2016-11-01

    This paper presents a spreadsheet calculator to estimate biogas production and the operational revenue and costs for UK-based farm-fed anaerobic digesters. Sophisticated biogas production models exist in the published literature, but their application to farm-fed anaerobic digesters is often impractical. This is due to the limited measuring devices, financial constraints, and the operators being non-experts in anaerobic digestion. The proposed biogas production model is designed to use the measured process variables typically available at farm-fed digesters, accounting for the effects of retention time, temperature and imperfect mixing. The estimation of the operational revenue and costs allows the owners to assess the most profitable way to run the process, which would support the sustained use of the technology. The calculator is first compared with literature-reported data, and then applied to the digester unit on a UK farm to demonstrate its use in a practical setting. PMID:27614153
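    As an illustration of the kind of kinetics such a calculator can implement from routinely measured variables (loading, retention time, temperature), the sketch below uses the Chen-Hashimoto model with a commonly cited empirical temperature correlation for the maximum specific growth rate. The paper's actual spreadsheet model and parameter values are not reproduced here; all numbers are illustrative:

```python
def biogas_rate(S0, B0, hrt_days, temp_c, K=0.8):
    """Volumetric methane productivity [m3 CH4 per m3 digester per day]
    via Chen-Hashimoto kinetics -- an illustrative stand-in, not the
    paper's model.  S0: influent volatile solids [kg VS/m3];
    B0: ultimate methane yield [m3 CH4/kg VS]; K: dimensionless
    kinetic parameter."""
    mu_m = 0.013 * temp_c - 0.129            # empirical max growth rate [1/d]
    B = B0 * (1.0 - K / (hrt_days * mu_m - 1.0 + K))
    return B * S0 / hrt_days

# mesophilic digester at 35 C, 25-day retention time
rate = biogas_rate(S0=50.0, B0=0.30, hrt_days=25.0, temp_c=35.0)
```

    A spreadsheet cell would implement the same algebra; the point of the model form is that longer retention pushes the achieved yield B toward the ultimate yield B0.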

  7. Python-based finite element code used as a universal and modular tool for electronic structure calculation

    NASA Astrophysics Data System (ADS)

    Cimrman, Robert; Tůma, Miroslav; Novák, Matyáš; Čertík, Ondřej; Plešek, Jiří; Vackář, Jiří

    2013-10-01

    Ab initio calculations of electronic states within the density-functional framework have been performed by means of the open-source finite element package SfePy (Simple Finite Elements in Python, http://sfepy.org). We describe a new robust ab initio real-space code based on (i) density functional theory, (ii) the finite element method and (iii) environment-reflecting pseudopotentials. This approach brings a new quality to solving the Kohn-Sham equations, calculating electronic states, total energy, Hellmann-Feynman forces and material properties, particularly for non-crystalline, non-periodic structures. The main asset of the above approach is an efficient combination of the excellent convergence control of the standard, universal basis used in the industrially proven finite element method, the high precision of ab initio environment-reflecting pseudopotentials, and applicability not restricted to electrically neutral periodic environments. We also present numerical examples illustrating the outputs of the method.

  8. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-01

    The pKa of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with SMD and SM8 solvation models perform well for alkanolamines with mean unsigned errors below 0.20 pKa units, in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach.
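    The quantity underlying all of these method combinations is the aqueous deprotonation free energy, converted to a pKa through the standard thermodynamic relation pKa = ΔG/(RT ln 10). A minimal sketch of that final conversion step (the free-energy value used below is illustrative, not taken from the paper):

```python
import math

def pka_from_free_energy(dG_kcal, T=298.15):
    """pKa from the aqueous deprotonation free energy of
    BH+(aq) -> B(aq) + H+(aq):  pKa = dG / (R T ln 10).
    dG_kcal in kcal/mol; returns a dimensionless pKa."""
    R = 1.98720425e-3                 # gas constant [kcal/(mol K)]
    return dG_kcal / (R * T * math.log(10))

# illustrative value: dG = 12.9 kcal/mol gives a pKa near 9.5,
# roughly the range of simple aliphatic amines
pka = pka_from_free_energy(12.9)
```

    The hard part, of course, is computing ΔG accurately: at 298 K an error of only 1.36 kcal/mol in the free energy shifts the pKa by a full unit, which is why high-level methods (G3, SCS-MP2) paired with good solvation models (SMD, SM8) are needed.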

  9. Brine release based on structural calculations of damage around an excavation at the Waste Isolation Pilot Plant (WIPP)

    SciTech Connect

    Munson, D.E.; Jensen, A.L.; Webb, S.W.; DeVries, K.L.

    1996-02-01

    In a large in situ experimental circular room, brine inflow was measured over 5 years. After correcting for evaporation losses into mine ventilation air, the measurements gave data for a period of nearly 3 years. Predicted brine accumulation based on a mechanical "snow plow" model of the volume swept by creep-induced damage, as calculated with the Multimechanism Deformation Coupled Fracture model, was found to agree with experiment. Calculation suggests the damage zone at 5 years effectively extends only some 0.7 m into the salt around the room. Also, because the mechanical model of brine release gives an adequate explanation of the measured data, the hydrological process of brine flow appears to be rapid compared to the mechanical process of brine release.

  10. Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen

    2012-05-01

    The correlation coefficients of random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of the correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity analysis, and a criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of a correlation coefficient ρ is sufficiently small (on the order of 0.00001), the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) When the difference between ρ_s, the coefficient most sensitive to the reliability, and ρ_R, the coefficient giving the smallest reliability, is less than 0.001, ρ_s is suggested for modeling the dependency of the random variables. This ensures the robust quality of the system without loss of the safety requirement. (3) In the case of |E_abs| > 0.001 and |E_rel| > 0.001, ρ_R should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach could provide a practical routine for mechanical design and manufacturing to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.

  11. Voronoi-cell finite difference method for accurate electronic structure calculation of polyatomic molecules on unstructured grids

    SciTech Connect

    Son, Sang-Kil

    2011-03-01

    We introduce a new numerical grid-based method on unstructured grids in three-dimensional real space to investigate the electronic structure of polyatomic molecules. The Voronoi-cell finite difference (VFD) method realizes a discrete Laplacian operator based on Voronoi cells and their natural neighbors, featuring high adaptivity and simplicity. To resolve the multicenter Coulomb singularity in all-electron calculations of polyatomic molecules, this method utilizes highly adaptive molecular grids which consist of spherical atomic grids. It provides accurate and efficient solutions of the Schrödinger equation and the Poisson equation with the all-electron Coulomb potentials regardless of the coordinate system and the molecular symmetry. As numerical examples, we assess the accuracy of the VFD method for the electronic structures of one-electron polyatomic systems, and apply the method to density-functional theory for many-electron polyatomic molecules.
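    As a minimal 1-D analogue of such grid-based all-electron solvers, the sketch below diagonalizes a finite-difference discretization of the radial Schrödinger equation for hydrogen (atomic units). The VFD method itself works on unstructured 3-D Voronoi grids, which this toy example does not attempt to reproduce:

```python
import numpy as np

def hydrogen_ground_state(n=800, r_max=20.0):
    """Ground-state energy of hydrogen from a uniform-grid, 3-point
    finite-difference discretization of the radial equation
    H u = -1/2 u'' - u/r  with u(0) = u(r_max) = 0 (atomic units)."""
    r = np.linspace(r_max / n, r_max, n)    # grid avoids the r = 0 singularity
    dx = r[1] - r[0]
    main = 1.0 / dx**2 - 1.0 / r            # kinetic diagonal + Coulomb term
    off = -0.5 / dx**2 * np.ones(n - 1)     # kinetic off-diagonal
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]         # exact value: -0.5 hartree

E0 = hydrogen_ground_state()                # close to -0.5
```

    Even this naive uniform grid recovers the exact -0.5 hartree to a few parts in 10⁴; the point of adaptive Voronoi grids is to achieve comparable accuracy near every nuclear singularity of a polyatomic molecule without a prohibitive number of points.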

  12. Calculation of reflectance distribution using angular spectrum convolution in mesh-based computer generated hologram.

    PubMed

    Yeom, Han-Ju; Park, Jae-Hyeung

    2016-08-22

    We propose a method to obtain a computer-generated hologram that renders reflectance distributions of individual mesh surfaces of three-dimensional objects. Unlike previous methods which find phase distribution inside each mesh, the proposed method performs convolution of angular spectrum of the mesh to obtain desired reflectance distribution. Manipulation in the angular spectrum domain enables its application to fully-analytic mesh based computer generated hologram, removing the necessity for resampling of the spatial frequency grid. It is also computationally inexpensive as the convolution can be performed efficiently using Fourier transform. In this paper, we present principle, error analysis, simulation, and experimental verification results of the proposed method.
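    The angular-spectrum machinery the method builds on can be illustrated with the standard FFT-based propagation of a sampled field between parallel planes (this sketch propagates a whole field; the paper's contribution is convolving the angular spectra of individual meshes, which is not reproduced here):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z with the standard
    angular spectrum method (illustrative sketch).  field: square 2-D
    complex array; dx: sample pitch [m]; z: propagation distance [m]."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# a unit-amplitude plane wave stays a unit plane wave under propagation
f = np.ones((64, 64), dtype=complex)
g = angular_spectrum_propagate(f, 633e-9, 10e-6, 1e-3)
```

    Because the transfer function H is applied in the spectral domain with two FFTs, the cost is O(n² log n) per propagation, which is the computational economy the proposed convolution-based reflectance rendering also exploits.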

  13. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    SciTech Connect

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted waste form. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of the study.

  14. Calculating the parameters of full lightning impulses using model-based curve fitting

    SciTech Connect

    McComb, T.R.; Lagnese, J.E.

    1991-10-01

    This paper presents a brief review of the techniques used for the evaluation of the parameters of high-voltage impulses and the problems encountered. The determination of the best smooth curve through oscillations on a high-voltage impulse is the major problem limiting the automatic processing of digital records of impulses. Non-linear regression, based on simple models, is applied to the analysis of simulated and experimental data for full lightning impulses. Results of model fitting to four different groups of impulses are presented and compared with some other methods. Plans for the extension of this work are outlined.
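    One simple model such non-linear regression can be based on is the standard double-exponential representation of a full lightning impulse, v(t) = A(e^(-at) - e^(-bt)). A hedged sketch using scipy's least-squares fitter on synthetic data (the paper's exact model set and fitting procedure are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, A, a, b):
    """Double-exponential model of a full lightning impulse:
    v(t) = A * (exp(-a t) - exp(-b t)), with front constant b >> tail
    constant a (illustrative parameter values below)."""
    return A * (np.exp(-a * t) - np.exp(-b * t))

t = np.linspace(0, 100e-6, 500)                     # 100 us record
true = (1.04, 1.5e4, 2.5e6)                         # assumed test values
rng = np.random.default_rng(0)
noisy = double_exp(t, *true) + 0.002 * rng.standard_normal(t.size)

popt, _ = curve_fit(double_exp, t, noisy, p0=(1.0, 1e4, 1e6))
```

    Fitting the smooth model through the (possibly oscillatory) record, then evaluating peak value and front/tail times on the fitted curve rather than the raw samples, is the essence of the model-based approach reviewed in the paper.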

  15. [A method of hyperspectral quantificational identification of minerals based on infrared spectral artificial immune calculation].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Li, Xin-Wu; Bi, Jian-Tao; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-04-01

    Rapid identification of minerals based on near-infrared (NIR) and shortwave-infrared (SWIR) hyperspectra is vital to remote sensing mine exploration, remote sensing mineral mapping and field geological documentation of drill core, and has led to many identification methods, including spectral angle mapping (SAM), spectral distance mapping (SDM), spectral feature fitting (SFF), the linear spectral mixture model (LSMM), and the mathematical combination feature spectral linear inversion model (CFSLIM). However, limitations of these methods affect their practical application. The present paper first gives a unified minerals component spectral inversion (MCSI) model based on a target sample spectrum and a standard endmember spectral library, evaluated by spectral similarity indexes. Then, taking the LSMM and the SAM evaluation index as an example, a specific formulation of the unified MCSI model is presented in the form of a combinatorial optimization. An artificial immune clonal selection algorithm is then used to solve the minerals feature spectral linear inversion optimization problem, named ICSFSLIM. Finally, an experiment was performed using ICSFSLIM and CFSLIM to identify the minerals contained in 22 rock samples selected in Baogutu, Xinjiang, China. The mean identification correctness and validity of ICSFSLIM are 34.22% and 54.08%, respectively, better than the 31.97% and 37.38% of CFSLIM; the correctness and validity variances of ICSFSLIM, 0.11 and 0.13, are smaller than those of CFSLIM, 0.15 and 0.25, indicating better identification stability.
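    As an example of the spectral similarity indexes on which the unified MCSI model can be built, SAM measures the angle between a target spectrum and a library endmember, which makes it insensitive to overall illumination scaling:

```python
import numpy as np

def spectral_angle(target, reference):
    """Spectral angle mapping (SAM) between a target spectrum and a
    library endmember spectrum, in radians (0 = identical shape)."""
    cos = np.dot(target, reference) / (
        np.linalg.norm(target) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# toy 3-band spectra: a brighter copy of the same material has angle ~ 0
s = np.array([0.2, 0.4, 0.6])
angle = spectral_angle(s, 2.5 * s)
```

    In an inversion scheme like the one described, such an index scores each candidate combination of library endmembers against the measured sample spectrum, and the combinatorial optimizer searches for the combination with the best score.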

  16. Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations

    NASA Astrophysics Data System (ADS)

    Orellana, Walter

    The capture and conversion of solar energy into electricity is one of the most important challenges for the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins attract great attention due to their well-known absorption properties in the visible range. However, extended materials like polymers with similar absorption properties are highly desirable. In this work, we investigate the stability, electronic and optical properties of polymeric nanostructures based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of time-dependent density functional perturbation theory. The aim of this work is the stability, electronic, and optical characterization of polymeric sheets and nanotubes obtained from H2P and H2Pc monomers. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different from those of the isolated molecules. However, the H2P and H2Pc nanotubes exhibit wide absorption in the visible and near-UV range, with the largest peaks at 600 and 700 nm, respectively, suggesting good characteristics for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules are also discussed. Departamento de Ciencias Físicas, República 220, 037-0134 Santiago, Chile.

  17. Comparison of Traditional and Simultaneous IMRT Boost Technique Basing on Therapeutic Gain Calculation

    SciTech Connect

    Slosarek, Krzysztof; Zajusz, Aleksander; Szlag, Marta

    2008-01-01

    Two different radiotherapy techniques, a traditional one (CRT), based on consecutively decreasing irradiation fields during treatment, and an intensity-modulated radiation therapy (IMRT) technique with concomitant boost, deliver different doses to treated volumes, increasing the dose in regions of interest. The fractionation schedule differs depending on the applied irradiation technique. The aim of this study was to compare different fractionation schedules with respect to tumor control and normal tissue complications. The analyses of tumor control probability (TCP) and normal tissue complication probability (NTCP) were based on the linear-quadratic (LQ) model of biologically equivalent dose. A therapeutic gain (TG) formula that combines NTCP and TCP for selected irradiated volumes was introduced to compare the CRT and simultaneous boost (SIB) methods. TG accounts for the different doses per fraction, the overall treatment time (OTT), and selected biological factors such as tumor cell repopulation time. Therapeutic gain increases with the dose per fraction and reaches its maximum for doses of about 3 Gy. Further increase in the dose per fraction results in a decrease of TG, mainly because of the escalation of NTCP. The presented TG formula allows the optimization of radiotherapy planning by comparing different treatment plans for individual patients and selecting the optimal fraction dose.
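    The TCP ingredient of such a therapeutic-gain formula is commonly the Poisson model with LQ cell survival. The sketch below uses illustrative parameter values (α, α/β, clonogen number), not those of the paper:

```python
import math

def tcp_lq(n_fractions, dose_per_fx, alpha=0.3, alpha_beta=10.0,
           n_clonogens=1e7):
    """Poisson tumour-control probability with linear-quadratic cell
    survival.  All parameter values are illustrative defaults, not taken
    from the paper.  alpha in 1/Gy; alpha_beta in Gy."""
    D = n_fractions * dose_per_fx                       # total dose [Gy]
    # LQ surviving fraction: exp(-alpha*D - beta*d*D), beta = alpha/(a/b)
    sf = math.exp(-alpha * D - (alpha / alpha_beta) * dose_per_fx * D)
    return math.exp(-n_clonogens * sf)                  # Poisson TCP

# same total dose, different fractionation:
tcp_conv = tcp_lq(30, 2.0)    # conventional 30 x 2 Gy
tcp_hypo = tcp_lq(20, 3.0)    # hypofractionated 20 x 3 Gy
```

    At equal total dose, the larger fraction size kills more cells per the LQ quadratic term and raises TCP, but it raises NTCP as well; the trade-off between the two is exactly what a TG formula is designed to score.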

  18. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from o(n^N lg n) in 3D space to o(R1R2 n lg n) in 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more inerratic feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault expression can also be obtained from the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF achieves 81.66 dB against 15.17 dB by beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but can also extract more inerratic and sparser bispectrum features of gearbox faults.

  19. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantages of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration.
The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU

  20. Fast method of calculating a photorealistic hologram based on orthographic ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Yamaguchi, Masahiro

    2016-04-01

    A computer-generated hologram based on ray-wavefront conversion can reconstruct photorealistic three-dimensional (3D) images containing deep virtual objects and complicated physical phenomena; however, the required computational cost has been a problem that needs to be solved. In this Letter, we introduce the concept of an orthographic projection in the ray-wavefront conversion technique for reducing the computational cost without degrading the image quality. In the proposed method, plane waves with angular spectra of the object are obtained via orthographic ray sampling and Fourier transformation, and only the plane waves incident on the hologram plane are numerically propagated. We verified this accelerated computational method theoretically and experimentally, and demonstrated optical reconstruction of a deep 3D image in which the effects of occlusions, transmission, refraction, and reflection were faithfully reproduced.

  1. The Venus nitric oxide night airglow - Model calculations based on the Venus Thermospheric General Circulation Model

    NASA Astrophysics Data System (ADS)

    Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fessen, C. G.

    1990-05-01

    The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) × 10^9 cm^-2 s^-1, corresponding to the dayside net production of N atoms needed for transport.

  2. Application of Model Based Parameter Estimation for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function, and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
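    The core of MBPE is fitting a rational function to a few frequency samples and then evaluating it cheaply across the band. A minimal least-squares sketch of that idea (the abstract's method uses frequency derivatives of the EFIE instead of multi-point sampling, and the function names here are illustrative only):

    ```python
    import numpy as np

    def fit_rational(f, s, num_order, den_order):
        """Fit s(f) ~ P(f)/Q(f), Q = 1 + sum b_k f^k, by linearized least squares."""
        A = np.hstack([np.vander(f, num_order + 1, increasing=True),
                       -s[:, None] * np.vander(f, den_order + 1, increasing=True)[:, 1:]])
        coef, *_ = np.linalg.lstsq(A, s, rcond=None)
        return coef[:num_order + 1], coef[num_order + 1:]   # p (numerator), b (denominator)

    def eval_rational(f, p, b):
        num = np.polyval(p[::-1], f)
        den = 1.0 + np.polyval(np.concatenate(([0.0], b))[::-1], f)
        return num / den

    f = np.linspace(0.0, 1.0, 20)                 # normalized frequency samples
    s = (1.0 + 2.0 * f) / (1.0 + 0.5 * f)         # synthetic "response" to recover
    p, b = fit_rational(f, s, 1, 1)
    ```

    Once the low-order rational model is fitted, the response at any in-band frequency comes from a single polynomial evaluation rather than a fresh MoM solve.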

  4. Use of ground-based remotely sensed data for surface energy balance calculations during Monsoon '90

    NASA Technical Reports Server (NTRS)

    Moran, M. S.; Kustas, William P.; Vidal, Alain; Stannard, David I.; Blanford, James

    1991-01-01

    Surface energy balance was evaluated at a semiarid watershed using direct and indirect measurements of the turbulent fluxes, a remote technique based on measurements of surface reflectance and temperature, and conventional meteorological information. Comparison of remote estimates of net radiant flux and soil heat flux densities with measured values showed errors on the order of ±40 W/sq m. To account for the effects of sparse vegetation, semi-empirical adjustments to aerodynamic resistance were required for evaluation of the sensible heat flux density (H). However, significant scatter in estimated versus measured latent heat flux density (LE) was still observed, ±75 W/sq m over a range of 100-400 W/sq m. The errors of the H and LE estimates were reduced to ±50 W/sq m when observations were restricted to clear-sky conditions.
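    The remote approach described rests on closing the surface energy balance, Rn = G + H + LE, with LE obtained as the residual. A minimal sketch of that bookkeeping, using a standard bulk-resistance form for H (the numeric inputs are illustrative, not values from the study):

    ```python
    def sensible_heat_flux(Ts, Ta, r_ah, rho=1.2, cp=1005.0):
        """Bulk-resistance estimate of sensible heat flux H (W m^-2).
        Ts, Ta: surface and air temperature (K); r_ah: aerodynamic resistance (s m^-1)."""
        return rho * cp * (Ts - Ta) / r_ah

    def latent_heat_flux_residual(Rn, G, H):
        """Latent heat flux LE (W m^-2) as the residual of the energy balance."""
        return Rn - G - H

    # Illustrative midday values over sparse vegetation (assumed, not measured).
    H = sensible_heat_flux(Ts=308.0, Ta=303.0, r_ah=50.0)
    LE = latent_heat_flux_residual(Rn=500.0, G=100.0, H=H)
    ```

    Any error in the remotely sensed Rn, G, or H lands directly in the residual LE, which is why the abstract's ±40 W/sq m component errors propagate into the larger LE scatter.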

  5. Yield estimation based on calculated comparisons to particle velocity data recorded at low stress

    SciTech Connect

    Rambo, J.

    1993-05-01

    This paper deals with the problem of optimizing the yield estimation process if some of the material properties are known from geophysical measurements and others are inferred from in-situ dynamic measurements. The material models and 2-D simulations of the event are combined to determine the yield. Other methods of yield determination from peak particle velocity data have mostly been based on comparisons of nearby events in similar media at the Nevada Test Site. These methods are largely empirical and are subject to additional error when a new event has different properties than the population being used for a basis of comparison. The effect of material variations can be examined using Lawrence Livermore National Laboratory's KDYNA computer code. The data from the FLAX event provide an instructive example for simulation.

  7. Calculation Of Position And Velocity Of GLONASS Satellite Based On Analytical Theory Of Motion

    NASA Astrophysics Data System (ADS)

    Góral, W.; Skorupa, B.

    2015-09-01

    The presented algorithms for computation of orbital elements and positions of GLONASS satellites are based on the asymmetric variant of the generalized problem of two fixed centers. The analytical algorithm accounts for the disturbing accelerations due to the second (J2) and third (J3) zonal harmonics, and partially the fourth, in the expansion of the Earth's gravitational potential. The other main disturbing accelerations - due to the attraction of the Moon and the Sun - are also computed analytically, where the geocentric position vectors of the Moon and the Sun are obtained by evaluating known analytical expressions for their motion. The given numerical examples show that the proposed analytical method for computation of position and velocity of GLONASS satellites can be an interesting alternative to the numerical methods presently used.
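    The perturbed analytical theory builds on the unperturbed Keplerian baseline. As a point of reference only (this is not the two-fixed-centers solution of the paper, and no J2/J3 or lunisolar terms are included), a sketch of converting Keplerian elements to an inertial position:

    ```python
    import numpy as np

    def kepler_to_position(a, e, inc, raan, argp, M):
        """Inertial position (km) from Keplerian elements (angles in radians)."""
        # Solve Kepler's equation E - e*sin(E) = M by Newton iteration.
        E = M
        for _ in range(50):
            E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                              np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
        r = a * (1.0 - e * np.cos(E))
        x_p, y_p = r * np.cos(nu), r * np.sin(nu)               # perifocal frame
        cO, sO = np.cos(raan), np.sin(raan)
        ci, si = np.cos(inc), np.sin(inc)
        cw, sw = np.cos(argp), np.sin(argp)
        R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci],
                      [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci],
                      [sw*si,             cw*si           ]])
        return R @ np.array([x_p, y_p])

    # A circular, equatorial orbit at roughly GLONASS altitude (a assumed ~25508 km).
    pos = kepler_to_position(25508.0, 0.0, 0.0, 0.0, 0.0, 0.0)
    ```

    Analytical theories like the one in the abstract replace the constant elements above with slowly varying functions of time that absorb the zonal-harmonic and lunisolar perturbations.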

  8. Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions

    SciTech Connect

    Li, Jun; Yim, Man-Sung; McNelis, David N.

    2007-07-01

    explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. A country's nuclear proliferation decision is affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important, as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important, as the development of these technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by mastery of technical details or by overcoming financial constraints alone. Technology or finance is a necessary condition but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision by a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open-source literature. (authors)
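    The abstract does not specify its statistical model, but a common way to combine several factor scores into a decision probability is a logistic model. A purely hypothetical sketch (the weights, bias, and function name are invented for illustration, not taken from the paper):

    ```python
    import math

    def proliferation_probability(tech, finance, motivation, weights, bias):
        """Hypothetical logistic model: P(decision) from three factor scores in [0, 1]."""
        z = bias + sum(w * x for w, x in zip(weights, (tech, finance, motivation)))
        return 1.0 / (1.0 + math.exp(-z))

    # With all weights zero the model is uninformative (probability 0.5).
    p_neutral = proliferation_probability(0.5, 0.5, 0.5, (0.0, 0.0, 0.0), 0.0)

    # A large motivation weight encodes the abstract's claim that technology and
    # finance are necessary but political motivation is decisive.
    p_unmotivated = proliferation_probability(1.0, 1.0, 0.0, (1.0, 1.0, 4.0), -3.0)
    p_motivated = proliferation_probability(1.0, 1.0, 1.0, (1.0, 1.0, 4.0), -3.0)
    ```

    In a real fit the weights would be estimated from the historical country-level data the paper describes.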

  9. Complex vibrational analysis of an antiferroelectric liquid crystal based on solid-state oriented quantum chemical calculations and experimental molecular spectroscopy.

    PubMed

    Drużbicki, Kacper; Mikuli, Edward; Kocot, Antoni; Ossowska-Chruściel, Mirosława Danuta; Chruściel, Janusz; Zalewski, Sławomir

    2012-08-01

    An experimental and theoretical vibrational spectroscopic study of a novel antiferroelectric liquid crystal (AFLC), known under the acronym MHPSBO10, has been undertaken. The interpretation of both FT-IR and FT-Raman spectra was focused mainly on the solid-state data. To analyze the experimental results along with the molecular properties, density functional theory (DFT) computations were performed using several modern theoretical approaches. The presented calculations were performed within the isolated-molecule model, probing the performance of modern exchange-correlation functionals, as well as going beyond it, i.e., within hybrid (ONIOM) and periodic boundary conditions (PBC) methodologies. A detailed band assignment was supported by normal-mode analysis with SQM ab initio force field scaling. The results are supplemented by a noncovalent interactions (NCI) analysis. The relatively noticeable spectral differences observed upon the crystal-to-AFLC phase transition have also been reported. For the most prominent vibrational modes, the geometries of the transition dipole moments along with the main components of vibrational polarizability were analyzed in terms of the molecular frame. One of the goals of the paper was to optimize the procedure of solid-state calculations to obtain results comparable with the all-electron calculations performed routinely for isolated molecules, and to test their performance. The presented study delivers a comprehensive insight into the vibrational spectrum, with a noticeable improvement of the theoretical results obtained for these strongly interacting mesogens using modern molecular modeling approaches. The presented modeling conditions are very promising for further description of similar large molecular crystals. PMID:22709148

  10. Novel Displacement Agents for Aqueous 2-Phase Extraction Can Be Estimated Based on Hybrid Shortcut Calculations.

    PubMed

    Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph

    2016-10-01

    The purification of therapeutic proteins is a challenging task with an immediate need for optimization. Besides other techniques, aqueous 2-phase extraction (ATPE) of proteins has been shown to be a promising alternative to cost-intensive state-of-the-art chromatographic protein purification. Most likely, to enable a selective extraction, protein partitioning has to be influenced using a displacement agent to isolate the target protein from the impurities. In this work, a new displacement agent (lithium bromide [LiBr]) allowing for the selective separation of the target protein IgG from human serum albumin (representing the impurity) within a citrate-polyethylene glycol (PEG) aqueous 2-phase system (ATPS) is presented. In order to characterize the displacement suitability of LiBr on IgG, the mutual influence of LiBr and the phase formers on the ATPS and on partitioning is investigated. Using osmotic virial coefficients (B22 and B23) accessible by composition-gradient multiangle light-scattering measurements, the precipitating effect of LiBr on both proteins was characterized and both protein partition coefficients were estimated. The stabilizing effect of LiBr on both proteins was estimated based on B22 and experimentally validated within the citrate-PEG ATPS. Our approach contributes to an efficient implementation of ATPE within the downstream processing development of therapeutic proteins. PMID:27449229

  11. Experimental design of membrane sensor for selective determination of phenazopyridine hydrochloride based on computational calculations.

    PubMed

    Attia, Khalid A M; El-Abasawi, Nasr M; Abdel-Azim, Ahmed H

    2016-04-01

    A computational study has been performed, electronically and geometrically, to select the most suitable ionophore for designing a novel, sensitive, and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(4-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10(-2)-1.0 × 10(-5) M with a detection limit of 8.5 × 10(-6) M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product, 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in a pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. PMID:26838908
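    The reported slope of 59.5 mV per decade is close to the theoretical Nernstian slope for a monovalent cation. The comparison value follows directly from the Nernst equation; a short sketch:

    ```python
    import math

    def nernst_slope(T=298.15, z=1):
        """Theoretical Nernstian slope (mV per concentration decade) for charge z at T (K)."""
        R = 8.314462618      # J mol^-1 K^-1
        F = 96485.33212      # C mol^-1
        return 1000.0 * math.log(10) * R * T / (z * F)

    slope_mono = nernst_slope()      # ~59.16 mV/decade at 25 °C for z = 1
    ```

    A measured slope near this value (as in the abstract) indicates near-Nernstian response; doubling the charge halves the slope.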

  13. A robust cooperative spectrum sensing scheme based on Dempster-Shafer theory and trustworthiness degree calculation in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru

    2014-12-01

    Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference between SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also retain the current difference for each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different numbers of malicious SUs.
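    The final step of the scheme, combination by the Dempster-Shafer rule, can be stated compactly. A minimal sketch of Dempster's rule of combination for basic probability assignments over a two-hypothesis frame (channel free H0, channel occupied H1); the BPA construction and trust weighting of the paper are not reproduced:

    ```python
    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts: frozenset -> mass)."""
        combined, conflict = {}, 0.0
        for A, w1 in m1.items():
            for B, w2 in m2.items():
                inter = A & B
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2            # mass assigned to the empty set
        k = 1.0 - conflict                         # normalization for conflict
        return {A: w / k for A, w in combined.items()}

    # Two SUs both lean toward "occupied" (H1), with some uncommitted mass.
    m1 = {frozenset({'H1'}): 0.6, frozenset({'H0', 'H1'}): 0.4}
    m2 = {frozenset({'H1'}): 0.5, frozenset({'H0', 'H1'}): 0.5}
    fused = dempster_combine(m1, m2)
    ```

    The combined mass on H1 exceeds either individual report, which is the data-fusion effect CSS exploits; the trustworthiness step of the paper adjusts each SU's BPA before this combination.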

  14. Film based verification of calculation algorithms used for brachytherapy planning-getting ready for upcoming challenges of MBDCA

    PubMed Central

    Bielęda, Grzegorz; Skowronek, Janusz; Mazur, Magdalena

    2016-01-01

    Purpose A well-known defect of the TG-43 based algorithms used in brachytherapy is the lack of information about interaction cross-sections, which are determined not only by electron density but also by atomic number. The TG-186 recommendations, with the use of a model-based dose calculation algorithm (MBDCA), accurate tissue segmentation, and the structures' elemental composition, continue to pose difficulties in brachytherapy dosimetry. For the clinical use of new algorithms, it is necessary to introduce reliable and repeatable methods of treatment planning system (TPS) verification. The aim of this study is the verification of the calculation algorithm used in the TPS for shielded vaginal applicators, as well as the development of verification procedures for current and further use, based on the film dosimetry method. Material and methods Calibration data were collected by separately irradiating 14 sheets of Gafchromic® EBT film with doses from 0.25 Gy to 8.0 Gy using an HDR 192Ir source. Standard vaginal cylinders of three diameters were used in a water phantom. Measurements were performed without any shields and with three shield combinations. Gamma analyses were performed using the VeriSoft® package. Results The calibration curve was determined to be a third-degree polynomial. For all diameters of the unshielded cylinder and for all shield combinations, Gamma analysis showed that over 90% of the analyzed points meet the Gamma criteria (3%, 3 mm). Conclusions Gamma analysis showed good agreement between the dose distributions calculated by the TPS and those measured with Gafchromic films, demonstrating the viability of film dosimetry in brachytherapy.
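    The (3%, 3 mm) gamma criterion quoted above combines a dose-difference and a distance-to-agreement test into a single index. A minimal 1-D global-gamma sketch (real film analysis, as with VeriSoft, is 2-D and interpolates between points; this brute-force version only illustrates the definition):

    ```python
    import numpy as np

    def gamma_index_1d(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
        """Global 1-D gamma index: dd = fractional dose criterion, dta = distance (mm)."""
        d_max = dose_ref.max()                       # global normalization dose
        gam = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            # Search all evaluated points for the minimum combined metric.
            G = np.sqrt(((x - xi) / dta)**2 + ((dose_eval - di) / (dd * d_max))**2)
            gam[i] = G.min()
        return gam

    def pass_rate(gam):
        """Percentage of points with gamma <= 1 (the pass criterion)."""
        return 100.0 * np.mean(gam <= 1.0)

    x = np.linspace(0.0, 50.0, 51)                   # positions in mm
    dose = np.exp(-((x - 25.0) / 10.0)**2)           # synthetic dose profile
    gam = gamma_index_1d(x, dose, dose)              # identical distributions
    ```

    Identical reference and evaluated distributions give gamma = 0 everywhere and a 100% pass rate; the study's >90% pass rate at (3%, 3 mm) is the conventional clinical acceptance level.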

  16. Calculation of the ionization state for LTE plasmas using a new relativistic-screened hydrogenic model based on analytical potentials

    NASA Astrophysics Data System (ADS)

    Rubiano, J. G.; Rodríguez, R.; Gil, J. M.; Martel, P.; Mínguez, E.

    2002-01-01

    In this work, the Saha equation is solved using atomic data provided by a new relativistic screened-hydrogenic model based on analytical potentials, in order to calculate the ionization state and ion abundances for LTE iron plasmas. Plasma effects on the atomic structure are taken into account by including the classical continuum-lowering correction of Stewart and Pyatt. At high densities, the Saha equation is modified to account for the degeneracy of the free electrons, using Fermi-Dirac statistics instead of the commonly used Maxwellian distribution. The results are compared with more sophisticated self-consistent codes.
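    The non-degenerate Saha equation at the heart of the abstract links the populations of adjacent ionization stages. A minimal sketch of the classical (Maxwellian) form, before the Fermi-Dirac and continuum-lowering corrections the paper adds:

    ```python
    import math

    def saha_ratio(T, n_e, chi_eV, g_ratio=1.0):
        """Population ratio n_{i+1}/n_i from the Saha equation.
        T in K, n_e in m^-3, chi_eV = ionization energy of stage i in eV,
        g_ratio = statistical-weight ratio g_{i+1}/g_i."""
        k = 1.380649e-23        # J/K
        me = 9.1093837015e-31   # kg
        h = 6.62607015e-34      # J s
        chi = chi_eV * 1.602176634e-19
        return (2.0 * g_ratio / n_e) * (2.0 * math.pi * me * k * T / h**2)**1.5 \
               * math.exp(-chi / (k * T))

    # Hydrogen-like example: ionization rises steeply with temperature.
    r_cool = saha_ratio(10000.0, 1e20, 13.6)
    r_hot = saha_ratio(20000.0, 1e20, 13.6)
    ```

    Solving a chain of such ratios plus charge neutrality yields the full ionization balance; the paper replaces the Boltzmann factor with Fermi-Dirac integrals at high density.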

  17. POLYANA-A tool for the calculation of molecular radial distribution functions based on Molecular Dynamics trajectories

    NASA Astrophysics Data System (ADS)

    Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios

    2015-12-01

    We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
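    The quantity POLYANA computes, the center-of-mass radial distribution function g(r), has a standard histogram definition. A minimal single-frame NumPy sketch for a cubic box with periodic boundaries (this is a generic implementation of the technique, not POLYANA's DL_POLY-reading code):

    ```python
    import numpy as np

    def radial_distribution(positions, box, dr, r_max):
        """g(r) for one frame: N x 3 center-of-mass positions in a cubic box (PBC)."""
        n = len(positions)
        bins = np.arange(0.0, r_max + dr, dr)
        hist = np.zeros(len(bins) - 1)
        for i in range(n - 1):                        # each pair counted once
            d = positions[i + 1:] - positions[i]
            d -= box * np.round(d / box)              # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            hist += np.histogram(r, bins=bins)[0]
        rho = n / box**3                              # number density
        shell = (4.0 / 3.0) * np.pi * (bins[1:]**3 - bins[:-1]**3)
        g = hist / (shell * rho * n / 2.0)            # normalize by ideal-gas pairs
        return 0.5 * (bins[1:] + bins[:-1]), g

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, size=(1000, 3))      # ideal-gas test configuration
    r, g = radial_distribution(pos, box=10.0, dr=0.5, r_max=3.0)
    ```

    For uncorrelated (ideal-gas) positions g(r) fluctuates around 1, a useful check of the shell-volume normalization; in production one averages the histogram over all trajectory frames.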

  18. Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.

    2016-01-01

    The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.

  19. Dynamics study of the OH + NH3 hydrogen abstraction reaction using QCT calculations based on an analytical potential energy surface

    NASA Astrophysics Data System (ADS)

    Monge-Palacios, M.; Corchado, J. C.; Espinosa-Garcia, J.

    2013-06-01

    To understand the reactivity and mechanism of the OH + NH3 → H2O + NH2 gas-phase reaction, which evolves through wells in the entrance and exit channels, a detailed dynamics study was carried out using quasi-classical trajectory calculations. The calculations were performed on an analytical potential energy surface (PES) recently developed by our group, PES-2012 [Monge-Palacios et al., J. Chem. Phys. 138, 084305 (2013), doi:10.1063/1.4792719]. Most of the available energy appeared as H2O product vibrational energy (54%), reproducing the only experimental evidence, while only 21% of this energy appeared as NH2 co-product vibrational energy. Both products appeared with cold and broad rotational distributions. The excitation function (constant collision energy in the range 1.0-14.0 kcal mol-1) increases smoothly with energy, contrasting with the only theoretical information (reduced-dimensional quantum scattering calculations based on a simplified PES), which presented a peak at low collision energies, related to quantized states. Analysis of the individual reactive trajectories showed that different mechanisms operate depending on the collision energy. Thus, while at high energies (Ecoll ≥ 6 kcal mol-1) all trajectories are direct, at low energies about 20%-30% of trajectories are indirect, i.e., with the mediation of a trapping complex, mainly in the product well. Finally, the effect of the zero-point energy constraint on the dynamics properties was analyzed.

  20. Sand box experiments to evaluate the influence of subsurface temperature probe design on temperature based water flux calculation

    NASA Astrophysics Data System (ADS)

    Munz, M.; Oswald, S. E.; Schmidt, C.

    2011-11-01

    Quantification of subsurface water fluxes based on the one dimensional solution to the heat transport equation depends on the accuracy of measured subsurface temperatures. The influence of temperature probe setup on the accuracy of vertical water flux calculation was systematically evaluated in this experimental study. Four temperature probe setups were installed into a sand box experiment to measure temporally highly resolved vertical temperature profiles under controlled water fluxes in the range of ±1.3 m d-1. Pass band filtering provided amplitude differences and phase shifts of the diurnal temperature signal varying with depth depending on water flux. Amplitude ratios of setups directly installed into the saturated sediment significantly varied with sand box hydraulic gradients. Amplitude ratios provided an accurate basis for the analytical calculation of water flow velocities, which matched measured flow velocities. Calculated flow velocities were sensitive to thermal properties of the saturated sediment and to temperature sensor spacing, but insensitive to thermal dispersivity equal to solute dispersivity. Amplitude ratios of temperature probe setups indirectly installed into piezometer pipes were influenced by thermal exchange processes within the pipes and significantly varied with water flux direction only. Temperature time lags of small sensor distances of all setups were found to be insensitive to vertical water flux.
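    The first processing step described, extracting the amplitude and phase of the diurnal temperature signal at each depth, can be done with a simple harmonic least-squares fit instead of a pass-band filter. A minimal sketch of that extraction only (the subsequent analytical amplitude-ratio flux formula is not reproduced here):

    ```python
    import numpy as np

    def diurnal_amplitude_phase(t_hours, temp):
        """Least-squares fit of T(t) = m + A cos(wt) + B sin(wt), w = 2*pi/24 h^-1.
        Returns (amplitude, phase) so that T ~ m + amp * cos(w*t - phase)."""
        w = 2.0 * np.pi / 24.0
        X = np.column_stack([np.ones_like(t_hours),
                             np.cos(w * t_hours),
                             np.sin(w * t_hours)])
        c, *_ = np.linalg.lstsq(X, temp, rcond=None)
        return np.hypot(c[1], c[2]), np.arctan2(c[2], c[1])

    # Synthetic 48-hour record sampled every 15 minutes (illustrative values).
    t = np.arange(0.0, 48.0, 0.25)
    temp = 20.0 + 3.0 * np.cos(2.0 * np.pi / 24.0 * t - 1.0)
    amp, phase = diurnal_amplitude_phase(t, temp)
    ```

    Applying the fit at two depths gives the amplitude ratio and phase shift that the analytical flux calculation takes as input.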

  1. Sand box experiments to evaluate the influence of subsurface temperature probe design on temperature based water flux calculation

    NASA Astrophysics Data System (ADS)

    Munz, M.; Oswald, S. E.; Schmidt, C.

    2011-06-01

    Quantification of subsurface water fluxes based on the one dimensional solution to the heat transport equation depends on the accuracy of measured subsurface temperatures. The influence of temperature probe setup on the accuracy of vertical water flux calculation was systematically evaluated in this experimental study. Four temperature probe setups were installed into a sand box experiment to measure temporally highly resolved vertical temperature profiles under controlled water fluxes in the range of ±1.3 m d-1. Pass band filtered time series provided amplitude and phase of the diurnal temperature signal varying with depth depending on water flux. Amplitude ratios of setups directly installed into the saturated sediment significantly varied with sand box hydraulic gradients. Amplitude ratios provided an accurate basis for the analytical calculation of water flow velocities, which matched measured flow velocities. Calculated flow velocities were sensitive to thermal properties of the saturated sediment and to probe distance, but insensitive to thermal dispersivity equal to solute dispersivity. Amplitude ratios of temperature probe setups indirectly installed into piezometer pipes were influenced by thermal exchange processes within the pipes and significantly varied with water flux direction only. Temperature time lags of small probe distances of all setups were found to be insensitive to vertical water flux.

  2. Dynamics study of the OH + NH3 hydrogen abstraction reaction using QCT calculations based on an analytical potential energy surface.

    PubMed

    Monge-Palacios, M; Corchado, J C; Espinosa-Garcia, J

    2013-06-01

    To understand the reactivity and mechanism of the OH + NH3 → H2O + NH2 gas-phase reaction, which evolves through wells in the entrance and exit channels, a detailed dynamics study was carried out using quasi-classical trajectory calculations. The calculations were performed on an analytical potential energy surface (PES) recently developed by our group, PES-2012 [Monge-Palacios et al. J. Chem. Phys. 138, 084305 (2013)]. Most of the available energy appeared as H2O product vibrational energy (54%), reproducing the only experimental evidence, while only 21% of this energy appeared as NH2 co-product vibrational energy. Both products appeared with cold and broad rotational distributions. The excitation function (constant collision energy in the range 1.0-14.0 kcal mol(-1)) increases smoothly with energy, contrasting with the only theoretical information (reduced-dimensional quantum scattering calculations based on a simplified PES), which presented a peak at low collision energies, related to quantized states. Analysis of the individual reactive trajectories showed that different mechanisms operate depending on the collision energy. Thus, while at high energies (E(coll) ≥ 6 kcal mol(-1)) all trajectories are direct, at low energies about 20%-30% of trajectories are indirect, i.e., with the mediation of a trapping complex, mainly in the product well. Finally, the effect of the zero-point energy constraint on the dynamics properties was analyzed.

  3. Calculating averted caries attributable to school-based sealant programs with a minimal data set

    PubMed Central

    Griffin, Susan O.; Jones, Kari; Crespin, Matthew

    2016-01-01

    Objectives We describe a methodology for school-based sealant programs (SBSPs) to estimate averted cavities (i.e., the difference in cavities without and with the SBSP) over 9 years using a minimal data set. Methods A Markov model was used to estimate averted cavities. An SBSP inputs estimates of its annual attack rate (AR) and 1-year retention rate. The model estimated retention 2+ years after placement with a functional form obtained from the literature. Assuming a constant AR, an SBSP can estimate its AR with child-level data collected prior to sealant placement on sealant presence, number of decayed/filled first molars, and age. We demonstrate the methodology with data from the Wisconsin SBSP. Finally, we examine how sensitive the averted-cavities estimate obtained with this methodology is if an SBSP were to over- or underestimate its AR or 1-year retention. Results Demonstrating the methodology with the estimated AR (= 7 percent) and 1-year retention (= 92 percent) from the Wisconsin SBSP data, we found that placing 31,324 sealants averted 10,718 cavities. Sensitivity analysis indicated that, for any AR, the magnitude of the error (percent) in estimating averted cavities was always less than the magnitude of the error in specifying the AR, and equal to the error in specifying the 1-year retention rate. We also found that estimates of averted cavities were more robust to misspecifications of AR for higher- versus lower-risk children. Conclusions With Excel (Microsoft Corporation, Redmond, WA, USA) spreadsheets available upon request, SBSPs can use this methodology to generate reasonable estimates of their impact with a minimal data set. PMID:24423023
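    The structure of such an estimate can be sketched in a few lines. This is a hypothetical simplification, not the paper's Markov model: it assumes a sealed, retained tooth cannot decay and that retention declines geometrically after year 1 (the paper instead uses a functional form from the literature):

    ```python
    def averted_cavities(n_sealants, attack_rate, retention_1yr, years=9):
        """Hypothetical sketch: expected cavities averted over `years` years,
        assuming constant annual attack rate and geometric retention decay."""
        averted = 0.0
        retention = retention_1yr
        for _ in range(years):
            averted += n_sealants * attack_rate * retention
            retention *= retention_1yr      # assumed geometric decline (hedged)
        return averted

    # With no caries risk nothing is averted; higher retention averts more.
    none_averted = averted_cavities(100, 0.0, 0.92)
    ```

    The sensitivity behavior the paper reports (averted cavities scaling directly with the retention rate) is visible even in this toy form.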

  4. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo-based dose distributions.

    PubMed

    Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A

    2014-01-01

    threshold criteria showed larger discrepancies. The TPS algorithm comparison results showed large dose discrepancies in the PTV mean dose (D50%), nearly 60% for the PBC algorithm and nearly 20% for the AAA, also occurring in the small PTV size range. This work suggests the application of independent plan verification when the AAA or the AXB algorithm is utilized in lung SBRT for PTVs smaller than 20-25 cc. The calculated data from this study can be used in converting SBRT protocols based on type 'a' and/or type 'b' algorithms to the most recent generation of type 'c' algorithms, such as the AXB algorithm. PMID:24710454

  5. Phase noise calculation and variability analysis of RFCMOS LC oscillator based on physics-based mixed-mode simulation

    NASA Astrophysics Data System (ADS)

    Hong, Sung-Min; Oh, Yongho; Kim, Namhyung; Rieh, Jae-Sung

    2013-01-01

    A mixed-mode technology computer-aided design framework, which can evaluate the periodic steady-state solution of the oscillator efficiently, has been applied to an RFCMOS LC oscillator. Physics-based simulation of active devices makes it possible to link the internal parameters inside the devices and the performance of the oscillator directly. The phase noise of the oscillator is simulated with physics-based device simulation and the results are compared with the experimental data. Moreover, the statistical effect of the random dopant fluctuation on the oscillation frequency is investigated.

  6. Investigation of possibility of surface rupture derived from PFDHA and calculation of surface displacement based on dislocation

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Irikura, K.

    2013-12-01

    The probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, in a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a collected earthquake catalog, classified into two categories: with surface rupture or without surface rupture. Logistic regression is performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011), and Petersen et al. (2011) present the logistic curves of the probability of surface rupture for normal, reverse, and strike-slip faults, respectively. Takao et al. (2013) show the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range compared with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements derived from a surface displacement calculation. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines a seismic moment from a magnitude and estimates the area of the asperity and the amount of slip. Strike-slip and reverse faults were considered as source faults. We applied the method of Wang et al. (2003) for the calculations. The surface displacements for the defined source faults were calculated by varying the depth of the fault. A threshold value of 5 cm of surface displacement was used to evaluate whether a surface rupture reaches the surface. 
We carried out the
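The logistic-regression form referred to above is simple to state. The sketch below is illustrative only: the coefficients a and b are hypothetical values chosen so that the curve rises steeply between Mj 6.5 and 7.0, as described for the Japanese data; they are not fitted values from Takao et al. (2013) or any other cited study.

```python
import math

# Illustrative logistic model for the probability of surface rupture
# as a function of magnitude, P = 1 / (1 + exp(-(a + b*M))).
# The coefficients below are hypothetical, chosen only to reproduce a
# sharp rise between M 6.5 and 7.0; they are not published values.

def p_surface_rupture(mag, a=-45.0, b=6.8):
    """Logistic regression curve: probability of surface rupture at magnitude `mag`."""
    return 1.0 / (1.0 + math.exp(-(a + b * mag)))

for m in (6.0, 6.5, 7.0, 7.5):
    print(m, round(p_surface_rupture(m), 3))
```

With these placeholder coefficients the probability is near zero at Mj 6.0 and above 0.9 at Mj 7.0, mimicking the sigmoid behavior described in the abstract.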

  7. A DFT-based model for calculating solvolytic reactivity. The nucleofugality of aliphatic carboxylates in terms of Nf parameters.

    PubMed

    Denegri, Bernard; Matić, Mirela; Kronja, Olga

    2014-08-14

    The most comprehensive nucleofugality scale, based on the correlation of solvolytic rate constants of benzhydrylium derivatives, has recently been proposed by Mayr and co-workers (Acc. Chem. Res., 2010, 43, 1537-1549). In this work, the possibility of employing quantum chemical calculations to further determine nucleofugality (Nf) parameters of leaving groups is explored. Because the heterolytic transition state of benzhydryl carboxylates cannot be optimized by quantum chemical calculations, an alternative model reaction is examined in order to obtain nucleofugality parameters of various aliphatic carboxylates that can properly be included in the current nucleofugality scale. For that purpose, ground- and transition-state structures have been optimized for the proposed model reaction, which involves anchimerically assisted heterolytic dissociation of cis-2,3-dihydroxycyclopropyl trans-carboxylates. The validity of the model reaction, as well as of the applied DFT methods combined with the IEFPCM solvation model, is verified by correlating calculated free energies of activation of the model reaction with literature experimental data for the solvolysis of reference dianisylmethyl carboxylates. For this purpose the performance of several functionals (including the popular B3LYP) is examined, among which M06-2X gives the best results. The very good correlation indicates acceptably accurate relative reactivities of aliphatic carboxylates, and enables the estimation of rate constants for the solvolysis of other dianisylmethyl carboxylates in aqueous ethanol mixtures, from which the corresponding Nf parameters are determined using the aforementioned Mayr equation. In addition, DFT calculations confirm the previous experimental observation that the leaving-group abilities of aliphatic carboxylates in solution are governed by the inductive effect of substituents attached to the carboxyl group. PMID:24964919

  8. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08... GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.207-08 Calculation and use of vehicle-specific 5-cycle-based fuel...

  9. A new technique for calculating reentry base heating. [analysis of laminar base flow field of two dimensional reentry body

    NASA Technical Reports Server (NTRS)

    Meng, J. C. S.

    1973-01-01

    The laminar base flow field of a two-dimensional reentry body has been studied by Telenin's method. The flow domain was divided into strips along the x-axis, and the flow variations were represented by Lagrange interpolation polynomials in the transformed vertical coordinate. The complete Navier-Stokes equations were used in the near wake region, and the boundary layer equations were applied elsewhere. The boundary conditions consisted of the flat plate thermal boundary layer in the forebody region and the near wake profile in the downstream region. The resulting two-point boundary value problem of 33 ordinary differential equations was then solved by the multiple shooting method. The detailed flow field and thermal environment in the base region are presented in the form of temperature contours, Mach number contours, velocity vectors, pressure distributions, and heat transfer coefficients on the base surface. The maximum heating rate was found on the centerline, and the two-dimensional stagnation point flow solution was adequate to estimate the maximum heating rate so long as the local Reynolds number could be obtained.

  10. Transport and optical properties of CH2 plastics: Ab initio calculation and density-of-states-based analysis

    NASA Astrophysics Data System (ADS)

    Knyazev, D. V.; Levashov, P. R.

    2015-11-01

    This work covers an ab initio calculation of the transport and optical properties of plastics of the effective composition CH2 at a density of 0.954 g/cm3 in the temperature range from 5 kK up to 100 kK. The calculation is based on quantum molecular dynamics, density functional theory and the Kubo-Greenwood formula. The temperature dependence of the static electrical conductivity σ1DC(T) has an unusual shape: σ1DC(T) grows rapidly for 5 kK ≤ T ≤ 10 kK and is almost constant for 20 kK ≤ T ≤ 60 kK. An additional analysis based on the electron density of states (DOS) was performed. The rapid growth of σ1DC(T) at 5 kK ≤ T ≤ 10 kK is connected with the increase of the DOS at the electron energy equal to the chemical potential ɛ = μ. The frequency dependence of the dynamic electrical conductivity σ1(ω) at 5 kK has a distinctly non-Drude shape with a peak at ω ≈ 10 eV. This behavior of σ1(ω) is explained by a dip in the electron DOS.

  11. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    SciTech Connect

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.; Oedegaard-Jensen, A.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k{infinity} and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
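The stratification property that makes LHS cover the input range more densely than simple random sampling can be shown with a minimal pure-Python sketch (one independent LHS column per input dimension; the function names are ours, not from DRAGON or the study):

```python
import random

# Minimal Latin Hypercube Sampling (LHS) sketch: for n samples on
# [0, 1), LHS draws exactly one point from each of n equal-probability
# strata, guaranteeing coverage of the whole range, whereas simple
# random sampling (SRS) may leave strata empty.

def lhs_1d(n, rng=random):
    """One LHS column: a uniform random point in each of n strata, shuffled."""
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)   # decorrelate strata order across dimensions
    return points

def lhs(n, dims, rng=random):
    """n samples in `dims` dimensions; every marginal is stratified."""
    columns = [lhs_1d(n, rng) for _ in range(dims)]
    return list(zip(*columns))

samples = lhs(500, 2)
# Each of the 500 strata contains exactly one sample in each dimension:
strata = sorted(int(x * 500) for x, _ in samples)
print(strata == list(range(500)))  # -> True
```

The same check applied to 500 SRS draws would typically fail, since independent uniforms leave some strata empty and put several points in others.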

  12. A method for fast evaluation of neutron spectra for BNCT based on in-phantom figure-of-merit calculation.

    PubMed

    Martín, Guido

    2003-03-01

    In this paper a fast method to evaluate neutron spectra for brain BNCT is developed. The method is based on an algorithm to calculate the dose distribution in the brain, which uses a data matrix containing weighted biological doses per position per incident energy, together with the incident neutron spectrum to be evaluated. To build the matrix, nearly monoenergetic neutrons were transported into a head model using the MCNP 4C code. The doses were scored, and an energy-dependent function was used to weight the doses biologically. To assess beam quality, the dose distribution along the beam centerline was calculated. A neutron importance function for this therapy, for bilaterally treating deep-seated tumors, was constructed in terms of neutron energy. Neutrons in the energy range of a few tens of kilo-electron-volts were found to produce the best dose gain, defined as the dose to tumor divided by the maximum dose to healthy tissue. Various neutron spectra were evaluated through this method. An accelerator-based neutron source was found to be more reliable for this therapy, in terms of therapeutic gain, than reactors.

  13. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  15. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  16. Spinel compounds as multivalent battery cathodes: A systematic evaluation based on ab initio calculations

    SciTech Connect

    Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.

    2014-12-16

    Batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity, and thermodynamic stability of the charged and discharged states, as well as the intercalating ion mobility, and use these properties to identify promising directions. Our calculations indicate that Mg- and Ca-based Mn2O4 spinel phases are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than those of Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV for Al3+ in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.
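As context for the voltage estimates above, ab initio screenings of this kind commonly use the standard average intercalation-voltage expression, V = -[E(discharged) - E(charged) - n·E(metal)] / (n·z). The sketch below applies that formula with invented placeholder energies; none of the numbers come from the paper, and we do not claim this is the authors' exact workflow.

```python
# Hedged sketch of the standard ab initio average-voltage formula:
#   V = -[E(discharged) - E(charged) - n * E(metal)] / (n * z)
# with total energies in eV per formula unit and z the ion's charge.
# All numeric inputs below are invented placeholders, not paper data.

def average_voltage(e_discharged, e_charged, e_metal, n_ions, charge):
    """Average insertion voltage (V) versus the corresponding metal anode."""
    return -(e_discharged - e_charged - n_ions * e_metal) / (n_ions * charge)

# Hypothetical total energies (eV) for one Mg inserted into an Mn2O4 host:
v = average_voltage(e_discharged=-105.0, e_charged=-98.2,
                    e_metal=-1.5, n_ions=1, charge=2)
print(round(v, 2))  # -> 2.65 for these placeholder energies
```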

  17. First principles calculations of enthalpy and O-H stretching frequency of hydrogen-bonded acid-base complexes

    NASA Astrophysics Data System (ADS)

    Tsige, Mesfin; Bhatta, Ram; Dhinojwala, Ali

    2014-03-01

    Understanding acid-base interactions is important in surface science, as it helps to rationalize materials properties such as wetting, adhesion and tribology. A quantitative relation between the change in enthalpy (ΔH) and the frequency shift (Δν) during the acid-base interaction is particularly important. We investigate ΔH and Δν of twenty-five complexes of acids (methanol, ethanol, propanol, butanol and phenol) with bases (benzene, pyridine, DMSO, Et2O and THF) in CCl4 using intermolecular perturbation theory calculations. ΔH and Δν of the complexes of all alcohols with all bases except benzene fall in the ranges -14 kJ/mol to -28 kJ/mol and 215 cm-1 to 523 cm-1, respectively. Smaller values of ΔH (-2 to -6 kJ/mol) and Δν (23 to 70 cm-1) are estimated for benzene. For all the studied complexes, ΔH varies linearly (R2 ≥ 0.974) with Δν, yielding an average slope and intercept of 0.056 and 1.5, respectively. Linear correlations were found between theoretical and experimental values of ΔH as well as Δν, consistent with the Badger-Bauer rule. This work is supported by the National Science Foundation.
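The reported slope and intercept come from an ordinary least-squares fit of ΔH against Δν. The sketch below shows that arithmetic with made-up (Δν, |ΔH|) pairs lying inside the quoted ranges; since the data are invented, the fitted slope and intercept differ from the paper's 0.056 and 1.5.

```python
# Badger-Bauer-type linear relation, |dH| ~ slope * dnu + intercept,
# fitted by ordinary least squares in pure Python. The (dnu, dH) pairs
# are illustrative values inside the reported ranges (215-523 cm^-1,
# -14 to -28 kJ/mol); they are NOT the paper's computed data.

def linear_fit(xs, ys):
    """Ordinary least squares for y = m*x + c; returns (m, c)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

dnu = [215, 300, 400, 523]          # cm^-1 (hypothetical shifts)
dh = [-14.0, -18.5, -23.0, -28.0]   # kJ/mol (hypothetical enthalpies)
slope, intercept = linear_fit(dnu, [-h for h in dh])
print(round(slope, 3), round(intercept, 2))  # -> 0.045 4.63
```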

  18. A complete assignment of the vibrational spectra of sucrose in aqueous medium based on the SQM methodology and SCRF calculations.

    PubMed

    Brizuela, Alicia Beatriz; Castillo, María Victoria; Raschi, Ana Beatriz; Davies, Lilian; Romano, Elida; Brandán, Silvia Antonia

    2014-03-31

    In the present study, a complete assignment of the vibrational spectra of sucrose in aqueous medium was performed combining Pulay's Scaled Quantum Mechanics Force Field (SQMFF) methodology with self-consistent reaction field (SCRF) calculations. Aqueous saturated solutions of sucrose and solutions at different molar concentrations of sucrose in water were completely characterized by infrared, HATR, and Raman spectroscopies. In accordance with data reported in the literature for sucrose, the theoretical structures of sucrose penta-hydrate and sucrose dihydrate were also optimized in the gas and aqueous solution phases by using density functional theory (DFT) calculations. The solvent effects for the three studied species were analyzed using the PCM/SMD solvation model, and their corresponding solvation energies were predicted. The presence of pure water, sucrose penta-hydrate, and sucrose dihydrate was confirmed by using theoretical calculations based on the hybrid B3LYP/6-31G(∗) method and the experimental vibrational spectra. The existence of both sucrose hydrate complexes in aqueous solution is evidenced in the IR and HATR spectra by the characteristic bands at 3388, 3337, 3132, 1648, 1375, 1241, 1163, 1141, 1001, 870, 851, 732, and 668cm(-1), while in the Raman spectrum the groups of bands in the regions 3159-3053cm(-1), 2980, 2954, and 1749-1496cm(-1) characterize the vibration modes of those complexes. Inter- and intramolecular H-bond formation in aqueous solution was studied by Natural Bond Orbital (NBO) and Atoms in Molecules (AIM) analyses. PMID:24632216

  19. First-principles transport calculation method based on real-space finite-difference nonequilibrium Green's function scheme

    NASA Astrophysics Data System (ADS)

    Ono, Tomoya; Egami, Yoshiyuki; Hirose, Kikuji

    2012-11-01

    We demonstrate an efficient nonequilibrium Green's function transport calculation procedure based on the real-space finite-difference method. The direct inversion of matrices for obtaining the self-energy terms of electrodes is computationally demanding in the real-space method because the matrix dimension corresponds to the number of grid points in the unit cell of electrodes, which is much larger than that of sites in the tight-binding approach. The procedure using the ratio matrices of the overbridging boundary-matching technique [Y. Fujimoto and K. Hirose, Phys. Rev. B 67, 195315 (2003)], which is related to the wave functions of a couple of grid planes in the matching regions, greatly reduces the computational effort to calculate self-energy terms without losing mathematical strictness. In addition, the present procedure saves computational time to obtain the Green's function of the semi-infinite system required in the Landauer-Büttiker formula. Moreover, the compact expression to relate Green's functions and scattering wave functions, which provide a real-space picture of the scattering process, is introduced. An example of the calculated results is given for the transport property of the BN ring connected to (9,0) carbon nanotubes. The wave-function matching at the interface reveals that the rotational symmetry of wave functions with respect to the tube axis plays an important role in electron transport. Since the states coming from and going to electrodes show threefold rotational symmetry, the states in the vicinity of the Fermi level, the wave function of which exhibits fivefold symmetry, do not contribute to the electron transport through the BN ring.

  20. PbBr-Based Layered Perovskite Organic-Inorganic Superlattice Having Carbazole Chromophore; Hole-Mobility and Quantum Mechanical Calculation.

    PubMed

    Era, Masanao; Yasuda, Takeshi; Mori, Kento; Tomotsu, Norio; Kawano, Naoki; Koshimizu, Masanori; Asai, Keisuke

    2016-04-01

    We have successfully evaluated the hole mobility in a spin-coated film of a lead-bromide-based layered perovskite having carbazole chromophore-linked ammonium molecules as the organic layer by using FET measurements. The values of hole mobility, threshold voltage and on/off ratio at room temperature were evaluated to be 1.7 x 10(-6) cm2 V-1 s-1, 27 V and 28, respectively. However, the spin-coated films on Si substrates were not as uniform as those on fused quartz substrates. To improve the film uniformity, we examined the relationship between substrate temperature during spin-coating and film morphology in the layered perovskite spin-coated films. The mean roughness of the spin-coated films on Si substrates depended on the substrate temperature. At 353 K, the mean roughness was minimized and the carrier mobility was enhanced by one order of magnitude; the values of hole mobility and threshold voltage were estimated to be 3.4 x 10(-5) cm2 V-1 s-1 and 22 V at room temperature, respectively, in a preliminary FET evaluation. In addition, we determined the crystal structure of the layered perovskite by X-ray diffraction analysis. To gain a better understanding of the observed hole transport, we conducted quantum mechanical calculations using the obtained crystal structure information. The calculated band structure of the layered organic perovskite showed that the valence band is composed of the organic carbazole layer, which confirms that the measured hole mobility derives mainly from the organic part of the layered perovskite. Band and hopping transport mechanisms were discussed by calculating the effective masses and transfer integrals for the 2D periodic system of the organic layer in isolation. PMID:27451598

  1. Multi-Server Approach for High-Throughput Molecular Descriptors Calculation based on Multi-Linear Algebraic Maps.

    PubMed

    García-Jacas, César R; Aguilera-Mendoza, Longendri; González-Pérez, Reisel; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Avdeenko, Tatiana

    2015-01-01

    The present report introduces a novel module of the QuBiLS-MIDAS software for the distributed computation of the 3D Multi-Linear algebraic molecular indices. The main motivation for developing this module is to deal with the computational complexity experienced during the calculation of the descriptors over large datasets. To accomplish this task, a multi-server computing platform named T-arenal was developed, which is suited for institutions with many workstations interconnected through a local network and without resources particularly destined for computation tasks. This new system was deployed on 337 workstations and was fully integrated with the QuBiLS-MIDAS software. To illustrate the usability of the T-arenal platform, performance tests over a dataset comprising 15 000 compounds were carried out, yielding 52- and 60-fold reductions in the sequential processing time for the 2-Linear and 3-Linear indices, respectively. Therefore, it can be stated that T-arenal-based distribution of computation tasks constitutes a suitable strategy for performing high-throughput calculations of 3D Multi-Linear descriptors over thousands of chemical structures for subsequent QSAR and/or ADME-Tox studies. PMID:27490863

  2. Analysis of electronic structure and optical properties of N-doped SiO2 based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Sui-Shuan; Zhao, Zong-Yan; Yang, Pei-Zhi

    2015-07-01

    The crystal structure, electronic structure and optical properties of N-doped SiO2 with different N impurity concentrations were calculated by density functional theory with the GGA+U method. The crystal distortion, impurity formation energy, band gap, band width and optical parameters of N-doped SiO2 are closely related to the N impurity concentration. Based on the calculated results, three new impurity energy levels emerge in the band gap of N-doped SiO2, which determine the electronic structure and optical properties. The variations of optical properties induced by N doping are predominantly determined by the unsaturated impurity states, which are more pronounced at higher N impurity concentration. In addition, the doping effects of N in α-quartz SiO2 and β-quartz SiO2 are very similar. According to these findings, one can understand the relationship between nitrogen concentration and the optical parameters of SiOxNy materials, and design new optoelectronic Si-O-N compounds.

  3. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic arise within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains, the performance of the proposed approach is analyzed.
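The round-off advantage of the complex-step derivative over forward differences, which underpins the scheme described above, can be demonstrated in a few lines (a generic one-variable illustration, not the paper's finite-element implementation):

```python
# Complex-step derivative approximation: f'(x) ~ Im(f(x + i*h)) / h.
# Unlike the forward difference, it involves no subtraction of nearly
# equal numbers, so there is no round-off cancellation and h can be
# made arbitrarily small (here h = 1e-30).

def complex_step(f, x, h=1e-30):
    """Derivative via a perturbation along the imaginary axis."""
    return f(complex(x, h)).imag / h

def forward_diff(f, x, h=1e-10):
    """Classical forward difference, limited by round-off cancellation."""
    return (f(x + h) - f(x)) / h

f = lambda z: z ** 3        # works for both real and complex arguments
exact = 12.0                # f'(2) = 3 * 2**2

print(abs(complex_step(f, 2.0) - exact))  # ~0.0 (machine precision)
print(abs(forward_diff(f, 2.0) - exact))  # dominated by round-off
```

The same mechanism, applied to perturbations of the displacement and thermal degrees of freedom, yields the machine-precision tangent stiffness entries discussed in the abstract.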

  4. Intramolecular hydrogen bonds involving organic fluorine in the derivatives of hydrazides: an NMR investigation substantiated by DFT based theoretical calculations.

    PubMed

    Mishra, Sandeep Kumar; Suryaprakash, N

    2015-06-21

    Rare examples of intramolecular hydrogen bonds (HBs) of the type N-H∙∙∙F-C, detected in a low-polarity solvent in derivatives of hydrazides by one- and two-dimensional solution-state multinuclear NMR techniques, are reported. The observation of through-space couplings, such as (1h)JFH and (1h)JFN, provides direct evidence for the existence of intramolecular HBs. Solvent-induced perturbations and variable-temperature NMR experiments unambiguously establish the presence of intramolecular HBs. The existence of multiple conformers in some of the investigated molecules is also revealed by two-dimensional HOESY and (15)N-(1)H HSQC experiments. The (1)H DOSY experimental results rule out any possibility of self- or cross-dimerization of the molecules. The NMR experimental results are further substantiated by Density Functional Theory (DFT)-based Non-Covalent Interaction (NCI) and Quantum Theory of Atoms in Molecules (QTAIM) calculations. The NCI calculations serve as a very sensitive tool for the detection of non-covalent interactions and also confirm the presence of bifurcated HBs.

  5. Mechanism of Magnetostructural Transitions in Copper-Nitroxide-Based Switchable Molecular Magnets: Insights from ab Initio Quantum Chemistry Calculations.

    PubMed

    Jung, Julie; Guennic, Boris Le; Fedin, Matvey V; Ovcharenko, Victor I; Calzado, Carmen J

    2015-07-20

    The gradual magnetostructural transition in breathing crystals based on copper(II) and pyrazolyl-substituted nitronyl nitroxides has been analyzed by means of DDCI quantum chemistry calculations. The magnetic coupling constants (J) within the spin triads of Cu(hfac)2L(Bu)·0.5C8H18 have been evaluated for the X-ray structures reported at different temperatures. The coupling is strongly antiferromagnetic at low temperature and becomes ferromagnetic as the temperature increases. The intercluster magnetic coupling (J') is antiferromagnetic and shows a marked dependence on temperature. The magnetostructural transition can be reproduced using the calculated J values for each structure in the simulation of the magnetic susceptibility. However, the fit to the μ(T) curve improves markedly when the transition region is modeled as the coexistence of two phases, corresponding to the weakly and strongly coupled spin states, whose ratio varies with temperature. These results complement a recent VT-FTIR study on the parent Cu(hfac)2L(Pr) compound with a gradual magnetostructural transition.

  6. Multi-Server Approach for High-Throughput Molecular Descriptors Calculation based on Multi-Linear Algebraic Maps.

    PubMed

    García-Jacas, César R; Aguilera-Mendoza, Longendri; González-Pérez, Reisel; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Avdeenko, Tatiana

    2015-01-01

    The present report introduces a novel module of the QuBiLS-MIDAS software for the distributed computation of 3D Multi-Linear algebraic molecular indices. The main motivation for developing this module is to cope with the computational complexity of calculating the descriptors over large datasets. To accomplish this task, a multi-server computing platform named T-arenal was developed, suited for institutions with many workstations interconnected through a local network and without resources specifically destined for computation tasks. This new system was deployed on 337 workstations and fully integrated with the QuBiLS-MIDAS software. To illustrate the usability of the T-arenal platform, performance tests over a dataset of 15 000 compounds were carried out, yielding 52- and 60-fold reductions in sequential processing time for the 2-Linear and 3-Linear indices, respectively. It can therefore be stated that T-arenal-based distribution of computation tasks constitutes a suitable strategy for performing high-throughput calculations of 3D Multi-Linear descriptors over thousands of chemical structures for subsequent QSAR and/or ADME-Tox studies.

  7. Non-equilibrium Green's function calculation of AlGaAs-well-based and GaSb-based terahertz quantum cascade laser structures

    SciTech Connect

    Yasuda, H.; Hosako, I.

    2015-03-16

    We investigate the performance of terahertz quantum cascade lasers (THz-QCLs) based on Al{sub x}Ga{sub 1−x}As/Al{sub y}Ga{sub 1−y}As and GaSb/AlGaSb material systems to realize higher-temperature operation. Calculations with the non-equilibrium Green's function method reveal that the AlGaAs-well-based THz-QCLs do not show improved performance, mainly because of alloy scattering in the ternary compound semiconductor. The GaSb-based THz-QCLs offer clear advantages over GaAs-based THz-QCLs. Weaker longitudinal optical phonon–electron interaction in GaSb produces higher peaks in the spectral functions of the lasing levels, which enables more electrons to be accumulated in the upper lasing level.

  8. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    Optical switching is one of the most essential phenomena in the optical domain, and electro-optic switching can be used to build effective combinational and sequential logic circuits. Digital computation in the optical domain inherits considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing and larger bandwidth. This paper describes efficient techniques to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed to specify optimized device parameters with respect to performance-affecting factors, e.g. crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.

  9. Optimum design of a moderator system based on dose calculation for an accelerator driven Boron Neutron Capture Therapy.

    PubMed

    Inoue, R; Hiraga, F; Kiyanagi, Y

    2014-06-01

    Accelerator-based BNCT is desirable because of its therapeutic convenience, but the optimal design of the neutron moderator system remains an open issue. Detailed studies of the materials making up the moderator system are therefore necessary to find the optimal configuration. In this study, the epithermal neutron flux and the RBE dose were calculated as indicators in the search for optimal filter and moderator materials. It was found that the combination of an MgF2 moderator with an Fe filter gave the best performance: the moderator system achieved a dose ratio greater than 3 and an epithermal neutron flux over 1.0×10(9)cm(-2)s(-1).

  10. Determination of the hyperfine magnetic field in magnetic carbon-based materials: DFT calculations and NMR experiments

    PubMed Central

    Freitas, Jair C. C.; Scopel, Wanderlã L.; Paz, Wendel S.; Bernardes, Leandro V.; Cunha-Filho, Francisco E.; Speglich, Carlos; Araújo-Moreira, Fernando M.; Pelc, Damjan; Cvitanić, Tonči; Požek, Miroslav

    2015-01-01

    The prospect of carbon-based magnetic materials is of immense fundamental and practical importance, and information on atomic-scale features is required for a better understanding of the mechanisms leading to carbon magnetism. Here we report the first direct detection of the microscopic magnetic field produced at 13C nuclei in a ferromagnetic carbon material by zero-field nuclear magnetic resonance (NMR). Electronic structure calculations carried out in nanosized model systems with different classes of structural defects show a similar range of magnetic field values (18–21 T) for all investigated systems, in agreement with the NMR experiments. Our results are strong evidence of the intrinsic nature of defect-induced magnetism in magnetic carbons and establish the magnitude of the hyperfine magnetic field created in the neighbourhood of the defects that lead to magnetic order in these materials. PMID:26434597

  11. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H., Jr.

    2015-01-01

    Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
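    The difference between the single-fit and sliding-window estimates can be sketched with synthetic data (illustrative series and window length, not the study's actual gage records):

```python
import numpy as np

def single_fit_acceleration(t, sl):
    # One quadratic over the whole record: acceleration = 2*c2, constant in time.
    return 2.0 * np.polyfit(t, sl, 2)[0]

def sliding_window_acceleration(t, sl, window):
    # Quadratic fits inside a window slid through the series give a
    # time-varying acceleration estimate (2*c2 of each local fit).
    return np.array([2.0 * np.polyfit(t[i:i + window], sl[i:i + window], 2)[0]
                     for i in range(len(t) - window + 1)])

# Synthetic sea-level record whose acceleration switches on halfway through:
t = np.arange(100.0)
sl = np.where(t < 50.0, 2.0 * t, 2.0 * t + 0.05 * (t - 50.0) ** 2)
accel = sliding_window_acceleration(t, sl, 21)
```

    The sliding-window estimate recovers zero acceleration early in the record and 0.1 late in the record, while the single quadratic fit returns one intermediate constant, illustrating why it can miss time-varying acceleration.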

  12. Band alignment at the CdS/FeS2 interface based on the first-principles calculation

    NASA Astrophysics Data System (ADS)

    Ichimura, Masaya; Kawai, Shoichi

    2015-03-01

    FeS2 is potentially well-suited for the absorber layer of a thin-film solar cell. Since it usually has p-type conductivity, a pn heterojunction cell can be fabricated by combining it with an n-type material. In this work, the band alignment in the heterostructure based on FeS2 is investigated on the basis of the first-principles calculation. CdS, the most popular buffer-layer material for thin-film solar cells, is selected as the partner in the heterostructure. The results indicate that there is a large conduction band offset (0.65 eV) at the interface, which will hinder the flow of photogenerated electrons from FeS2 to CdS. Thus an n-type material with the conduction band minimum positioned lower than that of CdS will be preferable as the partner in the heterostructure.

  13. Thermodynamics of polymer nematics described with a worm-like chain model: particle-based simulations and SCF theory calculations

    NASA Astrophysics Data System (ADS)

    Greco, Cristina; Yiang, Ying; Kremer, Kurt; Chen, Jeff; Daoulas, Kostas

    Polymer liquid crystals, apart from traditional applications as high-strength materials, are important for new technologies, e.g. organic electronics. Their study often invokes mesoscale models parameterized to reproduce thermodynamic properties of the real material. Such top-down strategies require advanced simulation techniques that accurately predict the thermodynamics of mesoscale models as a function of their characteristic features and parameters. Here a recently developed model describing nematic polymers as worm-like chains interacting through soft directional potentials is considered. We present a special thermodynamic integration scheme delivering free energies in particle-based Monte Carlo simulations of this model while avoiding thermodynamic singularities. Conformational and structural properties, as well as Helmholtz free energies, are reported as a function of interaction strength. They are compared with state-of-the-art SCF calculations invoking a continuum analog of the same model, demonstrating the role of liquid packing and fluctuations.

  14. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.

    2016-01-04

    Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.

  15. Interaction of curcumin with Al(III) and its complex structures based on experiments and theoretical calculations

    NASA Astrophysics Data System (ADS)

    Jiang, Teng; Wang, Long; Zhang, Sui; Sun, Ping-Chuan; Ding, Chuan-Fan; Chu, Yan-Qiu; Zhou, Ping

    2011-10-01

    Curcumin has been recognized as a potential natural drug for treating Alzheimer's disease (AD) by chelating harmful metal ions, scavenging radicals and preventing the amyloid β (Aβ) peptides from aggregating. In this paper, Al(III)-curcumin complexes were synthesized and characterized by liquid-state 1H, 13C and 27Al nuclear magnetic resonance (NMR), mass spectrometry (MS), ultraviolet spectroscopy (UV) and generalized 2D UV-UV correlation spectroscopy. In addition, density functional theory (DFT)-based UV and chemical shift calculations were performed to gain insight into the structures and properties of curcumin and its complexes. It was revealed that curcumin interacts strongly with the Al(III) ion and forms three types of complexes at different [Al(III)]/[curcumin] molar ratios, which would restrain the interaction of Al(III) with the Aβ peptide, reducing the toxic effect of Al(III) on the peptide.

  16. [Rigorous algorithms for calculating the exact concentrations and activity levels of all the different species during acid-base titrations in water].

    PubMed

    Burgot, G; Burgot, J L

    2000-10-01

    The principles of two algorithms allowing the calculation of the concentrations and activity levels of the different species during acid-base titrations in water are described. They simulate titrations at constant and variable ionic strength, respectively. They are designed so that the acid and base strengths, their concentrations and the titrant volume added can be chosen freely. The calculations are based on rigorous equations of general scope, yet are sufficiently compact to be processed on pocket calculators. The algorithms can easily simulate pH-metric, spectrophotometric, conductometric and calorimetric titrations, and hence allow the determination of concentrations and of some physico-chemical constants related to the chemical systems involved.
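    A minimal sketch of such a titration calculation, for the simplified case of a monoprotic weak acid titrated with a strong base at constant ionic strength (activities taken equal to concentrations; the function names and bisection scheme are illustrative, not the authors' algorithms):

```python
from math import log10

KW = 1.0e-14  # ionic product of water at 25 C

def titration_ph(ca, va, cb, vb, ka):
    # pH during titration of a weak monoprotic acid (concentration ca, volume va,
    # dissociation constant ka) with a strong base (cb, vb), treating activities
    # as concentrations. The charge balance [H+] + [Na+] = [OH-] + [A-]
    # is solved for [H+] by bisection.
    v = va + vb
    na = cb * vb / v        # Na+ contributed by the added titrant
    a_tot = ca * va / v     # total (diluted) acid

    def imbalance(h):
        oh = KW / h
        a_minus = a_tot * ka / (ka + h)   # dissociated fraction of the acid
        return h + na - oh - a_minus      # strictly increasing in h

    lo, hi = 1e-14, 1.0
    for _ in range(200):                  # bisect on a log scale
        mid = (lo * hi) ** 0.5
        if imbalance(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return -log10((lo * hi) ** 0.5)

# Acetic acid (ka ~ 1.8e-5), 0.1 M, 50 mL, titrated with 0.1 M NaOH:
half_eq = titration_ph(0.1, 50.0, 0.1, 25.0, 1.8e-5)  # half-equivalence: pH ~ pKa
equiv = titration_ph(0.1, 50.0, 0.1, 50.0, 1.8e-5)    # equivalence point: basic
```

    At half-equivalence the computed pH reproduces pKa (Henderson-Hasselbalch), and at equivalence it gives the basic pH of the acetate solution, as expected.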

  17. Microstructure-based calculations and experimental results for sound absorbing porous layers of randomly packed rigid spherical beads

    NASA Astrophysics Data System (ADS)

    Zieliński, Tomasz G.

    2014-07-01

    Acoustics of stiff porous media with open porosity can be modelled very effectively using the so-called Johnson-Champoux-Allard-Pride-Lafarge model for sound absorbing porous media with a rigid frame. It is an advanced semi-phenomenological model with eight parameters, namely, the total porosity, the viscous permeability and its thermal analogue, the tortuosity, two characteristic lengths (one specific to viscous forces, the other to thermal effects), and finally, the viscous and thermal tortuosities at the frequency limit of 0 Hz. Most of these parameters can be measured directly; however, this requires specific equipment that differs from parameter to parameter, and some parameters remain difficult to determine. This is one of several motivations for the so-called multiscale approach, in which the parameters are computed from specific finite-element analyses based on realistic geometric representations of the actual microstructure of the porous material. Such an approach is presented and validated here for layers made up of loosely packed small identical rigid spheres. The sound absorption of such layers was measured experimentally in the impedance tube using the so-called two-microphone transfer function method. The layers are characterised by open porosity and semi-regular microstructure: the identical spheres are loosely packed by random pouring and mixing under gravity inside impedance tubes of various sizes. Therefore, regular sphere packings were used to generate Representative Volume Elements suitable for calculations at the micro-scale level. These packings involve only one, two, or four spheres, so that the three-dimensional finite-element calculations specific to viscous, thermal, and tortuous effects are feasible. In the proposed geometric packings, the spheres were slightly shifted in order to achieve the correct value of total porosity, which was precisely estimated for the layers tested experimentally. 
Finally, in this paper some results based on

  18. Calculating Individual Resources Variability and Uncertainty Factors Based on Their Contributions to the Overall System Balancing Needs

    SciTech Connect

    Makarov, Yuri V.; Du, Pengwei; Pai, M. A.; McManus, Bart

    2014-01-14

    The variability and uncertainty of wind power production require increased flexibility in power systems, i.e. more operational reserves to maintain a satisfactory level of reliability. The incremental increase in reserve requirement caused by wind power is often studied separately from the effects of loads. Accordingly, the cost of procuring reserves is allocated based on this simplification rather than on a fair and transparent calculation of the different resources' contributions to the reserve requirement. This work proposes a new allocation mechanism for the intermittency and variability of resources regardless of their type. It is based on a new formula, called the grid balancing metric (GBM). The proposed GBM has several distinct features: 1) it is directly linked to the control performance standard (CPS) scores and interconnection frequency performance; 2) it provides scientifically defined allocation factors for individual resources; 3) the sum of the allocation factors within any group of resources is equal to the group's collective allocation factor (linearity); and 4) it distinguishes helpers from harmers. The paper illustrates the new approach and provides results based on actual transmission system operator (TSO) data.
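    The linearity property (feature 3) can be illustrated with a hypothetical covariance-based allocation. This is an assumed construction for illustration, not the paper's GBM formula, but it shares the stated properties because covariance is linear in its first argument:

```python
import numpy as np

def allocation_factors(deviations):
    # Each resource's factor = cov(resource deviation, system imbalance)
    #                          / var(system imbalance).
    # Covariance is linear in its first argument, so the factors of any
    # group of resources sum to that group's collective factor, and all
    # factors together sum to 1. "Helpers" (resources that counteract the
    # system imbalance) receive negative factors.
    system = deviations.sum(axis=0)
    var = system.var()
    return np.array([np.cov(d, system, bias=True)[0, 1] / var
                     for d in deviations])

rng = np.random.default_rng(0)
dev = rng.normal(size=(4, 1000))   # deviation time series of 4 resources
dev[3] = -0.5 * dev[0]             # resource 3 counteracts resource 0: a helper
factors = allocation_factors(dev)
```

    In this construction the helper resource receives a negative allocation factor, while all factors sum to one, mirroring the linearity and helper/harmer distinctions described in the abstract.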

  19. Feasibility of MV CBCT-based treatment planning for urgent radiation therapy: dosimetric accuracy of MV CBCT-based dose calculations.

    PubMed

    Held, Mareike; Sneed, Penny K; Fogh, Shannon E; Pouliot, Jean; Morin, Olivier

    2015-01-01

    Unlike scheduled radiotherapy treatments, treatment planning time and resources are limited for emergency treatments. Consequently, plans are often simple 2D image-based treatments that lag behind technical capabilities available for nonurgent radiotherapy. We have developed a novel integrated urgent workflow that uses onboard MV CBCT imaging for patient simulation to improve planning accuracy and reduce the total time for urgent treatments. This study evaluates both MV CBCT dose planning accuracy and novel urgent workflow feasibility for a variety of anatomic sites. We sought to limit local mean dose differences to less than 5% compared to conventional CT simulation. To improve dose calculation accuracy, we created separate Hounsfield unit-to-density calibration curves for regular and extended field-of-view (FOV) MV CBCTs. We evaluated dose calculation accuracy on phantoms and four clinical anatomical sites (brain, thorax/spine, pelvis, and extremities). Plans were created for each case and dose was calculated on both the CT and MV CBCT. All steps (simulation, planning, setup verification, QA, and dose delivery) were performed in one 30 min session using phantoms. The monitor units (MU) for each plan were compared and dose distribution agreement was evaluated using mean dose difference over the entire volume and gamma index on the central 2D axial plane. All whole-brain dose distributions gave gamma passing rates higher than 95% for 2%/2 mm criteria, and pelvic sites ranged between 90% and 98% for 3%/3 mm criteria. However, thoracic spine treatments produced gamma passing rates as low as 47% for 3%/3 mm criteria. Our novel MV CBCT-based dose planning and delivery approach was feasible and time-efficient for the majority of cases. Limited MV CBCT FOV precluded workflow use for pelvic sites of larger patients and resulted in image clearance issues when tumor position was far off midline. The agreement of calculated MU on CT and MV CBCT was acceptable for all
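    The gamma-index evaluation used to report the passing rates above can be sketched as a brute-force 2D computation (a simplified global gamma with an illustrative dose grid; clinical implementations additionally interpolate the evaluated distribution):

```python
import numpy as np

def gamma_passing_rate(ref, eva, spacing, dose_tol=0.03, dist_tol=3.0):
    # Simplified global 2D gamma (default 3%/3 mm): for each reference pixel,
    # find the minimum of (dose diff / tolerance)^2 + (distance / DTA)^2 over
    # nearby evaluated pixels; the pixel passes if that minimum is <= 1.
    # Dose tolerance is relative to the reference maximum (global normalisation).
    norm = dose_tol * ref.max()
    search = int(np.ceil(dist_tol / spacing)) + 1
    ny, nx = ref.shape
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            best = np.inf
            for jy in range(max(0, iy - search), min(ny, iy + search + 1)):
                for jx in range(max(0, ix - search), min(nx, ix + search + 1)):
                    dd = (eva[jy, jx] - ref[iy, ix]) / norm
                    dr = spacing * np.hypot(jy - iy, jx - ix) / dist_tol
                    best = min(best, dd * dd + dr * dr)
            passed += best <= 1.0
    return passed / float(ny * nx)

# Illustrative dose grids (1 mm spacing): a Gaussian "beam" profile.
y, x = np.mgrid[0:20, 0:20]
dose = np.exp(-((x - 10.0) ** 2 + (y - 10.0) ** 2) / 30.0)
```

    An identical pair of distributions passes everywhere, while a uniformly rescaled one fails in the high-dose region, which is the behaviour a gamma analysis is designed to expose.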

  1. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    SciTech Connect

    Na, Y; Kapp, D; Kim, Y; Xing, L; Suh, T

    2014-06-01

    Purpose: To report the first experience in the development of a cloud-based treatment planning system and to investigate the performance improvement in dose calculation and treatment plan optimization on the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and the computational efficiency. All plans generated with the cc-TPS were compared to those obtained with a PC-based TPS (pc-TPS). The performance of the cc-TPS was quantified as speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as performance ratios (PRs), i.e. the amount of performance improvement over the pc-TPS. Results: Speedup factors improved up to 14.0-fold depending on the clinical case and plan type. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for the lung case, and 1.0 ≤ PRs ≤ 10.3 for the prostate case) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1

  2. Screening nitrogen-rich bases and oxygen-rich acids by theoretical calculations for forming highly stable salts.

    PubMed

    Zhang, Xueli; Gong, Xuedong

    2014-08-01

    Nitrogen-rich heterocyclic bases and oxygen-rich acids react to produce energetic salts with potential application in the field of composite explosives and propellants. In this study, the 12 salts formed by the reaction of the bases 4-amino-1,2,4-triazole (A), 1-amino-1,2,4-triazole (B), and 5-aminotetrazole (C) with the acids HNO3 (I), HN(NO2)2 (II), HClO4 (III), and HC(NO2)3 (IV) are studied using DFT calculations at the B97-D/6-311++G** level of theory. For the reactions with the same base, those of HClO4 are the most exothermic and spontaneous, and the most negative Δr Gm of formation also corresponds to the highest decomposition temperature of the resulting salt. The ability of the anions and cations to form hydrogen bonds decreases in the order NO3(-) > N(NO2)2(-) > ClO4(-) > C(NO2)3(-), and C(+) > B(+) > A(+); the differences among the cations are mainly due to their different conformations and charge distributions. For salts with the same anion, a larger total hydrogen-bond energy (EH,tot) leads to a higher melting point. The ordering of the cations and anions with respect to charge transfer (q), second-order perturbation energy (E2), and binding energy (Eb) is the same as that for EH,tot, so a larger q leads to larger E2, Eb, and EH,tot. All salts have similar frontier orbital distributions, with the HOMO and LUMO derived from the anion and the cation, respectively; the molecular orbital shapes are retained when the ions form a salt. For producing energetic salts, 5-aminotetrazole and HClO4 are the preferred base and acid, respectively.

  3. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Goffin, Mark A.; Baker, Christopher M. J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a goal-based anisotropic adaptive method for the finite element solution of the Boltzmann transport equation. The neutron multiplication factor, k, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k with directional dependence. General error estimators are derived for any given functional of the flux and applied to k to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residuals, respectively. The Hessian is used as an approximation of the interpolation error in the solution, which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid-scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit of representing the flux of each energy group on a specifically optimised mesh. The k goal-based adaptive method was applied to three examples which illustrate the superior accuracy that can be obtained in criticality problems.

  4. Effect of particle shape on dust shortwave direct radiative forcing calculations based on MODIS observations for a case study

    NASA Astrophysics Data System (ADS)

    Feng, Qian; Cui, Songxue; Zhao, Wei

    2015-09-01

    Assuming spheroidal and spherical particle shapes for mineral dust aerosols, the effect of particle shape on dust aerosol optical depth retrievals, and subsequently on instantaneous shortwave direct radiative forcing (SWDRF) at the top of the atmosphere (TOA), is assessed based on Moderate Resolution Imaging Spectroradiometer (MODIS) data for a case study. Specifically, a simplified aerosol retrieval algorithm based on the principle of the Deep Blue aerosol retrieval method is employed to retrieve dust aerosol optical depths, and the Fu-Liou radiative transfer model is used to derive the instantaneous SWDRF of dust at the TOA for cloud-free conditions. Without considering the effect of particle shape on dust aerosol optical depth retrievals, the effect of particle shape on the scattering properties of dust aerosols (e.g., extinction efficiency, single scattering albedo and asymmetry factor) is negligible, which can lead to a relative difference of at most 5% for the SWDRF at the TOA. However, the effect of particle shape on the SWDRF cannot be neglected provided that the effect of particle shape on dust aerosol optical depth retrievals is also taken into account for SWDRF calculations. The corresponding results in an instantaneous case study show that the relative differences of the SWDRF at the TOA between spheroids and spheres depend critically on the scattering angles at which dust aerosol optical depths are retrieved, and can be up to 40% for low dust-loading conditions.

  5. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a goal-based anisotropic adaptive method for the finite element solution of the Boltzmann transport equation. The neutron multiplication factor, k{sub eff}, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k{sub eff} with directional dependence. General error estimators are derived for any given functional of the flux and applied to k{sub eff} to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residuals, respectively. The Hessian is used as an approximation of the interpolation error in the solution, which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid-scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit of representing the flux of each energy group on a specifically optimised mesh. The k{sub eff} goal-based adaptive method was applied to three examples which illustrate the superior accuracy that can be obtained in criticality problems.

  6. A novel DNA sequence similarity calculation based on simplified pulse-coupled neural network and Huffman coding

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Nie, Rencan; Zhou, Dongming; Yao, Shaowen; Chen, Yanyan; Yu, Jiefu; Wang, Quan

    2016-11-01

    A novel method for calculating DNA sequence similarity is proposed based on a simplified pulse-coupled neural network (S-PCNN) and Huffman coding. In this study, we propose a coding method based on Huffman coding in which the triplet code is used as a code bit to transform a DNA sequence into a numerical sequence. The proposed method uses the firing characteristics of S-PCNN neurons to extract features from the DNA sequence, and it can handle DNA sequences of different lengths. First, according to the characteristics of the S-PCNN and the DNA primary sequence, the latter is encoded using the Huffman coding method, and the oscillation time sequence (OTS) of the encoded DNA sequence is then extracted with the S-PCNN. Relevant features are obtained simultaneously, and finally the similarities or dissimilarities of the DNA sequences are determined by Euclidean distance. Different data sets were used to verify the accuracy of this method. The experimental results show that the proposed method is effective.
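
The coding and distance steps can be illustrated as follows. This is a hedged sketch: the S-PCNN feature extraction is replaced by a trivial stand-in feature vector, since the network itself is not specified here; only the Huffman encoding of codons and the Euclidean comparison follow the abstract.

```python
import heapq
from collections import Counter
from math import sqrt

def huffman_codes(freqs):
    """Build Huffman codes for symbols given their frequencies."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {s: "0" for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

def dna_feature_vector(seq):
    """Encode codons (triplets) with Huffman coding and reduce the bit stream
    to a small feature vector (a stand-in for the S-PCNN/OTS features)."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    codes = huffman_codes(Counter(codons))
    bits = "".join(codes[c] for c in codons)
    return [len(bits) / max(len(codons), 1),          # mean code length
            bits.count("1") / max(len(bits), 1)]      # bit density

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```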

  7. Prediction of ²D Rydberg energy levels of ⁶Li and ⁷Li based on very accurate quantum mechanical calculations performed with explicitly correlated Gaussian functions

    SciTech Connect

    Bubin, Sergiy; Sharkey, Keeper L.; Adamowicz, Ludwik

    2013-04-28

    Very accurate variational nonrelativistic finite-nuclear-mass calculations employing all-electron explicitly correlated Gaussian basis functions are carried out for six Rydberg ²D states (1s²nd, n = 6, …, 11) of the ⁷Li and ⁶Li isotopes. The exponential parameters of the Gaussian functions are optimized using the variational method with the aid of the analytical energy gradient determined with respect to these parameters. The experimental results for the lower states (n = 3, …, 6) and the calculated results for the higher states (n = 7, …, 11) fitted with quantum-defect-like formulas are used to predict the energies of ²D 1s²nd states for ⁷Li and ⁶Li with n up to 30.
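
The quantum-defect-like fitting can be sketched with the standard Rydberg formula E_n = E_limit − R/(n − δ)². The Rydberg constant and ionization limit used below are approximate illustrative values, not the paper's fitted parameters:

```python
from math import sqrt

RYDBERG = 109737.3  # cm^-1; infinite-nuclear-mass value, for illustration only

def predict_level(n, delta, e_limit, rydberg=RYDBERG):
    """Quantum-defect (Rydberg) formula: E_n = E_limit - R / (n - delta)**2."""
    return e_limit - rydberg / (n - delta) ** 2

def quantum_defect(n, e_level, e_limit, rydberg=RYDBERG):
    """Invert the formula to back out the quantum defect from a known level."""
    return n - sqrt(rydberg / (e_limit - e_level))
```

With a (nearly constant) defect extracted from the lower states, the same formula extrapolates to high-n levels, which is how the n up to 30 predictions are obtained.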

  8. Diffusion ellipsoids of anisotropic porous rocks calculated by X-ray computed tomography-based random walk simulations

    NASA Astrophysics Data System (ADS)

    Nakashima, Yoshito; Kamiya, Susumu; Nakano, Tsukasa

    2008-12-01

    Water molecules and contaminants migrate in water-saturated porous strata by diffusion in systems with small Péclet numbers. Natural porous rocks possess the anisotropy for diffusive transport along the percolated pore space. An X-ray computed tomography (CT) based approach is presented to quickly characterize anisotropic diffusion in porous rocks. High-resolution three-dimensional (3-D) pore images were obtained for a pumice and three sandstones by microfocus X-ray CT and synchrotron microtomography systems. The cluster-labeling process was applied to each image set to extract the 3-D image of a single percolated pore cluster through which diffusing species can migrate a long distance. The nonsorbing lattice random walk simulation was performed on the percolated pore cluster to obtain the mean square displacement. The self-diffusion coefficient along each direction in the 3-D space was calculated by taking the time derivative of the mean square displacement projected on the corresponding direction. A diffusion ellipsoid (i.e., polar representation of the direction-dependent normalized self-diffusivity) with three orthogonal principal axes was obtained for each rock sample. The 3-D two-point autocorrelation was also calculated for the percolated pore cluster of each rock sample to estimate the pore diameter anisotropy. The autocorrelation ellipsoids obtained by the ellipsoid fitting to the high correlation zone were prolate or oblate in shape, presumably depending on the eruption-induced deformation of magma and regional stress during sandstone diagenesis. The pore network anisotropy was estimated by calculating the diffusion ellipsoid for uniaxially elongated or compressed rock images. The degree and direction of the geological deformation of the samples estimated by the pore diameter anisotropy analysis agreed well with those estimated by the pore network anisotropy analysis. 
We found that the direction of the geological deformation coincided with the direction
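
A minimal pure-Python sketch of the lattice random walk on a segmented pore image, assuming periodic boundaries and a nested-list voxel representation (the authors' CT-based workflow is far more elaborate). The direction-dependent diffusivity comes from the projected mean square displacement, D_i = ⟨Δx_i²⟩/(2t):

```python
import random

def directional_diffusivities(pore, steps=200, walkers=500, seed=0):
    """Nonsorbing lattice random walk restricted to pore voxels (True = pore).
    Returns normalized diffusivities [D_x, D_y, D_z] in lattice units from
    D_i = <dx_i**2> / (2 * t); blocked moves are rejected (walker stays)."""
    rng = random.Random(seed)
    nx, ny, nz = len(pore), len(pore[0]), len(pore[0][0])
    pore_sites = [(i, j, k) for i in range(nx) for j in range(ny)
                  for k in range(nz) if pore[i][j][k]]
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    msd = [0.0, 0.0, 0.0]
    for _ in range(walkers):
        x0 = rng.choice(pore_sites)
        pos = list(x0)                      # unwrapped coordinates
        for _ in range(steps):
            dx, dy, dz = rng.choice(moves)
            nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            # periodic wrap only for the pore lookup
            if pore[nxt[0] % nx][nxt[1] % ny][nxt[2] % nz]:
                pos = list(nxt)
        for a in range(3):
            msd[a] += (pos[a] - x0[a]) ** 2
    return [m / walkers / (2 * steps) for m in msd]
```

Plotting the direction-dependent diffusivity in polar form gives the diffusion ellipsoid described in the abstract; for a fully open lattice all three values approach 1/6.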

  9. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    NASA Astrophysics Data System (ADS)

    Berger, Daniel; Logsdail, Andrew J.; Oberhofer, Harald; Farrow, Matthew R.; Catlow, C. Richard A.; Sherwood, Paul; Sokol, Alexey A.; Blum, Volker; Reuter, Karsten

    2014-07-01

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  10. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.

    PubMed

    Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  11. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    SciTech Connect

    Berger, Daniel; Oberhofer, Harald; Reuter, Karsten; Logsdail, Andrew J.; Farrow, Matthew R.; Catlow, C. Richard A.; Sokol, Alexey A.; Sherwood, Paul; Blum, Volker

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  12. Reliability of calculation of the lithosphere deformations in tectonically stable area of Poland based on the GPS measurements

    NASA Astrophysics Data System (ADS)

    Araszkiewicz, Andrzej; Jarosiński, Marek

    2013-04-01

    In this research we aimed to check whether GPS observations can be used to calculate a reliable deformation pattern of the intracontinental lithosphere in seismically inactive areas such as the territory of Poland. For this purpose we used data mainly from the ASG-EUPOS permanent network and solutions developed by the MUT CAG team (Military University of Technology: Centre of Applied Geomatics). Of the 128 analyzed stations, almost 100 are mounted on buildings. Daily observations were processed in the Bernese 5.0 software, and the weekly solutions were then used to determine the station velocities expressed in ETRF2000. Strain rates were determined for almost 200 triangles, constructed by Delaunay triangulation with GPS stations at their corners. The scattered directions of deformation and highly variable strain-rate values that we obtained point to antenna stabilization insufficient for geodynamic studies. To identify poorly stabilized stations, we carried out a benchmark test showing the effect that the drift of a single station can have on deformations in the adjoining triangles. Based on the benchmark results, we eliminated from our network the stations showing a deformation pattern characteristic of an unstable station. After several rounds of strain-rate calculations and elimination of dubious points, we reduced the number of stations to 60. The refined network revealed a more consistent deformation pattern across Poland. The deformations, compared with the recent stress field of the study area, showed good correlation in some places and significant discrepancies in others, which will be the subject of future research.
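
The per-triangle strain-rate step can be sketched by fitting a uniform horizontal velocity gradient to the three corner stations. This is the generic small-strain formulation, not the MUT CAG processing chain:

```python
import numpy as np

def triangle_strain_rate(xy, vel):
    """Horizontal strain-rate tensor for one Delaunay triangle, assuming a
    uniform velocity gradient v(x) = v0 + G @ x inside the triangle.
    xy, vel: (3, 2) station positions (m) and velocities (m/yr)."""
    A = np.zeros((6, 6))
    b = np.asarray(vel, float).ravel()
    for i, (x, y) in enumerate(np.asarray(xy, float)):
        A[2 * i]     = [1, 0, x, y, 0, 0]   # ve = v0e + Gxx*x + Gxy*y
        A[2 * i + 1] = [0, 1, 0, 0, x, y]   # vn = v0n + Gyx*x + Gyy*y
    sol = np.linalg.solve(A, b)             # 6 equations, 6 unknowns
    G = np.array([[sol[2], sol[3]], [sol[4], sol[5]]])
    return 0.5 * (G + G.T)                  # symmetric part = strain rate
```

For example, a station 1 m east of a fixed pair moving east at 1e-8 m/yr yields pure east-west extension at 1e-8 /yr.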

  13. Calculating Hillslope Contributions to River Basin Sediment Yield Using Observations in Small Watersheds and an Index-based Model

    NASA Astrophysics Data System (ADS)

    Kinner, D. A.; Stallard, R. F.

    2001-12-01

    Detailed observations of hillslope erosion are generally made in <1 km² watersheds to gain a process-level understanding in a given geomorphic setting. In addressing sediment and nutrient source-to-sink questions, a broader, river basin (>1000 km²) view of erosion and deposition is necessary to incorporate the geographic variability in the factors controlling sediment mobilization and storage. At the river basin scale, floodplain and reservoir storage become significant in sediment budgets. In this study, we used observations from USDA experimental watersheds to constrain an index-based model of hillslope erosion for the 7270 km² Nishnabotna River Basin in the agricultural, loess-mantled region of southwest Iowa. Spatial and time-series measurements from two watersheds near Treynor, Iowa were used to calibrate the model for the row-cropped fields of the basin. By modeling rainfall events over an 18-year period, model error was quantified. We then applied the model to calculate basin-wide hillslope erosion and colluvial storage. Soil maps and the National Land-Cover Dataset were used to estimate model soil erodibility and land-use factors. By comparing modeled hillslope yields with observed basin sediment yields, we calculated that hillslope contributions to sediment yield were <50% for the period 1974-1992. A major uncertainty in modeling is the percentage of basin area that is terraced. We will use the isotopes ¹³⁷Cs and ²¹⁰Pb to distinguish bank (isotope-poor) and hillslope (isotope-rich) contributions in flood plain deposits. This independent estimate of the relative hillslope contribution to sediment yield will reduce modeling uncertainty.

  14. Calculating the Fickian diffusivity for a lattice-based random walk with agents and obstacles of different shapes and sizes.

    PubMed

    Ellery, Adam J; Baker, Ruth E; Simpson, Matthew J

    2015-01-01

    Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as to examine the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant. PMID:26599468

  15. Advances in binding free energies calculations: QM/MM-based free energy perturbation method for drug design.

    PubMed

    Rathore, R S; Sumakanth, M; Reddy, M Siva; Reddanna, P; Rao, Allam Appa; Erion, Mark D; Reddy, M R

    2013-01-01

    Multiple approaches have been devised and evaluated to computationally estimate binding free energies. Results using a recently developed Quantum Mechanics (QM)/Molecular Mechanics (MM) based Free Energy Perturbation (FEP) method suggest that this method has the potential to provide the most accurate estimation of binding affinities to date. The method treats ligands/inhibitors using QM while using MM for the rest of the system. The method has been applied and validated for a structurally diverse set of fructose 1,6-bisphosphatase (FBPase) inhibitors, suggesting that the approach has the potential to be used as an integral part of drug discovery for both lead identification and lead optimization when a structure is available. In addition, this QM/MM-based FEP method was shown to accurately replicate the anomalous hydration behavior exhibited by simple amines and amides, suggesting that the method may also prove useful in predicting physical properties of molecules. While the method is about 5-fold more computationally demanding than conventional FEP, it has the potential to be less demanding on the end user since it avoids the development of MM force field parameters for novel ligands and thereby eliminates this time-consuming step, which often contributes significantly to the inaccuracy of binding affinity predictions using conventional FEP methods. The QM/MM-based FEP method has been extensively tested with respect to important considerations such as the length of simulation required to obtain satisfactory convergence in the calculated relative solvation and binding free energies for both small and large structural changes between ligands. Future automation of the method and parallelization of the code are expected to enhance its speed and increase its use in drug design and lead optimization. PMID:23260025
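
The free-energy perturbation step underlying these QM/MM-based methods rests on the Zwanzig relation, ΔA = −kT ln⟨exp(−ΔU/kT)⟩₀, averaged over configurations sampled in the reference state. A minimal sketch (in practice the ΔU values would come from QM/MM energy evaluations along the simulation):

```python
import math

def fep_free_energy(delta_u, kT=0.596):
    """Zwanzig free-energy perturbation estimator.
    delta_u: list of energy differences U1 - U0 (kcal/mol) for
    configurations sampled in state 0; kT defaults to ~0.596 kcal/mol
    (about 300 K). Returns dA = -kT * ln(mean(exp(-dU/kT)))."""
    n = len(delta_u)
    return -kT * math.log(sum(math.exp(-du / kT) for du in delta_u) / n)
```

In multi-window FEP the transformation is broken into small steps and the per-window estimates are summed; convergence with simulation length is the issue the abstract highlights.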

  16. GTNEUT: A code for the calculation of neutral particle transport in plasmas based on the Transmission and Escape Probability method

    NASA Astrophysics Data System (ADS)

    Mandrekas, John

    2004-08-01

    GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments.
    Program summary
    Title of program: GTNEUT
    Catalogue identifier: ADTX
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX
    Computer for which the program is designed and others on which it has been tested: The program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs.
    Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP
    Programming language used: Fortran 77
    Memory required to execute with typical data: 6 219 388 bytes
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of bytes in distributed program, including test data, etc.: 300 709
    No. of lines in distributed program, including test data, etc.: 17 365
    Distribution format: compressed tar gzip file
    Keywords: Neutral transport in plasmas, Escape probability methods
    Nature of physical problem: This code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations.
    Method of solution: The code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included).
    Restrictions on the complexity of the problem: The current version of the code can

  17. Towards an automated and efficient calculation of resonating vibrational states based on state-averaged multiconfigurational approaches

    SciTech Connect

    Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram

    2015-12-28

    Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.

  18. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    SciTech Connect

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-15

    Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for the complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets representing an arbitrary field shape no longer need to be infinitesimal or identical. As a result, it is possible to represent an arbitrary field shape with a combination of different-sized beamlets and a minimal number of them. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Results: Root mean square errors (RMSEs) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 4.90%, 3.19%, and 2.87%, respectively, compared with RMSEs of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSEs for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSEs of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmission without major discrepancy. The algorithm was also graphics processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (
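
The beamlet decomposition can be illustrated with a standard finite-size pencil-beam kernel built from error functions: because such kernels are additive in the beamlet footprint, an arbitrary field can be represented exactly by beamlets of different sizes, which is the premise behind the adaptive decomposition. The Gaussian lateral spread model and the σ value are illustrative assumptions, not the authors' fitted kernel:

```python
from math import erf, sqrt

def beamlet_dose(x, y, x1, x2, y1, y2, sigma=0.4):
    """FSPB-style dose at (x, y) from a rectangular beamlet [x1,x2]x[y1,y2]:
    a Gaussian lateral-spread kernel convolved with the beamlet footprint."""
    s = sigma * sqrt(2.0)
    fx = 0.5 * (erf((x - x1) / s) - erf((x - x2) / s))
    fy = 0.5 * (erf((y - y1) / s) - erf((y - y2) / s))
    return fx * fy

def field_dose(x, y, beamlets, sigma=0.4):
    """Superpose beamlet contributions; the beamlets may have different
    sizes, which is the key point of the adaptive (AB-FSPB) idea."""
    return sum(beamlet_dose(x, y, *b, sigma=sigma) for b in beamlets)
```

Because the erf kernel is additive, one 2 × 2 beamlet gives exactly the same dose as four 1 × 1 beamlets covering the same footprint, so larger beamlets can replace many small identical ones without loss.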

  19. Development of a quantum mechanics-based free-energy perturbation method: use in the calculation of relative solvation free energies.

    PubMed

    Reddy, M Rami; Singh, U C; Erion, Mark D

    2004-05-26

    Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.

  20. MEMS Calculator

    National Institute of Standards and Technology Data Gateway

    SRD 166 MEMS Calculator (Web, free access)   This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.

  1. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head is modeled, including the multi-leaf collimators. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between the out-of-field dose profiles calculated by AAA and MC depends on the depth and is generally less than 1% for in-water phantom comparisons and for CT-based patient dose calculations with static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice because the out-of-field doses are very low relative to the target dose.
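
The organ-dose analysis relies on dose-volume histograms. A minimal cumulative-DVH sketch over a voxel dose array (the binning choice is illustrative):

```python
import numpy as np

def dose_volume_histogram(dose, bins=100):
    """Cumulative DVH: for each dose level, the fraction of voxels
    receiving at least that dose. Returns (levels, volume_fraction)."""
    dose = np.ravel(np.asarray(dose, float))
    levels = np.linspace(0.0, dose.max(), bins)
    volume = np.array([(dose >= d).mean() for d in levels])
    return levels, volume
```

By construction the curve starts at 1.0 (every voxel receives at least zero dose) and falls to the fraction of voxels at the maximum dose.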

  2. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well known Grid Convergence Index. Examples of application of the procedure are included.
    Highlights:
    • Estimation of the numerical uncertainty of any integral or local flow quantity.
    • Least squares fits to power series expansions to handle noisy data.
    • Excellent results obtained for manufactured solutions.
    • Consistent results obtained for practical CFD calculations.
    • Reduces to the well known Grid Convergence Index for well-behaved data sets.
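
The core of the procedure, fitting power-series error expansions in the least-squares sense, can be sketched for the single-term expansion φᵢ = φ₀ + α hᵢᵖ (the paper uses four expansion types plus a safety factor, which are omitted here):

```python
import numpy as np

def fit_discretization_error(h, phi):
    """Least-squares fit of phi_i = phi0 + alpha * h_i**p to solutions phi
    computed on systematically refined grids with typical cell size h.
    Returns (phi0, alpha, p); phi_i - phi0 is the error estimate on grid i."""
    h, phi = np.asarray(h, float), np.asarray(phi, float)
    best = None
    for p in np.arange(0.5, 4.01, 0.01):          # scan the observed order
        A = np.column_stack([np.ones_like(h), h ** p])
        coef = np.linalg.lstsq(A, phi, rcond=None)[0]   # phi0, alpha
        ssr = float(np.sum((A @ coef - phi) ** 2))
        if best is None or ssr < best[0]:
            best = (ssr, coef[0], coef[1], p)
    return best[1], best[2], best[3]
```

With three or more grids the extrapolated value φ₀ and the observed order p are recovered; the uncertainty would then be the error estimate multiplied by a safety factor.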

  3. Cloud Radiative Effect by Cloud Types Based on Radiative Transfer Model Calculations and Collocated A-Train Data

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Fetzer, E. J.; Schreier, M. M.; Kahn, B. H.; Huang, X.

    2014-12-01

    Cloud radiative effect is sensitive both to cloud type and to the atmospheric conditions that accompany the clouds. It is important to separate the radiative effects due to the microphysical and radiative properties of clouds from the impact of clouds on clear-atmosphere radiation. To better quantify these components of cloud radiative effects, we construct a data record of water vapor, temperature, TOA shortwave and longwave radiation, and cloud properties from collocated A-Train satellite observations and the NASA MERRA reanalysis, stratified according to cloud types determined from MODIS observations. The sensitivity of cloud radiative effects to cloud properties is investigated using these observational data. The cloud masking effect is quantified for different cloud types using the Fu and Liou radiative transfer model and the observed cloudy and clear atmospheric conditions. The sampling biases of the satellite-observed temperature and water vapor vertical distributions are quantified through comparisons between the satellite observations and the reanalysis, and are then incorporated into the radiative transfer calculations to study the impact of these observational biases on cloud radiative effects estimated from satellite temperature and water vapor profiles.

  4. Quantum theory of molecular collisions in a magnetic field: efficient calculations based on the total angular momentum representation.

    PubMed

    Tscherbul, T V; Dalgarno, A

    2010-11-14

    An efficient method is presented for rigorous quantum calculations of atom-molecule and molecule-molecule collisions in a magnetic field. The method is based on the expansion of the wave function of the collision complex in basis functions with well-defined total angular momentum in the body-fixed coordinate frame. We outline the general theory of the method for collisions of diatomic molecules in the ²Σ and ³Σ electronic states with structureless atoms and with unlike ²Σ and ³Σ molecules. The cross sections for elastic scattering and Zeeman relaxation in low-temperature collisions of CaH(²Σ⁺) and NH(³Σ⁻) molecules with ³He atoms converge quickly with respect to the number of total angular momentum states included in the basis set, leading to a dramatic (>10-fold) enhancement in computational efficiency compared to the previously used methods [A. Volpi and J. L. Bohn, Phys. Rev. A 65, 052712 (2002); R. V. Krems and A. Dalgarno, J. Chem. Phys. 120, 2296 (2004)]. Our approach is thus well suited for theoretical studies of strongly anisotropic molecular collisions in the presence of external electromagnetic fields. PMID:21073210

  5. Comparison of a miniaturized shake-flask solubility method with automated potentiometric acid/base titrations and calculated solubilities.

    PubMed

    Glomme, A; März, J; Dressman, J B

    2005-01-01

    Solubility is one of the most important parameters for lead selection and optimization during drug discovery. Its determination should therefore take place as early as possible in the process. Because of the large numbers of compounds involved and the very low amounts of each compound available in the early development stage, it is highly desirable to measure the solubility with as little compound as possible and to be able to improve the throughput of the methods used. In this work, a miniaturized shake-flask method was developed and the solubility results were compared with those measured by semiautomated potentiometric acid/base titrations and computational methods for 21 poorly soluble compounds with solubilities mostly in the range 0.03–30 μg/mL. The potentiometric method is very economical (approximately 100 μg of a poorly soluble compound is needed) and is able to create a pH/solubility profile with one single determination, but is limited to ionizable compounds. The miniaturized shake-flask method can be used for all compounds and a wide variety of media. Its precision and throughput proved superior to the potentiometric method for very poorly soluble compounds. Up to 20 compounds a week can be studied with one set-up. Calculated solubility data seem to be sufficient for a first estimate of the solubility, but they cannot currently be used as a substitute for experimental measurements at key decision points in the development process.
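
For ionizable compounds, the pH/solubility profile recovered from a single potentiometric determination follows the Henderson-Hasselbalch relation. A sketch for a monoprotic compound, where S₀ is the intrinsic (neutral-form) solubility:

```python
def total_solubility(pH, intrinsic_s0, pKa, acid=True):
    """Henderson-Hasselbalch pH/solubility profile for a monoprotic compound:
    acids:  S = S0 * (1 + 10**(pH - pKa))
    bases:  S = S0 * (1 + 10**(pKa - pH))
    Valid below the point where the salt's solubility product takes over."""
    exponent = (pH - pKa) if acid else (pKa - pH)
    return intrinsic_s0 * (1.0 + 10.0 ** exponent)
```

At pH = pKa the total solubility is exactly twice the intrinsic solubility, and for an acid it rises tenfold per pH unit above the pKa.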

  6. Synthesis, spectroscopy, X-ray crystallography, DFT calculations, DNA binding and molecular docking of a propargyl arms containing Schiff base.

    PubMed

    Balakrishnan, C; Subha, L; Neelakantan, M A; Mariappan, S S

    2015-11-01

A propargyl arms containing Schiff base (L) was synthesized by the condensation of 1-[2-hydroxy-4-(prop-2-yn-1-yloxy)phenyl]ethanone with trans-1,2-diaminocyclohexane. The structure of L was characterized by IR, (1)H NMR, (13)C NMR and UV-Vis spectroscopy and by single crystal X-ray diffraction analysis. The UV-Visible spectral behavior of L in different solvents exhibits positive solvatochromism. Density functional calculations of L in the gas phase were performed using the DFT (B3LYP) method with the 6-31G basis set. The computed vibrational frequencies and NMR signals of L were compared with the experimental data. A tautomeric stability study showed that the enolimine form is more stable than the ketoamine form. Charge delocalization was analyzed using natural bond orbital (NBO) analysis. Electronic absorption and emission spectral studies were used to study the binding of L with CT-DNA. Molecular docking was performed to identify the interaction of L with A-DNA and B-DNA.

  7. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published results obtained manually by an expert.
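The final step the abstract describes, turning tracked feature positions into a sidereal rotation rate, reduces to a short calculation once the tracker has produced heliographic longitudes at two times. A sketch under the assumption of two longitude measurements and Earth's mean orbital rate (function name and values are illustrative):

```python
def sidereal_rotation_deg_per_day(lon_start_deg: float, lon_end_deg: float,
                                  elapsed_days: float,
                                  earth_orbit_deg_per_day: float = 0.9856) -> float:
    """Sidereal angular velocity of a tracked feature (e.g. a coronal
    bright point) from its heliographic longitudes at two times.

    The tracker observes apparent (synodic) motion; adding Earth's mean
    orbital rate converts the synodic rate to a sidereal rate.
    """
    synodic = (lon_end_deg - lon_start_deg) / elapsed_days
    return synodic + earth_orbit_deg_per_day
```

For example, a feature drifting from longitude 10° to 36° in two days has a synodic rate of 13°/day and a sidereal rate of about 13.99°/day.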

  8. Quantum theory of molecular collisions in a magnetic field: efficient calculations based on the total angular momentum representation.

    PubMed

    Tscherbul, T V; Dalgarno, A

    2010-11-14

    An efficient method is presented for rigorous quantum calculations of atom-molecule and molecule-molecule collisions in a magnetic field. The method is based on the expansion of the wave function of the collision complex in basis functions with well-defined total angular momentum in the body-fixed coordinate frame. We outline the general theory of the method for collisions of diatomic molecules in the (2)Σ and (3)Σ electronic states with structureless atoms and with unlike (2)Σ and (3)Σ molecules. The cross sections for elastic scattering and Zeeman relaxation in low-temperature collisions of CaH((2)Σ(+)) and NH((3)Σ(-)) molecules with (3)He atoms converge quickly with respect to the number of total angular momentum states included in the basis set, leading to a dramatic (>10-fold) enhancement in computational efficiency compared to the previously used methods [A. Volpi and J. L. Bohn, Phys. Rev. A 65, 052712 (2002); R. V. Krems and A. Dalgarno, J. Chem. Phys. 120, 2296 (2004)]. Our approach is thus well suited for theoretical studies of strongly anisotropic molecular collisions in the presence of external electromagnetic fields.

  9. Scattering of electromagnetic radiation based on numerical calculation of the T-matrix through its integral representation

    NASA Astrophysics Data System (ADS)

    Tricoli, Ugo; Pfeilsticker, Klaus

    2014-08-01

A novel numerical technique is presented to calculate the T-matrix for a single particle through the use of the volume integral equation for electromagnetic scattering. It is based on the method called Coupled Dipole Approximation (CDA); see O. J. F. Martin et al. [1]. The basic procedure includes the parallel use of the Lippmann-Schwinger and the Dyson equations to iteratively solve for the T-matrix and the Green's function dyadic, respectively. The boundary conditions of the particle are thus automatically satisfied. The method can be used for the evaluation of the optical properties (e.g. the Müller matrix) of anisotropic, inhomogeneous and asymmetric particles, in both the far and near field, giving as output the T-matrix, which depends only on the scatterer itself and is independent of the polarization and direction of the incoming field. An estimate of the accuracy of the method is provided through comparison with the analytical spherical case (Mie theory) as well as with non-spherical cubic ice particles.
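The iterative solution of the Lippmann-Schwinger equation can be illustrated with a scalar toy analogue: T = V + V·G0·T, iterated to self-consistency. This is a sketch of the fixed-point structure only, not the dyadic CDA implementation:

```python
def t_matrix_born_series(v: float, g0: float,
                         tol: float = 1e-12, max_iter: int = 1000) -> float:
    """Scalar toy analogue of iterating the Lippmann-Schwinger equation
    T = V + V*G0*T (the Born series); converges when |V*G0| < 1."""
    t = v
    for _ in range(max_iter):
        t_new = v + v * g0 * t
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    raise RuntimeError("Born series did not converge")
```

The iterate converges to the closed-form resolvent value V/(1 − V·G0), which is what a direct matrix inversion would return in the operator setting.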

  10. An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.

    PubMed

    Wang, Lichun; Cardenas, M Bayani

    2015-08-01

    The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. PMID:26042625
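The RWPT core step, advection by the local velocity plus a Gaussian diffusive jump, and the moment analysis used to extract a dynamic dispersion coefficient can be sketched as follows (1D with uniform velocity; an illustrative reduction, not the MLCL-coupled algorithm):

```python
import random

def rwpt_1d(n_particles, n_steps, dt, velocity, diff, seed=0):
    """Minimal 1D random-walk particle tracking: each step adds an
    advective displacement v*dt plus a diffusive jump ~ N(0, 2*D*dt)."""
    rng = random.Random(seed)
    sigma = (2.0 * diff * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + velocity * dt + sigma * rng.gauss(0.0, 1.0) for x in xs]
    return xs

def apparent_dispersion(xs, t):
    """Method-of-moments dispersion coefficient: Var(x) / (2*t)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return var / (2.0 * t)
```

With diffusion switched off the walk reduces to pure advection, and for a pure-diffusion run the moment estimate recovers the input coefficient; tracking how this estimate evolves with time is what distinguishes pre-asymptotic (non-Fickian) from Fickian behavior.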

  11. Refinement of the Cornell et al. Nucleic Acids Force Field Based on Reference Quantum Chemical Calculations of Glycosidic Torsion Profiles.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jiří; Mládek, Arnošt; Banáš, Pavel; Cheatham, Thomas E; Jurečka, Petr

    2011-09-13

    We report a reparameterization of the glycosidic torsion χ of the Cornell et al. AMBER force field for RNA, χ(OL). The parameters remove destabilization of the anti region found in the ff99 force field and thus prevent formation of spurious ladder-like structural distortions in RNA simulations. They also improve the description of the syn region and the syn-anti balance as well as enhance MD simulations of various RNA structures. Although χ(OL) can be combined with both ff99 and ff99bsc0, we recommend the latter. We do not recommend using χ(OL) for B-DNA because it does not improve upon ff99bsc0 for canonical structures. However, it might be useful in simulations of DNA molecules containing syn nucleotides. Our parametrization is based on high-level QM calculations and differs from conventional parametrization approaches in that it incorporates some previously neglected solvation-related effects (which appear to be essential for obtaining correct anti/high-anti balance). Our χ(OL) force field is compared with several previous glycosidic torsion parametrizations.
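The glycosidic torsion enters the force field through the standard AMBER periodic cosine series; a sketch of that functional form with placeholder parameters (not the published χOL values):

```python
import math

def torsion_energy(chi_deg, terms):
    """AMBER-style periodic torsion potential:
    E(chi) = sum_n Vn/2 * (1 + cos(n*chi - gamma_n)).

    `terms` is a list of (Vn, n, gamma_deg) tuples; the values used in
    any example here are illustrative, not fitted force-field constants.
    """
    chi = math.radians(chi_deg)
    return sum(v / 2.0 * (1.0 + math.cos(n * chi - math.radians(g)))
               for v, n, g in terms)

# Toy single-term profile: barrier at chi = 0, minimum at chi = 180
profile = [torsion_energy(chi, [(2.0, 1, 0.0)]) for chi in range(0, 361, 30)]
```

Reparameterizations such as χOL adjust the (Vn, γn) set so the classical profile reproduces reference QM torsion scans in the syn and anti regions.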

  12. Chaotic Calculations.

    ERIC Educational Resources Information Center

    Chenery, Gordon

    1991-01-01

    Uses chaos theory to investigate the nonlinear phenomenon of population growth fluctuation. Illustrates the use of computers and computer programs to make calculations in a nonlinear difference equation system. (MDH)
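The nonlinear difference equation typically used for such population-growth calculations is the logistic map; a minimal sketch:

```python
def logistic_orbit(r: float, x0: float, n: int) -> list:
    """Iterate the logistic difference equation x_{k+1} = r * x_k * (1 - x_k),
    the standard textbook model of population-growth fluctuation.
    Returns the orbit [x0, x1, ..., xn]."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs
```

For r = 2 the orbit settles on the fixed point 1 − 1/r = 0.5, while for r near 3.9 successive iterates fluctuate chaotically, which is the behavior such classroom computations are meant to exhibit.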

  13. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
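The five steps of the method can be sketched in one dimension, where the "common volume" between a dosel and a voxel reduces to an interval overlap. This is an illustrative reduction of the disclosed scheme, not its implementation:

```python
def overlap(a0, a1, b0, b1):
    """Length of the overlap of intervals [a0,a1] and [b0,b1] -- the 1D
    stand-in for the common volume between a dosel and a voxel."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def dosel_dose(dosel, voxels, densities, deposits):
    """Dose to a dosel: accumulate dosel mass from voxel-overlap lengths
    times voxel mass densities, sum the energy deposits that fall inside
    the dosel, and divide energy by mass."""
    d0, d1 = dosel
    mass = sum(overlap(d0, d1, v0, v1) * rho
               for (v0, v1), rho in zip(voxels, densities))
    energy = sum(e for x, e in deposits if d0 <= x < d1)
    return energy / mass
```

The key point of the patent is that dosel boundaries need not coincide with transport-grid voxel boundaries; the common-volume weighting reconciles the two grids.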

  14. Solution-based thermodynamic modeling of the Ni-Al-Mo system using first-principles calculations

    SciTech Connect

    Zhou, S H; Wang, Y; Chen, L -Q; Liu, Z -K; Napolitano, R E

    2014-09-01

    A solution-based thermodynamic description of the ternary Ni–Al–Mo system is developed here, incorporating first-principles calculations and reported modeling of the binary Ni–Al, Ni–Mo and Al–Mo systems. To search for the configurations with the lowest energies of the N phase, the Alloy Theoretic Automated Toolkit (ATAT) was employed and combined with VASP. The liquid, bcc and γ-fcc phases are modeled as random atomic solutions, and the γ'-Ni3Al phase is modeled by describing the ordering within the fcc structure using two sublattices, summarized as (Al,Mo,Ni)0.75(Al,Mo,Ni)0.25. Thus, γ-fcc and γ'-Ni3Al are modeled with a single Gibbs free energy function with appropriate treatment of the chemical ordering contribution. In addition, notable improvements are the following: first, the ternary effects of Mo and Al in the B2-NiAl and D0a-Ni3Mo phases, respectively, are considered; second, the N-NiAl8Mo3 phase is described as a solid solution using a three-sublattice model; third, the X-Ni14Al75Mo11 phase is treated as a stoichiometric compound. Model parameters are evaluated using first-principles calculations of zero-Kelvin formation enthalpies and reported experimental data. In comparison with the enthalpies of formation for the compounds ψ-AlMo, θ-Al8Mo3 and B2-NiAl, the first-principles results indicate that the N-NiAl8Mo3 phase, which is stable at high temperatures, decomposes into other phases at low temperature. Resulting phase equilibria are summarized in the form of isothermal sections and liquidus projections. To clearly identify the relationship between the γ-fcc and γ'-Ni3Al phases in the ternary Ni–Al–Mo system, the specific γ-fcc and γ'-Ni3Al phase fields are plotted in x(Al)–x(Mo)–T space for a temperature range 1200–1800 K.

  15. Health Risk Assessment for Uranium in Groundwater - An Integrated Case Study Based on Hydrogeological Characterization and Dose Calculation

    NASA Astrophysics Data System (ADS)

    Franklin, M. R.; Veiga, L. H.; Py, D. A., Jr.; Fernandes, H. M.

    2010-12-01

The uranium mining and milling facility of Caetité (URA) is the only active uranium production center in Brazil. Operations take place in a very sensitive semi-arid region of the country where water resources are very scarce. Therefore, any contamination of the existing water bodies may trigger critical consequences for local communities, because their sustainability is closely related to the availability of groundwater resources. Due to the existence of several uranium anomalies in the region, groundwater can present radionuclide concentrations above the world average. The radiological risks associated with the ingestion of these waters have been questioned by members of the local communities, NGOs, and even regulatory bodies, which suspected that the observed levels of radionuclide concentrations (especially Unat) could be related to the uranium mining and milling operations. Regardless of the origin of these concentrations, the fear that undesired health effects are taking place (e.g. an increase in cancer incidence) remains, despite the fact that no evidence based on epidemiological studies is available. This paper presents the connections between the local hydrogeology and the radiological characterization of groundwater in the areas neighboring the uranium production center, in order to understand the implications for human health risk due to the ingestion of groundwater. The risk assessment was performed taking into account both the radiological and the toxicological risks. Samples from 12 wells were collected and determinations of Unat, Thnat, 226Ra, 228Ra and 210Pb were performed. The radiation-related risks were estimated for adults and children by the calculation of annual effective doses. The potential non-carcinogenic effects due to the ingestion of uranium were evaluated by estimation of the hazard index (HI). Monte Carlo simulations were used to calculate the uncertainty associated with these estimates, i.e. the 95% confidence interval
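The two quantities estimated, annual effective dose and hazard index, are simple products and ratios once concentrations and intake assumptions are fixed; a sketch with placeholder coefficients (real assessments use ICRP ingestion dose coefficients and regulatory reference doses, not the example numbers below):

```python
def annual_effective_dose_mSv(conc_Bq_L, intake_L_per_yr, dose_coeff_Sv_Bq):
    """Ingestion dose: activity concentration x annual water intake x
    ingestion dose coefficient, converted from Sv to mSv. The dose
    coefficient is nuclide- and age-group-specific."""
    return conc_Bq_L * intake_L_per_yr * dose_coeff_Sv_Bq * 1e3

def hazard_index(conc_mg_L, intake_L_per_day, body_weight_kg, rfd_mg_kg_day):
    """Non-carcinogenic hazard index for uranium as a chemical toxicant:
    chronic daily intake (mg/kg/day) divided by the reference dose."""
    cdi = conc_mg_L * intake_L_per_day / body_weight_kg
    return cdi / rfd_mg_kg_day
```

In a probabilistic assessment each input (concentration, intake, body weight) is drawn from a distribution and these functions are evaluated per draw, yielding the confidence interval the abstract mentions.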

  16. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Carbon-Related Exhaust Emission Values § 600.207-12 Calculation and use of vehicle-specific 5-cycle-based...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... vehicle-specific 5-cycle city and highway fuel economy and CO2 emission values for each...

  17. Raman spectroscopy study and first-principles calculations of the interaction between nucleic acid bases and carbon nanotubes.

    PubMed

    Stepanian, Stepan G; Karachevtsev, Maksym V; Glamazda, Alexander Yu; Karachevtsev, Victor A; Adamowicz, L

    2009-04-16

In this work, we have used Raman spectroscopy and quantum chemical methods (MP2 and DFT) to study the interactions between nucleic acid bases (NABs) and single-walled carbon nanotubes (SWCNT). We found that the appearance of the interaction between the nanotubes and the NABs is accompanied by a spectral shift of the high-frequency component of the SWCNT G band in the Raman spectrum to a lower frequency region. The value of this shift varies from 0.7 to 1.3 cm(-1) for the metallic nanotubes and from 2.1 to 3.2 cm(-1) for the semiconducting nanotubes. Calculations of the interaction energies between the NABs and a fragment of the zigzag (10,0) carbon nanotube performed at the MP2/6-31++G(d,p)[NAB atoms]|6-31G(d)[nanotube atoms] level of theory, while accounting for the basis set superposition error during geometry optimization, allowed us to order the NABs by decreasing interaction strength: guanine (-67.1 kJ mol(-1)) > adenine (-59.0 kJ mol(-1)) > cytosine (-50.3 kJ mol(-1)) ≈ thymine (-50.2 kJ mol(-1)) > uracil (-44.2 kJ mol(-1)). The MP2 equilibrium structures and the interaction energies were used as reference points in the evaluation of the ability of various functionals in the DFT method to predict those structures and energies. We showed that the M05, MPWB1K, and MPW1B95 density functionals are capable of correctly predicting the SWCNT-NAB geometries but not the interaction energies, while the M05-2X functional is capable of correctly predicting both the geometries and the interaction energies.
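The correction implied by "accounting for the basis set superposition error" is the counterpoise scheme: monomer energies are re-evaluated in the full dimer basis before subtraction. A sketch, with the NAB ordering reproduced from the energies quoted in the abstract:

```python
def counterpoise_interaction(e_complex, e_mono_a_dimer_basis, e_mono_b_dimer_basis):
    """Counterpoise-corrected interaction energy: energy of the complex
    minus the monomer energies computed in the complete dimer basis,
    which removes the basis set superposition error (BSSE)."""
    return e_complex - e_mono_a_dimer_basis - e_mono_b_dimer_basis

# MP2 interaction energies from the abstract (kJ/mol); more negative = stronger
energies = {"guanine": -67.1, "adenine": -59.0, "cytosine": -50.3,
            "thymine": -50.2, "uracil": -44.2}
order = sorted(energies, key=energies.get)  # strongest binder first
```

The ranking recovered this way (guanine strongest, uracil weakest) is the one the abstract reports.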

  18. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable to special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space charge dominated photoemission processes.

  19. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable to special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space charge dominated photoemission processes.
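The efficiency gain of any fast multipole method comes from replacing a well-separated cluster of sources with a truncated expansion instead of summing pairwise. The lowest-order (monopole) version of that idea can be sketched as follows (1D with a softened kernel; illustrative only, not the differential-algebra implementation):

```python
def direct_potential(targets, sources, charges):
    """O(n^2) pairwise Coulomb-like potential (softened 1/r, 1D)."""
    return [sum(q / (abs(x - s) + 1e-9) for s, q in zip(sources, charges))
            for x in targets]

def monopole_potential(targets, sources, charges):
    """Far-field monopole approximation: the whole source cluster is
    replaced by its total charge placed at its charge centroid -- the
    lowest-order expansion an FMM applies to well-separated boxes."""
    q_tot = sum(charges)
    centroid = sum(s * q for s, q in zip(sources, charges)) / q_tot
    return [q_tot / abs(x - centroid) for x in targets]
```

For a target far from the cluster the two agree to high accuracy, and a real FMM organizes such approximations hierarchically (with higher-order terms) to reach O(n) overall cost.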

  20. Thermal Expansion Calculation of Silicate Glasses at 210°C, Based on the Systematic Analysis of Global Databases

    SciTech Connect

    Fluegel, Alex

    2010-10-01

Thermal expansion data for more than 5500 compositions of silicate glasses were analyzed statistically. These data were gathered from the scientific literature, summarized in SciGlass© 6.5, a new version of the well-known glass property database and information system. The analysis resulted in a data reduction from 5500 glasses to a core of 900, where the majority of the published values lie within commercial glass composition ranges and were obtained over the temperature range 20 to 500°C. A multiple regression model for the linear thermal expansivity at 210°C, including an error formula and detailed application limits, was developed based on those 900 core data from over 100 publications. The accuracy of the model predictions is improved roughly twofold compared with previous work because systematic errors from certain laboratories were investigated and corrected. The standard model error (precision) was 0.37 ppm/K, with R² = 0.985. The 95% confidence interval for individual predictions largely depends on the glass composition of interest and the composition uncertainty. The model is valid for commercial silicate glasses containing Na2O, CaO, Al2O3, K2O, MgO, B2O3, Li2O, BaO, ZrO2, TiO2, ZnO, PbO, SrO, Fe2O3, CeO2, fining agents, and coloring and de-coloring components. In addition, a special model for ultra-low expansion glasses in the system SiO2-TiO2 is presented. The calculations allow optimizing the time-temperature cooling schedule of glassware, the development of glass sealing materials, and the design of specialty glass products that are exposed to varying temperatures.
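A composition-based regression model of this kind evaluates as an intercept plus oxide-wise linear terms; a sketch with hypothetical coefficients (not Fluegel's published values):

```python
def expansivity_ppm_K(composition_mol_pct: dict, coeffs: dict,
                      intercept: float) -> float:
    """Additive composition model alpha = b0 + sum_i b_i * x_i, the form
    taken by a multiple-regression glass-property model. Oxides absent
    from `coeffs` contribute nothing (they fold into the silica base)."""
    return intercept + sum(coeffs.get(ox, 0.0) * x
                           for ox, x in composition_mol_pct.items())

# Hypothetical coefficients in ppm/K per mol% -- for illustration only
coeffs = {"Na2O": 0.40, "CaO": 0.13, "Al2O3": -0.04}
alpha = expansivity_ppm_K({"Na2O": 14.0, "CaO": 9.0, "Al2O3": 1.5}, coeffs, 0.9)
```

The published model additionally carries an error formula and composition limits, so a faithful implementation would also report the prediction interval, not just the point value.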

  1. Evaluation of a deterministic grid-based Boltzmann solver (GBBS) for voxel-level absorbed dose calculations in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Mikell, Justin; Cheenu Kappadath, S.; Wareing, Todd; Erwin, William D.; Titt, Uwe; Mourtada, Firas

    2016-06-01

To evaluate the 3D Grid-based Boltzmann Solver (GBBS) code ATTILA® for coupled electron and photon transport in the nuclear medicine energy regime for electron (beta, Auger and internal conversion electrons) and photon (gamma, x-ray) sources. Codes rewritten based on ATTILA are used clinically for both high-energy photon teletherapy and 192Ir sealed source brachytherapy; little information exists for using the GBBS to calculate voxel-level absorbed doses in nuclear medicine. We compared DOSXYZnrc Monte Carlo (MC) with published voxel-S-values to establish MC as truth. GBBS was investigated for mono-energetic 1.0, 0.1, and 0.01 MeV electron and photon sources as well as 131I and 90Y radionuclides. We investigated convergence of GBBS by analyzing different meshes (M0, M1, M2), energy group structures (E0, E1, E2) for each radionuclide component, angular quadrature orders (S4, S8, S16), and scattering order expansions (P0–P6); higher indices imply finer discretization. We compared GBBS to MC in (1) voxel-S-value geometry for soft tissue, lung, and bone, and (2) a source at the interface between combinations of lung, soft tissue, and bone. Excluding Auger and conversion electrons, MC agreed within ≈5% of published source voxel absorbed doses. For the finest discretization, most GBBS absorbed doses in the source voxel changed by less than 1% compared to the next finest discretization along each phase space variable, indicating sufficient convergence. For the finest discretization, agreement with MC in the source voxel ranged from −3% to −20%, with larger differences at lower energies (−3% for 1 MeV electron in lung to −20% for 0.01 MeV photon in bone); similar agreement was found for the interface geometries. Differences between GBBS and MC in the source voxel for 90Y and 131I were −6%. The GBBS ATTILA was benchmarked against MC in the nuclear medicine regime. GBBS can be a
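The convergence criterion used here, the solution changing by less than 1% between the two finest discretizations along each phase-space variable, can be sketched as:

```python
def converged(dose_fine: float, dose_finest: float, tol: float = 0.01) -> bool:
    """Discretization-convergence check: the finest-grid solution differs
    from the next-finest by less than `tol` (relative), for one
    phase-space variable (mesh, energy groups, quadrature, or scattering
    order) with the others held at their finest settings."""
    return abs(dose_finest - dose_fine) / abs(dose_fine) < tol
```

In a full study this test is repeated along each discretization axis; only when all axes pass is the deterministic solution treated as grid-converged and compared against Monte Carlo.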

  2. Evaluation of a deterministic grid-based Boltzmann solver (GBBS) for voxel-level absorbed dose calculations in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Mikell, Justin; Cheenu Kappadath, S.; Wareing, Todd; Erwin, William D.; Titt, Uwe; Mourtada, Firas

    2016-06-01

To evaluate the 3D Grid-based Boltzmann Solver (GBBS) code ATTILA® for coupled electron and photon transport in the nuclear medicine energy regime for electron (beta, Auger and internal conversion electrons) and photon (gamma, x-ray) sources. Codes rewritten based on ATTILA are used clinically for both high-energy photon teletherapy and 192Ir sealed source brachytherapy; little information exists for using the GBBS to calculate voxel-level absorbed doses in nuclear medicine. We compared DOSXYZnrc Monte Carlo (MC) with published voxel-S-values to establish MC as truth. GBBS was investigated for mono-energetic 1.0, 0.1, and 0.01 MeV electron and photon sources as well as 131I and 90Y radionuclides. We investigated convergence of GBBS by analyzing different meshes (M0, M1, M2), energy group structures (E0, E1, E2) for each radionuclide component, angular quadrature orders (S4, S8, S16), and scattering order expansions (P0-P6); higher indices imply finer discretization. We compared GBBS to MC in (1) voxel-S-value geometry for soft tissue, lung, and bone, and (2) a source at the interface between combinations of lung, soft tissue, and bone. Excluding Auger and conversion electrons, MC agreed within ≈5% of published source voxel absorbed doses. For the finest discretization, most GBBS absorbed doses in the source voxel changed by less than 1% compared to the next finest discretization along each phase space variable, indicating sufficient convergence. For the finest discretization, agreement with MC in the source voxel ranged from -3% to -20%, with larger differences at lower energies (-3% for 1 MeV electron in lung to -20% for 0.01 MeV photon in bone); similar agreement was found for the interface geometries. Differences between GBBS and MC in the source voxel for 90Y and 131I were -6%. The GBBS ATTILA was benchmarked against MC in the nuclear medicine regime. GBBS can be a viable

  3. Evaluation of a deterministic grid-based Boltzmann solver (GBBS) for voxel-level absorbed dose calculations in nuclear medicine.

    PubMed

    Mikell, Justin; Cheenu Kappadath, S; Wareing, Todd; Erwin, William D; Titt, Uwe; Mourtada, Firas

    2016-06-21

To evaluate the 3D Grid-based Boltzmann Solver (GBBS) code ATTILA® for coupled electron and photon transport in the nuclear medicine energy regime for electron (beta, Auger and internal conversion electrons) and photon (gamma, x-ray) sources. Codes rewritten based on ATTILA are used clinically for both high-energy photon teletherapy and (192)Ir sealed source brachytherapy; little information exists for using the GBBS to calculate voxel-level absorbed doses in nuclear medicine. We compared DOSXYZnrc Monte Carlo (MC) with published voxel-S-values to establish MC as truth. GBBS was investigated for mono-energetic 1.0, 0.1, and 0.01 MeV electron and photon sources as well as (131)I and (90)Y radionuclides. We investigated convergence of GBBS by analyzing different meshes (M0, M1, M2), energy group structures (E0, E1, E2) for each radionuclide component, angular quadrature orders (S4, S8, S16), and scattering order expansions (P0-P6); higher indices imply finer discretization. We compared GBBS to MC in (1) voxel-S-value geometry for soft tissue, lung, and bone, and (2) a source at the interface between combinations of lung, soft tissue, and bone. Excluding Auger and conversion electrons, MC agreed within ≈5% of published source voxel absorbed doses. For the finest discretization, most GBBS absorbed doses in the source voxel changed by less than 1% compared to the next finest discretization along each phase space variable, indicating sufficient convergence. For the finest discretization, agreement with MC in the source voxel ranged from -3% to -20%, with larger differences at lower energies (-3% for 1 MeV electron in lung to -20% for 0.01 MeV photon in bone); similar agreement was found for the interface geometries. Differences between GBBS and MC in the source voxel for (90)Y and (131)I were -6%. The GBBS ATTILA was benchmarked against MC in the nuclear medicine regime. GBBS can be a

  4. Evaluation of a deterministic grid-based Boltzmann solver (GBBS) for voxel-level absorbed dose calculations in nuclear medicine.

    PubMed

    Mikell, Justin; Cheenu Kappadath, S; Wareing, Todd; Erwin, William D; Titt, Uwe; Mourtada, Firas

    2016-06-21

To evaluate the 3D Grid-based Boltzmann Solver (GBBS) code ATTILA® for coupled electron and photon transport in the nuclear medicine energy regime for electron (beta, Auger and internal conversion electrons) and photon (gamma, x-ray) sources. Codes rewritten based on ATTILA are used clinically for both high-energy photon teletherapy and (192)Ir sealed source brachytherapy; little information exists for using the GBBS to calculate voxel-level absorbed doses in nuclear medicine. We compared DOSXYZnrc Monte Carlo (MC) with published voxel-S-values to establish MC as truth. GBBS was investigated for mono-energetic 1.0, 0.1, and 0.01 MeV electron and photon sources as well as (131)I and (90)Y radionuclides. We investigated convergence of GBBS by analyzing different meshes (M0, M1, M2), energy group structures (E0, E1, E2) for each radionuclide component, angular quadrature orders (S4, S8, S16), and scattering order expansions (P0-P6); higher indices imply finer discretization. We compared GBBS to MC in (1) voxel-S-value geometry for soft tissue, lung, and bone, and (2) a source at the interface between combinations of lung, soft tissue, and bone. Excluding Auger and conversion electrons, MC agreed within ≈5% of published source voxel absorbed doses. For the finest discretization, most GBBS absorbed doses in the source voxel changed by less than 1% compared to the next finest discretization along each phase space variable, indicating sufficient convergence. For the finest discretization, agreement with MC in the source voxel ranged from -3% to -20%, with larger differences at lower energies (-3% for 1 MeV electron in lung to -20% for 0.01 MeV photon in bone); similar agreement was found for the interface geometries. Differences between GBBS and MC in the source voxel for (90)Y and (131)I were -6%. The GBBS ATTILA was benchmarked against MC in the nuclear medicine regime. GBBS can be a

  5. WE-A-17A-07: Evaluation of a Grid-Based Boltzmann Solver for Nuclear Medicine Voxel-Based Dose Calculations

    SciTech Connect

    Mikell, J; Kappadath, S; Wareing, T; Mourtada, F

    2014-06-15

    Purpose: Grid-based Boltzmann solvers (GBBS) have been successfully implemented in radiation oncology clinics for dose calculations of external photon beams and 192Ir sealed-source brachytherapy. We report on the evaluation of a GBBS for nuclear medicine voxel-based absorbed doses. Methods: Voxel-S-values were calculated for monoenergetic betas and photons (1, 0.1, 0.01 MeV), 90Y, and 131I for 3 mm voxel sizes using Monte Carlo (DOSXYZnrc) and GBBS (Attila 8.1-beta5, Transpire). The source distribution was uniform throughout a single voxel. The material was an infinite 1.04 g/cc soft tissue slab. To explore convergence properties of the GBBS, 3 tetrahedral meshes, 3 energy group structures, 3 square Chebyshev-Legendre quadrature set orders (Sn), and 4-7 spherical harmonic expansion terms (Pn) were investigated, for a total of 168 discretizations per source. The finest mesh, energy group, and quadrature sets are 8×, 3×, and 16× finer, respectively, than the corresponding coarse discretization. GBBS cross sections were generated with full electron-photon coupling using the vendor's extended CEPXS code. For accuracy, percent differences (%Δ) in source-voxel absorbed doses between MC and GBBS are reported for the coarsest and finest discretizations. For convergence, ratios of the two finest discretization solutions are reported along each variable. Results: For 1 MeV, 0.1 MeV, 0.01 MeV, 90Y, and 131I beta sources, the %Δ in the source voxel for the (coarsest, finest) discretizations were (+2.0, −6.4), (−8.0, −7.5), (−13.8, −13.4), (+0.9, −5.5), and (−10.1, −9.0), respectively. The corresponding %Δ for photons were (+33.7, −7.1), (−9.4, −9.8), (−17.4, −15.2), and (−1.7, −7.7), respectively. For betas, the convergence ratios of mesh, energy, Sn, and Pn ranged from 0.991-1.000. For gammas, the convergence ratios of mesh, Sn, and Pn ranged from 0.998-1.003, while the ratio for energy ranged from 0.964-1.001. Conclusions: GBBS is
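
The accuracy (%Δ) and convergence-ratio metrics quoted in this abstract reduce to simple arithmetic; a minimal sketch (function names are ours, not the authors'):

```python
def percent_diff(gbbs_dose, mc_dose):
    """Percent difference of a GBBS source-voxel dose relative to MC 'truth'."""
    return 100.0 * (gbbs_dose - mc_dose) / mc_dose

def convergence_ratio(finest, next_finest):
    """Ratio of the two finest discretization solutions along one phase-space
    variable (mesh, energy groups, Sn order, or Pn order); ~1 means converged."""
    return finest / next_finest
```

A GBBS dose 6.4% below the MC value yields %Δ = −6.4, matching the reporting convention above.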

  6. Static and dynamic structural-sensitivity derivative calculations in the finite-element-based Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.; Adelman, H. M.

    1984-01-01

    The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
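
The analytical-versus-finite-difference comparison described in this abstract can be illustrated on a toy two-element axial bar (a hypothetical model of ours, not EAL's): the analytical (direct-differentiation) sensitivity solves du/dA = -K⁻¹(dK/dA)u, while central finite differences perturb the area and re-solve.

```python
import numpy as np

# Hypothetical two-element axial bar (not from EAL): node 0 fixed,
# tip load P at node 2; element stiffness k_i = E*A_i/L.
E_MOD, L_EL, P_TIP = 70e9, 1.0, 1000.0

def solve_disp(areas):
    """Assemble and solve K(areas) u = f for the displacements of nodes 1 and 2."""
    k1, k2 = E_MOD * areas[0] / L_EL, E_MOD * areas[1] / L_EL
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.linalg.solve(K, np.array([0.0, P_TIP]))

def analytic_sens(areas):
    """Direct differentiation: du/dA1 = -K^{-1} (dK/dA1) u."""
    u = solve_disp(areas)
    k1, k2 = E_MOD * areas[0] / L_EL, E_MOD * areas[1] / L_EL
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    dK = np.array([[E_MOD / L_EL, 0.0], [0.0, 0.0]])  # dK/dA1
    return np.linalg.solve(K, -dK @ u)

def fd_sens(areas, h=1e-8):
    """Central finite-difference sensitivity with respect to A1."""
    ap, am = areas.copy(), areas.copy()
    ap[0] += h
    am[0] -= h
    return (solve_disp(ap) - solve_disp(am)) / (2.0 * h)
```

For this model the tip sensitivity is known in closed form, du2/dA1 = -P·L/(E·A1²), so both methods can be checked against it; the same trade-off the abstract measures (re-solves for finite differences versus one extra right-hand side for the analytical method) is visible even at this scale.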

  7. Vibrational spectroscopic, first-order hyperpolarizability and HOMO, LUMO studies of 4-chloro-2-(trifluoromethyl) aniline based on DFT calculations

    NASA Astrophysics Data System (ADS)

    Arivazhagan, M.; Subhasini, V. P.; Austine, A.

    2012-02-01

    The Fourier-transform infrared and FT-Raman spectra of 4-chloro-2-(trifluoromethyl)aniline (4C2TFA) were recorded in the regions 4000-400 cm-1 and 3500-50 cm-1, respectively. Quantum chemical calculations of energies, geometrical structure, and vibrational wavenumbers of 4C2TFA were carried out by the density functional theory (DFT/B3LYP) method with the 6-311+G(d,p) and 6-311++G(d,p) basis sets. The difference between the observed and scaled wavenumber values of most of the fundamentals is very small. The values of the total dipole moment (μ) and the first-order hyperpolarizability (β) of the investigated compound were computed using B3LYP/6-311++G(d,p) calculations. The calculated results also show that 4C2TFA might have microscopic non-linear optical (NLO) behavior with non-zero values. A detailed interpretation of the infrared and Raman spectra of 4C2TFA is also reported. The calculated HOMO-LUMO energy gap shows that charge transfer occurs within the molecule.

  8. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PRINCIPLES OF REASONABLE COST REIMBURSEMENT; PAYMENT FOR END-STAGE RENAL DISEASE SERVICES... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the...

  9. 42 CFR 413.220 - Methodology for calculating the per-treatment base rate under the ESRD prospective payment system...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PRINCIPLES OF REASONABLE COST REIMBURSEMENT; PAYMENT FOR END-STAGE RENAL DISEASE SERVICES... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the...

  10. Fission cross section calculations for 209Bi target nucleus based on fission reaction models in high energy regions

    NASA Astrophysics Data System (ADS)

    Kaplan, Abdullah; Capali, Veli; Ozdogan, Hasan

    2015-07-01

    Implementation of projects for new-generation nuclear power plants requires solving materials science and technological issues in the development of reactor materials. Melts of heavy metals (Pb, Bi, and Pb-Bi), owing to their nuclear and thermophysical properties, are candidate coolants for fast reactors and accelerator-driven systems (ADS). In this study, α, γ, p, n and 3He induced fission cross section calculations for the 209Bi target nucleus in high-energy regions for the (α,f), (γ,f), (p,f), (n,f) and (3He,f) reactions have been investigated using different fission reaction models. The Mamdouh Table, Sierk, Rotating Liquid Drop, and Fission Path theoretical fission-barrier models of the TALYS 1.6 code have been used for the fission cross section calculations. The calculated results have been compared with experimental data taken from the EXFOR database. The TALYS 1.6 Sierk model calculations exhibit generally good agreement with the experimental measurements for all reactions used in this study.

  11. Accurate high level ab initio-based global potential energy surface and dynamics calculations for ground state of CH2(+).

    PubMed

    Li, Y Q; Zhang, P Y; Han, K L

    2015-03-28

    A global many-body expansion potential energy surface is reported for the electronic ground state of CH2(+) by fitting high level ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pV6Z basis set. The topographical features of the new global potential energy surface are examined in detail and found to be in good agreement with those calculated directly from the raw ab initio energies, as well as with previous calculations available in the literature. In turn, in order to validate the potential energy surface, a test theoretical study of the reaction CH(+)(X(1)Σ(+))+H((2)S)→C(+)((2)P)+H2(X(1)Σg(+)) has been carried out with the time-dependent wavepacket method on the title potential energy surface. The total integral cross sections and the rate coefficients have been calculated; the results show that the new potential energy surface can be recommended both for dynamics studies of any type and as a building block for constructing the potential energy surfaces of larger C(+)/H containing systems.
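
The step from integral cross sections to rate coefficients is a thermal average of σ(E)·v over the Maxwell-Boltzmann collision-energy distribution; a generic sketch in SI units (not the authors' wavepacket code):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def rate_coefficient(sigma_of_E, mu_kg, T, n=20000, emax_kt=30.0):
    """k(T) = sqrt(8/(pi*mu*(kB*T)^3)) * integral of sigma(E)*E*exp(-E/kB*T) dE,
    the Maxwell-Boltzmann thermal average of sigma(E)*v over collision energy E.
    sigma_of_E maps an energy array (J) to cross sections (m^2); mu_kg is the
    reduced mass; returns k in m^3/s."""
    kT = KB * T
    E = np.linspace(0.0, emax_kt * kT, n)
    y = sigma_of_E(E) * E * np.exp(-E / kT)
    dE = E[1] - E[0]
    integral = dE * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoid rule
    return np.sqrt(8.0 / (np.pi * mu_kg * kT**3)) * integral
```

For a constant cross section this reduces to the closed form k = σ·⟨v⟩ with ⟨v⟩ = sqrt(8·kB·T/(π·μ)), which provides a handy sanity check of the quadrature.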

  12. Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don

    2015-03-01

    We combine constrained first-principles density functional theory with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that adding the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For the ab initio calculations, we have extended our Locally Self-consistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction, and we apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently, in order to investigate Ni, where the fluctuation in the magnitude of the local magnetic moments is of importance equal to their directional fluctuations. We will present our results for Ni, comparing calculations that consider only the moment directions with those that include fluctuations of the magnetic moment magnitude in the Curie temperature. This research was sponsored by the Department of Energy, Offices of Basic Energy Science and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.
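
The Replica-Exchange Wang-Landau sampling with constrained first-principles moments is elaborate machinery; the underlying flat-histogram idea can be sketched on a toy system of independent spins whose density of states is exactly binomial (entirely our illustrative construction, not the LSMS workflow):

```python
import math
import random

def wang_landau_dos(n_spins=10, stages=18, steps_per_stage=20000, seed=1):
    """Wang-Landau estimate of ln g(k), where k = number of up spins among
    n_spins independent spins; the exact answer is g(k) = C(n_spins, k).
    Simplification: a fixed number of steps per stage replaces the usual
    histogram-flatness check."""
    rng = random.Random(seed)
    spins = [True] * n_spins
    k = n_spins                      # current "energy" index
    log_g = [0.0] * (n_spins + 1)    # running estimate of ln g(k)
    ln_f = 1.0                       # modulation factor, halved each stage
    for _ in range(stages):
        for _ in range(steps_per_stage):
            i = rng.randrange(n_spins)
            k_new = k - 1 if spins[i] else k + 1
            # accept the flip with probability min(1, g(k)/g(k_new)),
            # which drives the energy histogram toward flatness
            if rng.random() < math.exp(min(0.0, log_g[k] - log_g[k_new])):
                spins[i] = not spins[i]
                k = k_new
            log_g[k] += ln_f
        ln_f *= 0.5
    return [lg - log_g[0] for lg in log_g]  # normalize so g(0) = 1
```

With these settings the estimate recovers ln g(5) ≈ ln C(10,5) = ln 252; the production method in the abstract replaces the toy energy function with constrained DFT total energies and runs many such walkers in parallel with replica exchange.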

  13. Testing the Quick Seismic Event Locator and Magnitude Calculator (SSL_Calc) by Marsite Project Data Base

    NASA Astrophysics Data System (ADS)

    Tunc, Suleyman; Tunc, Berna; Caka, Deniz; Baris, Serif

    2016-04-01

    Quickly locating seismic events and calculating their size is one of the most important and challenging issues, especially in real-time seismology. In this study, we developed a Matlab application, called SSL_Calc, to locate seismic events and calculate their magnitudes (local magnitude and empirical moment magnitude) using a single station. This newly developed software has been tested on all stations of the Marsite project "New Directions in Seismic Hazard Assessment through Focused Earth Observation in the Marmara Supersite-MARsite". The SSL_Calc algorithm is suitable both for velocity and acceleration sensors. Data have to be in GCF (Güralp Compressed Format). Online or offline data can be selected in the SCREAM software (from Güralp Systems Limited) and transferred to SSL_Calc. To locate an event, P- and S-wave picks have to be marked manually in the SSL_Calc window. During magnitude calculation, the instrument response is removed and the record is converted to true displacement in millimeters. The displacement data are then converted to Wood-Anderson seismometer output using the parameters Z=[0;0]; P=[-6.28+4.71j; -6.28-4.71j]; A0=[2080]. For local magnitude calculation, the maximum displacement amplitude (A) and distance (dist) are used in formula (1) for distances up to 200 km and formula (2) for more than 200 km. ML=log10(A)-(-1.118-0.0647*dist+0.00071*dist2-3.39E-6*dist3+5.71e-9*dist4) (1) ML=log10(A)+(2.1173+0.0082*dist-0.0000059628*dist2) (2) Following the local magnitude calculation, the code calculates two empirical moment magnitudes using formulas (3) (Akkar et al., 2010) and (4) (Ulusay et al., 2004). Mw=0.953*ML+0.422 (3) Mw=0.7768*ML+1.5921 (4) SSL_Calc is user-friendly software that offers individual users a practical solution for event location and ML/Mw calculation.
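
Formulas (1)-(4) above translate directly into code; a minimal sketch (function names are ours, not SSL_Calc's), taking the peak Wood-Anderson displacement in mm and the distance in km:

```python
import math

def local_magnitude(amp_mm, dist_km):
    """ML from peak Wood-Anderson displacement (mm) and distance (km),
    using formula (1) up to 200 km and formula (2) beyond."""
    if dist_km <= 200.0:
        corr = (-1.118 - 0.0647 * dist_km + 0.00071 * dist_km**2
                - 3.39e-6 * dist_km**3 + 5.71e-9 * dist_km**4)
        return math.log10(amp_mm) - corr
    return math.log10(amp_mm) + (2.1173 + 0.0082 * dist_km
                                 - 0.0000059628 * dist_km**2)

def moment_magnitudes(ml):
    """Empirical Mw from ML: formula (3), Akkar et al. (2010), and
    formula (4), Ulusay et al. (2004)."""
    return 0.953 * ml + 0.422, 0.7768 * ml + 1.5921
```

For example, a 1 mm peak amplitude at 100 km gives ML = 3.307, since log10(1) = 0 and the distance correction evaluates to −3.307.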

  14. All-electron self-consistent GW in the Matsubara-time domain: Implementation and benchmarks of semiconductors and insulators

    NASA Astrophysics Data System (ADS)

    Chu, Iek-Heng; Trinastic, Jonathan P.; Wang, Yun-Peng; Eguiluz, Adolfo G.; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping

    2016-03-01

    The GW approximation is a well-known method to improve electronic structure predictions calculated within density functional theory. In this work, we have implemented a computationally efficient GW approach that calculates central properties within the Matsubara-time domain using a modified version of elk, the full-potential linearized augmented plane wave (FP-LAPW) package. The continuous-pole expansion (CPE), a recently proposed analytic continuation method, has been incorporated and compared to the widely used Padé approximation. Full crystal symmetry has been employed for computational speedup. We have applied our approach to 18 well-studied semiconductors/insulators that cover a wide range of band gaps, computed at the levels of single-shot G0W0, partially self-consistent GW0, and fully self-consistent GW (full-GW), in conjunction with the diagonal approximation. Our calculations show that G0W0 leads to band gaps that agree well with experiment for simple s-p electron systems, whereas full-GW is required for improving the band gaps in 3d electron systems. In addition, GW0 almost always predicts larger band gap values compared to full-GW, likely due to the substantial underestimation of screening effects as well as the diagonal approximation. Both the CPE method and the Padé approximation lead to similar band gaps for most systems except strontium titanate, suggesting that further investigation into the latter approximation is necessary for strongly correlated systems. Moreover, the calculated cation d-band energies suggest that both full-GW and GW0 lead to results in good agreement with experiment. Our computed band gaps serve as important benchmarks for the accuracy of the Matsubara-time GW approach.
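
The Padé analytic continuation referred to here is commonly implemented with the Thiele continued-fraction recursion of Vidberg and Serene; a minimal sketch (our toy example, not the elk implementation), fitting function values at imaginary points and then evaluating on the real axis:

```python
import numpy as np

def thiele_pade(z_pts, u_vals):
    """Thiele continued-fraction Pade coefficients a_p from values u_vals
    at complex points z_pts (Vidberg-Serene recursion of inverse differences)."""
    z = np.asarray(z_pts, dtype=complex)
    g = np.asarray(u_vals, dtype=complex).copy()
    n = len(z)
    a = np.empty(n, dtype=complex)
    a[0] = g[0]
    for p in range(1, n):
        # g_p(z_i) = (g_{p-1}(z_{p-1}) - g_{p-1}(z_i)) / ((z_i - z_{p-1}) g_{p-1}(z_i))
        g[p:] = (a[p - 1] - g[p:]) / ((z[p:] - z[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def pade_eval(a, z_pts, z):
    """Evaluate C(z) = a0 / (1 + a1(z-z0) / (1 + a2(z-z1) / (1 + ...)))."""
    z_pts = np.asarray(z_pts, dtype=complex)
    val = 1.0 + 0j
    for p in range(len(a) - 1, 0, -1):
        val = 1.0 + a[p] * (z - z_pts[p - 1]) / val
    return a[0] / val
```

As a sanity check, a low-order rational test function sampled at a few points on the imaginary axis is reproduced exactly by the continued fraction, both at the sample points and on the real axis; production codes sample the self-energy at Matsubara frequencies instead.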

  15. Evaluation of the efficiency and effectiveness of independent dose calculation followed by machine log file analysis against conventional measurement based IMRT QA.

    PubMed

    Sun, Baozhou; Rangaraj, Dharanipathy; Boddu, Sunita; Goddu, Murty; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar; Mutic, Sasa

    2012-09-06

    Experimental methods are commonly used for patient-specific IMRT delivery verification. A variety of IMRT QA techniques have been proposed and clinically used, with a common understanding that no single method can detect all possible errors. The aim of this work was to compare the efficiency and effectiveness of independent dose calculation followed by machine log file analysis to conventional measurement-based methods in detecting errors in IMRT delivery. Sixteen IMRT treatment plans (5 head-and-neck, 3 rectum, 3 breast, and 5 prostate plans) created with a commercial treatment planning system (TPS) were recalculated on a QA phantom. All treatment plans underwent ion chamber (IC) and 2D diode array measurements. The same set of plans was also recomputed with another commercial TPS, and the two sets of calculations were compared. The deviations between dosimetric measurements and independent dose calculation were evaluated. The comparisons included evaluations of DVHs and point doses calculated by the two TPSs. Machine log files were captured during pretreatment composite point dose measurements and analyzed to verify data transfer and performance of the delivery machine. Average deviations between IC measurements and point dose calculations with the two TPSs for head-and-neck plans were 1.2 ± 1.3% and 1.4 ± 1.6%, respectively. For 2D diode array measurements, the mean gamma value with 3% dose difference and 3 mm distance-to-agreement was within 1.5% for 13 of 16 plans. The mean 3D dose differences calculated from the two TPSs were within 3% for head-and-neck cases and within 2% for the other plans. The machine log file analysis showed that the gantry angle, jaw position, collimator angle, and MUs were consistent as planned, and the maximal MLC position error was less than 0.5 mm. The independent dose calculation followed by machine log analysis takes an average of 47 ± 6 minutes, while the experimental approach (using IC and
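
The 3%/3 mm diode-array comparison in this abstract is a gamma analysis; a minimal 1D global-gamma sketch (our simplification: both profiles on the same grid, no sub-grid interpolation):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global gamma for two 1D dose profiles on a common grid:
    gamma_i = min over j of sqrt(((D_eval_j - D_ref_i)/norm)^2 + ((x_j - x_i)/dta)^2),
    where norm = dose_tol * max(D_ref) (global normalization)."""
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    norm = dose_tol * dose_ref.max()
    x = np.arange(dose_ref.size) * spacing_mm
    out = np.empty(dose_ref.size)
    for i in range(dose_ref.size):
        dd = (dose_eval - dose_ref[i]) / norm   # dose-difference term
        dx = (x - x[i]) / dta_mm                # distance-to-agreement term
        out[i] = np.sqrt(dd**2 + dx**2).min()
    return out
```

The pass rate quoted in QA reports is then simply the fraction of points with gamma ≤ 1, e.g. `np.mean(gamma_index_1d(ref, ev, 1.0) <= 1.0)`.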

  16. Relativistic shell model calculations

    NASA Astrophysics Data System (ADS)

    Furnstahl, R. J.

    1986-06-01

    Shell model calculations are discussed in the context of a relativistic model of nuclear structure based on renormalizable quantum field theories of mesons and baryons (quantum hadrodynamics). The relativistic Hartree approximation to the full field theory, with parameters determined from bulk properties of nuclear matter, predicts a shell structure in finite nuclei. Particle-hole excitations in finite nuclei are described in an RPA calculation based on this QHD ground state. The particle-hole interaction is prescribed by the Hartree ground state, with no additional parameters. Meson retardation is neglected in deriving the RPA equations, but it is found to have negligible effects on low-lying states. The full Dirac matrix structure is maintained throughout the calculation; no nonrelativistic reductions are made. Despite sensitive cancellations in the ground state calculation, reasonable excitation spectra are obtained for light nuclei. The effects of including charged mesons, problems with heavy nuclei, and prospects for improved and extended calculations are discussed.

  17. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and... fuel economy and CO2 emission values for vehicle configurations. (a) Fuel economy and CO2...

  18. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and...

  19. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-specific 5-cycle-based fuel economy and CO2 emission values for vehicle configurations. 600.207-12 Section... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and... fuel economy and CO2 emission values for vehicle configurations. (a) Fuel economy and CO2...

  20. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and...