Mizutani, U; Inukai, M; Sato, H; Zijlstra, E S; Lin, Q
2014-05-16
There are three key electronic parameters in elucidating the physics behind the Hume–Rothery electron concentration rule: the square of the Fermi diameter (2kF)², the square of the critical reciprocal lattice vector |G|², and the electron concentration parameter, or the number of itinerant electrons per atom, e/a. We have reliably determined these three parameters for 10 Rhombic Triacontahedron-type 2/1–2/1–2/1 (N = 680) and 1/1–1/1–1/1 (N = 160–162) approximants by making full use of the full-potential linearized augmented plane wave-Fourier band calculations based on all-electron density-functional theory. We revealed that the 2/1–2/1–2/1 approximants Al13Mg27Zn45 and Na27Au27Ga31 belong to two different sub-groups classified in terms of |G|² equal to 126 and 109, and could explain why they take different e/a values of 2.13 and 1.76, respectively. Among the eight 1/1–1/1–1/1 approximants Al3Mg4Zn3, Al9Mg8Ag3, Al21Li13Cu6, Ga21Li13Cu6, Na26Au24Ga30, Na26Au37Ge18, Na26Au37Sn18 and Na26Cd40Pb6, the first two, the second two and the last four compounds were classified into three sub-groups with |G|² = 50, 46 and 42, and were claimed to obey the e/a = 2.30, 2.10–2.15 and 1.70–1.80 rules, respectively.
Lamiel-Garcia, Oriol; Ko, Kyoung Chul; Lee, Jin Yong; Bromley, Stefan T; Illas, Francesc
2017-03-10
All electron relativistic density functional theory (DFT) based calculations using numerical atom-centered orbitals have been carried out to explore the relative stability, atomic, and electronic structure of a series of stoichiometric TiO2 anatase nanoparticles explicitly containing up to 1365 atoms as a function of size and morphology. The nanoparticles under scrutiny exhibit octahedral or truncated octahedral structures and span the 1-6 nm diameter size range. Initial structures were obtained using the Wulff construction, thus exhibiting the most stable (101) and (001) anatase surfaces. Final structures were obtained from geometry optimization with full relaxation of all structural parameters using both generalized gradient approximation (GGA) and hybrid density functionals. Results show that, for nanoparticles of a similar size, octahedral and truncated octahedral morphologies have comparable energetic stabilities. The electronic structure properties exhibit a clear trend converging to the bulk values as the size of the nanoparticles increases but with a marked influence of the density functional employed. Our results suggest that electronic structure properties, and hence reactivity, for the largest anatase nanoparticles considered in this study will be similar to those exhibited by even larger mesoscale particles or by bulk systems. Finally, we present compelling evidence that anatase nanoparticles become effectively bulklike when reaching a size of ∼20 nm diameter.
NASA Astrophysics Data System (ADS)
Ishida, Toyokazu
2008-09-01
To further understand the catalytic role of the protein environment in the enzymatic process, the author has analyzed the reaction mechanism of the Claisen rearrangement of Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modelings, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.
Large-scale All-electron Density Functional Theory Calculations using Enriched Finite Element Method
NASA Astrophysics Data System (ADS)
Kanungo, Bikash; Gavini, Vikram
We present a computationally efficient method to perform large-scale all-electron density functional theory calculations by enriching the Lagrange polynomial basis in classical finite element (FE) discretization with atom-centered numerical basis functions, which are obtained from the solutions of the Kohn-Sham (KS) problem for single atoms. We term these atom-centered numerical basis functions as enrichment functions. The integrals involved in the construction of the discrete KS Hamiltonian and overlap matrix are computed using an adaptive quadrature grid based on gradients in the enrichment functions. Further, we propose an efficient scheme to invert the overlap matrix by exploiting its LDL factorization and employing spectral finite elements along with Gauss-Lobatto quadrature rules. Finally, we use a Chebyshev polynomial based acceleration technique to compute the occupied eigenspace in each self-consistent iteration. We demonstrate the accuracy, efficiency and scalability of the proposed method on various metallic and insulating benchmark systems, with the largest systems containing on the order of 10,000 electrons. We observe a 50-100 fold reduction in the overall computational time when compared to classical FE calculations while maintaining the desired chemical accuracy. We acknowledge the support of NSF (Grant No. 1053145) and ARO (Grant No. W911NF-15-1-0158) in conducting this work.
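The Chebyshev polynomial based acceleration mentioned above can be sketched with a toy dense-matrix example: filter a trial block so that components in the unwanted part of the spectrum are damped, re-orthonormalize, and extract Rayleigh-Ritz eigenvalues. This is a minimal illustration with an arbitrary matrix, filter degree, and iteration counts, not the enriched finite element implementation:

```python
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    """Apply a degree-m Chebyshev polynomial of H that damps eigenvalue
    components in [a, b] and amplifies those below a (the occupied states)."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (H @ X - c * X) / e                   # T_1 of the scaled operator
    for _ in range(2, m + 1):
        Ynew = 2.0 * (H @ Y - c * Y) / e - X  # T_k = 2x T_{k-1} - T_{k-2}
        X, Y = Y, Ynew
    return Y

rng = np.random.default_rng(0)
n, n_occ = 200, 10
A = rng.standard_normal((n, n))
H = np.diag(np.linspace(0.0, 50.0, n)) + 0.005 * (A + A.T)  # toy Hamiltonian

evals = np.linalg.eigvalsh(H)       # reference only, to pick the filter window
a, b = evals[n_occ], evals[-1]      # damp everything above the occupied states

X = rng.standard_normal((n, n_occ))
for _ in range(20):
    X = chebyshev_filter(H, X, m=20, a=a, b=b)
    X, _ = np.linalg.qr(X)          # re-orthonormalize the filtered block
theta = np.linalg.eigvalsh(X.T @ H @ X)   # Rayleigh-Ritz eigenvalues
```

Here the spectral bounds come from a dense diagonalization purely for the demonstration; a real code would estimate them cheaply (e.g., with a few Lanczos steps). After filtering, `theta` reproduces the lowest `n_occ` eigenvalues of `H`.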
NASA Astrophysics Data System (ADS)
Stanke, Monika; Palikot, Ewa; Kędziera, Dariusz; Adamowicz, Ludwik
2016-12-01
An algorithm for calculating the first-order electronic orbit-orbit magnetic interaction correction for an electronic wave function expanded in terms of all-electron explicitly correlated molecular Gaussian (ECG) functions with shifted centers is derived and implemented. The algorithm is tested in calculations concerning the H2 molecule. It is also applied in calculations for LiH and H3+ molecular systems. The implementation completes our work on the leading relativistic correction for ECGs and paves the way for very accurate ECG calculations of ground and excited potential energy surfaces (PESs) of small molecules with two and more nuclei and two and more electrons, such as HeH-, H3+, HeH2, and LiH2+. The PESs will be used to determine rovibrational spectra of the systems.
NMR shieldings from density functional perturbation theory: GIPAW versus all-electron calculations
NASA Astrophysics Data System (ADS)
de Wijs, G. A.; Laskowski, R.; Blaha, P.; Havenith, R. W. A.; Kresse, G.; Marsman, M.
2017-02-01
We present a benchmark of the density functional linear response calculation of NMR shieldings within the gauge-including projector-augmented-wave method against all-electron augmented-plane-wave+local-orbital and uncontracted Gaussian basis set results for NMR shieldings in molecular and solid state systems. In general, excellent agreement between the aforementioned methods is obtained. Scalar relativistic effects are shown to be quite large for nuclei in molecules in the deshielded limit. The small component makes up a substantial part of the relativistic corrections.
Near-edge structures from first principles all-electron Bethe-Salpeter equation calculations.
Olovsson, W; Tanaka, I; Puschnig, P; Ambrosch-Draxl, C
2009-03-11
We obtain x-ray absorption near-edge structures (XANES) by solving the equation of motion for the two-particle Green's function for the electron-hole pair, the Bethe-Salpeter equation (BSE), within the all-electron full-potential linearized augmented plane wave method (FPLAPW). The excited states are calculated for the Li K-edge in the insulating solids LiF, Li2O and Li2S, and absorption spectra are compared with independent-particle results using the random phase approximation (RPA), as well as supercell calculations using the core-hole approximation within density functional theory (DFT). The binding energies of strongly bound excitations are determined in the materials, and core-exciton wavefunctions are demonstrated for LiF.
Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations
Webster, R.; Harrison, N. M.; Bernasconi, L.
2015-06-07
We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange admixed in the ground state functional cHF and show that there exists one value of cHF (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.
Ishida, Toyokazu; Fedorov, Dmitri G; Kitaura, Kazuo
2006-01-26
To elucidate the catalytic power of enzymes, we analyzed the reaction profile of the Claisen rearrangement in Bacillus subtilis chorismate mutase (BsCM) by all-electron quantum chemical calculations using the fragment molecular orbital (FMO) method. To the best of our knowledge, this is the first report of ab initio-based quantum chemical calculations of the entire enzyme system, where we provide a detailed analysis of the catalytic factors that accomplish transition-state stabilization (TSS). FMO calculations deliver an ab initio-level estimate of the intermolecular interaction between the substrate and the amino acid residues of the enzyme. To clarify the catalytic role of Arg90, we calculated the reaction profile of the wild-type BsCM as well as the Lys90 and Cit90 mutant BsCMs. Structural refinement and reaction-path determination were performed at the ab initio QM/MM level, and FMO calculations were applied to the QM/MM-refined structures. Comparison among the three types of reactions established two collective catalytic factors in the BsCM reaction: (1) the hydrogen bonds connecting Glu78, Arg90, and the substrate cooperatively control the stability of the TS relative to the ES complex, and (2) the positive charge on Arg90 polarizes the substrate in the TS region to gain more electrostatic stabilization.
Havu, V.; Blum, V.; Havu, P.; Scheffler, M.
2009-12-01
We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
NASA Astrophysics Data System (ADS)
Boettger, Jonathan C.; Ray, Asok K.
2000-07-01
The fluorite-structure light-actinide dioxides, uranium dioxide and plutonium dioxide, are both known to be prototypical Mott-Hubbard insulators, with band gaps produced by strong Coulomb correlation effects that are not adequately accounted for in traditional density functional theory (DFT) calculations. Indeed, DFT electronic structure calculations for these two actinide dioxides have been shown to incorrectly predict metallic behavior. The highly correlated electron effects exhibited by the actinide dioxides, combined with the large relativistic effects (including spin-orbit coupling) expected for any actinide compound, provide an extreme challenge for electronic structure theorists. For this reason, few fully self-consistent DFT calculations have been carried out for the actinide dioxides in general, and only one for plutonium dioxide. In that calculation, the troublesome 5f electrons were treated as core electrons, and spin-orbit coupling was ignored.
Basis set limit and systematic errors in local-orbital based all-electron DFT
NASA Astrophysics Data System (ADS)
Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias
2006-03-01
With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAOs) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using ≤ 50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990); ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).
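A tiny variational example makes the basis-set-limit discussion concrete. The sketch below (an illustration, not an NAO calculation) computes the hydrogen-atom ground-state energy in nested even-tempered Gaussian s-bases using standard closed-form integrals; because the basis sets are nested, the variational principle guarantees monotonic convergence toward the exact -0.5 Ha. The exponent sequence 0.06·3^k is an arbitrary choice:

```python
import numpy as np

def hydrogen_energy(exponents):
    """Variational ground-state energy (Ha) of H in an s-type Gaussian basis,
    using closed-form overlap, kinetic, and nuclear-attraction integrals."""
    a = np.asarray(exponents, float)
    p = a[:, None] + a[None, :]
    S = (np.pi / p) ** 1.5                               # overlap
    T = 3.0 * np.outer(a, a) * np.pi ** 1.5 / p ** 2.5   # kinetic energy
    V = -2.0 * np.pi / p                                 # nuclear attraction, Z = 1
    d = 1.0 / np.sqrt(np.diag(S))                        # normalize the basis
    S, Hm = S * np.outer(d, d), (T + V) * np.outer(d, d)
    s, U = np.linalg.eigh(S)                             # symmetric orthogonalization
    X = U / np.sqrt(s)
    return np.linalg.eigvalsh(X.T @ Hm @ X)[0]

# Nested even-tempered sets: each added (tighter) function can only lower E
energies = [hydrogen_energy(0.06 * 3.0 ** np.arange(n)) for n in range(2, 11)]
# energies decreases monotonically toward the exact -0.5 Ha
```

The same convergence logic, monitoring a total energy as the basis grows, underlies the basis-set-limit tests described in the abstract, though real NAO sets are optimized rather than even-tempered.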
All-electron GW+Bethe-Salpeter calculations on small molecules
NASA Astrophysics Data System (ADS)
Hirose, Daichi; Noguchi, Yoshifumi; Sugino, Osamu
2015-05-01
Accuracy of the first-principles GW+Bethe-Salpeter equation (BSE) method is examined for low-energy excited states of small molecules. The standard formalism, which is based on the one-shot GW approximation and the Tamm-Dancoff approximation (TDA), is found to underestimate the optical gap of N2, CO, H2O, C2H4, and CH2O by about 1 eV. Possible origins are investigated separately for the effect of the TDA and for the approximate schemes of the self-energy operator, which are known to cause overbinding of the electron-hole pair and overscreening of the interaction. By applying the known correction formula, we find the amount of the correction is too small to overcome the underestimated excitation energy. This result indicates a need for fundamental revision of the GW+BSE method rather than adjustment of the standard one. We expect that this study makes the problems in the current GW+BSE formalism clearer and provides useful information for further intrinsic development beyond the current framework.
NASA Astrophysics Data System (ADS)
Kanungo, Bikash; Gavini, Vikram
2017-01-01
We present a computationally efficient approach to perform large-scale all-electron density functional theory calculations by enriching the classical finite element basis with compactly supported atom-centered numerical basis functions that are constructed from the solution of the Kohn-Sham (KS) problem for single atoms. We term these numerical basis functions as enrichment functions, and the resultant basis as the enriched finite element basis. The compact support for the enrichment functions is obtained by using smooth cutoff functions, which enhances the conditioning and maintains the locality of the enriched finite element basis. The integrals involved in the evaluation of the discrete KS Hamiltonian and overlap matrix in the enriched finite element basis are computed using an adaptive quadrature grid that is constructed based on the characteristics of the enrichment functions. Further, we propose an efficient scheme to invert the overlap matrix by using a blockwise matrix inversion in conjunction with special reduced-order quadrature rules, which is required to transform the discrete Kohn-Sham problem to a standard eigenvalue problem. Finally, we solve the resulting standard eigenvalue problem, in each self-consistent field iteration, by using a Chebyshev polynomial based filtering technique to compute the relevant eigenspectrum. We demonstrate the accuracy, efficiency, and parallel scalability of the proposed method on semiconducting and heavy-metallic systems of various sizes, with the largest system containing 8694 electrons. We obtain ground-state energies within ~1 mHa of reference values computed with classical finite element as well as Gaussian basis sets. Using the proposed formulation based on the enriched finite element basis, for accuracies commensurate with chemical accuracy, we observe a staggering 50-300-fold reduction in the overall computational time when compared to the classical finite element basis.
Noguchi, Yoshifumi; Ohno, Kaoru
2010-04-15
The optical absorption spectra of sodium clusters (Na2n, n ≤ 4) are calculated by using an all-electron first-principles GW+Bethe-Salpeter method with the mixed-basis approach within the Tamm-Dancoff approximation. In these small systems, the excitonic effect strongly affects the optical properties owing to confinement of the exciton within the small cluster volume. The present state-of-the-art method treats the electron-hole two-particle Green's function by incorporating the ladder diagrams up to infinite order and therefore takes into account the excitonic effect in a good approximation. We check the accuracy of the present method by comparing the resulting spectra with experiments. In addition, the effect of delocalization of the GW quasiparticle wave functions, in particular the lowest unoccupied molecular orbital, is also discussed by rediagonalizing the Dyson equation.
Spectrum-splitting approach for Fermi-operator expansion in all-electron Kohn-Sham DFT calculations
NASA Astrophysics Data System (ADS)
Motamarri, Phani; Gavini, Vikram; Bhattacharya, Kaushik; Ortiz, Michael
2017-01-01
We present a spectrum-splitting approach to conduct all-electron Kohn-Sham density functional theory (DFT) calculations by employing Fermi-operator expansion of the Kohn-Sham Hamiltonian. The proposed approach splits the subspace containing the occupied eigenspace into a core subspace, spanned by the core eigenfunctions, and its complement, the valence subspace, and thereby enables an efficient computation of the Fermi-operator expansion by reducing the expansion to the valence-subspace projected Kohn-Sham Hamiltonian. The key ideas used in our approach are as follows: (i) employ Chebyshev filtering to compute a subspace containing the occupied states followed by a localization procedure to generate nonorthogonal localized functions spanning the Chebyshev-filtered subspace; (ii) compute the Kohn-Sham Hamiltonian projected onto the valence subspace; (iii) employ Fermi-operator expansion in terms of the valence-subspace projected Hamiltonian to compute the density matrix, electron density, and band energy. We demonstrate the accuracy and performance of the method on benchmark materials systems involving silicon nanoclusters up to 1330 electrons, a single gold atom, and a six-atom gold nanocluster. The benchmark studies on silicon nanoclusters revealed a staggering fivefold reduction in the Fermi-operator expansion polynomial degree by using the spectrum-splitting approach for accuracies in the ground-state energies of ~10^-4 Ha/atom with respect to reference calculations. Further, numerical investigations on gold suggest that spectrum splitting is indispensable to achieve meaningful accuracies, while employing Fermi-operator expansion.
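In its simplest, unsplit form, a Fermi-operator expansion approximates the finite-temperature density matrix f((H − μ)/kT) by a Chebyshev polynomial in H. The sketch below is a generic dense-matrix illustration with arbitrary parameters, not the paper's spectrum-splitting scheme (which applies such an expansion only to the valence-projected Hamiltonian):

```python
import numpy as np

def fermi(x, mu, kT):
    return 1.0 / (1.0 + np.exp((x - mu) / kT))

def density_matrix_foe(H, mu, kT, degree):
    """Approximate the density matrix f((H - mu)/kT) by a Chebyshev expansion."""
    n = H.shape[0]
    w = np.linalg.eigvalsh(H)          # spectral bounds; a production code
    lo, hi = w[0], w[-1]               # would estimate these cheaply instead
    c, e = (hi + lo) / 2.0, (hi - lo) / 2.0
    Hs = (H - c * np.eye(n)) / e       # spectrum mapped into [-1, 1]

    # Chebyshev coefficients of the Fermi function, sampled at Chebyshev nodes
    N = degree + 1
    theta = np.pi * (np.arange(N) + 0.5) / N
    fvals = fermi(c + e * np.cos(theta), mu, kT)
    coef = np.array([2.0 / N * np.sum(fvals * np.cos(j * theta)) for j in range(N)])
    coef[0] *= 0.5

    # Matrix Chebyshev recurrence: D = sum_j coef[j] T_j(Hs)
    T_prev, T_curr = np.eye(n), Hs
    D = coef[0] * T_prev + coef[1] * T_curr
    for j in range(2, N):
        T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
        D = D + coef[j] * T_curr
    return D

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0                       # toy Hamiltonian
D = density_matrix_foe(H, mu=0.0, kT=2.0, degree=300)

w, U = np.linalg.eigh(H)                  # exact reference density matrix
D_exact = (U * fermi(w, 0.0, 2.0)) @ U.T
```

The expansion degree needed grows with the spectral width divided by kT, which is precisely what motivates projecting out the deep core states before expanding; electron number and band energy then follow from Tr(D) and Tr(DH).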
NASA Astrophysics Data System (ADS)
Khan, Suffian; Alam, Aftab; Johnson, Duane
2009-03-01
To perform electronic-structure calculations for inherently large systems, such as quantum dots or interfaces like domain walls, we must perform the calculations over very large unit cells (10^4 to 10^8 atoms). For the inverse Green's function G-1, KKR methods typically solve for G by direct inversion. Using a screened, k-space hybrid KKR, we solve Dyson's equation for the Green's function using a reference state via G = Gref [ I - (t - tref) Gref]-1, where the scattering matrices t and tref are known and the non-Hermitian tensor Gref is chosen for convenience and sparsity [1]. The approach is O(N) for bandgap materials, whereas it is O(N^2) for metals but with a potentially large prefactor. Based upon the Sparse Approximate Inverse (SPAI) technique [2], we generalize the algorithm for complex, non-Hermitian matrices, then use the method as a preconditioner for the inversion to reduce the iteration counts (hence, reduce the prefactor) of iterative Krylov-space solvers, such as TFQMR, to address large-scale metallic systems. Parallel iterative and energy-contour solves are also performed. We explore the numerical efficiency and scaling versus atoms per unit cell. [1] Smirnov and Johnson, Comp. Phys. Comm. 148, 74-80 (2002). [2] Grote and Huckle, SIAM J. Sci. Comput. 18, 8
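The quoted Dyson-equation form G = Gref [I - (t - tref) Gref]^-1 follows from G^-1 = Gref^-1 - (t - tref) via the push-through identity, which can be verified numerically with arbitrary dense complex matrices (a sanity check only, not the screened-KKR machinery):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# Arbitrary complex non-Hermitian stand-ins for Gref and dt = t - tref
Gref = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
dt = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Dyson form quoted in the abstract: G = Gref (I - dt Gref)^-1
G_dyson = Gref @ np.linalg.inv(np.eye(n) - dt @ Gref)

# Equivalent direct form: G = (Gref^-1 - dt)^-1, by the push-through identity
G_direct = np.linalg.inv(np.linalg.inv(Gref) - dt)
```

In practice the point of the Dyson form is that Gref is sparse by construction, so the inverse on the right can be attacked with sparse Krylov solvers instead of the dense inversions used in this toy check.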
NASA Astrophysics Data System (ADS)
Stanke, Monika; Jurkowski, Jacek; Adamowicz, Ludwik
2017-03-01
Algorithms for calculating the quantum electrodynamics Araki-Sucher correction for n-electron explicitly correlated molecular Gaussian functions with shifted centers are derived and implemented. The algorithms are tested in calculations concerning the H2 molecule and applied in ground-state calculations of the LiH and H3+ molecules. The implementation will significantly increase the accuracy of the calculations of potential energy surfaces of small diatomic and triatomic molecules and their rovibrational spectra.
NASA Astrophysics Data System (ADS)
Yang, Jia-Yue; Yue, Sheng-Ying; Hu, Ming
2016-12-01
There has been considerable discussion of the critical role played by free electrons in the transport of heat in pure metals. In principle, any environment that can influence the dynamical behavior of electrons will have an impact on the electronic thermal conductivity (κel) of metals. Over the past decades, significant progress and comprehensive understanding have been gained from theoretical as well as experimental investigations that take into account the effects of various conditions, typically temperature, impurities, strain, dimensionality, interfaces, etc. However, the effect of an external magnetic field has received less attention. In this paper, the magnetic-field dependence of electron-phonon scattering, electron lifetimes, and κel of representative metals (Al, Ni, and Nb) is investigated within the framework of all-electron spin-density functional theory. For Al and Ni, the induced magnetization vector field and the difference in electron density under an external magnetic field aggregate toward the center of the unit cell, leading to enhanced electron-phonon scattering, shortened electron lifetimes, and thus a reduced κel. On the contrary, for Nb, with its strong intrinsic electron-phonon interaction, the electron lifetime and κel slightly increase as the external magnetic field is strengthened. This is mainly attributed to the magnetization vector field and electron-density difference being distributed separately, toward the corners of the unit cell. This paper sheds light on the origin of the influence of an external magnetic field on κel for pure metals and offers a new route for robust manipulation of electronic thermal transport via an applied external magnetic field.
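For a rough sense of scale (added for context; the paper computes κel from first-principles electron-phonon scattering, not from this relation), the electronic thermal conductivity of a pure metal is often estimated from the Wiedemann-Franz law κel = L0·σ·T. The conductivity below is an approximate literature value for aluminum:

```python
L0 = 2.44e-8          # Sommerfeld value of the Lorenz number, W·Ω/K²

def kappa_el(sigma, T):
    """Electronic thermal conductivity from the Wiedemann-Franz law."""
    return L0 * sigma * T

# Aluminum near room temperature: sigma ≈ 3.5e7 S/m
print(kappa_el(3.5e7, 300.0))   # ≈ 256 W/(m·K), same order as measured values
```

Deviations of a first-principles κel from this free-electron-style estimate are exactly what the lifetime analysis in the abstract is probing.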
Karamanis, Panaghiotis; Maroulis, George; Pouchan, Claude
2006-02-21
We have calculated molecular geometries and electric polarizabilities for small cadmium selenide clusters. Our calculations were performed with conventional ab initio and density functional theory methods and Gaussian-type basis sets especially designed for (CdSe)n. We find that the dipole polarizability per atom converges rapidly to the bulk value.
Viñes, Francesc; Illas, Francesc
2017-03-30
The atomic and electronic structure of stoichiometric and reduced wurtzite ZnO has been studied using a periodic relativistic all-electron hybrid density functional (PBE0) approach and a numeric atom-centered orbital basis set of quality equivalent to aug-cc-pVDZ. To assess the importance of relativistic effects, calculations were carried out without and with explicit inclusion of relativistic effects through the zero-order regular approximation. The calculated band gap is ∼0.2 eV smaller than experiment, close to previous PBE0 results that included relativity through the pseudopotential, and ∼0.25 eV smaller than equivalent nonrelativistic all-electron PBE0 calculations, indicating possible sources of error in nonrelativistic all-electron density functional calculations for systems containing elements with relatively high atomic number. The oxygen vacancy formation energy converges rather quickly with supercell size; the predicted value agrees with previous hybrid density functional calculations, and analysis of the electronic structure evidences the presence of localized electrons at the vacancy site, with a concomitant well-localized peak in the density of states ∼0.5 eV above the top of the valence band and a significant relaxation of the Zn atoms near the oxygen vacancy. Finally, the present work shows that accurate results can be obtained for systems involving large supercells containing up to ∼450 atoms using a numeric atom-centered orbital basis set within a full all-electron description including scalar relativistic effects at an affordable cost. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Khan, Suffian; Johnson, Duane
2010-03-01
To perform electronic-structure calculations for inherently large systems, such as quantum dots with heterogeneous interfaces, we must perform the calculations over very large unit cells (10^4 to 10^8 atoms). KKR methods typically solve for G by direct inversion of G-1, which has a known analytic form. Using a screened, k-space hybrid KKR, we solve Dyson's equation for the Green's function using a reference state via G = Gref [ I - (t - tref) Gref]-1, where the scattering matrices t and tref are known and the non-Hermitian tensor Gref is chosen for convenience and sparsity [1]. The approach is O(N) for bandgap materials, whereas it is O(N^2) for metals but with a potentially large prefactor. We use Krylov-space solvers to reduce storage and exploit known symmetries. Parallel iterative and energy-contour solves are also performed. We explore the numerical efficiency and scaling versus atoms per unit cell. [1] Smirnov and Johnson, Comp. Phys. Comm. 148, 74-80 (2002).
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1993-01-01
Dirac-Hartree-Fock calculations have been carried out on the ground states of the group IV monoxides GeO, SnO and PbO. Geometries, dipole moments and infrared data are presented. For comparison, nonrelativistic, first-order perturbation and relativistic effective core potential calculations have also been carried out. Where appropriate the results are compared with the experimental data and previous calculations. Spin-orbit effects are of great importance for PbO, where first-order perturbation theory including only the mass-velocity and Darwin terms is inadequate to predict the relativistic corrections to the properties. The relativistic effective core potential results show a larger deviation from the all-electron values than for the hydrides, and confirm the conclusions drawn on the basis of the hydride calculations.
NASA Astrophysics Data System (ADS)
Blum, Volker
This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be applied efficiently yet accurately using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition-metal-compound-based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
Mitin, Alexander V; van Wüllen, Christoph
2006-02-14
A two-component quasirelativistic Hamiltonian based on spin-dependent effective core potentials is used to calculate ionization energies and electron affinities of the heavy halogen atom bromine through the superheavy element 117 (eka-astatine) as well as spectroscopic constants of the homonuclear dimers of these atoms. We describe a two-component Hartree-Fock and density-functional program that treats spin-orbit coupling self-consistently within the orbital optimization procedure. A comparison with results from high-order Douglas-Kroll calculations--for the superheavy systems also with zeroth-order regular approximation and four-component Dirac results--demonstrates the validity of the pseudopotential approximation. The density-functional (but not the Hartree-Fock) results show very satisfactory agreement with theoretical coupled cluster as well as experimental data where available, such that the theoretical results can serve as an estimate for the hitherto unknown properties of astatine, element 117, and their dimers.
NASA Astrophysics Data System (ADS)
Klüppelberg, Daniel A.; Betzinger, Markus; Blügel, Stefan
2015-01-01
We analyze the accuracy of the atomic force within the all-electron full-potential linearized augmented plane-wave (FLAPW) method using the force formalism of Yu et al. [Phys. Rev. B 43, 6411 (1991), 10.1103/PhysRevB.43.6411]. A refinement of this formalism is presented that explicitly takes into account the tail of high-lying core states leaking out of the muffin-tin sphere and considers the small discontinuities of the LAPW wave function, density, and potential at the muffin-tin sphere boundaries. For MgO and EuTiO3 it is demonstrated that these amendments substantially improve the acoustic sum rule and the symmetry of the force-constant matrix. Sum rule and symmetry are realized with an accuracy of μHtr/aB (microhartrees per Bohr radius).
NASA Astrophysics Data System (ADS)
Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.
2016-11-01
We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation in prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged and special boundary condition systems with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order- N simulations within a PAW method.
NASA Astrophysics Data System (ADS)
Malli, Gulzari L.; Siegert, Martin; Turner, David P.
All-electron all-virtual spinor space (AVSS) relativistic second-order Møller-Plesset (RMP2), coupled-cluster singles and doubles (RCCSD), and RCCSD(T) (RCCSD with the triple-excitation correction included perturbatively) calculations are reported for tetrahedral (Td) PbH4 at various bond lengths using our finite contracted universal Gaussian basis set. Our relativistic calculations predict RMP2, RCCSD, and RCCSD(T) molecular correlation energies for PbH4 of -2.2563, -2.1917, and -2.2311 au, respectively. Ours are the first AVSS RMP2, RCCSD, and RCCSD(T) molecular calculations of the electron correlation energy of the heavy-element molecule PbH4. All-electron AVSS coupled-cluster calculations for the Pb atom are also reported; these were used, in conjunction with the corresponding molecular electron correlation energies for PbH4, to predict the atomization energy (Ae) of PbH4 at various levels of coupled-cluster electron correlation. Our predicted atomization energies for PbH4 (at the optimized bond length of 1.749 Å) from our Dirac-Fock, RMP2, RCCSD, and RCCSD(T) calculations are 5.73, 7.27, 11.24, and 11.62 eV, respectively. Neither such relativistic molecular correlation energies nor atomization energies have been reported before for a heavy polyatomic molecule with 86 electrons. Calculation of the relativistic molecular correlation energy is no longer a nightmare, and the bottlenecks are broken for the calculation of relativistic correlation and atomization energies for molecules of heavy elements.
NASA Astrophysics Data System (ADS)
Havu, Ville; Blum, Volker; Scheffler, Matthias
2007-03-01
Numeric atom-centered local orbitals (NAOs) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [alanine-based polypeptide chains (Ala)n] and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius rc, and exploited by appropriately shaped integration domains (subsets of integration points). Conventionally tight rc ≤ 3 Å have no measurable accuracy impact in (Ala)n, but introduce inaccuracies of 20-30 meV/atom in Cun. The domain shape impacts the computational effort by only 10-20% for reasonable rc. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).
Sharkey, Keeper L; Adamowicz, Ludwik
2014-05-07
An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in calculations of the ground ⁴S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined.
Velocity Based Modulus Calculations
NASA Astrophysics Data System (ADS)
Dickson, W. C.
2007-12-01
A new set of equations is derived for the modulus of elasticity E and the bulk modulus K that depends only upon the seismic wave propagation velocities Vp and Vs and the density ρ. The three elastic moduli, E (Young's modulus), the shear modulus μ (Lamé's second parameter), and the bulk modulus K, are found to be simple functions of the density and wave propagation velocities within the material. The shear and elastic moduli are found to equal the density of the material multiplied by the square of their respective wave propagation velocities. The bulk modulus may be calculated from the elastic modulus using Poisson's ratio. These equations and resultant values are consistent with published literature and values in both magnitude and dimension (N/m^2) and are applicable to the solid, liquid, and gaseous phases. A 3D modulus of elasticity model for the Parkfield segment of the San Andreas Fault is presented using data from the wavespeed model of Thurber et al. [2006]. A sharp modulus gradient is observed across the fault at seismic depths, confirming that "variation in material properties play a key role in fault segmentation and deformation style" [Eberhart-Phillips et al., 1993] [EPM93]. The three elastic moduli E, μ and K may now be calculated directly from seismic pressure and shear wave propagation velocities. These velocities may be determined using conventional seismic reflection, refraction or transmission data and techniques. These velocities may be used in turn to estimate the density. This allows velocity based modulus calculations to be used as a tool for geophysical analysis, modeling, engineering and prospecting.
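As a concrete illustration of velocity-based modulus calculations, the standard isotropic-elasticity relations can be evaluated directly from Vp, Vs, and ρ. The sketch below uses textbook formulas and illustrative crustal-rock values, not the paper's own equations or data:

```python
def elastic_moduli(vp, vs, rho):
    """Isotropic elastic moduli from wave speeds (m/s) and density (kg/m^3).

    Standard relations: mu = rho*Vs^2, P-wave modulus M = rho*Vp^2,
    K = M - (4/3)mu, Poisson's ratio from the velocity ratio,
    then E = 2*mu*(1 + nu).
    """
    mu = rho * vs**2                  # shear modulus (Lame's second parameter)
    m_p = rho * vp**2                 # P-wave modulus
    k = m_p - 4.0 * mu / 3.0          # bulk modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))  # Poisson's ratio
    e = 2.0 * mu * (1.0 + nu)         # Young's modulus
    return e, mu, k, nu

# Illustrative values for crystalline crustal rock (assumed, not from the paper)
e, mu, k, nu = elastic_moduli(vp=6000.0, vs=3500.0, rho=2700.0)
```

All returned moduli carry units of N/m^2 (Pa), consistent with the dimensional check in the abstract.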
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1992-01-01
Relativistic corrections to a number of properties of the Group IV hydrides are calculated using the Dirac-Hartree-Fock method. The use of first-order perturbation theory is sufficient to obtain relativistic corrections for Ge, but the effects of spin-orbit interaction and other higher-order effects begin to show for Sn and become important for Pb. The energy of the reaction XH4 yields XH2 + H2 (X = Si, Ge, Sn, and Pb) is also calculated. The results are compared with relativistic effective core potential calculations, first-order perturbation theory calculations, and limited experimental data.
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Taylor, Peter R.; Faegri, Knut, Jr.; Partridge, Harry
1991-01-01
A basis-set-expansion Dirac-Hartree-Fock program for molecules is described. Bond lengths and harmonic frequencies are presented for the ground states of the group 4 tetrahydrides, CH4, SiH4, GeH4, SnH4, and PbH4. The results are compared with relativistic effective core potential (RECP) calculations, first-order perturbation theory (PT) calculations and with experimental data. The bond lengths are well predicted by first-order perturbation theory for all molecules, but none of the RECP's considered provides a consistent prediction. Perturbation theory overestimates the relativistic correction to the harmonic frequencies; the RECP calculations underestimate the correction.
Improved Segmented All-Electron Relativistically Contracted Basis Sets for the Lanthanides.
Aravena, Daniel; Neese, Frank; Pantazis, Dimitrios A
2016-03-08
Improved versions of the segmented all-electron relativistically contracted (SARC) basis sets for the lanthanides are presented. The second-generation SARC2 basis sets maintain the efficient construction of their predecessors and their individual adaptation to the DKH2 and ZORA Hamiltonians, but feature exponents optimized with a completely new orbital shape fitting procedure and a slightly expanded f space, resulting in a sizable improvement in CASSCF energies and in significantly more accurate prediction of spin-orbit coupling parameters. Additionally, an extended set of polarization/correlation functions appropriate for multireference correlated calculations is constructed, along with new auxiliary basis sets for use in resolution-of-identity (density-fitting) approximations in combination with both DFT and wave-function-based treatments. Thus, the SARC2 basis sets extend the applicability of the first-generation DFT-oriented basis sets to routine all-electron wave-function-based treatments of lanthanide complexes. The new basis sets are benchmarked with respect to excitation energies, radial distribution functions, optimized geometries, orbital eigenvalues, ionization potentials, and spin-orbit coupling parameters of lanthanide systems and are shown to be suitable for the description of magnetic and spectroscopic properties using both DFT and multireference wave-function-based methods.
Self-consistent GW: All-electron implementation with localized basis functions
NASA Astrophysics Data System (ADS)
Caruso, Fabio; Rinke, Patrick; Ren, Xinguo; Rubio, Angel; Scheffler, Matthias
2013-08-01
This paper describes an all-electron implementation of the self-consistent GW (sc-GW) approach—i.e., based on the solution of the Dyson equation—in an all-electron numeric atom-centered orbital basis set. We cast Hedin's equations into a matrix form that is suitable for numerical calculations by means of (i) the resolution-of-identity technique to handle four-center integrals and (ii) a basis representation for the imaginary-frequency dependence of dynamical operators. In contrast to perturbative G0W0, sc-GW provides a consistent framework for ground- and excited-state properties and facilitates an unbiased assessment of the GW approximation. For excited states, we benchmark sc-GW for five molecules relevant for organic photovoltaic applications: thiophene, benzothiazole, 1,2,5-thiadiazole, naphthalene, and tetrathiafulvalene. At self-consistency, the quasiparticle energies are found to be in good agreement with experiment and, on average, more accurate than G0W0 based on Hartree-Fock or density-functional theory with the Perdew-Burke-Ernzerhof exchange-correlation functional. Based on the Galitskii-Migdal total energy, structural properties are investigated for a set of diatomic molecules. For binding energies, bond lengths, and vibrational frequencies sc-GW and G0W0 achieve a comparable performance, which is, however, not as good as that of exact-exchange plus correlation in the random-phase approximation and its advancement to renormalized second-order perturbation theory. Finally, the improved description of dipole moments for a small set of diatomic molecules demonstrates the quality of the sc-GW ground-state density.
Label-free all-electronic biosensing in microfluidic systems
NASA Astrophysics Data System (ADS)
Stanton, Michael A.
Label-free, all-electronic detection techniques offer great promise for advancements in medical and biological analysis. Electrical sensing can be used to measure both interfacial and bulk impedance changes in conducting solutions. Electronic sensors produced using standard microfabrication processes are easily integrated into microfluidic systems. Combined with the sensitivity of radiofrequency electrical measurements, this approach offers significant advantages over competing biological sensing methods. Scalable fabrication methods also provide a means of bypassing the prohibitive costs and infrastructure associated with current technologies. We describe the design, development, and use of a radiofrequency reflectometer integrated into a microfluidic system for the specific detection of biologically relevant materials. We developed a detection protocol based on impedimetric changes caused by the binding of antibody/antigen pairs to the sensing region. Here we report the surface chemistry that forms the necessary capture mechanism. Gold-thiol binding was utilized to create an ordered alkane monolayer on the sensor surface. Exposed functional groups target the N-terminus, affixing a protein to the monolayer. The general applicability of this method lends itself to a wide variety of proteins. To demonstrate specificity, commercially available mouse anti-Streptococcus pneumoniae monoclonal antibody was used to target the full-length recombinant pneumococcal surface protein A, type 2 strain D39, expressed by Streptococcus pneumoniae. We demonstrate the RF response of the sensor to both the presence of the surface decoration and bound SPn cells in a 1x phosphate buffered saline solution. The combined microfluidic sensor represents a powerful platform for the analysis and detection of cells and biomolecules.
Upper Subcritical Calculations Based on Correlated Data
Sobes, Vladimir; Rearden, Bradley T; Mueller, Don; Marshall, William BJ J; Scaglione, John M; Dunn, Michael E
2015-01-01
The American National Standards Institute and American Nuclear Society standard for Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations defines the upper subcritical limit (USL) as “a limit on the calculated k-effective value established to ensure that conditions calculated to be subcritical will actually be subcritical.” Often, USL calculations are based on statistical techniques that infer information about a nuclear system of interest from a set of known/well-characterized similar systems. The work in this paper is part of an active area of research to investigate the way traditional trending analysis is used in the nuclear industry, and in particular, the research is assessing the impact of the underlying assumption that the experimental data being analyzed for USL calculations are statistically independent. In contrast, the multiple experiments typically used for USL calculations can be correlated because they are often performed at the same facilities using the same materials and measurement techniques. This paper addresses this issue by providing a set of statistical inference methods to calculate the bias and bias uncertainty based on the underlying assumption that the experimental data are correlated. Methods to quantify these correlations are the subject of a companion paper and will not be discussed here. The newly proposed USL methodology is based on the assumption that the integral experiments selected for use in the establishment of the USL are sufficiently applicable and that experimental correlations are known. Under the assumption of uncorrelated data, the new methods collapse directly to familiar USL equations currently used. We will demonstrate our proposed methods on real data and compare them to calculations of currently used methods such as USLSTATS and NUREG/CR-6698. Lastly, we will also demonstrate the effect experiment correlations can have on USL calculations.
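The effect of experiment correlations on a pooled bias estimate can be illustrated with a generalized-least-squares (GLS) weighted mean. The biases, uncertainties, and correlation coefficient below are invented for illustration and are not the paper's method or data:

```python
import numpy as np

# Hypothetical calculated-minus-benchmark k_eff biases from four experiments
d = np.array([0.0020, 0.0010, 0.0030, 0.0015])
sigma = np.array([0.0010, 0.0012, 0.0011, 0.0009])
rho = 0.6  # assumed common correlation (shared facility, materials, techniques)

# Covariance matrix with a uniform off-diagonal correlation
C = rho * np.outer(sigma, sigma)
np.fill_diagonal(C, sigma**2)

# GLS pooled bias and its variance
Cinv = np.linalg.inv(C)
w = Cinv.sum(axis=1) / Cinv.sum()   # weights sum to 1
bias = w @ d
var_corr = 1.0 / Cinv.sum()

# Traditional assumption of independent data: simple inverse-variance weighting
var_uncorr = 1.0 / np.sum(1.0 / sigma**2)
```

With positive correlation, the pooled-bias variance exceeds the independent-data value, which is exactly why ignoring correlations can make a USL appear tighter than the data justify.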
GPU-based fast gamma index calculation
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Jia, Xun; Jiang, Steve B.
2011-03-01
The γ-index dose comparison tool has been widely used to compare dose distributions in cancer radiotherapy. The accurate calculation of the γ-index requires an exhaustive search for the closest Euclidean distance in the high-resolution dose-distance space. This is a computationally intensive task when dealing with 3D dose distributions. In this work, we combine a geometric method (Ju et al 2008 Med. Phys. 35 879-87) with a radial pre-sorting technique (Wendling et al 2007 Med. Phys. 34 1647-54) and implement them on computer graphics processing units (GPUs). The developed GPU-based γ-index computational tool is evaluated on eight pairs of IMRT dose distributions. The γ-index calculations can be finished within a few seconds for all 3D testing cases on a single NVIDIA Tesla C1060 card, achieving 45-75× speedup compared to CPU computations conducted on an Intel Xeon 2.27 GHz processor. We further investigated the effect of various factors on both CPU and GPU computation time. The strategy of pre-sorting voxels based on their dose difference values speeds up the GPU calculation by about 2.7-5.5 times. For n-dimensional dose distributions, γ-index calculation time on CPU is proportional to the summation of γ^n over all voxels, while that on GPU is affected by the γ^n distribution and is approximately proportional to the γ^n summation over all voxels. We found that increasing the resolution of dose distributions leads to a quadratic increase of computation time on CPU, while a less-than-quadratic increase on GPU. The values of the dose difference and distance-to-agreement criteria also have an impact on γ-index calculation time.
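The exhaustive search the γ-index performs can be sketched with a brute-force 1D CPU version. This is an illustrative O(N²) sketch with global dose-difference normalization assumed, not the geometric or pre-sorting acceleration the paper implements:

```python
import numpy as np

def gamma_index_1d(ref, ev, spacing, dta=3.0, dd_frac=0.03):
    """Brute-force 1D gamma index.

    ref, ev  : reference and evaluated dose profiles on the same grid
    spacing  : grid spacing in mm
    dta      : distance-to-agreement criterion in mm
    dd_frac  : dose-difference criterion as a fraction of max(ref)
    """
    dd = dd_frac * ref.max()
    pos = spacing * np.arange(ref.size)
    gamma = np.empty(ref.size)
    for i in range(ref.size):
        dr = pos - pos[i]      # distance from reference point to every point
        dD = ev - ref[i]       # dose difference to every evaluated point
        # Exhaustive minimization over the dose-distance space
        gamma[i] = np.sqrt((dr / dta) ** 2 + (dD / dd) ** 2).min()
    return gamma

# Identical distributions must give gamma = 0 everywhere
dose = np.exp(-((np.arange(100) - 50.0) / 15.0) ** 2)
g = gamma_index_1d(dose, dose, spacing=1.0)
```

The pre-sorting techniques cited above cut down the set of candidate points searched per voxel, which is where the reported GPU speedups come from.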
Ab initio GW quasiparticle energies of small sodium clusters by an all-electron mixed-basis approach
NASA Astrophysics Data System (ADS)
Ishii, Soh; Ohno, Kaoru; Kawazoe, Yoshiyuki; Louie, Steven G.
2001-04-01
A state-of-the-art GW calculation is carried out for small sodium clusters, Na2, Na4, Na6, and Na8. The quasiparticle energies are evaluated by employing an ab initio GW code based on an all-electron mixed-basis approach, which uses both plane waves and atomic orbitals as basis functions. The calculated ionization potential and the electron affinity are in excellent agreement with available experimental data. The exchange and correlation parts to the electron self-energy within the GW approximation are presented from the viewpoint of their size dependence. In addition, the effect of the off-diagonal elements of the self-energy corrections to the local-density-approximation exchange-correlation potential is discussed. Na2 and Na8 have a larger energy gap than Na4 and Na6, consistent with the fact that they are magic number clusters.
All-electron Hybrid Functional Treatment of Oxides using the FLAPW Method
NASA Astrophysics Data System (ADS)
Betzinger, Markus; Schlipf, Martin; Friedrich, Christoph; Ležaić, Marjana; Blügel, Stefan
2010-03-01
Hybrid functionals are a practical approximation for the exchange-correlation (xc) functional of density-functional theory. They combine a local or semi-local xc functional with nonlocal Hartree-Fock (HF) exchange and improve the band gap for semiconductors and insulators as well as the description of localized states. So far, most implementations for periodic systems employ a pseudopotential plane-wave approach. We present an efficient all-electron implementation in the context of the FLAPW methodology realized in the FLEUR (www.flapw.de) code. We report on the implementation of the PBE0 and HSE functionals, where an auxiliary basis is constructed from products of LAPW basis functions and used to calculate the HF potential. The Coulomb matrix^1 then has a sparse form. Spatial and time-reversal symmetry is exploited by restricting the Brillouin zone sum in the nonlocal potential to an irreducible wedge. We give an account of the efficiency of our concept and of the convergence of the self-consistency cycle. Finally, we present results for a variety of oxides and compare them to results obtained with functionals based on the generalized gradient approximation. [1] Comput. Phys. Comm. 180, 347 (2009)
Rapid Bacterial Detection via an All-Electronic CMOS Biosensor
Nikkhoo, Nasim; Cumby, Nichole; Gulak, P. Glenn; Maxwell, Karen L.
2016-01-01
The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185
Grid-based electronic structure calculations: The tensor decomposition approach
Rakhuba, M.V.; Oseledets, I.V.
2016-05-01
We present a fully grid-based approach for solving the Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
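The storage saving behind the linear scaling can be seen with a function that is exactly low-rank: a Gaussian orbital factorizes over the three coordinates, so an n³ grid collapses to three length-n vectors. This is an illustrative sketch of the idea, not the paper's tensor-decomposition machinery:

```python
import numpy as np

n = 64
x = np.linspace(-5.0, 5.0, n)

# exp(-(x^2 + y^2 + z^2)) = exp(-x^2) exp(-y^2) exp(-z^2):
# an exactly rank-1 separable 3D function
fx = np.exp(-x**2)

# Full 3D grid: n^3 values; separable representation: only 3n values
full = fx[:, None, None] * fx[None, :, None] * fx[None, None, :]
storage_full = n**3      # 262144 values for n = 64
storage_rank1 = 3 * n    # 192 values for n = 64
```

General orbitals are not exactly separable, but when a low-rank approximation holds, all grid operations can be carried out on the 1D factors, which is what makes fine grids such as 8192^3 affordable.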
NASA Astrophysics Data System (ADS)
Rury, Aaron S.; Mansour, Kamjou; Yu, Nan
2015-07-01
This study examines the capability to significantly suppress the frequency noise of a semiconductor distributed feedback diode laser using a universally applicable approach: a combination of a high-Q crystalline whispering gallery mode microresonator reference and the Pound-Drever-Hall locking scheme using an all-electronic servo loop. An out-of-loop delayed self-heterodyne measurement system demonstrates the ability of this approach to reduce a test laser's absolute line width by nearly a factor of 100. In addition, in-loop characterization of the laser stabilized using this method demonstrates a 1-kHz residual line width with reference to the resonator frequency. Based on these results, we propose that utilization of an all-electronic loop combined with the use of the wide transparency window of crystalline materials enable this approach to be readily applicable to diode lasers emitting in other regions of the electromagnetic spectrum, especially in the UV and mid-IR.
NASA Astrophysics Data System (ADS)
Ishida, Toyokazu
2008-09-01
In this study, we investigated the electronic character of protein environment in enzymatic processes by performing all-electron QM calculations based on the fragment molecular orbital (FMO) method. By introducing a new computational strategy combining all-electron QM analysis with ab initio QM/MM modeling, we investigated the details of molecular interaction energy between a reactive substrate and amino acid residues at a catalytic site. For a practical application, we selected the chorismate mutase catalyzed reaction as an example. Because the computational time required to perform all-electron QM reaction path searches was very large, we employed the ab initio QM/MM modeling technique to construct reliable reaction profiles and performed all-electron FMO calculations for the selected geometries. The main focus of the paper is to analyze the details of electrostatic stabilization, which is considered to be the major feature of enzymatic catalyses, and to clarify how the electronic structure of proteins is polarized in response to the change in electron distribution of the substrate. By performing interaction energy decomposition analysis from a quantum chemical viewpoint, we clarified the relationship between the location of amino acid residues on the protein domain and the degree of electronic polarization of each residue. In particular, in the enzymatic transition state, Arg7, Glu78, and Arg90 are highly polarized in response to the delocalized electronic character of the substrate, and as a result, a large amount of electrostatic stabilization energy is stored in the molecular interaction between the enzyme and the substrate and supplied for transition state stabilization.
Spreadsheet Based Scaling Calculations and Membrane Performance
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes, which is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI) for each solid of
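The saturation-index step described above reduces to a one-line formula. A minimal sketch, using illustrative numbers rather than TFSP's PHREEQE/WATEQ4F data:

```python
import math

def saturation_index(q: float, ksp: float) -> float:
    """SI = log10(Q / Ksp), comparing the effective ion product Q
    against a temperature-adjusted solubility product Ksp."""
    return math.log10(q / ksp)

# Hypothetical solid with Q twice its Ksp (illustrative values only).
si = saturation_index(q=5.0e-5, ksp=2.5e-5)
print(round(si, 3))  # 0.301
```

An SI above zero indicates supersaturation of that solid and hence a scaling risk; below zero, the water is undersaturated with respect to it.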
[Hyponatremia: effective treatment based on calculated outcomes].
Vervoort, G; Wetzels, J F M
2006-09-30
A 78-year-old man was treated for symptomatic hyponatremia. Despite administration of an isotonic NaCl 0.9% solution, plasma sodium remained unchanged due to high concentrations of sodium and potassium in the urine. After infusion of a hypertonic NaCl solution, a satisfactory increase in plasma sodium was reached and symptoms resolved gradually. The hyponatremia was found to be caused by hypothyroidism, which was treated. A 70-year-old female was admitted to the hospital with loss of consciousness and hyponatremia. She was treated initially with a hypertonic NaCl 2.5% solution, which resulted in a steady increase in plasma sodium and a resolution of symptoms. Treatment was changed to an isotonic NaCl 0.9% infusion to attenuate the rise of serum sodium. Nevertheless plasma sodium increased too rapidly due to increased diuresis and reduced urinary sodium and potassium excretion. A slower increase in plasma sodium was achieved by administering a glucose 5% infusion. Hyponatremia is frequently observed in hospitalised patients. It should be treated effectively, and the rate of correction should be adapted to the clinical situation. Effective treatment is determined by calculating changes in effective osmoles and the resulting changes in the distribution of water over extra- and intracellular spaces. Changes in urine production and urinary excretion of sodium and potassium should be taken into account.
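The "calculated outcomes" approach can be illustrated with the widely used Adrogue-Madias estimate of the serum sodium change per litre of infusate. The patient values below are illustrative, and the formula deliberately ignores ongoing urinary sodium and potassium losses, which both cases above show can dominate in practice:

```python
def delta_na_per_litre(infusate_na, infusate_k, serum_na, total_body_water):
    """Expected change in serum sodium (mmol/L) after 1 L of infusate,
    per the Adrogue-Madias formula. Ignores urinary losses, so actual
    correction must be tracked against measured urine output."""
    return (infusate_na + infusate_k - serum_na) / (total_body_water + 1)

# Illustrative patient: serum Na 120 mmol/L, TBW ~30 L;
# isotonic NaCl 0.9% contains 154 mmol/L of sodium.
print(round(delta_na_per_litre(154, 0, 120, 30), 2))  # 1.1
```

The small predicted rise per litre of isotonic saline explains why the first patient needed hypertonic saline, and why changing urinary electrolyte excretion (second patient) can overshoot the prediction.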
Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set.
Paschoal, D; Guerra, C Fonseca; de Oliveira, M A L; Ramalho, T C; Dos Santos, H F
2016-10-05
Predicting NMR properties is a valuable tool to assist experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable for calculating the Pt-195 NMR chemical shift, are presented for Pt and all elements commonly found as Pt ligands. The new basis sets, identified as NMR-DKH, were partially contracted in a triple-zeta doubly polarized scheme with all coefficients obtained from a Douglas-Kroll-Hess (DKH) second-order scalar relativistic calculation. The Pt-195 chemical shift was predicted through empirical models fitted to reproduce experimental data for a set of 183 Pt(II) complexes whose NMR signals range from -1000 to -6000 ppm. Furthermore, the models were validated using a new set of 75 Pt(II) complexes not included in the descriptive set. The models were constructed using a non-relativistic Hamiltonian at the density functional theory (DFT-PBEPBE) level with the NMR-DKH basis set for all atoms. For the best model, the mean absolute deviation (MAD) and the mean relative deviation (MRD) were 150 ppm and 6%, respectively, for the validation set (75 Pt complexes) and 168 ppm (MAD) and 5% (MRD) for all 258 Pt(II) complexes. These results were comparable with relativistic DFT calculations, 200 ppm (MAD) and 6% (MRD). © 2016 Wiley Periodicals, Inc.
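The reported error measures are straightforward to reproduce. A sketch with made-up shifts in the quoted Pt(II) chemical-shift range, not the paper's actual data set:

```python
def mad_mrd(predicted, observed):
    """Mean absolute deviation (same units as input, here ppm) and
    mean relative deviation (%), as used to score the empirical models."""
    abs_dev = [abs(p - o) for p, o in zip(predicted, observed)]
    rel_dev = [abs(p - o) / abs(o) for p, o in zip(predicted, observed)]
    return sum(abs_dev) / len(abs_dev), 100 * sum(rel_dev) / len(rel_dev)

# Illustrative shifts (ppm) in the -1000 to -6000 ppm window.
pred = [-1850.0, -3120.0, -4480.0]
obs = [-2000.0, -3000.0, -4500.0]
mad, mrd = mad_mrd(pred, obs)
```

MAD is quoted in ppm because the shift scale spans thousands of ppm; MRD normalizes by each observed shift so large and small shifts contribute comparably.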
All-electron Kohn-Sham density functional theory on hierarchic finite element spaces
NASA Astrophysics Data System (ADS)
Schauer, Volker; Linder, Christian
2013-10-01
In this work, a real space formulation of the Kohn-Sham equations is developed, making use of a hierarchy of finite element spaces of different polynomial order. The focus is on all-electron calculations, which place the highest demands on the basis set: it must be able to represent the orthogonal eigenfunctions as well as the electrostatic potential. A careful numerical analysis is performed, which points out the numerical intricacies originating from the singularity of the nuclei and the approximations required in the numerical setting, with the aim of enabling solutions within a predefined accuracy. In this context the influence of counter-charges in the Poisson equation, the requirement of a finite domain size, numerical quadratures, and mesh refinement are examined, as well as the representation of the electrostatic potential in a high-order finite element space. The performance and accuracy of the method are demonstrated in computations on noble gases. In addition, the finite element basis proves its flexibility in the calculation of the bond length as well as the dipole moment of the carbon monoxide molecule.
A basic insight to FEM_based temperature distribution calculation
NASA Astrophysics Data System (ADS)
Purwaningsih, A.; Khairina
2012-06-01
A manual for finite element method (FEM)-based temperature distribution calculation has been prepared. The code is written in Visual Basic and runs on Windows. The calculation of temperature distribution based on FEM proceeds in three steps, namely preprocessing, processing, and postprocessing. Accordingly, three manuals were produced: a preprocessor manual for preparing the data, a processor manual for solving the problem, and a postprocessor manual for displaying the results. In these manuals, every step of the general procedure is described in detail. These manuals are expected to make the calculation of temperature distributions better understood and easier to perform.
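The three-step workflow can be sketched with a minimal 1D steady-state conduction example. This Python sketch is an illustration of the pre-process/process/post-process structure, not the Visual Basic code the manuals document:

```python
def fem_heat_1d(k, q, length, n_elem):
    """Preprocess: uniform mesh of linear elements.
    Process: assemble and solve K u = f for -k u'' = q, u = 0 at both ends.
    Postprocess: return nodal temperatures (ends included)."""
    h = length / n_elem
    n = n_elem - 1                      # number of interior nodes
    # Tridiagonal stiffness matrix (k/h) * [2, -1; -1, 2; ...] and load q*h.
    a = [-k / h] * n                    # sub-diagonal
    b = [2 * k / h] * n                 # diagonal
    c = [-k / h] * n                    # super-diagonal
    f = [q * h] * n
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        f[i] -= m * f[i - 1]
    u = [0.0] * n
    u[-1] = f[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (f[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]

# Unit rod, k = 1, uniform heat source q = 2: exact u(x) = x(1 - x).
print(fem_heat_1d(1.0, 2.0, 1.0, 4)[2])  # midpoint temperature: 0.25
```

For linear elements with a constant source, the nodal values coincide with the exact solution, which makes this a convenient check for the processor step.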
Calculated state densities of aperiodic nucleotide base stacks
NASA Astrophysics Data System (ADS)
Ye, Yuan-Jie; Chen, Run-Shen; Martinez, Alberto; Otto, Peter; Ladik, Janos
2000-05-01
Electronic density of states (DOS) histograms of the nucleotide base stack regions of a segment of a human oncogene (both single- and double-stranded, in the B conformation) and of a single-stranded random DNA base stack (also in the B conformation) were calculated. The computations were performed with the help of the ab initio matrix-block negative factor counting (NFC) method for the DOSs. The neglected effects of the sugar-phosphate chain and the water environment (with the counterions) were assessed on the basis of previous ab initio band structure calculations. Further, in the calculations on single nucleotide base stacks, basis set and correlation effects were also investigated. In the case of a single strand, the level spacing widths of the allowed regions and the fundamental gap were also calculated with Clementi's double-ζ basis and corrected for correlation at the MP2 level. The inverse interaction method was applied for the study of Anderson localization.
Web based brain volume calculation for magnetic resonance images.
Karsch, Kevin; Grinstead, Brian; He, Qing; Duan, Ye
2008-01-01
Brain volume calculations are crucial in modern medical research, especially in the study of neurodevelopmental disorders. In this paper, we present an algorithm for calculating two classifications of brain volume, total brain volume (TBV) and intracranial volume (ICV). Our algorithm takes MRI data as input, performs several preprocessing and intermediate steps, and then returns each of the two calculated volumes. To simplify this process and make our algorithm publicly accessible to anyone, we have created a web-based interface that allows users to upload their own MRI data and calculate the TBV and ICV for the given data. This interface provides a simple and efficient method for calculating these two classifications of brain volume, and it also removes the need for the user to download or install any applications.
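Once a binary mask for TBV or ICV is available, the volume itself is a voxel count times the voxel volume. A sketch of that final step only; the paper's preprocessing and segmentation pipeline is not reproduced here:

```python
def volume_ml(mask, voxel_dims_mm):
    """Volume of a binary segmentation mask (e.g. TBV or ICV).
    mask: nested lists of 0/1 (slices -> rows -> voxels);
    voxel_dims_mm: (dx, dy, dz) voxel edge lengths in mm."""
    dx, dy, dz = voxel_dims_mm
    n = sum(v for plane in mask for row in plane for v in row)
    return n * dx * dy * dz / 1000.0    # mm^3 -> mL

# Toy 2x2x2 volume with 5 voxels set, 1 mm isotropic spacing.
mask = [[[1, 1], [1, 0]], [[1, 1], [0, 0]]]
print(volume_ml(mask, (1.0, 1.0, 1.0)))  # 0.005
```

Real MRI data would supply the mask from the segmentation step and the voxel dimensions from the image header.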
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle...
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle...
Object-oriented Development of an All-electron Gaussian Basis DFT Code for Periodic Systems
NASA Astrophysics Data System (ADS)
Alford, John
2005-03-01
We report on the construction of an all-electron Gaussian-basis DFT code for systems periodic in one, two, and three dimensions. This is in part a reimplementation of algorithms in the serial code, GTOFF, which has been successfully applied to the study of crystalline solids, surfaces, and ultra-thin films. The current development is being carried out in an object-oriented parallel framework using C++ and MPI. Some rather special aspects of this code are the use of density fitting methodologies and the implementation of a generalized Ewald technique to do lattice summations of Coulomb integrals, which is typically more accurate than multipole methods. Important modules that have already been created will be described, for example, a flexible input parser and storage class that can parse and store generically tagged data (e.g. XML), an easy to use processor communication mechanism, and the integrals package. Though C++ is generally inferior to F77 in terms of optimization, we show that careful redesigning has allowed us to make up the run-time performance difference in the new code. Timing comparisons and scalability features will be presented. The purpose of this reconstruction is to facilitate the inclusion of new physics. Our goal is to study orbital currents using modified Gaussian bases and external magnetic field effects in the weak and ultra-strong (~10^5 T) field regimes. This work is supported by NSF-ITR DMR-0218957.
Software-Based Visual Loan Calculator For Banking Industry
NASA Astrophysics Data System (ADS)
Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A visual loan calculator for the banking industry is very necessary in the modern banking system, using many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a graphical user interface (GUI) using VB.NET operating tools, and then to develop a working program that calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
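The interest calculation behind such a GUI is presumably a standard amortization formula; the paper does not show its VB.NET internals, so the following Python sketch is an assumption about the underlying arithmetic:

```python
def monthly_payment(principal, annual_rate, months):
    """Standard fixed-rate amortized loan payment:
    M = P * r * (1 + r)^n / ((1 + r)^n - 1), with monthly rate r."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months       # interest-free edge case
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

# 10,000 at 12% per annum over 24 months.
print(round(monthly_payment(10_000, 0.12, 24), 2))  # 470.73
```

Total interest then follows as `months * payment - principal`, which is the figure a loan calculator GUI would typically display alongside the schedule.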
Gamma Knife radiosurgery with CT image-based dose calculation.
Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful
2015-11-01
The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, the minimum, maximum and mean doses on the treatment targets, and the critical structures under consideration. The difference between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution
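The coverage, selectivity, and gradient indices compared above are commonly defined from plan volumes in the Paddick style. The paper does not spell out its formulas, so the definitions below are assumed, and the volumes are illustrative:

```python
def plan_indices(tv, piv, tv_piv, piv_half):
    """Assumed Paddick-style plan quality indices from volumes (cm^3):
      tv       - target volume
      piv      - prescription isodose volume
      tv_piv   - target volume covered by the prescription isodose
      piv_half - volume of the half-prescription isodose line"""
    coverage = tv_piv / tv          # fraction of target receiving Rx dose
    selectivity = tv_piv / piv      # fraction of Rx volume inside target
    gradient = piv_half / piv       # dose fall-off outside the target
    return coverage, selectivity, gradient

# Illustrative plan: 2.0 cm^3 target, 2.2 cm^3 prescription isodose.
cov, sel, gi = plan_indices(tv=2.0, piv=2.2, tv_piv=1.9, piv_half=6.6)
```

Comparing these ratios between the TMR 10 and convolution calculations isolates plan-shape changes from the overall dose-scale differences reflected in the treatment times.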
NASA Astrophysics Data System (ADS)
Nishioka, Hirotaka; Ando, Koji
2011-05-01
By making use of an ab initio fragment-based electronic structure method, the fragment molecular orbital-linear combination of MOs of the fragments (FMO-LCMO) method developed by Tsuneyuki et al. [Chem. Phys. Lett. 476, 104 (2009); doi:10.1016/j.cplett.2009.05.069], we propose a novel approach to describe long-distance electron transfer (ET) in large systems. The FMO-LCMO method produces a one-electron Hamiltonian of the whole system using the output of the FMO calculation at a computational cost much lower than conventional all-electron calculations. Diagonalizing the FMO-LCMO Hamiltonian matrix, the molecular orbitals (MOs) of the whole system can be described by the LCMOs. In our approach, the electronic coupling T_DA of ET is calculated from the energy splitting of the frontier MOs of the whole system or by a perturbation method in terms of the FMO-LCMO Hamiltonian matrix. Moreover, by taking into account only the valence MOs of the fragments, we can considerably reduce the computational cost of evaluating T_DA. Our approach was tested on four different kinds of model ET systems, with non-covalent stacks of methane, non-covalent stacks of benzene, trans-alkanes, and alanine polypeptides as their bridge molecules, respectively. As a result, it reproduced reasonable T_DA values for all cases compared to the reference all-electron calculations. Furthermore, the tunneling pathway at fragment-based resolution was obtained from the tunneling current method with the FMO-LCMO Hamiltonian matrix.
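The energy-splitting route to the coupling can be illustrated with a textbook two-state model; this is a sketch of the principle, not the FMO-LCMO implementation:

```python
def coupling_from_splitting(e_plus, e_minus):
    """Electronic coupling |T_DA| from the splitting of the two adiabatic
    frontier MOs at donor-acceptor resonance: |T_DA| = |E+ - E-| / 2."""
    return abs(e_plus - e_minus) / 2.0

# Consistency check against a 2x2 Hamiltonian [[e0, t], [t, e0]],
# whose eigenvalues are e0 - t and e0 + t.
e0, t = -5.0, 0.01
eigs = (e0 - t, e0 + t)
print(round(coupling_from_splitting(*eigs), 12))  # 0.01
```

In the FMO-LCMO context, the eigenvalues would come from diagonalizing the fragment-based Hamiltonian of the whole system rather than a 2x2 model.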
Calculating track-based observables for the LHC.
Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J
2013-09-06
By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD by matching partonic cross sections onto new nonperturbative objects called track functions, which absorb infrared divergences. The track function T_i(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the Pythia parton shower. We then perform a next-to-leading-order calculation of the total energy fraction of charged particles in e+e- → hadrons. To demonstrate the implications of our framework for the LHC, we match the Pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.
Ameri, Shideh Kabiri; Singh, Pramod K; Dokmeci, Mehmet R; Khademhosseini, Ali; Xu, Qiaobing; Sonkusale, Sameer R
2014-04-15
We present a portable lab-on-chip device for high-throughput trapping and lysis of single cells with in-situ impedance monitoring in an all-electronic approach. The lab-on-chip device consists of microwell arrays between transparent conducting electrodes within a microfluidic channel to deliver and extract cells using alternating current (AC) dielectrophoresis. Cells are lysed with high efficiency using direct current (DC) electric fields between the electrodes. Results are presented for trapping and lysis of human red blood cells. Impedance spectroscopy is used to estimate the percentage of filled wells with cells and to monitor lysis. The results show impedance between electrodes decreases with increase in the percentage of filled wells with cells and drops to a minimum after lysis. Impedance monitoring provides a reasonably accurate measurement of cell trapping and lysis. Utilizing an all-electronic approach eliminates the need for bulky optical components and cameras for monitoring.
Wannier-based calculation of the orbital magnetization in crystals
NASA Astrophysics Data System (ADS)
Lopez, M. G.; Vanderbilt, David; Thonhauser, T.; Souza, Ivo
2012-01-01
We present a first-principles scheme that allows the orbital magnetization of a magnetic crystal to be evaluated accurately and efficiently even in the presence of complex Fermi surfaces. Starting from an initial electronic-structure calculation with a coarse ab initio k-point mesh, maximally localized Wannier functions are constructed and used to interpolate the necessary k-space quantities on a fine mesh, in parallel to a previously developed formalism for the anomalous Hall conductivity [X. Wang, J. Yates, I. Souza, and D. Vanderbilt, Phys. Rev. B 74, 195118 (2006)]. We formulate our new approach in a manifestly gauge-invariant manner, expressing the orbital magnetization in terms of traces over matrices in Wannier space. Since only a few (e.g., of the order of 20) Wannier functions are typically needed to describe the occupied and partially occupied bands, these Wannier matrices are small, which makes the interpolation itself very efficient. The method has been used to calculate the orbital magnetization of bcc Fe, hcp Co, and fcc Ni. Unlike an approximate calculation based on integrating orbital currents inside atomic spheres, our results nicely reproduce the experimentally measured ordering of the orbital magnetization in these three materials.
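The interpolation step can be sketched in one dimension with a single Wannier function, where the band energy is the Fourier sum of real-space Hamiltonian matrix elements. A one-band toy model, not the paper's orbital-magnetization matrix traces:

```python
import cmath
import math

def interpolated_band(h_r, k):
    """Wannier interpolation in 1D: eps(k) = sum_R e^{ikR} H(R).
    h_r maps an integer lattice vector R to the matrix element H(R)."""
    return sum(t * cmath.exp(1j * k * r) for r, t in h_r.items()).real

# Nearest-neighbour chain: on-site 0, hopping -1  =>  eps(k) = -2 cos k.
h_r = {0: 0.0, 1: -1.0, -1: -1.0}
fine_mesh = [2 * math.pi * i / 400 for i in range(400)]
band = [interpolated_band(h_r, k) for k in fine_mesh]
print(round(band[0], 6))  # eps(k=0) = -2.0
```

The efficiency argument in the abstract is visible even here: the coarse-mesh calculation fixes the few H(R) elements once, after which evaluating the band on an arbitrarily fine k mesh costs only small sums (or, in the multi-band case, diagonalizations of small matrices).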
Sensor Based Engine Life Calculation: A Probabilistic Perspective
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei; Chen, Philip
2003-01-01
It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.
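A toy version of the Monte Carlo approach described above, with an assumed damage model and parameter distribution rather than the paper's closed-loop engine simulation and TMF life model:

```python
import random

def prob_failure(n_missions=2000, n_trials=20_000, seed=1):
    """Monte Carlo sketch of sensor-based life usage: sample a
    mission-averaged operating severity per trial, convert it to
    accumulated TMF damage with a toy power-law damage model, and
    count trials whose damage exceeds the design limit of 1.
    The 4.5e-4 per-mission damage rate, the exponent 8, and the
    N(1, 0.02) severity spread are illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        severity = max(rng.gauss(1.0, 0.02), 0.0)   # sensed / nominal
        damage = n_missions * 4.5e-4 * severity ** 8
        if damage >= 1.0:
            failures += 1
    return failures / n_trials
```

The steep exponent mimics the strong sensitivity of TMF life to operating temperature: a few percent of extra severity pushes a nominally safe usage profile past the limit, which is exactly the uncertainty a sensor-based calculation is meant to resolve.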
Supersampling method for efficient grid-based electronic structure calculations.
Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn
2016-03-07
The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent and vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacings, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.
Towards automated calculation of evidence-based clinical scores
Aakre, Christopher A; Dziadzko, Mikhail A; Herasevich, Vitaly
2017-01-01
AIM To determine clinical scores important for automated calculation in the inpatient setting. METHODS A modified Delphi methodology was used to create consensus of important clinical scores for inpatient practice. A list of 176 externally validated clinical scores were identified from freely available internet-based services frequently used by clinicians. Scores were categorized based on pertinent specialty and a customized survey was created for each clinician specialty group. Clinicians were asked to rank each score based on importance of automated calculation to their clinical practice in three categories - "not important", "nice to have", or "very important". Surveys were solicited via specialty-group listserv over a 3-mo interval. Respondents must have been practicing physicians with more than 20% clinical time spent in the inpatient setting. Within each specialty, consensus was established for any clinical score with greater than 70% of responses in a single category and a minimum of 10 responses. Logistic regression was performed to determine predictors of automation importance. RESULTS 79/144 (54.9%) surveys were completed and 72/144 (50%) surveys were completed by eligible respondents. Only the critical care and internal medicine specialties surpassed the 10-respondent threshold (14 respondents each). For internists, 2/110 (1.8%) of scores were "very important" and 73/110 (66.4%) were "nice to have". For intensivists, no scores were "very important" and 26/76 (34.2%) were "nice to have". Only the number of medical history (OR = 2.34; 95%CI: 1.26-4.67; P < 0.05) and vital sign (OR = 1.88; 95%CI: 1.03-3.68; P < 0.05) variables for clinical scores used by internists was predictive of desire for automation. CONCLUSION Few clinical scores were deemed "very important" for automated calculation. Future efforts towards score calculator automation should focus on technically feasible
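The consensus rule from the Methods (one category holding more than 70% of at least 10 responses) is easy to state in code; the vote counts below are illustrative:

```python
from collections import Counter

def consensus(responses, threshold=0.70, min_n=10):
    """Return the consensus category, or None if the minimum response
    count is not met or no category exceeds the threshold share."""
    n = len(responses)
    if n < min_n:
        return None
    category, count = Counter(responses).most_common(1)[0]
    return category if count / n > threshold else None

# 14 respondents, 11 of whom chose the same category (11/14 = 78.6%).
votes = ["nice to have"] * 11 + ["very important"] * 3
print(consensus(votes))  # nice to have
```

This also makes the paper's eligibility constraint concrete: specialties with fewer than 10 eligible respondents can never reach consensus on any score, regardless of agreement.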
NASA Astrophysics Data System (ADS)
Losilla, S. A.; Sundholm, D.
2012-06-01
A computational scheme to perform accurate numerical calculations of electrostatic potentials and interaction energies for molecular systems has been developed and implemented. Molecular electron and energy densities are divided into overlapping atom-centered atomic contributions and a three-dimensional molecular remainder. The steep nuclear cusps are included in the atom-centered functions, making the three-dimensional remainder smooth enough to be accurately represented with a tractable amount of grid points. The one-dimensional radial functions of the atom-centered contributions as well as the three-dimensional remainder are expanded using finite element functions. The electrostatic potential is calculated by integrating the Coulomb potential for each separate density contribution, using our tensorial finite element method for the three-dimensional remainder. We also provide algorithms to compute accurate electron-electron and electron-nuclear interactions numerically using the proposed partitioning. The methods have been tested on all-electron densities of 18 reasonably large molecules containing elements up to Zn. The accuracy of the calculated Coulomb interaction energies is in the range of 10^-3 to 10^-6 E_h when using an equidistant grid with a step length of 0.05 a_0.
LRCS calculation and imaging of complex target based on GRECO
NASA Astrophysics Data System (ADS)
Wu, Wen; Xu, Fu-chang; Han, Xiang'e.
2013-09-01
The research on laser radar cross section (LRCS) is of great significance in many fields, such as defense, aviation, aerospace, and meteorology. Current studies of LRCS focus mainly on full-size targets. The LRCS of a full-size target, characterizing the scattering properties of the target, is influenced by the target material, shape, size, and the wavelength of the laser, but it is independent of the size of the irradiation beam. In fact, when the target is large and the beam emitted from the laser radar is very narrow, the irradiation may be local rather than full-size. In this case, the scattering properties of the target depend not only on the size of the irradiation beam on the target but also on its direction. Therefore, it is essential to analyze the scattering properties of a complex target under local irradiation. Based on the basic theory of graphical electromagnetic computing (GRECO), we improved the method used in the processing of electromagnetic scattering and calculated the monostatic and bistatic LRCS of several targets. The results are consistent with earlier work by other researchers. In addition, by changing the divergence angle of the incident beam, the situation of a narrow beam under local irradiation was represented. Under different sizes of irradiation beam, the local cross section was analyzed and calculated in detail. The results indicate that the size of the irradiation beam can greatly affect the LRCS of targets. Finally, we calculated the scattering cross section of each location point; with color tags, the scattering intensity distribution of every location point on the target was displayed, revealed by the color of each pixel. On the basis of this scattering intensity distribution, imaging of the target was realized, which provides a reference for quick identification of the target.
Children Base Their Investment on Calculated Pay-Off
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2012-01-01
To investigate the rise of economic abilities during development, we studied children aged between 3 and 10 in an exchange situation requiring them to calculate their investment based on different offers. One experimenter gave back a reward twice the amount given by the children, and a second always gave back the same quantity regardless of the amount received. To maximize pay-offs, children had to invest a maximal amount with the first and a minimal amount with the second. About one third of the 5-year-olds and most 7- and 10-year-olds were able to adjust their investment according to the partner, while all 3-year-olds failed. These performances are likely related to the rise of cognitive and social skills after the age of 4. PMID:22413006
NASA Astrophysics Data System (ADS)
Li, Jun; Williamson, Andrew
2005-03-01
Recent experiments [Y. Wu et al., Nature 430, 61 (2004), and references therein] invoke Si nanowires as promising materials for nanoscale electronic and optical devices. We carried out electronic structure calculations of silicon chains and nanowires using both the full-potential linearized augmented plane wave (FLAPW) method [E. Wimmer, H. Krakauer, M. Weinert, and A. J. Freeman, Phys. Rev. B 24, 864 (1981)] and the pseudopotential plane wave method. We studied two sets of H-terminated one-nanometer silicon wires, one oriented along (001) and the other along (111); both show direct band gaps, with the (111)-oriented wires showing a smaller gap (˜2.1 eV) than the (001) wires (˜2.5 eV). This trend differs from that reported in the literature [F. Buda et al., Phys. Rev. Lett. 69, 1272 (1992); A. M. Saitta et al., Phys. Rev. B 53, 1446 (1996)], but it is the same in both our all-electron and well-converged pseudopotential calculations. We also found that structural relaxations affect the band structures of differently oriented wires differently: the band gap changes by nearly 0.2 eV between the ideal and relaxed models for the (001) wires, while the change is negligible for the (111) wires.
NASA Astrophysics Data System (ADS)
Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias
2015-05-01
We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that allow accurate results to be produced. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include a detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wang, Jian Hua; Li, Guang Yao; Li, Sheng
2010-06-01
Long computing times have been a bottleneck for finite element applications. In this paper, an effective method to speed up FEM calculations using a modern graphics processing unit (GPU) and a programmable rendering pipeline is put forward. The method devises a representation of element information suited to the features of the GPU, converts all element calculations into a rendering process, performs the internal-force calculation of every element in this way, and overcomes the low degree of parallelism of previous single-computer implementations. Studies show that this method can greatly improve efficiency and shorten computing time. Simulation results for an elasticity problem with a large number of elements in sheet metal show that GPU-based parallel calculation is faster than the CPU-based equivalent. This approach is a useful and efficient way to solve practical engineering problems.
Ice flood velocity calculating approach based on single view metrology
NASA Astrophysics Data System (ADS)
Wu, X.; Xu, L.
2017-02-01
The Yellow River is the river in which ice floods occur most frequently in China; ice flood forecasting is therefore of great significance for flood prevention. In various ice flood forecast models, the flow velocity is one of the most important parameters. Despite its importance, its acquisition still relies heavily on manual observation or on empirical formulas. In recent years, with the development of video surveillance technology and wireless transmission networks, the Yellow River Conservancy Commission has set up an ice-situation monitoring system in which live video is transmitted to the monitoring center through 3G mobile networks. In this paper, an approach to obtaining the ice velocity based on single view metrology and motion tracking, using the monitoring videos as input data, is proposed. First, the river surface can be approximated as a plane; under this assumption, we analyze the geometric relation between object space and image space and present the principle for measuring object-space lengths from the image. Second, we use pyramidal Lucas-Kanade (LK) optical flow to track the moving ice. Combining the camera calibration results with single view metrology, we propose a pipeline to calculate the real velocity of the ice flood. Finally, we implemented a prototype system and used it to test the reliability and rationality of the whole solution.
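As a hedged sketch of the final step described above, the snippet below converts a tracked ice displacement in the image into a ground-plane speed. It assumes the river surface is planar and that camera calibration has already produced a homography `H` mapping pixels to metric plane coordinates; the function names and the toy scale-only homography are ours, not the paper's:

```python
import numpy as np

def pixel_track_to_velocity(h_matrix, p0, p1, dt):
    """Map two tracked image points (pixels) onto the river plane via a
    homography and return the ground speed in m/s. Assumes a planar river
    surface and a calibrated pixel-to-metres homography h_matrix."""
    def to_plane(p):
        v = h_matrix @ np.array([p[0], p[1], 1.0])
        return v[:2] / v[2]  # perspective divide
    x0, x1 = to_plane(p0), to_plane(p1)
    return np.linalg.norm(x1 - x0) / dt

# Toy homography: a pure scale of 0.05 m per pixel (illustrative only)
H = np.diag([0.05, 0.05, 1.0])
# 20-pixel displacement over 1 s -> 1.0 m/s on the plane
speed = pixel_track_to_velocity(H, (100, 200), (120, 200), dt=1.0)
```

In the real pipeline the point pair (`p0`, `p1`) would come from the pyramidal LK tracker rather than being given by hand.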
Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization
LaMar, E; Hamann, B; Joy, K I
2001-10-16
Multiresolution texture-based volume visualization is an excellent technique for enabling interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. The authors extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms, with one error value computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets with integer function values between 0 and 255, they observe that while the set of error pairs can be quite large, the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of that pair. This approach dramatically reduces the computation time involved and allows them to quickly re-compute the error associated with a new transfer function.
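The unique-pair table described above can be sketched in a few lines. This is an illustrative reconstruction with our own function names, not the authors' code; for byte data there are at most 256 × 256 = 65 536 unique (original, approximating) pairs, so the error function runs far fewer times than once per voxel:

```python
import numpy as np

def error_from_pair_table(original, approx, err_fn):
    """Histogram the (original, approximating) byte pairs once, then
    evaluate err_fn only on the unique pairs, weighted by frequency."""
    # Encode each pair as a single key: 256*orig + approx
    keys = original.astype(np.int32) * 256 + approx.astype(np.int32)
    unique, counts = np.unique(keys, return_counts=True)
    o, a = unique // 256, unique % 256
    return np.sum(err_fn(o, a) * counts)

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=100_000, dtype=np.uint8)
appr = rng.integers(0, 256, size=100_000, dtype=np.uint8)
sq_err = lambda o, a: (o - a) ** 2
fast = error_from_pair_table(orig, appr, sq_err)
# Brute-force reference: evaluate the error once per voxel
slow = np.sum((orig.astype(np.int64) - appr.astype(np.int64)) ** 2)
```

Changing the transfer function only changes `err_fn`; the pair table itself can be reused, which is the source of the speed-up the abstract reports.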
Lee, Kyungmin; Cho, Soohyun
2017-01-26
Mathematics anxiety (MA) refers to the experience of negative affect when engaging in mathematical activity. According to Ashcraft and Kirk (2001), MA selectively affects calculation with high working memory (WM) demand. On the other hand, Maloney, Ansari, and Fugelsang (2011) claim that MA affects all mathematical activities, including even the most basic ones such as magnitude comparison. The two theories make opposing predictions on the negative effect of MA on magnitude processing and simple calculation that make minimal demands on WM. We propose that MA has a selective impact on mathematical problem solving that likely involves processing of magnitude representations. Based on our hypothesis, MA will impinge upon magnitude processing even though it makes minimal demand on WM, but will spare retrieval-based, simple calculation, because it does not require magnitude processing. Our hypothesis can reconcile opposing predictions on the negative effect of MA on magnitude processing and simple calculation. In the present study, we observed a negative relationship between MA and performance on magnitude comparison and calculation with high but not low WM demand. These results demonstrate that MA has an impact on a wide range of mathematical performance, which depends on one's sense of magnitude, but spares over-practiced, retrieval-based calculation.
[CUDA-based fast dose calculation in radiotherapy].
Wang, Xianliang; Liu, Cao; Hou, Qing
2011-10-01
Dose calculation plays a key role in treatment planning for radiotherapy, and its algorithms require both high accuracy and computational efficiency. The finite-size pencil beam (FSPB) algorithm is a method commonly adopted in treatment planning systems for radiotherapy. However, improvement of its computational efficiency is still desirable for purposes such as real-time treatment planning. In this paper, we present an implementation of the FSPB in which the most time-consuming parts of the algorithm are parallelized and ported to a graphics processing unit (GPU). Compared with the FSPB running entirely on a central processing unit (CPU), the GPU implementation speeds up the dose calculation by a factor of 25-35 on a low-priced GPU (GeForce GT320) and by a factor of 55-100 on a Tesla C1060, indicating that the GPU-implemented FSPB can provide dose calculations fast enough for real-time treatment planning.
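For illustration only, the superposition inner loop that such a pencil-beam implementation parallelizes might look like the CPU sketch below. A real FSPB kernel is depth- and density-dependent and the paper's actual data layout is not described here; all names are ours:

```python
import numpy as np

def fspb_dose_2d(fluence, kernel):
    """Superpose a finite-size pencil-beam kernel over a beamlet fluence
    map: each nonzero beamlet deposits a scaled copy of the kernel.
    This per-beamlet accumulation is the embarrassingly parallel part
    that a GPU port distributes across threads."""
    ny, nx = fluence.shape
    ky, kx = kernel.shape
    dose = np.zeros((ny + ky - 1, nx + kx - 1))
    for (iy, ix), w in np.ndenumerate(fluence):
        if w != 0.0:
            dose[iy:iy + ky, ix:ix + kx] += w * kernel
    return dose

# Two beamlets with weights 1 and 2, uniform 2x2 toy kernel
dose = fspb_dose_2d(np.array([[1.0, 0.0], [0.0, 2.0]]), np.ones((2, 2)))
```

The operation is a (weighted) discrete convolution, so on the GPU each output voxel can instead gather contributions independently, avoiding write conflicts.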
Ruiz, B C; Tucker, W K; Kirby, R R
1975-01-01
With a desk-top programmable calculator, it is now possible to perform complex, previously time-consuming computations in the blood-gas laboratory. The authors have developed a program with the necessary algorithms for temperature correction of blood gases and calculation of acid-base variables and intrapulmonary shunt. It was necessary to develop formulas for the PO2 temperature-correction coefficient, the oxyhemoglobin dissociation curve for adults (with the necessary adjustments for fetal blood), and changes in water vapor pressure due to variation in body temperature. Using this program in conjunction with a Monroe 1860-21 statistical programmable calculator, it is possible to temperature-correct pH, PCO2, and PO2. The machine will compute the alveolar-arterial oxygen tension gradient, oxygen saturation (SO2), oxygen content (CO2), actual HCO3(-), and a modified base excess. If arterial and mixed venous blood are obtained, the calculator will print out intrapulmonary shunt data (Qs/Qt) and arteriovenous oxygen differences (a-vDO2). There is also a formula to compute P50 if pH, PCO2, PO2, and measured SO2 from two samples of tonometered blood (one above and one below 50 per cent saturation) are entered into the calculator.
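The shunt and arteriovenous-difference outputs mentioned above can be illustrated with the standard textbook formulas (oxygen content = 1.34·Hb·SO2 + 0.003·PO2; shunt equation Qs/Qt = (Cc'O2 − CaO2)/(Cc'O2 − CvO2)). These are the classic forms, not necessarily the exact coefficients in the authors' program, and the sample values are invented:

```python
def o2_content(hb_g_dl, sat_frac, po2_mmhg):
    """Oxygen content (mL O2/dL): hemoglobin-bound plus dissolved.
    Textbook constants (1.34 mL O2/g Hb, 0.003 mL/dL per mmHg)."""
    return 1.34 * hb_g_dl * sat_frac + 0.003 * po2_mmhg

def shunt_fraction(cc_o2, ca_o2, cv_o2):
    """Classic shunt equation Qs/Qt = (Cc'O2 - CaO2) / (Cc'O2 - CvO2)."""
    return (cc_o2 - ca_o2) / (cc_o2 - cv_o2)

hb = 15.0                           # g/dL, illustrative
cc = o2_content(hb, 1.00, 100.0)    # end-capillary (assumed fully saturated)
ca = o2_content(hb, 0.98, 90.0)     # arterial
cv = o2_content(hb, 0.75, 40.0)     # mixed venous
qs_qt = shunt_fraction(cc, ca, cv)  # intrapulmonary shunt fraction
avdo2 = ca - cv                     # arteriovenous O2 difference, mL/dL
```

With these sample values the shunt fraction comes out under 10 per cent, a physiologically plausible figure for the toy inputs.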
Diffraction Grating Efficiency Calculations Based on Real Groove Profiles
NASA Technical Reports Server (NTRS)
Content, David; Sroda, Tom; Palmer, Christopher; Kuznetsov, Ivan
2000-01-01
The program we are attempting to bring about combines three difficult features in order to demonstrate the accuracy of efficiency predictions: (1) accurate groove metrology methods for surface-relief gratings; (2) rigorous and usable electromagnetic efficiency calculation codes; and (3) accurate efficiency measurements in polarized light. The benefit would be an increase in yield for high-performance gratings. Many such applications suffer long lead times or serious performance loss when new gratings are made that do not meet requirements or expectations.
NASA Astrophysics Data System (ADS)
Gulans, Andris; Kontur, Stefan; Meisenbichler, Christian; Nabok, Dmitrii; Pavone, Pasquale; Rigamonti, Santiago; Sagmeister, Stephan; Werner, Ute; Draxl, Claudia
2014-09-01
Linearized augmented planewave methods are known as the most precise numerical schemes for solving the Kohn-Sham equations of density-functional theory (DFT). In this review, we describe how this method is realized in the all-electron full-potential computer package exciting. We emphasize the variety of related basis sets, subsumed as (linearized) augmented planewave plus local orbital methods, discussing their pros and cons, and we show that extremely high accuracy (microhartrees) can be achieved if the basis is chosen carefully. As the name of the code suggests, exciting is not restricted to ground-state calculations but has a major focus on excited-state properties. It includes time-dependent DFT in the linear-response regime with various static and dynamical exchange-correlation kernels. These are preferably used to compute optical and electron-loss spectra for metals, molecules, and semiconductors with weak electron-hole interactions. exciting makes use of many-body perturbation theory for charged and neutral excitations. To obtain the quasi-particle band structure, the GW approach is implemented in the single-shot approximation known as G0W0. Optical absorption spectra for valence and core excitations are handled by solution of the Bethe-Salpeter equation, which allows for the description of strongly bound excitons. Besides these methodological aspects, we demonstrate the broad range of possible applications with prototypical examples, comprising elastic properties, phonons, thermal-expansion coefficients, dielectric tensors and loss functions, the magneto-optical Kerr effect, core-level spectra, and more.
Contact resistance calculations based on a variational method
NASA Astrophysics Data System (ADS)
Leong, M. S.; Choo, S. C.; Tan, L. S.; Goh, T. L.
1988-07-01
Noble's variational method is used to solve the contact resistance problem that arises when a circular disc source electrode is in contact with a semiconductor slab through an infinitesimally thin layer of resistive material. The method assumes that the source current density distribution J(r) has the form J(r) = K1(1 - r²)^(-μ) + K2(1 - r²)^(1/2) + K3(1 - r²)^(3/2), where the parameters K1, K2, K3, and μ are determined by variational principles. Calculations of the source current density and the total slab resistance, performed for a wide range of contact resistivities, show that the results are practically indistinguishable from those derived from an exact mixed boundary value method proposed earlier by us. Whilst this method of using an optimised μ is very accurate, it is computationally slow. By fixing μ at a constant value of 1/4, we find that we can drastically reduce the computation time for each calculation of the total slab resistance to 1.5 s on an Apple II microcomputer and still achieve an overall accuracy of 1%. Tables of the abscissas and weights required for implementation of the numerical scheme are provided in the paper.
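As a small sketch of the trial form above: since ∫₀¹ (1 − r²)^α 2πr dr = π/(α + 1) for α > −1, the total current through the unit disc has a simple closed form (valid for μ < 1). The coefficient values used below are illustrative, not the variationally optimised ones from the paper:

```python
import numpy as np

def trial_current_density(r, k1, k2, k3, mu):
    """Noble-type trial form on the unit disc:
    J(r) = K1(1-r^2)^(-mu) + K2(1-r^2)^(1/2) + K3(1-r^2)^(3/2)."""
    s = 1.0 - r * r
    return k1 * s ** (-mu) + k2 * np.sqrt(s) + k3 * s ** 1.5

def total_current(k1, k2, k3, mu):
    """Closed form of I = integral_0^1 J(r) 2*pi*r dr, term by term:
    each (1-r^2)^alpha integrates to pi/(alpha+1)."""
    return np.pi * (k1 / (1.0 - mu) + k2 / 1.5 + k3 / 2.5)

# With the paper's fixed mu = 1/4 and a single unit K1 term,
# the total current is pi / (1 - 1/4) = 4*pi/3.
i_total = total_current(1.0, 0.0, 0.0, 0.25)
```

Such a normalisation constraint (fixed total current) is what reduces the variational problem to finding the K's and, optionally, μ.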
Coupled-cluster based basis sets for valence correlation calculations
NASA Astrophysics Data System (ADS)
Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.
2016-03-01
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via
QED Based Calculation of the Fine Structure Constant
Lestone, John Paul
2016-10-13
Quantum electrodynamics is complex, and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ². This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects, the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.
Ray-Based Calculations of Backscatter in Laser Fusion Targets
Strozzi, D J; Williams, E A; Hinkel, D E; Froula, D H; London, R A; Callahan, D A
2008-02-26
A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pf3d. Comparisons with Brillouin-scattering experiments at the OMEGA Laser Facility [T. R. Boehly et al., Opt. Commun. 133, p. 495 (1997)] show that laser speckles greatly enhance the reflectivity over the deplete results. An approximate upper bound on this enhancement, motivated by phase conjugation, is given by doubling the deplete coupling coefficient. Analysis with deplete of an ignition design for the National Ignition Facility (NIF) [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, p. 755 (1994)], with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bound the speckle enhancement suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-09
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
Stress formulation in the all-electron full-potential linearized augmented plane wave method
NASA Astrophysics Data System (ADS)
Nagasako, Naoyuki; Oguchi, Tamio
2012-02-01
A stress formulation in the linearized augmented plane wave (LAPW) method was proposed in 2002 [1] as an extension of the force formulation in the LAPW method [2]. However, only pressure calculations for Al and Si were reported in Ref. [1], and even now stress calculations have not been fully established in the LAPW method. In order to make it possible to efficiently relax the lattice shape and atomic positions simultaneously, and to precisely evaluate elastic constants within the LAPW method, we reformulate the stress formula using the Soler-Williams representation [3]. The validity of the formulation is tested by comparing the pressure obtained as the trace of the stress tensor with that estimated from total energies for a wide variety of material systems. The results show that the pressure is estimated with an accuracy of better than 0.1 GPa. Calculations of the shear elastic constant show that the shear components of the stress tensor are also precisely computed with the present formulation [4]. [1] T. Thonhauser et al., Solid State Commun. 124, 275 (2002). [2] R. Yu et al., Phys. Rev. B 43, 6411 (1991). [3] J. M. Soler and A. R. Williams, Phys. Rev. B 40, 1560 (1989). [4] N. Nagasako and T. Oguchi, J. Phys. Soc. Jpn. 80, 024701 (2011).
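The validation described above (pressure as the trace of the stress tensor checked against −dE/dV from total energies) can be illustrated with a toy equation of state; the quadratic E(V) and all names below are ours, not the LAPW implementation:

```python
import numpy as np

def pressure_from_stress(sigma):
    """P = -tr(sigma)/3: pressure as the trace of the stress tensor."""
    return -np.trace(sigma) / 3.0

def pressure_from_energy(energy, v0, dv=1e-4):
    """P = -dE/dV via a central finite difference about volume v0."""
    return -(energy(v0 + dv) - energy(v0 - dv)) / (2 * dv)

# Toy equation of state: E(V) = 0.5*B*(V - V0)^2 / V0 (harmonic about V0)
B, V0 = 100.0, 10.0
E = lambda v: 0.5 * B * (v - V0) ** 2 / V0

V = 10.2                            # slightly expanded cell
p_fd = pressure_from_energy(E, V)   # -B*(V - V0)/V0 = -2.0 analytically
sigma = -p_fd * np.eye(3)           # hydrostatic stress consistent with p_fd
```

For a hydrostatic state the two routes must agree exactly; in the LAPW context the reported sub-0.1 GPa agreement is the analogue of this consistency check.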
All-electron GW quasiparticle band structures of group 14 nitride compounds
NASA Astrophysics Data System (ADS)
Chu, Iek-Heng; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping
2014-07-01
We have investigated the group 14 nitrides (M3N4) in the spinel phase (γ-M3N4 with M = C, Si, Ge, and Sn) and β phase (β-M3N4 with M = Si, Ge, and Sn) using density functional theory with the local density approximation and the GW approximation. The Kohn-Sham energies of these systems have been first calculated within the framework of full-potential linearized augmented plane waves (LAPW) and then corrected using single-shot G0W0 calculations, which we have implemented in the modified version of the Elk full-potential LAPW code. Direct band gaps at the Γ point have been found for spinel-type nitrides γ-M3N4 with M = Si, Ge, and Sn. The corresponding GW-corrected band gaps agree with experiment. We have also found that the GW calculations with and without the plasmon-pole approximation give very similar results, even when the system contains semi-core d electrons. These spinel-type nitrides are novel materials for potential optoelectronic applications because of their direct and tunable band gaps.
UAV-based NDVI calculation over grassland: An alternative approach
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters in seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers such as MODIS, with moderate ground resolution of at best 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly well suited for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. We therefore propose an alternative, considerably cheaper method of calculating NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR that acquires the NIR spectrum, its internal infrared filter having been removed, with a mounted optical filter additionally blocking all wavelengths below 700 nm; and (ii) a Ricoh GR in RGB configuration with two optical filters blocking wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
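The index itself is a one-line computation once the two camera images have been converted to co-registered reflectance maps; the sketch below assumes that alignment has already been done, and the sample arrays are illustrative:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel from two
    co-registered reflectance maps (e.g. the modified NIR camera and
    the red channel of the RGB camera). eps guards against division
    by zero over dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance maps: top row vegetated, bottom-right bare soil
nir_band = np.array([[0.45, 0.50], [0.40, 0.10]])
red_band = np.array([[0.05, 0.10], [0.08, 0.09]])
vi = ndvi(nir_band, red_band)  # dense vegetation gives values near 0.8
```

NDVI is bounded in [−1, 1] by construction, which is a convenient sanity check on any camera-derived reflectance product.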
Environment-based pin-power reconstruction method for homogeneous core calculations
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach involving assembly and core calculations. In the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods, and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. The methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3×3 assemblies exhibiting burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction, insofar as it is consistent with the core loading pattern. (authors)
Model-based calculations of fiber output fields for fiber-based spectroscopy
NASA Astrophysics Data System (ADS)
Hernandez, Eloy; Bodenmüller, Daniel; Roth, Martin M.; Kelz, Andreas
2016-08-01
The accurate characterization of the field at the output of optical fibres is of relevance for precision spectroscopy in astronomy. The modal effects of the fibre translate into the illumination of the spectrograph pupil and impact the resulting point spread function (PSF). A model is presented, based on the Eigenmode Expansion Method (EEM), that calculates the output field of a given fibre for different manipulations of the input field. The fibre design and mode calculation are done via the commercially available RSoft FemSIM software, and we developed a Python script to apply the EEM. Results are shown for different configuration parameters, such as spatial and angular displacements of the input field, spot-size and propagation-length variations, different transverse fibre geometries, and different wavelengths. This work is part of the phase A study of the fibre system for MOSAIC, a proposed multi-object spectrograph for the European Extremely Large Telescope (ELT-MOS).
Aeroelastic Calculations Based on Three-Dimensional Euler Analysis
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.
1998-01-01
This paper presents representative results from an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic code (TURBO). Unsteady pressure, lift, and moment distributions are presented for a helical fan test configuration, which is used to verify the code by comparison to two-dimensional linear potential (flat-plate) theory. The results are for pitching and plunging motions over a range of phase angles. Good agreement with linear theory is seen for all phase angles except those near acoustic resonances. The agreement is better for pitching motions than for plunging motions; the reason for this difference is not understood at present. Numerical checks have been performed to ensure that the solutions are independent of time step, converged to periodicity, and linearly dependent on the amplitude of blade motion. The paper concludes with an evaluation of the current state of development of the TURBO-AE code and presents plans for its further development and validation.
Evidence-Based Current Surgical Practice: Calculous Gallbladder Disease
Duncan, Casey B.; Riall, Taylor S.
2012-01-01
Gallbladder disease is common and, if managed incorrectly, can lead to high rates of morbidity, mortality, and extraneous costs. The most common complications of gallstones include biliary colic, acute cholecystitis, common bile duct stones, and gallstone pancreatitis. Ultrasound is the initial imaging modality of choice. Additional diagnostic and therapeutic studies including computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance cholangiopancreatography (MRCP), endoscopic ultrasound (EUS), and endoscopic retrograde cholangiopancreatography (ERCP) are not routinely required but may play a role in specific situations. Biliary colic and acute cholecystitis are best treated with early laparoscopic cholecystectomy. Patients with common bile duct stones should be managed with cholecystectomy, either after or concurrent with endoscopic or surgical relief of obstruction and clearance of stones from the bile duct. Mild gallstone pancreatitis should be treated with cholecystectomy during the initial hospitalization to prevent recurrence. Emerging techniques for cholecystectomy include single-incision laparoscopic surgery (SILS) and natural orifice transluminal endoscopic surgery (NOTES). Early results in highly selected patients demonstrate the safety of these techniques. The management of complications of the gallbladder should be timely and evidence-based, and choice of procedures, particularly for common bile duct stones, is largely influenced by facility and surgeon factors. PMID:22986769
Bubin, Sergiy; Adamowicz, Ludwik
2014-01-14
Benchmark variational calculations are performed for the seven lowest 1s²2s np (¹P), n = 2…8, states of the beryllium atom. The calculations explicitly include the effect of the finite mass of the ⁹Be nucleus and account perturbatively for the mass-velocity, Darwin, and spin-spin relativistic corrections. The wave functions of the states are expanded in terms of all-electron explicitly correlated Gaussian functions. Basis sets of up to 12 500 optimized Gaussians are used. The maximum discrepancy between the calculated nonrelativistic and experimental energies of the 1s²2s np (¹P) → 1s²2s² (¹S) transitions is about 12 cm⁻¹. The inclusion of the relativistic corrections reduces the discrepancy to below 0.8 cm⁻¹.
NASA Astrophysics Data System (ADS)
Shiga, Motoyuki; Tachikawa, Masanori; Miura, Shinichi
2000-12-01
We present an accurate calculational scheme for many-body systems composed of electrons and nuclei, by path integral molecular dynamics technique combined with the ab initio molecular orbital theory. Based upon the scheme, the simulation of a water molecule at room temperature is demonstrated, applying all-electron calculation at the Hartree-Fock level of theory.
Fast calculation with point-based method to make CGHs of the polygon model
NASA Astrophysics Data System (ADS)
Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji
2014-02-01
Holography is a three-dimensional display technology: light waves from an object are recorded and reconstructed with a hologram. Computer-generated holograms (CGHs), made by simulating light propagation on a computer, can represent virtual objects, but an enormous amount of computation time is required to make them. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using fast Fourier transforms (FFTs); a complex object composed of many polygons requires as many FFTs, so the calculation time becomes enormous. The point-based method expresses complex objects easily, but it also requires an enormous calculation time. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and each CGH pixel can be calculated independently. However, expressing a planar object with the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm to express planar objects by the point-based method on a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from the adjacent point light source with a difference method: under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions only. Experimental results showed that the proposed method is more effective than the polygon-based method with FFTs when the number of polygons composing an object is high.
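The baseline point-based superposition that the paper accelerates can be sketched as below. This is the direct per-pixel distance evaluation; the paper's contribution replaces it with an adjacent-point difference recurrence on the GPU, which is not reproduced here:

```python
import numpy as np

def point_based_cgh(points, amplitudes, wavelength, pitch, nx, ny):
    """Point-based CGH sketch: superpose a spherical wave from each
    object point (px, py, pz) onto an nx-by-ny hologram plane with the
    given pixel pitch. Units: metres; amplitudes are complex scalars."""
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical-wave term
    return field
```

The cost is O(pixels × points), which is why dense planar objects make the direct method expensive and why incremental-distance tricks pay off.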
NASA Astrophysics Data System (ADS)
Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.
2016-10-01
A prolapse-free basis set for Eka-Actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants and accurate analytical form for the potential energy curve of diatomic E121F obtained at 4-component all-electron CCSD(T) level including Gaunt interaction are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F, the outermost frontier molecular orbitals from E121F should be fairly similar to the ones from AcF and there is no evidence of break of periodic trends. Moreover, the Gaunt interaction, although small, is expected to influence considerably the overall rovibrational spectra.
McCarthy, Shane P; Thakkar, Ajit J
2011-01-28
All-electron correlation energies E(c) are not very well-known for atoms with more than 18 electrons. Hence, coupled-cluster calculations in carefully designed basis sets are combined with fully converged second-order Møller-Plesset perturbation theory (MP2) computations to obtain fairly accurate, nonrelativistic E(c) values for the 12 closed-shell atoms from Ar to Rn. These energies will be useful for the evaluation and parameterization of density functionals. The results show that MP2 overestimates ∣E(c)∣ for heavy atoms. Spin-component scaling of the MP2 correlation energy is used to provide a simple explanation for this overestimation.
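The spin-component scaling mentioned at the end combines the opposite-spin and same-spin parts of the MP2 correlation energy with separate weights. A minimal sketch using Grimme's standard SCS-MP2 coefficients is shown for context; the scaling actually used in the paper may differ:

```python
def scs_mp2(e_os, e_ss, c_os=1.2, c_ss=1.0 / 3.0):
    """Spin-component-scaled MP2 correlation energy (hartree).
    e_os / e_ss: opposite-spin and same-spin MP2 components.
    Defaults are Grimme's standard SCS-MP2 factors 6/5 and 1/3."""
    return c_os * e_os + c_ss * e_ss
```

Down-weighting the same-spin component is one simple way to correct MP2's tendency to overestimate |E(c)| for heavy atoms.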
Full-resolution autostereoscopic display using an all-electronic tracking/steering system
NASA Astrophysics Data System (ADS)
Gaudreau, Jean-Etienne
2012-03-01
PolarScreens is developing a new 3D display technology capable of displaying full HD resolution in each eye without the need for glasses. The technology combines a regular backlight, a 120 Hz 3D LCD panel, a vertical patterned active-shutter panel, and a head-tracking system. It relies on a 12-sub-pixel-wide alternated pattern encoded in the stereo image to follow head movement. Alternatively, for a passive 3D display, the barrier is made of vertical-strip polarizer film; this can be applied to any full-resolution polarized display, such as iZ3D, Perceiva, or an active-retarder 3D display. The end result is a full-resolution autostereoscopic display with complete freedom of head movement. There are no mechanical moving parts (like lenticulars) or extra active components to steer the correct left/right image to the user's eyes. The new display can present 2D/3D information on a pixel-per-pixel basis, so there is no need for a full-screen or windowed 2D/3D-switchable apparatus.
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
19 CFR 351.405 - Calculation of normal value based on constructed value.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Calculation of normal value based on constructed value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value,...
[Design of high performance DSP-based gradient calculation module for MRI].
Pan, Wenyu; Zhang, Fu; Luo, Hai; Zhou, Heqin
2011-05-01
A gradient calculation module based on a high-performance DSP was designed to meet the needs of a digital MRI spectrometer. According to user requirements, the module performs rotation transformation, pre-emphasis, shimming, and other gradient calculation functions in a single DSP chip. It then outputs gradient waveform data for channels X, Y, and Z and shimming data for channel B0. Experiments show that the design has good versatility and satisfies the functional, speed, and accuracy requirements of MRI gradient calculation. It provides a practical gradient calculation solution for the development of digital spectrometers.
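The per-channel pipeline named above (rotation transformation followed by pre-emphasis) might be sketched as below. The constants `alpha` and `tau` are hypothetical eddy-current-compensation parameters for illustration, not values from the paper:

```python
import numpy as np

def gradient_module(g_logical, R, alpha=0.05, tau=20.0, dt=1.0):
    """Sketch of a gradient pipeline: rotate logical (read, phase,
    slice) waveforms into physical X/Y/Z axes with rotation matrix R,
    then add one-term eddy-current pre-emphasis: alpha times the
    waveform derivative convolved with an exponential decay."""
    g_phys = R @ g_logical                            # (3, T) waveforms
    t = np.arange(g_phys.shape[1]) * dt
    kernel = np.exp(-t / tau)                         # eddy-current decay
    dg = np.diff(g_phys, axis=1, prepend=g_phys[:, :1])
    pre = np.array([np.convolve(d, kernel)[: t.size] for d in dg])
    return g_phys + alpha * pre
```

With `alpha = 0` the module reduces to a pure rotation, which makes the two stages easy to verify independently.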
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on the time domain are applied to the longitudinal dispersion coefficient (E(x)) and the BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivations of the inverse calculation are established separately for the different flow directions in the tidal river. The results of this paper indicate that the BOD values calculated with the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models.
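A toy version of the inverse-calculation idea is to recover a first-order decay rate from synthetic measurements; the paper's tidal-river model additionally inverts the dispersion coefficient E(x) and treats both flow directions, none of which is reproduced in this sketch:

```python
import numpy as np

def fit_decay_rate(t, bod, bod0):
    """Toy inverse calculation: estimate a first-order BOD decay rate k
    by least squares, assuming BOD(t) = BOD0 * exp(-k * t). Taking logs
    turns this into a linear fit whose slope is -k."""
    slope, _intercept = np.polyfit(t, np.log(np.asarray(bod) / bod0), 1)
    return -slope
```

On noise-free synthetic data the fit recovers the generating rate exactly, which is the usual first check before applying an inversion to field measurements.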
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions capture knowledge about quantitative relations that is necessary for deep analyses and for producing meaningful reports, and they are largely independent of implementation and organization. However, no automated procedures exist for exchanging them across organization and implementation boundaries: each organization currently has to map its own business calculations to analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies that facilitates the exchange of business calculation definitions and allows them to be linked automatically to specific data warehouses through semantic reasoning. A novel standard proxy server that enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Calculation of the stabilization energies of oxidatively damaged guanine base pairs with guanine.
Suzuki, Masayo; Kino, Katsuhito; Morikawa, Masayuki; Kobayashi, Takanobu; Komori, Rie; Miyazawa, Hiroshi
2012-06-01
DNA is constantly exposed to endogenous and exogenous oxidative stresses. Damaged DNA can cause mutations, which may increase the risk of developing cancer and other diseases. G:C-C:G transversions are caused by various oxidative stresses. 2,2,4-Triamino-5(2H)-oxazolone (Oz), guanidinohydantoin (Gh)/iminoallantoin (Ia) and spiro-imino-dihydantoin (Sp) are known products of oxidative guanine damage. These damaged bases can base pair with guanine and cause G:C-C:G transversions. In this study, the stabilization energies of these bases paired with guanine were calculated in vacuo and in water. The calculated stabilization energies of the Ia:G base pairs were similar to that of the native C:G base pair, and both base pairs have three hydrogen bonds. By contrast, the calculated stabilization energies of Gh:G, which forms two hydrogen bonds, were lower than those of the Ia:G base pairs, suggesting that the stabilization energy depends on the number of hydrogen bonds. In addition, the Sp:G base pairs were less stable than the Ia:G base pairs. Furthermore, calculations showed that the Oz:G base pairs were less stable than the Ia:G, Gh:G and Sp:G base pairs, even though experimental results show that incorporation of guanine opposite Oz is more efficient than that opposite Gh/Ia and Sp.
Karabag Aydin, Arzu; Dinç, Leyla
2016-12-29
Drug dosage calculation skill is critical for all nursing students to ensure patient safety, particularly during clinical practice. The study purpose was to evaluate the effectiveness of Web-based instruction on improving nursing students' arithmetical and drug dosage calculation skills using a pretest-posttest design. A total of 63 nursing students participated. Data were collected through the Demographic Information Form, and the Arithmetic Skill Test and Drug Dosage Calculation Skill Test were used as pre and posttests. The pretest was conducted in the classroom. A Web site was then constructed, which included audio presentations of lectures, quizzes, and online posttests. Students had Web-based training for 8 weeks and then they completed the posttest. Pretest and posttest scores were compared using the Wilcoxon test and correlation coefficients were used to identify the relationship between arithmetic and calculation skills scores. The results demonstrated that Web-based teaching improves students' arithmetic and drug dosage calculation skills. There was a positive correlation between the arithmetic skill and drug dosage calculation skill scores of students. Web-based teaching programs can be used to improve knowledge and skills at a cognitive level in nursing students.
Atomic-Based Calculations of Two-Detector Doppler-Broadening Spectra
Asoka-Kumar, P; Howell, R
2001-10-11
We present a simplified approach for calculating Doppler broadening spectra based purely on atomic calculations. This approach avoids the need for detailed atomic positions, and can provide the characteristic Doppler broadening momentum spectra for any element. We demonstrate the power of this method by comparing theory and experiment for a number of elemental metals and alkali halides. In the alkali halides, the annihilation appears to be entirely with halide electrons.
Density functional theory calculations of III-N based semiconductors with mBJLDA
NASA Astrophysics Data System (ADS)
Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi
2017-02-01
In this work, we present first-principles calculations based on a full-potential linearized augmented plane-wave method (FP-LAPW) of the structural and electronic properties of III-V nitrides such as GaN, AlN and InN in the zinc-blende cubic structure. First-principles calculations using the local density approximation (LDA) and the generalized gradient approximation (GGA) underestimate the band gap. We propose a potential, the modified Becke-Johnson local density approximation (MBJLDA), that combines the modified Becke-Johnson exchange potential with the LDA correlation potential to obtain band gaps in better agreement with experiment. We compared various exchange-correlation potentials (LSDA, GGA, HSE, and MBJLDA) for determining the band gaps and structural properties of these semiconductors, and we show that the MBJLDA potential gives better agreement with experimental band gaps for III-nitride-based semiconductors.
Poongavanam, Vasanthanathan; Steinmann, Casper; Kongsted, Jacob
2014-01-01
Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT-associated RNase H (RNH). The QM-based chelation calculations show improved binding affinity prediction for the inhibitors compared to an empirical scoring function. Furthermore, full-protein fragment molecular orbital (FMO) calculations were conducted and subsequently analysed for individual-residue stabilization/destabilization energy contributions to the overall binding affinity, in order to better understand the true and false predictions. After a successful assessment of the methods on a training set of molecules, QM-based chelation calculations were used as a filter in virtual screening of compounds in the ZINC database. We find that, compared to regular docking, QM-based chelation calculations significantly reduce the large number of false positives. Thus, the computational models tested in this study could be useful as high-throughput filters for finding HIV-1 RNase H active-site molecules in the virtual screening process. PMID:24897431
[Terahertz Absorption Spectra Simulation of Glutamine Based on Quantum-Chemical Calculation].
Zhang, Tian-yao; Zhang, Zhao-hui; Zhao, Xiao-yan; Zhang, Han; Yan, Fang; Qian, Ping
2015-08-01
By simulating absorption spectra in the THz region with quantum-chemical calculations, the THz absorption features of target materials can be assigned to theoretical normal vibration modes, which is necessary for a deep understanding of the origin of THz absorption spectra. The reliability of the simulation results depends mainly on the initial structures and theoretical methods used throughout the calculation. In our study, we used terahertz time-domain spectroscopy (THz-TDS) to obtain the THz absorption spectrum of solid-state L-glutamine. Three quantum-chemical calculation schemes with different initial structures commonly used in previous studies were then applied to study the contribution of inter-molecular interactions to the THz absorption of glutamine: a monomer structure, a dimer structure and a crystal unit cell structure. After structure optimization and calculation of the vibration modes based on density functional theory, the calculated results were converted to absorption spectra with a Lorentzian line-shape function for visual comparison with the experimental spectra. The dimer result is better than the monomer result in the number of absorption features, but worse than the crystal unit cell result in the positions of the absorption peaks. With the most reliable simulation result, from the crystal unit cell calculation, we successfully assigned all three experimental absorption peaks of glutamine in the range 0.3-2.6 THz to overall vibration modes. Our study reveals that the crystal unit cell should be used as the initial structure in theoretical simulations of the THz absorption spectra of solid-state samples, since it comprehensively considers not only intra-molecular but also inter-molecular interactions.
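The conversion from discrete calculated modes to a spectrum via a Lorentzian line-shape function can be sketched as below; the half-width `gamma` is an assumed value for illustration, not one from the paper:

```python
import numpy as np

def lorentzian_spectrum(freqs, intensities, grid, gamma=0.05):
    """Broaden discrete mode frequencies (THz) and intensities into a
    continuous spectrum on `grid` by summing normalized Lorentzian
    line shapes of half-width gamma (THz)."""
    spec = np.zeros_like(grid)
    for f0, inten in zip(freqs, intensities):
        spec += inten * (gamma / np.pi) / ((grid - f0) ** 2 + gamma**2)
    return spec
```

Each calculated mode then appears as a peak centred at its frequency, which is what makes visual comparison with the measured THz-TDS spectrum possible.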
Huang, Yuanshen; Li, Ting; Xu, Banglian; Hong, Ruijin; Tao, Chunxian; Ling, Jinzhong; Li, Baicheng; Zhang, Dawei; Ni, Zhengji; Zhuang, Songlin
2013-02-10
Fraunhofer diffraction formula cannot be applied to calculate the diffraction wave energy distribution of concave gratings like plane gratings because their grooves are distributed on a concave spherical surface. In this paper, a method based on the Kirchhoff diffraction theory is proposed to calculate the diffraction efficiency on concave gratings by considering the curvature of the whole concave spherical surface. According to this approach, each groove surface is divided into several limited small planes, on which the Kirchhoff diffraction field distribution is calculated, and then the diffraction field of whole concave grating can be obtained by superimposition. Formulas to calculate the diffraction efficiency of Rowland-type and flat-field concave gratings are deduced from practical applications. Experimental results showed strong agreement with theoretical computations. With the proposed method, light energy can be optimized to the expected diffraction wave range while implementing aberration-corrected design of concave gratings, particularly for the concave blazed gratings.
NASA Astrophysics Data System (ADS)
Lai, B. W.; Wu, Z. X.; Dong, X. P.; Lu, D.; Tao, S. C.
2016-07-01
We propose a novel method to calculate the similarity between samples whose Raman spectra differ only slightly at unknown, specific positions, using an interval window moving across the whole spectrum. Two ABS plastic samples, one with and one without flame retardant, were tested in the experiment. Unlike the traditional method, in which similarity is calculated from the whole spectrum, we calculate it on a segment cut out of the Raman spectra by a window, one segment at a time as the window moves across the entire spectral range. Our method yields a curve of similarity versus wavenumber, and the curve changes markedly where the partial spectra of the two samples differ. Thus, the new similarity calculation method better identifies samples with tiny differences in their Raman spectra.
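The moving-window similarity curve can be sketched as follows; since the abstract does not name the similarity measure, cosine similarity is used here as an assumption:

```python
import numpy as np

def windowed_similarity(spec_a, spec_b, window, step=1):
    """Slide a window across two equal-length spectra and compute the
    cosine similarity of each pair of segments, yielding a similarity-
    versus-position curve that dips where the spectra differ locally."""
    sims = []
    for start in range(0, len(spec_a) - window + 1, step):
        a = spec_a[start:start + window]
        b = spec_b[start:start + window]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(a.dot(b) / denom if denom else 1.0)
    return np.array(sims)
```

A whole-spectrum similarity would average a small local difference away; the windowed curve localizes it instead.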
The effects of calculator-based laboratories on standardized test scores
NASA Astrophysics Data System (ADS)
Stevens, Charlotte Bethany Rains
Nationwide, the goal of providing a productive science and math education to our youth increasingly centers on the technology used in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBLs) have become significant devices in the teaching of science and math in many states across the United States. Among this technology, the Texas Instruments graphing calculator and the Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across the country. In Tennessee, however, this type of technology is reportedly not regularly utilized at the student level in most high school science classrooms, especially in physical science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing-calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 total tenth- and eleventh-grade physical science students, 101 of whom belonged to a control group and 87 of whom belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores; however, students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in middle Tennessee
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Truscott, Tadd
2016-11-01
Little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure calculation. Rather than measure experimental error, we analytically investigate error propagation by examining the properties of the Poisson equation directly. Our results provide two contributions to the PIV community. First, we quantify the error bound in the pressure field by illustrating the mathematical roots of why and how errors in PIV-based pressure calculations propagate. Second, we design the "worst case error" for a pressure Poisson solver: a systematic example in which relatively small errors in the experimental data lead to maximum error in the corresponding pressure calculation. The 2D calculation of the worst case error surprisingly leads to the classic Kirchhoff plate problem, connecting the PIV-based pressure calculation, a typical fluid problem, to elastic dynamics. The results can be used to minimize experimental error by avoiding worst-case scenarios and, more importantly, to design synthetic velocity error for future PIV-pressure challenges, which can serve as the hardest test case in such examinations.
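The amplification mechanism can be illustrated in 1D: the inverse Laplacian amplifies smooth, low-frequency data error far more than oscillatory error of the same magnitude, which is the essence of designing a "worst case". This is a toy sketch only; the paper's 2D analysis is what leads to the Kirchhoff plate problem:

```python
import numpy as np

def solve_poisson_1d(f, h):
    """Solve -p'' = f with p = 0 at both ends by second-order finite
    differences: a minimal stand-in for a pressure Poisson solver."""
    n = len(f)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

# Two data-error fields of equal norm: one smooth, one oscillatory.
n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
p_smooth = solve_poisson_1d(np.sin(np.pi * x), h)       # lowest mode
p_rough = solve_poisson_1d(np.sin(32 * np.pi * x), h)   # high mode
ratio = np.linalg.norm(p_smooth) / np.linalg.norm(p_rough)
# The smooth error is amplified by orders of magnitude more.
```

Both inputs are eigenvectors of the discrete Laplacian, so the amplification ratio is exactly the ratio of the corresponding eigenvalues.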
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and structure configuration evolution of glasses were studied. The degrees of freedom given by topological constraint theory are correlated with the configuration evolution; considering the chemical composition and the configuration change, an analytical equation was derived for calculating the thermal expansion coefficient of glasses from the degrees of freedom. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) with this approach. The results showed that the approach is energetically favorable for glass materials and revealed the underlying essence from the viewpoint of configuration entropy. This work establishes a configuration-based methodology for calculating the thermal expansion coefficient of glasses that lack periodic order.
Wooten, E Wrenn
2003-12-01
A general formalism for calculating parameters describing physiological acid-base balance in single compartments is extended to multicompartment systems and demonstrated for the multicompartment example of human whole blood. Expressions for total titratable base, strong ion difference, change in total titratable base, change in strong ion difference, and change in Van Slyke standard bicarbonate are derived, giving calculated values in agreement with experimental data. The equations for multicompartment systems are found to have the same mathematical interrelationships as those for single compartments, and the relationship of the present formalism to the traditional form of the Van Slyke equation is also demonstrated. The multicompartment model brings the strong ion difference theory to the same quantitative level as the base excess method.
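Two of the quantities involved in this kind of acid-base bookkeeping can be computed directly. The sketch below uses the standard Stewart-style apparent strong ion difference and the Henderson-Hasselbalch relation with the conventional constants; it is an illustrative fragment, not the paper's multicompartment model:

```python
def apparent_sid(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference in mEq/L: charge of the strong
    cations minus that of the strong anions (example ion set)."""
    return (na + k + ca + mg) - (cl + lactate)

def bicarbonate(ph, pco2):
    """Plasma bicarbonate (mmol/L) from pH and pCO2 (mmHg) via
    Henderson-Hasselbalch with pK = 6.1 and CO2 solubility 0.0307."""
    return 0.0307 * pco2 * 10 ** (ph - 6.1)
```

Multicompartment formalisms like the one in the abstract apply the same charge-balance reasoning compartment by compartment.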
Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene
NASA Astrophysics Data System (ADS)
Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.
2012-02-01
We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.
New Soft-Core Potential Function for Molecular Dynamics Based Alchemical Free Energy Calculations.
Gapsys, Vytautas; Seeliger, Daniel; de Groot, Bert L
2012-07-10
The fields of rational drug design and protein engineering benefit from accurate free energy calculations based on molecular dynamics simulations. A thermodynamic integration scheme is often used to calculate changes in the free energy of a system by integrating the change of the system's Hamiltonian with respect to a coupling parameter. These methods exploit nonphysical pathways over thermodynamic cycles involving particle introduction and annihilation. Such alchemical transitions require the modification of the classical nonbonded potential energy terms by applying soft-core potential functions to avoid singularity points. In this work, we propose a novel formulation for a soft-core potential to be applied in nonequilibrium free energy calculations that alleviates singularities, numerical instabilities, and additional minima in the potential energy for all combinations of nonbonded interactions at all intermediate alchemical states. The method was validated by application to (a) the free energy calculations of a closed thermodynamic cycle, (b) the mutation influence on protein thermostability, (c) calculations of small ligand solvation free energies, and (d) the estimation of binding free energies of trypsin inhibitors. The results show that the novel soft-core function provides a robust and accurate general purpose solution to alchemical free energy calculations.
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.
A fast and flexible library-based thick-mask near-field calculation method
NASA Astrophysics Data System (ADS)
Ma, Xu; Gao, Jie; Chen, Xuanbo; Dong, Lisong; Li, Yanqiu
2015-03-01
Aerial image calculation is the basis of current lithography simulation. As the critical dimension (CD) of integrated circuits continuously shrinks, the thick-mask near-field calculation has increasing influence on the accuracy and efficiency of the entire aerial image calculation process. This paper develops a flexible library-based approach that significantly improves the efficiency of the thick-mask near-field calculation compared to the rigorous modeling method, while achieving much higher accuracy than the Kirchhoff approximation method. Specifically, a set of typical features on the full chip is selected to serve as the training data, whose near-fields are pre-calculated and saved in the library. Given an arbitrary test mask, we first decompose it into convex corners, concave corners, and edges, and then match each patch to the training layouts based on nonparametric kernel regression. Subsequently, we use the matched near-fields in the library to replace the mask patches and rapidly synthesize the near-field for the entire test mask. Finally, a data-fitting method based on least-squares estimation (LSE) is proposed to improve the accuracy of the synthesized near-field. We use a pair of two-dimensional mask patterns to test our method. Simulations show that the proposed method can significantly speed up the rigorous finite-difference time-domain (FDTD) method and effectively improve on the accuracy of the Kirchhoff approximation method.
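The patch-matching step can be illustrated with a generic Nadaraya-Watson kernel regression over a library of pre-computed near-fields. The feature space, kernel choice, and bandwidth below are assumptions, since the abstract does not specify them:

```python
import numpy as np

def match_nearfield(test_patch, train_patches, train_fields, h=1.0):
    """Nadaraya-Watson kernel regression over a library of pre-computed
    near-fields (feature representation and Gaussian kernel are
    illustrative assumptions, not the paper's exact choices).

    test_patch   : (d,) feature vector of the decomposed mask patch
    train_patches: (n, d) feature vectors of the library layouts
    train_fields : (n, m) flattened pre-computed near-fields
    """
    # Gaussian kernel weights on feature-space distance.
    d2 = np.sum((train_patches - test_patch) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h**2))
    w /= w.sum()
    # Weighted combination of the stored near-fields.
    return w @ train_fields
```

With a small bandwidth a test patch that exactly matches a library entry simply retrieves that entry's stored near-field, which is the degenerate case of the lookup.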
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Eisenbach, Markus; Brown, Gregory; Rusanu, Aurelian; Materials Theory Group Team
2014-03-01
The magnetic entropy change in magnetocaloric effect (MCE) materials is one of the key parameters in choosing materials appropriate for magnetic cooling, and offers insight into the coupling between a material's thermodynamic and magnetic degrees of freedom. We present a computational workflow to calculate the change of magnetic entropy due to a magnetic field using DFT-based statistical sampling of the energy landscape of Ni2MnGa. The statistical density of magnetic states is calculated with Wang-Landau sampling, and energies are calculated with the Locally Self-consistent Multiple Scattering technique. The high computational cost of calculating the energy of each state from first principles is tempered by exploiting a model Hamiltonian fitted to the DFT-based sampling. The workflow is described and justified. The magnetic adiabatic temperature change calculated from the statistical density of states agrees with the experimentally obtained value in the absence of structural transformation. The study also reveals that the magnetic subsystem alone cannot explain the large MCE observed in Ni2MnGa alloys. This work was performed at the ORNL, which is managed by UT-Battelle for the U.S. DOE. It was sponsored by the Division of Material Sciences and Engineering, OBES. This research used resources of the OLCF at ORNL, which is supported by the Office of Science of the U.S. DOE under Contract DE-AC05-00OR22725.
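Wang-Landau sampling itself is a standard algorithm. A minimal sketch on a toy system of non-interacting spins (not the Ni2MnGa Hamiltonian, whose energies come from first principles in the paper) shows how the density of states is estimated by iteratively flattening an energy histogram:

```python
import numpy as np

rng = np.random.default_rng(0)

def wang_landau(n_spins=8, flatness=0.8, f_final=1e-4):
    """Wang-Landau estimate of the density of states g(E) for N
    non-interacting spins, with E = number of up spins (a toy stand-in
    for a magnetic energy landscape; exactly, g(E) = C(N, E))."""
    n_e = n_spins + 1
    log_g = np.zeros(n_e)           # running estimate of ln g(E)
    hist = np.zeros(n_e)
    spins = rng.integers(0, 2, n_spins)
    e = spins.sum()
    log_f = 1.0                     # ln of the modification factor
    while log_f > f_final:
        for _ in range(10000):
            i = rng.integers(n_spins)
            e_new = e + (1 - 2 * spins[i])       # a flip changes E by +-1
            # Accept with min(1, g(E)/g(E_new)) to flatten the histogram.
            if np.log(rng.random()) < log_g[e] - log_g[e_new]:
                spins[i] ^= 1
                e = e_new
            log_g[e] += log_f
            hist[e] += 1
        if hist.min() > flatness * hist.mean():  # histogram "flat enough"
            hist[:] = 0
            log_f /= 2.0            # refine the modification factor
        # otherwise keep sampling at the current log_f
    return log_g - log_g[0]         # normalize so that g(0) = 1
```

Once ln g(E) is known, thermodynamic quantities such as the entropy follow at any temperature without further sampling, which is what makes the method attractive for entropy-change calculations.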
40 CFR 1066.605 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... meter inlet, measured directly or calculated as the sum of atmospheric pressure plus a differential pressure referenced to atmospheric pressure. T std = standard temperature. p std = standard pressure. T in... temperature and pressure. m PMfil = mass of particulate matter emissions on the filter over the test...
MCNP-Based Methodology to Calculate Helium Production in BWR Shrouds
NASA Astrophysics Data System (ADS)
Sitaraman, S.; Chiang, R.-T.; Oliver, B. M.
2003-06-01
A three-dimensional computational method based on Monte Carlo radiation transport techniques was developed to calculate thermal and fast neutron fields in the downcomer region of a Boiling Water Reactor (BWR). This methodology was validated using measured data obtained from an operating BWR. The helium production was measured in stainless steel at locations near the shroud and compared with values from the Monte Carlo calculations. The methodology produced results that were in agreement with measurements, thereby providing a useful tool for the determination of helium levels in shroud components.
A new approach to calculate Plant Area Density (PAD) using 3D ground-based lidar
NASA Astrophysics Data System (ADS)
Taheriazad, Leila; Moghadas, Hamid; Sanchez-Azofeifa, Arturo
2016-10-01
This paper presents a novel algorithm for the calculation of plant area density based on the surface and volume of a convex hull applied to each horizontal cut of a point cloud. This method can be used as an alternative to conventional voxelization approaches to improve accuracy and computational efficiency. The terrestrial data were collected from a boreal forest at Peace River, Alberta, Canada during the summer and fall of 2014. The technique can be applied to an arbitrary point cloud to calculate other forest metrics as well, including plant area index, leaf area density, and leaf area index.
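The per-layer convex-hull idea can be sketched with scipy's `ConvexHull`. The conversion from hull surface and volume to plant area density below (their ratio, which has units of area per volume) is a simplification, since the abstract does not give the paper's exact formulation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def pad_profile(points, dz=0.5):
    """Sketch of a convex-hull PAD profile from a terrestrial lidar
    point cloud (the scaling from hull geometry to plant area per unit
    volume is a simplifying assumption).

    points : (n, 3) array of x, y, z returns in metres.
    """
    z0, z1 = points[:, 2].min(), points[:, 2].max()
    edges = np.arange(z0, z1 + dz, dz)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        layer = points[(points[:, 2] >= lo) & (points[:, 2] < hi)]
        if len(layer) < 4:
            profile.append(0.0)      # too few points for a 3D hull
            continue
        hull = ConvexHull(layer)     # 3D hull of this horizontal slab
        # hull.area is the hull surface, hull.volume the enclosed volume;
        # their ratio has units 1/m, i.e. an area density.
        profile.append(hull.area / hull.volume)
    return np.array(profile)
```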
Duality-based calculations for transition probabilities in stochastic chemical reactions
NASA Astrophysics Data System (ADS)
Ohkubo, Jun
2017-02-01
An idea for evaluating transition probabilities in chemical reaction systems is proposed, which is efficient for repeated calculations with various rate constants. The idea is based on duality relations; instead of direct time evolutions of the original reaction system, a dual process is dealt with. Usually, if one changes the rate constants of the original reaction system, the direct time evolutions must be performed again using the new rate constants. In contrast, a single solution of an extended dual process can be reused to calculate the transition probabilities for various rate constants. The idea is demonstrated in a parameter estimation problem for the Lotka-Volterra system.
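For contrast, the brute-force route that the duality-based method avoids repeating can be sketched on a toy birth-death reaction (not the paper's Lotka-Volterra system): every change of the rate constant forces the generator to be rebuilt and the master equation to be re-solved from scratch:

```python
import numpy as np
from scipy.linalg import expm

def transition_probs(k, n_max=30, n0=5, t=1.0):
    """Direct master-equation evolution for a birth-death reaction
    0 -> A (rate k), A -> 0 (rate n per copy), truncated at n_max copies.
    This is the brute-force route: changing k means rebuilding the
    generator and re-exponentiating -- exactly the repetition the
    duality-based method is designed to avoid (toy example only)."""
    Q = np.zeros((n_max + 1, n_max + 1))   # columns index the source state
    for n in range(n_max + 1):
        if n < n_max:
            Q[n + 1, n] += k      # birth n -> n+1
            Q[n, n] -= k
        if n > 0:
            Q[n - 1, n] += n      # unit-rate decay n -> n-1
            Q[n, n] -= n
    p0 = np.zeros(n_max + 1)
    p0[n0] = 1.0
    return expm(Q * t) @ p0       # P(n, t | n0)
```

For this toy system the stationary distribution is Poisson with mean k, which gives a quick sanity check on the generator.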
Modification method of numerical calculation of heat flux over dome based on turbulence models
NASA Astrophysics Data System (ADS)
Zhang, Daijun; Luo, Haibo; Zhang, Junchao; Zhang, Xiangyue
2016-10-01
For an optical guidance system flying at low altitude and high speed, the calculation of turbulent convective heat transfer over its dome is key to designing this kind of aircraft. Turbulence models based on the RANS equations are computationally efficient, and their accuracy can satisfy engineering requirements. However, for calculations of the flow in the shock layer, where strong entropy and pressure disturbances exist, and especially of aerodynamic heating, some parameters in the RANS energy equation need to be modified. In this paper, we applied turbulence models to the calculation of the heat flux over the dome of a sphere-cone body at zero angle of attack. Based on Billig's results, the shape and position of the detached shock were extracted in the flow field using a multi-block structured grid. The thermal conductivity of the inflow was set to a kinetic theory model with respect to temperature. When compared with Klein's engineering formula at the stagnation point, the results of the turbulence models were larger. By analysis, we found that the main reason for the larger values was interference from the entropy layer with the boundary layer. The thermal conductivity of the inflow was therefore assigned a fixed value, an equivalent thermal conductivity, to compensate for the overestimate of the turbulent kinetic energy. Based on the SST model, numerical experiments showed that the value of the equivalent thermal conductivity was related only to the Mach number. The proposed modification approach of an equivalent thermal conductivity for the inflow could also be applied to other turbulence models.
A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.
Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y
2016-03-08
This study introduces the design of a DICOM-RT-based tool box to facilitate 4D dose calculation based on deformable voxel-dose registration. The computational structure and the calculation algorithm of the tool box are discussed explicitly. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of using the tool box for clinical application was verified retrospectively with nine clinical cases. The logistics and robustness of the tool box were tested with 27 applications, all of which completed successfully with no computational errors encountered. The accumulated dose coverage as a function of planning CT taken at end-inhale, end-exhale, and mean tumor position was assessed. The results indicated that the majority of the cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. The results, comparable to the literature, imply that the studied tool box can be relied upon for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatment with a moving target.
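The voxel-dose accumulation step can be sketched as follows. The tool box itself is written in MATLAB with CERR, so this Python sketch, which takes precomputed integer index maps instead of a full deformation vector field, only illustrates the underlying bookkeeping:

```python
import numpy as np

def accumulate_4d_dose(phase_doses, phase_maps, weights):
    """Hedged sketch of 4D dose accumulation by deformable voxel-dose
    registration (index-map inputs are a simplification of a full
    deformation vector field with interpolation).

    phase_doses : list of (nx, ny, nz) dose grids, one per breathing phase
    phase_maps  : list of (nx, ny, nz, 3) integer voxel indices mapping
                  each reference voxel to its location in that phase
    weights     : temporal weight of each phase (sums to 1)
    """
    total = np.zeros_like(phase_doses[0])
    for dose, idx, w in zip(phase_doses, phase_maps, weights):
        # Pull each phase dose back onto the reference grid and
        # accumulate it with its temporal weight.
        total += w * dose[idx[..., 0], idx[..., 1], idx[..., 2]]
    return total
```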
Domain overlap matrices from plane-wave-based methods of electronic structure calculation
NASA Astrophysics Data System (ADS)
Golub, Pavlo; Baranov, Alexey I.
2016-10-01
Plane waves are one of the most popular and efficient basis sets for electronic structure calculations of solids; however, their delocalized nature makes it difficult to apply classical orbital-based methods of chemical bonding analysis to them. The quantum chemical topology approach, which introduces chemical concepts via partitioning of real space into chemically meaningful domains, has no difficulties with plane-wave basis sets. Many popular tools employed within this approach, for instance delocalization indices, need overlap integrals over these domains: the elements of the so-called domain overlap matrices. This article reports an efficient algorithm for the evaluation of domain overlap matrix elements in plane-wave-based calculations, as well as an evaluation of its implementation for one of the most popular projector augmented wave (PAW) methods on a small set of simple and complex solids. The stability of the obtained results with respect to the PAW calculation parameters has been investigated, and a comparison with results from other calculation methods has also been made.
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after the educational intervention of learning the Triangle Technique.
GPU-based acceleration of free energy calculations in solid state physics
NASA Astrophysics Data System (ADS)
Januszewski, Michał; Ptok, Andrzej; Crivelli, Dawid; Gardas, Bartłomiej
2015-07-01
Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with an oscillating order parameter (OP). Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19× speedup compared to the CPU (119× compared to a single CPU core), reducing calculation time from minutes to mere seconds and enabling the analysis of larger systems and the elimination of finite-size effects.
An automated Monte-Carlo based method for the calculation of cascade summing factors
NASA Astrophysics Data System (ADS)
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Efficient algorithms for semiclassical instanton calculations based on discretized path integrals
Kawatsu, Tsutomu; Miura, Shinichi (E-mail: smiura@mail.kanazawa-u.ac.jp)
2014-07-14
The path integral instanton method is a promising way to calculate the tunneling splitting of energies for degenerate two-state systems. In order to calculate the tunneling splitting, we need to take the zero-temperature limit, or the limit of infinite imaginary time duration. In the method developed by Richardson and Althorpe [J. Chem. Phys. 134, 054109 (2011)], the limit is simply replaced by a sufficiently long imaginary time. In the present study, we have developed a new formula for the tunneling splitting based on the discretized path integrals that takes the limit analytically. We have applied the new formula to model systems and found that this approach can significantly reduce the computational cost and improve the numerical accuracy. We then combined the method with electronic structure calculations to obtain the accurate interatomic potential on the fly. We present an application of our ab initio instanton method to the ammonia umbrella flip motion.
Gong, Jian; Kim, Chang-Jin C J
2008-06-01
Electrowetting-on-dielectric (EWOD) actuation enables digital (or droplet) microfluidics where small packets of liquids are manipulated on a two-dimensional surface. Due to its mechanical simplicity and low energy consumption, EWOD holds particular promise for portable systems. To improve volume precision of the droplets, which is desired for quantitative applications such as biochemical assays, existing practices would require near-perfect device fabrication and operation conditions unless the droplets are generated under feedback control by an extra pump setup off of the chip. In this paper, we develop an all-electronic (i.e., no ancillary pumping) real-time feedback control of on-chip droplet generation. A fast voltage modulation, capacitance sensing, and discrete-time PID feedback controller are integrated on the operating electronic board. A significant improvement is obtained in the droplet volume uniformity, compared with an open loop control as well as the previous feedback control employing an external pump. Furthermore, this new capability empowers users to prescribe the droplet volume even below the previously considered minimum, allowing, for example, 1 : x (x < 1) mixing, in comparison to the previously considered n : m mixing (i.e., n and m unit droplets).
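The discrete-time PID element of such a feedback loop can be sketched generically. The gains, sample time, and the use of capacitance as a proxy for droplet volume below are illustrative assumptions, not values from the paper:

```python
class DiscretePID:
    """Discrete-time PID controller of the kind described for the
    on-board feedback loop (gains and sample time are illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        # e.g. setpoint/measured = target vs sensed capacitance,
        # which serves as a proxy for droplet volume.
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the paper's setting the controller output would modulate the actuation voltage; here it can be exercised against any simple simulated plant.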
Gupta, G; Sasisekharan, V
1978-01-01
Base-base interactions were computed for single- and double-stranded polynucleotides, for all possible base sequences. In each case, both right and left stacking arrangements are energetically possible. The preference of one over the other depends upon the base sequence and the orientation of the bases with respect to the helix axis. An inverted stacking arrangement is also energetically possible for both single- and double-stranded polynucleotides. Finally, the interaction energies of a regular duplex and the alternative structures were compared. It was found that the type II model is energetically more favourable than the rest. PMID:662698
Xu, H; Guerrero, M; Chen, S; Langen, K; Prado, K; Yang, X; Schinkel, C
2015-06-15
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and the determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of the calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference for 100 cm SSD, and 0.5-2.7% for 110 cm SSD. Conclusions: Based on comparisons with measurements, a TG-71-based computation method and a Mobius3D
Effect of composition on antiphase boundary energy in Ni3Al based alloys: Ab initio calculations
NASA Astrophysics Data System (ADS)
Gorbatov, O. I.; Lomaev, I. L.; Gornostyrev, Yu. N.; Ruban, A. V.; Furrer, D.; Venkatesh, V.; Novikov, D. L.; Burlatsky, S. F.
2016-06-01
The effect of composition on the antiphase boundary (APB) energy of Ni-based L1₂-ordered alloys is investigated by ab initio calculations employing the coherent potential approximation. The calculated APB energies for the {111} and {001} planes reproduce experimental values of the APB energy. The APB energies for the nonstoichiometric γ' phase increase with Al concentration, in line with experiment. The magnitude of the alloying effect on the APB energy correlates with the variation of the ordering energy of the alloy according to the alloying element's position in the 3d row. Elements from the left side of the 3d row increase the APB energy of the Ni-based L1₂-ordered alloys, while elements from the right side (apart from Ni) affect it only slightly. An approach to predicting the effect of an addition on the {111} APB energy in a multicomponent alloy is discussed.
Iterative diagonalization in augmented plane wave based methods in electronic structure calculations
Blaha, P.; Laskowski, R.; Schwarz, K.
2010-01-20
Due to increased computer power and advanced algorithms, quantum mechanical calculations based on Density Functional Theory are more and more widely used to solve real materials science problems. In this context, large nonlinear generalized eigenvalue problems must be solved repeatedly to calculate the electronic ground state of a solid or molecule. Due to the nonlinear nature of this problem, an iterative solution of the eigenvalue problem can be more efficient, provided it does not disturb the convergence of the self-consistent-field problem. The blocked Davidson method is one of the widely used and efficient schemes for that purpose, but its performance depends critically on the preconditioning, i.e. the procedure to improve the search space for an accurate solution. For more diagonally dominant problems, which appear typically in plane-wave-based pseudopotential calculations, the inverse of the diagonal of (H - ES) is used. However, for the more efficient 'augmented plane wave + local orbitals' basis set this preconditioning is not sufficient due to large off-diagonal terms caused by the local orbitals. We propose a new preconditioner based on the inverse of (H - λS) and demonstrate its efficiency for real applications using both a sequential and a parallel implementation of this algorithm in our WIEN2k code.
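The Davidson scheme with the standard diagonal preconditioner, which the proposed (H - λS)-based preconditioner generalizes, can be sketched for a plain symmetric eigenproblem. This is a single-vector teaching variant, not the blocked production algorithm, and it ignores the overlap matrix S:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=200):
    """Single-vector Davidson iteration for the lowest eigenpair of a
    symmetric matrix, using the classic 1/(diag(A) - theta)
    preconditioner that works well for diagonally dominant problems."""
    n = A.shape[0]
    V = np.random.default_rng(1).normal(size=(n, 1))
    V /= np.linalg.norm(V)
    theta, x = 0.0, V[:, 0]
    for _ in range(max_iter):
        # Rayleigh-Ritz step in the current search space.
        Hs = V.T @ A @ V
        vals, vecs = np.linalg.eigh(Hs)
        theta, s = vals[0], vecs[:, 0]
        x = V @ s
        r = A @ x - theta * x               # residual vector
        if np.linalg.norm(r) < tol:
            break
        # Diagonal preconditioner: improves the new search direction.
        denom = np.diag(A) - theta
        denom[np.abs(denom) < 1e-12] = 1e-12
        t = r / denom
        # Orthogonalize against the span (twice, for numerical safety).
        for _ in range(2):
            t -= V @ (V.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-10 or V.shape[1] >= n:
            V = x[:, None]                  # restart from the best guess
        else:
            V = np.hstack([V, t[:, None] / nt])
    return theta, x
```

For strongly off-diagonal problems, such as the APW+lo basis the abstract describes, this diagonal denominator is exactly the part that fails, which motivates the full (H - λS)-based preconditioner.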
An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet
NASA Astrophysics Data System (ADS)
Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon
2015-08-01
The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
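The core of an activity-based estimate for one AIS interval can be sketched as below. The cubic propeller-law load model and the fuel and CO2 factors are common illustrative assumptions, not the paper's calibrated values, and the extra load from towed trawling or dredging gear is noted but not modelled:

```python
def voyage_emissions(speed_knots, hours, installed_kw,
                     design_speed=12.0, sfoc=0.2, ef_co2=3.1):
    """Activity-based fuel and CO2 estimate for one AIS interval.
    SFOC of 0.2 kg fuel/kWh and 3.1 kg CO2/kg fuel are illustrative
    values only. Returns (fuel in kg, CO2 in kg)."""
    # Cubic propeller law: engine load scales with (v / v_design)^3,
    # capped at full load. Towing gear would need a higher assumed load.
    load = min((speed_knots / design_speed) ** 3, 1.0)
    energy_kwh = installed_kw * load * hours
    fuel = energy_kwh * sfoc
    return fuel, fuel * ef_co2
```

Summing such per-interval estimates over all AIS messages, binned by time and position, yields exactly the temporally- and spatially-resolved inventory the abstract describes.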
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
NASA Astrophysics Data System (ADS)
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and the natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration-based approach to profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence, and study the properties of the integration-based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model of a cellular patterning process. We observe good accuracy of the method as well as a significant speedup compared to established methods. Integration-based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
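The repeated-optimization baseline that the integration-based approach accelerates can be sketched on a toy non-PDE model, an exponential decay with one nuisance parameter. The warm start across grid points hints at why continuation along the profile pays off:

```python
import numpy as np
from scipy.optimize import minimize

# Profile likelihood by repeated optimization: the established approach
# the integration-based method replaces (toy model y = a*exp(-b*t) with
# Gaussian noise, not the paper's reaction-diffusion example).
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.5 * t) + 0.05 * rng.normal(size=t.size)

def nll(params):
    """Negative log-likelihood up to a constant, noise sigma = 0.05."""
    a, b = params
    return 0.5 * np.sum((y - a * np.exp(-b * t)) ** 2) / 0.05**2

def profile_b(b_grid):
    """For each fixed value of b, re-optimize the nuisance parameter a."""
    prof = []
    a0 = 1.0
    for b in b_grid:
        res = minimize(lambda a: nll([a[0], b]), [a0])
        a0 = res.x[0]          # warm start, a crude form of continuation
        prof.append(res.fun)
    return np.array(prof)
```

Each grid point costs one full optimization here; with a PDE constraint each objective evaluation is itself a PDE solve, which is what makes the ODE-along-the-profile alternative attractive.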
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
NASA Astrophysics Data System (ADS)
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose difference was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation which resulted in pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%). The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in
Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method
NASA Astrophysics Data System (ADS)
Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.
2016-08-01
In this paper we present the results of computer simulation of the band structure of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that supports cluster computing. This method significantly reduces the demands on computing resources compared to traditional approaches based on ab initio techniques, while providing adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require the explicit treatment of a significant number of atoms, such as quantum dots and quantum pillars.
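The empirical pseudopotential idea can be shown in miniature for a 1D crystal with a single Fourier form factor. Real GaSe calculations need 3D form factors fitted to experiment, so this is purely illustrative (units with hbar²/2m = 1):

```python
import numpy as np

def bands_1d(v_g, a=1.0, n_g=11, n_k=50, n_bands=4):
    """Plane-wave band structure for a 1D crystal with potential
    V(x) = 2*v_g*cos(2*pi*x/a): the empirical-pseudopotential idea in
    miniature, with v_g playing the role of the fitted form factor."""
    g = 2 * np.pi / a * np.arange(-(n_g // 2), n_g // 2 + 1)
    ks = np.linspace(-np.pi / a, np.pi / a, n_k)
    bands = []
    for k in ks:
        # Kinetic energy on the diagonal; the form factor v_g couples
        # plane waves differing by one reciprocal lattice vector.
        H = np.diag(0.5 * (k + g) ** 2)
        for i in range(n_g - 1):
            H[i, i + 1] = H[i + 1, i] = v_g
        bands.append(np.linalg.eigvalsh(H)[:n_bands])
    return ks, np.array(bands)
```

In the nearly-free-electron limit the band gap at the zone boundary equals 2|v_g|, which is the standard check on such a solver.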
Automated Calculation of Water-equivalent Diameter (DW) Based on AAPM Task Group 220.
Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam; Dougherty, Geoff
2016-07-08
The purpose of this study is to accurately and effectively automate the calculation of the water-equivalent diameter (DW) from 3D CT images for estimating the size-specific dose. DW is the metric that characterizes the patient size and attenuation. In this study, DW was calculated for standard CTDI phantoms and patient images. Two types of phantom were used, one representing the head with a diameter of 16 cm and the other representing the body with a diameter of 32 cm. Images of 63 patients were also taken, 32 who had undergone a CT head examination and 31 who had undergone a CT thorax examination. There are three main parts to our algorithm for automated DW calculation. The first part is to read 3D images and convert the CT data into Hounsfield units (HU). The second part is to find the contour of the phantoms or patients automatically. And the third part is to automate the calculation of DW based on the automated contouring for every slice (DW,all). The results of this study show that the automated and manual calculations of DW are in good agreement for phantoms and patients, with differences of less than 0.5%. The results also show that estimating DW,all using DW,n=1 (central slice along the longitudinal axis) produces percentage differences of -0.92% ± 3.37% and 6.75% ± 1.92%, and estimating DW,all using DW,n=9 produces percentage differences of 0.23% ± 0.16% and 0.87% ± 0.36%, for thorax and head examinations, respectively. From this study, the percentage differences between the normalized size-specific dose estimate for every slice (nSSDEall) and nSSDEn=1 are 0.74% ± 2.82% and -4.35% ± 1.18% for thorax and head examinations, respectively; between nSSDEall and nSSDEn=9 they are 0.00% ± 0.46% and -0.60% ± 0.24%, respectively.
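The three-step algorithm maps directly onto the TG-220 formula D_W = 2·sqrt(A_w/π), with A_w = Σ(HU/1000 + 1)·A_pixel over the patient contour. A minimal sketch, using a simple HU threshold in place of the paper's automated contouring, is:

```python
import numpy as np

def water_equivalent_diameter(hu, pixel_area_cm2, body_mask=None):
    """Water-equivalent diameter of one axial CT slice (AAPM TG-220).

    A_w = sum over the patient contour of (HU/1000 + 1) * pixel area,
    D_W = 2 * sqrt(A_w / pi).  Here the contour is a crude threshold
    mask (an assumption; the paper uses automated contouring).
    """
    if body_mask is None:
        body_mask = hu > -300            # crude air/tissue threshold
    a_w = np.sum((hu[body_mask] / 1000.0 + 1.0) * pixel_area_cm2)
    return 2.0 * np.sqrt(a_w / np.pi)

# Synthetic water cylinder, radius 8 cm, in air: expect D_W close to 16 cm
px = 0.05                                 # 0.5 mm pixels
y, x = np.mgrid[-200:200, -200:200] * px
hu = np.where(x**2 + y**2 <= 8.0**2, 0.0, -1000.0)
dw = water_equivalent_diameter(hu, px * px)
```

For a uniform water region the HU term is 1 and D_W reduces to the geometric diameter, which makes the synthetic cylinder a convenient self-test.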
Brittleness index calculation and evaluation for CBM reservoirs based on AVO simultaneous inversion
NASA Astrophysics Data System (ADS)
Wu, Haibo; Dong, Shouhua; Huang, Yaping; Wang, Haolong; Chen, Guiwu
2016-11-01
In this paper, a new approach is proposed for coalbed methane (CBM) reservoir brittleness index (BI) calculations. The BI, as a guide for fracture area selection, is calculated from dynamic elastic parameters (dynamic Young's modulus Ed and dynamic Poisson's ratio υd) obtained from an amplitude-versus-offset (AVO) simultaneous inversion. Among the three classes of CBM reservoirs distinguished on the basis of brittleness in the theoretical part of this study, class I reservoirs with high BI values are identified as preferential target areas for fracturing. We therefore first derive the AVO approximation equation expressed in terms of Ed and υd. This allows the direct inversion of the dynamic elastic parameters through the pre-stack AVO simultaneous inversion, which is based on Bayes' theorem. Thereafter, a test model with Gaussian white noise and a through-well seismic profile inversion are used to demonstrate the high reliability of the inverted parameters. Accordingly, the BI of a CBM reservoir section from the Qinshui Basin is calculated using the proposed method, and a class I reservoir section is detected through brittleness evaluation. From the outcome of this study, we believe the adoption of this new approach can serve as a guide and reference for BI calculations and evaluations of CBM reservoirs.
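The abstract does not give the exact BI formula; a commonly used Rickman-style min-max normalization of the two dynamic elastic parameters is sketched below purely as an illustrative assumption:

```python
def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """Rickman-style brittleness index (percent) from dynamic elastic moduli.

    The normalization below is an assumption: the source abstract does not
    state the exact formula.  High Young's modulus and low Poisson's ratio
    are taken as more brittle.
    """
    e_norm = (E - E_min) / (E_max - E_min) * 100.0
    nu_norm = (nu_max - nu) / (nu_max - nu_min) * 100.0
    return 0.5 * (e_norm + nu_norm)

# Illustrative moduli in GPa and dimensionless Poisson's ratio
bi_stiff = brittleness_index(E=40.0, nu=0.20, E_min=10, E_max=50, nu_min=0.15, nu_max=0.40)
bi_soft = brittleness_index(E=15.0, nu=0.35, E_min=10, E_max=50, nu_min=0.15, nu_max=0.40)
```

A stiff, low-Poisson's-ratio interval scores higher, matching the paper's use of high BI to flag class I fracturing targets.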
Grid-based steered thermodynamic integration accelerates the calculation of binding free energies.
Fowler, Philip W; Jha, Shantenu; Coveney, Peter V
2005-08-15
The calculation of binding free energies is important in many condensed matter problems. Although formally exact computational methods have the potential to complement, add to, and even compete with experimental approaches, they are difficult to use and extremely time consuming. We describe a Grid-based approach for the calculation of relative binding free energies, which we call Steered Thermodynamic Integration calculations using Molecular Dynamics (STIMD), and its application to Src homology 2 (SH2) protein cell signalling domains. We show that the time taken to compute free energy differences using thermodynamic integration can be significantly reduced: potentially from weeks or months to days of wall-clock time. To be able to perform such accelerated calculations requires the ability to both run concurrently and control in real time several parallel simulations on a computational Grid. We describe how the RealityGrid computational steering system, in conjunction with a scalable classical MD code, can be used to dramatically reduce the time to achieve a result. This is necessary to improve the adoption of this technique and further allows more detailed investigations into the accuracy and precision of thermodynamic integration. Initial results for the Src SH2 system are presented and compared to a reported experimental value. Finally, we discuss the significance of our approach.
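The thermodynamic-integration step itself reduces to a one-dimensional quadrature of ensemble averages over the coupling parameter; a minimal sketch (trapezoidal rule, with made-up ⟨dH/dλ⟩ values standing in for the per-window MD results that STIMD would run concurrently on the Grid):

```python
def thermodynamic_integration(lambdas, dhdl_means):
    """Relative free energy from thermodynamic integration.

    Delta G = integral over lambda in [0, 1] of <dH/dlambda>_lambda,
    approximated here with the trapezoidal rule over the simulated
    lambda windows (each <dH/dlambda> would come from one MD run).
    """
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dhdl_means[i] + dhdl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg

# Toy check: if <dH/dlambda> = 2*lambda exactly, Delta G = 1.0
lams = [i / 10.0 for i in range(11)]
dg = thermodynamic_integration(lams, [2.0 * lam for lam in lams])
```

Because each λ window is an independent simulation, the windows parallelize naturally, which is exactly what makes the Grid-steered approach effective.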
A brief look at model-based dose calculation principles, practicalities, and promise
Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V.
2017-01-01
Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by an MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources. PMID:28344608
NASA Astrophysics Data System (ADS)
Gouda, M. M.; Hamzawy, A.; Badawi, M. S.; El-Khatib, A. M.; Thabet, A. A.; Abbas, M. I.
2016-02-01
The full-energy peak efficiency of a high-purity germanium well-type detector is extremely important for calculating the absolute activities of natural and artificial radionuclides in samples with low radioactivity. In this work, the efficiency transfer method in an integral form is proposed to calculate the full-energy peak efficiency and to correct the coincidence summing effect for a high-purity germanium well-type detector. This technique is based on the calculation of the ratio of the effective solid angles subtended by the well-type detector for cylindrical sources measured inside the detector cavity and for an axial point source measured outside the detector cavity, including the attenuation of photons by the absorber system. This technique can be easily applied in establishing the efficiency calibration curves of well-type detectors. The calculated values of the efficiency are in good agreement with the experimental calibration data obtained with a mixed γ-ray standard source containing 60Co and 88Y.
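The core of the efficiency-transfer idea, scaling a reference efficiency by a ratio of solid angles between the calibration and measurement geometries, can be sketched for a simplified on-axis point-source/disk geometry. The Monte Carlo solid angle below is a stand-in for the effective solid angle, which in the real method also folds in photon attenuation and detector response:

```python
import numpy as np

def mc_solid_angle_disk(d, R, n=200_000, seed=0):
    """Monte Carlo solid angle of a disk (radius R) seen by an on-axis
    point at distance d: sample isotropic directions, count those inside
    the cone subtended by the disk rim."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n)       # isotropic in cos(theta)
    hits = cos_t > d / np.hypot(d, R)
    return 4.0 * np.pi * hits.mean()

# Efficiency transfer: eps = eps_ref * (Omega_eff / Omega_eff_ref).
omega_src = mc_solid_angle_disk(d=2.0, R=3.0)    # source geometry
omega_ref = mc_solid_angle_disk(d=10.0, R=3.0)   # reference point-source geometry
eps = 0.05 * omega_src / omega_ref               # 0.05: assumed reference efficiency
```

The Monte Carlo estimate can be checked against the analytic on-axis result Ω = 2π(1 − d/√(d² + R²)).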
A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.
Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y
2016-03-01
This study aimed to introduce a design of a DICOM-RT-based tool box to facilitate 4D dose calculation based on deformable voxel-dose registration. The computational structure and the calculation algorithm of the tool box are discussed explicitly in the study. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of using the tool box for clinical application was verified with nine clinical cases on a retrospective basis. The logistics and robustness of the tool box were tested with 27 applications, all of which completed successfully with no computational errors encountered. In the study, the accumulated dose coverage as a function of the planning CT taken at end-inhale, end-exhale, and mean tumor position was assessed. The results indicated that the majority of the cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. The results, comparable to the literature, imply that the studied tool box is reliable for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatment with a moving target. PACS number(s): 87.55.kh.
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
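The HU-prediction step can be sketched as a regression fit on paired MRI/HU pixels from an artifact-free slice, then applied to the corrupted region via the co-registered MRI. The simple polynomial fit below is an illustrative assumption; the paper's comprehensive pixel-pair analysis is not detailed in this abstract:

```python
import numpy as np

def fit_mri_to_hu(mri_clean, hu_clean, deg=1):
    """Fit a mapping from MRI intensity to HU on an artifact-free slice
    (polynomial fit as a stand-in for the paper's analysis)."""
    return np.polyfit(mri_clean.ravel(), hu_clean.ravel(), deg)

def correct_artifact(mri_corrupt, coeffs):
    """Replace corrupted HU by predicting it from the co-registered MRI."""
    return np.polyval(coeffs, mri_corrupt)

# Synthetic check: a clean slice obeying HU = 2*MRI - 1000, plus noise
rng = np.random.default_rng(1)
mri = rng.uniform(0, 1000, size=(64, 64))
hu = 2.0 * mri - 1000.0 + rng.normal(0, 5, size=(64, 64))
coeffs = fit_mri_to_hu(mri, hu)
pred = correct_artifact(500.0, coeffs)   # expect a value near 0 HU
```

In practice the fit would be restricted to tissue classes where the MRI-to-HU relation is well behaved; a single global polynomial is only a sketch.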
Miliordos, Evangelos; Xantheas, Sotiris S.
2013-08-15
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C_{1} symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm^{–1} from those obtained from Cartesian coordinates.
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two ^{13}C atoms (^{13}C_{2}-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ^{13}C_{2}-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ^{13}C_{2}-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
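The linear-algebra core of the approach can be sketched as an ordinary least-squares fit of the measured isotopomer distribution against basis distributions; the basis vectors below are made-up numbers for illustration, not real RGGGLK isotopomer data (in the real method the labelled basis itself depends on the precursor pool enrichment):

```python
import numpy as np

# Measured isotopomer abundances modeled as a linear combination of a
# natural-abundance distribution and a labelled distribution; the mixing
# coefficient of the labelled basis is the fractional synthesis.
d_natural = np.array([0.90, 0.08, 0.02, 0.00])   # illustrative values
d_labelled = np.array([0.40, 0.20, 0.30, 0.10])  # illustrative values

f_true = 0.35
measured = (1 - f_true) * d_natural + f_true * d_labelled

# Multiple linear regression (least squares) recovers the fractions
A = np.column_stack([d_natural, d_labelled])
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

Because the model is linear in the mixing fractions, the same least-squares machinery works in a plain spreadsheet, which is the point the authors emphasize.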
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.
2013-06-01
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
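The sort-by-type/energy/position idea behind the PSL method can be sketched with NumPy; the field names, bin widths and region-of-interest cut below are illustrative assumptions, not the gDPM file format:

```python
import numpy as np

# Toy phase-space: one record per particle
rng = np.random.default_rng(0)
n = 10_000
ps = np.zeros(n, dtype=[("type", "i4"), ("E", "f4"), ("x", "f4"), ("y", "f4")])
ps["type"] = rng.integers(0, 2, n)                 # 0 = photon, 1 = electron
ps["E"] = rng.uniform(0.01, 6.0, n)                # MeV
ps["x"], ps["y"] = rng.uniform(-20, 20, (2, n))    # cm

# Pre-sort by particle type, then energy bin, then position bin, so GPU
# threads transport similar particles together (last lexsort key is primary)
e_bin = (ps["E"] // 0.25).astype(np.int64)         # 0.25 MeV energy bins
x_bin = ((ps["x"] + 20) // 2).astype(np.int64)     # 2 cm position bins
order = np.lexsort((x_bin, e_bin, ps["type"]))
sorted_ps = ps[order]

# Discard particles outside a region of interest around the field
roi = np.abs(sorted_ps["x"]) <= 10.0
ps_roi = sorted_ps[roi]
```

Grouping by type and energy is what keeps GPU warps coherent, and the region-of-interest cut is what the abstract credits for the large speed-up with little dosimetric effect.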
Fully converged plane-wave-based self-consistent GW calculations of periodic solids
NASA Astrophysics Data System (ADS)
Cao, Huawei; Yu, Zhongyuan; Lu, Pengfei; Wang, Lin-Wang
2017-01-01
The GW approximation is a well-known method to obtain the quasiparticle and spectral properties of systems ranging from molecules to solids. In practice, GW calculations are often employed with many different approximations and truncations. In this work, we describe the implementation of a fully self-consistent GW approach based on the solution of the Dyson equation using a plane wave basis set. Algorithmic, numerical, and technical details of the self-consistent GW approach are presented. The fully self-consistent GW calculations are performed for GaAs, ZnO, and CdS including semicores in the pseudopotentials. No further approximations and truncations apart from the truncation on the plane wave basis set are made in our implementation of the GW calculation. After adopting a special potential technique, a ~100 Ry energy cutoff can be used without the loss of accuracy. We found that the self-consistent GW (sc-GW) significantly overestimates the bulk band gaps, and this overestimation is likely due to the underestimation of the macroscopic dielectric constants. On the other hand, the sc-GW accurately predicts the d-state positions, most likely because the d-state screening does not sensitively depend on the macroscopic dielectric constant. Our work indicates the need to include the high-order vertex term in order for the many-body perturbation theory to accurately predict the semiconductor band gaps. It also sheds some light on why, in some cases, the G0W0 bulk calculation is more accurate than the fully self-consistent GW calculation, because the initial density-functional theory has a better dielectric constant compared to experiments.
A calculator based program to optimize the simulation of breast irradiation.
Lederer, E W; Schwendener, H
1997-01-01
The simulation of breast fields using an isocentric set-up technique can be a lengthy process involving the placement of the isocentre, the determination of the gantry angles, and the selection of the lung shields, which in our center is one of six standard blocks. We show that with a body contour taken through the central axis, five measurements and a calculator program, it is possible to significantly decrease the amount of time required to simulate a breast patient. We have developed a program for an HP48GX handheld calculator to determine the gantry angles, the isocentre, the field width, the standard angled block, and the couch and collimator rotation. The calculations are based on measurements of the field length, the horizontal distance between the midline and the mid-axillary line, and the vertical distances from the mid-axillary line to the inferior and superior beam borders and to the central axis at the midline. We use spherical geometry to perform the calculations to reflect the true environment and do not make any assumptions about the average patient's shape. For the simulation process a jig was developed that is inserted into the tray holder of the simulator to show the optical and radiological shadow of the calculated shielding along the patient's midline for clinical assessment during simulation and on the simulation film. The jig also has a holder for an aluminum wedge to improve the image quality of the simulation film. We admit that the lung shield increases the dose to the contralateral breast because of increased scatter and transmission through the shield; however, the block decreases the volume of irradiated lung while keeping the beam edge along the midline of the patient. The technique has been in use for two years and has resulted in time savings of up to 30% per patient. It has proven to be an easy and accurate way of setting up isocentric treatments to the breast.
Wang, L; Jette, D
1999-08-01
The transport of the secondary electrons resulting from high-energy photon interactions is essential to energy redistribution and deposition. In order to develop an accurate dose-calculation algorithm for high-energy photons, which can predict the dose distribution in inhomogeneous media and at the beam edges, we have investigated the feasibility of applying electron transport theory [Jette, Med. Phys. 15, 123 (1988)] to photon dose calculation. In particular, the transport of and energy deposition by Compton electron and electrons and positrons resulting from pair production were studied. The primary photons are treated as the source of the secondary electrons and positrons, which are transported through the irradiated medium using Gaussian multiple-scattering theory [Jette, Med. Phys. 15, 123 (1988)]. The initial angular and kinetic energy distribution(s) of the secondary electrons (and positrons) emanating from the photon interactions are incorporated into the transport. Due to different mechanisms of creation and cross-section functions, the transport of and the energy deposition by the electrons released in these two processes are studied and modeled separately based on first principles. In this article, we focus on determining the dose distribution for an individual interaction site. We define the Compton dose deposition kernel (CDK) or the pair-production dose deposition kernel (PDK) as the dose distribution relative to the point of interaction, per unit interaction density, for a monoenergetic photon beam in an infinite homogeneous medium of unit density. The validity of this analytic modeling of dose deposition was evaluated through EGS4 Monte Carlo simulation. Quantitative agreement between these two calculations of the dose distribution and the average energy deposited per interaction was achieved. Our results demonstrate the applicability of the electron dose-calculation method to photon dose calculation.
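The kernel-superposition step implied by the CDK/PDK formalism, dose as the convolution of the interaction density with a point dose-deposition kernel, can be sketched in one dimension (a toy kernel, not a real CDK; the actual calculation is 3D and the kernel comes from the Gaussian multiple-scattering electron-transport model):

```python
import numpy as np

def dose_from_kernel(interaction_density, kernel):
    """Superpose a point dose-deposition kernel (CDK/PDK-like) over an
    interaction-density map: D = density convolved with kernel."""
    return np.convolve(interaction_density, kernel, mode="same")

# A single interaction site reproduces the kernel shape around it
kernel = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # toy, normalized to 1
density = np.zeros(11)
density[5] = 1.0                                  # one interaction at the center
dose = dose_from_kernel(density, kernel)
```

A normalized kernel conserves the energy deposited per interaction, which is the quantity the paper checks against EGS4 Monte Carlo.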
SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT
Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K
2014-06-01
Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and the CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images immediately before treatment of 10 prostate cancer patients were acquired. Because of insufficient calibration of the pixel values in the CBCT, they are difficult to use directly for dose calculation. The pixel values in the CBCT images were converted using an in-house program. A 7-field treatment plan (original plan) created on the sim-CT images was applied to the CBCT images and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: In the pixel-value conversion of the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. In the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively; as a whole, the difference in prescription dose was 1.54±0.4%. Conclusion: The dose calculation on the CBCT images achieves an accuracy of <2% by using this pixel-value conversion program. This may enable implementation of efficient adaptive radiotherapy.
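The histogram-based pixel-value conversion can be sketched as classic CDF matching, used here as a stand-in for the unpublished in-house program:

```python
import numpy as np

def match_histogram(cbct, ct_ref):
    """Map CBCT pixel values so their histogram matches a reference CT.

    Classic CDF matching: each CBCT pixel is assigned the reference value
    at the same quantile.
    """
    src = np.sort(cbct.ravel())
    ref = np.sort(ct_ref.ravel())
    # quantile of each CBCT pixel, then the reference value at that quantile
    q = np.searchsorted(src, cbct.ravel(), side="right") / src.size
    matched = np.interp(q, np.linspace(0.0, 1.0, ref.size), ref)
    return matched.reshape(cbct.shape)

rng = np.random.default_rng(2)
cbct = rng.normal(100, 40, (32, 32))   # mis-calibrated pixel values
ct = rng.normal(0, 60, (32, 32))       # reference HU-like distribution
conv = match_histogram(cbct, ct)
```

The mapping is monotonic, so anatomy contrast is preserved while the intensity scale is pulled onto the sim-CT calibration.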
NASA Astrophysics Data System (ADS)
Feng, Chi; Li, Dong; Gao, Shan; Daniel, Ketui
2016-11-01
This paper presents a CFD (Computational Fluid Dynamics) simulation and experimental results for the reflected radiation error from turbine vanes when measuring turbine blade temperature using a pyrometer. An accurate reflection model based on discrete irregular surfaces is established, and a double contour integral method is used to calculate the view factor between the irregular surfaces. The calculated reflected radiation error was found to change with the relative position between blades and vanes as the temperature distributions of the vanes and blades were simulated using CFD. Simulation results indicated that when the vane suction-surface temperature ranged from 860 K to 1060 K and the average blade pressure-surface temperature was 805 K, the pyrometer measurement error can reach up to 6.35%. Experimental results show that the maximum absolute pyrometer error for three different targets on the blade decreases from 6.52%, 4.15% and 1.35% to 0.89%, 0.82% and 0.69%, respectively, after error correction.
Liu, Jicheng; Huang, Kama; Guo, Lanting; Zhang, Hong; Hu, Yayi
2005-04-01
This paper aims to locate the activation point in Transcranial Magnetic Stimulation (TMS) efficiently. A scheme of a torus-shaped coil array is presented to obtain an electromagnetic field distribution with ideal focusing capability. An improved adaptive genetic algorithm (AGA) is then applied to optimize both the magnitude and the phase of the current injected into each coil. Based on the calculated results for the optimized current configurations, the focusing capability is illustrated as contour lines and 3-D mesh charts of the magnitude of both the magnetic and electric fields within the calculation area. It is shown that the coil array is capable of establishing a focused electromagnetic field distribution. In addition, it is also demonstrated that the coil array can focus on two or more targets simultaneously.
Phase-only stereoscopic hologram calculation based on Gerchberg-Saxton iterative algorithm
NASA Astrophysics Data System (ADS)
Xia, Xinyi; Xia, Jun
2016-09-01
A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer-graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg-Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from a complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstruction than the traditional method. Project supported by the National Basic Research Program of China (Grant No. 2013CB328803) and the National High Technology Research and Development Program of China (Grant Nos. 2013AA013904 and 2015AA016301).
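The GS iteration for a phase-only hologram alternates between the phase-only constraint in the hologram plane and the target-amplitude constraint in the image plane. A minimal sketch with a single-FFT propagation model (an assumption; the paper's propagation geometry is not specified in the abstract):

```python
import numpy as np

def gs_phase_only_hologram(target_amp, n_iter=50, seed=0):
    """Gerchberg-Saxton iteration for a phase-only hologram.

    Hologram plane: unit amplitude (phase-only constraint).
    Image plane: impose the target amplitude, keep the phase.
    """
    rng = np.random.default_rng(seed)
    holo = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        img = np.fft.fft2(holo)
        img = target_amp * np.exp(1j * np.angle(img))    # image-plane constraint
        holo = np.exp(1j * np.angle(np.fft.ifft2(img)))  # phase-only constraint
    return holo

target = np.zeros((64, 64))
target[16:48, 16:48] = 1.0                 # simple square target
holo = gs_phase_only_hologram(target)
recon = np.abs(np.fft.fft2(holo))
# fraction of reconstructed energy landing inside the target window
frac = (recon[16:48, 16:48] ** 2).sum() / (recon ** 2).sum()
```

A random-phase hologram would put only about a quarter of the energy inside this window; the iteration concentrates it there, which is the quality gain over the non-iterative encoding.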
Calculating the Marine Gravity Anomaly of the South China Sea based on the Inverse Stokes Formula
NASA Astrophysics Data System (ADS)
Liu, Liang; Jiang, Xiaoguang; Liu, Shanwei; Zheng, Lei; Zang, Jinxia; Zhang, Xuehua; Liu, Longfei
2016-11-01
Marine gravity field information is of great significance for resource exploration, the environment and military affairs. As a new way to obtain marine gravity data, the satellite altimetry technique compensates for the limitations of shipborne measurements. This paper investigates how altimeter data can be applied to calculating the marine gravity anomaly based on the inverse Stokes formula. Seven years of 14-track Jason-1 data over the South China Sea are edited for collinear processing and crossover adjustment. The inverse Stokes formula and the fast Fourier transform technique are applied to calculate the marine gravity anomaly of the region (0°∼23°N, 103°∼120°E) and to draw a gravity anomaly map. Compared with shipborne gravity anomaly observations, the RMS difference is 12.6 mGal, showing that a single altimetry satellite achieves good precision.
A Brief User's Guide to the Excel®-Based DF Calculator
Jubin, Robert T.
2016-06-01
To understand the importance of capturing penetrating forms of iodine as well as the other volatile radionuclides, a calculation tool was developed in the form of an Excel® spreadsheet to estimate the overall plant decontamination factor (DF). The tool requires the user to estimate the splits of the volatile radionuclides among the major portions of the reprocessing plant, the speciation of iodine, and the individual DFs for each off-gas stream within the Used Nuclear Fuel reprocessing plant. The impact on the overall plant DF for each volatile radionuclide is then calculated by the tool based on the specific user choices. The Excel® spreadsheet tracks elemental and penetrating forms of iodine separately and allows changes in the speciation of iodine at each processing step. It also tracks ³H, ¹⁴C and ⁸⁵Kr. This document provides a basic user's guide to the manipulation of this tool.
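The overall-DF bookkeeping such a tool performs can be illustrated with a minimal sketch: if each off-gas stream receives a user-estimated split of a radionuclide and applies its own DF, the overall plant DF is the inverse of the summed released fractions. The stream names and numbers below are hypothetical, not values from the report.

```python
def overall_df(streams):
    """streams: list of (split_fraction, stream_df) pairs.
    Each stream releases split/df of the inventory routed to it; the overall
    plant DF is the inverse of the total released fraction."""
    assert abs(sum(s for s, _ in streams) - 1.0) < 1e-9, "splits must sum to 1"
    released = sum(split / df for split, df in streams)
    return 1.0 / released

# Hypothetical iodine splits: 80% to dissolver off-gas (DF 100),
# 15% to vessel off-gas (DF 20), 5% to cell off-gas (DF 1, untreated).
df = overall_df([(0.80, 100.0), (0.15, 20.0), (0.05, 1.0)])
```

Note how the small untreated stream dominates: the overall DF is far below the best single-stream DF, which is exactly why tracking the penetrating species through every stream matters.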
Modifications of the chromophore of Spinach aptamer based on QM:MM calculations.
Skúpa, Katarína; Urban, Ján
2017-02-01
The Spinach aptamer was developed as an RNA analog of the green fluorescent protein. The aptamer interacts with its ligand and modifies its electronic spectrum so that it fluoresces brightly at a wavelength of 501 nm. Song et al. investigated modifications of the ligand in their experimental study and found a molecule emitting at 523 nm upon forming a complex with the Spinach aptamer. The crystal structure of the aptamer in complex with its original ligand has been published, which enabled us to study the system computationally. In this article, we suggest several new modifications of the ligand that shift the emission maximum of the complex to even longer wavelengths. Our results are based on combined quantum mechanical/molecular mechanical calculations, with the DFT method used for geometry optimization and TD-DFT for calculations of absorption and emission energies.
A theoretical study of blue phosphorene nanoribbons based on first-principles calculations
Xie, Jiafeng; Si, M. S.; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S.
2014-08-21
Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters describing the quantum confinement are obtained by fitting the calculated band gaps with respect to the widths. The results show that the quantum confinement in armchair nanoribbons is stronger than that in zigzag ones. This study provides an efficient approach to tune the band gap in BPNRs.
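The fitting step described above can be sketched as follows, assuming the generic confinement form E_g(w) = E_0 + C/w^α; the functional form, data, and parameter values below are synthetic illustrations, not the paper's fitted results.

```python
import numpy as np

# Synthetic gap-vs-width data following E_g(w) = E_0 + C / w**alpha,
# where E_0 is the infinite-width (2D sheet) gap.
E0_true, C_true, alpha_true = 2.0, 8.0, 1.1
widths = np.array([10.0, 15.0, 20.0, 30.0, 40.0, 60.0])   # in angstroms
gaps = E0_true + C_true / widths**alpha_true

def fit_confinement(widths, gaps, E0):
    # With E0 known, log(E_g - E0) = log(C) - alpha*log(w) is linear in log(w),
    # so a degree-1 polynomial fit recovers both confinement parameters.
    slope, intercept = np.polyfit(np.log(widths), np.log(gaps - E0), 1)
    return np.exp(intercept), -slope   # C, alpha

C_fit, alpha_fit = fit_confinement(widths, gaps, E0_true)
```

A larger fitted C (at comparable α) would indicate stronger confinement, which is how the armchair/zigzag comparison in the abstract can be quantified.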
NASA Astrophysics Data System (ADS)
Nguyen van Ye, Romain; Del-Castillo-Negrete, Diego; Spong, D.; Hirshman, S.; Farge, M.
2008-11-01
A limitation of particle-based transport calculations is the noise due to limited statistical sampling. Thus, a key element for the success of these calculations is the development of efficient denoising methods. Here we discuss denoising techniques based on Proper Orthogonal Decomposition (POD) and Wavelet Decomposition (WD). The goal is the reconstruction of smooth (denoised) particle distribution functions from discrete particle data obtained from Monte Carlo simulations. In 2-D, the POD method is based on low rank truncations of the singular value decomposition of the data. For 3-D we propose the use of a generalized low rank approximation of matrices technique. The WD denoising is based on the thresholding of empirical wavelet coefficients [Donoho et al., 1996]. The methods are illustrated and tested with Monte Carlo particle simulation data of plasma collisional relaxation including pitch angle and energy scattering. As an application we consider guiding-center transport with collisions in a magnetically confined plasma in toroidal geometry. The proposed noise reduction methods allow high levels of smoothness in the particle distribution function to be achieved using significantly fewer particles in the computations.
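The 2-D POD denoising step, a low-rank truncation of the SVD, can be sketched on a toy distribution function; the smooth field and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth 2-D "distribution function" plus sampling noise (toy stand-in for
# a binned Monte Carlo particle histogram).
x = np.linspace(0, 1, 80)
clean = np.outer(np.exp(-((x - 0.5) / 0.2)**2), np.sin(np.pi * x))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

def pod_denoise(f, rank):
    """Low-rank truncation of the SVD: keep only the first `rank` POD modes."""
    U, s, Vt = np.linalg.svd(f, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

denoised = pod_denoise(noisy, rank=1)   # the clean field is rank 1 by construction
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The truncation discards the small singular values that carry mostly sampling noise; choosing the rank automatically (and generalizing to 3-D arrays) is the harder problem the abstract addresses.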
SU-E-T-355: Efficient Scatter Correction for Direct Ray-Tracing Based Dose Calculation
Chen, M; Jiang, S; Lu, W
2015-06-15
Purpose: To propose a scatter correction method with linear computational complexity for direct-ray-tracing (DRT) based dose calculation. Due to its speed and simplicity, DRT is widely used as a dose engine in treatment planning systems (TPS) and monitor unit (MU) verification software, where heterogeneity correction is applied by radiological distance scaling. However, such correction accounts only for attenuation and not for scatter differences, making the DRT algorithm less accurate than model-based algorithms for small field sizes in heterogeneous media. Methods: Inspired by the convolution formula derived from an exponential kernel as is typically done in the collapsed-cone-convolution-superposition (CCCS) method, we redesigned the ray tracing component as the sum of TERMA scaled by a local deposition factor, which is linear with respect to density, and the dose of the previous voxel scaled by a remote deposition factor: D(i) = aρ(i)T(i) + (b + c(ρ(i) − 1))D(i−1), where T(i) = e^(−αr(i) + β(r(i))²) and r(i) = Σ_{j=1,…,i} ρ(j). The two factors together with TERMA can be expressed in terms of 5 parameters, which are subsequently optimized by curve fitting using digital phantoms for each field size and each beam energy. Results: The proposed algorithm was implemented for the Fluence-Convolution-Broad-Beam (FCBB) dose engine and evaluated using digital slab phantoms and clinical CT data. Compared with the gold standard calculation, dose deviations were improved from 20% to 2% in the low density regions of the slab phantoms for the 1-cm field size, and were within 2% for over 95% of the volume, with the largest discrepancy at the interface, for the clinical lung case. Conclusion: We developed a simple recursive formula for scatter correction for DRT-based dose calculation with much improved accuracy, especially for small field sizes, while still keeping the calculation to linear complexity. The proposed calculator is fast yet accurate, which is crucial for dose updating in IMRT.
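The recursion quoted in the abstract can be coded directly. The five parameter values below are placeholders chosen for illustration; in the paper they are obtained by curve fitting per field size and beam energy.

```python
import math

# Placeholder parameters (a, b, c, alpha, beta); in the abstract these five
# constants are fitted against digital-phantom calculations per field/energy.
a, b, c = 0.05, 0.90, 0.02
alpha, beta = 0.04, 1e-5

def dose_profile(rho):
    """Depth dose along one ray: D(i) = a*rho(i)*T(i) + (b + c*(rho(i)-1))*D(i-1),
    with TERMA T(i) = exp(-alpha*r(i) + beta*r(i)**2) and r(i) the running
    radiological depth r(i) = sum_{j<=i} rho(j). Linear in the number of voxels."""
    D, doses, r = 0.0, [], 0.0
    for dens in rho:
        r += dens                                  # radiological depth
        T = math.exp(-alpha * r + beta * r * r)    # TERMA along the ray
        D = a * dens * T + (b + c * (dens - 1.0)) * D
        doses.append(D)
    return doses

water = dose_profile([1.0] * 30)                          # homogeneous water
lung = dose_profile([1.0] * 10 + [0.3] * 10 + [1.0] * 10) # water/lung/water slab
```

The remote factor b + c(ρ−1) is what carries the scatter correction: in low-density media both the local deposition and the carried-forward dose are reduced, which plain radiological scaling cannot reproduce.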
Accurate heat of formation for fully hydrided LaNi5 via the all-electron FLAPW approach
NASA Astrophysics Data System (ADS)
Zhao, Yu-Jun; Freeman, A. J.
2003-03-01
It is known that the theoretical/computational determination of the heat of formation for La_2Ni_10H_14, Δ H_f, is overestimated theoretically by 50% or more when a pseudopotential approach is employed.(Tatsumi et al), PRB 64, 184105(2001) Does this signify a failure of first-principles total energy calculations? Here, we employ the full-potential linearized augmented plane wave (FLAPW) method(Wimmer, Krakauer, Weinert, and Freeman, PRB 24), 864 (1981). within both the generalized gradient approximation (GGA) and the localized density approximation (LDA), with a highly precise treatment of the total energy of H2 molecule due to its critical role in the calculation of Δ H_f. The calculated Δ Hf (-31.1 KJ/mol-H_2) and geometry structure within GGA are in excellent agreement with experiment ( ˜ -32 KJ/mol-H_2). While LDA calculations underestimate the volume of LaNi5 by 10.4%, the final value of Δ Hf (-31.2 KJ/mol-H_2) is also in excellent agreement with experiment. These results show the success rather than failure of first-principles calculations. The electronic properties indicate that charge transfer from the interstitial region to the H atoms stabilizes the fully hydrided LaNi_5.
Voxel-based dose calculation in radiocolloid therapy of cystic craniopharyngiomas
NASA Astrophysics Data System (ADS)
Treuer, H.; Hoevels, M.; Luyken, K.; Gierich, A.; Hellerbach, A.; Lachtermann, B.; Visser-Vandewalle, V.; Ruge, M.; Wirths, J.
2015-02-01
Very high doses are administered in radiocolloid therapy of cystic craniopharyngiomas. However, individual dose planning is not yet common, mainly due to insufficient image resolution. Our aim was to investigate whether currently available high-resolution image data can be used for voxel-based dose calculation for short-ranged β-emitters (³²P, ⁹⁰Y, ¹⁸⁶Re) and to assess the achievable accuracy. We developed a convolution algorithm based on voxelized dose activity distributions and dose-spread kernels. Results for targets with 5-40 mm diameter were compared with high-resolution Monte Carlo calculations in spherical phantoms. The voxel size was 0.35 mm. Homogeneous volume and surface activity distributions were used. Dose-volume histograms of targets and shell structures were compared, and the γ index (dose tolerance 5%, distance to agreement 0.35 mm) was calculated for dose profiles along the principal axes. For volumetric activity distributions 89.3% ± 11.9% of all points passed the γ test (mean γ 0.53 ± 0.16). For surface distributions 33.6% ± 14.8% of all points passed the γ test (mean γ 2.01 ± 0.60). The shift of curves in dose-volume histograms was -1.7 Gy ± 7.6 Gy (-4.4 Gy ± 24.1 Gy for ¹⁸⁶Re) in volumetric distributions and 46.3% ± 32.8% in surface distributions. The results show that individual dose planning for radiocolloid therapy of cystic craniopharyngiomas based on high-resolution voxelized image data is feasible and yields highly accurate results for volumetric activity distributions and reasonable dose estimates for surface distributions.
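The γ test used above can be sketched in 1-D with the abstract's tolerances (5% dose, 0.35 mm distance to agreement); the dose profiles below are synthetic, not the paper's data.

```python
import math

def gamma_index(ref, evald, spacing, dose_tol=0.05, dta=0.35):
    """1-D gamma index: for each evaluated point, minimize over reference points
    the combined dose-difference / distance-to-agreement metric."""
    norm = max(ref)                      # global dose normalization
    gammas = []
    for i, d_e in enumerate(evald):
        best = float("inf")
        for j, d_r in enumerate(ref):
            dist = (i - j) * spacing                 # mm
            ddiff = (d_e - d_r) / (dose_tol * norm)  # dose term in tolerance units
            best = min(best, (dist / dta)**2 + ddiff**2)
        gammas.append(math.sqrt(best))
    return gammas

# Synthetic Gaussian profile on a 0.35 mm grid, plus a slightly shifted copy
# (within tolerance) and a 20%-overdosed copy (out of tolerance).
ref = [math.exp(-((k - 20) / 6.0)**2) for k in range(41)]
shifted = [math.exp(-((k - 20.2) / 6.0)**2) for k in range(41)]
bad = [1.2 * r for r in ref]
pass_rate = sum(1 for x in gamma_index(ref, shifted, 0.35) if x <= 1.0) / 41
pass_rate_bad = sum(1 for x in gamma_index(ref, bad, 0.35) if x <= 1.0) / 41
```

A point passes when γ ≤ 1, i.e. it agrees with some reference point within the combined dose/distance ellipse; the small spatial shift passes everywhere, while the 20% overdose fails near the peak.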
Calculation of grey level co-occurrence matrix-based seismic attributes in three dimensions
NASA Astrophysics Data System (ADS)
Eichkitz, Christoph Georg; Amtmann, Johannes; Schreilechner, Marcellus Gregor
2013-10-01
Seismic interpretation can be supported by seismic attribute analysis. Common seismic attributes use mathematical relationships based on the geometry and the physical properties of the subsurface to reveal features of interest, but they are mostly incapable of describing the spatial arrangement of depositional facies or reservoir properties. Textural attributes such as the grey level co-occurrence matrix (GLCM) and its derived attributes are able to describe the spatial dependencies of seismic facies. The GLCM, primarily used for 2D data, is a measure of how often different combinations of pixel brightness values occur in an image. In this paper we present a workflow for fully three-dimensional calculation of GLCM-based seismic attributes that also considers the structural dip of the seismic data. In our GLCM workflow we consider all 13 possible space directions to determine GLCM-based attributes. The developed workflow is applied to various seismic datasets, and the results of the GLCM calculation are compared to common seismic attributes such as coherence.
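The core GLCM computation for a single displacement direction can be sketched as follows; the paper's workflow evaluates all 13 unique 3-D directions and steers them along structural dip, whereas this 2-D toy shows one direction on a quantized image of assumed size and grey-level count.

```python
import numpy as np

def glcm(image, levels, offset):
    """Grey level co-occurrence matrix for one displacement vector `offset`
    (one of the 13 unique space directions in 3-D; shown here in 2-D).
    Entry (p, q) counts how often level p co-occurs with level q at `offset`."""
    di, dj = offset
    M = np.zeros((levels, levels))
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < rows and 0 <= j2 < cols:
                M[image[i, j], image[i2, j2]] += 1
    return M / M.sum()        # normalize to co-occurrence probabilities

def contrast(P):
    # A standard GLCM-derived attribute: large when co-occurring levels differ.
    idx = np.arange(P.shape[0])
    return float(np.sum(P * (idx[:, None] - idx[None, :])**2))

smooth = np.tile(np.array([[0, 0, 1, 1]]), (4, 1))   # gently varying texture
noisy = np.array([[0, 3, 0, 3]] * 4)                 # rapidly varying texture
c_smooth = contrast(glcm(smooth, levels=4, offset=(0, 1)))
c_noisy = contrast(glcm(noisy, levels=4, offset=(0, 1)))
```

Attributes such as contrast, homogeneity and entropy are then derived from the normalized matrix, one value per analysis window and direction.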
Basis set convergence of electric properties in HF and DFT calculations of nucleic acid bases
NASA Astrophysics Data System (ADS)
Campos, C. T.; Jorge, F. E.
Recently, a hierarchical sequence of augmented basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (AXZP, X = D, T, and Q) for the atoms from H to Ar was presented by Jorge et al. We report a systematic study of the basis sets required to obtain accurate values of several electric properties for benzene, pyridine, the five common nucleic acid bases (uracil, cytosine, thymine, guanine, and adenine), and three related bases (fluorouracil, 5-methylcytosine, and hypoxanthine) at their fully optimized geometries. Two methods were examined: Hartree-Fock (HF) and density functional theory (DFT). Including electron correlation decreases the magnitude of the dipole moment and increases the mean polarizability as well as the polarizability anisotropy for every molecule. Calculated B3LYP/ADZP dipole moments and dipole polarizabilities show good agreement with both experimental and ab initio results based on second-order Møller-Plesset perturbation theory calculations. We have also shown that a basis set of double zeta quality is sufficient to obtain reliable and accurate electric property results for these kinds of compounds.
Improving iterative surface energy balance convergence for remote sensing based flux calculation
NASA Astrophysics Data System (ADS)
Dhungel, Ramesh; Allen, Richard G.; Trezza, Ricardo
2016-04-01
A modification of the iterative procedure of the surface energy balance was proposed to expedite the convergence of the Monin-Obukhov stability correction utilized by remote sensing based flux calculations. This was demonstrated using ground-based weather stations as well as gridded weather data (North American Regional Reanalysis) and remote sensing based (Landsat 5, 7) images. The study was conducted for different land-use classes in southern Idaho and northern California for multiple satellite overpasses. The convergence behavior of a selected Landsat pixel, as well as of all the Landsat pixels within the area of interest, was analyzed. The modified version required several times fewer iterations than the current iterative technique. At times of low wind speed (∼1.3 m/s), the current iterative technique was not able to find a solution of the surface energy balance for all of the Landsat pixels, while the modified version was able to achieve it in a few iterations. The study will help many operational evapotranspiration models avoid nonconvergence at low wind speeds, which helps to increase the accuracy of flux calculations.
a Novel Sub-Pixel Matching Algorithm Based on Phase Correlation Using Peak Calculation
NASA Astrophysics Data System (ADS)
Xie, Junfeng; Mo, Fan; Yang, Chao; Li, Pin; Tian, Shiqiang
2016-06-01
The matching accuracy of homonymy points of stereo images is a key issue in photogrammetry, since it influences the geometric accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve the matching accuracy. The theoretical peak centre, which corresponds to the sub-pixel deviation, can be acquired by Peak Calculation (PC) from the inherent geometrical relationship of the inverse normalized cross-power spectrum. Mismatched points are rejected by two strategies: a window constraint, designed from the matching window and a geometric constraint, and a correlation coefficient, which is effective for removing mismatched points in satellite images. After these steps, many high-precision homonymy points remain. Finally, three experiments are conducted to verify the accuracy and efficiency of the presented method. The results show that the presented method outperforms traditional phase correlation matching methods based on surface fitting in both accuracy and efficiency, and the accuracy of the proposed phase correlation matching algorithm can reach 0.1 pixel with higher calculation efficiency.
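The normalized cross-power spectrum underlying the method can be sketched as follows. This toy example recovers the integer-pixel shift by locating the correlation peak; the paper's Peak Calculation then refines the peak centre to sub-pixel accuracy from its geometric relationship, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_correlation(a, b):
    """Inverse FFT of the normalized cross-power spectrum of a and b;
    the resulting correlation surface peaks at the translation between them."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-12          # normalize: keep phase, discard magnitude
    surface = np.abs(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return peak, surface

img = rng.standard_normal((64, 64))          # toy matching window
shifted = np.roll(img, (5, 9), axis=(0, 1))  # known circular shift
(peak_r, peak_c), surf = phase_correlation(shifted, img)
```

For a pure translation the surface is (ideally) a delta function, which is why the neighborhood of the peak carries enough information for sub-pixel estimation.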
Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin
2012-02-10
We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEpSI are modest. This makes it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes even on a single processor. We also show that the use of PEpSI does not lead to loss of the accuracy required in a practical DFT calculation.
SKE/BKE task-based methodology for calculating Hotelling observer SNR in mammography
NASA Astrophysics Data System (ADS)
Liu, Haimo; Kyprianou, Iacovos S.; Badano, Aldo; Myers, Kyle J.; Jennings, Robert J.; Park, Subok; Kaczmarek, Richard V.; Chakrabarti, Kish
2009-02-01
A common method for evaluating projection mammography is Contrast-Detail (CD) curves derived from the CD phantom for Mammography (CDMAM). The CD curves are derived either by human observers or by automated readings. Both methods have drawbacks that limit their reliability: human-based reading is significantly affected by reader variability, reduced precision and bias, while automated methods suffer from limited statistics. The purpose of this paper is to develop a simple and reliable methodology for the evaluation of mammographic imaging systems using the Signal Known Exactly/Background Known Exactly (SKE/BKE) detection task for signals relevant to mammography. In this paper, we used the spatial definition of the ideal linear (Hotelling) observer to calculate the task-specific SNR for mammography and discussed the results. The noise covariance matrix as well as the detector response H matrix of the imaging system were estimated and used to calculate the SNR_SKE/BKE for the simulated discs of the CDMAM. The SNR as a function of exposure, disc diameter and thickness was calculated.
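For an SKE/BKE task the Hotelling observer SNR is SNR² = sᵀK⁻¹s, with s the mean signal (the disc as seen through the detector response) and K the noise covariance. A minimal sketch, using toy stand-ins for the estimated covariance and signal rather than the paper's measured H matrix:

```python
import numpy as np

def hotelling_snr(signal, cov):
    """Ideal linear (Hotelling) observer SNR for an SKE/BKE detection task:
    SNR = sqrt(s^T K^{-1} s)."""
    t = np.linalg.solve(cov, signal)     # K^{-1} s without forming the inverse
    return float(np.sqrt(signal @ t))

# Toy 1-D detector: a small Gaussian "disc" signal on exponentially
# correlated noise (illustrative stand-ins, not measured system data).
n = 32
s = 0.1 * np.exp(-((np.arange(n) - 16) / 2.0)**2)
K = 0.01 * np.fromfunction(lambda i, j: 0.5**abs(i - j), (n, n))
snr = hotelling_snr(s, K)
```

Because the observer is linear, doubling the signal contrast doubles the SNR, which is what lets one trace SNR against disc thickness and exposure as the abstract describes.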
NASA Astrophysics Data System (ADS)
Arroudj, S.; Bouchouit, M.; Bouchouit, K.; Bouraiou, A.; Messaadia, L.; Kulyk, B.; Figa, V.; Bouacida, S.; Sofiani, Z.; Taboukhat, S.
2016-06-01
This paper explores the synthesis, structural characterization and optical properties of two new Schiff bases. These compounds were obtained by condensation of o-tolidine with salicylaldehyde and cinnamaldehyde. The obtained ligands were characterized by UV and ¹H NMR spectroscopy. Their third-order NLO properties were measured using the third harmonic generation technique on thin films at 1064 nm. The electric dipole moment (μ), the polarizability (α) and the first hyperpolarizability (β) were calculated using the density functional B3LYP method with the lanl2dz basis set. The title compounds show nonzero β values, revealing second-order NLO behaviour.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
Chen, M; Jiang, S; Lu, W
2015-06-15
Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.
Wang, Lin-Wang
2006-12-01
Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide what future computer architecture will be most useful for these communities, and what should be emphasized in future supercomputer procurement. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation owing to its limited communication requirement, compared with the spectral method where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N³) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectral methods, covering their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the
GPU-based fast Monte Carlo dose calculation for proton therapy.
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B
2012-12-07
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation
NASA Astrophysics Data System (ADS)
Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei
2007-02-01
On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from treatment plan. Here we evaluate the achievable accuracy in using a kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation and the anatomy may be different from that of CBCT acquired at the time of treatment delivery because of organ deformation. To tackle the problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possesses the geometric information of the CBCT but the electronic density distribution mapped from the pCT with the help of a BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Muhire, Brejnev Muhizi; Varsani, Arvind; Martin, Darren Patrick
2014-01-01
The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed from multiple sequence alignments rather than from multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free, user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication-quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV-approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
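The gap-handling sensitivity the abstract describes can be seen in a toy pairwise identity calculation. This is an illustrative sketch under an assumed convention, not SDT's exact scoring; the point is precisely that different gap conventions yield different scores for the same aligned pair:

```python
def pairwise_identity(seq_a, seq_b, count_gaps=True):
    """Fraction of identical characters between two pre-aligned sequences.

    Assumed convention (not SDT's exact rule): columns gapped in both
    sequences are skipped; columns gapped in one sequence count as
    mismatches when count_gaps=True and are skipped otherwise.
    """
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    matches = 0
    compared = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' and b == '-':
            continue  # gap in both: not a comparable column
        if (a == '-' or b == '-') and not count_gaps:
            continue  # skip single-gap columns under this convention
        compared += 1
        if a == b:
            matches += 1
    return matches / compared

# The two conventions disagree on the same alignment:
print(pairwise_identity("ACG-T", "ACGAT", count_gaps=True))   # 4/5 = 0.8
print(pairwise_identity("ACG-T", "ACGAT", count_gaps=False))  # 4/4 = 1.0
```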
Calculations of helium separation via uniform pores of stanene-based membranes
Gao, Guoping; Jiao, Yan; Jiao, Yalong; Ma, Fengxian; Kou, Liangzhi
2015-01-01
The development of low-energy-cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using the recently experimentally realized two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies (i.e., the application of strain and functionalization) are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as superior membranes compared with traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by applying strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting interesting new materials for helium separation for future experimental validation. PMID:26885459
EXAFS simulations in Zn-doped LiNbO3 based on defect calculations
NASA Astrophysics Data System (ADS)
Valerio, Mário E. G.; Jackson, Robert A.; Bridges, Frank G.
2017-02-01
Lithium niobate, LiNbO3, is an important technological material with good electro-optic, acousto-optic, elasto-optic, piezoelectric and nonlinear properties. EXAFS on Zn-doped LiNbO3 found strong evidence that Zn substitutes primarily at the Li site in highly doped samples. In this work the EXAFS results were revisited using a different approach in which the models for simulating the EXAFS results were obtained from the output of defect calculations. The strategy uses the relaxed positions of the ions surrounding the dopants to generate a cluster from which the EXAFS oscillations can be calculated. The defect involves not only the possible Zn substitution at either the Li or Nb site but also the charge-compensating defects, when needed. From previous defect modelling, a subset of defects was selected based on the energetics of defect production in the LiNbO3 lattice. From these, all possible clusters were generated and the simulated EXAFS spectra were computed and then compared to available EXAFS results in the literature. Based on this comparison, different models could be proposed to explain the behaviour of Zn in the LiNbO3 matrix.
BaTiO3-based nanolayers and nanotubes: first-principles calculations.
Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D
2013-01-30
First-principles calculations using a hybrid exchange-correlation functional and a localized atomic basis set are performed for BaTiO3 (BTO) nanolayers and nanotubes (NTs) with structure optimization. Both the cubic and the ferroelectric BTO phases are used for modeling the nanolayers and NTs. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of (0 1 0) and (0 1 1̄) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depend on the original bulk crystal phase and the wall thickness. The majority of the considered NTs with low formation and strain energies have a mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1̄) layers may show an antiferroelectric arrangement of Ti-O bonds. A comparison of the stability of BTO-based and SrTiO3-based NTs shows that the former are more stable than the latter.
Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki
2013-01-01
This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial accelerometer and tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were then estimated using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three-dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
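The core of such quaternion-based orientation estimation is the integration of gyro angular velocity into an orientation quaternion. A minimal sketch of that step (not the authors' exact algorithm, which also uses accelerometer data for the initial orientation and a calibration trial):

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by body-frame angular velocity
    omega (rad/s) over time step dt, via the axis-angle exponential."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)  # renormalize to suppress drift

# Rotating at pi/2 rad/s about z for 1 s yields a 90-degree yaw:
q = np.array([1.0, 0.0, 0.0, 0.0])
q = integrate_gyro(q, np.array([0.0, 0.0, np.pi / 2]), 1.0)
print(np.round(q, 6))  # ~[0.707107, 0, 0, 0.707107]
```

In a real gait pipeline this update runs once per gyro sample, and the resulting sensor orientations are mapped to body-segment orientations through the calibration rotation matrix the paper describes.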
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
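Normalizing burn rates measured at different chamber pressures to a common reference pressure is conventionally done with Saint-Robert's (Vieille's) law, r = a·Pⁿ. A sketch of that normalization; the exponent value below is an assumed illustrative number, not one taken from the report:

```python
def normalize_burn_rate(rate, pressure, p_ref, n=0.35):
    """Normalize a measured burn rate to a reference chamber pressure
    using Saint-Robert's law r = a * P**n, so rates from motors fired at
    slightly different pressures can be compared on a common basis.
    n = 0.35 is an assumed, illustrative pressure exponent; real
    propellant exponents are determined experimentally."""
    return rate * (p_ref / pressure) ** n

# Two motors fired at slightly different pressures, compared at 1000 psia:
print(normalize_burn_rate(0.40, 950.0, 1000.0))
print(normalize_burn_rate(0.41, 1020.0, 1000.0))
```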
Esmaielzadeh, Sheida; Azimian, Leila; Shekoohi, Khadijeh; Mohammadi, Khosro
2014-12-10
The synthesis, magnetic properties and spectroscopic characterization of five copper(II) complexes of tetradentate Schiff bases derived from methyl-2-(N-2'-aminoethane), (1-methyl-2'-aminoethane) and (3-aminopropylamino)cyclopentenedithiocarboxylate are described. Molar conductance and infrared spectral evidence indicate that the complexes are four-coordinate, with the Schiff bases coordinated as NNOS ligands. Room-temperature μeff values for the complexes are 1.71-1.80 B.M., corresponding to one unpaired electron. The formation constants and free energies were measured spectrophotometrically at constant ionic strength 0.1 M (NaClO4) at 25 °C in DMF solvent. DFT calculations were also carried out to determine the structural and geometrical properties of the complexes. The DFT results are further supported by the experimental formation constants of these complexes.
Proper orthogonal decomposition methods for noise reduction in particle-based transport calculations
NASA Astrophysics Data System (ADS)
del-Castillo-Negrete, D.; Spong, D. A.; Hirshman, S. P.
2008-09-01
Proper orthogonal decomposition techniques to reduce noise in the reconstruction of the distribution function in particle-based transport calculations are explored. For two-dimensional steady-state problems, the method is based on low rank truncations of the singular value decomposition of a coarse-grained representation of the particle distribution function. For time-dependent two-dimensional problems or three-dimensional time-independent problems, the use of a generalized low-rank approximation of matrices technique is proposed. The methods are illustrated and tested with Monte Carlo particle simulation data of plasma collisional relaxation and guiding-center transport with collisions in a magnetically confined plasma in toroidal geometry. It is observed that the proposed noise reduction methods achieve high levels of smoothness in the particle distribution function by using significantly fewer particles in the computations.
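The low-rank SVD truncation described for two-dimensional steady-state problems can be sketched directly: coarse-grain the particle data onto a grid, take the SVD, and keep only the leading singular triplets. The grid, noise level, and rank below are illustrative choices:

```python
import numpy as np

def pod_denoise(f, rank):
    """Low-rank truncation of the SVD of a coarse-grained distribution
    function f (2D array), keeping the leading `rank` singular triplets."""
    u, s, vt = np.linalg.svd(f, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Noisy sampling of a smooth, separable (rank-1) "distribution" on a grid:
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 64)
smooth = np.exp(-x[:, None] ** 2) * np.exp(-x[None, :] ** 2)
noisy = smooth + 0.05 * rng.standard_normal((64, 64))
denoised = pod_denoise(noisy, rank=2)

# The truncation discards most of the noise energy:
err_noisy = np.linalg.norm(noisy - smooth)
err_denoised = np.linalg.norm(denoised - smooth)
print(err_denoised < err_noisy)  # True
```

The physical distribution function is smooth and hence well captured by a few modes, while particle sampling noise spreads across all singular values; truncation therefore filters noise without smoothing away the resolved structure.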
Implementation of a Web-Based Spatial Carbon Calculator for Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Degagne, R. S.; Bachelet, D. M.; Grossman, D.; Lundin, M.; Ward, B. C.
2013-12-01
A multi-disciplinary team from the Conservation Biology Institute is creating a web-based tool for the InterAmerican Development Bank (IDB) to assess the impact of potential development projects on carbon stocks in Latin America and the Caribbean. Funded by the German Society for International Cooperation (GIZ), this interactive carbon calculator is an integrated component of the IDB Decision Support toolkit which is currently utilized by the IDB's Environmental Safeguards Group. It is deployed on the Data Basin (www.databasin.org) platform and provides a risk screening function to indicate the potential carbon impact of various types of projects, based on a user-delineated development footprint. The tool framework employs the best available geospatial carbon data to quantify above-ground carbon stocks and highlights potential below-ground and soil carbon hotspots in the proposed project area. Results are displayed in the web mapping interface, as well as summarized in PDF documents generated by the tool.
NASA Astrophysics Data System (ADS)
Chaudhari, Mrunalkumar
Nickel-based superalloys have superior high-temperature mechanical strength and corrosion and creep resistance in harsh environments, and have found applications in the hot sections of jet engines and gas generator turbines as turbine blades and turbine discs in the aerospace and energy industries. The efficiency of these turbine engines depends on the turbine inlet temperature, which is determined by the high-temperature strength and behavior of these superalloys. The microstructure of nickel-based superalloys usually contains coherently precipitated gamma prime (gamma') Ni3Al phase within the random solid solution of the gamma matrix, with the gamma' phase being the strengthening phase of the superalloys. How the alloying elements partition into the gamma and gamma' phases, and especially their site occupancy behavior in the strengthening gamma' phase, plays a critical role in the high-temperature mechanical behavior. The goal of this dissertation is to study the site substitution behavior of the major alloying elements Cr, Co and Ti through first-principles calculations. Site substitution energies have been calculated using the anti-site formation formalism, the standard defect formation formalism, and the vacancy-formation-based formalism. Elements such as Cr and Ti were found to show a strong preference for the Al sublattice, whereas Co was found to have a compositionally dependent site preference. In addition, the interaction energies between Cr-Cr, Co-Co, Ti-Ti and Cr-Co atom pairs have been determined. Along with the charge transfer, the chemical bonding and alloy chemistry associated with the substitutions have been investigated by examining the charge density distributions and electronic densities of states to explain the chemical nature of the site substitution. Results show that Cr and Co atoms prefer to be nearby, either on the Al sublattice or on a mixed Ni-Al lattice, suggesting a potential tendency of Cr and Co segregation in the gamma' phase.
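Comparing site preferences from such calculations reduces to standard defect-formation bookkeeping: the energy cost of swapping one host atom for the solute, referenced to chemical-potential reservoirs. A sketch of that arithmetic; every number below is invented for illustration and is not from the dissertation:

```python
def substitution_energy(e_defect, e_perfect, mu_removed, mu_added):
    """Formation energy (eV) of substituting one atom in a supercell:
    E_sub = E(defect cell) - E(perfect cell) + mu(removed species)
            - mu(added species).
    The chemical potentials mu fix the atomic reservoirs and are the main
    modeling choice when comparing site preferences."""
    return e_defect - e_perfect + mu_removed - mu_added

# Illustrative (made-up) energies: solute X on the Al sublattice vs the Ni
# sublattice of Ni3Al; the lower value indicates the preferred site.
e_on_al = substitution_energy(-351.5, -347.8, mu_removed=-3.74, mu_added=-6.25)
e_on_ni = substitution_energy(-349.6, -347.8, mu_removed=-5.57, mu_added=-6.25)
print(e_on_al < e_on_ni)  # True: Al-site substitution preferred with these numbers
```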
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, since data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously were of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Model-based dose calculations for 125I lung brachytherapy
Sutherland, J. G. H.; Furutani, K. M.; Garces, Y. I.; Thomson, R. M.
2012-07-15
Purpose: Model-based dose calculations (MBDCs) are performed using patient computed tomography (CT) data for patients treated with intraoperative 125I lung brachytherapy at the Mayo Clinic Rochester. Various metallic artifact correction and tissue assignment schemes are considered and their effects on dose distributions are studied. Dose distributions are compared to those calculated under TG-43 assumptions. Methods: Dose distributions for six patients are calculated using phantoms derived from patient CT data and the EGSnrc user-code BrachyDose. 125I (GE Healthcare/Oncura model 6711) seeds are fully modeled. Four metallic artifact correction schemes are applied to the CT data phantoms: (1) no correction, (2) a filtered back-projection on a modified virtual sinogram, (3) the reassignment of CT numbers above a threshold in the vicinity of the seeds, and (4) a combination of (2) and (3). Tissue assignment is based on voxel CT number, and mass density is assigned using a CT number to mass density calibration. Three tissue assignment schemes with varying levels of detail (20, 11, and 5 tissues) are applied to metallic artifact corrected phantoms. Simulations are also performed under TG-43 assumptions, i.e., seeds in homogeneous water with no interseed attenuation. Results: Significant dose differences (up to 40% for D90) are observed between uncorrected and metallic artifact corrected phantoms. For phantoms created with metallic artifact correction schemes (3) and (4), dose volume metrics are generally in good agreement (less than 2% differences for all patients) although there are significant local dose differences. The application of the three tissue assignment schemes results in differences of up to 8% for D90; these differences vary between patients. Significant dose differences are seen between fully modeled and TG-43 calculations with TG-43 underestimating the dose (up to 36% in D90) for larger volumes containing higher proportions of
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Martinez-Rovira, I.; Sempau, J.; Prezado, Y.
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Liu, Miao; Rong, Ziqin; Malik, Rahul; ...
2014-12-16
Batteries that shuttle multivalent ions such as Mg2+ and Ca2+ are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity, thermodynamic stability of charged and discharged states, as well as the intercalating ion mobility, and use these properties to evaluate promising directions. Our calculations indicate that Mn2O4 spinel phases based on Mg and Ca are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than those of Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al3+ ion migration in the Mn2O4 spinel is very high (~1400 meV in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.
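The insertion voltage quoted in such screening studies follows from the standard average-voltage expression: the reaction energy of moving ions from the metal anode into the host, divided by the charge transferred. A sketch of the arithmetic; the energies below are invented for illustration, not DFT results from the paper:

```python
def average_voltage(e_discharged, e_charged, e_metal, n_ions, z):
    """Average intercalation voltage (V) from total energies (eV) of the
    discharged and charged host, the per-atom energy of the metal anode,
    the number of ions inserted, and the ion charge z (2 for Mg2+/Ca2+).
    With energies in eV and charge in units of e:
        V = -[E_dis - E_chg - n * E_metal] / (n * z)"""
    return -(e_discharged - e_charged - n_ions * e_metal) / (n_ions * z)

# Illustrative (made-up) energies for one formula unit of a spinel host
# inserting one divalent ion:
v = average_voltage(e_discharged=-105.0, e_charged=-97.4, e_metal=-1.6,
                    n_ions=1, z=2)
print(v)  # ~3.0 V with these numbers
```

The z in the denominator is why multivalent cathodes tend to show lower voltages than Li analogues for a similar reaction energy, consistent with the trend reported above.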
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and weighted centroid calculation are simple but have large errors, especially at low SNR; the Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrowed area, applies a number of interpolations to segment the pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid: each candidate pixel is tentatively assumed to be the centroid, the difference between the sums of the energy in symmetric directions (here the transverse and longitudinal directions) over an equal step length (chosen according to the conditions; this paper uses a step length of 9) is calculated, and the centroid position in each direction is taken where the minimum difference appears. Validation comparisons on simulated star images against several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well for centroid calculation under low-SNR conditions. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better
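As a baseline for comparison, the simple intensity-weighted centroid method that the paper identifies as fast but error-prone at low SNR can be sketched as:

```python
import numpy as np

def weighted_centroid(window):
    """Intensity-weighted centroid (row, col) of a star image window;
    the simple baseline the multi-step method is compared against."""
    w = np.asarray(window, dtype=float)
    total = w.sum()
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (ys * w).sum() / total, (xs * w).sum() / total

# A symmetric 5x5 Gaussian-like spot centred on pixel (2, 2):
spot = np.array([[0, 1, 2, 1, 0],
                 [1, 4, 8, 4, 1],
                 [2, 8, 16, 8, 2],
                 [1, 4, 8, 4, 1],
                 [0, 1, 2, 1, 0]])
print(weighted_centroid(spot))  # (2.0, 2.0)
```

With a strong sky background added to the window, this estimator is biased toward the window centre; that bias is the kind of low-SNR error the proposed symmetry-based method targets.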
Tian, Zhen; Li, Yongbao; Hassan-Rezaeian, Nima; Jiang, Steve B; Jia, Xun
2017-03-01
We have previously developed a GPU-based Monte Carlo (MC) dose engine on the OpenCL platform, named goMC, with a built-in analytical linear accelerator (linac) beam model. In this paper, we report our recent improvement on goMC to move it toward clinical use. First, we have adapted a previously developed automatic beam commissioning approach to our beam model. The commissioning was conducted through an optimization process, minimizing the discrepancies between calculated dose and measurement. We successfully commissioned six beam models built for Varian TrueBeam linac photon beams, including four beams of different energies (6 MV, 10 MV, 15 MV, and 18 MV) and two flattening-filter-free (FFF) beams of 6 MV and 10 MV. Second, to facilitate the use of goMC for treatment plan dose calculations, we have developed an efficient source particle sampling strategy. It uses the pre-generated fluence maps (FMs) to bias the sampling of the control point for source particles already sampled from our beam model. It could effectively reduce the number of source particles required to reach a statistical uncertainty level in the calculated dose, as compared to the conventional FM weighting method. For a head-and-neck patient treated with volumetric modulated arc therapy (VMAT), a reduction factor of ~2.8 was achieved, accelerating dose calculation from 150.9 s to 51.5 s. The overall accuracy of goMC was investigated on a VMAT prostate patient case treated with 10 MV FFF beam. 3D gamma index test was conducted to evaluate the discrepancy between our calculated dose and the dose calculated in Varian Eclipse treatment planning system. The passing rate was 99.82% for 2%/2 mm criterion and 95.71% for 1%/1 mm criterion. Our studies have demonstrated the effectiveness and feasibility of our auto-commissioning approach and new source sampling strategy for fast and accurate MC dose calculations for treatment plans.
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details is given in both cases. From a systematic study of nanocontacts made of representative metallic elements, we conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes for large enough cross-sections of the latter, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The quasi-one-dimensional electrodes are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
Topological phase transition of single-crystal Bi based on empirical tight-binding calculations
NASA Astrophysics Data System (ADS)
Ohtsubo, Yoshiyuki; Kimura, Shin-ichi
2016-12-01
The topological order of single-crystal Bi and its surface states on the (111) surface are studied in detail based on empirical tight-binding (TB) calculations. New TB parameters are presented that are used to calculate the surface states of semi-infinite single-crystal Bi(111), which agree with the experimental angle-resolved photoelectron spectroscopy results. The influence of the crystal lattice distortion is surveyed and it is revealed that a topological phase transition is driven by in-plane expansion with topologically non-trivial bulk bands. In contrast with the semi-infinite system, the surface-state dispersions on finite-thickness slabs are non-trivial irrespective of the bulk topological order. The role of the interaction between the top and bottom surfaces in the slab is systematically studied, and it is revealed that a very thick slab is required to properly obtain the bulk topological order of Bi from the (111) surface state: above 150 biatomic layers in this case.
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan
2014-03-24
This study characterizes the source mechanisms of tsunamigenic earthquakes in the Java region based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M0), the moment magnitude (MW), the rupture duration (To) and the focal mechanism. These determine whether an event is a tsunamigenic earthquake or a tsunami earthquake. We calculate these quantities by teleseismic signal processing of the initial P-wave phase with a bandpass filter from 0.001 Hz to 5 Hz, using 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with MW = 7.8 and the 17 July 2006 Pangandaran earthquake with MW = 7.7 meet the criteria for tsunami earthquakes, with ratios around Θ = -6.1, long rupture durations To > 100 s and high tsunamis H > 7 m. The 2 September 2009 Tasikmalaya earthquake with MW = 7.2, Θ = -5.1 and To = 27 s is characterized as a small tsunamigenic earthquake.
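The discriminant at the heart of this classification is the energy-to-moment ratio Θ = log10(E/M0), combined with rupture duration. A sketch in that spirit; the -5.5 cutoff is an assumed Newman-Okal-style threshold for illustration, not the paper's exact rule:

```python
import math

def theta_parameter(radiated_energy, seismic_moment):
    """Energy-to-moment ratio Theta = log10(E / M0), both in J (N*m)."""
    return math.log10(radiated_energy / seismic_moment)

def classify(theta, rupture_duration_s):
    """Illustrative classification: an energy-deficient Theta combined with
    a long rupture flags a slow 'tsunami earthquake'. The threshold values
    here are assumptions, not the paper's calibrated criteria."""
    if theta <= -5.5 and rupture_duration_s > 100.0:
        return "tsunami earthquake"
    return "tsunamigenic earthquake"

# Banyuwangi-like vs Tasikmalaya-like values, as quoted in the abstract:
print(classify(-6.1, 120.0))  # tsunami earthquake
print(classify(-5.1, 27.0))   # tsunamigenic earthquake
```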
Low complexity VLSI implementation of CORDIC-based exponent calculation for neural networks
NASA Astrophysics Data System (ADS)
Aggarwal, Supriya; Khare, Kavita
2012-11-01
This article presents a low-hardware-complexity CORDIC-based architecture for exponent calculation. The proposed CORDIC algorithm is designed to overcome the major drawbacks (scale-factor compensation, limited range of convergence and optimal selection of micro-rotations) of the conventional CORDIC in hyperbolic mode of operation. The micro-rotations are identified using leading-one bit detection with uni-directional rotations to eliminate redundant iterations and improve throughput. The efficiency and performance of the processor do not depend on the rotation angles being known prior to implementation. The eight-stage pipelined architecture requires an 8 × N ROM in the pre-processing unit for storing the initial coordinate values; it no longer requires a ROM for storing the elementary angles. It provides an area-time-efficient VLSI design for calculating exponents in activation functions and Gaussian Potential Functions (GPF) in neural networks. The proposed CORDIC processor requires 32.68% fewer adders and 72.23% fewer registers than the conventional design. When implemented on a Virtex 2P (2vp50ff1148-6) device, it dissipates 55.58% less power and has a 45.09% smaller total gate count and 16.91% less delay than the Xilinx CORDIC Core. The detailed algorithm design is presented along with the FPGA implementation and area and time complexities.
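For reference, the conventional hyperbolic-mode CORDIC that the paper improves on computes exp(θ) as cosh(θ) + sinh(θ), with iterations 4, 13, 40, … repeated to guarantee convergence. A floating-point sketch of that textbook baseline (not the proposed fixed-point architecture):

```python
import math

def cordic_exp(theta, n=16):
    """Approximate exp(theta) via conventional hyperbolic CORDIC in
    rotation mode. Valid for |theta| <~ 1.1; iterations i = 4, 13, ...
    are repeated, as required for convergence."""
    # build the iteration schedule with the mandatory repeats
    idx, i, repeat = [], 1, 4
    while len(idx) < n:
        idx.append(i)
        if i == repeat:
            idx.append(i)              # repeat this iteration
            repeat = 3 * repeat + 1    # next repeat index: 4, 13, 40, ...
        i += 1
    idx = idx[:n]
    # hyperbolic gain K = prod sqrt(1 - 2^-2i); start at x = 1/K
    K = 1.0
    for i in idx:
        K *= math.sqrt(1.0 - 2.0 ** (-2 * i))
    x, y, z = 1.0 / K, 0.0, theta
    for i in idx:
        d = 1.0 if z >= 0 else -1.0
        x, y, z = (x + d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * math.atanh(2.0 ** -i))
    return x + y   # cosh(theta) + sinh(theta) = exp(theta)
```

The proposed design avoids exactly these overheads: the pre-computed 1/K scaling and the fixed per-angle iteration schedule.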
Proper Orthogonal Decomposition methods for particle-based transport calculations in plasmas
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego; Spong, D.; Hirshman, S.
2009-05-01
The Proper Orthogonal Decomposition (POD) is a powerful technique to analyze large data sets by projecting the data into an optimal set of low-order modes that capture the main features of the data. POD methods have been widely used in image and signal processing and also in the study of coherent structures in neutral fluids. However, the use of these techniques in plasma physics is a relatively new area of research. Here we discuss recent novel applications of POD methods to particle-based transport calculations in plasmas. We show that POD techniques provide an efficient method to filter noise in the reconstruction of the particle distribution function. As a specific application we consider Monte Carlo simulations of plasma collisional relaxation and guiding-center transport in magnetically confined plasma in toroidal geometry [1]. We also discuss recent results on the application of POD methods to PIC-codes in the context of the Vlasov-Poisson system, and the use of POD methods in projective integration. In particular, we show how POD modes can be used as effective macroscopic variables to accelerate Monte-Carlo calculations. [1] D. del-Castillo-Negrete, et al. Phys. of Plasmas 15, 092308 (2008).
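The noise-filtering step described above amounts to truncating an SVD-based modal expansion of the snapshot data; a minimal sketch (generic, not the authors' implementation):

```python
import numpy as np

def pod_filter(snapshots, r):
    """Project a snapshot matrix (rows: spatial points, columns: time
    snapshots) onto its r leading POD modes, obtained from the SVD.
    Truncating the expansion discards the low-energy content, which
    for particle-based data is dominated by sampling noise."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

Applied to a noisy low-rank field, the truncated reconstruction lies closer to the underlying signal than the raw data.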
GMC: a GPU implementation of a Monte Carlo dose calculation based on Geant4.
Jahnke, Lennart; Fleckenstein, Jens; Wenz, Frederik; Hesser, Jürgen
2012-03-07
We present a GPU implementation called GMC (GPU Monte Carlo) of the low energy (<100 GeV) electromagnetic part of the Geant4 Monte Carlo code using the NVIDIA® CUDA programming interface. The classes for electron and photon interactions as well as a new parallel particle transport engine were implemented. The way a particle is processed is not in a history by history manner but rather by an interaction by interaction method. Every history is divided into steps that are then calculated in parallel by different kernels. The geometry package is currently limited to voxelized geometries. A modified parallel Mersenne twister was used to generate random numbers and a random number repetition method on the GPU was introduced. All phantom results showed a very good agreement between GPU and CPU simulation with gamma indices of >97.5% for a 2%/2 mm gamma criteria. The mean acceleration on one GTX 580 for all cases compared to Geant4 on one CPU core was 4860. The mean number of histories per millisecond on the GPU for all cases was 658 leading to a total simulation time for one intensity-modulated radiation therapy dose distribution of 349 s. In conclusion, Geant4-based Monte Carlo dose calculations were significantly accelerated on the GPU.
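The gamma index used to compare the GPU and CPU dose distributions combines a dose-difference and a distance-to-agreement criterion; a simplified 1-D global version (illustrative only; clinical implementations interpolate the evaluated distribution):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta=2.0, dd=0.02):
    """Simplified 1-D global gamma index: for each reference point,
    the minimum over all evaluated points of the combined metric
    sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2). dta is in mm; dd is a
    fraction of the maximum reference dose (2%/2 mm here)."""
    dmax = d_ref.max()
    gamma = np.empty_like(d_ref, dtype=float)
    for i, (x, d) in enumerate(zip(x_ref, d_ref)):
        term = ((x_eval - x) / dta) ** 2 + ((d_eval - d) / (dd * dmax)) ** 2
        gamma[i] = np.sqrt(term.min())
    return gamma
```

The passing rate quoted in such comparisons is the fraction of points with gamma ≤ 1, e.g. `np.mean(gamma_1d(...) <= 1.0)`.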
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
NASA Astrophysics Data System (ADS)
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View, a fast and user-friendly viewer for bias potential/free energy surfaces calculated by metadynamics with the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional software. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
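The core calculation such a viewer performs is summing the deposited Gaussian hills to rebuild the bias potential; a minimal 1-D sketch (generic, not Metadyn View's JavaScript code):

```python
import numpy as np

def bias_potential(grid, centers, sigmas, heights):
    """Sum the Gaussian hills deposited along a 1-D collective variable
    (center, width and height per hill, as listed in a Plumed HILLS
    file) to rebuild the bias potential; the free energy estimate is
    F(s) = -V(s) up to an additive constant."""
    V = np.zeros_like(grid, dtype=float)
    for c, s, h in zip(centers, sigmas, heights):
        V += h * np.exp(-((grid - c) ** 2) / (2.0 * s ** 2))
    return V
```

A free energy difference between two basins is then just the difference of −V evaluated at the two minima, as in the measurement tools mentioned above.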
Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.
2014-01-01
Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
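As a minimal illustration of the BMD concept (not the multi-model batch procedure used in the paper), for a one-hit dose-response model the dose at a prespecified extra risk has a closed form:

```python
import math

def bmd_one_hit(b, bmr=0.10):
    """BMD for the one-hit model p(d) = 1 - exp(-b*d) with zero
    background: extra risk ER(d) = 1 - exp(-b*d), so the benchmark
    dose at benchmark response BMR solves ER(BMD) = BMR, giving
    BMD = -ln(1 - BMR)/b. (Illustrative model only; BMD software
    fits several models and reports the lower confidence limit BMDL.)"""
    return -math.log(1.0 - bmr) / b
```

With b = 0.01 per unit dose, the BMD at 10% extra risk is about 10.5 dose units.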
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
NASA Astrophysics Data System (ADS)
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a
NASA Astrophysics Data System (ADS)
Borkar, Aditi N.; De Simone, Alfonso; Montalvao, Rinaldo W.; Vendruscolo, Michele
2013-06-01
We describe a method of determining the conformational fluctuations of RNA based on the incorporation of nuclear magnetic resonance (NMR) residual dipolar couplings (RDCs) as replica-averaged structural restraints in molecular dynamics simulations. In this approach, the alignment tensor required to calculate the RDCs corresponding to a given conformation is estimated from its shape, and multiple replicas of the RNA molecule are simulated simultaneously to reproduce in silico the ensemble-averaging procedure performed in the NMR measurements. We provide initial evidence that with this approach it is possible to determine accurately structural ensembles representing the conformational fluctuations of RNA by applying the reference ensemble test to the trans-activation response element of the human immunodeficiency virus type 1.
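The replica-averaging idea can be reduced to a simple penalty: the back-calculated RDCs are averaged over the replicas before comparison with experiment, so the restraint acts on the ensemble rather than on any single conformation. A schematic sketch (the actual implementation computes RDCs from the alignment tensor estimated from molecular shape):

```python
import numpy as np

def rdc_restraint_penalty(calc_rdcs, exp_rdcs, k=1.0):
    """Replica-averaged RDC restraint energy: calc_rdcs has shape
    (n_replicas, n_rdcs); the mean over replicas is compared with the
    experimental values, mirroring the ensemble averaging inherent in
    the NMR measurement."""
    ensemble_avg = np.mean(calc_rdcs, axis=0)
    return k * np.sum((ensemble_avg - np.asarray(exp_rdcs)) ** 2)
```

Note that the penalty can vanish even when no single replica matches experiment, which is precisely what lets the ensemble represent conformational fluctuations.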
Angenendt, Knut; Johansson, Patrik
2011-06-23
The solvation of lithium salts in ionic liquids (ILs) leads to the creation of lithium-ion-carrying species quite different from those found in traditional nonaqueous lithium battery electrolytes. The most striking differences are that these species are composed only of ions and are, in general, negatively charged. In many IL-based electrolytes the dominant species are triplets, and the charge, stability and size of the triplets have a large impact on the total ionic conductivity, the lithium ion mobility and the lithium ion delivery at the electrode. As an inherent advantage, the triplets can be altered by selecting lithium salts and ionic liquids with different anions; thus, within certain limits, the lithium-ion-carrying species can even be tailored toward properties important for battery applications. Here we show by DFT calculations that the charge-carrying species resulting from combinations of ionic liquids and lithium salts, and also some resulting electrolyte properties, can be predicted.
First-principles based calculation of phonon spectra in substitutionally disordered alloys
NASA Astrophysics Data System (ADS)
Ghosh, Subhradip
2013-02-01
A first-principles based solution to the longstanding problem of calculating the phonon spectra in substitutionally disordered alloys, where strong force-constant disorder plays a significant role, is provided by a combination of first-principles electronic-structure tools, physically reasonable models of force constants in alloy environments, and the Itinerant Coherent-Potential Approximation (ICPA) of Ghosh and co-workers (S. Ghosh et al., Physical Review B 66, 214206 (2002)). We here present the salient features of this hybrid formalism and illustrate its capability by computing phonon spectra for disordered alloys with a large size mismatch between the end-point components. We demonstrate that the consideration of local environments in size-mismatched alloys is crucial for understanding the microscopic interplay of forces between various pairs of chemical species, and that a correct depiction of these is important for the computation of accurate phonon dispersions in these systems.
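For orientation, the ordered-lattice limit that disorder treatments like the ICPA generalize is the textbook chain dispersion (illustrative background, not the ICPA itself):

```python
import numpy as np

def chain_dispersion(k, C, M, a=1.0):
    """Phonon dispersion of an ordered 1-D monatomic chain with
    nearest-neighbour force constant C, atomic mass M and lattice
    constant a: omega(k) = sqrt(4C/M) |sin(ka/2)|. In a disordered
    alloy both M and C become environment-dependent, which is what
    the ICPA treatment handles self-consistently."""
    return np.sqrt(4.0 * C / M) * np.abs(np.sin(k * a / 2.0))
```

The zone-boundary frequency sqrt(4C/M) makes explicit why both mass disorder and force-constant disorder shift and broaden the spectrum.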
NASA Astrophysics Data System (ADS)
Park, Hee Su; Sharma, Aditya
2016-12-01
We calculate the operating wavelength range of polarization controllers based on rotating wave plates, such as paddle-type optical fiber devices. The coverage of arbitrary polarization conversion and arbitrary birefringence compensation is estimated numerically. The results give the acceptable phase retardation range of polarization controllers composed of two quarter-wave plates or a quarter-half-quarter-wave-plate combination, and thereby determine the operating wavelength range of a given design. We further prove that a quarter-quarter-half-wave-plate combination is an arbitrary birefringence compensator, like the conventional quarter-half-quarter-wave-plate combination, and show that the two configurations have identical ranges of acceptable phase retardance within the uncertainty of our numerical method.
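Such coverage calculations compose Jones matrices of rotated retarders; a minimal sketch of the building block (generic formalism, not the authors' code):

```python
import numpy as np

def waveplate(theta, delta):
    """Jones matrix of a wave plate with retardance delta (radians)
    whose fast axis makes angle theta with the horizontal:
    J = R(theta) @ diag(1, exp(i*delta)) @ R(-theta)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1.0, np.exp(1j * delta)]) @ R.T
```

A controller such as the quarter-half-quarter combination is then the product `waveplate(t3, pi/2) @ waveplate(t2, pi) @ waveplate(t1, pi/2)`, and the coverage study amounts to scanning the angles (and detuned retardances) of such products.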
NASA Astrophysics Data System (ADS)
Homma, H.; Murayama, T.
We investigate a chemical evolution model that simultaneously explains the chemical compositions and the star formation histories (SFHs) of dwarf spheroidal galaxies (dSphs). Recently, wide-field imaging photometry and multi-object spectroscopy have provided a large amount of data. We therefore develop a chemical evolution model based on an SFH given by photometric observations and estimate a metallicity distribution function (MDF) for comparison with spectroscopic observations. With this new model we calculate the chemical evolution of 4 dSphs (Fornax, Sculptor, Leo II, Sextans) and find that a delay time of 0.1 Gyr for type Ia SNe is too short to explain the observed [alpha/Fe] vs. [Fe/H] diagrams.
The solar silicon abundance based on 3D non-LTE calculations
NASA Astrophysics Data System (ADS)
Amarsi, A. M.; Asplund, M.
2017-01-01
We present 3D non-local thermodynamic equilibrium (non-LTE) radiative transfer calculations for silicon in the solar photosphere, using an extensive model atom that includes recent, realistic neutral hydrogen collisional cross-sections. We find that photon losses in the Si I lines give rise to slightly negative non-LTE abundance corrections of the order of -0.01 dex. We infer a 3D non-LTE-based solar silicon abundance of log ε(Si) = 7.51 dex. With silicon commonly chosen as the anchor between the photospheric and meteoritic abundances, we find that the meteoritic abundance scale remains unchanged compared with the Asplund et al. and Lodders et al. results.
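The logarithmic abundance scale used here converts to a number-density ratio as follows (standard astronomical convention, shown for clarity):

```python
def number_ratio(log_eps):
    """Convert a logarithmic abundance on the customary astronomical
    scale, log eps(X) = log10(N_X / N_H) + 12 (hydrogen fixed at 12),
    into the number-density ratio N_X / N_H."""
    return 10.0 ** (log_eps - 12.0)
```

For log ε(Si) = 7.51, this gives roughly 3.2 silicon atoms per 100 000 hydrogen atoms, so a −0.01 dex correction changes the ratio by about 2%.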
A novel "Integrated Biomarker Response" calculation based on reference deviation concept.
Sanchez, Wilfried; Burgeot, Thierry; Porcher, Jean-Marc
2013-05-01
Multi-biomarker approaches are used to assess ecosystem health and to identify the impacts of environmental stress on organisms. However, the exploration of large datasets by environmental managers is a major challenge for regulatory application of this tool, and several integrative indices have been developed to summarize biomarker responses. The aim of the present paper is to update the calculation of the "Integrated Biological Response" (IBR) described by Beliaeff and Burgeot (Environ Toxicol Chem 21:1316-1322, 2002) and avoid the weaknesses of that integrative tool. A novel index named "Integrated Biological Response version 2" (IBRv2), based on the reference deviation concept, is presented. It allows a clear discrimination of sampling sites, as the IBR does, but several differences are observed for contaminated sites depending on the up- and downregulation of biomarker responses. This novel tool could be used to integrate multi-biomarker responses not only in large-scale monitoring but also in upstream/downstream investigations.
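A schematic reading of a reference-deviation index (a simplified sketch; the published IBRv2 additionally standardizes each biomarker's distribution before summing):

```python
import numpy as np

def reference_deviation_index(site_values, reference_values):
    """Schematic reference-deviation index: each biomarker at the test
    site is expressed as a log-ratio to the reference site, and the
    absolute deviations are summed. Up- and downregulation of equal
    magnitude contribute equally, unlike in the original star-plot IBR."""
    A = np.log(np.asarray(site_values, dtype=float) /
               np.asarray(reference_values, dtype=float))
    return float(np.sum(np.abs(A)))
```

A site identical to the reference scores zero, and the index grows with deviation in either direction.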
A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation
NASA Astrophysics Data System (ADS)
Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe
2015-07-01
The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
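The energy-accumulation pitfall identified above can be reproduced on any hardware: repeatedly adding small single-precision deposits to a growing total loses low-order bits, and a compensated (Kahan) sum recovers them. A generic CPU illustration (not bGPUMCD code):

```python
import numpy as np

def naive_sum32(deposits):
    """Accumulate energy deposits in single precision, as a naive
    scoring kernel would: each addition rounds to float32."""
    total = np.float32(0.0)
    for d in deposits:
        total = np.float32(total + np.float32(d))
    return float(total)

def kahan_sum32(deposits):
    """Kahan compensated summation in single precision: a running
    correction term captures the bits lost in each addition."""
    total = np.float32(0.0)
    comp = np.float32(0.0)
    for d in deposits:
        y = np.float32(np.float32(d) - comp)
        t = np.float32(total + y)
        comp = np.float32(np.float32(t - total) - y)
        total = t
    return float(total)
```

Summing 100 000 deposits of 1e-4 (exact total 10.0) shows the naive float32 accumulator drifting measurably while the compensated sum stays near the true value, mirroring the under-dosed voxels near the source described above.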
Vidal, David; Thormann, Michael; Pons, Miquel
2005-01-01
SMILES strings are the most compact text-based molecular representations. Implicitly they contain the information needed to compute all kinds of molecular structures and, thus, molecular properties derived from these structures. We show that this implicit information can be accessed directly at the SMILES string level, without the need for explicit, time-consuming conversion of the SMILES strings into molecular graphs or 3D structures with subsequent 2D or 3D QSPR calculations. Our method is based on the fragmentation of SMILES strings into overlapping substrings of a defined size that we call LINGOs. The integral set of LINGOs derived from a given SMILES string, the LINGO profile, is a hologram of the SMILES representation of the molecule described. LINGO profiles provide input for QSPR models and for the calculation of intermolecular similarities at very low computational cost. The octanol/water partition coefficient (LlogP) QSPR model achieved a correlation coefficient R2 = 0.93, a root-mean-square fitting error of 0.49 log units, a goodness-of-prediction correlation coefficient Q2 = 0.89 and an RMS prediction error of 0.61 log units. The intrinsic aqueous solubility (LlogS) QSPR model achieved R2 = 0.91 and Q2 = 0.82, with RMS errors of 0.60 and 0.89 log units, respectively. Integral Tanimoto coefficients computed from LINGO profiles provided sharp discrimination between random and bioisosteric pairs extracted from the Accelrys Bioster Database. Average similarities (LINGOsim) were 0.07 for the random pairs and 0.36 for the bioisosteric pairs.
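The fragmentation and similarity steps can be sketched in a few lines (a simplified reading; the published method also canonicalizes ring-closure digits before fragmentation, and its LINGOsim formula differs in detail):

```python
from collections import Counter

def lingo_profile(smiles, q=4):
    """Multiset of the overlapping q-character substrings (LINGOs)
    of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def profile_tanimoto(p, q):
    """Integral Tanimoto coefficient between two LINGO profiles,
    taken here as multiset intersection over multiset union."""
    inter = sum((p & q).values())
    union = sum((p | q).values())
    return inter / union if union else 0.0
```

Because the profiles are plain string multisets, similarity screening runs at a tiny fraction of the cost of graph- or 3D-based comparisons, which is the point of the approach.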
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison is obtained between the response and the measured response PSD at a specific time point. The final calculated loading can be used to compare different nozzle profiles during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on the model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
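The inverse step rests on the linear-system identity S_out(f) = |H(f)|² S_in(f); a toy version of the amplitude iteration described above (illustrative only, with a flat input PSD and a peak-matching criterion):

```python
import numpy as np

def response_psd(frf, input_psd):
    """Response PSD of a linear system with frequency response
    function H: S_out(f) = |H(f)|^2 * S_in(f)."""
    return np.abs(frf) ** 2 * input_psd

def match_input_level(frf, measured_psd, guess=1.0, n_iter=20):
    """Scale a flat input PSD until the peak of the predicted response
    matches the peak of the measured response PSD (a toy stand-in for
    the amplitude/frequency iteration in the pseudo-model procedure)."""
    level = guess
    for _ in range(n_iter):
        level *= measured_psd.max() / response_psd(frf, level).max()
    return level
```

In the actual procedure the input is shaped in both amplitude and frequency, but the update logic is the same: compare the predicted response PSD with the measured one and rescale the load.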
Giuseppe Palmiotti
2015-05-01
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
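The quantity all of these methods estimate is the relative sensitivity coefficient; its brute-force (direct perturbation) definition is simple, which is what the collision-history estimators reproduce without rerunning the simulation:

```python
def sensitivity_coefficient(r_nominal, r_perturbed, rel_perturbation):
    """First-order relative sensitivity of a response R to a nuclear
    data parameter sigma: S = (dR/R) / (dsigma/sigma), estimated here
    by direct perturbation. The history-based estimators obtain the
    same quantity from a single unperturbed run."""
    return ((r_perturbed - r_nominal) / r_nominal) / rel_perturbation
```

For example, if a 2% cross-section increase raises k-effective by 1%, the sensitivity is 0.5.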
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried the weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. For the open fields tested, the 3D γ-index test passing rate within the regions with dose above 10% of the maximum dose improved, on average, from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
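Because the beam dose is a weighted sum of pre-computed PSL doses, the commissioning step is a regularized linear least-squares fit; a minimal sketch (a plain Tikhonov term stands in for the paper's symmetry and smoothness regularization, which is solved with an augmented Lagrangian method):

```python
import numpy as np

def commission_weights(psl_doses, measured_dose, reg=1e-6):
    """Least-squares estimate of PSL weights. Columns of psl_doses hold
    the pre-computed dose distribution of each phase-space-let; the
    whole-beam dose is their weighted sum D @ w. Solves
    min_w ||D w - m||^2 + reg ||w||^2 via the normal equations."""
    D = np.asarray(psl_doses, dtype=float)
    A = D.T @ D + reg * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ np.asarray(measured_dose, dtype=float))
```

Given measurements actually produced by some weighting of the PSL doses, the fit recovers those weights, which is the idealized version of matching the calculated dose to the measured one.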
Lauritzen, Bent; Hedemann-Jensen, Per
2005-12-01
In the event of a nuclear or radiological emergency resulting in an atmospheric release of radioactive materials, stationary gamma-measurements, for example obtained from distributed, automatic monitoring stations, may provide a first assessment of exposures resulting from airborne and deposited activity. Decisions on the introduction of countermeasures for the protection of the public can be based on such off-site gamma measurements. A methodology is presented for calculation of gamma-radiation action levels for the introduction of specific countermeasures, based on probabilistic modelling of the dispersion of radionuclides and the radiation exposure. The methodology is applied to a nuclear accident situation with long-range atmospheric dispersion of radionuclides, and action levels of dose rate measured by a network of monitoring stations are estimated for sheltering and foodstuff restrictions. It is concluded that the methodology is applicable to all emergency countermeasures following a nuclear accident but measurable quantities other than ambient dose equivalent rate are needed for decisions on the introduction of foodstuff countermeasures.
Chen, David; Shah, Anup; Nguyen, Hien; Loo, Dorothy; Inder, Kerry L; Hill, Michelle M
2014-09-05
The utility of high-throughput quantitative proteomics to identify differentially abundant proteins en masse relies on suitable and accessible statistical methodology, which remains mostly an unmet need. We present a free web-based tool, called Quantitative Proteomics p-value Calculator (QPPC), designed for accessibility and usability by proteomics scientists and biologists. Being an online tool, it requires no software installation. Furthermore, QPPC accepts generic peptide ratio data generated by any mass spectrometer and database search engine. Importantly, QPPC utilizes the permutation test, which we recently found to be superior to other methods for the analysis of peptide ratios because it does not assume normal distributions. QPPC assists the user in selecting significantly altered proteins based on numerical fold change, or standard deviation from the mean or median, together with the permutation p-value. Output is in the form of comma-separated-values files, along with graphical visualization using volcano plots and histograms. We evaluate the optimal parameters for the use of QPPC, including the permutation level and the effect of outlier and contaminant peptides on p-value variability. The optimal parameters defined are deployed as defaults for the web tool at http://qppc.di.uq.edu.au/ .
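The permutation test on peptide ratios can be sketched as a sign-flip test on the log-ratios (a generic illustration of the idea; QPPC's exact procedure may differ in detail):

```python
import numpy as np

def permutation_pvalue(peptide_ratios, n_perm=10000, seed=0):
    """Sign-flip permutation test on log-transformed peptide ratios:
    under the null hypothesis the log-ratios are symmetric about zero,
    so their signs are exchangeable. No normality is assumed."""
    rng = np.random.default_rng(seed)
    x = np.log2(np.asarray(peptide_ratios, dtype=float))
    observed = abs(x.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, x.size))
    null = np.abs((signs * x).mean(axis=1))
    return (1 + np.count_nonzero(null >= observed)) / (n_perm + 1)
```

A protein whose peptides all show a consistent fold change yields a small p-value, while ratios scattered around 1 do not, regardless of their distributional shape.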
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-09-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao
2016-01-01
The rapid development of the coastal economy in Hebei Province has caused a rapid transition in coastal land use structure, which has threatened land ecological security. Calculating the ecosystem service value of land use and exploring an ecological security baseline can therefore provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on ecosystem service value and the food safety standard. The results showed that the ecosystem service value per unit area, from maximum to minimum, followed the order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, construction land. The contribution rates of the individual ecological function values, from high to low, followed the order: nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line, and the ecological system, in which humans are the subject, will be on the verge of collapse. According to its ecological security status, Huanghua can be divided into 4 zones: an ecological core protection zone, an ecological buffer zone, an ecological restoration zone and a human activity core zone.
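The reported baselines can be checked for internal consistency with simple arithmetic. Two simplifying assumptions, not stated in the abstract, are made here: that the output-value baseline equals the production baseline times a uniform grain price, and that the total ecosystem service value equals the baseline value times an effective area:

```python
# Rough consistency checks on the reported baselines (assumptions above).
grain_baseline = 0.21        # kg per m^2
value_baseline = 0.41        # yuan per m^2
esv_baseline = 21.58         # yuan per m^2
total_esv = 4.244e9          # yuan

implied_price = value_baseline / grain_baseline      # yuan per kg
implied_area_km2 = total_esv / esv_baseline / 1e6    # m^2 -> km^2

print(f"implied grain price: {implied_price:.2f} yuan/kg")
print(f"implied effective area at baseline ESV: {implied_area_km2:.0f} km2")
```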
Code of Federal Regulations, 2014 CFR
2014-07-01
...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...
Code of Federal Regulations, 2013 CFR
2013-07-01
...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...
Code of Federal Regulations, 2012 CFR
2012-07-01
...-based fuel economy, CO2 emissions, and carbon-related exhaust emissions for a model type. 600.208-12... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.208-12 Calculation of FTP-based and...
Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M.N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas
2012-07-01
Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received 192Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without solid model applicator, with and without overriding the patient contour to 1 g/cm3 muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm3 bone. Impact of source and boundary modeling, applicator, tissue heterogeneities, and sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm3 to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm3 bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary and applicator account for most of the differences between the GBBS and TG-43 guidelines. The D2cm3 rectum (n = 3), D2cm3 sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. Conclusions
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-02-15
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition algorithm, which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the optical symbol recognition (OSR) algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
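The two-sensor odometry idea can be sketched as follows. The mounting geometry (sensors a known baseline apart along the robot's x-axis), the units, and the small-motion approximation are assumptions for illustration, not ArmAssist's actual algorithm:

```python
def odometry_step(d1, d2, baseline):
    """Estimate incremental robot motion from two optical mouse
    sensors mounted `baseline` apart along the robot's x-axis.
    d1, d2: (dx, dy) displacements reported by each sensor in the
    robot frame. Small-motion approximation: rotation comes from the
    differential y-motion, translation from the mean sensor motion."""
    dtheta = (d2[1] - d1[1]) / baseline
    dx = (d1[0] + d2[0]) / 2.0
    dy = (d1[1] + d2[1]) / 2.0
    return dx, dy, dtheta

# Pure translation: both sensors report the same displacement.
print(odometry_step((3.0, 1.0), (3.0, 1.0), baseline=100.0))
```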
Stationarity Modeling and Informatics-Based Diagnostics in Monte Carlo Criticality Calculations
Ueki, Taro; Brown, Forrest B.
2005-01-15
In Monte Carlo criticality calculations, source error propagation through the stationary (active) cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of fission kernels. For symmetric two-fissile-component systems with the DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is more than 1000. When such a system is made slightly asymmetric, the neutron effective multiplication factor at the inactive cycles does not reflect the convergence to stationary source distribution. To overcome this problem, relative entropy has been applied to a slightly asymmetric two-fissile-component problem with a DR of 0.993. The numerical results are mostly satisfactory but also show the possibility of the occasional occurrence of unnecessarily strict stationarity diagnostics. Therefore, a criterion is defined based on the concept of data compression limit in information theory. Numerical results for a pressurized water reactor fuel storage facility with a DR of 0.994 strongly support the efficacy of relative entropy in both the posterior and progressive stationarity diagnostics.
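A minimal sketch of the relative-entropy (Kullback-Leibler) diagnostic applied to binned fission source distributions; the bin values are hypothetical and this is the diagnostic idea only, not the authors' implementation:

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p||q) between two binned source
    distributions. Bins where p is zero contribute nothing; q must be
    positive wherever p is. D >= 0, with equality iff p == q, so a
    value near zero signals stationarity of the source."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

cycle_source = [0.50, 0.30, 0.20]   # hypothetical binned source, cycle k
reference    = [0.45, 0.35, 0.20]   # hypothetical stationary reference
print(relative_entropy(cycle_source, reference))
```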
Codina, Antonio; Fernández, Eduardo J; Jones, Peter G; Laguna, Antonio; López-De-Luzuriaga, José M; Monge, Miguel; Olmos, M Elena; Pérez, Javier; Rodríguez, Miguel A
2002-06-12
[M(C6F5)(N(H)=CPh2)] (M = Ag (1) and Au (2)) complexes have been synthesized and characterized by X-ray diffraction analysis. Complex 1 shows a ladder-type structure in which two [Ag(C6F5)(N(H)=CPh2)] units are linked by a Ag(I)-Ag(I) interaction in an antiparallel disposition. The dimeric units are associated through hydrogen bonds of the type N-H...F(ortho). On the other hand, gold(I) complex 2 displays discrete dimers, also in an antiparallel conformation, in which both Au(I)-Au(I) interactions and N-H...F(ortho) hydrogen bonds appear within the dimeric units. The features of these coexisting interactions have been studied theoretically by ab initio calculations based on four different model systems in order to analyze them separately. The interactions have been analyzed at the HF and MP2 levels of theory, showing that, in this case, the Au(I)-Au(I) interaction is stronger than the Ag(I)-Ag(I) interaction even at larger distances, and that N-H...F hydrogen bonding and Au(I)-Au(I) contacts have a similar strength in the same molecule, which permits a competition between these two structural motifs, giving rise to different structural arrangements.
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Chang, Chia-Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-01
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
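The decoupling step can be written schematically with the standard Hubbard-Stratonovich identity; this is the textbook form, and the paper's exact conventions and discretization may differ:

```latex
% A two-body (squared one-body) operator is traded for one-body
% operators coupled to an auxiliary field \phi (schematic form):
e^{\tfrac{1}{2}\lambda \hat{v}^{2}}
  = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} d\phi\,
    e^{-\tfrac{1}{2}\phi^{2}}\, e^{\sqrt{\lambda}\,\phi\,\hat{v}} .
```

Applied to each mode of a two-body Jastrow factor, the one-body exponentials acting on a single determinant yield the multideterminant expansion described in the abstract.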
Accelerated materials design of fast oxygen ionic conductors based on first principles calculations
NASA Astrophysics Data System (ADS)
He, Xingfeng; Mo, Yifei
Over the past decades, significant research efforts have been dedicated to seeking fast oxygen ion conductor materials, which have important technological applications in electrochemical devices such as solid oxide fuel cells, oxygen separation membranes, and sensors. Recently, Na0.5Bi0.5TiO3 (NBT) was reported as a new family of fast oxygen ionic conductors. We present a first-principles computational study that aims to understand the O diffusion mechanisms in the NBT material and to design this material for enhanced oxygen ionic conductivity. Using the NBT materials as an example, we demonstrate the computational capability to evaluate the phase stability, chemical stability, and ionic diffusion of ionic conductor materials. We reveal the effects of local atomistic configurations and dopants on oxygen diffusion and identify the intrinsic factors limiting the ionic conductivity of the NBT materials. Novel doping strategies were predicted and demonstrated by the first principles calculations. In particular, the K-doped NBT compound achieved good phase stability and an order-of-magnitude increase in oxygen ionic conductivity, up to 0.1 S cm-1 at 900 K, compared to the experimental Mg-doped compositions. Our results provide new avenues for the future design of NBT materials and demonstrate the accelerated design of new ionic conductor materials based on first principles techniques. This computational methodology and workflow can be applied to the materials design of any fast ion-conducting materials (e.g., Li+, Na+).
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for predicting oxygen consumption, along with all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors of exhaustive manual calculations. PMID:19641642
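The idea of replacing a single predicted oxygen consumption with a range can be sketched via the Fick principle, Q = VO2 / (CaO2 - CvO2). The predicted VO2 values and oxygen contents below are hypothetical; this illustrates the approach, not the paper's spreadsheet formulas:

```python
def fick_flow_range(vo2_predictions, ca_o2, cv_o2):
    """Systemic blood flow (L/min) by the indirect Fick principle,
    evaluated for several predicted VO2 values (mL/min) to give a
    likely range instead of a single estimate. Oxygen contents ca_o2
    and cv_o2 are in mL O2 per dL of blood."""
    avdo2_per_litre = (ca_o2 - cv_o2) * 10.0   # mL O2 per L of blood
    flows = [vo2 / avdo2_per_litre for vo2 in vo2_predictions]
    return min(flows), max(flows)

# Hypothetical inputs: five model-predicted VO2 values for one patient.
low, high = fick_flow_range([110.0, 120.0, 125.0, 130.0, 140.0], 18.0, 13.0)
print(f"likely systemic flow range: {low:.2f}-{high:.2f} L/min")
```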
GPAW - massively parallel electronic structure calculations with Python-based software.
Enkovaara, J.; Romero, N.; Shende, S.; Mortensen, J.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages such as Python can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using a combination of the Python and C programming languages. While the chosen approach works well on standard workstations and in Unix environments, massively parallel supercomputing systems can present challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of the real nucleus. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (Rc), the means of the left and right curves (μL and μR), and the deviations of the left and right curves (σL and σR). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that varying σ or μ can significantly shift the average frequency of asymmetric fission yields and also changes the range of the fission yield probability distribution. In addition, varying the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation of fission yields using the toy model successfully reproduces the same tendency as experimental results, where the average light fission yield is in the range of 90
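Under the two-Gaussian picture described above, the sampling step might look like this. It is a simplified sketch, not the authors' code: the scission-point parameter Rc and the iteration coefficient are omitted, and the parameter values are illustrative:

```python
import random

def sample_yields(n, mu_l, sigma_l, mu_r, sigma_r, seed=1):
    """Draw fragment mass numbers from the two-Gaussian picture:
    each fission event populates the light or heavy peak with equal
    probability."""
    rng = random.Random(seed)
    return [rng.gauss(mu_l, sigma_l) if rng.random() < 0.5
            else rng.gauss(mu_r, sigma_r) for _ in range(n)]

masses = sample_yields(10000, mu_l=95.0, sigma_l=6.0, mu_r=140.0, sigma_r=6.0)
light = [m for m in masses if m < 117.5]   # below the valley between peaks
print(sum(light) / len(light))             # sample mean of the light peak
```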
Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.
Bandura, A V; Evarestov, R A; Lukyanov, S I
2014-07-28
A new method for the theoretical modelling of polyhedral single-walled nanotubes, based on the consolidation of walls in rolled-up multi-walled nanotubes, is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of the two methods allows us to simulate structures which are difficult to find by ab initio calculations alone. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than that of the initial coaxial cylindrical double- or triple-walled nanotubes. The merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that merged nanotubes can integrate two different crystalline phases in one and the same wall structure.
NASA Astrophysics Data System (ADS)
Lauridsen, Bente; Hedemann Jensen, Per
1987-03-01
The basic dosimetric quantity in ICRP publication no. 30 is the absorbed fraction AF(T←S). This parameter is the fraction of energy absorbed in a target organ T per emission of radiation from activity deposited in the source organ S. Based upon this fraction it is possible to calculate the Specific Effective Energy SEE(T←S). From this, the committed effective dose equivalent from an intake of radioactive material can be found, and thus the annual limit of intake for given radionuclides can be determined. A male phantom has been constructed with the aim of measuring the Specific Effective Energy SEE(T←S) in various target organs. Impressions of real human organs have been used to produce vacuum forms. Tissue-equivalent plastic sheets were sucked into the vacuum forms, producing a shell with a shape identical to the original organ. Each organ has been made of two shells, and the same procedure has been used for the body. Thin tubes through the organs make it possible to place TL dosimeters in a matrix so that the dose distribution can be measured. The phantom has been supplied with lungs, liver, kidneys, spleen, stomach, bladder, pancreas, and thyroid gland. To select a suitable body liquid for the phantom, laboratory experiments have been made with different liquids and different radionuclides. In these experiments the change in dose rate due to changes in density and composition of the liquid was determined. Preliminary results of the experiments are presented.
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives for freeing the main processor from work and improving overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more repetitive and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, an FPGA-based system designed by NASA. The opportunity to use reconfigurable FPGA architectures in space allows not only the optimization of mission operations with hardware-level solutions, but also the creation of new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For this purpose, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGA used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that negatively affects overall performance. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and data burst transfers have been used.
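For reference, the operation being accelerated is simple in software form. The squared-distance variant shown here (no square root) is a common hardware-friendly choice for distance comparisons, not necessarily the exact circuit implemented:

```python
def msed(pixel, reference):
    """Squared multi-spectral Euclidean distance between a pixel
    spectrum and a reference spectrum (one value per band). Dropping
    the square root preserves distance ordering while avoiding an
    expensive operation; this is an illustrative software model."""
    return sum((p - r) ** 2 for p, r in zip(pixel, reference))

print(msed([10, 20, 30, 40], [13, 24, 30, 40]))  # 3**2 + 4**2 = 25
```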
NASA Astrophysics Data System (ADS)
Ma, Z.; Hou, Z.; Zang, X.
2015-09-01
Because the stratospheric airship is a large-scale flexible inflatable structure with a huge inner lifting-gas volume of several hundred thousand cubic meters, the thermal characteristics of the inner gas play an important role in its structural performance. During floating flight, the day-night variation of the combined thermal condition leads to fluctuations of the flow field inside the airship, which remarkably affect the pressure acting on the skin and the structural safety of the stratospheric airship. Based on the multi-physics coupling mechanism mentioned above, a numerical procedure for the structural safety analysis of stratospheric airships is developed, integrating a thermal model, a CFD model, a finite element code and a criterion of structural strength. Based on these computational models, the distributions of the deformations and stresses of the skin are calculated as they vary over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)
NASA Astrophysics Data System (ADS)
Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.
2007-02-01
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
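The MIRD-style relation behind the tabulated S-factors, S = Σᵢ yᵢ Eᵢ · AF / m, can be sketched as follows. The emission data, absorbed fraction and organ mass below are hypothetical, and a single energy-independent absorbed fraction is used for illustration, whereas the study tabulated AF per energy and per organ pair:

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_factor(emissions, absorbed_fraction, target_mass_kg):
    """S-factor (Gy Bq^-1 s^-1) for one source-target pair:
    S = sum_i y_i * E_i * AF / m, with yields y_i per decay and
    energies E_i in MeV."""
    energy_j_per_decay = sum(y * e for y, e in emissions) * MEV_TO_J
    return energy_j_per_decay * absorbed_fraction / target_mass_kg

# Hypothetical emitter: one 0.5 MeV electron per decay, self-dose
# absorbed fraction 0.9, 1 g target organ.
print(s_factor([(1.0, 0.5)], 0.9, 1.0e-3))
```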
NASA Astrophysics Data System (ADS)
Dai, Wen-Wu; Zhao, Zong-Yan
2017-06-01
Heterostructure construction is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and to combine the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of its conduction band. To address this problem, it is worthwhile to find materials that possess a suitable band gap, proper band edge positions, and high carrier mobility to combine with BiOI to form a heterostructure. In this study, graphene-based materials (including graphene, graphene oxide, and g-C3N4) were chosen as candidates for this purpose. The charge transfer, interface interaction, and band offsets are analyzed in detail by DFT calculations. The results indicate that graphene-based materials and BiOI come into contact and form van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C3N4 and BiOI shift with the Fermi level and form standard type-II heterojunctions. In addition, the overall analysis of the charge density difference, Mulliken populations, and band offsets indicates that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, combining BiOI with 2D materials to construct heterostructures not only makes use of their uniquely high electron mobility, but also adjusts the position of the energy bands and promotes the separation of photo-generated carriers, which provides useful hints for applications in photocatalysis.
Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education
ERIC Educational Resources Information Center
Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.
2014-01-01
Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…
Gaff, J F; Franzen, S; Delley, B
2010-11-04
A method for the calculation of resonance Raman cross sections is presented on the basis of calculation of structural differences between optimized ground and excited state geometries using density functional theory. A vibrational frequency calculation of the molecule is employed to obtain normal coordinate displacements for the modes of vibration. The excited state displacement relative to the ground state can be calculated in the normal coordinate basis by means of a linear transformation from a Cartesian basis to a normal coordinate one. The displacements in normal coordinates are then scaled by root-mean-square displacement of zero point motion to calculate dimensionless displacements for use in the two-time-correlator formalism for the calculation of resonance Raman spectra at an arbitrary temperature. The method is valid for Franck-Condon active modes within the harmonic approximation. The method was validated by calculation of resonance Raman cross sections and absorption spectra for chlorine dioxide, nitrate ion, trans-stilbene, 1,3,5-cycloheptatriene, and the aromatic amino acids. This method permits significant gains in the efficiency of calculating resonance Raman cross sections from first principles and, consequently, permits extension to large systems (>50 atoms).
An EGS4 based mathematical phantom for radiation protection calculations using standard man
Wise, K.N.
1994-11-01
This note describes an Electron Gamma Shower code (EGS4) Monte Carlo program for calculating radiation transport in adult males and females from internal or external electron and gamma sources which requires minimal knowledge of organ geometry. Calculations of the dose from planar gamma fields and from computerized tomography illustrate two applications of the package. 25 refs., 5 figs.
Calculation of positron observables using a finite-element-based approach
Klein, B. M.; Pask, J. E.; Sterne, P.
1998-11-04
We report the development of a new method for calculating positron observables using a finite-element approach for the solution of the Schrodinger equation. This method combines the advantages of both basis-set and real-space-grid approaches. The strict locality in real space of the finite element basis functions results in a method that is well suited for calculating large systems of a thousand or more atoms, as required for calculations of extended defects such as dislocations. In addition, the method is variational in nature and its convergence can be controlled systematically. The calculation of positron observables is straightforward due to the real-space nature of this method. We illustrate the power of this method with positron lifetime calculations on defects and defect-free materials, using overlapping atomic charge densities.
Equation of State of Al Based on Quantum Molecular Dynamics Calculations
NASA Astrophysics Data System (ADS)
Minakov, Dmitry V.; Levashov, Pavel R.; Khishchenko, Konstantin V.
2011-06-01
In this work, we present quantum molecular dynamics calculations of the shock Hugoniots of solid and porous samples as well as release isentropes and values of isentropic sound velocity behind the shock front for aluminum. We use the VASP code with an ultrasoft pseudopotential and GGA exchange-correlation functional. Up to 108 particles have been used in calculations. For the Hugoniots of Al we solve the Hugoniot equation numerically. To calculate release isentropes, we use Zel'dovich's approach and integrate an ordinary differential equation for the temperature thus restoring all thermodynamic parameters. Isentropic sound velocity is calculated by differentiation along isentropes. The results of our calculations are in good agreement with experimental data. Thus, quantum molecular dynamics results can be effectively used for verification or calibration of semiempirical equations of state under conditions of lack of experimental information at high energy densities. This work is supported by RFBR, grants 09-08-01129 and 11-08-01225.
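The numerical Hugoniot solve mentioned above amounts to finding, for a given shock pressure, the volume that satisfies the Rankine-Hugoniot energy equation E(P,V) - E(P0,V0) = (P + P0)(V0 - V)/2. A bisection sketch, with an ideal-gas equation of state standing in for the ab initio EOS of the paper (all names and parameter values are illustrative):

```python
def hugoniot_volume(P, P0=1.0, V0=1.0, gamma=5.0 / 3.0, tol=1e-12):
    """Solve E(P,V) - E(P0,V0) = (P + P0)*(V0 - V)/2 for V by bisection,
    using the ideal-gas internal energy E = P*V/(gamma - 1)."""
    energy = lambda p, v: p * v / (gamma - 1.0)
    residual = lambda v: energy(P, v) - energy(P0, V0) - 0.5 * (P + P0) * (V0 - v)
    lo, hi = 1e-9 * V0, V0          # shocked state is compressed: V < V0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:     # residual increases monotonically with V
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the ideal gas this reproduces the known compression limit V/V0 -> (gamma - 1)/(gamma + 1) at strong shocks, a convenient sanity check for the numerical routine.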
Mezherovskii, V.A.
1994-05-01
New calculation schemes are suggested for the "building-loess collapsing base" system. With their help it is possible to obtain values of the forces and displacements arising in a building as a result of base collapse that are close to the real ones with respect to the nature of the moistening and deformation of the loess strata.
NASA Astrophysics Data System (ADS)
Bidwell, Colin S.
2015-05-01
A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine. This method allows the prediction of the temperature and phase change of water-based particles along their path, and of the impingement efficiency and particle impact property data for various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the engine. The engine was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m, assuming a standard warm day, for ice particle sizes of 5, 20 and 100 microns and a fixed free stream particle concentration. The impingement efficiency results showed that as particle size increased, average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size, because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The particle phase change analysis results showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest, 5 micron, particles were warmed enough to produce melting, with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6%).
[Calculation of environmental flows in river reaches based on ecological objectives].
Sun, Tao; Yang, Zhi-Feng
2005-09-01
Based on identified ecological objectives, environmental flows in river reaches are calculated once the relation between the objective parameters and river discharge is determined. The ecological objectives are set in two steps: the objective is first determined for the critical period of the year, and then its temporal variation is defined. Considering the compatibility between the different kinds of environmental flow requirements, the strictest requirement is taken as the ecological objective for the critical period of the year, and the monthly variation of the natural river discharge is taken as the temporal variation of the objectives. In a study of environmental flows in the river reaches downstream of the Guanting reservoir on the Yongding River (Haihe River Basin), the velocity requirements for fish spawning in April are taken as the ecological objective for the most critical period, because this is also the period of highest water demand for irrigation. The relation between objectives and river discharge is identified using historical data at the river station. The results indicate that the minimum, medium and ideal levels of the annual environmental flow requirement are 1.56 x 10^8 m^3, 5.97 x 10^8 m^3 and 11.02 x 10^8 m^3, about 7.19%, 27.51% and 50.78% of the natural river discharge, respectively. The monthly share of the water requirement should be 20% in the flood period (Aug.) and 20% in the spring propagation period (Apr.-Jun.).
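As a quick arithmetic cross-check of the reported figures, each flow level divided by its stated fraction of natural discharge should imply (approximately) one and the same natural annual discharge:

```python
# Flow levels (m^3) and their stated shares of natural river discharge.
levels = [1.56e8, 5.97e8, 11.02e8]   # minimum, medium, ideal
shares = [0.0719, 0.2751, 0.5078]    # 7.19%, 27.51%, 50.78%
natural = [v / s for v, s in zip(levels, shares)]
# All three ratios cluster tightly around ~21.7 x 10^8 m^3,
# so the abstract's percentages are internally consistent.
```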
NASA Astrophysics Data System (ADS)
Majumder, Moumita; Dawes, Richard; Wang, Xiao-Gang; Carrington, Tucker; Li, Jun; Guo, Hua; Manzhos, Sergei
2014-06-01
New potential energy surfaces for methane were constructed, represented as analytic fits to about 100,000 individual high-level ab initio data points. Explicitly-correlated multireference data (MRCI-F12(AE)/CVQZ-F12) were computed using Molpro [1] and fit using multiple strategies. Fits with small to negligible errors were obtained using adaptations of the permutation-invariant-polynomials (PIP) approach [2,3] based on neural networks (PIP-NN) [4,5] and the interpolative moving least squares (IMLS) fitting method [6] (PIP-IMLS). The PESs were used in full-dimensional vibrational calculations with an exact kinetic energy operator, by representing the Hamiltonian in a basis of products of contracted bend and stretch functions and using a symmetry-adapted Lanczos method to obtain eigenvalues and eigenvectors. Very close agreement with experiment was produced from the purely ab initio PESs. References: [1] H.-J. Werner, P. J. Knowles, G. Knizia et al., MOLPRO, version 2012.1, a package of ab initio programs, see http://www.molpro.net. [2] Z. Xie and J. M. Bowman, J. Chem. Theory Comput. 6, 26 (2010). [3] B. J. Braams and J. M. Bowman, Int. Rev. Phys. Chem. 28, 577 (2009). [4] J. Li, B. Jiang and H. Guo, J. Chem. Phys. 139, 204103 (2013). [5] S. Manzhos, X. Wang, R. Dawes and T. Carrington, J. Phys. Chem. A 110, 5295 (2006). [6] R. Dawes, X.-G. Wang, A. W. Jasper and T. Carrington Jr., J. Chem. Phys. 133, 134304 (2010).
Ray tracing based path-length calculations for polarized light tomographic imaging
NASA Astrophysics Data System (ADS)
Manjappa, Rakesh; Kanhirodan, Rajan
2015-09-01
A ray tracing based path length calculation is investigated for polarized light transport in a pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small animal imaging and in turbid media with low scattering. Polarized light transport through a medium can show complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, di-attenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain the initial absorption estimate, which can then serve as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, to assist in faster convergence of the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence the intensities that reach the detector differ. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix with corrected ray path lengths, together with the resultant intensity reaching the detector for each ray, is used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms for various noise levels. The refraction errors due to regions of different refractive index are discussed, and the difference in intensities with polarization is considered. The improvements in reconstruction using the corrections so applied are presented. This is achieved by tracking both the path of the ray and the intensity of the ray as it traverses the medium.
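The weight-matrix-plus-ART step can be sketched as a classical Kaczmarz iteration. The toy weight matrix below uses unit path lengths on a 2x2 image rather than the refraction-corrected lengths of the paper (illustrative only):

```python
import numpy as np

def art(W, b, n_sweeps=500, relax=1.0):
    """Kaczmarz/ART: project the current estimate onto each ray's
    hyperplane in turn. W[i, j] is the path length of ray i inside
    pixel j; b[i] is the measurement for ray i."""
    x = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):
            w = W[i]
            nrm = w @ w
            if nrm > 0.0:
                x += relax * (b[i] - w @ x) / nrm * w
    return x

# 2x2 "image" (pixels row-major) probed by two row rays and two
# column rays, each crossing two pixels with unit path length.
W = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = art(W, W @ x_true)   # reconstruct from noise-free projections
```

In the paper's setting, W would hold the per-pixel lengths of the refracted ray paths, and b the polarization-dependent detected intensities.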
NASA Astrophysics Data System (ADS)
Paranin, Y.; Burmistrov, A.; Salikeev, S.; Fomina, M.
2015-08-01
Basic propositions of calculation procedures for the characteristics of oil-free scroll compressors are presented. It is shown that mathematical modelling of the working process in a scroll compressor makes it possible to take into account factors influencing the working process such as heat and mass exchange, mechanical interaction in the working chambers, leakage through slots, etc. The basic mathematical model may be supplemented by taking into account external heat exchange, elastic deformation of the scrolls, inlet and outlet losses, etc. To evaluate the influence of the procedure on the accuracy of scroll compressor characteristic calculations, different calculations were carried out. Internal adiabatic efficiency was chosen as the comparative parameter, since it evaluates the perfection of the internal thermodynamic and gas-dynamic compressor processes. The calculated characteristics are compared with experimental values obtained for a compressor pilot sample.
12 CFR 702.106 - Standard calculation of risk-based net worth requirement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...
NASA Astrophysics Data System (ADS)
Preobrazhenskii, M. P.; Rudakov, O. B.
2016-01-01
A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the model proposed were calculated for a series of solutions. The correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter value of the proposed model is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.
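The abstract does not state the regression equation; a common one-parameter nonadditive form for a binary boiling-point isobar, offered here purely as an assumed sketch, is:

```python
def isobar_T(x1, T1, T2, A):
    """Illustrative one-parameter model for the boiling-point isobar of
    a binary mixture:  T(x1) = x1*T1 + (1 - x1)*T2 + A*x1*(1 - x1).
    A is the nonadditivity parameter: A = 0 recovers the additive
    (ideal) isobar, while a sufficiently large |A| bends the curve
    enough to produce the extremum characteristic of an azeotrope.
    This form is an assumption, not the authors' exact regression."""
    x2 = 1.0 - x1
    return x1 * T1 + x2 * T2 + A * x1 * x2
```

With such a form, the sign and magnitude of A relative to the difference of the pure-component boiling points is what decides whether an azeotropic extremum appears, which matches the abstract's use of the nonadditivity parameter as a predictor of azeotrope formation.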
Effectiveness of a computer based medication calculation education and testing programme for nurses.
Sherriff, Karen; Burston, Sarah; Wallis, Marianne
2012-01-01
The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety.
Xu, Huijun; Guerrero, Mariana; Chen, Shifeng; Yang, Xiaocheng; Prado, Karl; Schinkel, Colleen
2016-01-01
Many clinics still use monitor unit (MU) calculations for electron treatment planning and/or quality assurance (QA). This work (1) investigates the clinical implementation of a dosimetry system including a modified American Association of Physicists in Medicine task group 71 (TG-71)-based electron MU calculation protocol (modified TG-71 electron [mTG-71E]) and an independent commercial calculation program, and (2) provides practice recommendations for clinical usage. Following the recently published TG-71 guidance, an organized mTG-71E databook was developed to facilitate data access and subsequent MU computation according to our clinical need. A recently released commercial secondary calculation program, Mobius3D (version 1.5.1) Electron Quick Calc (EQC) (Mobius Medical System, LP, Houston, TX, USA), with an inherent pencil beam algorithm and independent beam data, was used to corroborate the calculation results. For various setups, the calculation consistency and accuracy of mTG-71E and EQC were validated by cross-comparison and by ion chamber measurements in a solid water phantom. Our results show good agreement between mTG-71E and EQC calculations, with an average 2% difference. Both mTG-71E and EQC calculations match measurements within 3%. In general, these differences increase with decreased cutout size, increased extended source-to-surface distance, and lower energy. It is feasible to use TG-71 and Mobius3D clinically as primary and secondary electron MU calculations, or vice versa. We recommend a practice that only requires patient-specific measurements in rare cases when mTG-71E and EQC calculations differ by 5% or more. PMID: 28144112
Code of Federal Regulations, 2010 CFR
2010-07-01
...-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-12 Calculation of...
Raevsky, Oleg A; Grigor'ev, Veniamin Yu; Polianczyk, Daniel E; Raevskaja, Olga E; Dearden, John C
2014-02-24
Solubilities of crystalline organic compounds calculated according to AMP (arithmetic mean property) and LoReP (local one-parameter regression) models based on structural and physicochemical similarities are presented. We used data on water solubility of 2615 compounds in un-ionized form measured at 25±5 °C. The calculation results were compared with the equation based on the experimental data for lipophilicity and melting point. According to statistical criteria, the model based on structural and physicochemical similarities showed a better fit with the experimental data. An additional advantage of this model is that it uses only theoretical descriptors, and this provides means for calculating water solubility for both existing and not yet synthesized compounds.
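An AMP-style estimate can be sketched as an arithmetic mean of the property over the nearest neighbors in a descriptor space; the similarity metric and the descriptors below are assumptions for illustration, not the authors' structural/physicochemical similarity measures:

```python
import math

def amp_estimate(query_desc, library, k=3):
    """AMP-style prediction: arithmetic mean of the measured property
    over the k compounds most similar to the query. Similarity here is
    plain Euclidean distance between descriptor vectors (illustrative)."""
    ranked = sorted(library, key=lambda item: math.dist(query_desc, item[0]))
    return sum(prop for _, prop in ranked[:k]) / k

# Hypothetical 2D descriptors paired with log-solubility values:
library = [((0.0, 0.0), -1.0), ((0.1, 0.0), -1.2), ((0.0, 0.1), -0.8),
           ((5.0, 5.0), -6.0)]
pred = amp_estimate((0.05, 0.05), library, k=3)
```

The appeal noted in the abstract, that only theoretical descriptors are needed, follows directly: nothing in the prediction step requires an experimental lipophilicity or melting point for the query compound.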
Otani, Makoto; Ise, Shiro
2006-05-01
Recently, numerical calculation of the head-related transfer function (HRTF) has been conducted using a computer model of a human head and the boundary element method. The reciprocity theorem is incorporated into the computational process in order to shorten the computational time, which is otherwise very long. On the other hand, another fast HRTF calculation method for any source position, realized by calculating the factors independent of the source position in advance, has been suggested by the authors. Using this algorithm, the HRTF for any source position can be obtained in a few seconds with a common PC. The resulting HRTFs are more precise and are calculated faster than those obtained using the reciprocity theorem. However, even faster processing is required in order to respond to head movement and rotation, or to moving sources, during binaural sound reproduction. In this paper, a faster calculation method incorporating a time-domain operation into the authors' previous algorithm is proposed. Additionally, a new formulation, which eliminates the extra computational time in the preprocessing, is proposed. This method is shown to be faster than the previous ones, but there are some discrepancies at higher frequencies.
Calculation of the axion mass based on high-temperature lattice quantum chromodynamics.
Borsanyi, S; Fodor, Z; Guenther, J; Kampert, K-H; Katz, S D; Kawanai, T; Kovacs, T G; Mages, S W; Pasztor, A; Pittler, F; Redondo, J; Ringwald, A; Szabo, K K
2016-11-03
Unlike the electroweak sector of the standard model of particle physics, quantum chromodynamics (QCD) is surprisingly symmetric under time reversal. As there is no obvious reason for QCD being so symmetric, this phenomenon poses a theoretical problem, often referred to as the strong CP problem. The most attractive solution for this requires the existence of a new particle, the axion, a promising dark-matter candidate. Here we determine the axion mass using lattice QCD, assuming that these particles are the dominant component of dark matter. The key quantities of the calculation are the equation of state of the Universe and the temperature dependence of the topological susceptibility of QCD, a quantity that is notoriously difficult to calculate, especially in the most relevant high-temperature region (up to several gigaelectronvolts). But by splitting the vacuum into different sectors and re-defining the fermionic determinants, its controlled calculation becomes feasible. Thus, our twofold prediction helps most cosmological calculations to describe the evolution of the early Universe by using the equation of state, and may be decisive for guiding experiments looking for dark-matter axions. In the next couple of years, it should be possible to confirm or rule out post-inflation axions experimentally, depending on whether the axion mass is found to be as predicted here. Alternatively, in a pre-inflation scenario, our calculation determines the universal axionic angle that corresponds to the initial condition of our Universe.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
GPU-based calculation of scattering characteristics of space target in the visible spectrum
NASA Astrophysics Data System (ADS)
Cao, YunHua; Wu, Zhensen; Bai, Lu; Song, Zhan; Guo, Xing
2014-10-01
The scattering characteristics of space targets in the visible spectrum, which can be used in target detection, target identification, and space docking, are calculated in this paper. An algorithm for the scattering characteristics of a space target is introduced. In the algorithm, the space target is divided into thousands of triangular facets, and a calculation is needed for each facet to obtain the scattering characteristics of the target. For each facet, the calculation is executed over the spectrum of 400-760 nanometers at intervals of 1 nanometer. Thousands of facets, each with hundreds of bands, result in a huge computational load, making the calculation very time-consuming. Taking advantage of the high parallelism of the algorithm, Graphics Processing Units (GPUs) are used to accelerate it. The acceleration reaches a 300x speedup on a single Fermi-generation NVIDIA GTX 590, compared to the single-thread CPU version of the code on an Intel(R) Xeon(R) CPU E5-2620, and a speedup of 412x can be reached when a Kepler-generation NVIDIA K20c is used.
NASA Astrophysics Data System (ADS)
Alam, M. J.; Bhat, S. A.; Ahmad, S.
2016-05-01
Molecular structure and vibrational spectra of the 5-nitro-6-methyluracil molecule have been studied by simulating its monomer, dimer and trimer forms using DFT and MP2 methods with the 6-311G(d,p) basis set. Anharmonic force field calculations have been carried out for the isolated monomer, while the calculations on the dimer and trimer have been done in the harmonic approximation. An accurate numerical integration grid has been used for geometry optimization as well as frequency calculation. Anharmonic vibrational frequencies have been computed using the VPT2 algorithm (Barone's method) as well as the VSCF and VSCF-PT2 approaches. These methods yield results that are in remarkable agreement with experiment. The coupling strengths between pairs of modes have also been calculated using coupling integrals based on the 2MR-QFF approximation. The vibrational assignments have been made with the help of potential energy distribution values and animated modes.
NASA Astrophysics Data System (ADS)
Jin, Ying; Song, Yang; Wang, Wenchao; Ji, Yunjing; Li, Zhenhua; He, Anzhi
2016-11-01
Flame chemiluminescence tomography is a necessary combustion diagnostic technique that provides instantaneous 3D information on flame structure and excited-species concentrations. However, in most research, the simplified calculation model of the weight coefficient based on lens imaging theory causes information loss, which degrades subsequent reconstructions. In this work, an improved calculation model is presented that determines the weight coefficient from the intersection areas of the blur circle with the square pixels, which is more appropriate to the practical imaging process. Numerical simulation quantitatively evaluates the performance of the improved calculation method. Furthermore, a flame chemiluminescence tomography system consisting of 12 cameras was established to reconstruct the 3D structure of an instantaneous non-axisymmetric propane flame. Both numerical simulations and experiments illustrate the feasibility of the improved calculation model in combustion diagnostics.
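The improved weight coefficient described above, the intersection area of the blur circle with a square pixel, can be approximated numerically. A Monte Carlo sketch (an illustration, not the authors' integration scheme):

```python
import random

def pixel_circle_overlap(cx, cy, r, px, py, w, n=200_000, seed=1):
    """Monte Carlo estimate of the intersection area between the blur
    circle (center (cx, cy), radius r) and the axis-aligned square
    pixel with lower-left corner (px, py) and side w: sample points
    uniformly over the pixel and count those inside the circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = px + w * rng.random()
        y = py + w * rng.random()
        if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
            hits += 1
    return hits / n * w * w
```

In practice a closed-form circle/square intersection would be used for speed, but the sampling version makes the geometric definition of the weight explicit.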
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs slowly. We therefore focus on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time compared with a conventional CPU, even with a naive GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
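For readers unfamiliar with the kernel being accelerated, a minimal 1D Yee-scheme FDTD loop in normalized units is sketched below; a GPU version parallelizes the per-cell updates inside each time step (this is an illustration, not the paper's 3D code):

```python
import math

def fdtd_1d(nx=200, nt=400, courant=0.5):
    """Minimal 1D FDTD (Yee scheme, free space, normalized units) with
    a soft Gaussian source at the grid center. The inner loops over the
    spatial index i are the data-parallel work a GPU kernel performs."""
    ez = [0.0] * nx
    hy = [0.0] * nx
    for t in range(nt):
        for i in range(nx - 1):                 # H-field update
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, nx):                  # E-field update
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[nx // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft source
    return ez

field = fdtd_1d()
```

Because every cell's update depends only on its immediate neighbors from the previous half-step, the method maps naturally onto CUDA thread blocks, which is why the speedup depends on the calculation domain and thread block size.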
Hotta, Kenji; Kohno, Ryosuke; Nagafuchi, Kohsuke; Yamaguchi, Hidenori; Tansho, Ryohei; Takada, Yoshihisa; Akimoto, Tetsuo
2015-09-08
Calibrating the dose per monitor unit (DMU) for individual patients is important to deliver the prescribed dose in radiation therapy. We have developed a DMU calculation method combining measurement data and calculation with a simplified Monte Carlo method for the double scattering system in proton beam therapy at the National Cancer Center Hospital East in Japan. The DMU calculation method determines the clinical DMU by the multiplication of three factors: a beam spreading device factor FBSD, a patient-specific device factor FPSD, and a field-size correction factor FFS(A). We compared the calculated and the measured DMU for 75 dose fields in clinical cases. The calculated DMUs were in agreement with measurements in ± 1.5% for all of 25 fields in prostate cancer cases, and in ± 3% for 94% of 50 fields in head and neck (H&N) and lung cancer cases, including irregular shape fields and small fields. Although the FBSD in the DMU calculations is dominant as expected, we found that the patient-specific device factor and field-size correction also contribute significantly to the calculated DMU. This DMU calculation method will be able to substitute the conventional DMU measurement for the majority of clinical cases with a reasonable calculation time required for clinical use.
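The DMU model described above is a plain product of three factors; transcribed directly (the factor values below are placeholders, not measured data):

```python
def clinical_dmu(f_bsd, f_psd, f_fs):
    """Clinical dose per monitor unit as described in the abstract:
    DMU = F_BSD * F_PSD * F_FS(A), i.e. the beam spreading device
    factor times the patient-specific device factor times the
    field-size correction factor."""
    return f_bsd * f_psd * f_fs

# Illustrative placeholder factors for one field:
dmu = clinical_dmu(0.95, 1.02, 0.99)
```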
Optimum design calculations for detectors based on ZnSe(Te,O) scintillators
NASA Astrophysics Data System (ADS)
Katrunov, K.; Ryzhikov, V.; Gavrilyuk, V.; Naydenov, S.; Lysetska, O.; Litichevskyi, V.
2013-06-01
Light collection in ZnSe(X) scintillators, where X is an isovalent dopant, was studied using Monte Carlo calculations. The optimum design was determined for detectors of the "scintillator-Si-photodiode" type, which can involve either a single scintillation element or large-area scintillation layers made of small crystalline grains. The calculations were carried out both to determine the optimum scintillator shape and to optimize the design of the light guides on whose surface the layer of small crystalline grains is formed.
Interpretation of the resonance Raman spectra of linear tetrapyrroles based on DFT calculations
NASA Astrophysics Data System (ADS)
Kneip, Christa; Hildebrandt, Peter; Németh, Károly; Mark, Franz; Schaffner, Kurt
1999-10-01
Raman spectra of linear methine-bridged tetrapyrroles in different conformational and protonation states were calculated on the basis of scaled force fields obtained by density functional theory. Results are reported for protonated phycocyanobilin in the extended ZZZasa configuration, as it is found in C-phycocyanin of cyanobacteria. The calculated spectra are in good agreement with experimental spectra of the protein-bound chromophore in the α-subunit of C-phycocyanin and allow a plausible and consistent assignment of most of the observed resonance Raman bands in the region between 1000 and 1700 cm-1.
Radiation calculations on the base of atmospheric models from lidar sounding
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Samulenkov, Dmitry; Sapunov, Maxim; Vasilyev, Alexander; Kuznetsov, Anatoly; Frolkis, Victor
2017-02-01
The results of lidar sounding obtained at the Resource Center "Observatory of Environmental Safety" of the St. Petersburg University Research Park, in the center of St. Petersburg, are presented. Observations were carried out over 12 hours on 5 March 2015, from 11 am till 11 pm, and four time periods are considered. Results of AERONET observations and retrievals at four stations around the St. Petersburg region are considered in addition. Optical models of the day- and night-time atmosphere were constructed from the lidar and AERONET observations and used for radiation calculations. The radiative flux divergence, transmitted and reflected irradiance, and heating rate were calculated.
An accurate potential energy curve for helium based on ab initio calculations
NASA Astrophysics Data System (ADS)
Janzen, A. R.; Aziz, R. A.
1997-07-01
Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.
The Activation Energy Of Ignition Calculation For Materials Based On Plastics
NASA Astrophysics Data System (ADS)
Rantuch, Peter; Wachter, Igor; Martinka, Jozef; Kuracina, Marcel
2015-06-01
This article deals with the calculation of the activation energy of ignition of plastics. Two types of polyamide 6 and one type each of polypropylene and polyurethane were selected as samples. The samples were tested under isothermal conditions at several temperatures while the times to ignition were recorded. From the data obtained, the activation energy relating to the moment of ignition was calculated for each plastic. The values differed between the individual plastics: the highest activation energies (129.5 kJ·mol-1 and 106.2 kJ·mol-1) were obtained for the polyamides 6, while the lowest was determined for the polyurethane sample.
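An activation energy can be extracted from isothermal time-to-ignition data via an Arrhenius-type relation, t_ig ∝ exp(Ea/RT), so that the slope of ln(t_ig) versus 1/T equals Ea/R. The sketch below uses synthetic data and a linear fit; this is a common approach, not necessarily the authors' exact procedure.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(temps_K, times_to_ignition_s):
    """Arrhenius-type estimate: slope of ln(t_ig) vs 1/T equals Ea/R."""
    x = 1.0 / np.asarray(temps_K, dtype=float)
    y = np.log(np.asarray(times_to_ignition_s, dtype=float))
    slope, _ = np.polyfit(x, y, 1)   # linear fit in Arrhenius coordinates
    return slope * R                 # J/mol

# Synthetic times to ignition generated with Ea = 120 kJ/mol
T = np.array([600.0, 620.0, 640.0, 660.0])        # K
t_ig = 1e-6 * np.exp(120e3 / (R * T))             # s
Ea = activation_energy(T, t_ig)                   # recovers ~120 kJ/mol
```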
Spectral linelist of HD16O molecule based on VTT calculations for atmospheric application
NASA Astrophysics Data System (ADS)
Voronin, B. A.
2014-11-01
Three versions of a line list of dipole transitions for the HD16O isotopologue of the water molecule are presented. The line lists were created on the basis of the VTT calculations (Voronin, Tennyson, Tolchenov et al., MNRAS, 2010) by adding air- and self-broadening coefficients and temperature exponents for the HD16O-air case. Three cut-off values for the line intensities were used: 1e-30, 1e-32 and 1e-35 cm/molecule. The calculated line lists are available at ftp://ftp.iao.ru/pub/VTT/VTT-296/.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and... of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model...
NASA Technical Reports Server (NTRS)
Cheng, H. K.; Wong, Eric Y.; Dogra, V. K.
1991-01-01
Grad's thirteen-moment equations are applied to the flow behind a bow shock under the formalism of a thin shock layer. Comparison of this version of the theory with Direct Simulation Monte Carlo calculations of flows about a flat plate at finite attack angle has lent support to the approach as a useful extension of the continuum model for studying translational nonequilibrium in the shock layer. This paper reassesses the physical basis and limitations of the development with additional calculations and comparisons. The streamline correlation principle, which allows transformation of the 13-moment based system to one based on the Navier-Stokes equations, is extended to a three-dimensional formulation. The development yields a strip theory for planar lifting surfaces at finite incidences. Examples reveal that the lift-to-drag ratio is little influenced by planform geometry and varies with altitudes according to a 'bridging function' determined by correlated two-dimensional calculations.
ARS-Media: A spreadsheet tool for calculating media recipes based on ion-specific constraints
Technology Transfer Automated Retrieval System (TEKTRAN)
ARS-Media is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Thus, the recipes are generated using ...
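Finding salt amounts that hit target ion concentrations amounts to solving a linear system over the salt stoichiometry. ARS-Media treats it as a linear programming problem (with non-negativity constraints); the exactly determined two-ion sketch below, with hypothetical salts and targets, just uses a direct solve to show the underlying algebra.

```python
import numpy as np

# Stoichiometric matrix: rows are ions (K+, NO3-), columns are salts
# (KNO3, Ca(NO3)2); entries are mol of ion released per mol of salt.
A = np.array([[1.0, 0.0],
              [1.0, 2.0]])
target = np.array([20.0, 26.0])      # target ion concentrations, mmol/L

recipe = np.linalg.solve(A, target)  # -> 20 mmol/L KNO3, 3 mmol/L Ca(NO3)2
```

With more candidate salts than ion constraints the system is underdetermined, which is why the real tool needs linear programming rather than a direct solve.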
Vibrational and structural study of onopordopicrin based on the FTIR spectrum and DFT calculations.
Chain, Fernando E; Romano, Elida; Leyton, Patricio; Paipa, Carolina; Catalán, César A N; Fortuna, Mario; Brandán, Silvia Antonia
2015-01-01
In the present work, the structural and vibrational properties of the sesquiterpene lactone onopordopicrin (OP) were studied by using infrared spectroscopy and density functional theory (DFT) calculations together with the 6-31G* basis set. The harmonic vibrational wavenumbers for the optimized geometry were calculated at the same level of theory. The complete assignment of the observed bands in the infrared spectrum was performed by combining the DFT calculations with Pulay's scaled quantum mechanical force field (SQMFF) methodology. The comparison between the theoretical and experimental infrared spectra demonstrated good agreement, and the results were then used to predict the Raman spectrum. Additionally, the structural properties of OP, such as atomic charges, bond orders, molecular electrostatic potentials, characteristics of electronic delocalization and topological properties of the electronic charge density, were evaluated by natural bond orbital (NBO), atoms in molecules (AIM) and frontier orbital studies. The calculated energy band gap and the chemical potential (μ), electronegativity (χ), global hardness (η), global softness (S) and global electrophilicity index (ω) descriptors predicted low reactivity, higher stability and a lower electrophilicity index for OP as compared with the sesquiterpene lactone cnicin, which contains similar rings.
Structure of amphotericin B aggregates based on calculations of optical spectra
Hemenger, R.P.; Kaplan, T.; Gray, L.J.
1983-01-01
The degenerate ground state approximation was used to calculate the optical absorption and CD spectra for helical polymer models of amphotericin B aggregates in aqueous solution. Comparisons with experimental spectra indicate that a two-molecule/unit cell helical polymer model is a possible structure for aggregates of amphotericin B.
ERIC Educational Resources Information Center
Senol, Ali; Dündar, Sefa; Gündüz, Nazan
2015-01-01
The aims of this study are to examine the relationship between prospective classroom teachers' calculation-based estimation skills and their number sense, and to investigate whether their number sense and estimation skills change according to their class level and gender. The participants of the study are 125 prospective classroom teachers…
Code of Federal Regulations, 2010 CFR
2010-10-01
... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data...
Code of Federal Regulations, 2014 CFR
2014-10-01
... rate under the ESRD prospective payment system effective January 1, 2011. 413.220 Section 413.220... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per-treatment base rate under the ESRD prospective payment system effective January 1, 2011. (a) Data...
Technology Transfer Automated Retrieval System (TEKTRAN)
The integration of methods for calculating soil loss caused by water erosion using a geoprocessing system is important to enable investigations of soil erosion over large areas. GIS-based procedures have been used in soil erosion studies; however in most cases it is difficult to integrate the functi...
Evaluation of a commercial MRI Linac based Monte Carlo dose calculation algorithm with GEANT 4
Ahmad, Syed Bilal; Sarfehnia, Arman; Kim, Anthony; Sahgal, Arjun; Keller, Brian; Paudel, Moti Raj; Hissoiny, Sami
2016-02-15
Purpose: This paper provides a comparison between a fast, commercial, in-patient Monte Carlo dose calculation algorithm (GPUMCD) and GEANT4. It also evaluates the dosimetric impact of the application of an external 1.5 T magnetic field. Methods: A stand-alone version of the Elekta™ GPUMCD algorithm, to be used within the Monaco treatment planning system to model dose for the Elekta™ magnetic resonance imaging (MRI) Linac, was compared against GEANT4 (v10.1). This was done in the presence or absence of a 1.5 T static magnetic field directed orthogonally to the radiation beam axis. Phantoms with material compositions of water, ICRU lung, ICRU compact-bone, and titanium were used for this purpose. Beams with 2 MeV monoenergetic photons as well as a 7 MV histogrammed spectrum representing the MRI Linac spectrum were emitted from a point source using a nominal source-to-surface distance of 142.5 cm. Field sizes ranged from 1.5 × 1.5 to 10 × 10 cm2. Dose scoring was performed using a 3D grid comprising 1 mm3 voxels. The production thresholds were equivalent for both codes. Results were analyzed based upon a voxel by voxel dose difference between the two codes and also using a volumetric gamma analysis. Results: Comparisons were drawn from central axis depth doses, cross beam profiles, and isodose contours. Both in the presence and absence of a 1.5 T static magnetic field the relative differences in doses scored along the beam central axis were less than 1% for the homogeneous water phantom and all results matched within a maximum of ±2% for heterogeneous phantoms. Volumetric gamma analysis indicated that more than 99% of the examined volume passed gamma criteria of 2%/2 mm (dose difference and distance to agreement, respectively). These criteria were chosen because the minimum primary statistical uncertainty in dose scoring voxels was 0.5%. The presence of the magnetic field affects the dose at the interface depending upon the density of the material
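The gamma analysis used in such comparisons combines a dose-difference criterion with a distance-to-agreement criterion into a single pass/fail index per point. A brute-force one-dimensional sketch with global normalization (illustrative only, not the 3-D implementation used in the paper):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
    """1-D global gamma analysis. dose_tol is relative to the maximum
    reference dose (e.g. 2%); dist_tol is in the units of x (e.g. 2 mm)."""
    d_norm = dose_tol * dose_ref.max()
    gammas = []
    for xi, di in zip(x, dose_ref):
        # generalized distance to every evaluated point; keep the minimum
        g2 = ((x - xi) / dist_tol) ** 2 + ((dose_eval - di) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.array(gammas) <= 1.0)   # fraction passing gamma <= 1

x = np.linspace(0.0, 100.0, 101)              # positions, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)       # synthetic dose profile
rate = gamma_pass_rate(ref, ref, x)           # identical profiles -> 1.0
```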
Beres, D.A.; Hull, A.P.
1991-12-01
DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate the committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. It consists of a dose calculation section and a library maintenance section, both available from the main menu. The dose calculation section allows the user to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose from a measured exposure rate. The library maintenance section allows the user to review and update dose-modifier data and to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to give the user easy access to review data before running a calculation. Deposition data can either be entered by the user or imported from other databases. Results can be displayed on the screen or sent to the printer.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to be generally around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than the currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations, considering total range uncertainties and uncertainties from dose calculation alone, based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for the more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
Calculations of unsteady flows around high-lift configurations based on a zonal approach
NASA Astrophysics Data System (ADS)
Bosnyakov, S.; Kursakov, I.; Mikhaylov, S.; Vlasenko, V.
2015-06-01
A zonal approach for the solution of unsteady Reynolds-averaged Navier-Stokes (URANS) problems is described. The original feature of this approach is the use of a Courant-Friedrichs-Lewy number CFL ~ 1 in the main part of the calculation domain, excluding a thin part of the boundary layer. This is achieved by using an explicit numerical scheme with fractional time stepping in the main part of the calculation domain; in the near-wall zone of the boundary layer, an implicit dual time stepping method is used. In addition to the zonal approach, a fully implicit method with dual time stepping is also implemented. The methods are verified by comparison with test-case data obtained by consortium participants within the DeSiReH FP-7 project.
NASA Astrophysics Data System (ADS)
Lakshmi, A.; Balachandran, V.
2013-02-01
FT-IR and FT-Raman spectra of N-(2-hydroxyethyl)phthalimide (NHEP) have been recorded and analyzed, and the stable isomer of NHEP determined. The optimized geometry, intermolecular hydrogen bonding, and harmonic vibrational wavenumbers of NHEP have been investigated with the help of the B3LYP scaled quantum mechanical (SQM) method. The infrared and Raman spectra were predicted theoretically from the calculated intensities. Natural bond orbital (NBO) analysis indicates the presence of C=O⋯H bonding in the molecule. The calculated HOMO and LUMO are important in determining such properties as molecular reactivity. Information about the size, shape, charge density distribution and sites of chemical reactivity of the molecule has been obtained by mapping the electron density isosurface with the electrostatic potential (ESP).
Navier-Stokes calculations on multi-element airfoils using a chimera-based solver
NASA Technical Reports Server (NTRS)
Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.
1993-01-01
A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids which allow great flexibility of grid arrangement and simplifies grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two element case. Solutions are obtained using the thin-layer form of the Reynolds averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good comparison with experimental data which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.
PEREGRINE: Bringing Monte Carlo based treatment planning calculations to today's clinic
Patterson, R; Daly, T; Garrett, D; Hartmann-Siantar, C; House, R; May, S
1999-12-13
Monte Carlo simulation of radiotherapy is now available for routine clinical use. It brings improved accuracy of dose calculations for treatments where important physics comes into play, and provides a robust, general tool for planning where empirical solutions have not been implemented. Through the use of Monte Carlo, new information, including the effects of the composition of materials in the patient, the effects of electron transport, and the details of the distribution of energy deposition, can be applied to the field. PEREGRINE™ is a Monte Carlo dose calculation solution that was designed and built specifically for the purpose of providing a practical, affordable Monte Carlo capability to the clinic. The system solution was crafted to facilitate insertion of this powerful tool into day-to-day treatment planning, while being extensible to accommodate improvements in techniques, computers, and interfaces.
NASA Astrophysics Data System (ADS)
Takaba, Hiromitsu; Kimura, Shou; Alam, Md. Khorshed
2017-03-01
The durability of organo-lead halide perovskites is an important issue for their practical application in solar cells. In this study, using density functional theory (DFT) and molecular dynamics, we theoretically investigated the crystal structure, electronic structure, and ionic diffusivity of the partially substituted cubic MA0.5X0.5PbI3 (MA = CH3NH3+, X = NH4+, (NH2)2CH+ or Cs+). Our calculation results indicate that partial substitution of MA induces a lattice distortion that prevents MA or X from diffusing between A sites in the perovskite. DFT calculations show that the electronic structures of the investigated partially substituted perovskites are similar to that of MAPbI3, while their band gaps slightly decrease compared to that of MAPbI3. Our results show that partial substitution in halide perovskites is an effective technique for suppressing the diffusion of intrinsic ions and tuning the band gap.
NASA Astrophysics Data System (ADS)
Mironenko, Mikhail; Diamond, Larryn
2010-05-01
Recent improvements in the chemical analysis of fluid inclusions (using techniques such as LA-ICP-MS, PIXE, SXRF, LIBS and SIMS for individual inclusions, and crush-leach analysis for bulk samples) now permit ratios of certain solute elements to be determined with high accuracy. In order to apply these results to geochemical problems, the element ratios must be converted to concentrations in the inclusions. Approaches to this conversion problem have remained very approximate so far, and have not kept pace with the improved quality of the raw analytical data. We have developed a thermodynamic procedure to calculate the absolute solute concentrations in multicomponent electrolyte solutions from input element ratios and microthermometric determinations of the final-melting temperatures of daughter crystals such as ice and various salts. Equilibria are calculated using the algorithm of Mironenko and Polyakov (2009), which employs the Gibbs free energy minimization method and applies Pitzer's model to calculate the water activity and solute activity coefficients. The thermodynamic database of Marion (2008) is used for the system Na-K-Ca-Mg-Fe-Cl-SO4-CO3-H-H2O over the temperature range -60 °C to 25 °C, and the database of Greenberg and Møller (1998) is used for the system Na-K-Ca-Cl-SO4-H2O for phase transitions from 25 °C to 250 °C. The model has been verified against experimentally studied systems. In addition to providing the solute concentrations, the model also predicts other melting transitions not used as input for the calculations (eutectic, peritectic, etc.), thereby allowing the results for specific fluid inclusions to be checked for consistency.
Multi-state Approach to Chemical Reactivity in Fragment Based Quantum Chemistry Calculations.
Lange, Adrian W; Voth, Gregory A
2013-09-10
We introduce a multistate framework for Fragment Molecular Orbital (FMO) quantum mechanical calculations and implement it in the context of protonated water clusters. The purpose of the framework is to address issues of nonuniqueness and dynamic fragmentation in FMO as well as other related fragment methods. We demonstrate that our new approach, Fragment Molecular Orbital Multistate Reactive Molecular Dynamics (FMO-MS-RMD), can improve energetic accuracy and yield stable molecular dynamics for small protonated water clusters undergoing proton transfer reactions.
Molecular orbital calculations on atomic structures of Si-based covalent amorphous ceramics
Matsunaga, K.; Matsubara, H.
1999-07-01
The authors have performed ab-initio Hartree-Fock molecular orbital calculations of local atomic structures and chemical bonding states in Si-N covalent amorphous ceramics. Solute elements such as boron, carbon and oxygen were considered in the Si-N network, and the bonding characteristics around the solute elements were analyzed. When a nitrogen atom is substituted by a carbon atom, it was found that Si-C bonds reinforce the Si-N network due to strong covalency.
NASA Astrophysics Data System (ADS)
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael
2007-08-01
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted 'MUV', for monitor unit verification) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm3 ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.), respectively. The dose deviations between MUV and TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach.
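The acceptance rule stated at the end of the abstract (deviation within 3% of the prescribed dose or within 6 cGy, relaxed to 5% or 10 cGy off-axis and in low-dose regions) can be expressed as a simple check; the doses below are hypothetical values in cGy.

```python
def passes_verification(d_tps, d_independent, d_prescribed,
                        pct=0.03, abs_cgy=6.0):
    """Deviation is acceptable if it is within pct of the prescribed
    dose OR within abs_cgy (all doses in cGy)."""
    deviation = abs(d_tps - d_independent)
    return deviation <= pct * d_prescribed or deviation <= abs_cgy

# 4 cGy deviation on a 200 cGy fraction passes; 15 cGy does not.
ok = passes_verification(d_tps=200.0, d_independent=204.0, d_prescribed=200.0)
```

For off-axis or low-dose points, the relaxed limit would be applied by calling the same check with pct=0.05 and abs_cgy=10.0.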
Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate
NASA Astrophysics Data System (ADS)
Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef
2016-04-01
The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and, subsequently, plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions, so it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the course of a year: ET ref was generally overestimated at small values and rather underestimated when ET was large, which is supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively; there were evidently also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments, and to improve knowledge of the dynamics of the influencing factors causing deviations.
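For reference, the standardized ASCE-EWRI form of the Penman-Monteith equation for daily time steps and a short reference surface (Cn = 900, Cd = 0.34) can be written out directly. The input values below are illustrative samples, not data from the study.

```python
def asce_etref(delta, Rn, G, gamma, T, u2, es, ea, Cn=900.0, Cd=0.34):
    """Standardized ASCE-EWRI reference evapotranspiration (mm/day) for a
    short reference surface at daily time steps.
    delta, gamma: kPa/degC; Rn, G: MJ m-2 day-1; T: degC (mean air
    temperature); u2: wind speed at 2 m, m/s; es, ea: vapour pressures, kPa."""
    num = 0.408 * delta * (Rn - G) + gamma * (Cn / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + Cd * u2)
    return num / den

# Sample mid-latitude day (illustrative values only):
et = asce_etref(delta=0.122, Rn=13.0, G=0.0, gamma=0.066,
                T=20.0, u2=2.0, es=2.34, ea=1.40)   # a few mm/day
```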
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-07
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Sensor-based clear and cloud radiance calculations in the community radiative transfer model.
Liu, Quanhua; Xue, Y; Li, C
2013-07-10
The community radiative transfer model (CRTM) has been implemented for clear and cloudy satellite radiance simulations in the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Prediction (NCEP) Gridpoint Statistical Interpolation data assimilation system for global and regional forecasting, as well as for reanalysis for climate studies. Clear-sky satellite radiances are successfully assimilated, while cloudy radiances need to be assimilated to improve precipitation and severe weather forecasting. However, cloud radiance calculations are much slower than clear-sky radiance calculations and exceed our computational capacity for weather forecasting. To make cloud radiance assimilation affordable, cloud optical parameters at the band central wavelength are used in the CRTM (OPTRAN-CRTM), where the optical transmittance (OPTRAN) band model is applied. The approximation implies that only one radiative transfer solution is needed for each band (i.e., channel), instead of the typically more than 10,000 solutions required for a detailed line-by-line radiative transfer model (LBLRTM). This paper investigates the accuracy of the approximation and helps to identify the error sources. Two NOAA operational sensors, the High Resolution Infrared Radiation Sounder/3 (HIRS/3) and the Advanced Microwave Sounding Unit (AMSU), were chosen for this investigation, with both clear and cloudy cases. By comparing the CRTM cloud radiance calculations with LBLRTM simulations, we found that the CRTM cloud radiance model can achieve an accuracy better than 0.4 K for the IR sensor and 0.1 K for the microwave sensor. The results suggest that the CRTM cloud radiance calculations may be adequate for operational satellite radiance assimilation in numerical forecast models. The accuracy using OPTRAN is much better than using the scaling method (SCALING-CRTM). In clear-sky applications, the scaling of the optical depth derived at nadir
Chi, Yuan; Hu, Chundong; Zhuang, Ge
2014-02-15
A calorimetric method has been the primary approach, applied over several experimental campaigns, to determine the angular divergence of the high-current ion source for the neutral beam injection system on the Experimental Advanced Superconducting Tokamak (EAST). A Doppler shift spectroscopy diagnostic has been developed to provide a secondary measurement of the angular divergence, improving the divergence measurement accuracy and enabling real-time, non-perturbing measurement. A modified calculation model based on the W7-AS neutral beam injectors is adopted to accommodate the slot-type accelerating grids used in EAST's ion source. Preliminary spectroscopic experimental results are presented and are comparable to the calorimetrically determined values and theoretical calculations.
Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings
NASA Astrophysics Data System (ADS)
Ucun, Fatih; Tokatlı, Ahmet
2015-02-01
In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane, at a distance of 1.2 Å. The results have been compared with other commonly used aromaticity indices, such as HOMA, NICSs, PDI, FLU, MCI and CTED, and are generally in agreement with them. It is therefore proposed that the calculation of the average g-factor, Δg, can be applied as a new magnetic-based aromaticity index to study the aromaticity of polycyclic benzene rings without any restriction on the number of benzene rings.
NASA Technical Reports Server (NTRS)
Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard
1991-01-01
Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and the Diendorfer-Uman (DU) models with a channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other hand. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS models. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
Aljasser, Faisal; Vitevitch, Michael S
2017-03-24
A number of databases (Storkel Behavior Research Methods, 45, 1159-1167, 2013) and online calculators (Vitevitch & Luce Behavior Research Methods, Instruments, and Computers, 36, 481-487, 2004) have been developed to provide statistical information about various aspects of language, and these have proven to be invaluable assets to researchers, clinicians, and instructors in the language sciences. The number of such resources for English is quite large and continues to grow, whereas the number of such resources for other languages is much smaller. This article describes the development of a Web-based interface to calculate phonotactic probability in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at http://phonotactic.drupal.ku.edu/ .
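The positional-probability computation behind such a calculator can be sketched in a few lines; the toy corpus, one-character-per-segment transcriptions, and the sum-of-positional-probabilities convention below are illustrative assumptions, not the MSA calculator's actual algorithm.

```python
from collections import defaultdict

def positional_probs(corpus):
    """For each word position, the relative frequency of each segment
    among all words long enough to have that position."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for word in corpus:
        for i, seg in enumerate(word):
            counts[i][seg] += 1
            totals[i] += 1
    return {i: {s: c / totals[i] for s, c in segs.items()}
            for i, segs in counts.items()}

def phonotactic_probability(word, probs):
    """Sum of positional segment probabilities (one common convention)."""
    return sum(probs.get(i, {}).get(seg, 0.0) for i, seg in enumerate(word))

# toy corpus: one character per segment
corpus = ["kat", "kab", "bat"]
probs = positional_probs(corpus)
score = phonotactic_probability("kat", probs)
```

A frequency-weighted variant (weighting each word by its corpus frequency) follows the same structure with weighted counts.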
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Xu, Y.; Zhang, Y.; Song, G. F.; Chen, L. H.
2012-12-01
We calculated the coupling coefficient of different types of laterally coupled distributed feedback (LC-DFB) structures with coupled-wave theory and the two-dimensional semivectorial finite difference method. Effects neglected in previous studies, such as other partial waves and the ohmic contact and metal contact layers, are taken into account in this calculation. The LC-DFB structure with metal gratings is studied in particular because of its advantage over index-coupled structures. The dependence of the coupling coefficient on structure parameters such as grating order, ridge width, thickness of the residual cladding layer, grating depth and lateral proximity of the gratings to the ridge waveguide is calculated theoretically. A complex-coupled GaSb-based 2 µm LC-DFB structure is optimized to achieve a high coupling coefficient of 14.5 cm⁻¹.
Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems
NASA Astrophysics Data System (ADS)
da Jornada, Felipe H.
2015-03-01
Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2 and yielded strongly environment-dependent behavior in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
NASA Astrophysics Data System (ADS)
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and the neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, but also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
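The cost advantage of the adjoint approach can be illustrated on a toy steady-state problem; the matrices and objective below are arbitrary stand-ins, not the depletion equations.

```python
import numpy as np

# Toy forward model: A(p) x = b with A = A0 + p*A1; objective J = c^T x.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

def J(p):
    return c @ np.linalg.solve(A0 + p * A1, b)

p = 0.5
x = np.linalg.solve(A0 + p * A1, b)          # forward solve
lam = np.linalg.solve((A0 + p * A1).T, c)    # adjoint solve (transposed system)
dJdp_adjoint = -lam @ (A1 @ x)               # dJ/dp = -lambda^T (dA/dp) x

# cross-check against central finite differences
eps = 1e-6
dJdp_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
```

The derivative formula follows from differentiating A x = b; because one adjoint solve serves every parameter, the cost does not grow with the number of uncertain inputs, which is the property the abstract highlights.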
NASA Technical Reports Server (NTRS)
Susko, M.; Hill, C. K.; Kaufman, J. W.
1974-01-01
Quantitative estimates are presented of the pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) into the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes studied was a 10-min limit of 4 ppm exceeded.
Dielectric permittivity calculation of composites based on electrospun barium titanate fibers
NASA Astrophysics Data System (ADS)
Ávila, H. A.; Reboredo, M. M.; Parra, R.; Castro, M. S.
2015-04-01
On the basis of theoretical predictions and experimental results, an empirical method using the upper-bound equation of the rule of mixtures (ROM) is reported to predict the dielectric permittivity of barium titanate nanofibers. In addition, composites with a low volume fraction of BaTiO3 fiber layers embedded in epoxy resin were prepared and characterized. The relative permittivities of composites with perpendicular and parallel configurations, with respect to the electrodes, were calculated by means of the ROM model. The predicted permittivities matched the experimental values closely.
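The ROM bounds used here reduce to one-line formulas; this sketch assumes a two-phase composite, and the permittivity values are hypothetical, chosen only to be BaTiO3- and epoxy-like in scale.

```python
def rom_upper(eps_f, eps_m, vf):
    """Parallel (upper-bound) rule of mixtures: layers aligned with the field,
    i.e. perpendicular to the electrodes."""
    return vf * eps_f + (1.0 - vf) * eps_m

def rom_lower(eps_f, eps_m, vf):
    """Series (lower-bound) rule of mixtures: layers normal to the field,
    i.e. parallel to the electrodes."""
    return 1.0 / (vf / eps_f + (1.0 - vf) / eps_m)

# hypothetical values: high-permittivity fiber (~1000) in epoxy (~4), 10 vol%
eps_parallel_to_field = rom_upper(1000.0, 4.0, 0.1)
eps_normal_to_field = rom_lower(1000.0, 4.0, 0.1)
```

The large gap between the two bounds is why the layer orientation relative to the electrodes matters so much for the composite permittivity.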
Abdelmoulahi, Hafedh; Ghalla, Houcine; Nasr, Salah; Darpentigny, Jacques; Bellissent-Funel, Marie-Claire
2016-10-07
In the present work, we have investigated the intermolecular associations of formamide with water in an equimolar formamide-water solution (FA-Water) by means of neutron scattering in combination with density functional theory calculations. The neutron scattering data were analyzed to deduce the structure factor SM(q) and the intermolecular pair correlation function gL(r). By considering different hydrogen bonded FA-Water associations, it has been shown that some of them describe well the local order in the solution. Natural bond orbital and atoms in molecules analyses have been performed to give more insight into the properties of hydrogen bonds involved in the more probable models.
1993-06-03
… and Murmansk laboratories. Working documents … planned for sites A4 and A5 to be conducted at site A4a … is the viscosity of fish flesh, where γa is the ratio of specific heats of air (γa = 1.4) and f0 is the resonance frequency of the swimbladder … regression equations … We have also used cw = 1.5 × 10⁵ cm/s. Thus, Eqs. (1)-(3) show that to calculate SL for a layer of fish …
Heightened odds of large earthquakes near Istanbul: An interaction-based probability calculation
Parsons; Toda; Stein; Barka; Dieterich
2000-04-28
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 +/- 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 +/- 12% during the next decade.
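For contrast with the interaction-based numbers above, a stationary Poisson model converts the reported 30-year probability into an annual rate and a 10-year probability; this is a deliberately simplified baseline, not the paper's time-dependent calculation.

```python
import math

def rate_from_prob(p, t):
    """Annual rate implied by exceedance probability p over t years,
    assuming a stationary Poisson process: p = 1 - exp(-rate * t)."""
    return -math.log(1.0 - p) / t

def prob_over(rate, t):
    return 1.0 - math.exp(-rate * t)

rate = rate_from_prob(0.62, 30.0)   # from the reported 30-year probability
p10 = prob_over(rate, 10.0)         # ~0.28, below the reported 32%
```

That the paper's 10-year value (32%) exceeds this time-independent baseline reflects the stress transferred by the 1999 Izmit earthquake, which raises the near-term hazard.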
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
A Microsoft Excel® 2010-based tool for calculating interobserver agreement.
Reed, Derek D; Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
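The same algorithms translate directly from a spreadsheet into code; this sketch implements three of the listed agreement indices, assuming interval counts from two observers (the data are hypothetical).

```python
def total_count_ioa(obs1, obs2):
    """Smaller session total divided by the larger, as a percentage."""
    a, b = sum(obs1), sum(obs2)
    return 100.0 * min(a, b) / max(a, b)

def interval_by_interval_ioa(obs1, obs2):
    """Percentage of intervals scored the same for occurrence/nonoccurrence."""
    hits = sum((x > 0) == (y > 0) for x, y in zip(obs1, obs2))
    return 100.0 * hits / len(obs1)

def exact_agreement_ioa(obs1, obs2):
    """Percentage of intervals with exactly equal counts."""
    hits = sum(x == y for x, y in zip(obs1, obs2))
    return 100.0 * hits / len(obs1)

o1, o2 = [2, 0, 1, 3], [2, 1, 1, 2]  # counts per interval, two observers
```

Note how the three indices disagree on the same data (100%, 75%, 50% here), which is why the choice of algorithm matters clinically.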
Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-jie; Zhuo, Zhong-xu; Fu, Jie-wen
2015-04-15
Highlights: • A thermodynamic equilibrium calculation was carried out. • Effects of three types of sulfur on Pb distribution were investigated. • Mechanisms by which the three types of sulfur act on Pb partitioning are proposed. • Lead partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three sulfur compounds (S, Na₂S and Na₂SO₄) added to the sludge could facilitate the volatilization of Pb into the gas phase (fly ash and flue gas) as metal sulfates, displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na₂SO₄ and Na₂S was superior to that of adding S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO₄(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO₂-, CaO-, TiO₂-, and Al₂O₃-containing materials function as condensed-phase solid sorbents in the temperature range of 800-1100 K to stabilize Pb. However, in the presence of sulfur or chlorine, or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the
NASA Astrophysics Data System (ADS)
Deng, Wanling; Huang, Junkai
2013-09-01
A physics-based explicit calculation of the height of the grain-boundary barrier has been derived, based on the quasi-two-dimensional approach at discrete grain boundaries. The analytical solution is obtained by using the Lambert W function, combining both the uniformly distributed deep states and the exponential tail states. The proposed scheme is demonstrated to be an accurate and computationally efficient solution in closed form, which can serve as a basis for the discrete-grain-based models of mobility and drain current in polysilicon thin-film transistors. It is verified successfully by comparison with both numerical simulation and experimental data.
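The closed-form solution rests on the Lambert W function; as a generic illustration (not the paper's actual barrier equation), W gives the explicit solution of the transcendental equation x = a·exp(-b·x), the kind of self-consistency condition that otherwise requires iteration.

```python
import numpy as np
from scipy.special import lambertw

# x = a*exp(-b*x)  =>  b*x*exp(b*x) = a*b  =>  x = W(a*b)/b
a, b = 2.0, 1.5                      # arbitrary illustrative constants
x = lambertw(a * b).real / b         # principal branch, real-valued here
residual = x - a * np.exp(-b * x)    # ~0: the closed form solves the equation
```

The same trick, multiplying through so the unknown appears as w·e^w, is what turns an implicit barrier-height condition into an explicit, iteration-free expression.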
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Mi, Songlin; Fan, Hongbo; Li, Zhining
2016-11-01
To obtain accurate magnetic gradient tensor data, a fast and robust calculation method based on a regularization method in the frequency domain is proposed. Using potential field theory, the transform formula in the frequency domain was deduced in order to calculate the magnetic gradient tensor from pre-existing total magnetic anomaly data. By analyzing the filter characteristics of the vertical vector transform operator (VVTO) and the gradient tensor transform operator (GTTO), we show that the conventional transform process is unstable, amplifying the high-frequency part of the data in which measurement noise resides. Because this instability leads to a low signal-to-noise ratio (SNR) in the calculated result, we introduce a regularized method in this paper. By selecting the optimum regularization parameters for the different transform phases using the C-norm approach, the high-frequency noise is restrained and the SNR improved effectively. Numerical analysis demonstrates that most values and characteristics of the data calculated by the proposed method compare favorably with reference magnetic gradient tensor data. In addition, the magnetic gradient tensor components calculated from a real aeromagnetic survey provided better resolution of the magnetic sources than the original profile.
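The instability and its regularized fix can be sketched with a generic frequency-domain operator; the Tikhonov-style damping below is an illustrative choice, not the paper's C-norm scheme.

```python
import numpy as np

def vertical_derivative(field, dx, dy, alpha=0.0):
    """First vertical derivative of a potential field computed in the
    frequency domain. The exact operator |k| amplifies high frequencies
    (where measurement noise lives); alpha > 0 damps them."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kgrid = np.hypot(*np.meshgrid(kx, ky))
    op = kgrid / (1.0 + alpha * kgrid**2)  # alpha = 0 recovers the exact operator
    return np.real(np.fft.ifft2(np.fft.fft2(field) * op))

# single-harmonic check: the vertical derivative of cos(k0*x) is k0*cos(k0*x)
k0 = 2.0 * np.pi * 4.0 / 64.0
field = np.tile(np.cos(k0 * np.arange(64)), (16, 1))
dz = vertical_derivative(field, 1.0, 1.0)
```

Since |k| grows without bound, any white noise in the anomaly grid is amplified linearly with wavenumber; the damped operator trades a small bias at long wavelengths for stability at short ones.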
NASA Astrophysics Data System (ADS)
Giannoglou, V.; Stylianidis, E.
2016-06-01
Scoliosis is a 3D deformity of the human spinal column caused by bending of the latter, leading to pain and aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important studies that have been done in the field of scoliosis concerning its digital visualisation, in order to provide more precise and robust identification and monitoring of scoliosis. The research is divided into four fields: X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist more accurate detection and monitoring of scoliosis.
Dahlgren, C; Sunqvist, T
1981-01-01
The correlation between the contact angle and the degree of phagocytosis of different yeast particles has been investigated. To facilitate the estimation of the contact angle, we tested the hypothesis that the shape of a small liquid drop placed on a flat surface is that of a truncated sphere. With this approximation it is possible to calculate the contact angle, i.e. the tangent to the drop at the three-phase liquid/solid/air meeting point, by measuring the drop diameter. Known volumes of saline were placed on different surfaces and the diameters of the drops were measured from above. Calculation of the contact angle with drops of different volumes, and comparison between the expected and measured heights of 10 µl drops, indicated that the assumption that the shape of a drop is that of a truncated sphere is valid. Monolayers of leukocytes were shown to give rise to a contact angle of 17.9 degrees. Particles with a lower contact angle than the phagocytic cells resisted phagocytosis, but opsonization of the particles with normal human serum rendered them susceptible to phagocytosis, conferring a higher contact angle than that of the phagocytic cells.
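The truncated-sphere calculation described above can be written out directly: given the drop volume and the measured top-view diameter, solve the spherical-cap volume relation for the cap height and convert it to an angle. A minimal sketch, assuming units of µl and mm (1 µl = 1 mm³):

```python
import math
from scipy.optimize import brentq

def contact_angle_deg(volume_ul, diameter_mm):
    """Contact angle of a sessile drop modeled as a spherical cap.
    The cap-volume relation V = pi*h*(3*a^2 + h^2)/6 is solved for the
    cap height h, then theta = 2*atan(h/a)."""
    a = diameter_mm / 2.0  # contact radius
    cap_volume = lambda h: math.pi * h * (3.0 * a**2 + h**2) / 6.0
    h_max = 2.0 * a + (6.0 * volume_ul / math.pi) ** (1.0 / 3.0)  # safe bracket
    h = brentq(lambda h: cap_volume(h) - volume_ul, 1e-12, h_max)
    return math.degrees(2.0 * math.atan2(h, a))

# hemisphere sanity check: V = (2/3)*pi*a^3 with a = 1 mm gives 90 degrees
theta = contact_angle_deg(2.0 * math.pi / 3.0, 2.0)
```

Spreading the same volume over a larger diameter lowers the cap and hence the angle, which is the monotonic relation the drop-diameter measurement exploits.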
Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M
2014-01-01
The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10⁴ m² to ∼10⁷ m². Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10² m². We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m⁻² K⁻¹ s⁻¹/² (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars. PMID:26213666
Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M
2014-08-01
The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10⁴ m² to ∼10⁷ m². Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10² m². We analyze three sols representing distinct environmental conditions and soil properties: sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia, I = 452 J m⁻² K⁻¹ s⁻¹/² (SI units used throughout this article), is found at YKB, followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
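Thermal inertia itself is a one-line combination of material properties, I = √(kρc); the regolith-like property values below are hypothetical, chosen only to land in the range reported above.

```python
import math

def thermal_inertia(k, rho, c):
    """I = sqrt(k * rho * c), in J m^-2 K^-1 s^-1/2, for thermal conductivity k
    [W m^-1 K^-1], bulk density rho [kg m^-3], and specific heat c [J kg^-1 K^-1]."""
    return math.sqrt(k * rho * c)

# hypothetical loose-soil values, comparable in scale to the RCK result
I = thermal_inertia(0.06, 1300.0, 830.0)
```

Because I depends on the square root of the product, coarser or more cemented material (higher k) raises the inertia, which is how the sol-to-sol differences between RCK, PL, and YKB are interpreted geologically.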
Zhang, Siyu; Yu, Gang; Chen, Jingwen; Zhao, Qing; Zhang, Xuejiao; Wang, Bin; Huang, Jun; Deng, Shubo; Wang, Yujue
2017-01-01
Ozonation is widely used in wastewater treatment plants to remove diverse organic micropollutants. Because the molecular structures of organic micropollutants contain multiple ozone-preferred reaction sites, and intermediate products can react with ozone again, the ozonation mechanism is complex. The fast-increasing number of organic micropollutants and the great demand for ecological risk assessment call for an in silico method that provides insight into the ozonation mechanisms of organic micropollutants. Here, an in silico model was developed to unveil the ozonation mechanisms of organic micropollutants, with sulfamethoxazole (SMX) taken as a case study. The model enumerates elementary reactions following well-accepted ozonation patterns and secondary transformation reactions established for intermediates by experiments. Density functional theory (DFT) calculations were employed to evaluate the thermodynamic feasibility of reaction pathways. By calculating Gibbs free energies, the ozonation products of SMX were predicted. The predicted products are consistent with those detected in experiments. This method has the advantage of revealing all possible reaction pathways, including minor pathways that produce toxic byproducts but are difficult to observe in experiments. Accordingly, water treatment engineers can set up the necessary treatment technology to ensure water safety.
TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors
Mitchell, T; Bush, K
2015-06-15
Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user's positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout needed to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.
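The perspective-correction step amounts to estimating a homography from the four detected corners; the direct-linear-transform sketch below (plain NumPy, hypothetical corner coordinates) illustrates the idea, and is not the study's actual pattern-recognition code.

```python
import numpy as np

def homography(src, dst):
    """3x3 projective transform mapping four source points to four
    destination points (h33 fixed to 1), found via an 8x8 linear system."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# hypothetical photographed corner positions mapped from a unit square
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (2.0, 0.1), (2.2, 1.9), (-0.1, 2.0)]
H = homography(src, dst)
```

Inverting this transform rectifies the photographed aperture to a metric grid, after which a pixel-to-voxel density assignment for the Monte Carlo geometry is straightforward.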
NASA Astrophysics Data System (ADS)
Xu, Xue-song; Wang, Sheng-wei
2012-03-01
In re-entry, the drilling riser hanging from the holding vessel is in a free-hanging state, waiting to be moved from its initial random position to the wellhead. For re-entry, dynamics calculations are often performed to predict the riser motion or evaluate the structural safety. A dynamics calculation method based on the Flexible Segment Model (FSM) is proposed for free-hanging marine risers. In FSM, a riser is discretized into a series of flexible segments. For each flexible segment, its deflection feature and external forces are analyzed independently. For the whole riser, the nonlinear governing equations are written according to the moment equilibrium at the nodes. For the solution of the nonlinear equations, a linearization iteration scheme is provided in the paper. Owing to its flexibility, each segment can match a long part of the riser body, so good results can be obtained even with a small number of segments. Moreover, the linearization iteration scheme avoids the widely used Newton-Raphson iteration scheme, in which the calculation stability is influenced by the initial points. The FSM-based dynamics calculation is time-saving and stable, and thus suitable for shape prediction or real-time control of free-hanging marine risers.
NASA Astrophysics Data System (ADS)
Hammitzsch, M.; Spazier, J.; Reißland, S.
2014-12-01
Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems importantly include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing, the concepts and paradigms introduced by continuously evolving approaches in information and communications technology (ICT) have to be considered even for early warning systems (EWS). Based on the experience and knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype that opens up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard-specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat-specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU-accelerated tsunami simulation computations has been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the
Calculating the detection limits of chamber-based soil greenhouse gas flux measurements
Technology Transfer Automated Retrieval System (TEKTRAN)
Renewed interest in quantifying greenhouse gas emissions from soil has lead to an increase in the application of chamber-based flux measurement techniques. Despite the apparent conceptual simplicity of chamber-based methods, nuances in chamber design, deployment, and data analyses can have marked ef...
Two interfacial shear strength calculations based on the single fiber composite test
NASA Astrophysics Data System (ADS)
Zhandarov, S. F.; Pisanova, E. V.
1996-07-01
The fragmentation of a single fiber embedded in a polymer matrix upon stretching (SFC test) provides valuable information on the fiber-matrix bond strength (τ), which determines stress transfer through the interface and thus significantly affects the mechanical properties of the composite material. However, the calculated bond strength depends on the data interpretation, i.e., on the applied theoretical model, since the direct result of the SFC test is the fiber fragment length distribution rather than the τ value. Two approaches are used in SFC testing for calculation of the bond strength: 1) the Kelly-Tyson model, in which the matrix is assumed to be perfectly plastic, and 2) the Cox model, which uses the elastic constants of the fiber and the matrix. In this paper, an attempt has been made to compare these two approaches employing theory as well as the experimental data of several authors. The dependence of the tensile stress in the fiber and the interfacial shear stress on various factors has been analyzed. For both models, the mean interfacial shear stress in a fragment of critical length (lc) was shown to satisfy the same formula ⟨τ⟩ = σcD/(2lc), where D is the fiber diameter and σc is the tensile strength of a fiber at a gauge length equal to lc. However, the critical lengths from the Kelly-Tyson approach and the Cox model are related differently to the fragment length distribution parameters, such as the mean fragment length. This discrepancy results in different ⟨τ⟩ values for the same experimental data set. While the main parameter in the Kelly-Tyson model assumed constant for a given fiber-matrix pair is the interfacial shear strength, the ultimate (local) bond strength τult may be seen as the corresponding parameter in the Cox model. Various τult values were obtained for carbon fiber-epoxy matrix systems by analyzing the data of continuously monitored single fiber composite tests. Whereas the mean value of the interfacial shear stress calculated in
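The formula shared by both models is a one-liner; the carbon-fiber numbers below are hypothetical but typical in scale.

```python
def mean_shear_stress(sigma_c, D, lc):
    """Mean interfacial shear stress <tau> = sigma_c * D / (2 * lc).
    Units must be consistent: here MPa for strength, mm for lengths."""
    return sigma_c * D / (2.0 * lc)

# hypothetical carbon fiber: strength 3000 MPa at gauge length lc,
# diameter 7 um = 0.007 mm, critical fragment length 0.35 mm
tau = mean_shear_stress(3000.0, 0.007, 0.35)
```

The divergence between the two models enters through lc: each model extracts a different critical length from the same measured fragment-length distribution, so the same formula returns different ⟨τ⟩ values.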
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.
Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar
2010-12-01
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.
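The two quantities compared in this kind of study, full-energy-peak efficiency and the deviation between measured and simulated curves, follow standard formulas; a minimal sketch (with hypothetical count, activity, and intensity values, not the paper's data):

```python
def fep_efficiency(net_counts, activity_bq, live_time_s, gamma_intensity):
    """Full-energy-peak efficiency from a point-source measurement:
    counts in the peak divided by photons emitted at that energy."""
    return net_counts / (activity_bq * live_time_s * gamma_intensity)

def mean_relative_deviation(measured, calculated):
    """Mean absolute relative deviation (%) between measured and
    Monte Carlo efficiencies over a set of gamma lines."""
    devs = [abs(c - m) / m for m, c in zip(measured, calculated)]
    return 100.0 * sum(devs) / len(devs)

# Hypothetical single-line example: 1000 net counts, 1 kBq source,
# 100 s live time, 50% emission probability
eff = fep_efficiency(1000.0, 1000.0, 100.0, 0.5)
```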
Morgan, Nicole Y.; Kramer-Marek, Gabriela; Smith, Paul D.; Camphausen, Kevin; Capala, Jacek
2011-01-01
The recent demonstration of nanoscale scintillators has led to interest in the combination of radiation and photodynamic therapy. In this model, scintillating nanoparticles conjugated to photosensitizers and molecular targeting agents would enhance the targeting and improve the efficacy of radiotherapy and extend the application of photodynamic therapy to deeply seated tumors. In this study, we calculated the physical parameters required for these nanoparticle conjugates to deliver cytotoxic levels of singlet oxygen at therapeutic radiation doses, drawing on the published literature from several disparate fields. Although uncertainties remain, it appears that the light yield of the nanoscintillators, the efficiency of energy transfer to the photosensitizers, and the cellular uptake of the nanoparticles all need to be fairly well optimized to observe a cytotoxic effect. Even so, the efficacy of the combination therapy will likely be restricted to X-ray energies below 300 keV, which limits the application to brachytherapy. PMID:19267550
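The chain of efficiencies described above (deposited dose → scintillation photons → energy transfer → singlet oxygen) multiplies together, which is what makes every factor matter. A hedged order-of-magnitude sketch, with all parameter values hypothetical rather than taken from the paper:

```python
def singlet_oxygen_per_cell(dose_Gy, cell_mass_kg, light_yield_per_MeV,
                            transfer_eff, so_quantum_yield):
    """Order-of-magnitude estimate of singlet-oxygen molecules generated:
    energy deposited in the cell -> scintillation photons -> transferred
    excitations -> singlet oxygen. All inputs are illustrative assumptions."""
    energy_MeV = dose_Gy * cell_mass_kg * 6.241509e12  # 1 J = 6.2415e12 MeV
    return energy_MeV * light_yield_per_MeV * transfer_eff * so_quantum_yield

# Hypothetical example: 2 Gy to a ~1 ng cell, 10^4 photons/MeV yield,
# 50% transfer efficiency, 50% singlet-oxygen quantum yield
n_so = singlet_oxygen_per_cell(2.0, 1e-12, 1e4, 0.5, 0.5)
```

Because the factors multiply, halving any one of them halves the final yield, which is why the abstract notes that light yield, transfer efficiency, and uptake must all be optimized together.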
NASA Astrophysics Data System (ADS)
Kim, Han Seul; Kim, Yong-Hoon
2015-03-01
We report on the development of a novel first-principles method for the calculation of non-equilibrium quantum transport processes. Within this scheme, the non-equilibrium situation and quantum transport under open-boundary conditions are described by the region-dependent Δ self-consistent field method and matrix Green's function theory, respectively. We discuss our solutions to the technical difficulties in describing bias-dependent electron transport at complicated nanointerfaces and present several application examples. This work was supported by the Global Frontier Program (2013M3A6B1078881), a Basic Science Research Grant (2012R1A1A2044793), the EDISON Program (No. 2012M3C1A6035684), and the 2013 Global Ph.D. fellowship program of the National Research Foundation, with computing resources from the KISTI Supercomputing Center (KSC-2014-C3-021).
Thermal state of SNPS "Topaz" units: calculation basis and experimental confirmation
Bogush, I.P.; Bushinsky, A.V.; Galkin, A.Y.; Serbin, V.I.; Zhabotinsky, E.E.
1991-01-01
Ensuring that the thermal-state parameters of thermionic space nuclear power system (SNPS) units remain within required limits in all operating regimes is a factor that determines SNPS lifetime. The thermal-state requirements differ markedly between units, so meeting them requires both an appropriate arrangement of the units in the SNPS power-generating module and the use of suitable control algorithms together with special thermal regulation and protection. Computer codes that determine the thermal transient performance of the liquid-metal loop and the main units were developed to provide the calculational basis for the required thermal state of SNPS "Topaz" units. The conformity of these parameters to the given requirements is confirmed by the results of autonomous unit tests, mock-up tests, power tests of ground SNPS prototypes, and flight tests of two SNPS "Topaz" systems.
Warwicker, Jim
2004-10-01
Ionizable groups play critical roles in biological processes. Computation of pKa values is complicated by model approximations and multiple conformations. Calculated and experimental pKa values are compared for relatively inflexible active-site side chains, to develop an empirical model for hydration entropy changes upon charge burial. The modification is found to be generally small, but large for cysteine, consistent with small molecule ionization data and with partial charge distributions in ionized and neutral forms. The hydration model predicts significant entropic contributions for ionizable residue burial, demonstrated for components in the pyruvate dehydrogenase complex. Conformational relaxation in a pH-titration is estimated with a mean-field assessment of maximal side chain solvent accessibility. All ionizable residues interact within a low protein dielectric finite difference (FD) scheme, and more flexible groups also access water-mediated Debye-Hückel (DH) interactions. The DH method tends to match overall pH-dependent stability, while FD can be more accurate for active-site groups. Tolerance for side chain rotamer packing is varied, defining access to DH interactions, and the best fit with experimental pKa values obtained. The new (FD/DH) method provides a fast computational framework for making the distinction between buried and solvent-accessible groups that has been qualitatively apparent from previous work, and pKa calculations are significantly improved for a mixed set of ionizable residues. Its effectiveness is also demonstrated with computation of the pH-dependence of electrostatic energy, recovering favorable contributions to folded state stability and, in relation to structural genomics, with substantial improvement (reduction of false positives) in active-site identification by electrostatic strain.
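The pH-dependence that such pKa calculations feed into follows from the Henderson-Hasselbalch relation. The sketch below shows only this textbook background relation, not the FD/DH method of the abstract:

```python
def ionized_fraction(pH, pKa, acid=True):
    """Fraction of a group in its ionized form (Henderson-Hasselbalch).

    For an acid (e.g. Asp, Glu, Cys) the ionized form is deprotonated;
    for a base (e.g. Lys, Arg, His) it is protonated."""
    exponent = (pKa - pH) if acid else (pH - pKa)
    return 1.0 / (1.0 + 10.0 ** exponent)

# Example: a cysteine whose calculated pKa is 9.0 is mostly neutral at pH 7
frac = ionized_fraction(7.0, 9.0, acid=True)
```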
Equatorial scintillation calculations based on coherent scatter radar and C/NOFS data
NASA Astrophysics Data System (ADS)
Costa, Emanoel; de Paula, Eurico R.; Rezende, L. F. C.; Groves, Keith M.; Roddy, Patrick A.; Dao, Eugene V.; Kelley, Michael C.
2011-04-01
During its transit through a region of equatorial ionospheric irregularities, sensors on board the Communication/Navigation Outage Forecasting System (C/NOFS) satellite provide a one-dimensional description of the medium, which can be extended to two dimensions if the structures are assumed to be elongated in the direction of the magnetic field lines. The C/NOFS scintillation calculation approach assumes that the medium is equivalent to a diffracting screen with random phase fluctuations that are proportional to the irregularities in the total electron content, specified through the product of the directly measured electron density by an estimated extent of the irregularity layer along the raypaths. Within the international collaborative effort anticipated by the C/NOFS Science Definition Team, the present work takes the vertical structure of the irregularities into more detailed consideration, which could lead to improved predictions of scintillation. Initially, it describes a flexible model for the power spectral density of the equatorial ionospheric irregularities, estimates its shape parameters from C/NOFS in situ data and uses the signal-to-noise ratio S/N measurements by the São Luís coherent scatter radar to estimate the mean square electron density fluctuation <ΔN2> within the corresponding sampled volume. Next, it presents an algorithm for the wave propagation through a three-dimensional irregularity layer which considers the variations of <ΔN2> along the propagation paths according to observations by the radar. Data corresponding to several range-time-intensity maps from the radar is used to predict time variations of the scintillation index S4 at the L1 Global Positioning System (GPS) frequency (1575.42 MHz). The results from the scintillation calculations are compared with corresponding measurements by the colocated São Luís GPS scintillation monitor for an assessment of the prediction capability of the present formulation.
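The scintillation index S4 that the radar- and C/NOFS-based predictions are compared against has a standard definition from received signal intensity. A minimal sketch (the intensity samples in the example are arbitrary, not measured data):

```python
import math

def s4_index(intensity):
    """Amplitude scintillation index S4: normalized standard deviation of
    received signal intensity, S4 = sqrt((<I^2> - <I>^2) / <I>^2)."""
    n = len(intensity)
    mean = sum(intensity) / n
    mean_sq = sum(i * i for i in intensity) / n
    return math.sqrt((mean_sq - mean * mean) / (mean * mean))

# A steady signal gives S4 = 0; stronger fading gives larger S4
s4_quiet = s4_index([1.0, 1.0, 1.0])
s4_fading = s4_index([0.5, 1.5])
```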
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines.
NASA Astrophysics Data System (ADS)
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
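The error-propagation mechanism can be illustrated with a 1D analogue: noise in the right-hand side of the Poisson equation (i.e., in the PIV-derived data) is integrated into the pressure, and the resulting error grows with domain size. The sketch below is a generic finite-difference Dirichlet solver, not the authors' analysis:

```python
def solve_poisson_1d(f, h):
    """Solve p'' = f on interior nodes with p = 0 at both ends (Dirichlet),
    using second-order finite differences and the Thomas algorithm."""
    n = len(f)
    a, b, c = 1.0 / h ** 2, -2.0 / h ** 2, 1.0 / h ** 2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, f[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    p = [0.0] * n
    p[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        p[i] = dp[i] - cp[i] * p[i + 1]
    return p

# A constant RHS perturbation eps (e.g., biased velocity data) produces a
# pressure error with maximum eps * L^2 / 8 on a domain of length L with
# zero Dirichlet ends, illustrating the domain-size dependence of the error.
```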
NASA Astrophysics Data System (ADS)
Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong
2009-09-01
To overcome the low speed and poor efficiency of current multilayer optical coating design when the number of layers is large, accurate and fast calculation of the merit function's gradient and Hessian matrix is required. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is implemented in Matlab. Theoretical and simulated results indicate that the model is mathematically rigorous and accurate, with precision limited only by the computer's floating-point arithmetic, and that it is fast. It is therefore well suited to improving the search speed and efficiency of local optimization methods based on derivatives of the merit function, and it performs particularly well in multilayer optical coating design with a large number of layers.
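As background, the matrix-method spectral calculation from which such gradients and Hessians are derived can be sketched as follows. This is a minimal normal-incidence, non-absorbing illustration, not the authors' Matlab implementation; the MgF2-on-glass example values are assumptions:

```python
import cmath, math

def reflectance(n0, layers, ns, wavelength):
    """Normal-incidence reflectance of a multilayer stack via the
    characteristic (transfer) matrix method.

    layers: list of (refractive_index, physical_thickness), ordered from
    the incident medium (index n0) toward the substrate (index ns)."""
    # Start from the substrate admittance and apply layer matrices
    # from the substrate side outward.
    B, C = 1.0 + 0j, ns + 0j
    for n, d in reversed(layers):
        delta = 2.0 * math.pi * n * d / wavelength  # layer phase thickness
        cosd, sind = cmath.cos(delta), cmath.sin(delta)
        B, C = cosd * B + 1j * sind / n * C, 1j * n * sind * B + cosd * C
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Example: a single quarter-wave MgF2 layer (n = 1.38) on glass (n = 1.52)
# at 550 nm; analytically R = ((n0*ns - n1^2) / (n0*ns + n1^2))^2.
R = reflectance(1.0, [(1.38, 550.0 / (4 * 1.38))], 1.52, 550.0)
```

A merit-function gradient can then be obtained by differentiating the layer matrices analytically, as the abstract describes, or approximated by finite differences of this function for checking.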
Calculation of Shuttle Base Heating Environments and Comparison with Flight Data
NASA Technical Reports Server (NTRS)
Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.
1983-01-01
The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much-needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift
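The simplest of the three models, the Kutta-Joukowski theorem, reduces to a one-line formula once the wake circulation has been extracted from the PIV field. A hedged sketch with hypothetical parrotlet-scale numbers (not the study's measured values):

```python
def kutta_joukowski_lift(rho, speed, circulation, span):
    """Quasi-steady lift L = rho * U * Gamma * b from measured wake
    circulation Gamma, flight speed U, and vortex span b."""
    return rho * speed * circulation * span

def weight_support_fraction(lift_N, mass_kg, g=9.81):
    """Fraction of body weight supported by the computed lift."""
    return lift_N / (mass_kg * g)

# Hypothetical values: air density 1.2 kg/m^3, U = 3 m/s,
# Gamma = 0.025 m^2/s, vortex span 0.2 m, bird mass 30 g
L = kutta_joukowski_lift(1.2, 3.0, 0.025, 0.2)
frac = weight_support_fraction(L, 0.030)
```

The abstract's point is that the result is sensitive to the chosen span and circulation contour, so the same wake data can yield rather different weight-support fractions under the three models.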
Sasagane, Kotoku
2008-09-17
The first half of the lecture presents the essence of the quasienergy derivative (QED) method and calculations of frequency-dependent hyperpolarizabilities based on it. Our recent developments and further possibilities of the QED method are then explained. The lecture closes by investigating whether the QED method can be extended to a numerical approach.
Rodrigo, E; Miñambres, E; Ruiz, J C; Ballesteros, A; Piñera, C; Quintanar, J; Fernández-Fresnedo, G; Palomar, R; Gómez-Alamillo, C; Arias, M
2012-01-01
Renal failure persisting after renal transplant is known as delayed graft function (DGF). DGF predisposes the graft to acute rejection and increases the risk of graft loss. In 2010, Irish et al. developed a new model designed to predict DGF risk. This model was used to program a web-based DGF risk calculator, which can be accessed via http://www.transplantcalculator.com . The predictive performance of this score has not been tested in a different population. We analyzed 342 deceased-donor adult renal transplants performed in our hospital. Individual and population DGF risk was assessed using the web-based calculator. The area under the ROC curve to predict DGF was 0.710 (95% CI 0.653-0.767, p < 0.001). The "goodness-of-fit" test demonstrates that the DGF risk was well calibrated (p = 0.309). Graft survival was significantly better for patients with a lower DGF risk (5-year survival 71.1% vs. 60.1%, log rank p = 0.036). The model performed well with good discrimination ability and good calibration to predict DGF in a single transplant center. Using the web-based DGF calculator, we can predict the risk of developing DGF with a moderate to high degree of certainty only by using information available at the time of transplantation.
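The discrimination statistic reported above (area under the ROC curve) has a simple rank-based form. The sketch below uses the Mann-Whitney pairwise formulation on made-up labels and risk scores, not the study's patient data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    probability that a randomly chosen positive case (label 1) is scored
    higher than a randomly chosen negative case (label 0); ties count 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = developed DGF, scores are predicted risks
auc = roc_auc([1, 0, 1, 0], [0.8, 0.6, 0.4, 0.2])
```

(The O(P·N) pairwise loop is fine for a sketch; rank-based implementations scale better for large cohorts. Calibration, the other property checked in the abstract, requires a separate goodness-of-fit test.)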
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C
2015-06-15
Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present for the first time, to our best knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the protons depth dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm3. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.
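The structure of a pencil-beam dose kernel (central-axis depth term with an inverse-square correction, times a Gaussian off-axis term) can be sketched as below. The central-axis shape and all parameter values here are crude illustrative assumptions, not the authors' model; the point is the factorization, which also makes the per-voxel evaluation embarrassingly parallel on a GPU:

```python
import math

def pencil_beam_dose(z_cm, r_cm, range_cm=15.0, ssd_cm=100.0, sigma_cm=0.5):
    """Toy proton pencil-beam dose at depth z and off-axis distance r.

    central : crude power-law Bragg-like rise toward the range (illustrative
              only, clamped near z = range to avoid the singularity)
    inv_sq  : inverse-square correction for source-to-point distance
    lateral : normalized Gaussian off-axis term of width sigma"""
    central = max(range_cm - z_cm, 0.05) ** -0.435
    inv_sq = (ssd_cm / (ssd_cm + z_cm)) ** 2
    lateral = (math.exp(-r_cm * r_cm / (2.0 * sigma_cm ** 2))
               / (2.0 * math.pi * sigma_cm ** 2))
    return central * inv_sq * lateral
```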
NASA Astrophysics Data System (ADS)
Reyer, Dorothea; Philipp, Sonja
2014-05-01
It is desirable to enlarge the profit margin of geothermal projects by reducing the total drilling costs considerably. Substantiated assumptions on uniaxial compressive strengths and failure criteria are important to avoid borehole instabilities and adapt the drilling plan to rock mechanical conditions to minimise non-productive time. Because core material is rare we aim at predicting in situ rock properties from outcrop analogue samples which are easy and cheap to provide. The comparability of properties determined from analogue samples with samples from depths is analysed by performing physical characterisation (P-wave velocities, densities), conventional triaxial tests, and uniaxial compressive strength tests of both quarry and equivalent core samples. "Equivalent" means that the quarry sample is of the same stratigraphic age and of comparable sedimentary facies and composition as the correspondent core sample. We determined the parameters uniaxial compressive strength (UCS) and Young's modulus for 35 rock samples from quarries and 14 equivalent core samples from the North German Basin. A subgroup of these samples was used for triaxial tests. For UCS versus Young's modulus, density and P-wave velocity, linear- and non-linear regression analyses were performed. We repeated regression separately for clastic rock samples or carbonate rock samples only as well as for quarry samples or core samples only. Empirical relations were used to calculate UCS values from existing logs of sampled wellbore. Calculated UCS values were then compared with measured UCS of core samples of the same wellbore. With triaxial tests we determined linearized Mohr-Coulomb failure criteria, expressed in both principal stresses and shear and normal stresses, for quarry samples. Comparison with samples from larger depths shows that it is possible to apply the obtained principal stress failure criteria to clastic and volcanic rocks, but less so for carbonates. Carbonate core samples have higher
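The empirical relations used to estimate UCS from logs are regressions of the kind sketched below. This is a generic ordinary-least-squares fit with hypothetical sample values, not the study's calibrated relations:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, e.g., UCS [MPa] versus
    P-wave velocity [km/s] for outcrop analogue samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical (Vp, UCS) pairs; the fitted line would then be applied to
# sonic-log Vp values to predict UCS at depth.
a, b = linear_fit([2.5, 3.0, 3.5, 4.0], [40.0, 60.0, 85.0, 110.0])
ucs_predicted = a * 3.2 + b
```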
NASA Astrophysics Data System (ADS)
Green, A. T.
Beam emittance is an important characteristic describing charged particle beams. In linear accelerators (linac), it is critical to characterize the beam phase space parameters and, in particular, to precisely measure transverse beam emittance. The quadrupole scan (quad-scan) is a well-established technique used to characterize transverse beam parameters in four-dimensional phase space, including beam emittance. A computational algorithm with PYTHON scripts has been developed to estimate beam parameters, in particular beam emittance, using the quad-scan technique in the electron linac at the Fermilab Accelerator Science and Technology (FAST) facility. This script has been implemented in conjunction with an automated quad-scan tool (also written in PYTHON) and has decreased the time it takes to perform a single quad-scan from an hour to a few minutes. From the experimental data, the emittance calculator quickly delivers several results including: geometrical and normalized transverse emittance, Courant-Snyder parameters, and plots of the beam size versus quadrupole field strength, among others. This paper will discuss the details of the techniques used, the results from several quad-scans performed at FAST during the electron injector commissioning, and the PYTHON code used to obtain the results.
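The core of a quad-scan analysis is fitting the measured beam size squared as a parabola in quad strength and unfolding the beam matrix. The sketch below uses a thin-lens quad plus drift model with three exact sample points; it is an illustration of the technique, not the FAST tool:

```python
import math

def emittance_from_quadscan(points, drift_L):
    """Geometric emittance from three (K, sigma_meas^2) quad-scan points.

    Thin-lens model: with integrated quad strength K = k*l [1/m] and a
    drift of length L [m] to the screen,
      sigma^2(K) = (1 - L*K)^2 s11 + 2 (1 - L*K) L s12 + L^2 s22,
    a parabola a*K^2 + b*K + c whose coefficients give the beam matrix."""
    (x1, y1), (x2, y2), (x3, y3) = points
    # Exact quadratic through the three points (divided differences)
    a = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)
    b = (y2 - y1) / (x2 - x1) - a * (x1 + x2)
    c = y1 - a * x1 ** 2 - b * x1
    # Unfold the beam matrix elements
    s11 = a / drift_L ** 2
    s12 = (-b - 2.0 * drift_L * s11) / (2.0 * drift_L ** 2)
    s22 = (c - s11 - 2.0 * drift_L * s12) / drift_L ** 2
    return math.sqrt(s11 * s22 - s12 ** 2)  # rms emittance [m rad]
```

A real analysis fits many scan points by least squares instead of three exactly, and converts geometric to normalized emittance with the beam's relativistic beta-gamma.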
Highly correlated configuration interaction calculations on water with large orbital bases
NASA Astrophysics Data System (ADS)
Almora-Díaz, César X.
2014-05-01
A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree, and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, -76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the "experimental" value. Despite that the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the determination of the extrapolated energies to the complete basis set do not allow to determine a reliable estimation of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).
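The complete-basis-set extrapolation mentioned above is commonly done with a two-point inverse-cube formula in the cardinal number of the basis. A minimal sketch of that standard formula (generic, not the paper's specific extrapolation scheme):

```python
def cbs_two_point(e_n, n, e_m, m):
    """Two-point complete-basis-set extrapolation of correlation energies,
    assuming E(x) = E_CBS + A * x**-3 for cardinal numbers x = m, n."""
    return (n ** 3 * e_n - m ** 3 * e_m) / (n ** 3 - m ** 3)

# Hypothetical energies (hartree) from 5Z and 6Z bases
e_cbs = cbs_two_point(-76.4343, 6, -76.4320, 5)
```

As the abstract notes, scatter in the finite-basis energies propagates into such extrapolations, which is why the full-CI estimate carries a ~0.6 mhartree uncertainty despite very tight upper bounds.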
Structural and vibrational study of primidone based on monomer and dimer calculations.
Celik, Sefa; Kecel-Gunduz, Serda; Ozel, Aysen E; Akyuz, Sevim
2015-01-01
Primidone (Mysoline), with the chemical formula 5-ethyl-5-phenyl-hexahydropyrimidine- 4,6-dione (C12H14N2O2), has been a valuable drug in the treatment of epilepsy. In the present work, the experimental IR and Raman spectra of solid phase primidone were recorded, and the results were compared with theoretical wavenumber values of monomer and dimer forms of the title molecule. Vibrational spectral simulations in the dimer form were carried out to improve the assignment of the bands in the solid phase experimental spectra. The possible stable conformers of the free molecule were searched by means of torsion potential energy surface scans through two dihedral angles. The molecular geometries of the monomer and dimer forms of the title molecule were optimized using the DFT method at the B3LYP/6-31++G(d,p) level of theory. The contributions of internal coordinates (stretching, bending, etc.) to each normal mode of vibration were determined using potential energy distributions (PEDs). Further, the HOMO-LUMO energy gap and NBO properties of the investigated molecule in monomer and dimer forms were also calculated.
Ray-Based Calculations with DEPLETE of Laser Backscatter in ICF Targets
Strozzi, D J; Williams, E; Hinkel, D; Froula, D; London, R; Callahan, D
2008-05-19
A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code Deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as 'plane-wave' simulations with the paraxial propagation code pF3D. Comparisons with Brillouin-scattering experiments at the Omega Laser Facility show that laser speckles greatly enhance the reflectivity over the Deplete results. An approximate upper bound on this enhancement is given by doubling the Deplete coupling coefficient. Analysis with Deplete of an ignition design for the National Ignition Facility (NIF), with a peak radiation temperature of 285 eV, shows encouragingly low reflectivity. Doubling the coupling to bracket speckle effects suggests a less optimistic picture. Re-absorption of Raman light is seen to be significant in this design.
Vandenberghe, William G.; Fischetti, Massimo V.
2014-11-07
Monolayers of tin (stannanane) functionalized with halogens have been shown to be topological insulators. Using density functional theory (DFT), we study the electronic properties and room-temperature transport of nanoribbons of iodine-functionalized stannanane showing that the overlap integral between the wavefunctions associated to edge-states at opposite ends of the ribbons decreases with increasing width of the ribbons. Obtaining the phonon spectra and the deformation potentials also from DFT, we calculate the conductivity of the ribbons using the Kubo-Greenwood formalism and show that their mobility is limited by inter-edge phonon backscattering. We show that wide stannanane ribbons have a mobility exceeding 10^6 cm^2/Vs. Contrary to ordinary semiconductors, two-dimensional topological insulators exhibit a high conductivity at low charge density, decreasing with increasing carrier density. Furthermore, the conductivity of iodine-functionalized stannanane ribbons can be modulated over a range of three orders of magnitude, thus rendering this material extremely interesting for classical computing applications.
DFT-Based Electronic Structure Calculations on Hybrid and Massively Parallel Computer Architectures
NASA Astrophysics Data System (ADS)
Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry
2014-03-01
The latest generation of supercomputers is capable of multi-petaflop peak performance, achieved by using thousands of multi-core CPUs, often coupled with thousands of GPUs. However, efficient utilization of this computing power for electronic structure calculations presents significant challenges. We describe adaptations of the Real-Space Multigrid (RMG) code that enable it to scale well to thousands of nodes. A hybrid technique that uses one MPI process per node, rather than one per core, was adopted, with OpenMP and POSIX threads used for intra-node parallelization. This reduces the number of MPI processes by an order of magnitude or more and improves individual node memory utilization. GPU accelerators are also becoming common and are capable of extremely high performance for vector workloads. However, they typically have much lower scalar performance than CPUs, so achieving good performance requires that the workload is carefully partitioned and data transfer between CPU and GPU is optimized. We have used a hybrid approach utilizing MPI/OpenMP/POSIX threads and GPU accelerators to reach excellent scaling to over 100,000 cores on a Cray XE6 platform, as well as a factor of three performance improvement when using a Cray XK7 system with CPU-GPU nodes.
Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M
2016-11-01
A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), developed by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correction. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm2, 5×5 cm2, and 10×2 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2×2 cm2 field size where the CCC algorithm underestimated the depth dose by ∼5% in a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm showed these effects poorly. PACS number(s): 87.53.Bn, 87.55.dh, 87.55.km.
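Agreement criteria of the form "2%/2 mm" combine a dose difference and a distance-to-agreement into a gamma index. The sketch below is a brute-force 1D version of that standard comparison (generic, with made-up profile values, not the study's film data):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=0.2):
    """1D global gamma index per reference point.

    dd  : dose criterion as a fraction of the reference maximum (2% -> 0.02)
    dta : distance-to-agreement criterion in cm (2 mm -> 0.2)
    A point passes the dd/dta test when its gamma value is <= 1."""
    d_max = max(ref_dose)
    return [min(math.sqrt(((xe - xr) / dta) ** 2 +
                          ((de - dr) / (dd * d_max)) ** 2)
                for xe, de in zip(eval_pos, eval_dose))
            for xr, dr in zip(ref_pos, ref_dose)]
```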
Mai, V. T.; Fujii, T.; Wada, K.; Kitada, T.; Takaki, N.; Yamaguchi, A.; Watanabe, H.; Unesaki, H.
2012-07-01
Given the importance of thorium data and concerns about the accuracy of the Th-232 cross-section library, a series of thorium critical-core experiments carried out at the KUCA facility of the Kyoto University Research Reactor Institute has been analyzed. The core was composed of pure thorium plates and 93% enriched uranium plates, with a solid polyethylene moderator, a hydrogen to U-235 ratio of 140 and a Th-232 to U-235 ratio of 15.2. Calculations of the effective multiplication factor, the control rod worth and the reactivity worth of the Th plates were conducted with the MVP code using the JENDL-4.0 library [1]. At the experiment site, after achieving the critical state with 51 fuel rods inserted in the reactor, the reactivity worths of the control rods and of a thorium sample were measured. Compared with the experimental data, the calculation overestimates the effective multiplication factor by about 0.90%. The MVP evaluation of the control rod worth is acceptable, with a maximum discrepancy on the order of the statistical error of the measured data. The calculated results agree with the measured ones to within 3.1% for the reactivity worth of one Th plate. This investigation shows that further experiments and research on the Th-232 cross-section library are needed to provide more reliable data for thorium-based fuel core design and safety calculations. (authors)
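The quantities compared above derive from the effective multiplication factor. A minimal sketch of the C/E (calculation-over-experiment) comparison, using illustrative numbers rather than the KUCA measurements:

```python
def reactivity(k_eff):
    """Reactivity in units of dk/k: rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def c_over_e_bias_percent(k_calc, k_exp):
    """Relative bias of a calculated multiplication factor against
    experiment, in percent: (C/E - 1) * 100."""
    return (k_calc / k_exp - 1.0) * 100.0

# Illustrative values only (not the KUCA data): a calculation that
# overestimates a just-critical experiment (k_exp = 1.0) by 0.90%.
bias = c_over_e_bias_percent(1.009, 1.000)   # percent overestimation
rho = reactivity(1.009)                      # reactivity implied by k_calc
```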
Calculating the detection limits of chamber-based greenhouse gas flux measurements
Technology Transfer Automated Retrieval System (TEKTRAN)
Chamber-based measurement of greenhouse gas emissions from soil is a common technique. However, when changes in chamber headspace gas concentrations are small over time, determination of the flux can be problematic. Several factors contribute to the reliability of measured fluxes, including: samplin...
2012-08-01
characteristic and efficiency maps. Rigorous dynamometer testing has been performed to characterize the main hybrid components. The data gathered from the... dynamometer testing was used to further fine-tune and improve the vehicle simulation and control software. Samples of the types of characterization data... characteristics are confirmed during the dynamometer testing phases and fed back into the base simulation to adjust control parameters and strategy
Moustafa, Nagy Emam; Eissa, Elham Ahmed
2007-11-01
The Flory-Huggins interaction parameter χ1,2(∞), the solubility parameter δ2 and its hydrogen-bonding component δh were determined using inverse gas chromatography (IGC). These parameters were successfully used to probe the chemical changes that occur during the oxidation of naphthenic and paraffinic base oils in a GC column. Changes in the χ1,2(∞) values reflect the different types of intermolecular interactions (dispersive, polar, hydrogen bonding) of a given lubricating base oil during oxidation. The results showed that the δh component of the solubility parameter is the most important parameter for probing the oxidative chemical changes during the oxidation of the given lubricating oils.
NASA Astrophysics Data System (ADS)
Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.
2006-12-01
Abacus-based mental calculation is a unique part of Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD-contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on a virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, greater involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and low-level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, was noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts can display exceptional computational skills compared to non-experts after intensive training.
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2017-04-07
A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit-coupling-including, Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, in which each term of the diamagnetic and paramagnetic contributions to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with respect to the magnetic field B and the nuclear magnetic moment. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values of 0.01%-0.76%. Since the two-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling as (2M)³ (M: number of basis functions).
Grafton, Anthony K
2007-05-01
This report describes the development and applications of a software package called Vibalizer, the first and only method that provides free, fast, interactive, and quantitative comparison and analysis of calculated vibrational modes. Using simple forms and menus in a web-based interface, Vibalizer permits the comparison of vibrational modes from different, but similar molecules and also performs rapid calculation and comparison of isotopically substituted molecules' normal modes. Comparing and matching complex vibrational modes can be completed in seconds with Vibalizer, whereas matching vibrational modes manually can take hours and gives only qualitative comparisons subject to human error and differing individual judgments. In addition to these core features, Vibalizer also provides several other useful features, including the ability to automatically determine first-approximation mode descriptions, to help users analyze the results of vibrational frequency calculations. Because the software can be dimensioned to handle almost arbitrarily large systems, Vibalizer may be of particular use when analyzing the vibrational modes of complex systems such as proteins and extended materials systems. Additionally, the ease of use of the Vibalizer interface and the straightforward interpretation of results may find favor with educators who incorporate molecular modeling into their classrooms. The Vibalizer interface is available for free use at http://www.compchem.org, and it is also available as a locally-installable package that will run on a Linux-based web server.
Skachkov, Dmitry; Krykunov, Mykhaylo; Kadantsev, Eugene; Ziegler, Tom
2010-05-11
We present here a method that can calculate NMR shielding tensors from first principles for systems with translational invariance. Our approach is based on Kohn-Sham density functional theory and gauge-including atomic orbitals. Our scheme determines the shielding tensor as the second derivative of the total electronic energy with respect to an external magnetic field and a nuclear magnetic moment. The induced current density due to a periodic perturbation from nuclear magnetic moments is obtained through numerical differentiation, whereas the influence of the responding perturbation in terms of the external magnetic field is evaluated analytically. The method is implemented into the periodic program BAND. It employs a Bloch basis set made up of Slater-type or numeric atomic orbitals and represents the Kohn-Sham potential fully without the use of effective core potentials. Results from calculations of NMR shielding constants based on the present approach are presented for isolated molecules as well as systems with one-, two- and three-dimensional periodicity. The reported values are compared to experiment and results from calculations on cluster models.
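The mixed-derivative structure described above - the shielding tensor as a second derivative of the energy with respect to B and a nuclear magnetic moment, with one of the derivatives taken numerically - can be illustrated on a toy energy function. This is a sketch of the finite-difference idea only, not the BAND implementation:

```python
def mixed_second_derivative(E, b0=0.0, m0=0.0, h=1e-4):
    """Central-difference estimate of d^2 E / (dB dmu) at (b0, m0),
    mirroring numerical differentiation of a total energy with respect
    to a field B and a nuclear magnetic moment mu."""
    return (E(b0 + h, m0 + h) - E(b0 + h, m0 - h)
            - E(b0 - h, m0 + h) + E(b0 - h, m0 - h)) / (4.0 * h * h)

# Toy energy with a known bilinear coupling sigma * B * mu; the mixed
# second derivative recovers sigma, while the pure B^2 and mu^2 terms
# cancel in the central-difference stencil.
sigma = 123.4
E = lambda B, mu: 0.5 * B**2 + sigma * B * mu - 2.0 * mu**2
est = mixed_second_derivative(E)
```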
McGee, K. P.; Lake, D.; Mariappan, Y; Hubmayr, R. D.; Manduca, A.; Ansell, K.; Ehman, R. L.
2011-01-01
Magnetic resonance elastography (MRE) is a non-invasive, phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear wave source and motion-encoding gradient waveforms within the MRE pulse sequence enables visualization of the propagating shear wave throughout the medium under investigation. Encoded shear-wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimates is that the algorithms employed typically calculate shear stiffness using relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low-SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods, but at the expense of loss of spatial information within the region of interest from which the principal frequency estimate is derived. PMID:21701049
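The principal-frequency idea can be sketched under the simplifying assumptions of a homogeneous, purely elastic medium and a 1D displacement profile (the study itself works on 2D displacement maps): the dominant spatial frequency k gives the wavelength, the excitation frequency gives the wave speed c = f·λ, and stiffness follows as μ = ρc².

```python
import numpy as np

def stiffness_from_principal_frequency(u, dx, f_mech, rho=1000.0):
    """Estimate shear stiffness mu = rho * (f_mech / k)^2 from the
    principal spatial frequency k (cycles/m) of a 1D displacement
    profile u sampled every dx metres. Assumes a purely elastic,
    homogeneous medium - a simplification of full MRE inversion."""
    spectrum = np.abs(np.fft.rfft(u - u.mean()))   # drop DC offset
    freqs = np.fft.rfftfreq(u.size, d=dx)          # spatial freq, cycles/m
    k = freqs[spectrum.argmax()]                   # principal frequency
    wavelength = 1.0 / k
    c = f_mech * wavelength                        # shear wave speed, m/s
    return rho * c**2                              # shear stiffness, Pa

# Synthetic 60 Hz shear wave with a 2 cm wavelength in soft tissue:
dx = 1e-4                          # 0.1 mm sampling
x = np.arange(2000) * dx           # 20 cm profile
u = np.sin(2 * np.pi * x / 0.02)
mu = stiffness_from_principal_frequency(u, dx, f_mech=60.0)
# c = 60 Hz * 0.02 m = 1.2 m/s, so mu = 1000 * 1.2^2 = 1440 Pa
```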
Analytical Calculation of Sensing Parameters on Carbon Nanotube Based Gas Sensors
Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi.; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh
2014-01-01
Carbon Nanotubes (CNTs) are generally nano-scale tubes comprising a network of carbon atoms in a cylindrical setting that compared with silicon counterparts present outstanding characteristics such as high mechanical strength, high sensing capability and large surface-to-volume ratio. These characteristics, in addition to the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for use in sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor in which the CNT conductance change resulting from the chemical reaction between NH3 and CNT has been employed to model the sensing mechanism with proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I–V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research. PMID:24658617
Analytical calculation of sensing parameters on carbon nanotube based gas sensors.
Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh
2014-03-20
Carbon Nanotubes (CNTs) are generally nano-scale tubes comprising a network of carbon atoms in a cylindrical setting that compared with silicon counterparts present outstanding characteristics such as high mechanical strength, high sensing capability and large surface-to-volume ratio. These characteristics, in addition to the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for use in sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor in which the CNT conductance change resulting from the chemical reaction between NH3 and CNT has been employed to model the sensing mechanism with proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I-V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research.
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, as well as on the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even when it succeeds, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on meridional (through-flow) calculation for the preliminary design of mixed-flow turbomachines, and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method underlying the proposed design process is presented first. The theoretical framework is developed; since meridional calculation remains fundamentally an iterative process, the calculation procedure is also presented, including the numerical methods used to solve the fundamental equations. The meridional code written during this master's project is validated against a meridional calculation algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachine design methodology developed in this study is then presented as a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: meridional calculation is used for preliminary sizing, followed by 2D blade-cascade simulations for the detailed design of the blades, and finally a 3D numerical analysis for validation and fine optimization of the geometry. The calculation results …
NASA Astrophysics Data System (ADS)
Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun
2013-04-01
Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm
2013-04-21
Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from (125)I, (169)Yb and (192)Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
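The conversions reviewed above interpolate between two cavity-size limits via Burlin theory: the stopping-power-ratio (small-cavity) limit and the mass-energy-absorption-ratio (large-cavity) limit. A minimal sketch with hypothetical ratio values, not the tabulated data of the paper:

```python
def burlin_conversion(d_med, sw_med, mu_en_ratio, d_param):
    """Burlin cavity theory: convert dose to medium into dose to a
    water cavity of intermediate size,
        Dw,med = Dmed * [d * s_w,med + (1 - d) * (mu_en/rho ratio)].
    d_param -> 1 recovers the small-cavity (stopping-power ratio)
    limit; d_param -> 0 the large-cavity (mass energy absorption)
    limit."""
    f = d_param * sw_med + (1.0 - d_param) * mu_en_ratio
    return d_med * f

# Hypothetical water/medium ratios (placeholders, not tabulated values)
# at a low photon energy where the two limits differ appreciably:
d_small = burlin_conversion(2.0, sw_med=1.12, mu_en_ratio=0.25, d_param=1.0)
d_large = burlin_conversion(2.0, sw_med=1.12, mu_en_ratio=0.25, d_param=0.0)
```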
NASA Astrophysics Data System (ADS)
Schäuble, Holger; Marinoni, Oswald; Hinderer, Matthias
2008-06-01
This paper presents a new approach to calculate flow accumulation with geographic information systems (GIS). It is based on the well-known D8 single-flow algorithm that is extended to consider the trap-efficiencies of dams and their specific operation time. This allows realistic calculations of flow accumulation for any time period. The new approach is not restricted to surface water runoff but can be applied to all kinds of mass fluxes like suspended or dissolved sediment load (weighted flow accumulation). To facilitate its use, two GIS extensions for ArcView and ArcGIS have been developed. This paper presents the principles of the new approach, the functionality of the extensions and gives some applications in the fields of hydrology and sedimentology.
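The underlying D8 single-flow scheme that this approach extends can be sketched as follows; the paper's dam trap-efficiency weighting is noted in a comment but not implemented here:

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Plain D8 flow accumulation: each cell sends its accumulated
    area to the single lowest of its 8 neighbours. Processing cells
    from highest to lowest elevation guarantees that donor cells are
    handled before their receivers. (The extension described in the
    paper would additionally multiply each transfer by a dam
    trap-efficiency factor for the cell's operation period.)"""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)        # each cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]    # flattened indices, highest first
    for idx in order:
        r, c = divmod(idx, cols)
        best, target = dem[r, c], None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    if dem[rr, cc] < best:
                        best, target = dem[rr, cc], (rr, cc)
        if target is not None:                  # skip pits / the outlet
            acc[target] += acc[r, c]
    return acc

# A tilted 3x3 plane: everything drains toward the lowest corner.
dem = np.array([[9.0, 8.0, 7.0],
                [6.0, 5.0, 4.0],
                [3.0, 2.0, 1.0]])
acc = d8_flow_accumulation(dem)   # acc[2, 2] collects the whole catchment
```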
Wu, Anthony; Lovett, David; McEwan, Matthew; Cecelja, Franjo; Chen, Tao
2016-11-01
This paper presents a spreadsheet calculator to estimate biogas production and the operational revenue and costs for UK-based farm-fed anaerobic digesters. There exist sophisticated biogas production models in published literature, but the application of these in farm-fed anaerobic digesters is often impractical. This is due to the limited measuring devices, financial constraints, and the operators being non-experts in anaerobic digestion. The proposed biogas production model is designed to use the measured process variables typically available at farm-fed digesters, accounting for the effects of retention time, temperature and imperfect mixing. The estimation of the operational revenue and costs allow the owners to assess the most profitable approach to run the process. This would support the sustained use of the technology. The calculator is first compared with literature reported data, and then applied to the digester unit on a UK Farm to demonstrate its use in a practical setting.
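As a hedged sketch of the kind of simplified production model such a calculator might use - an ideal first-order CSTR conversion with an exponential temperature correction and a lumped mixing factor; every parameter value below is a placeholder, not the paper's calibration:

```python
import math

def biogas_yield(vs_load, b0=0.45, k_ref=0.10, t_ref=35.0,
                 temp=35.0, hrt=30.0, mixing_eff=1.0):
    """Illustrative daily biogas estimate (m^3/d) for a farm digester.
    First-order substrate conversion in an ideal CSTR,
        X = k*HRT / (1 + k*HRT),
    with an Arrhenius-like temperature correction on the rate constant
    and a lumped mixing-efficiency factor for imperfect mixing.
    vs_load: volatile solids fed per day (kg VS/d);
    b0: ultimate yield (m^3 biogas per kg VS) - placeholder values."""
    k = k_ref * math.exp(0.035 * (temp - t_ref))   # rate constant, 1/d
    conversion = k * hrt / (1.0 + k * hrt)
    return vs_load * b0 * conversion * mixing_eff

# 500 kg VS/d at the reference temperature with a 30-day retention time:
daily = biogas_yield(vs_load=500.0)
```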
NASA Astrophysics Data System (ADS)
Cimrman, Robert; Tůma, Miroslav; Novák, Matyáš; Čertík, Ondřej; Plešek, Jiří; Vackář, Jiří
2013-10-01
Ab initio calculations of electronic states within the density-functional framework have been performed by means of the open source finite element package SfePy (Simple Finite Elements in Python, http://sfepy.org). We describe a new robust ab initio real-space code based on (i) density functional theory, (ii) the finite element method and (iii) environment-reflecting pseudopotentials. This approach brings a new quality to solving the Kohn-Sham equations, calculating electronic states, total energy, Hellmann-Feynman forces and material properties, particularly for non-crystalline, non-periodic structures. The main asset of the above approach is an efficient combination of the excellent convergence control of the standard, universal basis used in the industrially proven finite-element method, the high precision of ab initio environment-reflecting pseudopotentials, and applicability not restricted to electrically neutral, periodic environments. We also present numerical examples illustrating the outputs of the method.
Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han
2013-04-02
The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). The G3, SCS-MP2 and M11-L methods coupled with the SMD and SM8 solvation models perform well for alkanolamines, with mean unsigned errors below 0.20 pKa units in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between the experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach.
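Behind such calculations is the thermodynamic relation pKa = ΔG_aq / (RT ln 10), where ΔG_aq is the aqueous deprotonation free energy; at 298 K one pKa unit corresponds to about 5.71 kJ/mol, which is why sub-0.20-unit errors demand very accurate free energies. A minimal sketch with an illustrative ΔG value:

```python
import math

R = 8.31446e-3   # gas constant, kJ/(mol K)
T = 298.15       # temperature, K

def pka_from_deltaG(dG_kJmol):
    """pKa from the aqueous deprotonation free energy (kJ/mol):
    pKa = dG / (RT ln 10)."""
    return dG_kJmol / (R * T * math.log(10))

# Illustrative only: a deprotonation free energy of ~54.2 kJ/mol
# corresponds to a pKa near 9.5 (roughly ethanolamine-like).
pka = pka_from_deltaG(54.2)
```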
An off-line data processing system for biochemistry profiling based on a 2K programmable calculator.
James, R M
1976-03-01
An off-line data processing system based on a Hewlett Packard 2K programmable calculator to be used with a biochemistry profiling system is described. The program is in two sections. A Data Acquisition phase calculates results from Auto Analyser II peak heights after corrections for drift and stores them on magnetic tape cassettes. Quality control statistics are produced. A Reporting phase types the profile results on self-adhesive pre-printed labels to be attached to the test-request form and also prepares a laboratory record sheet. The system is routinely used to process up to 2000 peak heights per day. Non-profile heights may also be read using this program.
Code of Federal Regulations, 2012 CFR
2012-07-01
... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust...
Code of Federal Regulations, 2011 CFR
2011-07-01
... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust...
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values...
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...
Code of Federal Regulations, 2011 CFR
2011-07-01
... HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206... POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later Model Year Automobiles §...
Code of Federal Regulations, 2013 CFR
2013-07-01
... HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust...
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...
Code of Federal Regulations, 2010 CFR
2010-07-01
... HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206... POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values §...
NASA Astrophysics Data System (ADS)
Sharipov, Felix; Yang, Yuanchao; Ricker, Jacob E.; Hendricks, Jay H.
2016-10-01
Currently, the piston-cylinder assembly known as PG39 is used as a primary pressure standard at the National Institute of Standards and Technology (NIST) in the range of 20 kPa to 1 MPa, with a relative standard uncertainty of 3 × 10⁻⁶ as evaluated in 2006. An approximate model of gas flow through the crevice between the piston and sleeve contributed significantly to this uncertainty. The aim of this work is to revise the previous effective cross-sectional area of PG39 and its uncertainty by carrying out more exact calculations that consider the effects of rarefied gas flow. The effective cross-sectional area is completely determined by the pressure distribution in the crevice. Once the pressure distribution is known, the elastic deformations of both piston and sleeve are calculated by finite element analysis. Then, the pressure distribution is recalculated iteratively for the new crevice dimension. As a result, a new value of the effective area is obtained, with a relative difference of 3 × 10⁻⁶ from the previous one. Moreover, this approach allows us to significantly reduce the standard uncertainty related to the gas flow model, so that the total uncertainty is decreased by a factor of three.
NASA Astrophysics Data System (ADS)
Fitzgerald, Alex; Roy, James W.; Smith, James E.
2015-09-01
Elevated levels of nutrients, especially phosphorus, in urban streams can lead to eutrophication and general degradation of stream water quality. Contributions of phosphorus from groundwater have typically been assumed to be minor, though elevated concentrations have been associated with riparian areas and urban settings. The objective of this study was to investigate the importance of groundwater as a pathway for phosphorus and nitrogen input to a gaining urban stream. The stream at the 28-m study reach was 3-5 m wide and straight, flowing generally eastward, with a relatively smooth bottom of predominantly sand, with some areas of finer sediments and a few boulders. Temperature-based methods were used to estimate the groundwater flux distribution. Detailed concentration distributions in discharging groundwater were mapped using in-stream piezometers and diffusion-based peepers, and showed elevated levels of soluble reactive phosphorus (SRP) and ammonium compared to the stream (while nitrate levels were lower), especially along the south bank, where groundwater fluxes were lower and geochemically reducing conditions dominated. Field evidence suggests the ammonium may originate from nearby landfills, but that local sediments likely contribute the SRP. Ammonium and SRP mass discharges with groundwater were then estimated as the product of the respective concentration distributions and the groundwater flux distribution. These were determined to be approximately 9 and 200 g d⁻¹ for SRP and ammonium, respectively, which compares to stream mass discharges over the observed range of base flows of 20-1100 and 270-7600 g d⁻¹, respectively. This suggests that groundwater from this small reach, and any similar areas along Dyment's Creek, has the potential to contribute substantially to the stream nutrient concentrations.
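The mass-discharge estimate described above is a cell-by-cell product of concentration and groundwater flux summed over the streambed. A sketch with hypothetical grid values (not the Dyment's Creek data):

```python
def groundwater_mass_discharge(concs, fluxes, cell_area):
    """Mass discharge (g/d) through the streambed: sum over grid cells
    of concentration (g/m^3, i.e. mg/L) x Darcy flux (m/d) x cell
    area (m^2)."""
    return sum(c * q * cell_area for c, q in zip(concs, fluxes))

# Four hypothetical 5 m^2 streambed cells; SRP in mg/L, fluxes in m/d.
# Concentrations are higher along the low-flux (reducing) bank,
# mirroring the pattern reported above.
srp = [0.02, 0.05, 0.08, 0.03]
q = [0.15, 0.05, 0.02, 0.10]
load = groundwater_mass_discharge(srp, q, cell_area=5.0)   # g/d
```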
NASA Astrophysics Data System (ADS)
Benda, Jakub; Houfek, Karel
2017-04-01
For total energies below the ionization threshold it is possible to dramatically reduce the computational burden of the solution of the electron-atom scattering problem with grid methods combined with exterior complex scaling. As in the R-matrix method, the problem can be split into an inner and an outer problem, where the outer problem considers only the energetically accessible asymptotic channels. The (N + 1)-electron inner problem is coupled to the one-electron outer problems for every channel, resulting in a matrix that scales only linearly with the size of the outer grid.
Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.
2001-03-26
The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted wasteform. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.
NASA Astrophysics Data System (ADS)
Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.
2007-07-01
Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found to be in good correspondence with the calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations
NASA Astrophysics Data System (ADS)
Orellana, Walter
The capture and conversion of solar energy into electricity is one of the most important challenges for the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins attract great attention due to their well-known absorption properties in the visible range. However, extended materials such as polymers with similar absorption properties are highly desirable. In this work, we investigate the stability, electronic and optical properties of polymeric nanostructures based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of time-dependent density functional perturbation theory. The aim of this work is the stability, electronic, and optical characterization of polymeric sheets and nanotubes obtained from H2P and H2Pc monomers. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different from those of the isolated molecules. However, the H2P and H2Pc nanotubes exhibit a wide absorption in the visible and near-UV range, with the largest peaks at 600 and 700 nm, respectively, suggesting good characteristics for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules are also discussed. Departamento de Ciencias Físicas, República 220, 037-0134 Santiago, Chile.
Liu, Qing-Jie; Jing, Lin-Hai; Li, Xin-Wu; Bi, Jian-Tao; Wang, Meng-Fei; Lin, Qi-Zhong
2013-04-01
Rapid identification of minerals based on near-infrared (NIR) and shortwave-infrared (SWIR) hyperspectra is vital to remote sensing mine exploration, remote sensing mineral mapping and field geological documentation of drill cores, and has led to many identification methods, including spectral angle mapping (SAM), spectral distance mapping (SDM), spectral feature fitting (SFF), the linear spectral mixture model (LSMM) and the mathematical combination feature spectral linear inversion model (CFSLIM). However, limitations of these methods affect their practical application. The present paper first gives a unified minerals components spectral inversion (MCSI) model based on the target sample spectrum and a standard endmember spectral library, evaluated by spectral similarity indexes. Then, taking the LSMM and the SAM evaluation index as an example, a specific formulation of the unified MCSI model is presented in the form of a combinatorial optimization problem. An artificial immune clonal selection algorithm is then used to solve the minerals feature spectral linear inversion optimization problem; the resulting method is named ICSFSLIM. Finally, an experiment was performed using ICSFSLIM and CFSLIM to identify the minerals contained in 22 rock samples selected from Baogutu in Xinjiang, China. The mean correctness and validity rates of ICSFSLIM are 34.22% and 54.08%, respectively, better than those of CFSLIM (31.97% and 37.38%); the correctness and validity variances of ICSFSLIM, 0.11 and 0.13, are smaller than those of CFSLIM, 0.15 and 0.25, indicating better identification stability.
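The SAM similarity index used above has a compact closed form: the angle between a target spectrum and a library endmember, both treated as vectors. A minimal Python sketch (the reflectance values and mineral names are illustrative, not from the paper):

```python
import math

def spectral_angle(s, r):
    """Spectral angle (radians) between a target spectrum s and a
    reference endmember spectrum r (the SAM similarity index)."""
    dot = sum(a * b for a, b in zip(s, r))
    ns = math.sqrt(sum(a * a for a in s))
    nr = math.sqrt(sum(b * b for b in r))
    # Clamp to guard against rounding pushing the cosine outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (ns * nr))))

# Identify the best-matching endmember for a measured spectrum:
# a smaller angle means a more similar spectral shape.
library = {
    "kaolinite": [0.55, 0.60, 0.42, 0.58],  # illustrative reflectances
    "muscovite": [0.50, 0.48, 0.52, 0.40],
}
target = [0.54, 0.59, 0.43, 0.57]
best = min(library, key=lambda name: spectral_angle(target, library[name]))
print(best)
```

Because the angle depends only on spectral shape, SAM is insensitive to overall brightness differences, which is why it pairs naturally with mixture models in the combinatorial formulation described above.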
Computer-based training for improving mental calculation in third- and fifth-graders.
Caviola, Sara; Gerotto, Giulia; Mammarella, Irene C
2016-11-01
The literature on intervention programs to improve arithmetical abilities is fragmentary and few studies have examined training on the symbolic representation of numbers (i.e. Arabic digits). In the present research, three groups of 3rd- and 5th-grade schoolchildren were given training on mental additions: 76 were assigned to a computer-based strategic training (ST) group, 73 to a process-based training (PBT) group, and 71 to a passive control (PC) group. Before and after the training, the children were given a criterion task involving complex addition problems, a nearest transfer task on complex subtraction problems, two near transfer tasks on math fluency, and a far transfer task on numerical reasoning. Our results showed developmental differences: 3rd-graders benefited more from the ST, with transfer effects on subtraction problems and math fluency, while 5th-graders benefited more from the PBT, improving their response times in the criterion task. Developmental, clinical and educational implications of these findings are discussed.
Parallel calculations on shared memory, NUMA-based computers using MATLAB
NASA Astrophysics Data System (ADS)
Krotkiewski, Marcin; Dabrowski, Marcin
2014-05-01
Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantages of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU
NASA Astrophysics Data System (ADS)
Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng
2013-11-01
Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in the field of mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D spaces to O(R1 R2 n lg n) in 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneously updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more inerratic feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method compared with the other methods in bispectrum feature extraction, and a legible fault expression can also be obtained with the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF achieves 81.66 dB against 15.17 dB by beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s by hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but can also be used to extract more inerratic and sparser bispectrum features of gearbox faults.
Source-based calibration of space instruments using calculable synchrotron radiation
NASA Astrophysics Data System (ADS)
Klein, Roman; Fliegauf, Rolf; Kroth, Simone; Paustian, Wolfgang; Reichel, Thomas; Richter, Mathias; Thornagel, Reiner
2016-10-01
Physikalisch-Technische Bundesanstalt (PTB) has more than 20 years of experience in the calibration of space-based instruments using synchrotron radiation to cover the ultraviolet (UV), vacuum-UV (VUV), and x-ray spectral ranges. Over the past decades, PTB has performed calibrations for numerous space missions within scientific collaborations and has become an important partner for activities in this field. New instrumentation at the electron storage ring Metrology Light Source creates additional calibration possibilities within this framework. A new facility for the calibration of radiation transfer source standards with a considerably extended spectral range has been put into operation. The commissioning of a large vacuum vessel that can accommodate entire space instruments opens up new prospects. Finally, an existing VUV transfer calibration source was upgraded to increase its spectral coverage to a band from 15 to 350 nm.
Calculation of Cancellous Bone Elastic Properties with the Polarization-based FFT Iterative Scheme.
Colabella, Lucas; Ibarra Pino, Ariel Alejandro; Ballarre, Josefina; Kowalczyk, Piotr; Cisilino, Adrián Pablo
2017-03-07
The FFT-based method, originally introduced by Moulinec and Suquet in 1994, has gained popularity for computing homogenized properties of composites. In this work, the method is used for the computational homogenization of the elastic properties of cancellous bone. To the authors' knowledge, this is the first study where the FFT scheme is applied to bone mechanics. The performance of the method is analyzed for artificial and natural bone samples of two species: bovine femoral heads and implanted femurs of Hokkaido rats. Model geometries are constructed using data from X-ray tomographies, and the bone tissue elastic properties are measured using micro- and nanoindentation tests. Computed results are in excellent agreement with those available in the literature. The study shows the suitability of the method to accurately estimate the fully anisotropic elastic response of cancellous bone. Guidelines are provided for the construction of the models and the setting of the algorithm.
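The fixed-point iteration at the heart of the Moulinec-Suquet approach can be illustrated on a scalar analog. The sketch below applies the basic scheme (not the accelerated polarization-based variant of the paper's title) to a 1-D periodic conductivity laminate, for which the exact effective value is the harmonic mean; the contrast values are illustrative:

```python
import numpy as np

def fft_homogenize_1d(k, e_avg=1.0, k0=None, n_iter=60):
    """Moulinec-Suquet basic FFT scheme reduced to a 1-D periodic
    conductivity laminate (scalar analog of the elasticity problem).
    Returns the effective conductivity <k*e>/<e>."""
    k = np.asarray(k, dtype=float)
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())   # reference medium
    e = np.full_like(k, e_avg)           # start from the uniform field
    for _ in range(n_iter):
        tau = (k - k0) * e               # polarization field
        tau_hat = np.fft.fft(tau)
        e_hat = -tau_hat / k0            # Green operator: 1/k0 in 1-D
        e_hat[0] = e_avg * k.size        # enforce the prescribed mean field
        e = np.fft.ifft(e_hat).real
    return np.mean(k * e) / np.mean(e)

# Two-phase 50/50 laminate: the exact effective value is the
# harmonic mean 1/(0.5/1 + 0.5/4) = 1.6.
k = np.array([1.0] * 32 + [4.0] * 32)
print(round(fft_homogenize_1d(k), 3))  # ≈ 1.6
```

At convergence the flux k·e is uniform over the cell, which is the 1-D statement of equilibrium; in the elastic case the same structure holds with the strain field, the reference stiffness and a tensorial Green operator.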
Use of ground-based remotely sensed data for surface energy balance calculations during Monsoon '90
NASA Technical Reports Server (NTRS)
Moran, M. S.; Kustas, William P.; Vidal, Alain; Stannard, David I.; Blanford, James
1991-01-01
Surface energy balance was evaluated at a semiarid watershed using direct and indirect measurements of the turbulent fluxes, a remote technique based on measurements of surface reflectance and temperature, and conventional meteorological information. Comparison of remote estimates of net radiant flux and soil heat flux densities with measured values showed errors on the order of +/-40 W/sq m. To account for the effects of sparse vegetation, semi-empirical adjustments to aerodynamic resistance were required for evaluation of sensible heat flux density (H). However, a significant scatter in estimated versus measured latent heat flux density (LE) was still observed, +/-75 W/sq m over a range from 100-400 W/sq m. The errors of H and LE estimates were reduced to +/-50 W/sq m when observations were restricted to clear sky conditions.
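The remote approach above rests on the surface energy balance Rn = G + H + LE, with the latent heat flux obtained as the residual once the other three terms are estimated. A minimal sketch (the flux values are illustrative, not Monsoon '90 data), including the simple error propagation implied by the quoted component uncertainties:

```python
import math

def residual_le(rn, g, h):
    """Latent heat flux density LE (W/sq m) as the residual of the
    surface energy balance Rn = G + H + LE."""
    return rn - g - h

def le_uncertainty(err_rn, err_g, err_h):
    """Propagate independent component errors into the LE residual."""
    return math.sqrt(err_rn ** 2 + err_g ** 2 + err_h ** 2)

# Illustrative midday values for a sparse semiarid canopy (W/sq m).
rn, g, h = 520.0, 110.0, 180.0
le = residual_le(rn, g, h)          # 230.0 W/sq m
err = le_uncertainty(40.0, 40.0, 30.0)
print(le, round(err, 1))
```

The propagation step makes the text's point concrete: component errors of a few tens of W/sq m in Rn, G and H necessarily accumulate in the LE residual.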
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
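The core idea, representing a frequency response by a low-order rational function determined from a few samples, can be sketched without the integral-equation machinery. The snippet below uses a plain linear least-squares (Levy-type) linearization rather than the frequency-derivative approach of the paper, and the sampled response is synthetic:

```python
import numpy as np

def fit_rational(w, f, num_deg=1, den_deg=1):
    """Fit f(w) ≈ P(w)/Q(w) with Q(0) = 1 via Levy's linearization:
    P(w) - f(w)*(Q(w) - 1) = f(w), a linear least-squares problem."""
    cols = [w ** k for k in range(num_deg + 1)]           # numerator terms
    cols += [-f * w ** k for k in range(1, den_deg + 1)]  # denominator terms
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    p = coeffs[:num_deg + 1]                    # numerator, low to high order
    q = np.concatenate(([1.0], coeffs[num_deg + 1:]))  # denominator
    return p, q

# Recover a known rational response sampled over a "frequency band".
w = np.linspace(0.0, 3.0, 25)
f = (1.0 + 2.0 * w) / (1.0 + 0.5 * w)
p, q = fit_rational(w, f)
print(np.round(p, 6), np.round(q, 6))  # numerator ~ [1, 2], denominator ~ [1, 0.5]
```

Once the coefficients are known, the rational model can be evaluated densely across the band at negligible cost, which is what makes MBPE attractive compared with re-solving the MoM system at every frequency.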
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10^9 cm^-2 s^-1, corresponding to the dayside net production of N atoms needed for transport.
Site- and phase-selective x-ray absorption spectroscopy based on phase-retrieval calculation
NASA Astrophysics Data System (ADS)
Kawaguchi, Tomoya; Fukuda, Katsutoshi; Matsubara, Eiichiro
2017-03-01
Understanding the chemical state of a particular element with multiple crystallographic sites and/or phases is essential to unlocking the origin of material properties. To this end, resonant x-ray diffraction spectroscopy (RXDS) achieved through a combination of x-ray diffraction (XRD) and x-ray absorption spectroscopy (XAS) techniques can allow for the measurement of diffraction anomalous fine structure (DAFS). This is expected to provide a peerless tool for electronic/local structural analyses of materials with complicated structures thanks to its capability to extract spectroscopic information about a given element at each crystallographic site and/or phase. At present, one of the major challenges for the practical application of RXDS is the rigorous determination of resonant terms from observed DAFS, as this requires somehow determining the phase change in the elastic scattering around the absorption edge from the scattering intensity. This is widely known in the field of XRD as the phase problem. The present review describes the basics of this problem, including the relevant background and theory for DAFS and a guide to a newly-developed phase-retrieval method based on the logarithmic dispersion relation that makes it possible to analyze DAFS without suffering from the intrinsic ambiguities of conventional iterative-fitting. Several matters relating to data collection and correction of RXDS are also covered, with a final emphasis on the great potential of powder-sample-based RXDS (P-RXDS) to be used in various applications relevant to practical materials, including antisite-defect-type electrode materials for lithium-ion batteries.
Matthews, Holly; Deakin, Jon; Rajab, May; Idris-Usman, Maryam
2017-01-01
The widespread introduction of artemisinin-based combination therapy has contributed to recent reductions in malaria mortality. Combination therapies have a range of advantages, including synergism, toxicity reduction, and delaying the onset of resistance acquisition. Unfortunately, antimalarial combination therapy is limited by the depleting repertoire of effective drugs with distinct target pathways. To fast-track antimalarial drug discovery, we have previously employed drug repositioning to identify the anti-amoebic drug, emetine dihydrochloride hydrate, as a potential candidate for repositioned use against malaria. Despite its 1000-fold increase in in vitro antimalarial potency (ED50 47 nM) compared with its anti-amoebic potency (ED50 26-32 µM), practical use of the compound has been limited by dose-dependent toxicity (emesis and cardiotoxicity). Identification of a synergistic partner drug would present an opportunity for dose reduction, thus increasing the therapeutic window. The lack of reliable and standardised methodology to enable the in vitro definition of synergistic potential for antimalarials is a major drawback. Here we use isobologram and combination-index data generated by CalcuSyn software analyses (Biosoft v2.1) to define drug interactivity in an objective, automated manner. The method, based on the median effect principle proposed by Chou and Talalay, was initially validated for antimalarial application using the known synergistic combination (atovaquone-proguanil). The combination was used to further understand the relationship between SYBR Green viability and cytocidal versus cytostatic effects of drugs at higher levels of inhibition. We report here the use of the optimised Chou Talalay method to define synergistic antimalarial drug interactivity between emetine dihydrochloride hydrate and atovaquone. The novel findings present a potential route to harness the nanomolar antimalarial efficacy of this affordable natural product. PMID:28257497
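The median-effect principle behind the CalcuSyn analysis can be reproduced in a few lines: fit the slope m and median-effect dose Dm from the linearized median-effect plot, then evaluate the combination index (CI). This is a generic sketch with synthetic dose-response numbers, not the study's emetine-atovaquone data:

```python
import math

def median_effect_fit(doses, fa):
    """Fit the Chou-Talalay median-effect equation fa/fu = (D/Dm)^m by
    linear regression of log(fa/fu) on log D (fu = 1 - fa)."""
    xs = [math.log10(d) for d in doses]
    ys = [math.log10(f / (1.0 - f)) for f in fa]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    dm = 10 ** (mx - my / m)   # from the intercept -m*log10(Dm)
    return m, dm

def combination_index(d1, d2, m1, dm1, m2, dm2, fa):
    """CI at effect level fa for doses (d1, d2) in combination:
    CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    ratio = fa / (1.0 - fa)
    dx1 = dm1 * ratio ** (1.0 / m1)   # single-drug dose giving effect fa
    dx2 = dm2 * ratio ** (1.0 / m2)
    return d1 / dx1 + d2 / dx2

# Synthetic single-drug response obeying fa = D/(D + 50), i.e. m = 1, Dm = 50.
doses = [10.0, 25.0, 50.0, 100.0, 200.0]
fa = [d / (d + 50.0) for d in doses]
m, dm = median_effect_fit(doses, fa)
# An equipotent pair at half-doses is exactly additive.
ci = combination_index(25.0, 25.0, m, dm, m, dm, fa=0.5)
print(round(m, 3), round(dm, 1), round(ci, 3))  # → 1.0 50.0 1.0
```

In practice the regression is run per drug and per combination ratio, and CI is reported across a range of effect levels, which is what the isobologram view summarizes.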
Model-Based Calculations of the Probability of a Country's Nuclear Proliferation Decisions
Li, Jun; Yim, Man-Sung; McNelis, David N.
2007-07-01
explain the occurrences of proliferation decisions. However, predicting major historical proliferation events using model-based predictions has been unreliable. Nuclear proliferation decisions by a country are affected by three main factors: (1) technology; (2) finance; and (3) political motivation [1]. Technological capability is important as nuclear weapons development needs special materials, a detonation mechanism, delivery capability, and the supporting human resources and knowledge base. Financial capability is likewise important as the development of the technological capabilities requires a serious financial commitment. It would be difficult for any state with a gross national product (GNP) significantly less than about $100 billion to devote enough annual governmental funding to a nuclear weapon program to actually achieve positive results within a reasonable time frame (i.e., 10 years). At the same time, nuclear proliferation is not a matter determined by a mastery of technical details or overcoming financial constraints. Technology or finance is a necessary but not a sufficient condition for nuclear proliferation. At the most fundamental level, the proliferation decision of a state is controlled by its political motivation. To effectively address the issue of predicting proliferation events, all three factors must be included in the model. To the knowledge of the authors, none of the existing models considered the 'technology' variable as part of the modeling. This paper presents an attempt to develop a methodology for statistically modeling and predicting a country's nuclear proliferation decisions. The approach is based on the combined use of data on a country's nuclear technical capability profiles, economic development status, security environment factors, and internal political and cultural factors. All of the information utilized in the study was from open-source literature. (authors)
Bielęda, Grzegorz; Skowronek, Janusz; Mazur, Magdalena
2016-01-01
Purpose A well-known defect of the TG-43 based algorithms used in brachytherapy is the lack of information about interaction cross-sections, which are determined not only by electron density but also by atomic number. The TG-186 recommendations, with the use of a model-based dose calculation algorithm (MBDCA), accurate tissue segmentation, and the structures' elemental composition, continue to pose difficulties in brachytherapy dosimetry. For the clinical use of new algorithms, it is necessary to introduce reliable and repeatable methods of treatment planning system (TPS) verification. The aim of this study is the verification of the calculation algorithm used in the TPS for shielded vaginal applicators, as well as the development of verification procedures for current and further use, based on the film dosimetry method. Material and methods Calibration data were collected by separately irradiating 14 sheets of Gafchromic® EBT film with doses from 0.25 Gy to 8.0 Gy using an HDR 192Ir source. Standard vaginal cylinders of three diameters were used in a water phantom. Measurements were performed without any shields and with three shield combinations. Gamma analyses were performed using the VeriSoft® package. Results The calibration curve was determined to be of third-degree polynomial type. For all cylinder diameters used, without shields and with all shield combinations, gamma analyses showed that over 90% of the analyzed points meet the gamma criteria (3%, 3 mm). Conclusions The gamma analysis showed good agreement between dose distributions calculated using the TPS and measured by Gafchromic films, thus showing the viability of using film dosimetry in brachytherapy. PMID:27648087
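The gamma criterion used for the comparison combines a dose-difference tolerance with a distance-to-agreement (DTA); a measured point passes if its gamma value is at most 1. A minimal 1-D sketch (the profile and dose values are invented, and VeriSoft's exact normalization conventions may differ):

```python
import math

def gamma_index(pos, dose, ref, dta=3.0, dd_frac=0.03, d_norm=None):
    """1-D gamma index of one measured point (pos in mm, dose in Gy)
    against a reference profile ref = [(pos_mm, dose_Gy), ...].
    gamma <= 1 means the point passes the (dd_frac, dta) criteria."""
    if d_norm is None:
        d_norm = max(d for _, d in ref)   # global dose normalization
    return min(
        math.sqrt(((pos - rp) / dta) ** 2
                  + ((dose - rd) / (dd_frac * d_norm)) ** 2)
        for rp, rd in ref
    )

def profile(x):
    """Invented Gaussian-like TPS dose profile (Gy), peaked at 10 mm."""
    return 2.0 * math.exp(-(((x - 10.0) / 8.0) ** 2))

# TPS profile on a fine 0.1 mm grid vs. a "film" measurement reading 1% high.
tps = [(0.1 * i, profile(0.1 * i)) for i in range(201)]
film = [(float(x), 1.01 * profile(float(x))) for x in range(21)]
pass_rate = (100.0 * sum(1 for p, d in film if gamma_index(p, d, tps) <= 1.0)
             / len(film))
print(f"{pass_rate:.1f}% of points pass (3%, 3 mm)")
```

Note the fine reference grid: the minimum in the gamma search must be taken over a grid much denser than the DTA, otherwise steep-gradient regions fail spuriously.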
NASA Astrophysics Data System (ADS)
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also reserve the current difference for each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different numbers of malicious SUs.
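Dempster's rule of combination, the final step of the scheme, has a direct implementation: multiply the masses of every pair of focal elements, discard fully conflicting pairs, and renormalize by one minus the total conflict. A sketch over the two-hypothesis frame {H0, H1} (channel idle/busy) with illustrative BPA values:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments whose focal elements are frozensets over the frame."""
    combined = {}
    conflict = 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb          # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

H0, H1 = frozenset(["H0"]), frozenset(["H1"])
theta = H0 | H1   # the whole frame: total ignorance
# BPAs reported by two cooperating SUs (illustrative values).
m1 = {H0: 0.6, H1: 0.1, theta: 0.3}
m2 = {H0: 0.5, H1: 0.2, theta: 0.3}
m = dempster_combine(m1, m2)
print(round(m[H0], 4))  # 0.63/0.83 ≈ 0.759
```

The 'soft update' idea mentioned above corresponds to feeding a reputation-weighted, adjusted BPA into this combination step instead of discarding suspect SUs outright.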
Hu, Long; Xu, Zhiyu; Hu, Boqin; Lu, Zhi John
2017-01-01
Recent genomic studies suggest that novel long non-coding RNAs (lncRNAs) are specifically expressed and far outnumber annotated lncRNA sequences. To identify and characterize novel lncRNAs in RNA sequencing data from new samples, we have developed COME, a coding potential calculation tool based on multiple features. It integrates multiple sequence-derived and experiment-based features using a decompose-compose method, which makes it more accurate and robust than other well-known tools. We also showed that COME was able to substantially improve the consistency of prediction results from other coding potential calculators. Moreover, COME annotates and characterizes each predicted lncRNA transcript with multiple lines of supporting evidence, which are not provided by other tools. Remarkably, we found that one subgroup of lncRNAs classified by such supporting features (i.e. conserved local RNA secondary structure) was highly enriched in a well-validated database (lncRNAdb). We further found that the conserved structural domains on lncRNAs had a better chance than other RNA regions to interact with RNA binding proteins, based on the recent eCLIP-seq data in human, indicating their potential regulatory roles. Overall, we present COME as an accurate, robust and multiple-feature supported method for the identification and characterization of novel lncRNAs. The software implementation is available at https://github.com/lulab/COME. PMID:27608726
The MARS15-based FermiCORD Code System for Calculation of the Accelerator-Induced Residual Dose
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2016-09-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to the Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied locations and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of nuclear-decay gamma quanta by the residual nuclides in the activated structures and scoring of the prompt doses of these gamma quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied to the calculation of the residual dose of the target station for the Mu2e experiment, and the results have been compared to approximate dosimetric approaches.
Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph
2016-10-01
The purification of therapeutic proteins is a challenging task with an immediate need for optimization. Besides other techniques, aqueous two-phase extraction (ATPE) of proteins has been shown to be a promising alternative to cost-intensive state-of-the-art chromatographic protein purification. Most likely, to enable a selective extraction, protein partitioning has to be influenced using a displacement agent to isolate the target protein from the impurities. In this work, a new displacement agent (lithium bromide [LiBr]) allowing for the selective separation of the target protein IgG from human serum albumin (representing the impurity) within a citrate-polyethylene glycol (PEG) aqueous two-phase system (ATPS) is presented. In order to characterize the displacement suitability of LiBr on IgG, the mutual influence of LiBr and the phase formers on the ATPS and on partitioning is investigated. Using osmotic virial coefficients (B22 and B23), accessible by composition-gradient multiangle light-scattering measurements, the precipitating effect of LiBr on both proteins is characterized and both protein partition coefficients are estimated. The stabilizing effect of LiBr on both proteins was estimated based on B22 and experimentally validated within the citrate-PEG ATPS. Our approach contributes to an efficient implementation of ATPE within the downstream processing development of therapeutic proteins.
Growth of Co and Fe on Cu(1 1 1): experiment and BFS based calculations
NASA Astrophysics Data System (ADS)
Farías, D.; Niño, M. A.; de Miguel, J. J.; Miranda, R.; Morse, J.; Bozzolo, G.
2003-10-01
The structure and morphology of Co and Fe films grown on Cu(1 1 1) have been investigated by thermal energy atom scattering (TEAS) and low-energy electron diffraction (LEED). It has been found that the growth mode of Co and Fe can be greatly improved by using Pb as surfactant, although in the case of Fe this works only for the first bilayer. This shows that the two systems exhibit decisive differences already in the first stages of the growth process. In a second series of experiments, the effect of codepositing Co-Cu and Fe-Cu on the films quality was investigated. The results are very promising, and suggest that very flat, structurally ordered fcc Fe-Cu and Co-Cu films can be prepared by applying this technique together with the use of Pb as surfactant. These results were complemented by atomistic simulations based on the BFS method for alloys. Simulations performed in the low-coverage regime suggest that the early stages of growth are governed to a great extent by the affinity of Cu for Co and Fe. We have also performed temperature-dependent Monte Carlo simulations to determine the structure of superlattices formed by codeposition of Cu-Co and Cu-Fe.
NASA Astrophysics Data System (ADS)
Tresch, Simon; Fister, Wolfgang; Marzen, Miriam; Kuhn, Nikolaus J.
2015-04-01
The quality of data obtained by rainfall experiments depends mainly on the quality of the rainfall simulation itself. However, even the best rainfall simulation cannot deliver valuable data if runoff and sediment discharge from the plot are not sampled at a proper interval or if poor interpolation methods are used. The safest way to get good results would be to collect all runoff and sediment that comes off the plot in the shortest possible intervals. Unfortunately, high rainfall amounts often coincide with limited transport and analysis capacities. Therefore, it is in most cases necessary to find a good compromise between sampling frequency, interpolation method, and available analysis capacities. The aim of this study was to compare different methods of calculating total sediment yield based on aliquot sampling intervals. The methods tested were (1) simple extrapolation of one sample until the next sample was collected; (2) averaging between two successive samples; (3) extrapolation of the sediment concentration; (4) extrapolation using a regression function. The results indicate that all methods could, theoretically, be used to calculate total sediment yields, but errors of 10-25% would have to be taken into account in the interpretation of the gained data. The highest deviations were always found for the first measurement interval, which shows that it is very important to capture the initial flush of sediment from the plot to be able to calculate reliable total values.
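Methods (1) and (2) above differ only in how each sampling interval is integrated, and the gap between them is largest when the first-flush peak is poorly resolved. A toy Python comparison with invented sampling data:

```python
# Interval samples from a hypothetical plot run: times since runoff start,
# sediment concentration of each aliquot, and runoff rate at sampling.
times = [0, 5, 10, 15, 20]          # minutes
conc = [12.0, 6.0, 4.0, 3.5, 3.0]   # g/L (note the initial flush)
flow = [0.8, 1.0, 1.1, 1.1, 1.1]    # L/min

flux = [c * q for c, q in zip(conc, flow)]  # sediment flux, g/min

def step_total(t, y):
    """Method (1): hold each sample constant until the next one."""
    return sum(y[i] * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def trapezoid_total(t, y):
    """Method (2): average successive samples over each interval."""
    return sum(0.5 * (y[i] + y[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))

print(step_total(times, flux), trapezoid_total(times, flux))
```

With these numbers the two estimates differ by roughly 15%, and almost all of the disagreement comes from the first interval, mirroring the study's observation about the initial sediment flush.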
Kim, Ki Chul; Kulkarni, Anant D; Johnson, J Karl; Sholl, David S
2011-04-21
Systematic thermodynamics calculations based on density functional theory-calculated energies for crystalline solids have been a useful complement to experimental studies of hydrogen storage in metal hydrides. We report the most comprehensive set of thermodynamics calculations for mixtures of light metal hydrides to date by performing grand canonical linear programming screening on a database of 359 compounds, including 147 compounds not previously examined by us. This database is used to categorize the reaction thermodynamics of all mixtures containing any four non-H elements among Al, B, C, Ca, K, Li, Mg, N, Na, Sc, Si, Ti, and V. Reactions are categorized according to the amount of H2 that is released and the reaction's enthalpy. This approach identifies 74 distinct single-step reactions having a storage capacity >6 wt.% and zero-temperature heats of reaction 15 ≤ ΔU0 ≤ 75 kJ mol⁻¹ H2. Many of these reactions, however, are likely to be problematic experimentally because of the role of refractory compounds, B12H12-containing compounds, or carbon. The single most promising reaction identified in this way involves LiNH2/LiH/KBH4, storing 7.48 wt.% H2 and having ΔU0 = 43.6 kJ mol⁻¹ H2. We also examined the complete range of reaction mixtures to identify multi-step reactions with useful properties; this yielded 23 multi-step reactions of potential interest.
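The final selection step reduces to a simple filter over candidate reactions. In the sketch below, only the LiNH2/LiH/KBH4 entry uses values quoted in the abstract (7.48 wt.%, 43.6 kJ/mol H2); the other entries are hypothetical:

```python
# Illustrative filter mirroring the screening criteria quoted above: keep
# single-step reactions with >6 wt.% H2 and 15 <= dU0 <= 75 kJ/mol H2.
# Only the first entry uses values from the abstract; the rest are made up.

candidates = [
    {"mixture": "LiNH2/LiH/KBH4", "wt_pct": 7.48, "dU0": 43.6},
    {"mixture": "hypothetical-A", "wt_pct": 9.10, "dU0": 12.0},  # too exothermic a window
    {"mixture": "hypothetical-B", "wt_pct": 4.20, "dU0": 40.0},  # capacity too low
]

def promising(r):
    """Apply the capacity and zero-temperature reaction-enthalpy window."""
    return r["wt_pct"] > 6.0 and 15.0 <= r["dU0"] <= 75.0

selected = [r["mixture"] for r in candidates if promising(r)]
```

In the actual study this filter is applied to the reactions generated by the grand canonical linear programming step, not to a hand-written list.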
NASA Astrophysics Data System (ADS)
Li, Feng-guang; Zhang, Jian-liang; Zuo, Hai-bin; Qin, Xuan; Qi, Cheng-lin
2017-03-01
Cooling effects of the cast iron cooling stave were tested with a specially designed experimental furnace under the conditions of different temperatures of 800 °C, 900 °C, 1,000 °C and 1,100 °C as well as different cooling water velocities of 0.5 m·s⁻¹, 1.0 m·s⁻¹, 1.5 m·s⁻¹ and 2.0 m·s⁻¹. Furthermore, the combined heat transfer coefficient of the hot face of the cast iron cooling stave (αh-i) was calculated from heat transfer theory based on the thermal test. The calculated αh-i was then applied in a temperature field simulation of the cooling stave, and the simulation results were compared with the experimental data. The calculation of αh-i indicates that αh-i increases rapidly as the furnace temperature increases, while it increases only slightly as the water velocity increases. The comparison shows that the simulation results fit the experimental data well under the different furnace temperatures.
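A combined hot-face coefficient of this kind can be backed out of a thermal test from a water-side energy balance; a minimal sketch, assuming the heat flux is obtained from the cooling-water temperature rise (all numbers are illustrative assumptions, not the paper's measurements):

```python
# Hedged sketch: combined hot-face heat transfer coefficient from a
# water-side energy balance, alpha = Q / (A * (T_furnace - T_hot_face)).
# All parameter values below are illustrative, not the paper's data.

def combined_htc(m_dot, cp, dT_water, area, T_furnace, T_face):
    """m_dot [kg/s], cp [J/(kg*K)], dT_water [K], area [m^2], temps [deg C]."""
    q = m_dot * cp * dT_water              # heat removed by the cooling water [W]
    return q / (area * (T_furnace - T_face))

alpha = combined_htc(m_dot=0.5, cp=4186.0, dT_water=4.0,
                     area=0.25, T_furnace=1000.0, T_face=200.0)
```

Because the driving temperature difference appears in the denominator, the strong dependence of αh-i on furnace temperature and its weak dependence on water velocity follow directly from how Q and the temperatures vary in the test.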
Monge-Palacios, M; Corchado, J C; Espinosa-Garcia, J
2013-06-07
To understand the reactivity and mechanism of the OH + NH3 → H2O + NH2 gas-phase reaction, which evolves through wells in the entrance and exit channels, a detailed dynamics study was carried out using quasi-classical trajectory calculations. The calculations were performed on an analytical potential energy surface (PES) recently developed by our group, PES-2012 [Monge-Palacios et al., J. Chem. Phys. 138, 084305 (2013)]. Most of the available energy appeared as H2O product vibrational energy (54%), reproducing the only available experimental evidence, while only 21% of this energy appeared as NH2 co-product vibrational energy. Both products appeared with cold and broad rotational distributions. The excitation function (constant collision energy in the range 1.0-14.0 kcal mol⁻¹) increases smoothly with energy, contrasting with the only available theoretical information (reduced-dimensional quantum scattering calculations based on a simplified PES), which presented a peak at low collision energies related to quantized states. Analysis of the individual reactive trajectories showed that different mechanisms operate depending on the collision energy. Thus, while at high energies (Ecoll ≥ 6 kcal mol⁻¹) all trajectories are direct, at low energies about 20%-30% of trajectories are indirect, i.e., proceed with the mediation of a trapping complex, mainly in the product well. Finally, the effect of the zero-point energy constraint on the dynamics properties was analyzed.
Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force
NASA Astrophysics Data System (ADS)
Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.
2016-01-01
The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environments. We present here the results of a large-scale axially symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.
NASA Astrophysics Data System (ADS)
Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios
2015-12-01
We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
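The core computation such a program performs is a pair-distance histogram with periodic minimum-image wrapping, normalised by the ideal-gas expectation. A minimal sketch for a cubic box follows; this is not the program's actual code, and the function and variable names are our own:

```python
# Minimal sketch of a centre-of-mass radial distribution function for a
# cubic periodic box (not the application's actual code).
import numpy as np

def rdf(com, box, r_max, n_bins):
    """com: (N, 3) centre-of-mass coordinates; box: cubic box edge length."""
    n = len(com)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = com[i + 1:] - com[i]
        d -= box * np.round(d / box)      # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    # normalise by the ideal-gas pair count expected in each spherical shell
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * (n - 1) * shell / box**3
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

# Two molecules 2.0 length units apart: g(r) has a single occupied bin.
com = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
r_mid, g = rdf(com, box=10.0, r_max=5.0, n_bins=5)
```

Computing site-group distribution functions, as the program allows, amounts to replacing the centre-of-mass array with the centroids of the chosen interaction-site groups.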
Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A
2014-03-06
threshold criteria showed larger discrepancies. The TPS algorithm comparison showed large discrepancies in the PTV mean dose (D50%): nearly 60% for the PBC algorithm and nearly 20% for the AAA, again occurring in the small PTV size range. This work suggests applying independent plan verification when the AAA or the AXB algorithm is utilized in lung SBRT with PTVs smaller than 20-25 cc. The calculated data from this study can be used in converting SBRT protocols based on type 'a' and/or type 'b' algorithms to the most recent generation of type 'c' algorithms, such as the AXB algorithm.
Elastic anharmonicity of bcc Fe and Fe-based random alloys from first-principles calculations
NASA Astrophysics Data System (ADS)
Li, Xiaoqing; Schönecker, Stephan; Zhao, Jijun; Vitos, Levente; Johansson, Börje
2017-01-01
We systematically investigate elastic anharmonic behavior in ferromagnetic body-centered cubic (bcc) Fe and Fe1−xMx (M = Al, V, Cr, Co, or Ni) random alloys by means of density-functional simulations. To benchmark computational accuracy, three ab initio codes are used to obtain the complete set of second- and third-order elastic constants (TOECs) for bcc Fe. The TOECs of Fe1−xMx alloys are studied employing the first-principles alloy theory formulated within the exact muffin-tin orbital method in combination with the coherent-potential approximation. It is found that the alloying effects on C111, C112, and C123, which are governed by normal strains only, are more pronounced than those on C144, C166, and C456, which involve shear strains. Remarkably, the magnitudes of all TOECs but C123 decrease upon alloying with Al, V, Cr, Co, or Ni. Using the computed TOECs, we study compositional effects on the pressure derivatives of the effective elastic constants (dBij/dP), bulk (dK/dP) and shear moduli (dG/dP), and derive longitudinal acoustic nonlinearity parameters (β). Our predictions show that the pressure derivatives of K and G decrease with x for all solute elements and reveal a strong correlation between the compositional trends in dK/dP and dG/dP, arising from the fact that alloying predominantly alters dB11/dP. The sensitivity of dB11/dP to composition is attributed to intrinsic alloying effects as opposed to the lattice parameter changes accompanying solute addition. For Fe and the considered Fe-based alloys, β along high-symmetry directions orders as β[111] > β[100] > β[110], and alloying increases the directional anisotropy of β but reduces its magnitude.
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Irikura, K.
2013-12-01
A probability of surface rupture is important for configuring the seismic source, such as area sources or fault models, in a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability follows a sigmoid curve and increases between Mj (the local magnitude defined and calculated by the Japan Meteorological Agency) = 6.5 and Mj = 7.0. The probability of surface rupture is also used in probabilistic fault displacement hazard analysis (PFDHA). The probability is determined from a compiled earthquake catalog in which events are classified into two categories: with surface rupture or without surface rupture. Logistic regression is then performed on the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011), and Petersen et al. (2011) present logistic curves of the probability of surface rupture for normal, reverse, and strike-slip faults, respectively. Takao et al. (2013) give the logistic curve derived from Japanese earthquake data only. The Japanese probability curve increases sharply over a narrow magnitude range compared with the other curves. In this study, we estimated the probability of surface rupture by applying logistic analysis to surface displacements obtained from a surface displacement calculation. A source fault was defined according to the procedure of Kagawa et al. (2004), which determines the seismic moment from the magnitude and estimates the area of the asperity and the amount of slip. Strike-slip and reverse faults were considered as source faults. We applied the method of Wang et al. (2003) for the calculations. The surface displacements for the defined source faults were calculated by varying the depth of the fault. A threshold of 5 cm of surface displacement was used to judge whether or not a rupture reaches the surface. We carried out the
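The logistic-regression step described above can be sketched as follows. The catalogue below is synthetic, chosen so that rupture becomes likely between Mj 6.5 and 7.0 as the abstract describes, and the centring magnitude m0 = 6.75 is our own choice for numerical stability:

```python
# Sketch of the logistic-regression step: fit P(rupture | Mj) as a logistic
# curve to a binary surface-rupture catalogue. Data are synthetic; m0 is an
# assumed centring constant, not part of the study.
import numpy as np

def fit_logistic(mag, ruptured, m0=6.75, lr=1.0, n_iter=5000):
    """Gradient ascent on the log-likelihood of P = sigmoid(a + b*(M - m0))."""
    m = np.asarray(mag, float) - m0
    y = np.asarray(ruptured, float)
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * m)))
        a += lr * np.mean(y - p)          # gradient of log-likelihood w.r.t. a
        b += lr * np.mean((y - p) * m)    # gradient w.r.t. b
    return a, b

# Synthetic catalogue: 1 = surface rupture observed, 0 = not observed.
mag      = [6.0, 6.2, 6.4, 6.5, 6.6, 6.7, 6.8, 7.0, 7.2, 7.5]
ruptured = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
a, b = fit_logistic(mag, ruptured)

def p_rupture(M):
    return 1.0 / (1.0 + np.exp(-(a + b * (M - 6.75))))
```

In the study itself the binary labels come from whether the calculated surface displacement exceeds the 5 cm threshold, rather than from an observed catalogue.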
Code of Federal Regulations, 2012 CFR
2012-07-01
...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08... GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.207-08 Calculation and use of vehicle-specific 5-cycle-based fuel...
Schmidt, Signe; Nørgaard, Kirsten
2014-09-01
Matching meal insulin to carbohydrate intake, blood glucose, and activity level is recommended in type 1 diabetes management. Calculating an appropriate insulin bolus size several times per day is, however, challenging and resource demanding. Accordingly, there is a need for bolus calculators to support patients in insulin treatment decisions. Currently, bolus calculators are available integrated into insulin pumps, as stand-alone devices, and in the form of software applications that can be downloaded to, for example, smartphones. The functionality and complexity of bolus calculators vary greatly, and the few published bolus calculator studies are heterogeneous with regard to study design, intervention, duration, and outcome measures. Furthermore, many factors unrelated to the specific device affect outcomes of bolus calculator use, and therefore comparisons between bolus calculator studies should be made cautiously. Despite these reservations, there seems to be increasing evidence that bolus calculators may improve glycemic control and treatment satisfaction in patients who use the devices actively and as intended.
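Many calculators share a simple core formula, meal dose plus glucose correction minus insulin on board, although real devices layer device-specific rules on top (insulin-on-board decay curves, handling of negative corrections, rounding). A hedged sketch with illustrative parameter values:

```python
# Common core bolus formula (illustrative only; not any specific device's
# algorithm, and all parameter values below are assumptions).

def suggested_bolus(carbs_g, icr, bg, target_bg, isf, iob=0.0):
    """carbs_g: meal carbohydrate [g]; icr: grams covered per unit insulin;
    bg, target_bg: blood glucose [mg/dL]; isf: mg/dL drop per unit; iob: units."""
    meal = carbs_g / icr                   # insulin to cover the meal
    correction = (bg - target_bg) / isf    # insulin to correct current glucose
    return max(0.0, meal + correction - iob)

# 60 g meal, glucose 180 vs target 100 mg/dL, 0.5 U still active.
dose = suggested_bolus(carbs_g=60, icr=10, bg=180, target_bg=100, isf=40, iob=0.5)
```

The patient-specific parameters (insulin-to-carbohydrate ratio, insulin sensitivity factor, glucose target) are exactly the settings whose configuration differs between the devices the review compares.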
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.; Oedegaard-Jensen, A.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, in which cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are treated as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times and to assess the output uncertainty of a test case corresponding to a 17 × 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin hypercube sampling (LHS). Quasi-random LHS allows much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance-limit concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is a first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4.
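The LHS stratification idea can be sketched in a few lines for standard-normal inputs; the study's actual cross-section perturbation machinery is more elaborate, and the 500 x 3 sample size here is only an illustration of the 500-run design:

```python
# Latin hypercube sampling sketch: split each input dimension's [0, 1)
# range into n equal strata, draw one point per stratum, shuffle the
# strata independently per dimension, then map through the normal
# inverse CDF to get standard-normal perturbation factors.
import numpy as np
from statistics import NormalDist

def lhs_unit(n_samples, n_dims, rng):
    """One uniform draw from each stratum [i/n, (i+1)/n) per dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])      # decouple strata across dimensions
    return u

rng = np.random.default_rng(0)
u = lhs_unit(500, 3, rng)                    # 500 runs, 3 perturbed inputs
x = np.vectorize(NormalDist().inv_cdf)(u)    # standard-normal perturbations
```

By construction, every stratum of every dimension is hit exactly once; this dense stratification is what gives LHS its better coverage of the input distributions than simple random sampling at the same number of code runs.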
Rivard, Mark J; Beaulieu, Luc; Mourtada, Firas
2010-06-01
The current standard for brachytherapy dose calculations is based on the AAPM TG-43 formalism. Simplifications used in the TG-43 formalism have been challenged by many publications over the past decade. With the continuous increase in computing power, approaches based on fundamental physics processes or physics models such as the linear-Boltzmann transport equation are now applicable in a clinical setting. Thus, model-based dose calculation algorithms (MBDCAs) have been introduced to address TG-43 limitations for brachytherapy. The MBDCA approach results in a paradigm shift, which will require a concerted effort to integrate them properly into the radiation therapy community. MBDCA will improve treatment planning relative to the implementation of the traditional TG-43 formalism by accounting for individualized, patient-specific radiation scatter conditions, and the radiological effect of material heterogeneities differing from water. A snapshot of the current status of MBDCA and AAPM Task Group reports related to the subject of QA recommendations for brachytherapy treatment planning is presented. Some simplified Monte Carlo simulation results are also presented to delineate the effects MBDCA are called to account for and facilitate the discussion on suggestions for (i) new QA standards to augment current societal recommendations, (ii) consideration of dose specification such as dose to medium in medium, collisional kerma to medium in medium, or collisional kerma to water in medium, and (iii) infrastructure needed to uniformly introduce these new algorithms. Suggestions in this Vision 20/20 article may serve as a basis for developing future standards to be recommended by professional societies such as the AAPM, ESTRO, and ABS toward providing consistent clinical implementation throughout the brachytherapy community and rigorous quality management of MBDCA-based treatment planning systems.
NASA Technical Reports Server (NTRS)
Meng, J. C. S.
1973-01-01
The laminar base flow field of a two-dimensional reentry body has been studied by Telenin's method. The flow domain was divided into strips along the x-axis, and the flow variations were represented by Lagrange interpolation polynomials in the transformed vertical coordinate. The complete Navier-Stokes equations were used in the near wake region, and the boundary layer equations were applied elsewhere. The boundary conditions consisted of the flat plate thermal boundary layer in the forebody region and the near wake profile in the downstream region. The resulting two-point boundary value problem of 33 ordinary differential equations was then solved by the multiple shooting method. The detailed flow field and thermal environment in the base region are presented in the form of temperature contours, Mach number contours, velocity vectors, pressure distributions, and heat transfer coefficients on the base surface. The maximum heating rate was found on the centerline, and the two-dimensional stagnation point flow solution was adequate to estimate the maximum heating rate so long as the local Reynolds number could be obtained.
SU-E-T-416: Experimental Evaluation of a Commercial GPU-Based Monte Carlo Dose Calculation Algorithm
Paudel, M R; Beachey, D J; Sarfehnia, A; Sahgal, A; Keller, B; Kim, A; Ahmad, S
2015-06-15
Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-Linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm's ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data for a 6MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic was designed/built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm³ voxel size, 0.5% statistical uncertainty) and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm², 5×5 cm², and 10×10 cm² field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% of measurements for XVMC and GPUMCD. For tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for the 2×2 cm² field size, where only the CCC algorithm differs from film by 5% in the lung region. The analyses for the bone-in-tissue and prosthetic phantoms are ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Hofbauer, Julia; Kirisits, Christian; Resch, Alexandra; Xu, Yingjie; Sturdza, Alina; Pötter, Richard
2016-01-01
Purpose: To analyze the impact of heterogeneity-corrected dose calculation on dosimetric quality parameters in gynecological and breast brachytherapy using Acuros, a grid-based Boltzmann equation solver (GBBS), and to evaluate the shielding effects of different cervix brachytherapy applicators. Material and methods: Calculations with TG-43 and Acuros were based retrospectively on computed tomography (CT), for 10 cases of accelerated partial breast irradiation and 9 cervix cancer cases treated with tandem-ring applicators. Phantom CT scans of different applicators (plastic and titanium) were acquired. For breast cases the V20Gyαβ3 to lung, the D0.1cm³, D1cm³, D2cm³ to rib, the D0.1cm³, D1cm³, D10cm³ to skin, and Dmax for all structures were reported. For cervix cases, the D0.1cm³, D2cm³ to bladder, rectum and sigmoid, and the D50, D90, D98, V100 for the CTVHR were reported. For the phantom study, surrogates for target and organ at risk were created for a similar dose volume histogram (DVH) analysis. Absorbed dose and equivalent dose in 2 Gy fractions (EQD2) were used for comparison. Results: Calculations with TG-43 overestimated the dose for all dosimetric indices investigated. For breast, a decrease of ~8% was found for D10cm³ to the skin and 5% for D2cm³ to rib, resulting in a difference of ~ -1.5 Gy EQD2 for the overall treatment. Smaller effects were found for cervix cases with the plastic applicator, with up to -2% (-0.2 Gy EQD2) per fraction for organs at risk and -0.5% (-0.3 Gy EQD2) per fraction for the CTVHR. The shielding effect of the titanium applicator resulted in a decrease of 2% in D2cm³ to the organ at risk versus 0.7% for plastic. Conclusions: Lower doses were reported when calculating with Acuros compared to TG-43. Differences in dose parameters were larger in breast cases. A lower impact on clinical dose parameters was found for the cervix cases. Applicator material causes systematic shielding effects that can be taken into account.
Lim, Hyung-Kyu; Lee, Hankyul; Kim, Hyungjun
2016-10-11
Among various models that incorporate solvation effects into first-principles-based electronic structure theory such as density functional theory (DFT), the average solvent electrostatic potential/molecular dynamics (ASEP/MD) method is particularly advantageous. This method explicitly includes the nature of complicated solvent structures that is absent in implicit solvation methods. Because the ASEP/MD method treats only solvent molecule dynamics, it requires less computational cost than the conventional quantum mechanics/molecular mechanics (QM/MM) approaches. Herein, we present a real-space rectangular grid-based method to implement the mean-field QM/MM idea of ASEP/MD to plane-wave DFT, which is termed "DFT in classical explicit solvents", or DFT-CES. By employing a three-dimensional real-space grid as a communication medium, we can treat the electrostatic interactions between the DFT solute and the ASEP sampled from MD simulations in a seamless and straightforward manner. Moreover, we couple a fast and efficient free energy calculation method based on the two-phase thermodynamic (2PT) model with our DFT-CES method, which enables direct and simultaneous computation of the solvation free energies as well as the geometric and electronic responses of a solute of interest under the solvation effect. With the aid of DFT-CES/2PT, we investigate the solvation free energies and detailed solvation thermodynamics for 17 types of organic molecules, which show good agreement with the experimental data. We further compare our simulation results with previous theoretical models and assumptions made for the development of implicit solvation models. We anticipate that our proposed method, DFT-CES/2PT, will enable vast utilization of the ASEP/MD method for investigating solvation properties of materials by using periodic DFT calculations in the future.
Liu, Miao; Rong, Ziqin; Malik, Rahul; Canepa, Pieremanuele; Jain, Anubhav; Ceder, Gerbrand; Persson, Kristin A.
2014-12-16
Batteries that shuttle multivalent ions such as Mg²⁺ and Ca²⁺ are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity, thermodynamic stability of charged and discharged states, as well as the intercalating ion mobility, and use these properties to evaluate promising directions. Our calculations indicate that the Mn₂O₄ spinel phases based on Mg and Ca are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for Al³⁺ ion migration in the Mn₂O₄ spinel is very high (~1400 meV for Al³⁺ in the dilute limit); thus, the use of an Al-based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.
Density functional wavelet calculation of solid state systems
NASA Astrophysics Data System (ADS)
Daykov, I. P.; Engeness, T. D.; Arias, T. A.
2001-03-01
We present, to our knowledge, the first all-electron wavelet calculations of the electronic structure of solids within density functional theory. To make these calculations competitive with traditional approaches, we employ recent developments in algorithms for multiresolution analysis (MRA) which speed density functional calculations by three to four orders of magnitude[1,2]. MRA provides a fully systematic, integrated treatment of core and valence electrons and is ideal for exploring the limits of the accuracy of density functional theory in the calculation of EELS spectra, which involve matrix elements between the core and valence states. We shall present results for EELS spectra as well as the resolution of technical issues which arise in carrying out solid-state calculations within a wavelet-like basis. [1] ``Multiscale computation with interpolating wavelets,'' by Ross A. Lippert, T.A. Arias and Alan Edelman, Journal of Computational Physics, 140:2, 278--310 (1 March 1998). Preprint: http://xxx.lanl.gov/abs/cond-mat/9805283 . [2] ``Multiresolution analysis of electronic structure: semicardinal and wavelet bases,'' T.A. Arias, Reviews of Modern Physics 71:1, 267--311 (January 1999). Preprint: http://xxx.lanl.gov/abs/cond-mat/9805262 .
NASA Astrophysics Data System (ADS)
Tsige, Mesfin; Bhatta, Ram; Dhinojwala, Ali
2014-03-01
Understanding acid-base interactions is important in surface science, as it helps to rationalize materials properties such as wetting, adhesion and tribology. A quantitative relation between the change in enthalpy (ΔH) and the frequency shift (Δν) during the acid-base interaction is particularly important. We investigate ΔH and Δν for twenty-five complexes of acids (methanol, ethanol, propanol, butanol and phenol) with bases (benzene, pyridine, DMSO, Et2O and THF) in CCl4 using intermolecular perturbation theory calculations. ΔH and Δν for complexes of all alcohols with bases except benzene fall in the ranges -14 kJ/mol to -28 kJ/mol and 215 cm⁻¹ to 523 cm⁻¹, respectively. Smaller values of ΔH (-2 to -6 kJ/mol) and Δν (23 to 70 cm⁻¹) are estimated for benzene. For all the studied complexes, ΔH varies linearly (R² ≥ 0.974) with Δν, yielding an average slope and intercept of 0.056 and 1.5, respectively. Linear correlations were found between theoretical and experimental values of ΔH as well as Δν, and are concurrent with the Badger-Bauer rule. This work is supported by the National Science Foundation.
NASA Astrophysics Data System (ADS)
Ono, Tomoya; Egami, Yoshiyuki; Hirose, Kikuji
2012-11-01
We demonstrate an efficient nonequilibrium Green's function transport calculation procedure based on the real-space finite-difference method. The direct inversion of matrices for obtaining the self-energy terms of electrodes is computationally demanding in the real-space method because the matrix dimension corresponds to the number of grid points in the unit cell of electrodes, which is much larger than that of sites in the tight-binding approach. The procedure using the ratio matrices of the overbridging boundary-matching technique [Y. Fujimoto and K. Hirose, Phys. Rev. B 67, 195315 (2003)], which is related to the wave functions of a couple of grid planes in the matching regions, greatly reduces the computational effort to calculate self-energy terms without losing mathematical strictness. In addition, the present procedure saves computational time to obtain the Green's function of the semi-infinite system required in the Landauer-Büttiker formula. Moreover, the compact expression to relate Green's functions and scattering wave functions, which provide a real-space picture of the scattering process, is introduced. An example of the calculated results is given for the transport property of the BN ring connected to (9,0) carbon nanotubes. The wave-function matching at the interface reveals that the rotational symmetry of wave functions with respect to the tube axis plays an important role in electron transport. Since the states coming from and going to electrodes show threefold rotational symmetry, the states in the vicinity of the Fermi level, the wave function of which exhibits fivefold symmetry, do not contribute to the electron transport through the BN ring.
Belal, Arafa A M; Zayed, M A; El-Desawy, M; Rakha, Sh M A H
2015-03-05
Three Schiff bases, AI (2-(1-hydrazonoethyl)phenol), AII (2,4-dibromo-6-(hydrazonomethyl)phenol) and AIII (2-(hydrazonomethyl)phenol), were prepared as new hydrazone compounds via condensation reactions with a 1:1 molar ratio of reactants. First, reaction of 2-hydroxyacetophenone with hydrazine hydrate gives AI. Second, condensation between 3,5-dibromo-salicylaldehyde and hydrazine hydrate gives AII. Third, condensation between salicylaldehyde and hydrazine hydrate gives AIII. The structures of AI-AIII were characterized by elemental analysis (EA), mass spectrometry (MS), FT-IR and (1)H NMR spectra, and thermal analyses (TG, DTG, and DTA). The activation thermodynamic parameters, such as ΔE(∗), ΔH(∗), ΔS(∗) and ΔG(∗), were calculated from the TG curves using the Coats-Redfern method. It is important to investigate their molecular structures to identify the active groups and weak bonds responsible for their biological activities. Consequently, in the present work, the obtained thermal (TA) and mass (MS) experimental results are confirmed by semi-empirical MO calculations (MOCS) using the PM3 procedure. Their biological activities have been tested in vitro against Escherichia coli, Proteus vulgaris, Bacillus subtilis and Staphylococcus aureus bacteria in order to assess their anti-microbial potential.
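The Coats-Redfern method extracts an activation energy from a single TG curve by linearizing ln[g(α)/T²] against 1/T. A self-contained sketch on synthetic data that obey the first-order form (g(α) = -ln(1-α)); the activation energy and the constant are invented, purely illustrative values:

```python
import numpy as np

R = 8.314        # gas constant, J mol^-1 K^-1
E_true = 120e3   # assumed activation energy (J/mol), illustrative only

# Synthetic TG data obeying the Coats-Redfern linear form for a
# first-order process: ln[-ln(1 - alpha)/T^2] = const - E/(R*T)
T = np.linspace(500.0, 700.0, 50)   # temperature grid, K
lhs = 2.0 - E_true / (R * T)        # arbitrary intercept const = 2.0

# The activation energy follows from the slope of lhs vs 1/T
slope, _ = np.polyfit(1.0 / T, lhs, 1)
E_fit = -slope * R
print(E_fit / 1e3)  # recovered activation energy in kJ/mol
```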
Toivanen, Elias A; Losilla, Sergio A; Sundholm, Dage
2015-12-21
Algorithms and working expressions for a grid-based fast multipole method (GB-FMM) have been developed and implemented. The computational domain is divided into cubic subdomains, organized in a hierarchical tree. The contribution to the electrostatic interaction energies from pairs of neighboring subdomains is computed using numerical integration, whereas the contributions from further apart subdomains are obtained using multipole expansions. The multipole moments of the subdomains are obtained by numerical integration. Linear scaling is achieved by translating and summing the multipoles according to the tree structure, such that each subdomain interacts with a number of subdomains that are almost independent of the size of the system. To compute electrostatic interaction energies of neighboring subdomains, we employ an algorithm which performs efficiently on general purpose graphics processing units (GPGPU). Calculations using one CPU for the FMM part and 20 GPGPUs consisting of tens of thousands of execution threads for the numerical integration algorithm show the scalability and parallel performance of the scheme. For calculations on systems consisting of Gaussian functions (α = 1) distributed as fullerenes from C20 to C720, the total computation time and relative accuracy (ppb) are independent of the system size.
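The split between near-field work (numerical integration for neighbouring subdomains) and cheap far-field multipole interactions is the heart of any FMM. A toy sketch of why a low-order multipole suffices for well-separated subdomains; the charges and geometry are invented, and only the lowest (monopole) term is kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented stand-ins for well-separated subdomains: compact clusters
# of point charges whose centres are a distance 10 apart
qa = rng.uniform(0.5, 1.0, 50)
ra = rng.normal([0.0, 0.0, 0.0], 0.1, (50, 3))
qb = rng.uniform(0.5, 1.0, 50)
rb = rng.normal([10.0, 0.0, 0.0], 0.1, (50, 3))

# Direct pairwise sum: the costly route reserved for neighbouring subdomains
E_direct = sum(qi * qj / np.linalg.norm(ri - rj)
               for qi, ri in zip(qa, ra) for qj, rj in zip(qb, rb))

# Lowest-order multipole (monopole) term: total charge at the centre of
# charge, the kind of cheap far-field interaction the tree exploits
Qa, Qb = qa.sum(), qb.sum()
ca = (qa[:, None] * ra).sum(axis=0) / Qa
cb = (qb[:, None] * rb).sum(axis=0) / Qb
E_mono = Qa * Qb / np.linalg.norm(ca - cb)
```

For clusters this well separated the monopole term already agrees with the direct sum to well under a percent, which is why the tree only needs exact work for neighbours.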
Mishra, Sandeep Kumar; Suryaprakash, N
2015-06-21
The rare examples of intramolecular hydrogen bonds (HB) of the type the N-H∙∙∙F-C, detected in a low polarity solvent in the derivatives of hydrazides, by utilizing one and two-dimensional solution state multinuclear NMR techniques, are reported. The observation of through-space couplings, such as, (1h)JFH, and (1h)JFN, provides direct evidence for the existence of intra-molecular HB. Solvent induced perturbations and the variable temperature NMR experiments unambiguously establish the presence of intramolecular HB. The existence of multiple conformers in some of the investigated molecules is also revealed by two dimensional HOESY and (15)N-(1)H HSQC experiments. The (1)H DOSY experimental results discard any possibility of self or cross dimerization of the molecules. The derived NMR experimental results are further substantiated by Density Function Theory (DFT) based Non Covalent Interaction (NCI), and Quantum Theory of Atom in Molecule (QTAIM) calculations. The NCI calculations served as a very sensitive tool for detection of non-covalent interactions and also confirm the presence of bifurcated HBs.
García-Jacas, César R; Aguilera-Mendoza, Longendri; González-Pérez, Reisel; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Avdeenko, Tatiana
2015-01-01
The present report introduces a novel module of the QuBiLS-MIDAS software for the distributed computation of the 3D Multi-Linear algebraic molecular indices. The main motivation for developing this module is to deal with the computational complexity experienced during the calculation of the descriptors over large datasets. To accomplish this task, a multi-server computing platform named T-arenal was developed, which is suited for institutions with many workstations interconnected through a local network and without resources particularly destined for computation tasks. This new system was deployed in 337 workstations and it was perfectly integrated with the QuBiLS-MIDAS software. To illustrate the usability of the T-arenal platform, performance tests over a dataset comprised of 15 000 compounds are carried out, yielding a 52 and 60 fold reduction in the sequential processing time for the 2-Linear and 3-Linear indices, respectively. Therefore, it can be stated that the T-arenal based distribution of computation tasks constitutes a suitable strategy for performing high-throughput calculations of 3D Multi-Linear descriptors over thousands of chemical structures for posterior QSAR and/or ADME-Tox studies.
Niedz, Randall P.
2016-01-01
ARS-Media for Excel is an ion solution calculator that uses “Microsoft Excel” to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Excel’s Solver add-on solves the linear programming equation to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least 2 types of problems– 1) formulating relevant ecological/biological ionic solutions such as those from a specific lake, soil, cell, tissue, or organ and, 2) designing ion confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and, 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line. PMID:27812202
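The salt-recipe problem described here is a standard linear program: a stoichiometric matrix maps nonnegative salt amounts to ion totals, and the solver finds amounts that hit the targets exactly. A sketch with an invented five-ion/five-salt system, using SciPy's `linprog` in place of Excel's Solver add-on:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix (invented example).
# Rows: Na+, K+, Ca2+, NO3-, Cl-; columns: NaCl, KCl, CaCl2, KNO3, NaNO3
A = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [1, 1, 2, 0, 0]], dtype=float)

# Target ion totals (mmol), chosen to be charge-consistent and feasible
b = np.array([2.0, 1.0, 0.5, 1.0, 3.0])

# Minimise total moles of salt subject to hitting every ion target exactly,
# with nonnegative salt amounts
res = linprog(c=np.ones(5), A_eq=A, b_eq=b, bounds=[(0, None)] * 5)
print(res.x)  # one feasible recipe (moles of each salt)
```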
Ma, Wei; Huang, Chuanbo; Zhou, Yuan; Li, Jianwei; Cui, Qinghua
2017-01-01
The microbiota colonizing the human body is renowned as “a forgotten organ” due to its profound impact on human health and disease. Recently, microbiome studies have identified a large number of microbes differentially regulated in a variety of conditions, such as disease and diet. However, methods for discovering biological patterns in the differentially regulated microbes are still limited. For this purpose, here, we developed a web-based tool named MicroPattern to discover biological patterns for a list of microbes. In addition, MicroPattern implemented and integrated an algorithm we previously presented for the calculation of disease similarity based on disease-microbe association data. MicroPattern first grouped microbes into different sets based on the associated diseases and the colonized positions. Then, for a given list of microbes, MicroPattern performed enrichment analysis of the given microbes on all of the microbe sets. Moreover, using MicroPattern, we can also calculate disease similarity based on the shared microbe associations. Finally, we confirmed the accuracy and usefulness of MicroPattern by applying it to the changed microbes under the animal-based diet condition. MicroPattern is freely available at http://www.cuilab.cn/micropattern. PMID:28071710
Meirovitch, Hagai
2010-01-01
The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P_i^B, while the value of P_i^B is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ≈ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members have rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach has led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently
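The core idea, that a configuration grown step-by-step with known transition probabilities has a known construction probability and hence a known entropy, can be shown with a toy two-state chain (the model and its TP value are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.7  # assumed transition probability: the next spin repeats the previous one

def grow_chain(n):
    """Grow a two-state chain step-by-step and return the log of its
    construction probability (the product of the transition probabilities)."""
    spin, logp = 1, 0.0
    for _ in range(n - 1):
        same = rng.random() < p
        spin = spin if same else -spin
        logp += np.log(p if same else 1.0 - p)
    return logp

# Entropy per step from the construction probabilities of a sample of chains
n_steps, n_chains = 1000, 200
S = -np.mean([grow_chain(n_steps) for _ in range(n_chains)]) / (n_steps - 1)

# For this toy model the exact answer is the binary entropy of p
S_exact = -(p * np.log(p) + (1 - p) * np.log(1 - p))
```

The sampled estimate converges to the exact per-step entropy, illustrating why a growth procedure with explicit TPs gives direct access to S and F where MC/MD alone do not.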
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Raghuwanshi, Sanjeev Kumar
2016-06-01
Optical switching is one of the most essential phenomena in the optical domain. Electro-optic effect-based switching can be used to generate effective combinational and sequential logic circuits. Processing digital computation in the optical domain carries the considerable advantages of optical communication technology, e.g. immunity to electro-magnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the concepts of the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed to specify optimized device parameters with respect to performance-affecting quantities such as crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.
NASA Astrophysics Data System (ADS)
Greco, Cristina; Yiang, Ying; Kremer, Kurt; Chen, Jeff; Daoulas, Kostas
Polymer liquid crystals, apart from traditional applications as high-strength materials, are important for new technologies, e.g. Organic Electronics. Their studies often invoke mesoscale models, parameterized to reproduce thermodynamic properties of the real material. Such top-down strategies require advanced simulation techniques that accurately predict the thermodynamics of mesoscale models as a function of characteristic features and parameters. Here a recently developed model describing nematic polymers as worm-like chains interacting with soft directional potentials is considered. We present a special thermodynamic integration scheme delivering free energies in particle-based Monte Carlo simulations of this model, avoiding thermodynamic singularities. Conformational and structural properties, as well as Helmholtz free energies, are reported as a function of interaction strength. They are compared with state-of-the-art SCF calculations invoking a continuum analog of the same model, demonstrating the role of liquid packing and fluctuations.
Evaluation of signal energy calculation methods for a light-sharing SiPM-based PET detector
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Liu, Yaqiang; Wang, Shi; Gu, Yu
2017-03-01
Signals of a light-sharing positron emission tomography (PET) detector are commonly multiplexed into three analog pulses (E, X, and Y) and then digitally sampled. From this procedure, the signal energy, which is critical to detector performance, is obtained. In this paper, different signal energy calculation strategies for a self-developed SiPM-based PET detector, including pulse height and different integration methods, are evaluated in terms of energy resolution and spread of the crystal response in the flood histogram using a root-mean-squared (RMS) index. Results show that the integration methods outperform the pulse height. Integration using the maximum derivative value of the pulse E as the landmark point and 28 integrated points (448 ns) has the best performance among the evaluated methods for our detector. Detector performance in terms of energy and position is improved with this integration method. The proposed methodology is expected to be applicable to other light-sharing PET detectors.
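The winning estimator, integrating a fixed window that starts at the maximum-derivative landmark of the E pulse, is easy to state in code. The pulse below is synthetic, and the 16 ns sample spacing is only inferred from the numbers quoted above (28 points spanning 448 ns):

```python
import numpy as np

def integrate_energy(pulse, n_points=28):
    """Sum n_points samples starting at the landmark, defined here as the
    sample with the maximum first difference (steepest rising edge)."""
    start = int(np.argmax(np.diff(pulse)))
    return float(np.sum(pulse[start:start + n_points]))

# Synthetic pulse: flat baseline, sharp rise at sample 10, exponential decay
t = np.arange(100)
pulse = np.where(t < 10, 0.0, np.exp(-(t - 10) / 20.0))
e = integrate_energy(pulse)  # energy estimate from the 28-sample window
```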
NASA Astrophysics Data System (ADS)
Jiang, Teng; Wang, Long; Zhang, Sui; Sun, Ping-Chuan; Ding, Chuan-Fan; Chu, Yan-Qiu; Zhou, Ping
2011-10-01
Curcumin has been recognized as a potential natural drug to treat Alzheimer's disease (AD) by chelating baleful metal ions, scavenging radicals and preventing the amyloid β (Aβ) peptides from aggregation. In this paper, Al(III)-curcumin complexes were synthesized and characterized by liquid-state 1H, 13C and 27Al nuclear magnetic resonance (NMR), mass spectroscopy (MS), ultraviolet spectroscopy (UV) and generalized 2D UV-UV correlation spectroscopy. In addition, density functional theory (DFT)-based UV and chemical shift calculations were also performed to gain insight into the structures and properties of curcumin and its complexes. It was revealed that curcumin can interact strongly with the Al(III) ion and form three types of complexes under different molar ratios of [Al(III)]/[curcumin], which would restrain the interaction of Al(III) with the Aβ peptide, reducing the toxic effect of Al(III) on the peptide.
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
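The sliding-window alternative to a single quadratic fit can track time-varying acceleration. A sketch on a synthetic, noise-free sea-level series with a known constant acceleration; the window length and rates are invented:

```python
import numpy as np

def sliding_acceleration(t, sl, window):
    """Acceleration time series from quadratic fits in a sliding window:
    twice the quadratic coefficient of each local fit, as opposed to a
    single quadratic fitted to the entire record."""
    acc = []
    for i in range(len(t) - window):
        coeffs = np.polyfit(t[i:i + window], sl[i:i + window], 2)
        acc.append(2.0 * coeffs[0])
    return np.array(acc)

t = np.linspace(0.0, 100.0, 1001)      # years
a_true = 0.01                           # assumed acceleration, mm/yr^2
sl = 3.0 * t + 0.5 * a_true * t**2      # sea level: 3 mm/yr trend + curvature
acc = sliding_acceleration(t, sl, window=200)
```

On noise-free data every window recovers the true acceleration; with real tide-gauge noise the window length trades temporal resolution against variance, which is the robustness question the study examines.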
Inoue, R; Hiraga, F; Kiyanagi, Y
2014-06-01
Accelerator-based BNCT has been desired because of its therapeutic convenience. However, the optimal design of a neutron moderator system is still an open issue. Therefore, detailed studies on the materials constituting the moderator system are necessary to obtain the optimal condition. In this study, the epithermal neutron flux and the RBE dose have been calculated as indicators in the search for optimal filter and moderator materials. As a result, it was found that a combination of an MgF2 moderator with an Fe filter gave the best performance, and the moderator system gave a dose ratio greater than 3 and an epithermal neutron flux over 1.0×10⁹ cm⁻² s⁻¹.
Freitas, Jair C. C.; Scopel, Wanderlã L.; Paz, Wendel S.; Bernardes, Leandro V.; Cunha-Filho, Francisco E.; Speglich, Carlos; Araújo-Moreira, Fernando M.; Pelc, Damjan; Cvitanić, Tonči; Požek, Miroslav
2015-01-01
The prospect of carbon-based magnetic materials is of immense fundamental and practical importance, and information on atomic-scale features is required for a better understanding of the mechanisms leading to carbon magnetism. Here we report the first direct detection of the microscopic magnetic field produced at 13C nuclei in a ferromagnetic carbon material by zero-field nuclear magnetic resonance (NMR). Electronic structure calculations carried out in nanosized model systems with different classes of structural defects show a similar range of magnetic field values (18–21 T) for all investigated systems, in agreement with the NMR experiments. Our results are strong evidence of the intrinsic nature of defect-induced magnetism in magnetic carbons and establish the magnitude of the hyperfine magnetic field created in the neighbourhood of the defects that lead to magnetic order in these materials. PMID:26434597
Yasuda, H.; Hosako, I.
2015-03-16
We investigate the performance of terahertz quantum cascade lasers (THz-QCLs) based on AlxGa1-xAs/AlyGa1-yAs and GaSb/AlGaSb material systems to realize higher-temperature operation. Calculations with the non-equilibrium Green's function method reveal that the AlGaAs-well-based THz-QCLs do not show improved performance, mainly because of alloy scattering in the ternary compound semiconductor. The GaSb-based THz-QCLs offer clear advantages over GaAs-based THz-QCLs. Weaker longitudinal optical phonon-electron interaction in GaSb produces higher peaks in the spectral functions of the lasing levels, which enables more electrons to be accumulated in the upper lasing level.
Makarov, Yuri V.; Du, Pengwei; Pai, M. A.; McManus, Bart
2014-01-14
The variability and uncertainty of wind power production require increased flexibility in power systems, or more operational reserves to maintain a satisfactory level of reliability. The incremental increase in reserve requirement caused by wind power is often studied separately from the effects of loads. Accordingly, the cost of procuring reserves is allocated based on this simplification rather than a fair and transparent calculation of the different resources' contribution to the reserve requirement. This work proposes a new allocation mechanism for the intermittency and variability of resources regardless of their type. It is based on a new formula, called the grid balancing metric (GBM). The proposed GBM has several distinct features: 1) it is directly linked to the control performance standard (CPS) scores and interconnection frequency performance, 2) it provides scientifically defined allocation factors for individual resources, 3) the sum of allocation factors within any group of resources is equal to the group's collective allocation factor (linearity), and 4) it distinguishes helpers and harmers. The paper illustrates and provides results of the new approach based on actual transmission system operator (TSO) data.
NASA Astrophysics Data System (ADS)
Rodriguez Frias, Marco A.; Yang, Wuqiang
2017-04-01
Image reconstruction for electrical capacitance tomography is a challenging task due to the severely underdetermined nature of the inverse problem. A model-based algorithm tackles this problem by reducing the number of unknowns to be calculated from the limited number of independent measurements. The conventional model-based algorithm is implemented with a finite element method to solve the forward problem at each iteration and can produce good results. However, it is time-consuming, and hence the algorithm can be used for off-line image reconstruction only. In this paper, a solution to this limitation is proposed. The model-based algorithm is implemented with a database containing a set of previously solved forward problems. In this way, the time required to perform image reconstruction is drastically reduced without sacrificing accuracy, and real-time image reconstruction is achieved with up to 100 frames s⁻¹. Further enhancement in speed may be accomplished by implementing the reconstruction algorithm on a parallel-processing general purpose graphics processing unit.
Na, Y; Kapp, D; Kim, Y; Xing, L; Suh, T
2014-06-01
Purpose: To report the first experience of developing a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization on the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and computational efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold depending on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for the lung case, and 1.0 ≤ PRs ≤ 10.3 for the prostate case) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1
Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework
Berger, Daniel; Oberhofer, Harald; Reuter, Karsten; Logsdail, Andrew J.; Farrow, Matthew R.; Catlow, C. Richard A.; Sokol, Alexey A.; Sherwood, Paul; Blum, Volker
2014-07-14
We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).
Held, Mareike; Sneed, Penny K; Fogh, Shannon E; Pouliot, Jean; Morin, Olivier
2015-11-08
Unlike scheduled radiotherapy treatments, treatment planning time and resources are limited for emergency treatments. Consequently, plans are often simple 2D image-based treatments that lag behind technical capabilities available for nonurgent radiotherapy. We have developed a novel integrated urgent workflow that uses onboard MV CBCT imaging for patient simulation to improve planning accuracy and reduce the total time for urgent treatments. This study evaluates both MV CBCT dose planning accuracy and novel urgent workflow feasibility for a variety of anatomic sites. We sought to limit local mean dose differences to less than 5% compared to conventional CT simulation. To improve dose calculation accuracy, we created separate Hounsfield unit-to-density calibration curves for regular and extended field-of-view (FOV) MV CBCTs. We evaluated dose calculation accuracy on phantoms and four clinical anatomical sites (brain, thorax/spine, pelvis, and extremities). Plans were created for each case and dose was calculated on both the CT and MV CBCT. All steps (simulation, planning, setup verification, QA, and dose delivery) were performed in one 30 min session using phantoms. The monitor units (MU) for each plan were compared and dose distribution agreement was evaluated using mean dose difference over the entire volume and gamma index on the central 2D axial plane. All whole-brain dose distributions gave gamma passing rates higher than 95% for 2%/2 mm criteria, and pelvic sites ranged between 90% and 98% for 3%/3 mm criteria. However, thoracic spine treatments produced gamma passing rates as low as 47% for 3%/3 mm criteria. Our novel MV CBCT-based dose planning and delivery approach was feasible and time-efficient for the majority of cases. Limited MV CBCT FOV precluded workflow use for pelvic sites of larger patients and resulted in image clearance issues when tumor position was far off midline. The agreement of calculated MU on CT and MV CBCT was acceptable for all
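The gamma-index comparison quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D global-gamma sketch (the study evaluates the central 2D axial plane; the profiles here are invented Gaussians):

```python
import numpy as np

def gamma_pass_rate(x, ref, ev, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma index: for each reference point, minimise the
    combined dose-difference/distance metric over the evaluated profile;
    a point passes when that minimum is <= 1 (here 3%/3 mm criteria)."""
    gam = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / (dose_tol * ref.max())  # global dose normalisation
        dx = (x - xi) / dist_tol                 # distance term, mm
        gam[i] = np.sqrt(dx**2 + dd**2).min()
    return float(np.mean(gam <= 1.0))

x = np.linspace(-50.0, 50.0, 201)                # position, mm
ref = np.exp(-x**2 / (2 * 15.0**2))              # reference dose profile
ev = np.exp(-(x - 1.0)**2 / (2 * 15.0**2))       # evaluated, shifted by 1 mm
rate = gamma_pass_rate(x, ref, ev)
```

A 1 mm shift is well inside the 3 mm distance tolerance, so the profile passes everywhere; a shift beyond the tolerance would drive the passing rate down, which is the behaviour the thoracic-spine results above reflect.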
Wilkinson, P L
1979-06-01
Assessing and modifying oxygen transport are major parts of ICU patient management. Determination of base excess, blood oxygen saturation and content, dead space ventilation, and P50 helps in this management. A program is described for determining these variables using a TI-59 programmable calculator and PC-100A printer. Each variable can be independently calculated without running the whole program. The calculator-printer's small size, low cost, and hard copy printout make it a valuable and versatile tool for calculating physiological variables. The program is easily entered by and stored on magnetic card, and prompts the user to enter the appropriate variables, making it easy to run by untrained personnel.
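Among the variables listed, arterial oxygen content has a simple closed form. A sketch using the standard textbook formula (1.34 mL O₂ per g Hb binding capacity, 0.0031 mL/dL/mmHg dissolved); the abstract does not state which constants the 1979 program used, so these are assumptions:

```python
def oxygen_content(hb_g_dl, sao2_frac, pao2_mmhg):
    """Arterial O2 content (mL O2/dL): haemoglobin-bound plus dissolved.
    Uses the commonly cited constants 1.34 mL/g and 0.0031 mL/dL/mmHg;
    the original program's exact values are not given in the abstract."""
    return 1.34 * hb_g_dl * sao2_frac + 0.0031 * pao2_mmhg

# e.g. Hb 15 g/dL, SaO2 98%, PaO2 100 mmHg -> about 20 mL O2/dL
cao2 = oxygen_content(15.0, 0.98, 100.0)
```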
Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte
2016-01-01
Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. Results: This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short assay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and – in analogy to impact factors and third party grants – a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds. PMID:27275509
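The ingredients named above (item difficulty, item discrimination, reliability) have standard psychometric definitions; the abstract does not give the exact weighting formula, so this is only a sketch of the inputs, computed on an invented 0/1 response matrix:

```python
import numpy as np

def exam_stats(resp):
    """resp: students x items matrix of 0/1 answers.
    Returns item difficulty (proportion correct), item discrimination
    (correlation with the rest-score, i.e. total minus the item itself,
    to avoid self-correlation) and KR-20 reliability."""
    n_students, k = resp.shape
    difficulty = resp.mean(axis=0)
    total = resp.sum(axis=1)
    disc = np.array([np.corrcoef(resp[:, j], total - resp[:, j])[0, 1]
                     for j in range(k)])
    pq = difficulty * (1.0 - difficulty)
    kr20 = (k / (k - 1)) * (1.0 - pq.sum() / total.var(ddof=0))
    return difficulty, disc, kr20

# Invented data: probability of a correct answer rises with student ability
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
resp = (rng.random((200, 8)) < 1.0 / (1.0 + np.exp(-ability[:, None]))).astype(int)
difficulty, disc, kr20 = exam_stats(resp)
```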
Zhang, Xueli; Gong, Xuedong
2014-08-04
Nitrogen-rich heterocyclic bases and oxygen-rich acids react to produce energetic salts with potential application in the field of composite explosives and propellants. In this study, 12 salts formed by the reaction of the bases 4-amino-1,2,4-triazole (A), 1-amino-1,2,4-triazole (B), and 5-aminotetrazole (C) with the acids HNO3 (I), HN(NO2)2 (II), HClO4 (III), and HC(NO2)3 (IV) are studied using DFT calculations at the B97-D/6-311++G** level of theory. For the reactions with the same base, those of HClO4 are the most exothermic and spontaneous, and the most negative ΔrGm in the formation reaction also corresponds to the highest decomposition temperature of the resulting salt. The ability of anions and cations to form hydrogen bonds decreases in the order NO3(-) > N(NO2)2(-) > ClO4(-) > C(NO2)3(-), and C(+) > B(+) > A(+). In particular, these different cation abilities are mainly due to their different conformations and charge distributions. For the salts with the same anion, a larger total hydrogen-bond energy (EH,tot) leads to a higher melting point. The ordering of cations and anions by charge transfer (q), second-order perturbation energy (E2), and binding energy (Eb) is the same as that by EH,tot, so larger q leads to larger E2, Eb, and EH,tot. All salts have similar frontier orbital distributions, and their HOMO and LUMO are derived from the anion and the cation, respectively. The molecular orbital shapes are retained as the ions form a salt. To produce energetic salts, 5-aminotetrazole and HClO4 are the preferred base and acid, respectively.
Bubin, Sergiy; Sharkey, Keeper L.; Adamowicz, Ludwik
2013-04-28
Very accurate variational nonrelativistic finite-nuclear-mass calculations employing all-electron explicitly correlated Gaussian basis functions are carried out for six Rydberg ²D states (1s²nd, n = 6, …, 11) of the ⁷Li and ⁶Li isotopes. The exponential parameters of the Gaussian functions are optimized using the variational method with the aid of the analytical energy gradient determined with respect to these parameters. The experimental results for the lower states (n = 3, …, 6) and the calculated results for the higher states (n = 7, …, 11) fitted with quantum-defect-like formulas are used to predict the energies of ²D 1s²nd states for ⁷Li and ⁶Li with n up to 30.
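The quantum-defect-like fitting mentioned above typically uses the Rydberg series form E(n) = E_limit − R_eff/(n − δ)². The sketch below is illustrative only (function names, units, and the single-level inversion are assumptions; the paper's actual fit may use a more elaborate n-dependent defect):

```python
from math import sqrt

def quantum_defect_energy(e_limit, rydberg_eff, n, delta):
    """Rydberg-series energy: E(n) = E_limit - R_eff / (n - delta)^2."""
    return e_limit - rydberg_eff / (n - delta) ** 2

def quantum_defect_from_level(e_limit, rydberg_eff, n, e_n):
    """Invert the series formula for the quantum defect delta,
    given one known level energy e_n."""
    return n - sqrt(rydberg_eff / (e_limit - e_n))
```

Once δ is fitted from known low-n levels, the same formula extrapolates the series to high n (here up to n = 30).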
Kraemer, Philipp; Gerlach, Gabriele
2017-03-09
The Demerelate package offers algorithms to calculate different inter-individual relatedness measurements. Three different allele-sharing indices, five pairwise weighted estimates of relatedness, and four pairwise weighted estimates with sample-size correction are implemented to analyze kinship structures within populations. Statistics are based on randomization tests: modeling relatedness coefficients by logistic regression, modeling relatedness against geographic distance by Mantel correlation, and comparing mean relatedness between populations using pairwise t-tests. Demerelate provides an advance on previous software packages by including some estimators not available in R to date, along with FIS, as well as by combining analysis of relatedness and spatial structuring. A UPGMA tree visualizes genetic relatedness among individuals. Additionally, Demerelate summarizes information on datasets (allele vs. genotype frequencies; heterozygosity; FIS values). Demerelate is, to our knowledge, the first R package implementing basic allele-sharing indices such as Blouin's Mxy relatedness, the estimator of Wang corrected for sample size (wangxy), and estimators based on Moran's I adapted to genetic relatedness, as well as combining all estimators with geographic information. The R environment enables users to better understand relatedness within populations due to the flexibility of Demerelate in accepting different datasets as empirical data, reference data, and geographical data, and by providing intermediate results. Each statistic and tool can be used separately, which helps to understand the suitability of the data for relatedness analysis, and can be easily implemented in custom pipelines. This article is protected by copyright. All rights reserved.
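Blouin's Mxy, the simplest of the allele-sharing indices mentioned, counts the alleles two diploid individuals share at each locus (0, 1, or 2), divides by 2, and averages over loci. A minimal sketch (Demerelate itself is an R package; this Python version and its function names are illustrative only):

```python
def shared_alleles(geno_x, geno_y):
    """Number of alleles in genotype x that can be matched one-to-one
    to alleles in genotype y (0, 1, or 2 for diploid genotypes)."""
    remaining = list(geno_y)
    count = 0
    for allele in geno_x:
        if allele in remaining:
            remaining.remove(allele)
            count += 1
    return count

def blouin_mxy(ind_x, ind_y):
    """Blouin's Mxy allele-sharing index, averaged over loci.
    Each individual is a list of (allele1, allele2) tuples, one per locus."""
    per_locus = [shared_alleles(gx, gy) / 2 for gx, gy in zip(ind_x, ind_y)]
    return sum(per_locus) / len(per_locus)
```

Identical multilocus genotypes give Mxy = 1, unrelated genotypes with no shared alleles give 0.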
NASA Astrophysics Data System (ADS)
Elliott, S. D.; Dey, G.; Maimaiti, Y.
2017-02-01
Reaction cycles for the atomic layer deposition (ALD) of metals are presented, based on the incomplete data that exist about their chemical mechanisms, particularly from density functional theory (DFT) calculations. ALD requires self-limiting adsorption of each precursor, which results from exhaustion of adsorbates from previous ALD pulses and possibly from inactivation of the substrate through adsorption itself. Where the latter reaction does not take place, an "abbreviated cycle" still gives self-limiting ALD, but at a much reduced rate of deposition. Here, for example, ALD growth rates are estimated for abbreviated cycles in H2-based ALD of metals. A wide variety of other processes for the ALD of metals are also outlined and then classified according to which reagent supplies electrons for reduction of the metal. Detailed results on computing the mechanism of copper ALD by transmetallation are summarized and shown to be consistent with experimental growth rates. Potential routes to the ALD of other transition metals by using complexes of non-innocent diazadienyl ligands as metal sources are also evaluated using DFT.
NASA Astrophysics Data System (ADS)
Jin, Xin; Nie, Rencan; Zhou, Dongming; Yao, Shaowen; Chen, Yanyan; Yu, Jiefu; Wang, Quan
2016-11-01
A novel method for calculating DNA sequence similarity is proposed, based on a simplified pulse-coupled neural network (S-PCNN) and Huffman coding. In this study, we propose a coding method based on Huffman coding, in which the triplet code is used as the coding unit to transform a DNA sequence into a numerical sequence. The proposed method uses the firing characteristics of S-PCNN neurons on the DNA sequence to extract features, and it can deal with DNA sequences of different lengths. First, according to the characteristics of the S-PCNN and the DNA primary sequence, the latter is encoded using the Huffman coding method; then, using the former, the oscillation time sequence (OTS) of the encoded DNA sequence is extracted. Simultaneously, the relevant features are obtained, and finally the similarities or dissimilarities of the DNA sequences are determined by Euclidean distance. In order to verify the accuracy of this method, different data sets were used for testing. The experimental results show that the proposed method is effective.
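The front end of this pipeline, Huffman-encoding a DNA sequence triplet by triplet and then comparing fixed-length feature vectors by Euclidean distance, can be sketched as below. This is a loose illustration under stated assumptions: the S-PCNN oscillation-time-sequence features are beyond a short sketch, so a simple (mean, variance) summary of per-codon code lengths stands in for them, and all function names are hypothetical.

```python
import heapq
from collections import Counter
from math import sqrt

def codons(seq):
    """Split a DNA string into successive, non-overlapping triplets."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

def huffman_codes(symbols):
    """Map each distinct symbol to a prefix-free bit string (Huffman code)."""
    counts = Counter(symbols)
    if len(counts) == 1:  # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    # heap entries: (frequency, unique tiebreaker, {symbol: partial code})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def features(seq):
    """Fixed-length feature vector (mean, variance of per-codon code lengths),
    so that sequences of different lengths remain comparable."""
    cs = codons(seq)
    codes = huffman_codes(cs)
    lengths = [len(codes[c]) for c in cs]
    m = sum(lengths) / len(lengths)
    v = sum((x - m) ** 2 for x in lengths) / len(lengths)
    return (m, v)

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```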
NASA Astrophysics Data System (ADS)
Achmad, Tria Laksana; Fu, Wenxiang; Chen, Hao; Zhang, Chi; Yang, Zhi-Gang
2017-01-01
The main idea of alloy design is to reduce the costs and time required by the traditional trial-and-error method, so finding new ways to improve the efficiency of alloy design is necessary. In this study, we propose a new approach to the design of Co-based alloys. It is based on the concept that lowering the ratio of stable to unstable stacking fault energy (SFE) could significantly increase the tendency toward partial-dislocation accumulation and the FCC-to-HCP phase transformation, and thereby enhance mechanical properties. Thanks to advances in computing techniques, first-principles density-functional-theory (DFT) calculations are capable of providing highly accurate structural modeling at the atomic scale without any experimental data. The first-principles results show that the addition of some transition-metal (Cr, Mo, W, Re, Os, Ir) and rare-earth (Sc, Y, La, Sm) alloying elements would decrease both the stable and unstable SFE of pure Co. The dominant deformation mechanism of binary Co-4.5 at.% X (X = alloying element) is extended partial dislocation. Our study reveals Re, W, Mo, and La as the most promising alloying additions for the design of Co-based alloys with superior performance. Furthermore, the underlying mechanisms of the SFE reduction can be explained in terms of the electronic structure.
NASA Astrophysics Data System (ADS)
Nomura, Kazuya; Hoshino, Ryota; Hoshiba, Yasuhiro; Danilov, Victor I.; Kurita, Noriyuki
2013-04-01
We investigated transition states (TS) between the wobble Guanine-Thymine (wG-T) and tautomeric G-T base pairs, as well as Br-containing base pairs, by MP2 and density functional theory (DFT) calculations. The obtained TS between wG-T and G*-T (the asterisk denotes the enol form of a base) is different from the TS obtained in a previous DFT calculation. The activation energy (17.9 kcal/mol) evaluated by our calculation is significantly smaller than that (39.21 kcal/mol) obtained in the previous calculation, indicating that our TS is more favorable. In contrast, the obtained TS and activation energy between wG-T and G-T* are similar to those obtained in the previous DFT calculation. We furthermore found that the activation energy between wG-BrU and tautomeric G-BrU is smaller than that between wG-T and tautomeric G-T. This result indicates that replacing the CH3 group of T with Br increases the probability of the transition reaction producing the enol-form G* and T* bases. Because G* prefers to bind to T rather than to C, and T* to G rather than to A, our calculated results reveal that the spontaneous mutation from C to T or from A to G is accelerated by the introduction of the wG-BrU base pair.
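The practical weight of the activation-energy difference above (17.9 vs. 39.21 kcal/mol) can be gauged with the Arrhenius/Boltzmann factor. A minimal sketch, assuming equal pre-exponential factors (a simplification; the function name is hypothetical):

```python
from math import exp

R_KCAL = 1.987204e-3  # gas constant in kcal/(mol*K)

def arrhenius_ratio(ea_1, ea_2, temperature_k=298.15):
    """Ratio k1/k2 of rate constants for two activation energies (kcal/mol),
    assuming equal pre-exponential factors: k1/k2 = exp(-(Ea1 - Ea2)/(R*T))."""
    return exp(-(ea_1 - ea_2) / (R_KCAL * temperature_k))
```

At room temperature the lower barrier corresponds to a rate many orders of magnitude faster, which is why the choice of TS matters.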
NASA Astrophysics Data System (ADS)
Mendonca, J.; Strong, K.; Sung, K.; Devi, V. M.; Toon, G. C.; Wunch, D.; Franklin, J. E.
2017-03-01
A quadratic-speed-dependent Voigt line shape (qSDV) with line mixing (qSDV+LM), together with spectroscopic line parameters from Devi et al. [1,2] for the 2v3 band of CH4, was used to retrieve total columns of CH4 from atmospheric solar absorption spectra. The qSDV line shape (Tran et al., 2013) [3] with line mixing (Lévy et al., 1992) [4] was implemented into the forward model of GFIT (the retrieval algorithm that is at the heart of the GGG software (Wunch et al., 2015) [5]) to calculate CH4 absorption coefficients. High-resolution laboratory spectra of CH4 were used to assess absorption coefficients calculated using a Voigt line shape and spectroscopic parameters from the atm line list (Toon, 2014) [6]. The same laboratory spectra were used to test absorption coefficients calculated using the qSDV+LM line shape with spectroscopic line parameters from Devi et al. [1,2] for the 2v3 band of CH4 and a Voigt line shape for lines that don't belong to the 2v3 band. The spectral line list for lines outside the 2v3 band is an amalgamation of multiple spectral line lists. We found that for the P, Q, and R branches of the 2v3 band, the qSDV+LM simulated the laboratory spectra better than the Voigt line shape. The qSDV+LM was also used in the spectral fitting of high-resolution solar absorption spectra from four ground-based remote sensing sites and compared to spectra fitted with a Voigt line shape. The average root mean square (RMS) residual for 131,124 solar absorption spectra fitted with absorption coefficients calculated using the qSDV+LM for the 2v3 band of CH4 and the new spectral line list for lines outside the 2v3 band was reduced in the P, Q, and R branches by 5%, 13%, and 3%, respectively, when compared with spectra fitted using a Voigt line shape and the atm line list. We found that the average total column of CH4 retrieved from these 131,124 spectra with the qSDV+LM was 1.1±0.3% higher than the retrievals performed using a
NASA Astrophysics Data System (ADS)
Chu, Iek-Heng; Trinastic, Jonathan P.; Wang, Yun-Peng; Eguiluz, Adolfo G.; Kozhevnikov, Anton; Schulthess, Thomas C.; Cheng, Hai-Ping
2016-03-01
The GW approximation is a well-known method to improve electronic structure predictions calculated within density functional theory. In this work, we have implemented a computationally efficient GW approach that calculates central properties within the Matsubara-time domain using a modified version of elk, the full-potential linearized augmented plane wave (FP-LAPW) package. The continuous-pole expansion (CPE), a recently proposed analytic continuation method, has been incorporated and compared to the widely used Padé approximation. Full crystal symmetry has been employed for computational speedup. We have applied our approach to 18 well-studied semiconductors/insulators that cover a wide range of band gaps, computed at the levels of single-shot G0W0, partially self-consistent GW0, and fully self-consistent GW (full-GW), in conjunction with the diagonal approximation. Our calculations show that G0W0 leads to band gaps that agree well with experiment for the case of simple s-p electron systems, whereas full-GW is required for improving the band gaps in 3d electron systems. In addition, GW0 almost always predicts larger band-gap values than full-GW, likely due to the substantial underestimation of screening effects as well as the diagonal approximation. Both the CPE method and the Padé approximation lead to similar band gaps for most systems except strontium titanate, suggesting that further investigation into the latter approximation is necessary for strongly correlated systems. Moreover, the calculated cation d-band energies suggest that both full-GW and GW0 lead to results in good agreement with experiment. Our computed band gaps serve as important benchmarks for the accuracy of the Matsubara-time GW approach.
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram
2015-12-28
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but request the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
NASA Astrophysics Data System (ADS)
Liendo Sanchez, A. K.; Rojas, R.
2013-05-01
Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors that describe human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques to automate this process and to obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, with a classifier based on supervised learning techniques, such as the decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), while each node splits the space depending on the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Besides, the integration of this automatic seismic intensity methodology, with a low error probability, with a basic georeferencing system will allow the generation of preliminary isoseismal maps.
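The split criterion at the heart of C4.5 (and hence Weka's J48) is entropy-based information gain over a question's possible answers. A minimal sketch of that core quantity, assuming categorical poll answers and intensity-class labels (function names are illustrative; C4.5 additionally normalizes this into a gain ratio):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(answers, labels):
    """Entropy reduction from splitting poll records on one question's
    answers: H(labels) minus the answer-weighted entropy of the subsets."""
    n = len(labels)
    groups = {}
    for a, y in zip(answers, labels):
        groups.setdefault(a, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

The tree builder picks, at each node, the question with the highest gain, then recurses on each answer subset.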
Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray
2015-04-15
Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the sizes of the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of differently sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded edge and transmission of the MLC. Results: Root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (
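The percentage RMSE figures quoted above compare two dose grids point by point. A minimal sketch of that comparison metric (the function name and the choice of normalizing by the reference maximum are assumptions; the paper does not state its exact normalization):

```python
from math import sqrt

def rmse_percent(dose_ref, dose_test, norm_dose=None):
    """Root-mean-square error between two flattened dose grids, expressed
    as a percentage of a normalization dose (default: max of the reference)."""
    if norm_dose is None:
        norm_dose = max(dose_ref)
    mse = sum((a - b) ** 2 for a, b in zip(dose_ref, dose_test)) / len(dose_ref)
    return 100.0 * sqrt(mse) / norm_dose
```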
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
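Step (2) of the approach above, continuous movement/deformation of the tetrahedral model by interpolating deformation vector fields between 4D CT phases, can be sketched with per-vertex linear interpolation. This is a simplified illustration (function names are hypothetical, and the actual implementation in Geant4 would operate on its own mesh structures):

```python
def interpolate_dvf(dvf_a, dvf_b, t):
    """Linearly interpolate per-vertex displacement vectors between two
    4D CT phases; t = 0 reproduces phase A, t = 1 reproduces phase B."""
    return [tuple((1.0 - t) * ua + t * ub for ua, ub in zip(va, vb))
            for va, vb in zip(dvf_a, dvf_b)]

def deform_vertices(vertices, dvf):
    """Move tetrahedral-mesh vertex positions by a displacement field."""
    return [tuple(x + u for x, u in zip(v, d)) for v, d in zip(vertices, dvf)]
```

Sampling t continuously during particle transport is what lets radiation be tracked through the moving, deforming patient model rather than through a fixed set of voxel phantoms.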
Vasilkov, Alexander P; Herman, Jay R; Ahmad, Ziauddin; Kahru, Mati; Mitchell, B Greg
2005-05-10
Quantitative assessment of the UV effects on aquatic ecosystems requires an estimate of the in-water radiation field. Actual ocean UV reflectances are needed for improving the total ozone retrievals from the total ozone mapping spectrometer (TOMS) and the ozone monitoring instrument (OMI) flown on NASA's Aura satellite. The estimate of underwater UV radiation can be done on the basis of measurements from the TOMS/OMI and full models of radiative transfer (RT) in the atmosphere-ocean system. The Hydrolight code, modified for extension to the UV, is used for the generation of look-up tables for in-water irradiances. A look-up table for surface radiances generated with a full RT code is input for the Hydrolight simulations. A model of seawater inherent optical properties (IOPs) is an extension of the Case 1 water model to the UV. A new element of the IOP model is parameterization of particulate matter absorption based on recent in situ data. A chlorophyll product from ocean color sensors is input for the IOP model. Verification of the in-water computational scheme shows that the calculated diffuse attenuation coefficient Kd is in good agreement with the measured Kd.
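The diffuse attenuation coefficient Kd validated at the end of the abstract is defined through the exponential decay of downwelling irradiance with depth, E(z) = E(0)·exp(−Kd·z) for a depth-constant Kd. A minimal sketch of that relation and its inversion from a two-depth profile (function names are illustrative):

```python
from math import exp, log

def irradiance_at_depth(e_surface, kd, depth_m):
    """Downwelling irradiance at depth z (m) for a diffuse attenuation
    coefficient Kd (1/m): E(z) = E(0) * exp(-Kd * z)."""
    return e_surface * exp(-kd * depth_m)

def kd_from_two_depths(e_z1, e_z2, z1, z2):
    """Recover a depth-averaged Kd from irradiance at two depths (z2 > z1)."""
    return log(e_z1 / e_z2) / (z2 - z1)
```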
Zwart, Mark P; Tromas, Nicolas; Elena, Santiago F
2013-01-01
The cellular multiplicity of infection (MOI) is a key parameter for describing the interactions between virions and cells, predicting the dynamics of mixed-genotype infections, and understanding virus evolution. Two recent studies have reported in vivo MOI estimates for Tobacco mosaic virus (TMV) and Cauliflower mosaic virus (CaMV), using sophisticated approaches to measure the distribution of two virus variants over host cells. Although the experimental approaches were similar, the studies employed different definitions of MOI and estimation methods. Here, new model-selection-based methods for calculating MOI were developed. Seven alternative models for predicting MOI were formulated that incorporate an increasing number of parameters. For both datasets the best-supported model included spatial segregation of virus variants over time, and to a lesser extent aggregation of virus-infected cells was also implicated. Three methods for MOI estimation were then compared: the two previously reported methods and the best-supported model. For CaMV data, all three methods gave comparable results. For TMV data, the previously reported methods both predicted low MOI values (range: 1.04-1.23) over time, whereas the best-supported model predicted a wider range of MOI values (range: 1.01-2.10) and an increase in MOI over time. Model selection can therefore identify suitable alternative MOI models and suggest key mechanisms affecting the frequency of coinfected cells. For the TMV data, this leads to appreciable differences in estimated MOI values.
NASA Astrophysics Data System (ADS)
Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.
2017-03-01
The oxide-ceramic coating with copper inclusions was synthesized by the method of plasma electrolytic oxidation (PEO). Calculations of the Gibbs energies of reactions between the plasma-channel elements and the inclusions of copper and copper oxide were carried out. Two methods of forming copper-containing oxide-ceramic coatings on an aluminum base in electrolytic plasma were established: the first introduces copper into the aluminum matrix, the second introduces copper oxide. In the first case, the plasma channel does not react with the copper during the synthesis of the oxide-ceramic coating, and the copper is incorporated into the coating unchanged; in the second case, the copper oxide is reduced through interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray and X-ray microelement analysis. Inclusions of copper, CuAl2, and Cu9Al4 were found in the oxide-ceramic coatings. It was established that, alongside the oxidation reaction, an aluminothermic reduction of the metal also occurs in the spark plasma channels, which allows the oxide-ceramic coating to be doped with any metal whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.
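The Gibbs-energy screening described above follows Hess's law: ΔrG = Σν·ΔfG(products) − Σν·ΔfG(reactants), with a negative result indicating a thermodynamically favorable reaction such as the aluminothermic reduction of copper oxide. A minimal sketch (function name is illustrative; the formation energies in the test are approximate room-temperature textbook values, not the paper's data):

```python
def reaction_gibbs(delta_g_formation, reactants, products):
    """Standard Gibbs energy of reaction from formation energies (kJ/mol):
    dG_r = sum(nu * dG_f, products) - sum(nu * dG_f, reactants).
    `reactants`/`products` map species name -> stoichiometric coefficient."""
    def side_sum(side):
        return sum(nu * delta_g_formation[sp] for sp, nu in side.items())
    return side_sum(products) - side_sum(reactants)
```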
Balakrishnan, C; Subha, L; Neelakantan, M A; Mariappan, S S
2015-11-05
A Schiff base (L) containing propargyl arms was synthesized by the condensation of 1-[2-hydroxy-4-(prop-2-yn-1-yloxy)phenyl]ethanone with trans-1,2-diaminocyclohexane. The structure of L was characterized by IR, (1)H NMR, (13)C NMR, and UV-Vis spectroscopy and by single-crystal X-ray diffraction analysis. The UV-Visible spectral behavior of L in different solvents exhibits positive solvatochromism. A density functional calculation of L in the gas phase was performed using the DFT (B3LYP) method with the 6-31G basis set. The computed vibrational frequencies and NMR signals of L were compared with the experimental data. A tautomeric stability study inferred that the enol-imine form is more stable than the keto-amine form. The charge delocalization was analyzed using natural bond orbital (NBO) analysis. Electronic absorption and emission spectral studies were used to study the binding of L with CT-DNA. Molecular docking was performed to identify the interaction of L with A-DNA and B-DNA.
NASA Astrophysics Data System (ADS)
Kishida, Ryo; Kasai, Hideaki; Meñez Aspera, Susan; Lacdao Arevalo, Ryan; Nakanishi, Hiroshi
2017-02-01
Using density functional theory-based first-principles calculations, we investigated the changes in the energetics and electronic structures of rhododendrol (RD)-quinone during the initial step of two important reactions, viz., cyclization and thiol binding, to gain significant insight into the mechanism underlying its cytotoxic effects. We found that RD-quinone in the electroneutral structure cannot undergo cyclization, indicating a slow cyclization of RD-quinone at neutral pH. Furthermore, using the methane thiolate ion as a model thiol, we found that the oxidized form of the cyclized RD-quinone, namely RD-cyclic quinone, exhibits a reduced binding energy for thiols. However, this reduction in binding energy is clearly smaller than in the case of dopaquinone, the molecule originally involved in melanin synthesis. This study clearly shows that RD-quinone has a stronger preference for thiol binding over cyclization than dopaquinone. Considering that thiol binding has been reported to induce cytotoxic effects in various ways, this preference is an important chemical property for the cytotoxicity caused by RD.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual measurements performed by an expert.
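Once features are tracked, the rotation rate reduces to the slope of feature longitude versus time, with the apparent (synodic) rate converted to sidereal by adding Earth's mean orbital motion (about 0.9856 deg/day). A minimal sketch of that final step (function names are hypothetical; the PSO-Snake tracking itself is not reproduced here):

```python
EARTH_ORBITAL_RATE = 0.9856  # deg/day, Earth's mean orbital motion

def synodic_rate(times_days, longitudes_deg):
    """Least-squares slope of apparent (synodic) feature longitude vs time."""
    n = len(times_days)
    mt = sum(times_days) / n
    ml = sum(longitudes_deg) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times_days, longitudes_deg))
    den = sum((t - mt) ** 2 for t in times_days)
    return num / den

def synodic_to_sidereal(omega_synodic_deg_per_day):
    """Convert a synodic rotation rate to sidereal by adding Earth's motion."""
    return omega_synodic_deg_per_day + EARTH_ORBITAL_RATE
```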
NASA Astrophysics Data System (ADS)
Pongracz, R.; Bartholy, J.; Lelovics, E.; Dezso, Zs.
2010-09-01
Human settlements (especially large urban areas) significantly modify the environment. Atmospheric composition near urban agglomerations is highly affected, mainly due to industrial activity and road traffic. Urban smog events are common characteristics of large, very populated cities. Furthermore, artificial covers (i.e., concrete, asphalt) considerably modify the energy budget of urban regions, and thus, local climatic conditions. One of the most often analyzed phenomena related to cities is the urban heat island (UHI) effect. In this poster, UHI effects calculated from ground-based air temperature observations and remotely sensed surface temperature measurements are analyzed and compared for Budapest (the capital of Hungary, with about 1.7 million inhabitants) for the period 2001-2009. Hourly recorded air temperature observations are available from four climatological stations of the Hungarian Meteorological Service. Remotely sensed surface temperature data are available from the measurements of the sensor MODIS (Moderate Resolution Imaging Spectroradiometer), which is one of the sensors on board the satellites Terra and Aqua. They were launched to polar orbit as part of NASA's Earth Observing System in December 1999 and May 2002, respectively. In the framework of our analysis, monthly and seasonal mean values for day-time (morning and afternoon) and night-time (late evening and before dawn) are evaluated. Furthermore, distributions of temperature values are analyzed on a seasonal scale.
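The core UHI quantity compared in such studies, the urban-minus-rural temperature difference, can be sketched in a few lines. All values below are hypothetical illustrative data, not the Budapest observations from the poster.

```python
def uhi_intensity(urban_temps, rural_temps):
    """Mean urban-minus-rural temperature difference (K) over paired observations."""
    diffs = [u - r for u, r in zip(urban_temps, rural_temps)]
    return sum(diffs) / len(diffs)

# Hypothetical late-evening air temperatures (deg C) for one week,
# paired by observation time: an urban station vs. a rural reference.
urban = [18.2, 17.9, 19.1, 18.5, 17.4, 18.8, 19.0]
rural = [15.1, 14.8, 16.0, 15.6, 14.2, 15.9, 16.1]
uhi = uhi_intensity(urban, rural)   # roughly a 3 K mean heat-island intensity
```

In practice the same difference can be formed per pixel from MODIS surface temperatures, then averaged into the monthly and seasonal day-time/night-time means described above.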
Tscherbul, T V; Dalgarno, A
2010-11-14
An efficient method is presented for rigorous quantum calculations of atom-molecule and molecule-molecule collisions in a magnetic field. The method is based on the expansion of the wave function of the collision complex in basis functions with well-defined total angular momentum in the body-fixed coordinate frame. We outline the general theory of the method for collisions of diatomic molecules in the ²Σ and ³Σ electronic states with structureless atoms and with unlike ²Σ and ³Σ molecules. The cross sections for elastic scattering and Zeeman relaxation in low-temperature collisions of CaH(²Σ⁺) and NH(³Σ⁻) molecules with ³He atoms converge quickly with respect to the number of total angular momentum states included in the basis set, leading to a dramatic (>10-fold) enhancement in computational efficiency compared to the previously used methods [A. Volpi and J. L. Bohn, Phys. Rev. A 65, 052712 (2002); R. V. Krems and A. Dalgarno, J. Chem. Phys. 120, 2296 (2004)]. Our approach is thus well suited for theoretical studies of strongly anisotropic molecular collisions in the presence of external electromagnetic fields.
NASA Astrophysics Data System (ADS)
Koudil, Z.; Ikkene, R.; Mouzali, M.
2013-11-01
Polymer quenchants are becoming increasingly popular as substitutes for traditional quenching media in hardening metallic alloys. Water-soluble organic polymers offer a number of environmental, economic, and technical advantages, as well as eliminating the quench-oil fire hazard. Close control of polymer quenchant solutions is essential for their successful application, in order to avoid structural defects in steels such as shrinkage cracks and distortion. The aim of the present paper is to evaluate and optimize the experimental parameters of the polymer quenching bath that give the best quenching behavior and a homogeneous microstructure in the final workpiece. This study has been carried out on a water-soluble polymer based on poly(N-vinyl-2-pyrrolidone) PVP K30, which does not exhibit inverse solubility in water. The studied parameters include polymer concentration, bath temperature, and agitation speed. Cooling power and hardening performance have been measured with the IVF SmartQuench apparatus, using the standard ISO Inconel-600 alloy probe. An original numerical evaluation method has been introduced in the computation software SQ Integra. The heat transfer coefficients were used as input data for calculation of the microstructural constituents and the hardness profile of a cylindrical sample.
Eça, L.; Hoekstra, M.
2014-04-01
This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included.
Highlights:
• Estimation of the numerical uncertainty of any integral or local flow quantity.
• Least-squares fits to power series expansions to handle noisy data.
• Excellent results obtained for manufactured solutions.
• Consistent results obtained for practical CFD calculations.
• Reduces to the well-known Grid Convergence Index for well-behaved data sets.
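The basic fitting step of such procedures, a least-squares fit of grid results to a single power-series term phi(h) = phi0 + alpha*h**p, can be sketched as follows. This is a simplified illustration of the general technique, not the authors' four-expansion procedure or their safety-factor logic; the scan range for p and the synthetic data are assumptions.

```python
import numpy as np

def fit_observed_order(h, phi, p_grid=np.linspace(0.5, 4.0, 3501)):
    """Fit phi(h) = phi0 + alpha * h**p in the least-squares sense.

    For each trial order p the model is linear in (phi0, alpha), so it is
    solved with an ordinary linear least-squares fit; the p giving the
    smallest residual is taken as the observed order of grid convergence,
    and phi0 is the extrapolated (h -> 0) solution.
    """
    h, phi = np.asarray(h, float), np.asarray(phi, float)
    best = None
    for p in p_grid:
        A = np.column_stack([np.ones_like(h), h ** p])
        coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
        r = float(np.sum((A @ coef - phi) ** 2))
        if best is None or r < best[0]:
            best = (r, p, coef[0], coef[1])
    _, p, phi0, alpha = best
    return phi0, alpha, p

# Synthetic second-order-accurate data on four systematically refined grids:
# phi = 1.0 + 0.5 * h**2, so the fit should recover phi0 ~ 1.0 and p ~ 2.0.
h = np.array([1.0, 0.5, 0.25, 0.125])
phi = 1.0 + 0.5 * h ** 2
phi0, alpha, p = fit_observed_order(h, phi)
```

The error estimate for the finest grid would then be |phi(h_min) - phi0|, which the paper's procedure multiplies by a safety factor depending on p and the fit's standard deviation.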
Towbin, Alexander J; Hawkins, C Matthew
2017-03-29
While medical calculators are common, they are infrequently used in day-to-day radiology practice. We hypothesized that a calculator coupled with a structured report generator would decrease the time required to interpret and dictate a study, in addition to decreasing the number of errors in interpretation. A web-based application was created to