Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach
NASA Astrophysics Data System (ADS)
Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan
2017-11-01
The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation onto the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% for each combination of levels of theory.
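For readers new to the method, the subtractive two-layer ONIOM extrapolation that these QM:QM schemes build on can be sketched in a few lines; the function name and the energy values are illustrative only, not taken from the paper.

```python
def oniom2_energy(e_high_model, e_low_model, e_low_real):
    """Two-layer subtractive ONIOM extrapolation:

        E(ONIOM2) = E(high, model) + E(low, real) - E(low, model)

    The low-level energy of the model system is subtracted so the model
    region is not double-counted; only the small model system ever sees
    the expensive high-level method."""
    return e_high_model + e_low_real - e_low_model

# Illustrative total energies in hartree (hypothetical numbers):
e = oniom2_energy(e_high_model=-115.432,
                  e_low_model=-115.021,
                  e_low_real=-540.618)
print(round(e, 3))  # -541.029
```

The same bookkeeping generalizes to three layers by applying the high:low subtraction recursively.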
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanematsu, Yusuke; Tachikawa, Masanori
2014-11-14
Multicomponent quantum mechanical (MC-QM) calculation has been extended with the ONIOM (our own N-layered integrated molecular orbital + molecular mechanics) scheme [ONIOM(MC-QM:MM)] to take into account both the nuclear quantum effect and the surrounding environment effect. The authors have demonstrated the first implementation and application of the ONIOM(MC-QM:MM) method for the analysis of the geometry and the isotope shift in the hydrogen-bonding center of photoactive yellow protein. An ONIOM(MC-QM:MM) calculation for a model with deprotonated Arg52 reproduced the elongation of the O–H bond of Glu46 observed by neutron diffraction crystallography. Among the unique isotope shifts under different conditions, the model with protonated Arg52 including the solvent effect provided the best agreement with the corresponding experimental values from liquid NMR measurement. Our results imply that ONIOM(MC-QM:MM) can distinguish the local environment around hydrogen bonds in a biomolecule.
Prajongtat, Pongthep; Phromyothin, Darinee Sae-Tang; Hannongbua, Supa
2013-08-01
The interactions between oxaloacetic acid (OAA) and the phosphoenolpyruvate carboxykinase (PEPCK) binding pocket in the presence and absence of hydrazine were investigated using quantum chemical calculations based on the two-layered ONIOM (ONIOM2) approach. The complexes were partially optimized by the ONIOM2 (B3LYP/6-31G(d):PM6) method, while the interaction energies between OAA and individual residues surrounding the pocket were computed at the MP2/6-31G(d,p) level of theory. The calculated interaction energies (INT) indicated that Arg87, Gly237, Ser286, and Arg405 are key residues for binding to OAA, with INT values of -1.93, -2.06, -2.47, and -3.16 kcal mol(-1), respectively. The interactions are mainly due to the formation of hydrogen bonds with OAA. Moreover, ONIOM2 (B3LYP/6-31G(d):PM6) calculations on the PEPCKHS complex revealed two proton transfers: the first from the carboxylic group of OAA to hydrazine, and the second from Asp311 to Lys244. These reactions strengthen the binding of OAA to the pocket via electrostatic interactions. The orientations of Lys243, Lys244, His264, Asp311, Phe333, and Arg405 deviated considerably after hydrazine incorporation. These findings indicate that hydrazine not only changes the conformation of the binding pocket but is also tightly bound to OAA, resulting in its conformational change in the pocket. An understanding of such interactions can be useful for the design of hydrazine-based inhibitors as anticachexia agents.
Asada, Naoya; Fedorov, Dmitri G.; Kitaura, Kazuo; Nakanishi, Isao; Merz, Kenneth M.
2012-01-01
We propose an approach based on the overlapping multicenter ONIOM to evaluate intermolecular interaction energies in large systems and demonstrate its accuracy on several representative systems in the complete basis set (CBS) limit at the MP2 and CCSD(T) levels of theory. In the application to the intermolecular interaction energy between the insulin dimer and 4′-hydroxyacetanilide at the MP2/CBS level, we use the fragment molecular orbital method for the calculation of the entire complex, which is assigned to the lowest layer in a three-layer ONIOM. The developed method is shown to be efficient and accurate in the evaluation of protein-ligand interaction energies.
Ab initio ONIOM-molecular dynamics (MD) study on the deamination reaction by cytidine deaminase.
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-08-23
We applied the ONIOM-molecular dynamics (MD) method to the hydrolytic deamination of cytidine by cytidine deaminase, which is an essential step of the activation process of the anticancer drug inside the human body. The direct MD simulations were performed for the realistic model of cytidine deaminase by calculating the energy and its gradient by the ab initio ONIOM method on the fly. The ONIOM-MD calculations including the thermal motion show that the neighboring amino acid residue is an important factor of the environmental effects and significantly affects not only the geometry and energy of the substrate trapped in the pocket of the active site but also the elementary step of the catalytic reaction. We successfully simulate the second half of the catalytic cycle, which has been considered to involve the rate-determining step, and reveal that the rate-determining step is the release of the NH3 molecule.
Accurate prediction of bond dissociation energies of large n-alkanes using ONIOM-CCSD(T)/CBS methods
NASA Astrophysics Data System (ADS)
Wu, Junjun; Ning, Hongbo; Ma, Liuhao; Ren, Wei
2018-05-01
Accurate determination of the bond dissociation energies (BDEs) of large alkanes is desirable but practically impossible due to the expensive cost of high-level ab initio methods. We developed a two-layer ONIOM-CCSD(T)/CBS method that treats the high layer with the CCSD(T) method and the low layer with a DFT method. The accuracy of this method was validated by comparing the calculated BDEs of n-hexane with those obtained at the CCSD(T)-F12b/aug-cc-pVTZ level of theory. On this basis, the C-C BDEs of C6-C20 n-alkanes were calculated systematically using the ONIOM[CCSD(T)/CBS(D-T):M06-2X/6-311++G(d,p)] method, showing good agreement with the data available in the literature.
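The BDE itself is just the energy of homolysis, E(R1·) + E(R2·) − E(R1−R2); a minimal sketch of the bookkeeping follows, with hypothetical (not published) total energies in hartree.

```python
HARTREE_TO_KCAL = 627.509  # conversion factor, kcal/mol per hartree

def bond_dissociation_energy(e_parent, e_frag1, e_frag2):
    """Homolytic BDE in kcal/mol for R1-R2 -> R1* + R2*, from total
    electronic energies in hartree (e.g. ONIOM-CCSD(T)/CBS single points).
    Zero-point and thermal corrections are omitted in this sketch."""
    return (e_frag1 + e_frag2 - e_parent) * HARTREE_TO_KCAL

# Hypothetical energies for a central C-C homolysis:
bde = bond_dissociation_energy(e_parent=-236.900,
                               e_frag1=-118.200,
                               e_frag2=-118.560)
print(round(bde, 1))  # 87.9
```

In an ONIOM treatment each of the three energies would itself be a subtractive high:low extrapolation, with the breaking bond placed in the high layer.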
Ab Initio ONIOM-Molecular Dynamics (MD) Study on the Deamination Reaction by Cytidine Deaminase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-08-23
We applied the ONIOM-molecular dynamics (MD) method to the hydrolytic deamination of cytidine by cytidine deaminase, which is an essential step of the activation process of the anticancer drug inside the human body. The direct MD simulations were performed for the realistic model of cytidine deaminase, calculating the energy and its gradient by the ab initio ONIOM method on the fly. The ONIOM-MD calculations including the thermal motion show that the neighboring amino acid residue is an important factor of the environmental effects and significantly affects not only the geometry and energy of the substrate trapped in the pocket of the active site but also the elementary step of the catalytic reaction. We successfully simulate the second half of the catalytic cycle, which has been considered to involve the rate-determining step, and reveal that the rate-determining step is the release of the NH3 molecule. TM and MA were supported in part by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy (DOE). Battelle operates Pacific Northwest National Laboratory for DOE.
The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-03-22
We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes, taking into account their thermal motion. In this method, the direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site. TM and MA were supported in part by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy (DOE). Battelle operates Pacific Northwest National Laboratory for DOE.
Alzate-Morales, Jans H; Caballero, Julio; Vergara Jague, Ariela; González Nilo, Fernando D
2009-04-01
N2- and O6-substituted guanine derivatives are well known as potent and selective CDK2 inhibitors. We assessed the ability of molecular docking, using the program AutoDock3, combined with the hybrid ONIOM method to obtain quantum chemical descriptors that successfully rank these inhibitors. The quantum chemical descriptors were used to explain the affinity of the series studied for a model of the CDK2 binding site. The initial structures were obtained from docking studies, and the ONIOM method was applied with only a single-point energy calculation on the protein-ligand structure. We obtained a good correlation model between the ONIOM-derived quantum chemical descriptor "H-bond interaction energy" and the experimental biological activity, with a correlation coefficient of R = 0.80 for 75 compounds. To the best of our knowledge, this is the first time that both methodologies have been used in conjunction to obtain a correlation model. The model suggests that electrostatic interactions are the principal driving force in this protein-ligand interaction. Overall, the approach was successful for the cases considered and suggests that it could be useful for the design of inhibitors in the lead optimization phase of drug discovery.
Zhang, Lidong; Meng, Qinghui; Chi, Yicheng; Zhang, Peng
2018-05-31
A two-layer ONIOM[QCISD(T)/CBS:DFT] method was proposed for high-level single-point energy calculations of large biodiesel molecules and was validated for the hydrogen abstraction reactions of unsaturated methyl esters that are important components of real biodiesel. The reactions under investigation include all the reactions on the potential energy surface of CnH2n-1COOCH3 (n = 2-5, 17) + H: the hydrogen abstraction, hydrogen addition, isomerization (intramolecular hydrogen shift), and β-scission reactions. By virtue of the introduced concept of a chemically active center, a unified specification of the chemically active portion for the ONIOM (ONIOM = our own n-layered integrated molecular orbital and molecular mechanics) method was proposed to account for the additional influence of the C═C double bond. The energy barriers and heats of reaction predicted by the ONIOM method are in very good agreement with those obtained with the widely accepted high-level QCISD(T)/CBS theory, with deviations of less than 0.15 kcal/mol for almost all the reaction pathways under investigation. The method provides a computationally accurate and affordable approach for combustion chemists to high-level theoretical chemical kinetics of large biodiesel molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Re, Suyong; Morokuma, Keiji
2001-07-07
The reliability of the two-layered ONIOM (our own N-layered molecular orbital + molecular mechanics) method was examined for the investigation of the SN2 reaction pathway (reactants, reactant complexes, transition states, product complexes, and products) between CH3Cl and an OH- ion in microsolvation clusters with one or two water molecules. Only the solute part, CH3Cl and OH-, was treated at a high level of molecular orbital (MO) theory, and all solvent water molecules were treated at a low MO level. The ONIOM calculation with MP2 (Møller-Plesset second-order perturbation theory)/aug-cc-pVDZ (augmented correlation-consistent polarized valence double-zeta basis set) as the high level coupled with B3LYP (Becke 3-parameter Lee-Yang-Parr)/6-31+G(d) as the low level was found to reasonably reproduce the "target" geometries at the MP2/aug-cc-pVDZ level of theory. The energetics can be further improved to an average absolute error of <1.0 kcal/mol per solvent water molecule relative to the target CCSD(T) (coupled cluster singles and doubles with perturbative triples)/aug-cc-pVDZ level by using the ONIOM method in which the high level was CCSD(T)/aug-cc-pVDZ and the low level was MP2/aug-cc-pVDZ. The present results indicate that the ONIOM method would be a powerful tool for obtaining reliable geometries and energetics for chemical reactions in larger microsolvated clusters at a fraction of the cost of the full high-level calculation, when an appropriate combination of high- and low-level methods is used. The importance of a careful test is emphasized.
Liang, Y H; Chen, F E
2007-08-01
Theoretical investigations of the interaction between dapivirine and the HIV-1 RT binding site have been performed with the ONIOM2 (B3LYP/6-31G(d,p):PM3) and B3LYP/6-31G(d,p) methods. The results derived from this study indicate that the inhibitor dapivirine forms two hydrogen bonds with Lys101 and exhibits strong π-π stacking or H…π interactions with Tyr181 and Tyr188. These interactions play a vital role in stabilizing the NNIBP/dapivirine complex. Additionally, the predicted binding energy of the BBF optimized structure for this complex system is -18.20 kcal/mol.
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In MSM, the surrounding part is instead described as the weighted sum of multiple surrounding structures that are independently optimized. Geometric displacements of the reaction-center atoms are then performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or in an enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths at comparable computational cost. © 2017 Wiley Periodicals, Inc.
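The mean-field displacement step described above can be sketched as follows; the Boltzmann weighting and all numbers are assumptions for illustration, not the weighting scheme actually used in the MSM paper.

```python
import math

KT_298 = 0.593  # k_B * T in kcal/mol near room temperature

def boltzmann_weights(energies, kT=KT_298):
    """Normalized weights for independently optimized surrounding-part
    structures, from their relative energies in kcal/mol."""
    e_min = min(energies)
    raw = [math.exp(-(e - e_min) / kT) for e in energies]
    z = sum(raw)
    return [w / z for w in raw]

def mean_field_gradient(gradients, weights):
    """Weighted sum of reaction-center gradients, one gradient vector per
    surrounding structure: the mean field the reaction center moves in."""
    n = len(gradients[0])
    return [sum(w * g[i] for w, g in zip(weights, gradients))
            for i in range(n)]

# Two hypothetical surrounding structures, 0.0 and 0.41 kcal/mol apart:
w = boltzmann_weights([0.0, 0.41])
g = mean_field_gradient([[1.0, -0.5], [0.6, -0.1]], w)
```

Each surrounding structure contributes in proportion to its weight, so low-energy environments dominate the displacement of the reaction-center atoms.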
Interfacial Reaction Studies Using ONIOM
NASA Technical Reports Server (NTRS)
Cardelino, Beatriz H.
2003-01-01
In this report, we focus on calculations of the energetics and chemical kinetics of heterogeneous reactions for organometallic vapor phase epitaxy (OMVPE). The work described in this report builds upon our own previous thermochemical and chemical kinetics studies. The first of these articles refers to the prediction of thermochemical properties, and the latter deals with the prediction of rate constants for gaseous homolytic dissociation reactions. The calculations of this investigation are at the microscopic level. The systems chosen consisted of a gallium nitride (GaN) substrate, with molecular nitrogen (N2) and ammonia (NH3) as adsorbates. The energetics of the adsorption and adsorbate dissociation processes were estimated, and reaction rate constants for the dissociation reactions of free and adsorbed molecules were predicted. The energetics of substrate decomposition was also computed. The ONIOM method, implemented in the Gaussian98 program, was used to perform the calculations. This approach was selected because it allows the system to be divided into two layers that can be treated at different levels of accuracy. The atoms of the substrate were modeled using molecular mechanics with universal force fields, whereas the adsorbed molecules were treated using quantum mechanics, based on density functional theory methods with B3LYP functionals and 6-311G(d,p) basis sets. Calculations for the substrate were performed on slabs of several unit cells in each direction. The N2 and NH3 adsorbates were attached at a central location on the Ga-lined surface.
Balamurugan, Kanagasabai; Baskar, Prathab; Kumar, Ravva Mahesh; Das, Sumitesh; Subramanian, Venkatesan
2014-11-28
The present work utilizes classical molecular dynamics simulations to investigate the covalent functionalization of carbon nanotubes (CNTs) and their interaction with ethylene glycol (EG) and water molecules. The MD simulations reveal the dispersion of functionalized carbon nanotubes and the prevention of aggregation in aqueous medium. Further, residue-wise radial distribution function (RRDF) and atomic radial distribution function (ARDF) calculations illustrate the extent of interaction of -OH and -COOH functionalized CNTs with water molecules and of the non-functionalized CNT surface with EG. As the number of functionalized nanotubes increases, an enhancement in the propensity for interaction with water molecules is observed, whereas the interaction with EG molecules shows the opposite trend. In addition, ONIOM (M06-2X/6-31+G**:AM1) calculations have been carried out on model systems to quantitatively determine the interaction energy (IE). These calculations show that the relative enhancement in the interaction of water molecules with functionalized CNTs is highly favorable compared to the interaction with EG.
Casadesús, Ricard; Moreno, Miquel; González-Lafont, Angels; Lluch, José M; Repasky, Matthew P
2004-01-15
In this article a wide variety of computational approaches (molecular mechanics force fields, semiempirical formalisms, and hybrid methods, namely ONIOM calculations) have been used to calculate the energy and geometry of the supramolecular system 2-(2'-hydroxyphenyl)-4-methyloxazole (HPMO) encapsulated in beta-cyclodextrin (beta-CD). The main objective of the present study has been to examine the performance of these computational methods in describing the short-range H...H intermolecular interactions between the guest (HPMO) and host (beta-CD) molecules. The analyzed molecular mechanics methods do not produce unphysically short H...H contacts, but their applicability to the study of supramolecular systems is rather limited. Among the semiempirical methods, MNDO is found to generate more reliable geometries than AM1, PM3, and the two recently developed schemes PDDG/MNDO and PDDG/PM3. MNDO results give only one slightly short H...H distance, whereas the NDDO formalisms with modifications of the Core Repulsion Function (CRF) via Gaussians exhibit a large number of short to very short and unphysical H...H intermolecular distances. In contrast, the PM5 method, the successor to PM3, gives very promising results. Our ONIOM calculations indicate that the unphysical optimized geometries from PM3 are retained when this semiempirical method is used as the low-level layer in a QM:QM formulation. On the other hand, ab initio methods with sufficiently good basis sets, at least for the high-level layer in a hybrid ONIOM calculation, behave well, but they may be too expensive in practice for most supramolecular chemistry applications. Finally, the performance of the evaluated computational methods has also been tested by evaluating the energetic difference between the two most stable conformations of the host (beta-CD)-guest (HPMO) system. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 25: 99-105, 2004
NASA Technical Reports Server (NTRS)
Park, Jin-Young; Woon, David E.
2004-01-01
Density functional theory (DFT) calculations of cyanate (OCN(-)) charge-transfer complexes were performed to model the "XCN" feature observed in interstellar icy grain mantles. OCN(-) charge-transfer complexes were formed from precursor combinations of HNCO or HOCN with either NH3 or H2O. Three different solvation strategies for realistically modeling the ice matrix environment were explored, including (1) continuum solvation, (2) pure DFT cluster calculations, and (3) an ONIOM DFT/PM3 cluster calculation. The model complexes were evaluated by their ability to reproduce seven spectroscopic measurements associated with XCN: the band origin of the OCN(-) asymmetric stretching mode, shifts in that frequency due to isotopic substitutions of C, N, O, and H, plus two weak features. The continuum solvent field method produced results consistent with some of the experimental data but failed to account for other behavior due to its limited capacity to describe molecular interactions with solvent. DFT cluster calculations successfully reproduced the available spectroscopic measurements very well. In particular, the deuterium shift showed excellent agreement in complexes where OCN(-) was fully solvated. Detailed studies of representative complexes including from two to twelve water molecules allowed the exploration of various possible solvation structures and provided insights into solvation trends. Moreover, complexes arising from cyanic or isocyanic acid in pure water suggested an alternative mechanism for the formation of OCN(-) charge-transfer complexes without the need for a strong base such as NH3 to be present. An extended ONIOM (B3LYP/PM3) cluster calculation was also performed to assess the impact of a more realistic environment on HNCO dissociation in pure water.
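The isotope shifts used as benchmarks here follow, in the harmonic approximation, from the 1/sqrt(mu) mass scaling of vibrational frequencies. A deliberately crude sketch: treating the C-N stretch of OCN- as an isolated diatomic (a rough assumption, since the real mode mixes all three atoms) with a representative band origin near 2170 cm-1.

```python
import math

def harmonic_isotope_shift(omega, mu_light, mu_heavy):
    """Red shift (cm^-1) of a harmonic frequency upon isotopic
    substitution: omega scales as 1/sqrt(mu), so the shift is
    omega * (1 - sqrt(mu_light / mu_heavy))."""
    return omega * (1.0 - math.sqrt(mu_light / mu_heavy))

# Diatomic-style reduced masses (amu) for 12C14N vs 13C14N:
mu_12 = 12.0 * 14.0 / (12.0 + 14.0)
mu_13 = 13.0 * 14.0 / (13.0 + 14.0)
shift = harmonic_isotope_shift(2170.0, mu_12, mu_13)
print(round(shift, 1))  # ~45 cm^-1
```

The quantum-chemical calculations in the paper go well beyond this two-body estimate, which is shown only to make the mass dependence of the benchmark quantities concrete.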
Kazemi, Zahra; Rudbari, Hadi Amiri; Sahihi, Mehdi; Mirkhani, Valiollah; Moghadam, Majid; Tangestaninejad, Shahram; Mohammadpoor-Baltork, Iraj; Gharaghani, Sajjad
2016-09-01
Novel metal-based drug candidates, including VOL2, NiL2, CuL2 and PdL2, have been synthesized from the 2-hydroxy-1-allyliminomethyl-naphthalene ligand and characterized by means of elemental analysis (CHN) and FT-IR and UV-vis spectroscopies. In addition, (1)H and (13)C NMR techniques were employed for characterization of the PdL2 complex. The single-crystal X-ray diffraction technique was utilized to characterize the structures of the complexes. The Cu(II), Ni(II) and Pd(II) complexes show a square planar trans-coordination geometry, while in VOL2 the vanadium center has a distorted tetragonal pyramidal N2O3 coordination sphere. HSA binding was also determined using fluorescence quenching, UV-vis spectroscopy, and circular dichroism (CD) titration methods. The results revealed that the HSA binding affinity of the synthesized compounds follows the order PdL2 > CuL2 > VOL2 > NiL2, indicating the effect of the metal ion on the binding constant. The distance between these compounds and HSA was obtained based on Förster's theory of non-radiative energy transfer. Furthermore, computational methods including molecular docking and our Own N-layered Integrated molecular Orbital and molecular Mechanics (ONIOM) were employed to investigate the HSA binding of the compounds. Molecular docking calculations indicated the existence of hydrogen bonds between amino acid residues of HSA and all synthesized compounds; the formation of these hydrogen bonds stabilizes the HSA-compound systems. The ONIOM method was utilized to investigate the HSA binding of the compounds more precisely, with a molecular mechanics method (UFF) selected for the low layer and a semiempirical method (PM6) for the high layer. The results show that the structural parameters of the compounds change upon binding to HSA, indicating a strong interaction between the compounds and HSA; the value of the binding constant depends on the extent of the resultant changes. It should be mentioned that both theoretical methods ranked the Kb values in the same sequence, in good agreement with the experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nam, Pham Cam; Chandra, Asit K.; Nguyen, Minh Tho
2013-01-01
Integration of (RO)B3LYP/6-311++G(2df,2p) with the PM6 method in a two-layer ONIOM is found to produce reasonably accurate BDE(O-H)s of phenolic compounds. The chosen ONIOM model contains only the two atoms of the breaking bond as the core zone and is able to provide reliable evaluations of BDE(O-H) for phenols and tocopherol. The deviation of calculated values from experiment is ±(1-2) kcal/mol. The BDE(O-H)s of several curcuminoids and flavonoids extracted from ginger and tea are computed using the proposed model. The BDE(O-H) values of enol curcumin and epigallocatechin gallate are predicted to be 83.3 ± 2.0 and 76.0 ± 2.0 kcal/mol, respectively.
NASA Astrophysics Data System (ADS)
Kerdcharoen, Teerakiat; Morokuma, Keiji
2003-05-01
An extension of the ONIOM (Own N-layered Integrated molecular Orbital and molecular Mechanics) method [M. Svensson, S. Humbel, R. D. J. Froese, T. Matsubara, S. Sieber, and K. Morokuma, J. Phys. Chem. 100, 19357 (1996)] for simulation in the condensed phase, called ONIOM-XS (XS = eXtension to Solvation) [T. Kerdcharoen and K. Morokuma, Chem. Phys. Lett. 355, 257 (2002)], was applied to investigate the coordination of Ca2+ in liquid ammonia. A coordination number of 6 is found. Previous simulations based on a pair potential or a pair potential plus three-body correction gave values of 9 and 8.2, respectively. The new value is the same as the coordination number most frequently listed in the Cambridge Structural Database (CSD) and the Protein Data Bank (PDB). The N-Ca-N angular distribution reveals a near-octahedral coordination structure. Inclusion of many-body interactions (which amount to 25% of the pair interactions) in the potential energy surface is essential for obtaining a reasonable coordination number. Analyses of the metal coordination in water, a water-ammonia mixture, and proteins reveal that the cation/ammonia solution can be used to approximate the coordination environment in proteins.
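A coordination number like the 6 reported here is conventionally obtained by integrating the ion-solvent radial distribution function g(r) up to its first minimum. A generic sketch using the trapezoidal rule; the grid, density, and cutoff below are arbitrary test values, checked against the ideal-gas case g(r) = 1.

```python
import math

def coordination_number(r, g, rho, r_cut):
    """n(r_cut) = 4*pi*rho * integral_0^r_cut g(r) r^2 dr, evaluated
    with the trapezoidal rule. r in angstrom, rho in atoms/angstrom^3,
    r_cut usually the first minimum of g(r)."""
    acc = 0.0
    for i in range(1, len(r)):
        if r[i] > r_cut:
            break
        f0 = g[i - 1] * r[i - 1] ** 2
        f1 = g[i] * r[i] ** 2
        acc += 0.5 * (f0 + f1) * (r[i] - r[i - 1])
    return 4.0 * math.pi * rho * acc

# Sanity check: for g(r) = 1 the result is the particle count in a
# sphere, 4/3 * pi * rho * r_cut**3.
r = [i / 100 for i in range(301)]   # 0.00 ... 3.00 angstrom
g = [1.0] * len(r)
n = coordination_number(r, g, rho=0.1, r_cut=2.0)
print(round(n, 2))  # 3.35
```

With a simulated Ca-N g(r) and the number density of liquid ammonia, the same integral to the first minimum would yield the reported coordination number.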
ONIOM Investigation of the Second-Order Nonlinear Optical Responses of Fluorescent Proteins.
de Wergifosse, Marc; Botek, Edith; De Meulenaere, Evelien; Clays, Koen; Champagne, Benoît
2018-05-17
The first hyperpolarizability (β) of six fluorescent proteins (FPs), namely, enhanced green fluorescent protein, enhanced yellow fluorescent protein, SHardonnay, ZsYellow, DsRed, and mCherry, has been calculated to unravel the structure-property relationships on their second-order nonlinear optical properties, owing to their potential for multidimensional biomedical imaging. The ONIOM scheme has been employed and several of its refinements have been addressed to incorporate efficiently the effects of the microenvironment on the nonlinear optical responses of the FP chromophore that is embedded in a protective β-barrel protein cage. In the ONIOM scheme, the system is decomposed into several layers (here two) treated at different levels of approximation (method1/method2), from the most elaborated method (method1) for its core (called the high layer) to the most approximate one (method2) for the outer surrounding (called the low layer). We observe that a small high layer can already account for the variations of β as a function of the nature of the FP, provided the low layer is treated at an ab initio level to describe properly the effects of key H-bonds. Then, for semiquantitative reproduction of the experimental values obtained from hyper-Rayleigh scattering experiments, it is necessary to incorporate electron correlation as described at the second-order Møller-Plesset perturbation theory (MP2) level as well as implicit solvent effects accounted for using the polarizable continuum model (PCM). This led us to define the MP2/6-31+G(d):HF/6-31+G(d)/IEFPCM scheme as an efficient ONIOM approach and the MP2/6-31+G(d):HF/6-31G(d)/IEFPCM as a better compromise between accuracy and computational needs. Using these methods, we demonstrate that many parameters play a role on the β response of FPs, including the length of the π-conjugated segment, the variation of the bond length alternation, and the presence of π-stacking interactions. 
Given the small structural diversity of the FP chromophores, these results highlight the key role of the β-barrel and surrounding residues on β: not only can they locally break the noncentrosymmetry vital to a β response, but they can also impose geometrical constraints on the chromophore.
Roy, Dipankar; Pohl, Gabor; Ali-Torres, Jorge; Marianski, Mateusz; Dannenberg, J. J.
2012-01-01
We present a new classification of β-turns specific to antiparallel β-sheets based upon the topology of H-bond formation. This classification results from ONIOM calculations using B3LYP/D95** DFT and AM1 semiempirical calculations as the high and low levels, respectively. We chose acetyl(Ala)6NH2 as a model system as it is the simplest all-alanine system that can form all the H-bonds required for a β-turn in a sheet. Of the ten different conformations we have found, the most stable structures have C7 cyclic H-bonds in place of the C10 interactions specified in the classic definition. Also, the chiralities specified for the i+1st and i+2nd residues in the classic definition disappear when the structures are optimized using our techniques, as the energetic differences among the four diastereomers of each structure are not substantial for eight of the ten conformations.
NASA Astrophysics Data System (ADS)
Uma Maheswari, J.; Muthu, S.; Sundius, Tom
2015-02-01
The Fourier transform infrared, FT-Raman, UV and NMR spectra of Ternelin have been recorded and analyzed. Harmonic vibrational frequencies have been investigated using HF/6-31G(d,p) and B3LYP with the 6-31G(d,p) and LANL2DZ basis sets. The 1H and 13C nuclear magnetic resonance (NMR) chemical shifts of the molecule were calculated by the GIAO method. The polarizability (α) and first hyperpolarizability (β) values of the investigated molecule have been computed using DFT quantum mechanical calculations. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Electron density-based local reactivity descriptors such as Fukui functions were calculated to explain the chemical selectivity and reactivity sites in Ternelin. Finally, the calculated results were compared to the simulated infrared and Raman spectra of the title compound, which show good agreement with the observed spectra. Molecular docking studies of Ternelin in the active site have been carried out, and its reactivity with ONIOM was also investigated.
The tert-butyl cation on zeolite Y: A theoretical and experimental study
NASA Astrophysics Data System (ADS)
Rosenbach, Nilton, Jr.; dos Santos, Alex P. A.; Franco, Marcelo; Mota, Claudio J. A.
2010-01-01
The structure and energy of the tert-butyl cation on zeolite Y were calculated at the ONIOM(MP2(FULL)/6-31G(d,p):MNDO) level. The results indicate that the tert-butyl cation is a minimum and lies between 40 and 51 kJ mol-1 higher in energy than the tert-butoxide, depending on the level of calculation. Both species are stabilized through hydrogen-bonding interactions with the framework oxygen atoms. Experimental data on the nucleophilic substitution of tert-butyl chloride and bromide over NaY impregnated with NaCl or NaBr give additional support for the formation of the tert-butyl cation as a discrete intermediate on zeolite Y, in agreement with the calculations.
Liu, Benguo; Zeng, Jie; Chen, Chen; Liu, Yonglan; Ma, Hanjun; Mo, Haizhen; Liang, Guizhao
2016-03-01
Cyclodextrins (CDs) can be used to improve the solubility and stability of cinnamic acid derivatives (CAs). However, there has been no detailed report on how the substituent groups on the benzene ring affect the inclusion behavior between CAs and CDs in aqueous solution. Here, the interaction of β-CD with CAs, including caffeic acid, ferulic acid, and p-coumaric acid, in water was investigated by the phase-solubility method, UV, fluorescence, and (1)H NMR spectroscopy, together with ONIOM (our Own N-layer Integrated Orbital molecular Mechanics)-based QM/MM (Quantum Mechanics/Molecular Mechanics) calculations. Experimental results demonstrated that CAs form 1:1 stoichiometric inclusion complexes with β-CD through non-covalent interactions, and that the apparent stability constant was largest for caffeic acid (176 M(-1)), followed by p-coumaric acid (160 M(-1)) and ferulic acid (133 M(-1)). Moreover, our calculations reasonably illustrated the binding orientations of β-CD with CAs determined from the experimental observations.
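The apparent 1:1 stability constants quoted above come from phase-solubility analysis, which for an A_L-type diagram reduces to one line of arithmetic (the Higuchi-Connors relation). A minimal sketch follows; the slope and intrinsic solubility in the example are made-up placeholders, not values from the paper.

```python
def stability_constant_1_1(slope, s0):
    """Apparent 1:1 stability constant from an A_L-type phase-solubility
    diagram: K = slope / (S0 * (1 - slope)), where S0 is the intrinsic
    solubility of the guest (mol/L) and slope is the fitted linear slope."""
    if not 0.0 < slope < 1.0:
        raise ValueError("A 1:1 (A_L-type) diagram requires 0 < slope < 1")
    return slope / (s0 * (1.0 - slope))

# Placeholder inputs for illustration only (assumed, not from the paper):
K = stability_constant_1_1(slope=0.015, s0=8.8e-5)
print(f"K(1:1) = {K:.0f} M^-1")
```

With these assumed inputs the sketch yields a constant of the same order of magnitude (about 10^2 M^-1) as those reported for the CA/β-CD complexes.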
"Structure-making" ability of Na+ in dilute aqueous solution: an ONIOM-XS MD simulation study.
Sripa, Pattrawan; Tongraar, Anan; Kerdcharoen, Teerakiat
2013-02-28
An ONIOM-XS MD simulation has been performed to characterize the "structure-making" ability of Na(+) in dilute aqueous solution. The region of most interest, i.e., a sphere that includes Na(+) and its surrounding water molecules, was treated at the HF level using the LANL2DZ and DZP basis sets for the ion and the waters, respectively, whereas the rest of the system was described by classical pair potentials. Detailed analyses of the ONIOM-XS MD trajectories clearly show that Na(+) is able to order the structure of the waters in its surroundings, forming two prevalent species, Na(+)(H(2)O)(5) and Na(+)(H(2)O)(6). Interestingly, these 5-fold and 6-fold coordinated complexes convert back and forth with some degree of flexibility, leading to frequent rearrangements of the Na(+) hydrates as well as numerous attempts of inner-shell water molecules to interchange with waters in the outer region. Such a phenomenon clearly demonstrates the weak "structure-making" ability of Na(+) in aqueous solution.
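The 5-fold/6-fold species above are identified by a standard trajectory analysis: in each MD frame, count the water oxygens inside a first-shell cutoff of the ion and histogram the result. A minimal sketch, with illustrative coordinates and an assumed 3.2 Å cutoff (not values from the paper):

```python
import math
from collections import Counter

def coordination_numbers(frames, cutoff=3.2):
    """For each frame {'na': xyz, 'oxygens': [xyz, ...]}, count the oxygens
    within `cutoff` angstrom of the ion; return a histogram of counts."""
    counts = Counter()
    for frame in frames:
        n = sum(1 for o in frame["oxygens"]
                if math.dist(frame["na"], o) < cutoff)
        counts[n] += 1
    return counts

# Two toy frames: one 5-coordinate, one 6-coordinate (placeholder geometry).
frames = [
    {"na": (0, 0, 0), "oxygens": [(2.4, 0, 0), (0, 2.4, 0), (0, 0, 2.4),
                                  (-2.4, 0, 0), (0, -2.4, 0)]},
    {"na": (0, 0, 0), "oxygens": [(2.4, 0, 0), (0, 2.4, 0), (0, 0, 2.4),
                                  (-2.4, 0, 0), (0, -2.4, 0), (0, 0, -2.4)]},
]
print(coordination_numbers(frames))
```

Applied to a real ONIOM-XS trajectory, the relative weights of the 5 and 6 bins would quantify how often the two hydrate species interconvert.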
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2008-02-01
We applied the ONIOM-molecular dynamics (MD) method to cytosine deaminase to examine the environmental effects of the amino acid residues in the pocket of the active site on the substrate, taking account of their thermal motion. The ab initio ONIOM-MD simulations show that the substrate uracil is strongly perturbed by the amino acid residue Ile33, which sandwiches the uracil with His62, through steric contact due to the thermal motion. As a result, the magnitude of the thermal oscillation of the potential energy and structure of the substrate uracil significantly increases. TM and MA were partly supported by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy (DOE). Battelle operates Pacific Northwest National Laboratory for DOE.
2015-01-01
We present ONIOM calculations using B3LYP/D95(d,p) as the high level and AM1 as the medium level on parallel β-sheets containing four strands of Ac-AAAAAA-NH2 capped with either Ac-AAPAAA-NH2 or Ac-AAAPAA-NH2. Because Pro can form H-bonds from only one side of the peptide linkage (that containing the C=O H-bond acceptor), only one of the two Pro-containing strands can favorably add to the sheet on each side. Surprisingly, when the sheet is capped with AAPAAA-NH2 at one edge, the interaction between the cap and the sheet is slightly more stabilizing than that of another all-Ala strand. Breaking down the interaction enthalpies into H-bonding and distortion energies shows the favorable interaction to be due to lower distortion energies in both the strand and the four-stranded sheet. Because attachment of another strand to the other side of the capping (Pro-containing) strand would be inhibited, we suggest the possible use of Pro residues in peptides designed to arrest the growth of many amyloids. PMID:24422496
Insights into the Nature of Anesthetic-Protein Interactions: An ONIOM Study.
Qiu, Ling; Lin, Jianguo; Bertaccini, Edward J
2015-10-08
Anesthetics have been employed widely to relieve surgical suffering, but their mechanism of action is not yet clear. For over a century, anesthesia was thought to act via lipid bilayer interactions. In the present work, a rigorous three-layer ONIOM(M06-2X/6-31+G*:PM6:AMBER) method was utilized to investigate the nature of the interactions between several anesthetics and actual protein binding sites. According to the calculated structural features, interaction energies, atomic charges, and electrostatic potential surfaces, the amphiphilic nature of anesthetic-protein interactions was demonstrated for both inhalational and injectable anesthetics. The existence of hydrogen- and halogen-bonding interactions between anesthetics and proteins was clearly identified, and these interactions serve to assist ligand recognition and binding by the protein. Within all complexes of inhalational or injectable anesthetics, polarization effects play a dominant role over steric effects and induce a significant asymmetry in the otherwise symmetric atomic charge distributions of the free ligands in vacuo. This study provides new insight into the mechanism of action of general anesthetics in a more rigorous way than previously described. Future rational design of safer anesthetics for an aging and more physiologically vulnerable population will be predicated on this greater understanding of such specific interactions.
High Coverages of Hydrogen on a (10,0) Carbon Nanotube
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James (Technical Monitor)
2001-01-01
The binding energy of H to a (10,0) carbon nanotube is calculated at 24, 50, and 100% coverage. Several different bonding configurations are considered for the 50% coverage case. Using the ONIOM (our own n-layered integrated molecular orbital and molecular mechanics) approach, the average C-H bond energy for the most stable 50% coverage and for the 100% coverage are 57.3 and 38.6 kcal/mol, respectively. Considering the size of the bond energy of H2, these values suggest that it will be difficult to achieve 100% atomic H coverage on a (10,0) nanotube.
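The average C-H bond energy quoted above is the kind of quantity obtained by simple bookkeeping of total energies before and after hydrogenation. A minimal sketch of that arithmetic follows; the input energies are placeholders chosen only so the example reproduces the quoted 38.6 kcal/mol, not actual computed totals from the paper.

```python
def average_bond_energy(e_bare, e_h_atom, e_covered, n_h):
    """Average bond energy per adsorbed H atom (same units as the inputs):
    E_avg = [E(bare tube) + n*E(H atom) - E(tube + nH)] / n."""
    return (e_bare + n_h * e_h_atom - e_covered) / n_h

# Placeholder relative energies (kcal/mol), with the bare tube and the free
# H atom taken as the zero of energy for illustration:
e_avg = average_bond_energy(e_bare=0.0, e_h_atom=0.0, e_covered=-3860.0, n_h=100)
print(f"average C-H bond energy: {e_avg:.1f} kcal/mol")
```

Comparing such per-bond averages with half the H2 bond energy (about 52 kcal/mol per atom) is what motivates the conclusion that 100% atomic coverage is hard to reach.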
Dudding, Travis; Houk, Kendall N
2004-04-20
The catalytic asymmetric thiazolium- and triazolium-catalyzed benzoin condensations of aldehydes and ketones were studied with computational methods. Transition-state geometries were optimized by using Morokuma's IMOMO [integrated MO (molecular orbital) + MO method] variation of ONIOM (n-layered integrated molecular orbital method) with a combination of B3LYP/6-31G(d) and AM1 levels of theory, and final transition-state energies were computed with single-point B3LYP/6-31G(d) calculations. Correlations between experiment and theory were found, and the origins of stereoselection were identified. Thiazolium catalysts were predicted to be less selective than triazolium catalysts, a trend also found experimentally.
Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan
2008-10-14
Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.
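The ONIOM extrapolation underlying this and several other abstracts here is a simple three-term energy combination. Below is a schematic sketch of the two-layer formula; the energies are placeholders standing in for real QM calculations, and in the electronic-embedding variant described above the two model-system energies would be evaluated in the field of the low-level region's Mulliken point charges.

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM extrapolation:
    E(ONIOM) = E_high(model) + E_low(real) - E_low(model).
    With electronic embedding, e_high_model and e_low_model are computed
    with the model system polarized by the surrounding point charges."""
    return e_high_model + e_low_real - e_low_model

# Placeholder total energies (hartree) for illustration only:
e = oniom2_energy(e_high_model=-40.52, e_low_real=-155.10, e_low_model=-40.20)
print(f"E(ONIOM) = {e:.2f} hartree")
```

The subtraction of the low-level model energy is what cancels the double-counted region, so errors largely cancel when the low level describes the model system consistently in both calculations.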
NASA Astrophysics Data System (ADS)
Wei, Jing; Wang, Jin-Yun; Zhang, Min-Yi; Chai, Guo-Liang; Lin, Chen-Sheng; Cheng, Wen-Dan
2013-01-01
We investigate the effect of the side chain on the first-order hyperpolarizability of an α-helical polyalanine peptide with a mutation at the 10th alanine (Acetyl(ala)9X(ala)7NH2). Structures of the various substituted peptides are optimized with the ONIOM (DFT:AM1) scheme, and the linear and nonlinear optical properties are then calculated by the SOS//CIS/6-31G∗ method. The polarizability and first-order hyperpolarizability increase appreciably only when 'X' is phenylalanine, tyrosine, or tryptophan. We also discuss the origin of the nonlinear optical response and determine what causes the increase of the first-order hyperpolarizability. Our results strongly suggest that side chains containing benzene, phenol, and indole make important contributions to the first-order hyperpolarizability.
Ali-Torres, Jorge; Dannenberg, J J
2012-12-06
We report ONIOM calculations using B3LYP/D95** and AM1 on β-sheet formation from acetyl(Ala)(N)NH(2) (N = 28 or 40). The sheets contain from one to four β-turns for N = 28 and up to six for N = 40. We have obtained four types of geometrically optimized structures. All contain only β-turns. They differ from each other in the types of β-turns formed. The unsolvated sheets containing two turns are most stable. Aqueous solvation (using the SM5.2 and CPCM methods) reduces the stabilities of the folded structures compared to the extended strands.
Extending density functional embedding theory for covalently bonded systems.
Yu, Kuang; Carter, Emily A
2017-12-19
Quantum embedding theory aims to provide an efficient solution to obtain accurate electronic energies for systems too large for full-scale, high-level quantum calculations. It adopts a hierarchical approach that divides the total system into a small embedded region and a larger environment, using different levels of theory to describe each part. Previously, we developed a density-based quantum embedding theory called density functional embedding theory (DFET), which achieved considerable success in metals and semiconductors. In this work, we extend DFET into a density-matrix-based nonlocal form, enabling DFET to study the stronger quantum couplings between covalently bonded subsystems. We name this theory density-matrix functional embedding theory (DMFET), and we demonstrate its performance in several test examples that resemble various real applications in both chemistry and biochemistry. DMFET gives excellent results in all cases tested thus far, including predicting isomerization energies, proton transfer energies, and highest occupied molecular orbital-lowest unoccupied molecular orbital gaps for local chromophores. Here, we show that DMFET systematically improves the quality of the results compared with widely used state-of-the-art methods, such as the simple capped-cluster model or the ONIOM method.
Treesuwan, Witcha; Hirao, Hajime; Morokuma, Keiji; Hannongbua, Supa
2012-05-01
As the mechanism underlying the sense of smell is unclear, different models have been used to rationalize structure-odor relationships. To gain insight into odorant molecules from bread baking, binding energies and vibration spectra in the gas phase and in the protein environment [7-transmembrane helices (7TMHs) of rhodopsin] were calculated using density functional theory [B3LYP/6-311++G(d,p)] and ONIOM [B3LYP/6-311++G(d,p):PM3] methods. It was found that acetaldehyde ("acid" category) binds strongly in the large cavity inside the receptor, whereas 2-ethyl-3-methylpyrazine ("roasted") binds weakly. Lys296, Tyr268, Thr118 and Ala117 were identified as key residues in the binding site. More emphasis was placed on how vibrational frequencies are shifted and intensities modified in the receptor protein environment. Principal component analysis (PCA) suggested that the frequency shifts of C-C stretching, CH(3) umbrella, C = O stretching and CH(3) stretching modes have a significant effect on odor quality. In fact, the frequency shifts of the C-C stretching and C = O stretching modes, as well as CH(3) umbrella and CH(3) symmetric stretching modes, exhibit different behaviors in the PCA loadings plot. A large frequency shift in the CH(3) symmetric stretching mode is associated with the sweet-roasted odor category and separates this from the acid odor category. A large frequency shift of the C-C stretching mode describes the roasted and oily-popcorn odor categories, and separates these from the buttery and acid odor categories.
Liu, Kuan-Yu; Herbert, John M
2017-10-28
Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
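The distance-thresholded many-body expansion discussed above can be sketched compactly. The toy below truncates at two-body terms (the paper goes to four-body with counterpoise corrections, omitted here for brevity), and `toy_energy` is a made-up pairwise-additive stand-in for a QM calculation, so the truncated expansion is exact for this example.

```python
import itertools
import math

def mbe2(fragments, energy, cutoff):
    """Two-body MBE with a distance cutoff:
    E ~ sum_i E_i + sum_{i<j, r_ij < cutoff} (E_ij - E_i - E_j)."""
    e1 = {i: energy([f]) for i, f in enumerate(fragments)}
    total = sum(e1.values())
    for i, j in itertools.combinations(range(len(fragments)), 2):
        r = math.dist(fragments[i]["center"], fragments[j]["center"])
        if r < cutoff:  # skip distant pairs entirely
            total += energy([fragments[i], fragments[j]]) - e1[i] - e1[j]
    return total

# Toy pairwise-additive "energy": one unit per monomer plus a weak 1/r
# pair interaction (placeholder physics, not a water potential).
def toy_energy(frags):
    centers = [f["center"] for f in frags]
    e = -1.0 * len(frags)
    for a, b in itertools.combinations(centers, 2):
        e += -0.1 / max(math.dist(a, b), 1.0)
    return e

waters = [{"center": (0.0, 0.0, 0.0)}, {"center": (3.0, 0.0, 0.0)},
          {"center": (0.0, 3.0, 0.0)}]
print(f"MBE(2) total: {mbe2(waters, toy_energy, cutoff=10.0):.4f}")
```

With a generous cutoff the two-body sum reproduces the pairwise-additive supersystem energy exactly; tightening the cutoff trades accuracy for fewer subsystem calculations, which is the trade-off the paper's thresholding schemes quantify.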
Ahumedo, Maicol; Drosos, Juan Carlos; Vivas-Reyes, Ricardo
2014-05-01
Molecular docking methods were applied to simulate the coupling of a set of nineteen acyl homoserine lactone analogs into the binding site of the transcriptional receptor LasR. The best pose of each ligand was explored and a qualitative analysis of the possible interactions present in the complex was performed. From the results of the protein-ligand complex analysis, it was found that residues Tyr-64 and Tyr-47 are involved in important interactions, which mainly determine the antagonistic activity of the AHL analogues considered for this study. The effect of different substituents on the aromatic ring, the common structure to all ligands, was also evaluated focusing on how the interaction with the two previously mentioned tyrosine residues was affected. Electrostatic potential map calculations based on the electron density and the van der Waals radii were performed on all ligands to graphically aid in the explanation of the variation of charge density on their structures when the substituent on the aromatic ring is changed through the elements of the halogen group series. A quantitative approach was also considered and for that purpose the ONIOM method was performed to estimate the energy change in the different ligand-receptor complex regions. Those energy values were tested for their relationship with the corresponding IC50 in order to establish if there is any correlation between energy changes in the selected regions and the biological activity. The results obtained using the two approaches may contribute to the field of quorum sensing active molecules; the docking analysis revealed the role of some binding site residues involved in the formation of a halogen bridge with ligands. These interactions have been demonstrated to be responsible for the interruption of the signal propagation needed for the quorum sensing circuit. 
Using the other approach, the structure-activity relationship (SAR) analysis, it was possible to establish which structural characteristics and chemical requirements are necessary to classify a compound as a possible agonist or antagonist against the LasR binding site.
Color vision: "OH-site" rule for seeing red and green.
Sekharan, Sivakumar; Katayama, Kota; Kandori, Hideki; Morokuma, Keiji
2012-06-27
Eyes gather information, and color forms an extremely important component of that information, especially for animals foraging and navigating within their immediate environment. By using the ONIOM (QM/MM) (ONIOM = our own N-layer integrated molecular orbital plus molecular mechanics) method, we report a comprehensive theoretical analysis of the structure and molecular mechanism of spectral tuning of monkey red- and green-sensitive visual pigments. We show that the interaction of the retinal with three hydroxyl-bearing amino acids near the β-ionone ring of the retinal in opsin, A164S, F261Y, and A269T, increases the electron delocalization, decreases the bond-length alternation, and leads to variation in the wavelength of maximal absorbance of the retinal in the red- and green-sensitive visual pigments. On the basis of this analysis, we propose the "OH-site" rule for seeing red and green. This rule is also shown to account for the spectral shifts obtained from hydroxyl-bearing amino acids near the Schiff base in different visual pigments: at site 292 (A292S, A292Y, and A292T) in bovine and at site 111 (Y111) in squid opsins. The OH-site rule is therefore site-specific rather than pigment-specific and thus can be used for tracking spectral shifts in any visual pigment.
NASA Astrophysics Data System (ADS)
Ohta, Ayumi; Kobayashi, Osamu; Danielache, Sebastian O.; Nanbu, Shinkoh
2017-03-01
The ultra-fast photoisomerization reactions between 1,3-cyclohexadiene (CHD) and 1,3,5-cis-hexatriene (HT) in both hexane and ethanol solvents were revealed by nonadiabatic ab initio molecular dynamics (AI-MD) with a particle-mesh Ewald summation method and our Own N-layered Integrated molecular Orbital and molecular Mechanics (PME-ONIOM) scheme. The Zhu-Nakamura trajectory surface hopping (ZN-TSH) method was employed to treat the ultra-fast nonadiabatic decay process. The results of the hexane and ethanol simulations agree reasonably with experimental data. A high nonpolar-nonpolar affinity between CHD and the solvent was observed in hexane, which strongly affected the excited-state lifetimes, the CHD:HT product branching ratio, and the solute (CHD) dynamics. In ethanol, however, the CHD solute isomerized within the solvent cage formed by the first solvation shell. The photochemical dynamics in ethanol is therefore similar to the process in vacuo (isolated CHD dynamics).
Usharani, Dandamudi; Srivani, Palakuri; Sastry, G Narahari; Jemmis, Eluvathingal D
2008-06-01
Available X-ray crystal structures of phosphodiesterase 4 (PDE4) fall into two groups based on a secondary-structure difference, a 310-helix versus a turn, in the M-loop region. The only variable discernible between these two sets is the pH of the crystallization conditions. Assuming that protonation is possible at lower pH, the thermodynamics of protonation and deprotonation of the aspartic acid and cysteine side chains and of the amide bonds are calculated. The models in the gas phase and in explicit solvent using the ONIOM method are calculated at the B3LYP/6-31+G* and B3LYP/6-31+G*:UFF levels of theory, respectively. Molecular dynamics (MD) simulations are also performed on the M-loop region with a 310-helix or a turn in explicit water for 10 ns under NPT conditions. The isodesmic equations for the various protonation states show that the turn-containing structure is thermodynamically more stable when proline or cysteine is protonated. The preference for the turn structure on protonation (pH = 6.5-7.5) is due to an increase in the number of hydrogen-bonding and electrostatic interactions gained from the surrounding environment, such as adjacent residues and solvent molecules.
Carbon Monoxide Hydrogenation on Ice Surfaces.
Kuwahata, Kazuaki; Ohno, Kaoru
2018-03-14
We have performed density functional calculations to investigate the carbon monoxide hydrogenation reaction (H + CO → HCO), which is important in interstellar clouds. We found that the activation energy of the reaction on amorphous ice is lower than that on crystalline ice. In the course of this study, we demonstrated that the excitation energy of the reactant molecule (CO) can roughly be used in place of the activation energy. This relationship holds also for small water clusters at the CCSD level of calculation and at the two-layer ONIOM (CCSD:X3LYP) level. Since it is generally computationally demanding to estimate activation energies of chemical reactions in an environment of many water molecules, this relationship enables one to determine the activation energy of this reaction on ice surfaces from knowledge of the CO excitation energy alone. Incorporating quantum-tunneling effects, we discuss the reaction rate on ice surfaces. Our estimate that the reaction rate on amorphous ice is almost twice that on crystalline ice is qualitatively consistent with the experimental evidence reported by Hidaka et al. [Chem. Phys. Lett., 2008, 456, 36].
Ion, Bogdan F; Bushnell, Eric A C; Luna, Phil De; Gauld, James W
2012-10-11
Ornithine cyclodeaminase (OCD) is an NAD+-dependent deaminase that is found in bacterial species such as Pseudomonas putida. Importantly, it catalyzes the direct conversion of the amino acid L-ornithine to L-proline. Using molecular dynamics (MD) and a hybrid quantum mechanics/molecular mechanics (QM/MM) method in the ONIOM formalism, the catalytic mechanism of OCD has been examined. The rate limiting step is calculated to be the initial step in the overall mechanism: hydride transfer from the L-ornithine's C(α)-H group to the NAD+ cofactor with concomitant formation of a C(α)=NH(2)+ Schiff base with a barrier of 90.6 kJ mol-1. Importantly, no water is observed within the active site during the MD simulations suitably positioned to hydrolyze the C(α)=NH(2)+ intermediate to form the corresponding carbonyl. Instead, the reaction proceeds via a non-hydrolytic mechanism involving direct nucleophilic attack of the δ-amine at the C(α)-position. This is then followed by cleavage and loss of the α-NH(2) group to give the Δ1-pyrroline-2-carboxylate that is subsequently reduced to L-proline.
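A barrier like the 90.6 kJ mol-1 reported above can be turned into a rough rate via transition-state theory. The sketch below uses the Eyring equation and, as an assumption, treats the reported value as a free-energy barrier at 298.15 K (the paper reports an energy barrier), so the result is only an order-of-magnitude estimate.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate(barrier_kj_mol, temperature=298.15):
    """Eyring rate k = (kB*T/h) * exp(-dG/(R*T)), returned in s^-1,
    taking the barrier as a molar free energy in kJ/mol."""
    return (K_B * temperature / H) * math.exp(
        -barrier_kj_mol * 1000.0 / (R * temperature))

print(f"k ~ {eyring_rate(90.6):.2e} s^-1")
```

The estimate lands in the 10^-4 to 10^-3 s^-1 range, i.e., a slow turnover, consistent with hydride transfer being the rate-limiting step.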
Adsorption in zeolites using mechanically embedded ONIOM clusters
Patet, Ryan E.; Caratzoulas, Stavros; Vlachos, Dionisios G.
2016-09-01
Here, we have explored mechanically embedded three-layer QM/QM/MM ONIOM models for computational studies of binding in Al-substituted zeolites. In all the models considered, the high-level-theory layer consists of the adsorbate molecule and of the framework atoms within the first two coordination spheres of the Al atom and is treated at the M06-2X/6-311G(2df,p) level. For simplicity, flexibility and routine applicability, the outer, low-level-theory layer is treated with the UFF. We have modelled the intermediate-level layer quantum mechanically and investigated the performance of HF theory and of three DFT functionals, B3LYP, M06-2X and ωB97x-D, for different layer sizes and various basis sets, with and without BSSE corrections. We have studied the binding of sixteen probe molecules in H-MFI and compared the computed adsorption enthalpies with published experimental data. We have demonstrated that HF and B3LYP are inadequate for the description of the interactions between the probe molecules and the framework surrounding the metal site of the zeolite on account of their inability to capture dispersion forces. Both M06-2X and ωB97x-D on average converge within ca. 10% of the experimental values. We have further demonstrated transferability of the approach by computing the binding enthalpies of n-alkanes (C1-C8) in H-MFI, H-BEA and H-FAU, with very satisfactory agreement with experiment. The computed entropies of adsorption of n-alkanes in H-MFI are also found to be in good agreement with experimental data. Finally, we compare with published adsorption energies calculated by periodic DFT for n-C3 to n-C6 alkanes, water and methanol in H-ZSM-5 and find very good agreement.
NASA Astrophysics Data System (ADS)
Chain, Fernando; Iramain, Maximiliano Alberto; Grau, Alfredo; Catalán, César A. N.; Brandán, Silvia Antonia
2017-01-01
N-(3,4-dimethoxybenzyl)-hexadecanamide (DMH) was characterized by using Fourier transform infrared (FT-IR) and Raman (FT-Raman), ultraviolet-visible (UV-Visible), and hydrogen and carbon nuclear magnetic resonance (1H and 13C NMR) spectroscopies. The structural, electronic, topological and vibrational properties were evaluated in the gas phase and in n-hexane employing ONIOM and self-consistent reaction field (SCRF) calculations. The atomic charges, molecular electrostatic potentials, stabilization energies and topological properties of DMH were analyzed and compared with those calculated for N-(3,4-dimethoxybenzyl)-acetamide (DMA) in order to evaluate the effect of the side chain on the properties of DMH. The reactivity and behavior of this alkamide were predicted by using the gap energies and some descriptors. Force fields and the corresponding force constants are reported for DMA only, in the gas phase and in n-hexane, because of the large number of normal vibrational modes of DMH, while complete vibrational assignments are presented for DMA and both forms of DMH. Comparisons between the experimental FT-IR, FT-Raman, UV-Visible and 1H and 13C NMR spectra and the corresponding theoretical ones showed reasonable agreement.
Reaction Dynamics Following Ionization of Ammonia Dimer Adsorbed on Ice Surface.
Tachikawa, Hiroto
2016-09-22
The ice surface provides an effective two-dimensional reaction field in interstellar space. However, how the ice surface affects reaction mechanisms is still unknown. In the present study, the reaction of an ammonia dimer cation adsorbed on both water ice and cluster surfaces was theoretically investigated using direct ab initio molecular dynamics (AIMD) combined with our own n-layered integrated molecular orbital and molecular mechanics (ONIOM) method, and the results were compared with reactions in the gas phase and on water clusters. A rapid proton transfer (PT) from NH3+ to NH3 takes place after the ionization, and the intermediate complex NH2(NH4+) is formed. The PT rate was significantly affected by the medium connected to the ammonia dimer: the PT time was calculated to be 50 fs (in the gas phase), 38 fs (on ice), and 28-33 fs (on water clusters). The dissociation of NH2(NH4+) occurred on the ice surface. The reason behind the reaction acceleration on an ice surface is discussed.
Meng, Rui-Hong; Cao, Xiong; Hu, Shuang-Qi; Hu, Li-Shuang
2017-08-01
The cooperativity effects of the H-bonding interactions in HMX (1,3,5,7-tetranitro-1,3,5,7-tetrazacyclooctane)∙∙∙HMX∙∙∙FA (formamide), HMX∙∙∙HMX∙∙∙H2O and HMX∙∙∙HMX∙∙∙HMX complexes involving the chair and chair-chair HMX conformers are investigated using the ONIOM2 (CAM-B3LYP/6-31++G(d,p):PM3) and ONIOM2 (M06-2X/6-31++G(d,p):PM3) methods. The solvent effect of FA or H2O on the cooperativity in HMX∙∙∙HMX∙∙∙HMX is evaluated with the integral equation formalism polarized continuum model. The results show that the cooperativity and anti-cooperativity effects are not notable in any of the systems. Although the effect of solvation on the binding energy of the ternary system HMX∙∙∙HMX∙∙∙HMX is not large, its effect on the cooperativity of the H-bonds is notable, leading to mutually strengthened H-bonding interactions in solution. This is perhaps the reason for the formation of different conformations of HMX in different solvents. Surface electrostatic potentials and the reduced density gradient are used to reveal the nature of the solvent effect on the cooperativity in HMX∙∙∙HMX∙∙∙HMX. Graphical abstract: RDG isosurface and electrostatic potential surface of HMX∙∙∙HMX∙∙∙HMX.
Zeng, Guixiang; Maeda, Satoshi; Taketsugu, Tetsuya; Sakaki, Shigeyoshi
2016-10-01
A theoretically designed pincer-type phosphorus compound is found to be active for the hydrogenation of carbon dioxide (CO2) with ammonia-borane. DFT, ONIOM(CCSD(T):MP2), and CCSD(T) computational results demonstrate that the reaction occurs through a phosphorus-ligand cooperative catalytic function, which provides an unprecedented protocol for metal-free CO2 conversion. The phosphorus compounds with the NNN ligand are more active than those with the ONO ligand; the conjugated, planar ligand considerably improves the efficiency of the catalyst.
Furan production from glycoaldehyde over HZSM-5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seonah; Evans, Tabitha J.; Mukarakate, Calvin
Catalytic fast pyrolysis of biomass over zeolite catalysts results primarily in aromatic (e.g. benzene, toluene, xylene) and olefin products. However, furans are higher-value intermediates because of their ability to be readily transformed into gasoline, diesel, and chemicals. Here we investigate possible mechanisms for the coupling of glycoaldehyde, a common product of cellulose pyrolysis, over HZSM-5 for the formation of furans. Experimental measurements of neat glycoaldehyde over a fixed bed of HZSM-5 confirm that furans (e.g. furanone) are products of this reaction at temperatures below 300 °C, with several aldol condensation products as co-products (e.g. benzoquinone). However, under typical catalytic fast pyrolysis conditions (>400 °C), further reactions occur that lead to the usual aromatic product slate. ONIOM calculations were used to identify the pathway for glycoaldehyde coupling toward furanone and hydroxyfuranone products, with dehydration reactions serving as the rate-determining steps with typical intrinsic reaction barriers of 40 kcal mol⁻¹. The reaction mechanisms for glycoaldehyde are likely similar to those of other small oxygenates such as acetaldehyde, lactaldehyde, and hydroxyacetone, and this study provides a generalizable mechanism of oxygenate coupling and furan formation over zeolite catalysts.
Mechanism of falcipain-2 inhibition by α,β-unsaturated benzo[1,4]diazepin-2-one methyl ester
NASA Astrophysics Data System (ADS)
Grazioso, Giovanni; Legnani, Laura; Toma, Lucio; Ettari, Roberta; Micale, Nicola; De Micheli, Carlo
2012-09-01
Falcipain-2 (FP-2) is a papain-family cysteine protease of Plasmodium falciparum whose primary function is to degrade the host red cell hemoglobin within the food vacuole in order to provide free amino acids for parasite protein synthesis. Additionally, it promotes host cell rupture by cleaving the skeletal proteins of the erythrocyte membrane. Therefore, the inhibition of FP-2 represents a promising target in the search for novel antimalarial drugs. A potent FP-2 inhibitor, characterized by the presence in its structure of the 1,4-benzodiazepine scaffold and an α,β-unsaturated methyl ester moiety capable of reacting with the Cys42 thiol group located in the active site of FP-2, has recently been reported in the literature. In order to study in depth the inhibition mechanism triggered by this interesting compound, we carried out, through ONIOM hybrid calculations, a computational investigation of the processes occurring when the inhibitor targets the enzyme and eventually leads to an irreversible covalent Michael adduct. Each step of the reaction mechanism has been accurately characterized, and a detailed description of each possible intermediate and transition state along the pathway is reported.
NASA Astrophysics Data System (ADS)
Alamiddine, Zakaria; Selvam, Balaji; Cerón-Carrasco, José P.; Mathé-Allainmat, Monique; Lebreton, Jacques; Thany, Steeve H.; Laurent, Adèle D.; Graton, Jérôme; Le Questel, Jean-Yves
2015-12-01
The binding of thiacloprid (THI), a neonicotinoid insecticide, with Aplysia californica acetylcholine binding protein (Ac-AChBP), the surrogate of the extracellular domain of insect nicotinic acetylcholine receptors, has been studied with a QM/QM' hybrid methodology using the ONIOM approach (M06-2X/6-311G(d):PM6). The contributions of Ac-AChBP key residues to THI binding are accurately quantified from a structural and energetic point of view. The importance of water-mediated hydrogen-bond (H-bond) interactions involving two water molecules and the Tyr55 and Ser189 residues in the vicinity of the THI nitrile group is especially highlighted. A larger stabilization energy is obtained with the THI-Ac-AChBP complex compared to imidacloprid (IMI), the forerunner of neonicotinoid insecticides. Pairwise interaction energy calculations rationalize this result with, in particular, a significantly more important contribution of the pivotal aromatic residues Trp147 and Tyr188 with THI through CH···π/CH···O and π-π stacking interactions, respectively. These trends are confirmed through a complementary non-covalent interaction (NCI) analysis of selected THI-Ac-AChBP amino acid pairs.
Moreira, Cátia; Ramos, Maria J; Fernandes, Pedro Alexandrino
2016-06-27
This paper is devoted to understanding the reaction mechanism of Mycobacterium tuberculosis glutamine synthetase (mtGS) in atomic detail, using computational quantum mechanics/molecular mechanics (QM/MM) methods at the ONIOM M06-D3/6-311++G(2d,2p):ff99SB//B3LYP/6-31G(d):ff99SB level of theory. The complete reaction follows a three-step mechanism: the spontaneous transfer of phosphate from ATP to glutamate upon ammonium binding (ammonium quickly loses a proton to Asp54), the attack of ammonia on phosphorylated glutamate (yielding protonated glutamine), and the deprotonation of glutamine by the leaving phosphate. This exothermic reaction has an activation free energy of 21.5 kcal mol⁻¹, which is consistent with that described for Escherichia coli glutamine synthetase (15-17 kcal mol⁻¹). The participating active-site residues have been identified and their roles and energy contributions clarified. This study provides an insightful atomic description of the biosynthetic reaction that takes place in this enzyme, opening doors to more accurate studies for developing new anti-tuberculosis therapies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bensouilah, Nadjia; Fisli, Hassina; Bensouilah, Hamza; Zaater, Sihem; Abdaoui, Mohamed; Boutemeur-Kheddis, Baya
2017-10-01
In this work, the inclusion complex between DCY/CENS (N-(2-chloroethyl)-N-nitroso-N′,N′-dicyclohexylsulfamide) and β-cyclodextrin (β-CD) is investigated using fluorescence spectroscopy and the PM3, ONIOM2 and DFT methods. The experimental part reveals that DCY/CENS forms an inclusion complex with β-CD in a 1:1 stoichiometric ratio. The stability constant is evaluated using the Benesi-Hildebrand equation. The theoretical optimizations show that the lipophilic fraction of the molecule (the cyclohexyl group) lies inside the β-CD cavity; accordingly, the nitroso-chloroethyl moiety is situated outside the cavity of the macromolecular host. The favorable structure of the optimized complex indicates the existence of weak intermolecular hydrogen bonds and important van der Waals (vdW) interactions, which are studied on the basis of natural bond orbital (NBO) analysis. NBO is employed to compute the electronic donor-acceptor exchanges between the drug and β-CD. Furthermore, a detailed topological charge-density analysis based on the quantum theory of atoms in molecules (QTAIM) has been carried out on the most favorable complex using the B3LYP/6-31G(d) method. The presence of stabilizing intermolecular hydrogen bonds and van der Waals interactions in the most favorable complex is predicted, and the energies of these interactions are estimated with Espinosa's formula. The findings of this investigation reveal a good correlation between the structural parameters and the electron density. Finally, based on DFT calculations, the reactivity of the molecule of interest in the free state was studied and compared with that in the complexed state using the chemical potential, global hardness, global softness, electronegativity, electrophilicity and local reactivity descriptors.
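For a 1:1 host-guest complex, the Benesi-Hildebrand treatment mentioned above linearizes the fluorescence change as 1/ΔF = 1/(K·ΔFmax·[host]) + 1/ΔFmax, so the association constant K falls out of a double-reciprocal fit. A minimal sketch (the function name and the synthetic data are illustrative, not values from the study):

```python
def benesi_hildebrand_K(host_conc, delta_F):
    """Estimate the 1:1 association constant K (M^-1) from a
    double-reciprocal (Benesi-Hildebrand) plot:
        1/dF = 1/(K * dFmax * [host]) + 1/dFmax
    A straight-line fit of 1/dF against 1/[host] gives
    K = intercept / slope."""
    xs = [1.0 / c for c in host_conc]
    ys = [1.0 / f for f in delta_F]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept / slope

# Synthetic check: data generated from an exact 1:1 binding isotherm
K_true, dFmax = 2.0e4, 100.0
conc = [2e-5, 5e-5, 1e-4, 2e-4, 5e-4]                  # [beta-CD] in mol/L
dF = [dFmax * K_true * c / (1 + K_true * c) for c in conc]
```

Because the transform is exactly linear for ideal 1:1 data, the fit recovers K_true; with real fluorescence data the same fit yields the experimental stability constant.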
Mahboob, Abdullah; Vassiliev, Serguei; Poddutoori, Prashanth K; van der Est, Art; Bruce, Doug
2013-01-01
Photosystem II (PSII) of photosynthesis has the unique ability to photochemically oxidize water. Recently, an engineered bacterioferritin photochemical 'reaction centre' (BFR-RC) using a zinc chlorin pigment (ZnCe6) in place of its native heme has been shown to photo-oxidize bound manganese ions through a tyrosine residue, thus mimicking two of the key reactions on the electron donor side of PSII. To understand the mechanism of tyrosine oxidation in BFR-RCs, and to explore the possibility of water oxidation in such a system, we have built an atomic-level model of the BFR-RC using the ONIOM methodology. We studied the influence of axial ligands and carboxyl groups on the oxidation potential of ZnCe6 using DFT, and finally calculated the shift of the redox potential of ZnCe6 in the BFR-RC protein using the multi-conformational molecular mechanics-Poisson-Boltzmann approach. According to our calculations, the redox potential for the first oxidation of ZnCe6 in the BFR-RC protein is only 0.57 V, too low to oxidize tyrosine. We suggest that the observed tyrosine oxidation in the BFR-RC could be driven by the ZnCe6 di-cation. In order to increase the efficiency of tyrosine oxidation, and ultimately oxidize water, the first oxidation potential of ZnCe6 would have to attain a value in excess of 0.8 V. We discuss possibilities for modifying the BFR-RC to achieve this goal.
Rimola, Albert; Ugliengo, Piero
2009-04-14
The reaction of glycine (Gly) with a strained (SiO)2 four-membered ring defect (D2) at the surface of an interstellar silica dust grain has been studied at the ONIOM2[B3LYP/6-31+G(d,p):MNDO] level within a cluster approach, in the context of hypothetical reactions occurring in the interstellar medium. The D2 defect opens up exothermically on reaction with Gly (ΔrU0 = −26.3 kcal mol⁻¹) to give a surface mixed anhydride, Ssurf-O-C(=O)-CH2NH2, as a product. The reaction barriers, ΔU‡0, are 0.1 and 10.4 kcal mol⁻¹ for the reactive channels involving COOH and NH2 as attacking groups, respectively. Calculations show the surface mixed anhydride to be rather stable under the action of interstellar processes, such as reactions with isolated H2O and NH3 molecules or exposure to cosmic rays and UV radiation. The hydrolysis of the surface mixed anhydride to release Gly again was modelled by microsolvation (from 1 to 4 H2O molecules), mimicking what could have happened to the interstellar dust after seeding the primordial ocean on the early Earth. Results of these calculations show that the reaction is exergonic and activated, ΔrG298 becoming more negative and ΔG‡298 being dramatically reduced as the number of H2O molecules increases. The present results are relevant because they show that defects present at interstellar dust surfaces could have played a significant role in capturing, protecting and delivering essential prebiotic compounds to the early Earth.
Yang, Gang; Zhou, Lijun
2014-01-01
Defects are often considered the active sites for chemical reactions. Here a variety of defects in zeolites are used to stabilize zwitterionic glycine, which is not self-stable in the gas phase; in addition, the effects of acidic strength and zeolite channels on zwitterionic stabilization are demonstrated. Glycine zwitterions can be stabilized by all these defects and are energetically preferred over canonical structures at Al and Ga Lewis acid sites, but not at the Ti Lewis acid site or at silanol and titanol hydroxyls. For titanol (Ti-OH), glycine interacts competitively with the framework Ti and hydroxyl sites, and the former, with Lewis acidity, predominates. The transformations from canonical to zwitterionic glycine are clearly more facile over Al and Ga Lewis acid sites than over the Ti Lewis acid site or the titanol and silanol hydroxyls. Charge transfers, which generally increase with adsorption energies, are found to largely decide the zwitterionic stabilization effects. Zeolite channels play a significant role during the stabilization process: in the absence of zeolite channels, canonical structures predominate for all defects; glycine zwitterions remain stable over Al and Ga Lewis acid sites, can exist over the Ti Lewis acid site only with the synergy of H-bonding interactions, and transform spontaneously to canonical structures over silanol and titanol hydroxyls. PMID:25307449
On the temperature dependence of H-Uiso in the riding hydrogen model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lübben, Jens; Volkmann, Christian; Grabowsky, Simon
The temperature dependence of hydrogen Uiso and parent-atom Ueq in the riding hydrogen model is investigated by neutron diffraction, aspherical-atom refinements and QM/MM and MO/MO cluster calculations. Fixed multipliers of 1.2 or 1.5 appear to be underestimated, especially at temperatures below 100 K. The temperature dependence of H-Uiso in N-acetyl-L-4-hydroxyproline monohydrate is investigated. Imposing a constant, temperature-independent multiplier of 1.2 or 1.5 in the riding hydrogen model is found to be inaccurate and severely underestimates H-Uiso below 100 K. Neutron diffraction data at temperatures of 9, 150, 200 and 250 K provide benchmark results for this study. X-ray diffraction data to high resolution, collected at temperatures of 9, 30, 50, 75, 100, 150, 200 and 250 K (synchrotron and home source), reproduce the neutron results only when evaluated with aspherical-atom refinement models, since these take into account bonding and lone-pair electron density; both invariom and Hirshfeld-atom refinement models enable a more precise determination of the magnitude of H-atom displacements than independent-atom model refinements. Experimental efforts are complemented by computing displacement parameters following the TLS+ONIOM approach. A satisfactory agreement between all approaches is found.
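The riding model at issue simply scales the parent atom's equivalent displacement parameter by a fixed multiplier; a minimal sketch of that convention (the function name and numerical values are illustrative):

```python
def riding_h_uiso(u_eq_parent, multiplier=1.2):
    """Riding hydrogen model: U_iso(H) = k * U_eq(parent atom),
    with k conventionally fixed at 1.2 (1.5 for e.g. methyl H).
    The study above finds that such a constant, temperature-independent
    k underestimates U_iso(H) below ~100 K."""
    return multiplier * u_eq_parent

# e.g. a parent C atom with U_eq = 0.020 A^2 gives U_iso(H) = 0.024 A^2
u_h = riding_h_uiso(0.020)
```

A temperature-dependent multiplier, as the benchmark data suggest, would replace the constant `multiplier` with a function of T.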
Fekete, Attila; Komáromi, István
2016-12-07
A proteolytic reaction of papain with a simple model peptide substrate, N-methylacetamide, has been studied. Our aim was twofold: (i) to propose a plausible reaction mechanism with the aid of potential energy surface scans and second geometrical derivatives calculated at the stationary points, and (ii) to investigate the applicability of dispersion-corrected density functional methods, in comparison with the popular hybrid generalized gradient approximation (GGA) method B3LYP without such a correction, in the QM/MM calculations for this particular problem. In the resting state of papain the ion-pair and neutral forms of the Cys-His catalytic dyad have approximately the same energy and are separated by only a small barrier. Zero-point vibrational energy correction shifted this equilibrium slightly towards the neutral form. On the other hand, the electrostatic solvation free energy corrections, calculated using the Poisson-Boltzmann method for structures sampled from molecular dynamics simulation trajectories, resulted in a more stable ion-pair form. All the methods we applied predicted an acylation process of at least two elementary steps via a zwitterionic tetrahedral intermediate. With dispersion-corrected DFT methods, the thioester S-C bond formation and the proton transfer from histidine occur in the same elementary step, although not synchronously: the proton transfer lags behind (or at least does not precede) the S-C bond formation. The predicted transition state corresponds mainly to the S-C bond formation while the proton is still on the histidine Nδ atom. In contrast, the B3LYP method with larger basis sets predicts a transition state in which the S-C bond is almost fully formed and which can be characterized mainly by the Nδ(histidine) to N(amide) proton transfer. Considerably lower activation energy was predicted (especially by the B3LYP method) for the next elementary step of acyl-enzyme formation, the amide bond breaking.
Deacylation appeared to be a single elementary step process in all the methods we applied.
Side reactions of nitroxide-mediated polymerization: N-O versus O-C cleavage of alkoxyamines.
Hodgson, Jennifer L; Roskop, Luke B; Gordon, Mark S; Lin, Ching Yeh; Coote, Michelle L
2010-09-30
Free energies for the homolysis of the NO-C and N-OC bonds were compared for a large number of alkoxyamines at 298 and 393 K, both in the gas phase and in toluene solution. On this basis, the scope of the N-OC homolysis side reaction in nitroxide-mediated polymerization was determined. It was found that the free energies of NO-C and N-OC homolysis are not correlated, with NO-C homolysis being more dependent on the properties of the alkyl fragment and N-OC homolysis being more dependent on the structure of the aminyl fragment. Acyclic alkoxyamines and those bearing the indoline functionality have lower free energies of N-OC homolysis than other cyclic alkoxyamines, with the five-membered pyrrolidine and isoindoline derivatives showing lower free energies than the six-membered piperidine derivatives. For most nitroxides, N-OC homolysis is favored over NO-C homolysis only when a heteroatom α to the NOC carbon center stabilizes the NO-C bond and/or the released alkyl radical is not sufficiently stabilized. As part of this work, accurate methods for the calculation of free energies for the homolysis of alkoxyamines were determined. Thermodynamic parameters accurate to within 4.5 kJ mol⁻¹ of experimental values were obtained using an ONIOM approximation to G3(MP2)-RAD combined with PCM solvation energies at the B3-LYP/6-31G(d) level.
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute accurate local electronic structures and energetics of the embedded cluster with high-level methods, while retaining a low-level description of the environment. The prerequisite step in density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle to the application of density-functional embedding theory to extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems: instead of cutting covalent bonds, we directly split the boundary atoms to maintain the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well-localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than with the other multi-physics approach, ONIOM. This work paves the way for density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
Shi, Qicun; Meroueh, Samy O; Fisher, Jed F; Mobashery, Shahriar
2008-07-23
Penicillin-binding protein 5 (PBP 5) of Escherichia coli hydrolyzes the terminal D-Ala-D-Ala peptide bond of the stem peptides of the cell wall peptidoglycan. The mechanism of PBP 5 catalysis of amide bond hydrolysis is initial acylation of an active-site serine by the peptide substrate, followed by hydrolytic deacylation of this acyl-enzyme intermediate to complete the turnover. The microscopic events of both the acylation and deacylation half-reactions have not been studied. This absence is addressed here by the use of explicit-solvent molecular dynamics simulations and ONIOM quantum mechanics/molecular mechanics (QM/MM) calculations. The potential-energy surface for the acylation reaction, based on MP2/6-31+G(d) calculations, reveals that Lys47 acts as the general base for proton abstraction from Ser44 in the serine acylation step. A discrete potential-energy minimum for the tetrahedral species is not found. The absence of such a minimum implies a conformational change in the transition state, concomitant with serine addition to the amide carbonyl, so as to enable the nitrogen atom of the scissile bond to accept the proton that is necessary for progression to the acyl-enzyme intermediate. Molecular dynamics simulations indicate that transiently protonated Lys47 is the proton donor in the collapse of the tetrahedral intermediate to the acyl-enzyme species. Two pathways for this proton transfer are observed. One is the direct migration of a proton from Lys47. The second is proton transfer via an intermediary water molecule. Although the energy barriers for the two pathways are similar, more conformers sample the latter. The same water molecule that mediates the Lys47 proton transfer to the nitrogen of the departing D-Ala is well positioned, with respect to the Lys47 amine, to act as the hydrolytic water in the deacylation step. Deacylation occurs with the formation of a tetrahedral intermediate over a 24 kcal mol⁻¹ barrier. 
This barrier is approximately 2 kcal mol⁻¹ greater than the barrier (22 kcal mol⁻¹) for the formation of the tetrahedral species in acylation. The potential-energy surface for the collapse of the deacylation tetrahedral species gives a product species 24 kcal mol⁻¹ higher in energy, signifying that the complex would readily reorganize and pave the way for the expulsion of the reaction product from the active site and the regeneration of the catalyst. These computational data dovetail with experimental knowledge of the reaction.
An insight to the molecular interactions of the FDA approved HIV PR drugs against L38L↑N↑L PR mutant
NASA Astrophysics Data System (ADS)
Sanusi, Zainab K.; Govender, Thavendran; Maguire, Glenn E. M.; Maseko, Sibusiso B.; Lin, Johnson; Kruger, Hendrik G.; Honarparvar, Bahareh
2018-03-01
The aspartate protease of human immunodeficiency virus type 1 (HIV-1) has become a crucial antiviral target for which many useful antiretroviral inhibitors have been developed. However, the emergence of new HIV-1 PR mutations enhances drug resistance; hence, the available FDA-approved drugs show less activity towards the protease. A mutation-and-insertion variant designated L38L↑N↑L PR was recently reported from the C-SA HIV-1 subtype. An integrated two-layered ONIOM (QM:MM) method was employed in this study to examine the binding affinities of nine FDA-approved HIV PR inhibitors against this mutant. The computed binding free energies, as well as experimental data, revealed reduced inhibitory activity towards L38L↑N↑L PR in comparison with subtype C-SA HIV-1 PR. This observation suggests that the insertion and mutations significantly affect the binding affinities or characteristics of the HIV PIs and/or the parent PR. The same trend in the computed binding free energies was observed for eight of the nine inhibitors with respect to the experimental binding free energies. The outcome of this study shows that the ONIOM method can be used as a reliable computational approach to rationalize lead compounds against specific targets. The nature of the intermolecular interactions, in terms of host-guest hydrogen bonding, is discussed using the atoms in molecules (AIM) analysis. Natural bond orbital (NBO) analysis was also used to determine the extent of charge transfer between the QM region of the L38L↑N↑L PR enzyme and the FDA-approved drugs. AIM analysis showed that the interactions between the QM region of L38L↑N↑L PR and the FDA-approved drugs are electrostatically dominated; the bond stabilities computed from the NBO analysis support the AIM results. Future studies will focus on improving the computational model by considering explicit water molecules in the active pocket. We believe that this approach has the potential to provide information that will aid in the design of much-improved HIV-1 PR antiviral drugs.
Han, Xinya; Zhu, Xiuyun; Zhu, Shuaihua; Wei, Lin; Hong, Zongqin; Guo, Li; Chen, Haifeng; Chi, Bo; Liu, Yan; Feng, Lingling; Ren, Yanliang; Wan, Jian
2016-01-25
In the present study, a series of novel maleimide derivatives were rationally designed and optimized, and their inhibitory activities against cyanobacterial class-II fructose-1,6-bisphosphate aldolase (Cy-FBA-II) and Synechocystis sp. PCC 6803 were evaluated. The experimental results showed that the introduction of a bulkier group (Br, Cl, CH3, or C6H3-o-F) on the pyrrole-2',5'-dione ring resulted in a decrease in the Cy-FBA-II inhibitory activity of the hit compounds. In general, most of the hit compounds with high Cy-FBA-II inhibitory activities also exhibited high in vivo activities against Synechocystis sp. PCC 6803. In particular, compound 10 not only shows high Cy-FBA-II activity (IC50 = 1.7 μM) but also has the highest in vivo activity against Synechocystis sp. PCC 6803 (EC50 = 0.6 ppm). Thus, compound 10 was selected as a representative molecule, and its probable interactions with the surrounding important residues in the active site of Cy-FBA-II were elucidated by the joint use of molecular docking, molecular dynamics simulations, ONIOM calculations, and enzymatic assays to provide new insight into the binding mode of the inhibitors with Cy-FBA-II. These positive results indicate that the design strategy used in the present study is likely to be a promising way to find novel lead compounds with high inhibitory activities against Cy-FBA-II. The enzymatic and algal inhibition assays suggest that Cy-FBA-II is likely to be a promising target for the design, synthesis, and development of novel specific algicides to address harmful cyanobacterial algal blooms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Bing; Bernstein, Elliot R., E-mail: erb@Colostate.edu
Unimolecular decomposition of nitrogen-rich energetic salt molecules bis(ammonium)5,5′-bistetrazolate (NH{sub 4}){sub 2}BT and bis(triaminoguanidinium) 5,5′-azotetrazolate TAGzT, has been explored via 283 nm laser excitation. The N{sub 2} molecule, with a cold rotational temperature (<30 K), is observed as an initial decomposition product, subsequent to UV excitation. Initial decomposition mechanisms for the two electronically excited salt molecules are explored at the complete active space self-consistent field (CASSCF) level. Potential energy surface calculations at the CASSCF(12,8)/6-31G(d) ((NH{sub 4}){sub 2}BT) and ONIOM (CASSCF/6-31G(d):UFF) (TAGzT) levels illustrate that conical intersections play an essential role in the decomposition mechanism as they provide non-adiabatic, ultrafast radiationless internalmore » conversion between upper and lower electronic states. The tetrazole ring opens on the S{sub 1} excited state surface and, through conical intersections (S{sub 1}/S{sub 0}){sub CI}, N{sub 2} product is formed on the ground state potential energy surface without rotational excitation. The tetrazole rings open at the N2—N3 ring bond with the lowest energy barrier: the C—N ring bond opening has a higher energy barrier than that for any of the N—N ring bonds: this is consistent with findings for other nitrogen-rich neutral organic energetic materials. TAGzT can produce N{sub 2} either by the opening of tetrazole ring or from the N=N group linking its two tetrazole rings. Nonetheless, opening of a tetrazole ring has a much lower energy barrier. Vibrational temperatures of N{sub 2} products are hot based on theoretical predictions. Energy barriers for opening of the tetrazole ring for all the nitrogen-rich energetic materials studied thus far, including both neutral organic molecules and salts, are in the range from 0.31 to 2.71 eV. 
The energy of the final molecular structures of these systems, with dissociated N2 product, is in the range of −1.86 to 3.11 eV. The main difference between energetic salts and neutral nitrogen-rich energetic materials is that energetic salts usually have lower excitation energies.
The experimental and theoretical QM/MM study of interaction of chloridazon herbicide with ds-DNA
NASA Astrophysics Data System (ADS)
Ahmadi, F.; Jamali, N.; Jahangard-Yekta, S.; Jafari, B.; Nouri, S.; Najafi, F.; Rahimi-Nasrabadi, M.
2011-09-01
We report a multispectroscopic, voltammetric, and theoretical hybrid QM/MM study of the interaction between double-stranded DNA, containing both adenine-thymine and guanine-cytosine alternating sequences, and the herbicide chloridazon (CHL). The electrochemical behavior of CHL was studied by cyclic voltammetry on an HMDE, and the interaction of ds-DNA with CHL was investigated by both cathodic differential pulse voltammetry (CDPV) at a hanging mercury drop electrode (HMDE) and anodic differential pulse voltammetry (ADPV) at a glassy carbon electrode (GCE). The binding constants of the CHL-DNA complex obtained by UV/vis, CDPV, and ADPV were 2.1 × 10⁴, 5.1 × 10⁴, and 2.6 × 10⁴, respectively. Competition fluorescence studies revealed that CHL significantly quenches the fluorescence of the DNA-ethidium bromide complex; the apparent Stern-Volmer quenching constant was estimated to be 1.71 × 10⁴. A thermal denaturation study of DNA with CHL revealed a ΔTm of 8.0 ± 0.2 °C. The thermodynamic parameters, i.e., enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG), were 98.45 kJ mol⁻¹, 406.3 J mol⁻¹ K⁻¹, and −22.627 kJ mol⁻¹, respectively. ONIOM QM/MM calculations (DFT 6-31++G(d,p):UFF) were also performed using the Gaussian 03 package. The results revealed that the interaction is base-sequence dependent, with CHL interacting more strongly with ds-DNA at GC base sequences, and that CHL may interact with ds-DNA via an intercalation mode.
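The reported thermodynamic parameters can be cross-checked against the Gibbs relation ΔG = ΔH − TΔS. A minimal sketch, assuming T = 298 K (the abstract does not state the temperature) and entropy in J mol⁻¹ K⁻¹:

```python
# Consistency check of the reported thermodynamics via ΔG = ΔH - T·ΔS.
# Values taken from the abstract; T = 298 K is an assumption.
dH = 98.45e3     # enthalpy, J/mol
dS = 406.3       # entropy, J/(mol*K)
T = 298.0        # assumed temperature, K

dG = dH - T * dS  # Gibbs free energy, J/mol (≈ -22.6 kJ/mol)
```

The result, about −22.63 kJ mol⁻¹, matches the reported −22.627 kJ mol⁻¹, which supports reading the entropy unit as J mol⁻¹ K⁻¹.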
NASA Astrophysics Data System (ADS)
Ntombela, Thandokuhle; Fakhar, Zeynab; Ibeji, Collins U.; Govender, Thavendran; Maguire, Glenn E. M.; Lamichhane, Gyanu; Kruger, Hendrik G.; Honarparvar, Bahareh
2018-05-01
Tuberculosis remains a dreadful disease that has claimed many human lives worldwide, and elimination of the causative agent, Mycobacterium tuberculosis (Mtb), remains elusive. Multidrug-resistant TB is increasing rapidly worldwide; there is therefore an urgent need to improve current antibiotics and identify novel drug targets to curb the TB burden. L,D-Transpeptidase 2 (LdtMt2) is an essential protein in Mtb that is responsible for virulence and growth during the chronic stage of the disease. Both D,D- and L,D-transpeptidases must be inhibited concurrently to eradicate the bacterium. It was recently discovered that classic penicillins inhibit only D,D-transpeptidases, while L,D-transpeptidases are blocked by carbapenems; this has contributed to drug resistance and the persistence of tuberculosis. Herein, a hybrid two-layered ONIOM (B3LYP/6-31+G(d):AMBER) model was used to investigate the binding interactions of LdtMt2 complexed with four carbapenems (biapenem, imipenem, meropenem, and tebipenem) to gain molecular insight into the drug-enzyme complexation event. In the studied complexes, the carbapenems together with the catalytic triad active-site residues of LdtMt2 (His187, Ser188, and Cys205) were treated at the QM level [B3LYP/6-31+G(d)], while the remaining part of each complex was treated at the MM level (AMBER force field). The resulting Gibbs free energies (ΔG), enthalpies (ΔH), and entropies (ΔS) showed that the carbapenems exhibit reasonable binding interactions towards LdtMt2. Increasing the number of amino acid residues forming hydrogen-bond interactions in the QM layer had a significant impact on the binding interaction energies and on the stability of the carbapenems inside the active pocket of LdtMt2. The theoretical binding free energies obtained in this study reflect the same trend as the experimental observations.
The electrostatic, hydrogen-bonding, and van der Waals interactions between the carbapenems and LdtMt2 were also assessed. To further examine the nature of the intermolecular interactions in the carbapenem-LdtMt2 complexes, AIM and NBO analyses were performed for the QM region (the carbapenems and the active residues of LdtMt2). These analyses revealed that the hydrogen-bond interactions and the charge transfer from bonding to antibonding orbitals between the catalytic residues of the enzyme and the selected ligands enhance the binding and stability of the carbapenem-LdtMt2 complexes.
Han, Xinya; Zhu, Xiuyun; Hong, Zongqin; Wei, Lin; Ren, Yanliang; Wan, Fen; Zhu, Shuaihua; Peng, Hao; Guo, Li; Rao, Li; Feng, Lingling; Wan, Jian
2017-06-26
Class II fructose-1,6-bisphosphate aldolases (FBA-II) are attractive new targets for the discovery of drugs to combat invasive fungal infection, because they are absent in animals and higher plants. Although several FBA-II inhibitors have been reported, none of them has exhibited an antifungal effect so far. In this study, several novel inhibitors of FBA-II from C. albicans (Ca-FBA-II) with potent antifungal effects were rationally designed by jointly using a specific protocol of molecular docking-based virtual screening, an accurate binding-conformation evaluation strategy, synthesis, and enzymatic assays. The enzymatic assays reveal that compounds 3c, 3e-g, 3j, and 3k exhibit high inhibitory activity against Ca-FBA-II (IC50 < 10 μM); the most potent inhibitor is 3g, with an IC50 value of 2.7 μM. Importantly, compounds 3f, 3g, and 3l possess not only high inhibition of Ca-FBA-II but also moderate antifungal activity against C. glabrata (MIC80 = 4-64 μg/mL). Compounds 3g, 3l, and 3k in combination with fluconazole (8 μg/mL) displayed significantly synergistic antifungal activity (MIC80 < 0.0625 μg/mL) against Candida strains resistant to azole drugs. The probable binding modes between 3g and the active site of Ca-FBA-II have been proposed using the DOX (docking, ONIOM, and XO) strategy. To our knowledge, no FBA-II inhibitors with antifungal activity against wild-type and resistant Candida strains had been reported previously. These positive results suggest that the strategy adopted in this study is a promising method for the discovery of novel drugs against azole-resistant fungal pathogens.
Harris, Travis V; Morokuma, Keiji
2013-08-05
Ferritins are cage-like proteins composed of 24 subunits that take up iron(II) and store it as an iron(III) oxide mineral core. A critical step is the ferroxidase reaction, in which oxygen reacts with a di-iron(II) site, proceeding through a peroxo intermediate, to form μ-oxo/hydroxo-bridged di-iron(III) products. The recent crystal structures of copper(II)- and iron(III)-bound frog M ferritin at 2.8 Å resolution [Bertini; et al. J. Am. Chem. Soc. 2012, 134, 6169-6176] provided an opportunity to theoretically investigate the detailed structures of the reactant state and products. In this study, the quantum mechanical/molecular mechanical ONIOM method is used to structurally optimize a series of single-subunit models with various hydration, protonation, and coordination states of the ferroxidase site. Calculated exchange coupling constants (J), Mössbauer parameters, and time-dependent density functional theoretical (TD-DFT) circular dichroism spectra with electronic embedding are compared with the available experimental data. The di-iron(II) model with the most experimentally consistent structural and spectroscopic parameters has 5-coordinate iron centers with Glu23, Glu58, His61, and two waters completing one coordination sphere, and His54, Glu58, Glu103, and Asp140 completing the other. In contrast to a previously proposed structure, Gln137 is not directly coordinated, but it is involved in hydrogen bonding with several iron ligands. For the di-iron(III) products, we find that a μ-oxo-bridged and two doubly bridged (μ-hydroxo and μ-oxo/hydroxo) species are likely coproduced. Although four quadrupole doublets were observed experimentally, we find that two doublets may arise from a single asymmetrically coordinated ferroxidase site. These proposed key structures will help to explore the pathway connecting the di-Fe(II) state to the peroxo intermediate and the branching mechanisms leading to the multiple products.
Bonanata, Jenner; Turell, Lucía; Antmann, Laura; Ferrer-Sueta, Gerardo; Botasini, Santiago; Méndez, Eduardo; Alvarez, Beatriz; Coitiño, E Laura
2017-07-01
Human serum albumin (HSA) has a single reduced cysteine residue, Cys34, whose acidity has been controversial. Three experimental approaches (pH dependence of reactivity towards hydrogen peroxide, ultraviolet titration, and infrared spectroscopy) are used to determine that the pKa value in delipidated HSA is 8.1 ± 0.2 at 37 °C and 0.1 M ionic strength. Molecular dynamics simulations of HSA on the sub-microsecond timescale show that while sulfur exposure to solvent is limited and fluctuating in the thiol form, it increases in the thiolate, stabilized by a persistent hydrogen-bond (HB) network involving Tyr84 and waters bridging to Asp38 and the Gln33 backbone. Insight into the mechanism of Cys34 oxidation by H2O2 is provided by ONIOM(QM:MM) modeling including quantum water molecules. The reaction proceeds through a slightly asynchronous SN2 transition state (TS) with calculated Δ‡G and Δ‡H barriers at 298 K of 59 and 54 kJ mol⁻¹, respectively (the latter within chemical accuracy of the experimental value). A post-TS proton transfer leads to HSA-SO⁻ and water as products. The structured reaction site cages H2O2, which donates a strong HB to the thiolate. Loss of this HB before reaching the TS modulates Cys34 nucleophilicity and contributes to destabilizing H2O2. The lack of reaction-site features required for differential stabilization of the TS (positive charges, H2O2 HB strengthening) explains the striking difference in kinetic efficiency for the same reaction in other proteins (e.g., peroxiredoxins). The structured HB network surrounding HSA-SH, with sequestered waters, carries an entropic penalty on the barrier height. These studies deepen the understanding of the reactivity of HSA-SH, the most abundant thiol in human plasma, and, in a wider perspective, provide clues on the key aspects that modulate thiol reactivity towards H2O2.
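A free-energy barrier like the Δ‡G of 59 kJ mol⁻¹ quoted above maps onto a rate constant via the Eyring equation. The following is a generic transition-state-theory sketch, not the authors' code; a transmission coefficient of 1 is assumed:

```python
import math

# Physical constants (SI)
KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dg_barrier_j_per_mol: float, temperature_k: float) -> float:
    """Rate constant (1/s) from a free-energy barrier via the Eyring equation,
    k = (kB*T/h) * exp(-ΔG‡ / RT), with transmission coefficient 1."""
    prefactor = KB * temperature_k / H
    return prefactor * math.exp(-dg_barrier_j_per_mol / (R * temperature_k))

# Barrier reported in the abstract: Δ‡G = 59 kJ/mol at 298 K
k = eyring_rate(59e3, 298.0)  # on the order of a few hundred per second
```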
Zhou, Yiying; Nelson, William H
2011-10-27
With K-band EPR (electron paramagnetic resonance), ENDOR (electron-nuclear double resonance), and EIE (ENDOR-induced EPR) techniques, three free radicals (RI-RIII) were detected at 298 K in L-lysine hydrochloride dihydrate single crystals X-irradiated at 298 K, and six radicals (R1, R1', R2-R5) were detected when the temperature was lowered from 298 to 66 K. R1 and RI dominated the central portion of the EPR at 66 and 298 K, respectively, and were identified as main-chain deamination radicals, (-)OOCĊH(CH(2))(4)(NH(3))(+). R1' was identified as a main-chain deamination radical with a configuration at 66 K different from that of R1; it probably formed during cooling from 298 to 66 K. The configurations of R1, R1', and RI were analyzed using their coupling tensors. R2 and R3 each contain one α- and four β-proton couplings and have very similar EIEs along the three crystallographic axes. Two-layer ONIOM calculations (at B3LYP/6-31G(d,p):PM3) support the assignment of R2 and R3 to different radicals: dehydrogenation at C4, (-)OOCCH(NH(3))(+)CH(2)ĊH(CH(2))(2)(NH(3))(+), and dehydrogenation at C5, (-)OOCCH(NH(3))(+)(CH(2))(2)ĊHCH(2)(NH(3))(+), respectively. Comparison of the coupling tensors indicated that R2 (66 K) is the same radical as RII (298 K), and R3 the same as RIII; thus RII and RIII are also the C4 and C5 dehydrogenation radicals. R4 and R5 are minority radicals and were observed only when the temperature was lowered to 66 K. They were tentatively assigned as the side-chain deamination radical, (-)OOCCH(NH(3))(+)(CH(2))(3)ĊH(2), and the radical from dehydrogenation at C3, (-)OOCCH(NH(3))(+)ĊH(CH(2))(3)(NH(3))(+), respectively, although the evidence was indirect. From simulation of the EPR (B//a, 66 K), the concentrations of R1, R1', and R2-R5 were estimated as R1, 50%; R1', 11%; R2, 14%; R3, 16%; R4, 6%; R5, 3%.
Targeted studies on the interaction of nicotine and morin molecules with amyloid β-protein.
Boopathi, Subramaniam; Kolandaivel, Ponmalai
2014-03-01
Alzheimer's disease (AD) is a neurodegenerative disorder that occurs due to progressive deposition of amyloid β-protein (Aβ) in the brain. Stable conformations of the solvated Aβ₁₋₄₂ protein were predicted by molecular dynamics (MD) simulation using the OPLSAA force field. The seven-residue peptide (Lys-Leu-Val-Phe-Phe-Ala-Glu), Aβ₁₆₋₂₂, associated with AD was studied and is reported in this paper. Since effective therapeutic agents have yet to be established, attention has focused on the use of natural products as anti-aggregation compounds targeting the Aβ₁₋₄₂ protein directly. Experimental and theoretical investigations suggest that some compounds extracted from natural products might be useful, but detailed insight into the mechanism by which they might act remains elusive. The molecules nicotine and morin are found in cigarettes and beverages, respectively. Here, we report the results of interaction studies of these compounds at each hydrophobic residue of the Aβ₁₆₋₂₂ peptide using the hybrid ONIOM (B3LYP/6-31G**:UFF) method. Interaction with nicotine was found to produce greater deformation of the Aβ₁₆₋₂₂ peptide than interaction with morin. MD simulation studies revealed that interaction of the nicotine molecule with the β-sheet of the Aβ₁₆₋₂₂ peptide transforms the β-sheet into an α-helical structure, which helps prevent the aggregation of Aβ-protein.
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...
One-electron oxidation of individual DNA bases and DNA base stacks.
Close, David M
2010-02-04
In calculations performed with DFT, there is a tendency for the purine cation to be delocalized over several bases in the stack. Attempts have been made to see whether methods other than DFT can be used to calculate localized cations in stacks of purines, and to relate the calculated hyperfine couplings to known experimental results. To calculate reliable hyperfine couplings it is necessary to have an adequate description of spin polarization, which means that electron correlation must be treated properly. UMP2 theory has been shown to be unreliable in estimating spin densities because it overestimates the doubles correction. Therefore, attempts have been made to use quadratic configuration interaction (UQCISD) methods to treat electron correlation. Calculations on the individual DNA bases are presented to show that with UQCISD methods it is possible to calculate hyperfine couplings in good agreement with the experimental results; however, these UQCISD calculations are far more time-consuming than DFT calculations. The calculations are then extended to two stacked guanine bases. Preliminary calculations with UMP2 or UQCISD theory on two stacked guanines lead to a cation localized on a single guanine base.
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, for lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can easily be extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
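The three-step combination described above can be sketched in a few lines. The arrays and numbers below are purely illustrative, and the ΔDRT term is reduced to a lookup of the tabulated measurement-minus-model differences:

```python
import numpy as np

# Hypothetical example; names and values are illustrative, not from the paper.
# Step 1: dose from the standard model-based calculator (e.g. CCCS),
# commissioned against published data only.
d_model = np.array([1.00, 0.95, 0.80, 0.60])     # Gy, along a depth profile

# Commissioning of the measurement-based part: tabulate the difference
# between water-phantom measurements and the CCCS result.
d_measured = np.array([1.02, 0.96, 0.78, 0.61])  # Gy
delta_table = d_measured - d_model               # drives the ΔDRT correction

# Steps 2-3: the ray-tracing term reproduces the tabulated difference,
# and the hybrid dose is the sum of both parts: D = D_model + D_ΔDRT.
d_delta_drt = delta_table                        # trivial stand-in for ΔDRT
d_hybrid = d_model + d_delta_drt
```

On the commissioning geometry the hybrid dose reproduces the measurement exactly; the point of the real ΔDRT calculator is to carry that correction over to other geometries.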
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associating assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods, and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern.
Simulation and analysis of main steam control system based on heat transfer calculation
NASA Astrophysics Data System (ADS)
Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai
2018-05-01
In this paper, the 300 MW boiler of a thermal power plant was studied. Matlab was used to write a program for the heat-transfer calculation between the main steam and the boiler flue gas, and the amount of spray water needed to keep the main steam at its target temperature was calculated. The heat-transfer calculation program was then incorporated into a Simulink simulation platform combining multiple-model switching control with the heat-transfer calculation. The results show that the multiple-model switching control system based on the heat-transfer calculation not only overcomes the large inertia and large hysteresis of the main steam temperature, but also adapts to changes in boiler load.
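The core of such a scheme is a steady-state energy balance that converts the main-steam enthalpy (or temperature) error into a spray-water flow. A minimal illustrative sketch, with made-up enthalpy values rather than the paper's boiler data:

```python
# Attemperator energy-balance sketch (illustrative values, not from the paper):
# how much spray water brings superheated steam back to its target enthalpy.
def spray_water_flow(m_steam: float, h_in: float,
                     h_target: float, h_water: float) -> float:
    """Spray-water mass flow (kg/s) from a steady-state mass/energy balance:
    m_steam*h_in + m_w*h_water = (m_steam + m_w)*h_target."""
    return m_steam * (h_in - h_target) / (h_target - h_water)

# Hypothetical operating point: 250 kg/s of steam slightly above target enthalpy
m_w = spray_water_flow(m_steam=250.0,
                       h_in=3480e3,      # J/kg, steam before attemperation
                       h_target=3450e3,  # J/kg, target main-steam enthalpy
                       h_water=700e3)    # J/kg, spray-water enthalpy
```

A control system would recompute this flow each cycle as the heat-transfer calculation updates the predicted steam enthalpy.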
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
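The NEMA integral uniformity underlying these comparisons is the peak-to-peak count spread normalized by the sum, evaluated over the chosen FOV. A simplified sketch, omitting NEMA's 64 × 64 rebinning and nine-point smoothing steps, with illustrative flood-image values:

```python
import numpy as np

def integral_uniformity(counts: np.ndarray) -> float:
    """NEMA integral uniformity (%) over the supplied field of view:
    100 * (max - min) / (max + min)."""
    cmax, cmin = counts.max(), counts.min()
    return 100.0 * (cmax - cmin) / (cmax + cmin)

# Illustrative 3x3 patch of a flood image restricted to one FOV
# (a real useful-FOV/central-FOV analysis would mask the full image).
flood = np.array([[100, 102,  98],
                  [101,  99, 100],
                  [103,  97, 100]], dtype=float)
iu = integral_uniformity(flood)  # (103-97)/(103+97) * 100 = 3.0 %
```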
NASA Astrophysics Data System (ADS)
Nomura, Kazuya; Hoshino, Ryota; Hoshiba, Yasuhiro; Danilov, Victor I.; Kurita, Noriyuki
2013-04-01
We investigated the transition states (TS) between the wobble guanine-thymine (wG-T) and tautomeric G-T base-pairs, as well as Br-containing base-pairs, by MP2 and density functional theory (DFT) calculations. The TS obtained between wG-T and G*-T (the asterisk denotes the enol form of a base) is different from the TS obtained by a previous DFT calculation. The activation energy (17.9 kcal/mol) evaluated by our calculation is significantly smaller than that (39.21 kcal/mol) obtained by the previous calculation, indicating that our TS is more favorable. In contrast, the TS and activation energy obtained between wG-T and G-T* are similar to those of the previous DFT calculation. We furthermore found that the activation energy between wG-BrU and tautomeric G-BrU is smaller than that between wG-T and tautomeric G-T. This result indicates that replacing the CH3 group of T with Br increases the probability of the transition reaction producing the enol-form G* and T* bases. Because G* prefers to bind to T rather than to C, and T* prefers G rather than A, our calculated results reveal that the spontaneous mutation from C to T or from A to G is accelerated by the introduction of the wG-BrU base-pair.
Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects
NASA Astrophysics Data System (ADS)
Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad
2017-12-01
One of the first tasks in starting a construction project is to estimate the total cost of building the project. In Indonesia there are several standards used to calculate the cost estimate of a project; one of them is based on the Ministerial Regulation of Public Work No. 11/2013. In practice, however, contractors often have their own cost estimates based on their own calculations. This research aimed to compare the total construction project cost calculated according to the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects were used as case studies to compare the results: a 4-storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java), built by two different contractors. The cost estimates from both contractors' calculations were compared to those based on the Ministerial Regulation of Public Work No. 11/2013. It was found that the two calculations differed by around 1.80%-3.03% in total cost, with the estimate based on the Ministerial Regulation being higher than the contractors' calculations.
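The comparison itself reduces to a relative difference between the two estimates. A minimal sketch with illustrative costs (the 3% figure produced below is an example, not one of the paper's case-study values):

```python
def percent_difference(regulation_cost: float, contractor_cost: float) -> float:
    """Relative difference (%) of the regulation-based estimate over the
    contractor's estimate."""
    return 100.0 * (regulation_cost - contractor_cost) / contractor_cost

# Illustrative numbers only; the paper reports differences of ~1.80%-3.03%.
diff = percent_difference(regulation_cost=10_300_000_000,   # IDR
                          contractor_cost=10_000_000_000)   # IDR
```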
Xie, Bing; Nguyen, Trung Hai; Minh, David D. L.
2017-01-01
We demonstrate the feasibility of estimating protein-ligand binding free energies using multiple rigid receptor configurations. Based on T4 lysozyme snapshots extracted from six alchemical binding free energy calculations with a flexible receptor, binding free energies were estimated for a total of 141 ligands. For 24 ligands, the calculations reproduced flexible-receptor estimates with a correlation coefficient of 0.90 and a root mean square error of 1.59 kcal/mol. The accuracy of calculations based on Poisson-Boltzmann/Surface Area implicit solvent was comparable to previously reported free energy calculations. PMID:28430432
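The two summary statistics quoted above, the Pearson correlation coefficient and the root mean square error, can be computed directly. A self-contained sketch using hypothetical rigid- versus flexible-receptor estimates, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def rmse(xs, ys):
    """Root mean square error between paired estimates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Hypothetical binding free energies in kcal/mol, for illustration only
rigid = [-5.1, -3.8, -6.2, -4.4]      # rigid-receptor estimates
flexible = [-5.0, -4.1, -6.0, -4.6]   # flexible-receptor estimates
r = pearson_r(rigid, flexible)
err = rmse(rigid, flexible)
```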
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken-nodes cost vector (MBNCV), is proposed to address the shortcomings of current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying loops in the dependency relationships of the relays, potentially leading to more iterative calculation workload in the setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.
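The underlying graph problem, untying loops in the relays' dependency relationships so that a sequential setting calculation exists, can be illustrated with a depth-first search that collects back edges as one possible break set. This is a generic sketch of loop-breaking, not the MBNCV algorithm itself:

```python
# Minimal loop-breaking sketch for a directed dependency graph, the kind of
# problem MBPS/MBNCV methods address: DFS back edges are edges that must be
# "broken" before a sequential (acyclic) setting calculation order exists.
def back_edges(graph):
    """Return the set of back edges found by DFS (one possible break set)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    broken = set()

    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:        # edge closes a cycle: break it
                broken.add((u, v))
            elif color[v] == WHITE:
                dfs(v)
        color[u] = BLACK

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return broken

# Hypothetical relay dependency graph: a 3-node loop plus a tail;
# exactly one edge must be broken to order the setting calculations.
g = {1: [2], 2: [3], 3: [1, 4], 4: []}
edges = back_edges(g)
```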
SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.
Yuan, Y; Duan, J; Popple, R; Brezovich, I
2012-06-01
To evaluate the accuracy of dose calculation for intensity-modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung, and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by the percentage error for ion chamber dose and by the γ>1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate was significantly reduced (within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning.
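For context, the gamma analysis referenced above scores each reference point by the minimum combined dose-difference/distance-to-agreement metric over the evaluated distribution; a point fails when γ > 1. A minimal 1-D sketch with illustrative dose values, using the 3%/3 mm criteria from the abstract:

```python
import math

def gamma_index(ref_dose, ref_pos, eval_doses, eval_positions,
                dd=0.03, dta=3.0):
    """1-D gamma for one reference point: minimum over evaluated points of
    sqrt((Δdose/(dd*ref))² + (Δdist/dta)²); dd is relative to the reference
    dose, dta is in mm. The point passes the 3%/3 mm test when γ <= 1."""
    best = float("inf")
    for d, x in zip(eval_doses, eval_positions):
        dose_term = (d - ref_dose) / (dd * ref_dose)
        dist_term = (x - ref_pos) / dta
        best = min(best, math.sqrt(dose_term ** 2 + dist_term ** 2))
    return best

# Illustrative measured profile (doses in Gy, positions in mm)
eval_doses = [1.00, 1.01, 1.02]
eval_positions = [0.0, 1.0, 2.0]
g = gamma_index(ref_dose=1.02, ref_pos=0.0,
                eval_doses=eval_doses, eval_positions=eval_positions)
```

A full film analysis repeats this for every pixel and reports the fraction of points with γ > 1 as the failure rate.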
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized with several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is more complex to calculate, but it can ensure that the process indexes reach the standard within the acceptable probability threshold, and it produces no probability mutation at the edge of the design space. The probability-based method is therefore recommended for design space calculation.
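The probability-based method can be sketched as a Monte Carlo loop over a parameter grid: simulate experimental error on top of a process model and keep the operating points whose probability of meeting the standard exceeds the threshold. Everything below (the linear yield model, the 0.02 error SD, the 0.30 standard, the 0.9 threshold) is illustrative, not the paper's model:

```python
import random

def pass_probability(setpoint, n_sim=10_000, sd=0.02, threshold=0.30, seed=0):
    """Probability that a hypothetical yield model meets its standard when
    Gaussian experimental error (standard deviation sd) is simulated on it."""
    rng = random.Random(seed)
    model_yield = 0.25 + 0.2 * setpoint          # stand-in process model
    hits = sum(1 for _ in range(n_sim)
               if model_yield + rng.gauss(0.0, sd) >= threshold)
    return hits / n_sim

# Scan the operating parameter on a 0.02-step grid (as in the abstract) and
# keep points whose pass probability exceeds the acceptable threshold (0.9).
design_space = [round(x * 0.02, 2) for x in range(0, 51)
                if pass_probability(round(x * 0.02, 2)) >= 0.9]
```

Because inclusion is decided by a probability rather than by overlapping mean-response regions, the resulting design space boundary carries an explicit reliability guarantee.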
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. The MSSRR results show that dual-wavelength absorbance ratio analysis can improve the accuracy of water pH calculation in long-term measurements.
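For an indicator such as phenol red, dual-wavelength ratiometric pH calculation typically follows a Henderson-Hasselbalch form on the ratio of the base-form and acid-form absorbance peaks. The constants below are illustrative placeholders, not the paper's calibration (a full treatment also scales by the ratio of molar absorptivities, omitted here):

```python
import math

def ph_from_ratio(abs_base_peak, abs_acid_peak,
                  pka=7.9, r_min=0.05, r_max=12.0):
    """pH from the absorbance ratio R = A(base form)/A(acid form);
    r_min and r_max are the ratios of the fully acidic and fully
    basic films, obtained from calibration buffers in practice."""
    r = abs_base_peak / abs_acid_peak
    return pka + math.log10((r - r_min) / (r_max - r))
```

Using a ratio of two wavelengths cancels common-mode drifts (film thickness, lamp intensity), which is why the ratiometric method holds up better in long-term measurement.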
NASA Astrophysics Data System (ADS)
Wisniewski, H.; Gourdain, P.-A.
2017-10-01
APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters, including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins, and it also speeds up calculations over PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts; SI units were chosen over cgs units because of the gradual shift toward SI within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the equations used to calculate the plasma parameters. The system is intended for undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
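Two of the standard parameters such a calculator reports can be written directly in SI units with temperature in electron-volts, as the abstract describes (our own minimal sketch, not APOLLO's code):

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg

def plasma_frequency(n_e):
    """Electron plasma frequency omega_pe in rad/s; n_e in m^-3."""
    return math.sqrt(n_e * E ** 2 / (EPS0 * M_E))

def debye_length(n_e, t_e_ev):
    """Electron Debye length in m; temperature in eV (k*T = t_e_ev * E)."""
    return math.sqrt(EPS0 * t_e_ev * E / (n_e * E ** 2))
```

For n_e = 10^18 m^-3 and T_e = 1 eV these give roughly 5.6 × 10^10 rad/s and 7.4 μm, matching the familiar formulary values.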
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions capture knowledge about quantitative relations that is necessary for deep analyses and for the production of meaningful reports. The definitions are largely independent of implementation and organization, but no automated procedures exist to facilitate their exchange across organization and implementation boundaries; each organization currently has to map its own business calculations to its analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies. The approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
Data base to compare calculations and observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the Tier 1 methodology for stationary combustion sources in § 98.33(a)(1). (2) Calculate fossil fuel-based...
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the...) may be used to calculate fossil fuel-based CO2 emissions if the respective monitoring and QA/QC...
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the...) may be used to calculate fossil fuel-based CO2 emissions if the respective monitoring and QA/QC...
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the...) may be used to calculate fossil fuel-based CO2 emissions if the respective monitoring and QA/QC...
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the...) may be used to calculate fossil fuel-based CO2 emissions if the respective monitoring and QA/QC...
Lahham, Adnan; Alkbash, Jehad Abu; ALMasri, Hussien
2017-04-20
Theoretical assessments of power density under far-field conditions were used to evaluate environmental electromagnetic field levels from selected GSM900 macrocell base stations in the West Bank and Gaza Strip. Assessments were based on calculating power densities using commercially available software (RF-Map, from Telstra Research Laboratories, Australia). Calculations were carried out for single base stations with multi-antenna systems, and also for multiple base stations with multi-antenna systems, at 1.7 m above ground level. More than 100 power density levels were calculated at different locations around the investigated base stations, including areas accessible to the general public (schools, parks, residential areas, streets and areas around kindergartens). The maximum calculated emission level from a single site was 0.413 μW cm-2, found at Hizma town near Jerusalem. The average maximum power density over all single sites was 0.16 μW cm-2. The power density levels calculated at 100 locations distributed over the West Bank and Gaza were nearly normally distributed, with a peak value of ~0.01% of the limit recommended for the general public by the International Commission on Non-Ionizing Radiation Protection. Comparison between the calculated and experimentally measured maximum power density from a base station showed that the calculations overestimate the measured value by ~27%. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
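On boresight of a single antenna, the far-field estimate underlying such assessments reduces to the free-space relation below (a simplified sketch; RF-Map additionally handles antenna patterns, multiple antennas, and ground reflections, and the 1 kW EIRP figure is a hypothetical example):

```python
import math

def power_density_uw_per_cm2(eirp_w, distance_m):
    """Free-space far-field power density S = EIRP / (4*pi*r^2),
    converted from W/m^2 to uW/cm^2 (1 W/m^2 = 100 uW/cm^2)."""
    return eirp_w / (4.0 * math.pi * distance_m ** 2) * 100.0

# a hypothetical 1 kW EIRP sector antenna viewed from 100 m
s = power_density_uw_per_cm2(1000.0, 100.0)  # ~0.8 uW/cm^2
```

Values of this order sit well below the ICNIRP general-public reference level at 900 MHz, consistent with the small percentages reported in the abstract.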
Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja
2016-12-01
Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available for screening for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children, in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. The objective was to measure differences in diagnostic accuracy when FWs screen for acute malnutrition in the community using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates associated with grade classification were calculated by comparing the grade assigned by the FW with the grade each child should have received based on the same WHO tables. The Cohen kappa coefficient and sensitivity and specificity rates were also calculated and compared for paper-based and calculator grade assignments. Comparing the FWs (N = 14) who completed at least 40 screenings both without and with the calculator, the error rates were 5.5% and 0.7%, respectively (p < .0001). Interrater reliability (κ) increased from .79 to an almost perfect .97 (>.90) after switching to the mobile calculator. Sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
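The agreement statistic quoted above is Cohen's kappa, which corrects observed agreement for chance agreement implied by each rater's marginal frequencies. A minimal implementation for two raters over the same screening cases:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters (lists of category labels):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    cats = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Kappa above 0.90, as reported after the switch to the mobile calculator, is conventionally read as "almost perfect" agreement.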
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resulting web app is powerful, easy to use, and able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a “delivered dose” calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file, taking DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.
Gapsys, Vytautas; de Groot, Bert L
2017-12-12
Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to some degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches a 5.6 kJ/mol average unsigned deviation from experiment, with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculated results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html.
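Alchemical free energy differences of this kind are estimated from sampled end-state energy differences along the mutation pathway. The simplest such estimator is Zwanzig's exponential formula, shown here for illustration (pmx workflows typically use more robust Crooks/BAR-type estimators; the samples and temperature are placeholders):

```python
import math

KT = 2.494  # kT in kJ/mol at ~300 K

def zwanzig_delta_g(delta_u_samples, kt=KT):
    """Forward exponential-averaging (FEP) estimate of the free
    energy difference from samples of U_B - U_A (kJ/mol) drawn
    from equilibrium in state A: dG = -kT ln <exp(-dU/kT)>_A."""
    n = len(delta_u_samples)
    mean_exp = sum(math.exp(-du / kt) for du in delta_u_samples) / n
    return -kt * math.log(mean_exp)
```

Because the exponential average is dominated by rare low-energy overlaps, practical protocols split the mutation into many intermediate states or use bidirectional estimators, which is exactly the machinery the automated setup prepares.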
The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.
ERIC Educational Resources Information Center
Duke, B. J.; O'Leary, Brian
1988-01-01
Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)
National Stormwater Calculator: Low Impact Development ...
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application, organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls by comparing project cost estimates with predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), the configuration of the LID control infrastructure, and other key project- and site-specific variables, including whether the project is being applied as part of a new development or a redevelopment.
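A cost curve of the kind described maps control type and size to a planning-level estimate. The sketch below is deliberately simplified, and all control names and dollar figures are invented placeholders, not EPA numbers:

```python
# Unit costs in USD per square foot of LID control footprint (invented).
UNIT_COST = {"rain_garden": 12.0, "permeable_pavement": 9.0,
             "green_roof": 25.0}
BASE_COST = {"rain_garden": 500.0, "permeable_pavement": 1500.0,
             "green_roof": 4000.0}

def lid_cost_estimate(control, footprint_sqft, new_development=True):
    """Planning-level cost: fixed base + unit cost * size, with a
    hypothetical retrofit surcharge for redevelopment projects."""
    cost = BASE_COST[control] + UNIT_COST[control] * footprint_sqft
    return cost if new_development else cost * 1.25
```

Pairing such an estimate with the predicted runoff reduction for the same sizing is what lets a planner compare controls on a cost-per-performance basis.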
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation takes approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
NASA Astrophysics Data System (ADS)
Wang, Lilie; Ding, George X.
2014-07-01
The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head is modeled, including the multi-leaf collimators. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as those lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between the AAA- and MC-calculated out-of-field dose profiles depends on depth and is generally less than 1% for comparisons in the water phantom and in CT-based patient dose calculations for static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice because the out-of-field doses are very low relative to the target dose.
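The dose-volume histograms used to gauge clinical impact are straightforward to compute from voxel doses; a minimal cumulative-DVH sketch, assuming voxels of equal volume:

```python
import numpy as np

def cumulative_dvh(organ_doses, dose_levels):
    """Fraction of the structure volume receiving at least each dose
    level (cumulative DVH), assuming equal voxel volumes."""
    d = np.asarray(organ_doses, dtype=float)
    return np.array([(d >= level).mean() for level in dose_levels])
```

Because out-of-field organ doses are a few percent of the prescription at most, even a sizeable relative error in that tail shifts the DVH only slightly, which is the practical point the abstract makes.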
Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.
2010-01-01
Purpose An increasing number of parents and practitioners use the Internet for health-related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet-based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet-based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet-based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550
Moulton, Haley; Tosteson, Tor D; Zhao, Wenyan; Pearson, Loretta; Mycek, Kristina; Scherer, Emily; Weinstein, James N; Pearson, Adam; Abdu, William; Schwarz, Susan; Kelly, Michael; McGuire, Kevin; Milam, Alden; Lurie, Jon D
2018-06-05
Prospective evaluation of an informational web-based calculator for communicating estimates of personalized treatment outcomes. To evaluate the usability, effectiveness in communicating benefits and risks, and impact on decision quality of a calculator tool for patients with intervertebral disc herniations, spinal stenosis, and degenerative spondylolisthesis who are deciding between surgical and non-surgical treatments. The decision to have back surgery is preference-sensitive and warrants shared decision-making. However, more patient-specific, individualized tools for presenting clinical evidence on treatment outcomes are needed. Using Spine Patient Outcomes Research Trial (SPORT) data, prediction models were designed and integrated into a web-based calculator tool: http://spinesurgerycalc.dartmouth.edu/calc/. Consumer Reports subscribers with back-related pain were invited to use the calculator via email, and patient participants were recruited to use the calculator in a prospective manner following an initial appointment at participating spine centers. Participants completed questionnaires before and after using the calculator. We randomly assigned previously validated questions that tested knowledge about the treatment options to be asked either before or after viewing the calculator. 1,256 Consumer Reports subscribers and 68 patient participants completed the calculator and questionnaires. Knowledge scores were higher in the post-calculator group compared to the pre-calculator group, indicating that calculator usage successfully informed users. Decisional conflict was lower when measured following calculator use, suggesting the calculator was beneficial in the decision-making process. Participants generally found the tool helpful and easy to use. While the calculator is not a comprehensive decision aid, it does focus on communicating individualized risks and benefits for treatment options. 
Moreover, it appears to be helpful in achieving the goals of more traditional shared decision-making tools. It not only improved knowledge scores but also improved other aspects of decision quality.
Nalichowski, Adrian; Burmeister, Jay
2013-07-01
To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans using the new TomoTherapy graphic processing unit (GPU) based dose engine and the CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning treatment volume (PTV) included all bones from head to mid femur except the upper extremities. Evaluated organs at risk (OARs) consisted of lung, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint was selected to ensure that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization and dose calculation times were compared between the two systems, and the treatment delivery times were also evaluated. The results showed very good dosimetric agreement between the GPU- and CPU-calculated plans for all evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A gamma analysis (3%, 3 mm) of each GPU plan against the baseline CPU plan resulted in over 90% of calculated voxels satisfying the Γ < 1 criterion; averaged over all plans, 97% of voxels met the criterion. In terms of dose optimization/calculation efficiency, there was a 20-fold reduction in planning time with the new GPU system: the average optimization/dose calculation time was 579 min with the traditional CPU/cluster based system vs 26.8 min with the GPU based system. There was no difference in the calculated treatment delivery time per fraction; beam-on time varied with field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation.
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields of the multi-leaf collimator system recently introduced with the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this work is to develop a GPU-based fast collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from a CyberKnife system. Results: The differences between measured and calculated TMRs are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; the results showed better dose calculation accuracy than the Ray-tracing algorithm, relative to the Monte Carlo method, for heterogeneous cases. Regarding calculation time, one beam takes several seconds, depending on collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system, which was shown to be efficient and accurate for clinical purposes and can be easily implemented in the TPS.
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. The area under the receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and, for left colectomy, the colon leakage score (CLS) calculator. Commercially available artificial-intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and, for left colectomy, the CLS calculator (AUROC 0.96 vs 0.80). Artificial-intelligence-based predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial-intelligence-based analytics for risk prediction is warranted.
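The AUROC figures compared above have a simple rank interpretation: the probability that a randomly chosen leak case receives a higher risk score than a randomly chosen non-leak case. A brute-force Mann-Whitney sketch:

```python
def auroc(scores_positive, scores_negative):
    """Mann-Whitney form of the area under the ROC curve; ties
    between a positive and a negative score count one half."""
    wins = 0.0
    for p in scores_positive:
        for q in scores_negative:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))
```

On this scale 0.5 is chance-level discrimination, which puts the reported 0.58 for ACS NSQIP only slightly above chance and 0.96 for the CLS comparison near-perfect separation.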
40 CFR 98.454 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... using measurements and/or engineering assessments or calculations based on chemical engineering principles or physical or chemical laws or properties. Such assessments or calculations may be based on, as...
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that nurses conduct daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution for several reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful in nursing education. Finally, mathematical drill games appear to improve students' attitudes toward mathematics. The aim of this article is to discuss common challenges of medication calculation skills in nurse education and to highlight the potential role of digital game-based learning in this area.
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method based on pre-computed surfaces for determining the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods are difficult to apply to DFIG short circuit calculation in engineering practice because of their complexity. The proposed method constructs surfaces describing how the short circuit current changes with the calculating impedance and the open circuit voltage, and derives the short circuit currents by taking into account the rotor excitation and the crowbar activation time. Finally, pre-computed surfaces of the short circuit current at different times were established, and a procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
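To illustrate the pre-computed-surface idea, the sketch below looks up a short circuit current I(Z, U) by bilinear interpolation on a small grid. This is a generic interpolation sketch, not the paper's code: the axis names, grid values, and function names are hypothetical placeholders.

```python
# Sketch (assumed setup, not the paper's implementation): interpolate a
# pre-computed short-circuit-current surface I(Z, U), where Z is the
# calculating impedance and U the open-circuit voltage.
from bisect import bisect_right

def bilinear(z_axis, u_axis, surface, z, u):
    """Bilinearly interpolate surface[i][j] = I(z_axis[i], u_axis[j]) at (z, u)."""
    # locate the grid cell containing the query point (clamped to the grid)
    i = min(max(bisect_right(z_axis, z) - 1, 0), len(z_axis) - 2)
    j = min(max(bisect_right(u_axis, u) - 1, 0), len(u_axis) - 2)
    tz = (z - z_axis[i]) / (z_axis[i + 1] - z_axis[i])
    tu = (u - u_axis[j]) / (u_axis[j + 1] - u_axis[j])
    return ((1 - tz) * (1 - tu) * surface[i][j]
            + tz * (1 - tu) * surface[i + 1][j]
            + (1 - tz) * tu * surface[i][j + 1]
            + tz * tu * surface[i + 1][j + 1])

# Illustrative 2x2 surface: current rises with voltage, falls with impedance.
z_axis = [0.1, 0.2]          # per-unit calculating impedance (made-up values)
u_axis = [0.5, 1.0]          # per-unit open-circuit voltage (made-up values)
surface = [[5.0, 10.0],      # I at z = 0.1
           [2.5, 5.0]]       # I at z = 0.2

print(bilinear(z_axis, u_axis, surface, 0.15, 0.75))  # centre of the cell
```

In practice one surface would be tabulated per time instant of interest (to capture the crowbar activation), and the lookup replaces the complex transient calculation.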
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves is often required to obtain precise results, and even with state-of-the-art techniques the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup in eigenstate evaluation is achieved without losing accuracy.
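The renormalization of high-energy bases can be sketched with Löwdin partitioning, a standard downfolding technique that is consistent with, though not necessarily identical to, the procedure described above. The 3x3 Hamiltonian below is an invented toy example with arbitrary units, not data from the paper.

```python
# Toy Lowdin downfolding: fold one high-energy basis state into a 2x2
# effective Hamiltonian H_eff(E) = H_ll + H_lh (E - H_hh)^(-1) H_hl, iterated
# to self-consistency so the reduced matrix reproduces a full-matrix eigenvalue.
import math

def eig2(m):
    """Eigenvalues (low, high) of a symmetric 2x2 matrix."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    t = 0.5 * (a + d)
    r = math.sqrt((0.5 * (a - d)) ** 2 + b * b)
    return t - r, t + r

# Full Hamiltonian: two low-energy states (near 0 and 1) weakly coupled to one
# high-energy state at 10 (illustrative numbers only).
H = [[0.0, 0.3, 0.2],
     [0.3, 1.0, 0.2],
     [0.2, 0.2, 10.0]]

def h_eff(E):
    """2x2 renormalized Hamiltonian at trial energy E."""
    g = 1.0 / (E - H[2][2])  # scalar resolvent of the high-energy block
    return [[H[i][j] + H[i][2] * g * H[2][j] for j in range(2)] for i in range(2)]

# Fixed-point iteration: the eigenvalue of H_eff evaluated at its own energy
# is an exact eigenvalue of the full 3x3 matrix.
E = 0.0
for _ in range(50):
    E = eig2(h_eff(E))[0]
print(round(E, 6))  # converged lowest eigenvalue of the reduced problem
```

Because the high-energy state is far from the low-energy window, the energy dependence of the resolvent is weak and the iteration converges in a few steps; this is the regime in which downfolding pays off.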
[Raman, FTIR spectra and normal mode analysis of acetanilide].
Liang, Hui-Qin; Tao, Ya-Ping; Han, Li-Gang; Han, Yun-Xia; Mo, Yu-Jun
2012-10-01
The Raman and FTIR spectra of acetanilide (ACN) were measured experimentally in the regions of 3 500-50 and 3 500-600 cm(-1), respectively. The equilibrium geometry and vibration frequencies of ACN were calculated with the density functional theory (DFT) method (B3LYP/6-311G(d, p)). The results showed that the calculated molecular structure parameters are in good agreement with a previous report and better than those calculated with the 6-31G(d) basis set, and that the calculated frequencies agree well with the experimental ones. The potential energy distribution of each frequency was worked out by normal mode analysis, and on this basis a detailed and accurate vibration frequency assignment of ACN was obtained.
NASA Astrophysics Data System (ADS)
Behzadi, Hadi; Roonasi, Payman; Assle taghipour, Khatoon; van der Spoel, David; Manzetti, Sergio
2015-07-01
Quantum chemical calculations at the DFT/B3LYP level of theory were carried out on seven quinoxaline compounds that have been synthesized as anti-Mycobacterium tuberculosis agents. Three conformers were optimized for each compound, and the lowest-energy structure was identified and used in further calculations. The electronic properties, including EHOMO, ELUMO and related parameters, as well as the electron density around the oxygen and nitrogen atoms, were calculated for each compound. The relationship between the calculated electronic parameters and the biological activity of the studied compounds was investigated. Six similar quinoxaline derivatives with potentially greater drug activity were suggested based on the calculated electronic descriptors. A mechanism was proposed and discussed based on the calculated electronic parameters and bond dissociation energies.
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... may use fuel economy data from tests conducted on these vehicle configuration(s) at high altitude to...) Calculate the city, highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests...
NASA Technical Reports Server (NTRS)
Homan, D. J.
1977-01-01
A computer program written to calculate the proximity aerodynamic force and moment coefficients of the Orbiter/Shuttle Carrier Aircraft (SCA) vehicles based on flight instrumentation is described. The ground reduced aerodynamic coefficients and instrumentation errors (GRACIE) program was developed as a tool to aid in flight test verification of the Orbiter/SCA separation aerodynamic data base. The program calculates the force and moment coefficients of each vehicle in proximity to the other, using the load measurement system data, flight instrumentation data and the vehicle mass properties. The uncertainty in each coefficient is determined, based on the quoted instrumentation accuracies. A subroutine manipulates the Orbiter/747 Carrier Separation Aerodynamic Data Book to calculate a comparable set of predicted coefficients for comparison to the calculated flight test data.
Density functional theory calculations of III-N based semiconductors with mBJLDA
NASA Astrophysics Data System (ADS)
Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi
2017-02-01
In this work, we present first-principles calculations based on the full-potential linearized augmented plane-wave method (FP-LAPW) of the structural and electronic properties of III-nitride semiconductors such as GaN, AlN, and InN in the zinc-blende cubic structure. First-principles calculations using the local density approximation (LDA) and the generalized gradient approximation (GGA) underestimate the band gap. We apply the modified Becke-Johnson local density approximation (mBJLDA) potential, which combines the modified Becke-Johnson exchange potential with the LDA correlation potential, to obtain band gap results in better agreement with experiment. We compared various exchange-correlation potentials (LSDA, GGA, HSE, and mBJLDA) in determining the band gaps and structural properties of these semiconductors, and we show that the mBJLDA potential gives the best agreement with experimental band gap data for III-nitride semiconductors.
Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I
2013-01-01
To elucidate details of DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one of the DNA bases and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio MP2/6-31G(d, p) quantum mechanics (QM) method were compared with one another and with experimental data. The calculations demonstrated qualitative agreement between the geometry characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base, forming hydrogen bonds with both. Nevertheless, the relative depth of some minima and the peculiarities of the mutual water-base positions in these minima depend on the method used. The analysis revealed that some differences between the results of the different methods are insignificant for the description of DNA hydration, while others are important. The MM calculations quantitatively reproduce all the experimental data on the enthalpies of complex formation of a single water molecule with the set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in crystals of DNA duplex fragments, while some of these data cannot be rationalized by the QM calculations.
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Preliminary calculations related to the accident at Three Mile Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchner, W.L.; Stevenson, M.G.
This report discusses preliminary studies of the Three Mile Island Unit 2 (TMI-2) accident based on available methods and data. The work reported includes: (1) a TRAC base case calculation out to 3 hours into the accident sequence; (2) TRAC parametric calculations, which are the same as the base case except for a single hypothetical change in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident; and (3) estimates of fuel rod cladding failure, cladding oxidation due to zirconium metal-steam reactions, hydrogen release due to cladding oxidation, cladding ballooning, cladding embrittlement, and subsequent cladding breakup, based on TRAC-calculated cladding temperatures and system pressures. Some conclusions of this work are: the TRAC base case accident calculation agrees very well with known system conditions to nearly 3 hours into the accident; the parametric calculations indicate that loss of core cooling was most influenced by the throttling of HPI flows, given the accident initiating events and the pressurizer electromagnetic-operated valve (EMOV) failing to close as designed; failure of nearly all the rods and gaseous fission product release from the failed rods is predicted to have occurred at about 2 hours and 30 minutes; and cladding oxidation (zirconium-steam reaction) up to 3 hours resulted in the production of approximately 40 kilograms of hydrogen.
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA method.
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
Methane on Mars: Thermodynamic Equilibrium and Photochemical Calculations
NASA Technical Reports Server (NTRS)
Levine, J. S.; Summers, M. E.; Ewell, M.
2010-01-01
The detection of methane (CH4) in the atmosphere of Mars by Mars Express and Earth-based spectroscopy is very surprising, very puzzling, and very intriguing. On Earth, about 90% of atmospheric methane is produced by living systems. A major question concerning methane on Mars is its origin - biological or geological. Thermodynamic equilibrium calculations indicate that methane cannot be produced by atmospheric chemical/photochemical reactions. Thermodynamic equilibrium calculations for three gases, methane, ammonia (NH3) and nitrous oxide (N2O), in the Earth's atmosphere are summarized in Table 1. The calculations indicate that these three gases should not exist in the Earth's atmosphere. Yet they do, with methane, ammonia and nitrous oxide enhanced 139, 50 and 12 orders of magnitude above their calculated thermodynamic equilibrium concentrations due to the impact of life! Thermodynamic equilibrium calculations have been performed for the same three gases in the atmosphere of Mars based on the assumed composition of the Mars atmosphere shown in Table 2. The calculated thermodynamic equilibrium concentrations of the same three gases in the atmosphere of Mars are shown in Table 3. Clearly, based on thermodynamic equilibrium calculations, methane should not be present in the atmosphere of Mars, yet it is, in concentrations approaching 30 ppbv from three distinct regions on Mars.
NASA Astrophysics Data System (ADS)
Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur
2017-06-01
In this research, the lung cancer target volume was calculated from computed tomography (CT) thorax images. The target volume was calculated for use in a radiotherapy treatment planning system. The target volume calculation comprises the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was calculated by adding the target areas on each slice and then multiplying the result by the slice thickness. Areas were calculated using digital image processing techniques with an active contour segmentation method, whose contours yield the target volume. The calculated volumes were 577.2 cm3 for GTV, 769.9 cm3 for CTV, 877.8 cm3 for PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for right OAR 2, and 1,597 cm3 for left OAR 2. These values indicate that the image processing techniques developed can be used to calculate the lung cancer target volume from CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
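The slice-summation step described above can be sketched in a few lines. This is a generic illustration, not the study's code: the contour coordinates, areas, and slice thickness below are made-up numbers, not patient data.

```python
# Sketch of the volume calculation: area of each delineated contour (shoelace
# formula on the segmentation polygon), summed over slices and multiplied by
# the slice thickness. All numbers are illustrative placeholders.

def polygon_area(points):
    """Shoelace area of a closed contour given as (x, y) vertices."""
    s = 0.0
    n = len(points)
    for k in range(n):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def volume_from_slices(areas_cm2, thickness_cm):
    """Target volume (cm^3) from per-slice contour areas (cm^2)."""
    return sum(areas_cm2) * thickness_cm

# Hypothetical per-slice contour areas (cm^2) and a 0.5 cm slice thickness.
areas = [12.5, 14.0, 15.5, 14.0, 11.0]
print(volume_from_slices(areas, 0.5))   # total volume in cm^3

# The areas themselves would come from the active-contour polygons, e.g.:
print(polygon_area([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]))  # unit square
```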
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-21
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.
Hung, Chung-Wen; Hsu, Wen-Ting
2018-05-23
Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. The Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirements for different payload lengths and AES encryption types are analyzed. These types include software-based AES-CB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirements and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that hardware-based AES performs better than software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device must consider security, encryption calculation requirements, and low power consumption at the same time. However, if the IoT device prioritizes lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
Shin, Eun-Seok; Garcia-Garcia, Hector M; Garg, Scot; Serruys, Patrick W
2011-04-01
Although percent plaque components based on plaque-based measurement have traditionally been used in previous studies, the impact of vessel-based measurement of percent plaque components has yet to be studied. The purpose of this study was therefore to correlate percent plaque components derived by plaque- and vessel-based measurement using intravascular ultrasound virtual histology (IVUS-VH). The patient cohort comprised 206 patients with de novo coronary artery lesions who were imaged with IVUS-VH. Ages ranged from 35 to 88 years, and 124 patients were male. Whole pullback analysis was used to calculate plaque volume, vessel volume, and absolute and percent volumes of fibrous, fibrofatty, necrotic core, and dense calcium tissue. The plaque and vessel volumes were well correlated (r = 0.893, P < 0.001). There was a strong correlation between percent plaque component volumes calculated by plaque volume and those calculated by vessel volume (fibrous: r = 0.927, P < 0.001; fibrofatty: r = 0.972, P < 0.001; necrotic core: r = 0.964, P < 0.001; dense calcium: r = 0.980, P < 0.001). Plaque and vessel volumes correlated well with the overall plaque burden. For percent plaque component volume, plaque-based measurement was also highly correlated with vessel-based measurement. Therefore, the percent plaque component volume calculated by vessel volume could be used instead of the conventional percent plaque component volume calculated by plaque volume.
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and structural configuration evolution of glasses were studied. The degrees of freedom given by topological constraint theory were correlated with the configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degrees of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using this approach. The results showed that the approach works well for glass materials and revealed the underlying physics from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that lack periodic order.
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with line-shaped illumination (the 'line-spread function'). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations than the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods are in very good mutual agreement.
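The decoupling idea can be illustrated in one dimension: with line illumination the two transversal directions separate, so the spread function reduces to a 1D Fourier transform of the pupil instead of a 2D (Bessel-function) integral. The sketch below is an assumed Fraunhofer-limit demo with arbitrary wavelength, slit width, and focal length, not the authors' code.

```python
# 1D Fourier-optics sketch: intensity of the line-spread function as the
# squared modulus of the 1D Fourier transform of a uniform pupil, evaluated
# by direct numerical quadrature. All parameters are illustrative.
import cmath
import math

def lsf_1d(aperture_halfwidth, wavelength, x_image, focal, n=2001):
    """|1D Fourier transform of a uniform pupil|^2 at image coordinate x_image."""
    k = 2 * math.pi / wavelength
    du = 2 * aperture_halfwidth / (n - 1)
    acc = 0j
    for m in range(n):
        u = -aperture_halfwidth + m * du        # pupil coordinate
        acc += cmath.exp(-1j * k * u * x_image / focal) * du
    return abs(acc) ** 2

a, lam, f = 1e-3, 500e-9, 0.1                    # 2 mm slit, 500 nm, 100 mm lens
peak = lsf_1d(a, lam, 0.0, f)                    # on-axis intensity
first_zero = lsf_1d(a, lam, lam * f / (2 * a), f)  # analytic first null of sinc^2
print(first_zero / peak)                         # should be ~0
```

The on-axis value equals the squared pupil width and the intensity vanishes at x = λf/(2a), matching the sinc² pattern expected from 1D Fourier optics, which is the separable-direction analogue of the Bessel-based 2D result.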
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
29 CFR 547.1 - Essential requirements for qualifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... in accordance with a definite formula specified in the plan, which formula may be based on one or... formula or method of calculation specified in the plan, which formula or method of calculation is based on...
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
Internal twisting motion dependent conductance of an aperiodic DNA molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiliyanti, Vandan, E-mail: vandan.wiliyanti@ui.ac.id; Yudiarsah, Efta
The influence of the internal twisting motion of base pairs on the conductance of an aperiodic DNA molecule has been studied. A double-stranded DNA molecule with the sequence GCTAGTACGTGACGTAGCTAGGATATGCCTGA on one chain and its complement on the other chain is used. The molecule is modeled with a tight-binding Hamiltonian in which the effects of the twisting motion on the base onsite energies and the base-to-base electron hopping constants are taken into account. The semi-empirical Slater-Koster theory is employed to incorporate the twisting motion effect into the hopping constants. In addition to hopping from one base to another, an electron can also hop from a base to the sugar-phosphate backbone and vice versa. The current flowing through the DNA molecule is calculated with the Landauer–Büttiker formula from the transmission probability, which is calculated using the transfer matrix technique and the scattering matrix method simultaneously. The differential conductance is then calculated from the I-V curve. The calculation results show that in some voltage regions the conductance increases as the twisting frequency increases, but in other regions it decreases with the frequency.
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Rajabi, A; Dabiri, A
2012-01-01
Background: Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of the activity centers' services by the cost objectives, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services calculated with the tariff method is not accurate when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
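The allocation step in the ABC workflow above can be sketched numerically. This is a toy illustration with invented figures, not the study's hospital data: an administrative overhead pool is allocated to two clinical activity centers in proportion to a cost driver, and a unit cost price is then computed.

```python
# Toy ABC allocation sketch (all names and numbers are hypothetical):
# distribute an administrative cost pool by a driver, then compute the
# cost price per service unit for each clinical activity center.

admin_cost = 90_000.0                       # overhead pool to allocate
driver = {"radiology": 30, "ward": 60}      # e.g. staff headcount per center
direct = {"radiology": 120_000.0, "ward": 300_000.0}   # direct costs
services = {"radiology": 6_000, "ward": 12_000}        # service units delivered

total_driver = sum(driver.values())
cost_price = {}
for center in driver:
    allocated = admin_cost * driver[center] / total_driver  # driver-based share
    cost_price[center] = (direct[center] + allocated) / services[center]

print(cost_price)  # unit cost price per center
```

A fixed tariff would ignore the driver-based overhead share, which is exactly the discrepancy the study reports between tariff prices and ABC cost prices.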
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations fo...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206-12 Section 600.206-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST...
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including the axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently incorporates data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a clinical device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications like IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as shown here by the calculation of customized aspheric IOLs.
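The core operation of real (non-paraxial) ray tracing with Snell's law can be sketched as follows. This is a generic vector-form refraction demo at a flat interface with assumed refractive indices, not the model's actual corneal surfaces or code.

```python
# Vector form of Snell's law: refract a unit ray direction at a surface.
# A full IOL calculation traces many rays through measured aspheric surfaces;
# this sketch uses a flat interface and illustrative indices.
import math

def refract(d, normal, n1, n2):
    """Refract unit direction d at a surface with unit normal pointing toward
    the incoming ray. Returns None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, normal))
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                         # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(r * a + (r * cos_i - cos_t) * b for a, b in zip(d, normal))

# Ray hitting an air -> cornea-like medium (n = 1.376) at 30 degrees incidence.
theta_i = math.radians(30.0)
d = (math.sin(theta_i), -math.cos(theta_i))     # travelling downward
t = refract(d, (0.0, 1.0), 1.0, 1.376)
theta_t = math.degrees(math.asin(t[0]))         # Snell: sin(theta_t) = sin(30)/1.376
print(round(theta_t, 3))
```

Tracing each ray exactly through every surface is what lets the model capture astigmatism and spherical aberration that paraxial IOL formulas miss.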
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
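The core idea of the patent's approximate error can be illustrated with a toy linear forward model: evaluate the sum-of-squares mismatch on a random subset of rays and scale it up to the full set. This is a sketch under that assumption; the names `ray_error`, `rays`, and `obs` are illustrative, not from the patent:

```python
import random

def ray_error(x, rays, obs, subset=None):
    """Sum-of-squares data mismatch over all rays, or a scaled
    approximation computed from a subset of ray indices."""
    idx = range(len(rays)) if subset is None else subset
    err = sum((sum(a * xj for a, xj in zip(rays[i], x)) - obs[i]) ** 2
              for i in idx)
    if subset is not None:
        err *= len(rays) / len(subset)  # scale subset estimate to full set
    return err

random.seed(0)
rays = [[random.random() for _ in range(3)] for _ in range(1000)]
x_true = [1.0, -2.0, 0.5]
obs = [sum(a * xt for a, xt in zip(r, x_true)) for r in rays]

x_guess = [0.0, 0.0, 0.0]
full = ray_error(x_guess, rays, obs)
approx = ray_error(x_guess, rays, obs, subset=random.sample(range(1000), 100))
# approx estimates full at roughly 10% of the evaluation cost
```

In a constrained conjugate gradient loop, the cheap approximate error would drive the line search along each conjugate direction.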
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry owing to its superior properties, and measurement of the arc temperature is important for analyzing the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, as was the relationship between the emission coefficient of the 794.8 nm spectral line and temperature. Arc images at the 794.8 nm line were captured with a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen
Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil-kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can be used directly for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans, ranging from simple static fields to real patient treatment plans, were calculated using the new approach and compared either to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose-volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans.
A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
Comparative PV LCOE calculator | Photovoltaic Research | NREL
Use the Comparative Photovoltaic Levelized Cost of Energy Calculator (Comparative PV LCOE Calculator) to calculate the levelized cost of energy (LCOE) for photovoltaic (PV) systems, compare a proposed technology's effect on LCOE to determine whether it is cost-effective, and perform trade-off analyses.
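The LCOE concept behind such calculators reduces to annualized lifetime cost divided by lifetime energy. A simplified sketch (illustrative only; NREL's tool takes far more detailed cost, performance, and financing inputs, and all numbers below are made up):

```python
def crf(rate, years):
    """Capital recovery factor: converts an upfront cost into an
    equivalent annual payment over the system lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, annual_om, annual_kwh, rate=0.06, years=25, degradation=0.005):
    """Simplified LCOE in $/kWh: annualized capital plus O&M,
    divided by average annual energy with linear output degradation."""
    annualized_cost = capex * crf(rate, years) + annual_om
    avg_kwh = annual_kwh * (1 - degradation * (years - 1) / 2)
    return annualized_cost / avg_kwh

# hypothetical 5 kW system: $15,000 installed, $150/yr O&M, 7,500 kWh/yr
value = lcoe(15000, 150, 7500)  # roughly $0.19/kWh for these inputs
```

Comparing two technologies then amounts to evaluating this function with each technology's cost and performance assumptions.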
Calculations of a wideband metamaterial absorber using equivalent medium theory
NASA Astrophysics Data System (ADS)
Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling
2016-08-01
Metamaterial absorbers (MMAs) have drawn increasing attention in many areas because they can absorb electromagnetic (EM) waves with near-unity absorptivity. We demonstrate the design, simulation, experiment and calculation of a wideband MMA based on a double-square-loop (DSL) array loaded with chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption is about 9.1 GHz, and the relative bandwidth is 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is utilized to calculate the absorption of the DSL MMA, and the calculated absorption based on EMT agrees with the simulated and measured results. The EMT-based method provides a new way to analyze the mechanism of MMAs.
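Whatever the calculation route (full-wave simulation or EMT), absorption is ultimately obtained from the scattering parameters as A = 1 − |S11|² − |S21|². A minimal sketch of that bookkeeping (the example S-parameter values are invented, not from this paper):

```python
def absorptivity(s11, s21):
    """Absorption from complex scattering parameters:
    A = 1 - |S11|^2 - |S21|^2 (reflected plus transmitted power removed)."""
    return 1.0 - abs(s11) ** 2 - abs(s21) ** 2

# near-unity absorption: small reflection, metal-backed so no transmission
a = absorptivity(0.1 + 0.05j, 0.0)  # 1 - 0.0125 = 0.9875
```

Sweeping the S-parameters over frequency and applying this formula gives the absorption spectrum from which bandwidth figures such as the 87.1% above are read off.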
Many-body calculations with deuteron based single-particle bases and their associated natural orbits
NASA Astrophysics Data System (ADS)
Puddu, G.
2018-06-01
We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for 10B, 16O and 24Mg employing the bare NNLOopt nucleon-nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy
NASA Astrophysics Data System (ADS)
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-01
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the tolerable biological level for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation is, at present, advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant BHI (i.e. BHI > 1) can be determined by iterative simulation.
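The index itself is a simple ratio sum; the paper's contribution is feeding it PBTK-simulated biomarker levels instead of additivity-based ones. A sketch of the bookkeeping (all concentrations and BEI values below are invented for illustration, not the paper's):

```python
def hazard_index(biomarkers, beis):
    """Biological hazard index: sum of biomarker concentrations over
    their biological exposure indices (BEIs); > 1 means the mixture
    exposure exceeds the tolerable biological level."""
    return sum(c / bei for c, bei in zip(biomarkers, beis))

# hypothetical venous-blood levels for toluene, m-xylene, ethylbenzene
additive = hazard_index([0.4, 1.0, 0.9], [1.0, 1.5, 1.5])
# with competitive metabolic inhibition, a PBTK model would predict
# higher blood levels at high exposures, raising the interactions-based BHI
interacting = hazard_index([0.55, 1.3, 1.2], [1.0, 1.5, 1.5])
```

At low exposures the two biomarker sets converge, which is why the additive and interactions-based BHI values agree there.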
NASA Astrophysics Data System (ADS)
Mikaeilzadeh, L.; Pirgholi, M.; Tavana, A.
2018-05-01
Using the ab initio non-equilibrium Green's function (NEGF) formalism within density functional theory (DFT), we have studied electron transport in the all-Heusler device Co2CrSi/Cu2CrAl/Co2CrSi. Results show that the calculated transmission spectrum is very sensitive to the structural parameters and the interface. We also obtain a range of spacer-layer thicknesses for which the MR effect is optimal. Calculations also show a perfect GMR effect in this device.
Wilkinson, P L
1979-06-01
Assessing and modifying oxygen transport are major parts of ICU patient management. Determination of base excess, blood oxygen saturation and content, dead-space ventilation, and P50 helps in this management. A program is described for determining these variables using a TI-59 programmable calculator and PC-100A printer. Each variable can be calculated independently without running the whole program. The calculator-printer's small size, low cost, and hard-copy printout make it a valuable and versatile tool for calculating physiological variables. The program is easily entered and stored on a magnetic card, and prompts the user to enter the appropriate variables, making it easy to run by untrained personnel.
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.
2018-02-01
Cost allocation in manufacturing, particularly in palm oil mills, is still widely practiced based on estimation, which leads to cost distortion. In addition, the processing times set by the company do not match the actual processing times at each work station. The purpose of this study is therefore to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The activity-based costing method is used in this research to calculate production cost, taking value-added and non-value-added activities into account. The results show processing-time reductions of 35.75% at the weighbridge station, 29.77% at the sorting station, 5.05% at the loading ramp station, and 0.79% at the sterilizer station. The cost of goods manufactured for crude palm oil is IDR 5,236.81/kg by the traditional method, IDR 4,583.37/kg by activity-based costing before implementation of activity improvement, and IDR 4,581.71/kg after implementation of activity improvement. For palm kernel, the corresponding figures are IDR 2,159.50/kg by the traditional method, IDR 4,584.63/kg by activity-based costing before activity improvement, and IDR 4,582.97/kg after activity improvement.
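Activity-based costing replaces estimation with explicit activity rates: pool cost divided by driver volume, then each product billed for the drivers it consumes. A minimal sketch of that mechanic (all figures below are invented, not the mill's data):

```python
def abc_unit_cost(activities, driver_volumes, product_usage, units):
    """Activity-based costing: cost pool / driver volume gives an
    activity rate; a product's unit cost is its driver usage times
    the rates, divided by units produced."""
    rates = {a: cost / driver_volumes[a] for a, cost in activities.items()}
    total = sum(product_usage[a] * rates[a] for a in product_usage)
    return total / units

activities = {"weighbridge": 50_000_000, "sterilizer": 120_000_000}  # IDR/period
driver_volumes = {"weighbridge": 2_000, "sterilizer": 1_500}         # driver counts
usage_cpo = {"weighbridge": 1_600, "sterilizer": 1_300}              # CPO's share
unit_cost = abc_unit_cost(activities, driver_volumes, usage_cpo, units=30_000)
# -> 4800.0 IDR/kg for these illustrative inputs
```

Eliminating a non-value-added activity shrinks its cost pool (or removes it), which is how the activity-improvement step lowers the computed cost per kilogram.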
Materials Data on BaSe (SG:225) by Materials Project
Kristin Persson
2014-11-02
Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations
Materials Data on BaSe (SG:221) by Materials Project
Kristin Persson
2014-11-02
Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations
Ruiz, B C; Tucker, W K; Kirby, R R
1975-01-01
With a desk-top, programmable calculator, it is now possible to do complex, previously time-consuming computations in the blood-gas laboratory. The authors have developed a program with the necessary algorithms for temperature correction of blood gases and calculation of acid-base variables and intrapulmonary shunt. It was necessary to develop formulas for the Po2 temperature-correction coefficient, the oxyhemoglobin-dissociation curve for adults (with the necessary adjustments for fetal blood), and changes in water vapor pressure due to variation in body temperature. Using this program in conjunction with a Monroe 1860-21 statistical programmable calculator, it is possible to temperature-correct pH, Pco2, and Po2. The machine will compute the alveolar-arterial oxygen tension gradient, oxygen saturation (So2), oxygen content (Co2), actual HCO3(-), and a modified base excess. If arterial and mixed venous blood are obtained, the calculator will print out intrapulmonary shunt data (Qs/Qt) and arteriovenous oxygen differences ((a-v)Do2). There is also a formula to compute P50 if pH, Pco2, Po2, and measured So2 from two samples of tonometered blood (one above and one below 50 per cent saturation) are entered into the calculator.
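The shunt computation mentioned above follows the standard shunt equation from oxygen contents. A sketch of those two formulas in modern code (the patient values below are invented for illustration; the original TI/Monroe programs also handled temperature correction, which is omitted here):

```python
def o2_content(hb, sat, po2):
    """Oxygen content (mL O2/dL) = bound (1.34 * Hb * SO2)
    + dissolved (0.003 * PO2)."""
    return 1.34 * hb * sat + 0.003 * po2

def shunt_fraction(cc_o2, ca_o2, cv_o2):
    """Intrapulmonary shunt Qs/Qt from end-capillary, arterial and
    mixed venous oxygen contents: (Cc'O2 - CaO2) / (Cc'O2 - CvO2)."""
    return (cc_o2 - ca_o2) / (cc_o2 - cv_o2)

cc = o2_content(15, 1.00, 600)  # end-capillary blood on 100% O2
ca = o2_content(15, 0.98, 90)   # arterial sample
cv = o2_content(15, 0.73, 40)   # mixed venous sample
qs_qt = shunt_fraction(cc, ca, cv)
```

The arteriovenous oxygen difference printed by the original program is simply `ca - cv` in the same units.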
NASA Astrophysics Data System (ADS)
Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe
2016-04-01
Monte Carlo (MC) methods are acknowledged as the most accurate technique for calculating dose distributions. However, due to their lengthy calculation times, they are difficult to use in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity-modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients: 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo code based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation time from 5 ms per proton to around 5 μs.
Development of a web-based CT dose calculator: WAZA-ARI.
Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M
2011-09-01
A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on the modern techniques for the radiation transport simulation and for software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom), using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for GE LightSpeed 16 was compared with the dose values calculated similarly using MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configuration, demonstrating the significance of the dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.
Holmes, Sean T; Alkan, Fahri; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2016-07-05
29Si and 31P magnetic-shielding tensors in covalent network solids have been evaluated using periodic and cluster-based calculations. The cluster-based computational methodology employs pseudoatoms to reduce the net charge (resulting from missing coordination on the terminal atoms) through valence modification of terminal atoms using bond-valence theory (VMTA/BV). The magnetic-shielding tensors computed with the VMTA/BV method are compared to magnetic-shielding tensors determined with the periodic GIPAW approach. The cluster-based all-electron calculations agree with experiment better than the GIPAW calculations, particularly for predicting absolute magnetic shielding and chemical shifts. The performance of the DFT functionals CA-PZ, PW91, PBE, rPBE, PBEsol, WC, and PBE0 is assessed for the prediction of 29Si and 31P magnetic-shielding constants. Calculations using the hybrid functional PBE0, in combination with the VMTA/BV approach, result in excellent agreement with experiment. © 2016 Wiley Periodicals, Inc.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600.208-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR...
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm(3) and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.
The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
The effects of calculator-based laboratories on standardized test scores
NASA Astrophysics Data System (ADS)
Stevens, Charlotte Bethany Rains
Nationwide, efforts to provide a productive science and math education in today's educational institutions increasingly center on the technology used in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBLs) have become significant devices in the teaching of science and math in many states across the United States. The Texas Instruments graphing calculator and the Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across the country. In Tennessee, however, this type of technology is reportedly not regularly used at the student level in most high school science classrooms, especially in physical science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional versus graphing-calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 tenth- and eleventh-grade physical science students, 101 in a control group and 87 in the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores.
This study focused on a limited population of high school physical science students in the middle Tennessee Putnam County area. The study should be reproduced in various school districts in the state of Tennessee to compare the findings.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC was 12.30-16.00 times faster than the CPU-based SMC. The computation time per beam arrangement for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis
NASA Astrophysics Data System (ADS)
Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro
2017-04-01
The 2011 Great East Japan Earthquake (GEJE) showed that tsunami disasters are not limited to inundation damage in a specific region but may destroy a wide area, causing a major disaster. Evaluating land structures and the damage to them requires highly precise evaluation of three-dimensional fluid motion, an expensive process. Our research goals were thus to develop a STOC-CADMAS system (Arikawa and Tomita, 2016) coupled with structure analysis (Arikawa et al., 2009) that efficiently calculates all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of the breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure by hydrostatic pressure and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the region from the epicenter to shallow water. STOC-IC solves pressure from a Poisson equation to account for shallower, more complex topography, while reducing computation cost by setting the water surface from an equation of continuity; it calculates the area near the port. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to deal with the runup area, with its complex surfaces, overflows, and bores. STR performs the structure analysis, including geotechnical analysis based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Almost all the breakwaters were washed away in the simulation, similar to the actual damage at Kamaishi Bay. REFERENCES T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of J.S.C.E., Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program based on a smartphone application and cognitive load theory. Calculation ability, dosage-calculation-related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group; only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed more self-efficacy for drug dosage calculation than the control group (t=3.82, p<.001). Experimental group students had a higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to metric conversion (t=2.25, p=.027), table dosage calculation (t=2.20, p=.031) and drop rate calculation (t=4.60, p<.001). There was no difference in improvement in anxiety about drug dosage calculation. The mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates. Typically, yield calculation requires a large number of SPICE simulations, and the circuit SPICE simulations account for the largest share of the time in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we further developed an accelerated algorithm to enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
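The baseline the surrogate accelerates is plain Monte Carlo yield estimation: sample process variables, run the performance model (SPICE, or a trained surrogate in its place), and count failures. A toy sketch of that loop (the one-variable "performance model" and threshold are invented for illustration):

```python
import random

def mc_yield(simulate, n, threshold, seed=1):
    """Plain Monte Carlo yield estimate: draw process-variable samples,
    evaluate the performance model, count samples exceeding the failure
    threshold, and return 1 - failure_rate."""
    rng = random.Random(seed)
    fails = sum(simulate(rng.gauss(0.0, 1.0)) > threshold for _ in range(n))
    return 1.0 - fails / n

# toy model: performance degradation grows with |process shift|;
# failure when the shift exceeds 3 sigma, so P(fail) ~ 0.0027
yield_est = mc_yield(lambda v: abs(v), n=100_000, threshold=3.0)
```

Replacing `simulate` with a cheap lasso-trained surrogate of the SPICE response is precisely what makes yield *optimization*, with its many such estimates, affordable.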
Precision gravity studies at Cerro Prieto: a progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grannell, R.B.; Kroll, R.C.; Wyman, R.M.
A third and fourth year of precision gravity data collection and reduction have now been completed at the Cerro Prieto geothermal field. In summary, 66 permanently monumented stations were occupied between December and April of 1979 to 1980 and 1980 to 1981 by a LaCoste and Romberg gravity meter (G300) at least twice, with a minimum of four replicate values obtained each time. Station 20 alternate, a stable base located on Cerro Prieto volcano, was used as the reference base for the third year and all the stations were tied to this base, using four to five hour loops. The field data were reduced to observed gravity values by (1) multiplication with the appropriate calibration factor; (2) removal of calculated tidal effects; (3) calculation of average values at each station, and (4) linear removal of accumulated instrumental drift which remained after carrying out the first three reductions. Following the reduction of values and calculation of gravity differences between individual stations and the base stations, standard deviations were calculated for the averaged occupation values (two to three per station). In addition, pooled variance calculations were carried out to estimate precision for the surveys as a whole.
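The four reduction steps above can be sketched numerically. The readings, calibration factor, and tide model below are invented placeholders; the point is only the order of operations — calibrate, remove tides, correct linear drift from repeated base occupations, then average and difference against the base.

```python
import numpy as np

# Hypothetical meter readings (counter units) at times t (hours); the loop
# returns to the base station, which exposes the linear instrumental drift.
t        = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
readings = np.array([2710.00, 2712.35, 2709.80, 2713.10, 2710.06])
station  = np.array(["base", "s1", "s2", "s3", "base"])

CAL = 1.0242                                   # calibration factor, mGal/unit (assumed)
tide = 0.03 * np.sin(2 * np.pi * t / 12.42)    # toy tidal correction, mGal

# (1) apply calibration, (2) remove tidal effects
g = readings * CAL - tide

# (4) remove linear drift inferred from the repeated base readings
base = station == "base"
drift_rate = (g[base][-1] - g[base][0]) / (t[base][-1] - t[base][0])
g_corr = g - drift_rate * (t - t[base][0])

# (3) average per station and form differences to the base
for s in ["s1", "s2", "s3"]:
    dg = g_corr[station == s].mean() - g_corr[base].mean()
    print(f"{s}: {dg:+.3f} mGal relative to base")
```

After the drift correction, the two base occupations agree exactly, which is the sanity check used in loop-based gravity surveys.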
Through-Flow Calculations in Axial Turbomachinery
1976-10-01
...conditions should be next on the agenda. Authors' response: I think the process is essentially iterative between S1 and S2 solutions. If S1 surfaces...secondary flows in high Mach number situations. Concerning Gelder's approach, I think that your remark is rather optimistic. We use a method based on...my remarks on Gelder's work were based on calculations made by Gelder himself. One or two other people have managed to get the calculation through
Calculations of lattice vibrational mode lifetimes using Jazz: a Python wrapper for LAMMPS
NASA Astrophysics Data System (ADS)
Gao, Y.; Wang, H.; Daw, M. S.
2015-06-01
Jazz is a new Python wrapper for LAMMPS [1], implemented to calculate the lifetimes of vibrational normal modes based on forces calculated with any interatomic potential available in that package. The anharmonic character of the normal modes is analyzed via the Monte Carlo-based moments approximation described in Gao and Daw [2]. It is distributed as open-source software and can be downloaded from http://jazz.sourceforge.net/.
Follett, R. K.; Edgell, D. H.; Froula, D. H.; ...
2017-10-20
Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).
Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B
2011-01-01
Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created from the remaining 5 animals using magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is very much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace law differs substantially from the fiber and cross-fiber stress calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
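The thin-wall versus thick-wall distinction can be made concrete with a spherical idealization. The pressure, radius, and thickness below are assumed illustrative values, and the "thick-wall" expression is the exact equatorial force balance for a sphere, not necessarily the specific modified-Laplace form used in the paper.

```python
P = 12.0   # cavity pressure, kPa (assumed end-diastolic value)
a = 25.0   # inner (endocardial) radius, mm (assumed)
h = 10.0   # wall thickness, mm (assumed)
b = a + h  # outer radius

# Young-Laplace law (thin wall, h << a): sigma = P*a / (2h)
sigma_thin = P * a / (2 * h)

# Force balance on an equatorial cut of a thick-walled sphere gives the
# exact *mean* circumferential stress without the thin-wall assumption:
#   pi * a^2 * P = sigma_mean * pi * (b^2 - a^2)
sigma_mean = P * a**2 / (b**2 - a**2)

print(f"thin-wall (Young-Laplace): {sigma_thin:.2f} kPa")
print(f"thick-wall mean stress:    {sigma_mean:.2f} kPa")
```

For a wall this thick relative to the radius, the thin-wall law overestimates the mean circumferential stress by roughly 20%, which illustrates why the two approaches diverge most where geometry departs from the thin-wall idealization.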
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. 
After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
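The fourth-order Runge-Kutta streamline integration mentioned above can be sketched in two dimensions. The velocity field here is a toy uniform-flow-plus-source model standing in for the panel-code solution; only the RK4 stepping scheme itself reflects the abstract.

```python
import numpy as np

# Toy 2-D velocity field standing in for the panel-code flow solution:
# uniform flow plus a point source at the origin (assumed for illustration).
def velocity(p):
    x, y = p
    r2 = x * x + y * y
    return np.array([1.0 + 0.2 * x / r2, 0.2 * y / r2])

# Classical fourth-order Runge-Kutta step along the local velocity.
def rk4_step(p, ds):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * ds * k1)
    k3 = velocity(p + 0.5 * ds * k2)
    k4 = velocity(p + ds * k3)
    return p + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([-3.0, 0.05])     # seed point upstream of the body
path = [p]
for _ in range(100):
    p = rk4_step(p, 0.05)
    path.append(p)
path = np.array(path)
print(f"streamline end point: ({path[-1, 0]:.3f}, {path[-1, 1]:.3f})")
```

The streamline starts just above the symmetry axis and is deflected around the source, mimicking how surface streamlines divert around a stagnation region.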
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
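The comparison of percentile-bootstrap and classical t intervals can be sketched on a synthetic skewed sample. The lognormal "plot volumes" and the tabulated t quantile are assumptions for illustration, not the study's mapped populations.

```python
import numpy as np

rng = np.random.default_rng(42)
plot_volumes = rng.lognormal(mean=4.0, sigma=0.5, size=40)  # synthetic 40-plot sample

# Classical t interval for the mean.
n = len(plot_volumes)
xbar, s = plot_volumes.mean(), plot_volumes.std(ddof=1)
t975 = 2.023                      # tabulated t quantile, df = 39
ci_t = (xbar - t975 * s / np.sqrt(n), xbar + t975 * s / np.sqrt(n))

# Percentile bootstrap interval: resample plots with replacement,
# take empirical 2.5% / 97.5% quantiles of the bootstrap means.
boot_means = np.array([rng.choice(plot_volumes, size=n, replace=True).mean()
                       for _ in range(5000)])
ci_boot = tuple(np.percentile(boot_means, [2.5, 97.5]))

print(f"classical t 95% CI: ({ci_t[0]:.1f}, {ci_t[1]:.1f})")
print(f"percentile boot CI: ({ci_boot[0]:.1f}, {ci_boot[1]:.1f})")
```

For skewed forestry data the bootstrap interval is typically asymmetric around the sample mean, which is the kind of behavior such simulation comparisons examine.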
Code of Federal Regulations, 2013 CFR
2013-07-01
... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...
Code of Federal Regulations, 2011 CFR
2011-07-01
... exhaust emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy...
Code of Federal Regulations, 2011 CFR
2011-07-01
..., highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an...
Code of Federal Regulations, 2012 CFR
2012-07-01
... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
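The review's central point — that the "magnitude-based" probabilities are tail areas of an approximate flat-prior posterior rather than something beyond a confidence interval — can be illustrated for two means. The group summaries and the smallest-worthwhile-change threshold below are invented example values.

```python
import math

# Summary data for two groups (assumed example values).
m1, s1, n1 = 25.0, 4.0, 20
m2, s2, n2 = 22.5, 4.0, 20

d = m1 - m2
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# Standard 90% confidence interval for the difference (z approximation).
z95 = 1.645
ci = (d - z95 * se, d + z95 * se)

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Under a flat prior, the "magnitude-based" probabilities reduce to tail
# areas of N(d, se^2) beyond +/- a smallest worthwhile change (SWC).
swc = 1.0  # smallest worthwhile change (assumed)
p_beneficial = 1 - Phi((swc - d) / se)
p_harmful = Phi((-swc - d) / se)

print(f"90% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"P(effect > +{swc}): {p_beneficial:.3f}, P(effect < -{swc}): {p_harmful:.3f}")
```

Both outputs are functions of the same estimate and standard error, so the extra probabilities add no information beyond the interval — which is essentially the review's conclusion.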
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers how to populate the relevant SIEM nodes with objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is needed for accurate security risk assessment of information systems, and is also intended to support real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events occur and on predictions of the magnitude of damage from information security violations. These objective assessments are calculated to increase the reliability of the expert assessments.
Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End
NASA Technical Reports Server (NTRS)
Prokop, Norman; Krasowski, Michael
2013-01-01
This innovation is capable of correlating two analog signals by using an analog-based signal conditioning front end to hard-limit the analog signals through adaptive thresholding into a binary bit stream, then performing the correlation using a Hamming "similarity" calculator function embedded in a one-bit digital correlator (OBDC). By converting the analog signal into a bit stream, the calculation of the correlation function is simplified, and fewer hardware resources are needed. This binary representation allows the hardware to move from a DSP, where instructions are performed serially, into digital logic, where calculations can be performed in parallel, greatly speeding up the computation.
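The one-bit correlation idea can be sketched in software: hard-limit both signals at an adaptive threshold (here simply each signal's mean), then score candidate lags by the fraction of agreeing bits — a Hamming similarity. The signals, noise level, and lag below are invented test data, not hardware specifics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two noisy analog signals; the second is a delayed copy of the first.
n, true_lag = 2048, 37
x = rng.normal(size=n + true_lag)
a = x[:n] + 0.5 * rng.normal(size=n)
b = x[true_lag:true_lag + n] + 0.5 * rng.normal(size=n)

# Hard-limit each signal with an adaptive threshold (its own mean),
# producing the one-bit streams the front end would generate.
bits_a = a > a.mean()
bits_b = b > b.mean()

# Hamming "similarity": fraction of agreeing bits at a trial lag.
def similarity(u, v, lag):
    return np.mean(u[:len(u) - lag] == v[lag:])

best = max(range(128), key=lambda k: similarity(bits_b, bits_a, k))
print(f"estimated lag: {best}")
```

Despite discarding all amplitude information, the one-bit similarity peaks sharply at the true delay, which is why the hardware implementation can get away with simple parallel XNOR/popcount logic.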
NASA Astrophysics Data System (ADS)
Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin
2016-12-01
This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. Its main innovation is combining the accuracy of a numerical method with the efficiency of an engineering method, making aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the flow field is divided into inviscid and viscous regions at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are obtained from a numerical solution of the inviscid flow. The thermodynamic parameters (constant-volume specific heat, constant-pressure specific heat, and the specific heat ratio) are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. Results for a double cone show that, at 0° and 10° angle of attack, the aeroheating calculation based on inviscid boundary layer edge parameters reproduces the experimental data better than the engineering method, and the proposed simulation results for the flight vehicle agree well with viscous numerical results. Hence, this method offers a promising way to avoid the high cost of a full numerical calculation while improving precision.
A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes
2016-06-01
The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). In this paper we present the four new radiation-related submodels RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates the radiative transfer, the submodel AEROPT calculates the aerosol optical properties, the submodel CLOUDOPT calculates the cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, but expanded with additional features. These features comprise, among others, user-friendly and flexibly controllable (by namelists) online radiative forcing calculations via multiple diagnostic calls of the radiation routines. With this, it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations in the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.
NASA Astrophysics Data System (ADS)
Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi
2018-01-01
In this study, we investigated α-Al2O3:V3+, the so-called color-change sapphire, based on first-principles calculations without reference to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structure was estimated from first-principles band structure calculations: geometry optimizations were performed on pure α-Al2O3 and on α-Al2O3 with the impurity V3+ ion using the Cambridge Serial Total Energy Package (CASTEP) code. The effects of energy corrections, such as the configuration dependence correction and the correlation correction, were also investigated in detail. The results reveal that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improves the calculated absorption spectra, and that combining the lattice relaxation effect with the energy corrections further improves the agreement with experiment.
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
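The free-energy perturbation formula underlying all FEP variants is the Zwanzig exponential average, which can be checked on a toy one-dimensional "solute" where the exact answer is known analytically. The potentials and temperature below are illustrative assumptions, not the paper's QM/MM setup.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 0.596  # kcal/mol at ~300 K

# Toy 1-D system: state A is harmonic, so we can sample its Boltzmann
# distribution exactly; state B adds a linear perturbation.
kA = 2.0                                              # kcal/mol/A^2
x = rng.normal(0.0, np.sqrt(kT / kA), size=200_000)   # samples ~ exp(-U_A/kT)

U_A = 0.5 * kA * x**2
U_B = 0.5 * kA * x**2 + 0.8 * x                       # perturbed state

# Zwanzig free-energy perturbation formula:
#   dG = -kT * ln < exp(-(U_B - U_A)/kT) >_A
dG_fep = -kT * np.log(np.mean(np.exp(-(U_B - U_A) / kT)))

# Analytic result for this model (complete the square): dG = -c^2/(2*kA)
dG_exact = -0.8**2 / (2 * kA)
print(f"FEP: {dG_fep:.4f}  exact: {dG_exact:.4f} kcal/mol")
```

The estimator converges to the analytic value here because the two states overlap well; in practice the "thread" and windowing techniques mentioned in the abstract exist precisely to maintain such overlap along a transformation.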
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
NASA Astrophysics Data System (ADS)
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
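The gamma analysis used for the film comparison can be sketched in one dimension. The profiles below are toy Gaussians and the implementation is a brute-force global gamma, not the framework's actual evaluation code; the 2%/2 mm criteria match the abstract.

```python
import numpy as np

def gamma_pass_rate(ref, ref_x, eval_d, eval_x, dose_tol, dist_tol, thresh=0.1):
    """1-D global gamma analysis (simplified sketch of a 2%/2 mm test)."""
    dmax = ref.max()
    passes = []
    for xr, dr in zip(ref_x, ref):
        if dr < thresh * dmax:                 # skip low-dose region
            continue
        g = np.sqrt(((eval_x - xr) / dist_tol) ** 2 +
                    ((eval_d - dr) / (dose_tol * dmax)) ** 2)
        passes.append(g.min() <= 1.0)          # pass if any point has gamma <= 1
    return np.mean(passes)

x = np.linspace(-30, 30, 601)                       # position, mm
ref = np.exp(-0.5 * (x / 10.0) ** 2)                # "measured" profile (toy)
calc = np.exp(-0.5 * ((x - 0.5) / 10.0) ** 2)       # "calculated", shifted 0.5 mm

rate = gamma_pass_rate(ref, x, calc, x, dose_tol=0.02, dist_tol=2.0)
print(f"gamma pass rate: {100 * rate:.1f}%")
```

A 0.5 mm shift passes everywhere under 2%/2 mm criteria, whereas a shift beyond the distance tolerance would start failing points in the gradient regions — the behavior the patient-specific QA comparison relies on.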
Kusano, Maggie; Caldwell, Curtis B
2014-07-01
A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. 
The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
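The three-parameter Archer equation mentioned above has a standard closed form, which can be compared against a single-TVL exponential model. The fit parameters and TVL below are literature-style illustrative values, not the ones derived in this paper.

```python
import numpy as np

# Archer (three-parameter) broad-beam transmission through a barrier:
#   T(x) = [ (1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha ]^(-1/gamma)
def archer(x, alpha, beta, gamma):
    return ((1 + beta / alpha) * np.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

# Illustrative parameters for a medium-energy emitter in lead (assumed).
alpha, beta, gamma = 1.543, -0.4408, 0.4920        # units: mm^-1, mm^-1, -

x = np.array([0.0, 0.5, 1.0, 2.0])                 # barrier thickness, mm
T_archer = archer(x, alpha, beta, gamma)

# Single tenth-value-layer approximation for comparison.
tvl = 0.9                                           # mm (assumed)
T_tvl = 10.0 ** (-x / tvl)

for xi, ta, tt in zip(x, T_archer, T_tvl):
    print(f"x={xi:4.1f} mm  Archer T={ta:.4f}  TVL T={tt:.4f}")
```

The Archer form captures the beam-hardening curvature of real transmission data that a single TVL cannot, which is why the paper fits its spectral calculations to this equation for practical use.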
Predicted phototoxicities of carbon nano-material by quantum mechanical calculations
The purpose of this research is to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on the quantum mechanical (ab initio) calculations on these carbon-based materials and compa...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saygin, H.; Hebert, A.
The calculation of a dilution cross section σ̄_e is the most important step in self-shielding formalisms based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections, based on the formalism of Riemann integration in the resolved energy domain, is proposed. This new method is compared to the generalized Stammler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is assessed using reference results obtained from a fine-group slowing-down code named CESCOL. It is shown that the proposed method performs slightly better than the generalized Stammler approach.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to calculate rapidly diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
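The rescaling idea itself — run one baseline Monte Carlo simulation, then recover reflectance for any absorption coefficient by reweighting the stored photon path lengths — can be sketched without a GPU. The random-walk "simulation" below is a toy stand-in; only the exponential reweighting step reflects the method.

```python
import numpy as np

rng = np.random.default_rng(1)

# One "baseline" Monte Carlo run: record, for each detected photon, its
# total path length inside the tissue (toy random-walk, scattering only).
n_photons = 50_000
n_steps = rng.integers(2, 40, size=n_photons)          # scattering events per photon
path_len = np.array([rng.exponential(0.1, k).sum() for k in n_steps])  # cm

# Rescaling: diffuse reflectance for ANY absorption coefficient mua is a
# reweighting of the stored paths -- no new simulation is needed.
def reflectance(mua):                                  # mua in 1/cm
    return np.mean(np.exp(-mua * path_len))

for mua in (0.1, 1.0, 5.0):
    print(f"mua={mua:4.1f} /cm  ->  relative reflectance {reflectance(mua):.4f}")
```

Because each new optical-property set costs only one vectorized reweighting over the stored paths, the operation parallelizes trivially, which is what makes the GPU implementation in the paper so fast.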
Index cost estimate based BIM method - Computational example for sports fields
NASA Astrophysics Data System (ADS)
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, geometry of construction objects, and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method, using Case Based Reasoning, are presented as well. The article describes local and global similarity measurement and gives an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result.
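The local/global similarity step of Case Based Reasoning can be sketched as a weighted aggregation: each attribute contributes a local similarity, and a weighted average gives the global similarity used to retrieve the most comparable past project. The attributes, weights, ranges, and costs below are invented for illustration, not the paper's database.

```python
# Linear local similarity: 1 at equality, 0 at the attribute's full range.
def local_sim(a, b, value_range):
    return 1.0 - abs(a - b) / value_range

# Global similarity: weighted average of local similarities.
def global_sim(case, query, weights, ranges):
    num = sum(w * local_sim(case[k], query[k], ranges[k])
              for k, w in weights.items())
    return num / sum(weights.values())

# Hypothetical case base of past sports-field projects.
cases = [
    {"area_m2": 1800, "stands": 500,  "cost": 1.9e6},
    {"area_m2": 7500, "stands": 3000, "cost": 6.4e6},
    {"area_m2": 3000, "stands": 1200, "cost": 2.8e6},
]
query   = {"area_m2": 3200, "stands": 1000}
weights = {"area_m2": 0.6, "stands": 0.4}
ranges  = {"area_m2": 10000, "stands": 5000}

best = max(cases, key=lambda c: global_sim(c, query, weights, ranges))
print(f"most similar case cost: {best['cost']:.2e}")
```

The retrieved case's cost (possibly adjusted by the quantity takeoff from the BIM model) then serves as the early-phase index estimate.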
Mitigation of Engine Inlet Distortion Through Adjoint-Based Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Rallabhandi, Sriram; Nielsen, Eric J.; Diskin, Boris
2017-01-01
The adjoint-based design capability in FUN3D is extended to allow efficient gradient-based optimization and design of concepts with highly integrated aero-propulsive systems. A circumferential distortion calculation, along with the derivatives needed to perform adjoint-based design, has been implemented in FUN3D. This newly implemented distortion calculation can be used not only for design but also to drive the existing mesh adaptation process and reduce the error associated with the fan distortion calculation. The design capability is demonstrated by the shape optimization of an in-house aircraft concept equipped with an aft fuselage propulsor. The optimization objective is the minimization of flow distortion at the aerodynamic interface plane of this aft fuselage propulsor.
MR Imaging Based Treatment Planning for Radiotherapy of Prostate Cancer
2008-02-01
Radiotherapy, MR-based treatment planning, dosimetry, Monte Carlo dose verification, prostate cancer, MRI-based DRRs. ...The AcQPlan system Version 5 was used for the study, which is capable of performing dose calculation on both CT and MRI. A four field 3D conformal planning...prostate motion studies for 3DCRT and IMRT of prostate cancer; (2) to investigate and improve the accuracy of MRI-based treatment planning dose calculation
Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S
2012-07-01
The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in radiotherapy centers in our country, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA-TECDOC-1583). A thorax anthropomorphic phantom (002LFC, CIRS Inc.) was used to perform seven tests that simulate the whole chain of external beam TPS use. Doses were measured with ion chambers and the deviations between measured and TPS-calculated doses were reported. This methodology, which employs the same phantom and the same set of test cases, was applied in four different hospitals using five different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based. A total of 84 clinical test-case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with more advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the sophistication of the TPS calculation algorithm. Large deviations were seen for some correction-based algorithms, so sophisticated algorithms are preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.
Evaluation of audit-based performance measures for dental care plans.
Bader, J D; Shugars, D A; White, B A; Rindal, D B
1999-01-01
Although a set of clinical performance measures, i.e., a report card for dental plans, has been designed for use with administrative data, most plans do not have administrative data systems containing the data needed to calculate the measures. Therefore, we evaluated the use of a set of proxy clinical performance measures calculated from data obtained through chart audits. Chart audits were conducted in seven dental programs--three public health clinics, two dental health maintenance organizations (DHMO), and two preferred provider organizations (PPO). In all instances audits were completed by clinical staff who had been trained using telephone consultation and a self-instructional audit manual. The performance measures were calculated for the seven programs, audit reliability was assessed in four programs, and for one program the audit-based proxy measures were compared to the measures calculated using administrative data. The audit-based measures were sensitive to known differences in program performance. The chart audit procedures yielded reasonably reliable data. However, missing data in patient charts rendered the calculation of some measures problematic--namely, caries and periodontal disease assessment and experience. Agreement between administrative and audit-based measures was good for most, but not all, measures in one program. The audit-based proxy measures represent a complex but feasible approach to the calculation of performance measures for those programs lacking robust administrative data systems. However, until charts contain more complete diagnostic information (i.e., periodontal charting and diagnostic codes or reason-for-treatment codes), accurate determination of these aspects of clinical performance will be difficult.
Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1993-01-01
An axisymmetric single-engine Computational Fluid Dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work. The first was to calculate an axisymmetric approximation of the nozzle, plume, and base region flow fields of the S1-C/F1, relate/scale this to flight data, and apply this scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences in F1 and STME plume shear layer development and concentration of combustible gases. This second piece of information was to be input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for 10,000 feet and 57,000 feet altitude at vehicle flight velocity and in stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite rate chemistry model plus a soot burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to an STME plume. Conclusions are made about the validity and status of the analysis and the NLS2 vehicle base thermal environment definition methodology.
Seresht, L. Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad
2014-01-01
Background: Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters under two conditions of receiving VTs calculated using body weight (BW), estimated either from measured height (HBW) or from demi-span-based body weight (DBW). Materials and Methods: This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) I and II patients aged 18-65 years. Standing height and weight were measured, and height was then calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome network formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Results: Forty-nine patients were studied. Demi-span-based body weight and thus VT (DTV) were lower than height-based body weight and VT (HTV) (P = 0.028), particularly in male patients (P = 0.005). A difference was observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Conclusions: Estimated VT based on measured height is higher than that based on demi-span, this difference exists only in females, and this higher VT results in higher airway pressures during mechanical ventilation. PMID:24627845
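The height-based arm of the comparison rests on the ARDSNet predicted-body-weight formula; a minimal sketch, in which the 50/45.5 kg offsets and the 0.91 kg/cm slope are the standard ARDSNet coefficients, while the 6 ml/kg default and the function names are illustrative assumptions (the demi-span-to-height conversion is not reproduced here):

```python
def predicted_body_weight_kg(height_cm, male):
    # ARDSNet predicted body weight: 50 kg (men) or 45.5 kg (women)
    # plus 0.91 kg per cm of height above 152.4 cm (60 inches).
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_ml(height_cm, male, ml_per_kg=6.0):
    # Lung-protective tidal volume, typically 6-8 ml per kg of
    # predicted body weight; the 6 ml/kg default is an assumption here.
    return predicted_body_weight_kg(height_cm, male) * ml_per_kg
```

Feeding this function a demi-span-estimated height rather than a measured height is exactly what produces the DTV-versus-HTV difference studied above.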
The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool
Stephen, Cook; Benjamin, Longo-Mbenza
2013-01-01
AIM: It is difficult for optometrists and general practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service, and data capture is user specific. METHOD: The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator's risk assessment. RESULTS: Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION: Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population, and personnel variables. PMID:23550097
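The reported sensitivity, specificity, and positive predictive value all follow from a standard 2×2 confusion matrix; a minimal sketch with hypothetical counts (none of the numbers in the test below are from the ELGPS data set):

```python
def screening_metrics(tp, fp, tn, fn):
    # Standard screening statistics from the true/false positive and
    # negative counts of a diagnostic 2x2 table.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

The 82% "correlates with the actual diagnosis" figure corresponds to the accuracy entry of such a table.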
Some computer graphical user interfaces in radiation therapy.
Chow, James C L
2016-03-28
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the treatment time calculator, superficial X-ray treatment time calculator (SUPCALC), used in superficial X-ray radiation therapy; (2) the monitor unit calculator, electron monitor unit calculator (EMUC), used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used in generating fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system, DOSCTP, used in the calculation of 3D dose distributions using Monte Carlo simulation; and (5) the monitor unit calculator, photon beam monitor unit calculator (PMUC), used in photon beam radiation therapy. One common feature of these GUIs is that each user-friendly interface is linked to complex formulas and algorithms based on various theories, which the user does not need to understand in detail; the user only needs to input the required information, with help from graphical elements, to produce the desired results. SUPCALC is a superficial radiation treatment time calculator that uses the GUI technique to provide a convenient way for radiation therapists to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to pre-determined dosimetric tables, the clinical user needs only to input the required drawing of the electron field in a computer graphics file format, the prescription dose, and the beam parameters for EMUC to calculate the required monitor units for the electron beam treatment. EMUC is based on a semi-experimental sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates a fluence map produced by a medical linear accelerator.
This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system using computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed in different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy. Moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating the monitor units based on the prescription dose of the patient in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance, and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic .NET or with a MATLAB GUI development tool called GUIDE. In addition, all GUIs were verified and tested using measurements to ensure their accuracy was up to clinically acceptable levels for implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purtov, P.A.; Salikhov, K.M.
1987-09-01
A semiclassical description of the hyperfine interaction (HFI) is applicable to calculating the integral CIDNP effect in weak fields. The calculation has been carried out for radicals with sufficiently numerous magnetically equivalent nuclei (n ≥ 5), in satisfactory agreement with CIDNP calculations based on a quantum-mechanical description of radical-pair spin dynamics.
Effect of genome sequence on the force-induced unzipping of a DNA molecule.
Singh, N; Singh, Y
2006-02-01
We considered a dsDNA polymer in which the distribution of bases is random at the base-pair level but ordered over a length of 18 base pairs, and calculated its force-elongation behaviour in the constant extension ensemble. The unzipping force F(y) vs. extension y is found to have a series of maxima and minima. By changing base pairs at selected places in the molecule we calculated the change in the F(y) curve and found that the change in the value of the force is of the order of a few pN, and that the range of the effect, depending on the temperature, can spread over several base pairs. We have also discussed briefly how to calculate, in the constant force ensemble, a pause or a jump in the extension-time curve from the knowledge of F(y).
Magnetic susceptibilities of actinide 3d-metal intermetallic compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muniz, R.B.; d'Albuquerque e Castro, J.; Troper, A.
1988-04-15
We have numerically calculated the magnetic susceptibilities which appear in the Hartree-Fock instability criterion for actinide 3d transition-metal intermetallic compounds. This calculation is based on a previous tight-binding description of these actinide-based compounds (A. Troper and A. A. Gomes, Phys. Rev. B 34, 6487 (1986)). The parameters of the calculation, which starts from simple tight-binding d and f bands, are (i) occupation numbers, (ii) the ratio of d-f hybridization to d bandwidth, and (iii) electron-electron Coulomb-type interactions.
Symmetric Resonance Charge Exchange Cross Section Based on Impact Parameter Treatment
NASA Technical Reports Server (NTRS)
Omidvar, Kazem; Murphy, Kendrah; Atlas, Robert (Technical Monitor)
2002-01-01
Using a two-state impact parameter approximation, a calculation has been carried out to obtain symmetric resonance charge transfer cross sections between nine ions and their parent atoms or molecules. Calculation is based on a two-dimensional numerical integration. The method is mostly suited for hydrogenic and some closed shell atoms. Good agreement has been obtained with the results of laboratory measurements for the ion-atom pairs H+-H, He+-He, and Ar+-Ar. Several approximations in a similar published calculation have been eliminated.
Software-Based Visual Loan Calculator For Banking Industry
NASA Astrophysics Data System (ADS)
Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A loan calculator for the banking industry is very necessary in the modern-day banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
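The paper does not reproduce its interest formula, so as a hedged illustration, here is the standard amortised monthly-payment calculation such a GUI might wrap (the function names and the choice of amortised rather than simple interest are assumptions):

```python
def monthly_payment(principal, annual_rate, months):
    # Standard amortised loan payment; annual_rate is a fraction
    # (e.g. 0.12 for 12% per year), compounded monthly.
    r = annual_rate / 12.0
    if r == 0.0:
        return principal / months
    return principal * r / (1.0 - (1.0 + r) ** -months)

def total_interest(principal, annual_rate, months):
    # Total interest paid over the life of the loan.
    return monthly_payment(principal, annual_rate, months) * months - principal
```

A GUI front end would simply collect principal, rate, and term from form fields and display these two values.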
Properties of solid and gaseous hydrogen, based upon anisotropic pair interactions
NASA Technical Reports Server (NTRS)
Etters, R. D.; Danilowicz, R.; England, W.
1975-01-01
Properties of H2 are studied on the basis of an analytic anisotropic potential deduced from atomic orbital and perturbation calculations. The low-pressure solid results are based on a spherical average of the anisotropic potential. The ground state energy and the pressure-volume relation are calculated. The metal-insulator phase transition pressure is predicted. Second virial coefficients are calculated for H2 and D2, as is the difference in second virial coefficients between ortho and para H2 and D2.
Functional design specification: NASA form 1510
NASA Technical Reports Server (NTRS)
1979-01-01
The 1510 worksheet used to calculate approved facility project cost estimates is explained. Topics covered include data base considerations, program structure, relationship of the 1510 form to the 1509 form, and functions which the application must perform: WHATIF, TENENTER, TENTYPE, and data base utilities. A sample NASA form 1510 printout and a 1510 data dictionary are presented in the appendices along with the cost adjustment table, the floppy disk index, and methods for generating the calculated values (TENCALC) and for calculating cost adjustment (CONSTADJ). Storage requirements are given.
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA
2008-08-12
A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
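The claimed steps map onto a short baseline-plus-threshold routine; a minimal sketch of the idea (the mean/standard-deviation estimators and all names are illustrative choices, not the patent's exact method):

```python
import statistics

def alarm_limits(recent_signals, threshold_factor):
    # Baseline and noise are estimated from the window of recent signal
    # results; the allowable deviation is the sample deviation scaled
    # by a threshold factor, giving a band around the baseline.
    baseline = statistics.mean(recent_signals)
    allowable = threshold_factor * statistics.stdev(recent_signals)
    return baseline - allowable, baseline + allowable

def is_real_event(value, recent_signals, threshold_factor=3.0):
    # An event is flagged when the new signal result leaves the band.
    lower, upper = alarm_limits(recent_signals, threshold_factor)
    return value < lower or value > upper
```

Because the baseline and noise are recomputed from recent results, the alarm threshold adapts as the instrument drifts.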
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. This study evaluated the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate, and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit, and a prospective study covering the five months before and five months after implementation was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%) (p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume, or rate calculation as compared to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
Evaluation of students' knowledge about paediatric dosage calculations.
Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak
2018-01-01
Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations is increased. In paediatric patients, an overdose prescribed regardless of the child's weight, age, and clinical picture may lead to excessive toxicity and mortality, while low doses may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covered a population consisting of all 148 third-year bachelor's degree students in May 2015. Drug dose calculation questions from exam papers, comprising 3 open-ended dosage calculation problems addressing 5 variables, were distributed to the students, and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it incorrectly and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to ml/dzy calculation were correct. Moreover, the students' skills in the four basic arithmetic operations were assessed, and 68.2% of the students were determined to have found the correct answer. When the relation among the questions on medication was examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimal). This result means that, as dosage calculations are based on decimal values, calculations may be ten times erroneous when the decimal point is placed wrongly. Moreover, the students were also seen to lack maths knowledge with respect to the four operations and calculating the safe dose range. The relations among the questions suggest that a student who wrongly calculates a dosage may also make other errors. Additional courses, exercises, or different teaching techniques may be suggested to eliminate the students' deficiencies in basic maths knowledge, problem-solving skills, and correct dosage calculation. Copyright © 2017 Elsevier Ltd. All rights reserved.
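The weight-based dose and safe-dose-range checks being examined reduce to a few lines; a minimal sketch with illustrative names (no specific drug or dose values from the study are used):

```python
def dose_mg(weight_kg, mg_per_kg):
    # Weight-based paediatric dose: prescribed mg/kg times body weight.
    return weight_kg * mg_per_kg

def within_safe_range(weight_kg, dose_given_mg, min_mg_per_kg, max_mg_per_kg):
    # True when the prescribed dose falls inside the safe per-kg window.
    return min_mg_per_kg * weight_kg <= dose_given_mg <= max_mg_per_kg * weight_kg
```

The decimal-point failure mode described above corresponds to a tenfold error in `dose_given_mg`, which this range check would catch.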
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael
2007-08-21
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). Fifty-two patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber, and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.), respectively. The dose deviations between MUV and the TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider a 5% dose deviation or 10 cGy acceptable.
The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
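The proposed confidence limits amount to accepting a verification point when either the relative or the absolute criterion is met; a minimal sketch (the function names and the either/or acceptance rule as written here are assumptions based on the abstract):

```python
def within_confidence_limit(d_calc, d_meas, d_prescribed,
                            pct_limit=3.0, abs_limit_cgy=6.0):
    # Accept the point if the calculated-vs-measured difference is
    # within pct_limit of the prescribed dose OR within the absolute
    # limit in cGy. Defaults follow the abstract's routine high-dose
    # criteria; off-axis/low-dose regions would use 5% / 10 cGy.
    diff = abs(d_calc - d_meas)
    return diff <= d_prescribed * pct_limit / 100.0 or diff <= abs_limit_cgy
```

The dual criterion keeps low-dose points, where a small absolute difference is a large percentage, from being flagged spuriously.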
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Chen, S
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and the determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD, and 0.5-2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, the TG-71-based computation method and the Mobius3D program both produce reasonably accurate MU calculations for electron-beam therapy.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
NASA Astrophysics Data System (ADS)
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of each possible confounding factor was individually quantified. We found that the largest confounding factor leading to dose differences was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation, which resulted in a pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%).
The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in the prostate. This study will be valuable for institutions interested in introducing MR-only dose planning in their clinical practice.
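The per-factor quantification above boils down to voxelwise dose-difference statistics; a minimal sketch, with invented dose values, a toy CTV mask, and an assumed 70 Gy prescription (none of these are the study's data):

```python
# Hypothetical voxel doses (Gy) from the CT (gold standard) and pCT plans,
# flattened to a list of (dose_ct, dose_pct, in_ctv) voxels.
voxels = [
    (70.0, 70.5, True),
    (69.5, 69.3, True),
    (68.0, 68.4, True),
    (1.0, 1.1, False),   # out-of-field voxel, excluded from the CTV stats
]
prescribed = 70.0  # Gy

# Relative dose-difference map, percent of prescription, CTV voxels only.
diffs_ctv = [100.0 * (pct - ct) / prescribed
             for ct, pct, in_ctv in voxels if in_ctv]

# Mean absolute difference inside the CTV: the kind of per-factor
# statistic the study reports (e.g. 0.7% for the calibration curve).
mean_abs_ctv = sum(abs(d) for d in diffs_ctv) / len(diffs_ctv)
print(round(mean_abs_ctv, 3))
```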
Code of Federal Regulations, 2013 CFR
2013-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
Code of Federal Regulations, 2014 CFR
2014-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
Code of Federal Regulations, 2012 CFR
2012-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
NASA Astrophysics Data System (ADS)
Gerikh, Valentin; Kolosok, Irina; Kurbatsky, Victor; Tomin, Nikita
2009-01-01
The paper presents the results of experimental studies on the calculation of electricity prices in different price zones in Russia and Europe. The calculations are based on the intelligent software "ANAPRO", which implements approaches based on modern methods of data analysis and artificial intelligence technologies.
75 FR 37390 - Caribbean Fishery Management Council; Public Hearings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-29
...; rather, all are calculated based on landings data averaged over alternative time series. The overfished... the USVI, and recreational landings data recorded during 2000-2001. These time series were considered... Calculated Based on the Alternative Time Series Described in Section 4.2.1. Also Included Are the Average...
Calculator Programming Engages Visual and Kinesthetic Learners
ERIC Educational Resources Information Center
Tabor, Catherine
2014-01-01
Inclusion and differentiation--hallmarks of the current educational system--require a paradigm shift in the way that educators run their classrooms. This article enumerates the need for techno-kinesthetic, visually based activities and offers an example of a calculator-based programming activity that addresses that need. After discussing the use…
Implementation of a method for calculating temperature-dependent resistivities in the KKR formalism
NASA Astrophysics Data System (ADS)
Mahr, Carsten E.; Czerner, Michael; Heiliger, Christian
2017-10-01
We present a method to calculate the electron-phonon induced resistivity of metals in scattering-time approximation based on the nonequilibrium Green's function formalism. The general theory as well as its implementation in a density-functional theory based Korringa-Kohn-Rostoker code are described and subsequently verified by studying copper as a test system. We model the thermal expansion by fitting a Debye-Grüneisen curve to experimental data. Both the electronic and vibrational structures are discussed for different temperatures, and employing a Wannier interpolation of these quantities we evaluate the scattering time by integrating the electron linewidth on a triangulation of the Fermi surface. Based thereupon, the temperature-dependent resistivity is calculated and found to be in good agreement with experiment. We show that the effect of thermal expansion has to be considered in the whole calculation regime. Further, for low temperatures, an accurate sampling of the Fermi surface becomes important.
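A crude free-electron analogue of the scattering-time picture: once a scattering time τ is known, a Drude estimate ρ = m/(n e² τ) already shows why resistivity rises roughly linearly with temperature when τ ∝ 1/T. The copper numbers below are order-of-magnitude assumptions, not the paper's KKR results:

```python
# Free-electron Drude estimate rho = m / (n e^2 tau): a crude stand-in for
# the paper's Fermi-surface integration.  All numbers are illustrative
# order-of-magnitude values for copper, not calculated results.
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C
n_cu = 8.47e28    # conduction-electron density of Cu, m^-3

def resistivity(tau):
    """Drude resistivity (ohm*m) for scattering time tau (s)."""
    return m_e / (n_cu * e**2 * tau)

# Well above the Debye temperature, tau shrinks roughly as 1/T, so the
# resistivity grows roughly linearly with T.
tau_300K = 2.7e-14  # s, rough electron-phonon scattering time at 300 K
for T in (300.0, 600.0):
    print(T, f"{resistivity(tau_300K * 300.0 / T):.2e}")
```

The 300 K value lands near the measured 1.7 × 10⁻⁸ Ω·m for copper, which is why the Drude picture is a useful sanity check on the full calculation.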
NASA Astrophysics Data System (ADS)
Harmel, M.; Khachai, H.; Ameri, M.; Khenata, R.; Baki, N.; Haddou, A.; Abbar, B.; UǦUR, Ş.; Omran, S. Bin; Soyalp, F.
2012-12-01
Density functional theory (DFT) calculations are performed to study the structural, electronic and optical properties of cubic fluoroperovskite AMF3 (A = Cs; M = Ca and Sr) compounds. The calculations are based on total-energy calculations within the full-potential linearized augmented plane wave (FP-LAPW) method. The exchange-correlation potential is treated by the local density approximation (LDA) and the generalized gradient approximation (GGA). The structural properties, including lattice constants, bulk modulus and their pressure derivatives, are in very good agreement with the available experimental and theoretical data. The calculations of the electronic band structure, density of states and charge density reveal that both compounds are ionic insulators. The optical properties (namely: the real and the imaginary parts of the dielectric function ɛ(ω), the refractive index n(ω) and the extinction coefficient k(ω)) were calculated for radiation up to 40.0 eV.
Solute effect on basal and prismatic slip systems of Mg.
Moitra, Amitava; Kim, Seong-Gon; Horstemeyer, M F
2014-11-05
In an effort to design novel magnesium (Mg) alloys with high ductility, we present first-principles data based on density functional theory (DFT). DFT was employed to calculate generalized stacking fault energy curves, which can be used in the generalized Peierls-Nabarro (PN) model to study the energetics of basal and prismatic slip in Mg with and without solutes, and to calculate continuum-scale dislocation core widths, stacking fault widths and Peierls stresses. The generalized stacking fault energy curves for pure Mg agreed well with other DFT calculations. Solute effects on these curves were calculated for nine alloying elements, namely Al, Ca, Ce, Gd, Li, Si, Sn, Zn and Zr, which allowed the strength and ductility to be estimated qualitatively from the basal dislocation properties. Based on our multiscale methodology, a suggestion has been made to improve Mg formability.
Calculation of conductivities and currents in the ionosphere
NASA Technical Reports Server (NTRS)
Kirchhoff, V. W. J. H.; Carpenter, L. A.
1975-01-01
Formulas and procedures to calculate ionospheric conductivities are summarized. Ionospheric currents are calculated using a semidiurnal E-region neutral wind model and electric fields from measurements at Millstone Hill. The results agree well with ground based magnetogram records for magnetic quiet days.
CELSS scenario analysis: Breakeven calculations
NASA Technical Reports Server (NTRS)
Mason, R. M.
1980-01-01
A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating
NASA Technical Reports Server (NTRS)
Pickney, John T.; Milliken, Andrew H.
2010-01-01
An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method reads a database of pre-calculated orbital absorbed heats and adjusts those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit-based with a planet-centered coordinate system; the database is instead based on a spacecraft-centered coordinate system in which the full range of possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.
Calculation of the surface tension of liquid Ga-based alloys
NASA Astrophysics Data System (ADS)
Dogan, Ali; Arslan, Hüseyin
2018-05-01
As is known, Eyring and his collaborators applied structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi to Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and the corresponding literature data indicates good agreement.
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
Cellular-based preemption system
NASA Technical Reports Server (NTRS)
Bachelder, Aaron D. (Inventor)
2011-01-01
A cellular-based preemption system that uses existing cellular infrastructure to transmit preemption related data to allow safe passage of emergency vehicles through one or more intersections. A cellular unit in an emergency vehicle is used to generate position reports that are transmitted to the one or more intersections during an emergency response. Based on this position data, the one or more intersections calculate an estimated time of arrival (ETA) of the emergency vehicle, and transmit preemption commands to traffic signals at the intersections based on the calculated ETA. Additional techniques may be used for refining the position reports, ETA calculations, and the like. Such techniques include, without limitation, statistical preemption, map-matching, dead-reckoning, augmented navigation, and/or preemption optimization techniques, all of which are described in further detail in the above-referenced patent applications.
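The ETA step can be sketched as straight-line dead reckoning from two consecutive position reports; this is a minimal sketch under assumed units and reporting intervals, which the real system would refine with map-matching and statistical preemption:

```python
import math

# Minimal ETA sketch: speed is estimated from the two most recent
# position reports (assumed 1 s apart), and travel toward the
# intersection is assumed to be in a straight line at constant speed.
def eta_seconds(report_prev, report_now, intersection):
    """Each argument is an (x_m, y_m) position in metres."""
    vx = report_now[0] - report_prev[0]
    vy = report_now[1] - report_prev[1]
    speed = math.hypot(vx, vy)                    # m/s
    dist = math.hypot(intersection[0] - report_now[0],
                      intersection[1] - report_now[1])
    return dist / speed if speed > 0 else float("inf")

# Vehicle moving at 20 m/s toward an intersection 500 m away.
print(eta_seconds((0.0, 0.0), (20.0, 0.0), (520.0, 0.0)))  # → 25.0
```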
NASA Astrophysics Data System (ADS)
He, Yuping
2015-03-01
We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models in which all input parameters were derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that, by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values between 1 and 2, which could possibly be increased further by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
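The kind of daily-streamflow statistics such a tool computes can be sketched as follows (EFASC itself is VBA; Python is used here, and the flow values are illustrative):

```python
import statistics

# Illustrative daily streamflow series (cfs); a real run would read
# dates and values from an input file, one series per site.
flows = [120.0, 95.0, 80.0, 75.0, 90.0, 150.0, 300.0, 220.0, 110.0, 85.0]

mean_flow = statistics.mean(flows)
min_flow = min(flows)

# 3-day moving-average minimum, a common low-flow statistic
# (the same pattern gives the 7-day minimum on a longer record).
window = 3
min_3day = min(sum(flows[i:i + window]) / window
               for i in range(len(flows) - window + 1))

print(mean_flow, min_flow, min_3day)
```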
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving the chemical equilibrium equations. A chemical equilibrium code was developed based on the theory presented in this article and applied to two typical calculations: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure and detonation temperature are in good agreement with experimental values; (ii) calculation of the isentropic unloading line of RDX explosive, starting from the CJ point. Comparison with the JWL EOS shows that the gamma value calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.
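The free-energy-minimization step can be illustrated on a toy single-reaction equilibrium A ⇌ B with ideal mixing; the ΔG° and temperature below are assumptions for illustration, not the paper's detonation-product mixture:

```python
import math

# Toy free-energy minimisation for a single equilibrium A <=> B, a
# scaled-down analogue of solving the full product-mixture equilibrium.
R, T = 8.314, 2000.0   # J/(mol K); detonation-like temperature (assumed)
dG0 = -5000.0          # J/mol, assumed standard free energy of reaction

def gibbs(x):
    """Mixture free energy per mole at conversion x of A into B."""
    g = x * dG0
    for frac in (1.0 - x, x):   # ideal mixing entropy term
        if frac > 0:
            g += R * T * frac * math.log(frac)
    return g

# Grid search for the minimising composition.
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=gibbs)

# Analytic check: at the minimum, x/(1-x) = K = exp(-dG0/RT).
K = math.exp(-dG0 / (R * T))
print(round(x_eq, 3), round(K / (1 + K), 3))
```

The grid minimum reproduces the analytic equilibrium composition, which is the consistency the full multi-species solver must satisfy for every reaction simultaneously.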
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team
2013-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving the chemical equilibrium equations. A chemical equilibrium code was developed based on the theory presented in this article and applied to two typical calculations: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure and detonation temperature are in good agreement with experimental values; (ii) calculation of the isentropic unloading line of RDX explosive, starting from the CJ point. Comparison with the JWL EOS shows that the gamma value calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.
Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts
Wang, Xianlong; Wang, Chengfei; Zhao, Hui
2012-01-01
Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ in tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate in calculating the chemical shifts while an accurate molecular geometry is more critical. PMID:23203134
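The two headline statistics, the linear correlation and the root-mean-square deviation between calculated and experimental shifts, can be sketched with illustrative shift values (not the study's data):

```python
import math

# Hypothetical calculated vs experimental 27Al chemical shifts (ppm).
calc = [0.0, 12.5, 35.1, 62.8, 79.9]
expt = [0.0, 10.0, 33.0, 60.0, 80.0]
n = len(calc)

# Root-mean-square deviation between the two sets of shifts.
rmsd = math.sqrt(sum((c - e) ** 2 for c, e in zip(calc, expt)) / n)

# Pearson correlation coefficient.
mc, me = sum(calc) / n, sum(expt) / n
cov = sum((c - mc) * (e - me) for c, e in zip(calc, expt))
var_c = sum((c - mc) ** 2 for c in calc)
var_e = sum((e - me) ** 2 for e in expt)
r = cov / math.sqrt(var_c * var_e)

print(round(rmsd, 2), round(r, 4))
```

A strong correlation with a few-ppm RMSD is exactly the pattern the study reports for X-ray-geometry calculations.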
SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H; Barbee, D; Wang, W
Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of the planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability of belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
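The probability-weighted HU assignment can be sketched as follows; the five class centroids and the fuzzy-membership form are invented for illustration, not the study's fuzzy c-means output:

```python
# Class centroids assumed from a prior fuzzy c-means run:
# (CBCT HU centroid, CT HU centroid) for five "tissue" classes.
centroids = [(-950.0, -1000.0), (-100.0, -80.0), (0.0, 20.0),
             (200.0, 300.0), (900.0, 1200.0)]

def synthetic_hu(cbct_value, m=2.0):
    """Probability-weighted CT HU for one CBCT voxel.

    Memberships use the standard fuzzy c-means inverse-distance form
    with fuzzifier m; 1e-9 avoids division by zero at a centroid.
    """
    w = [(abs(cbct_value - c_cbct) + 1e-9) ** (-2.0 / (m - 1.0))
         for c_cbct, _ in centroids]
    total = sum(w)
    # Probability-weighted summation of the classes' CT centroids.
    return sum(wi / total * c_ct for wi, (_, c_ct) in zip(w, centroids))

print(round(synthetic_hu(0.0), 1))   # voxel at a class centre -> its CT HU
```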
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS; these calculations are conducted in 2D at the fuel-assembly level. There is also the possibility of calculating these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare the results of full-core calculations based on two sets of diffusion data obtained by Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the decay data and fission yield libraries. The comparison is based directly on the fuel-assembly-level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.
Configurations of base-pair complexes in solutions. [nucleotide chemistry
NASA Technical Reports Server (NTRS)
Egan, J. T.; Nir, S.; Rein, R.; Macelroy, R.
1978-01-01
A theoretical search for the most stable conformations (i.e., stacked or hydrogen bonded) of the base pairs A-U and G-C in water, CCl4, and CHCl3 solutions is presented. The calculations of free energies indicate a significant role of the solvent in determining the conformations of the base-pair complexes. The application of the continuum method yields preferred conformations in good agreement with experiment. Results of the calculations with this method emphasize the importance of both the electrostatic interactions between the two bases in a complex, and the dipolar interaction of the complex with the entire medium. In calculations with the solvation shell method, the last term, i.e., dipolar interaction of the complex with the entire medium, was added. With this modification the prediction of the solvation shell model agrees both with the continuum model and with experiment, i.e., in water the stacked conformation of the bases is preferred.
New size-expanded RNA nucleobase analogs: a detailed theoretical study.
Zhang, Laibin; Zhang, Zhenwei; Ren, Tingqi; Tian, Jianxiang; Wang, Mei
2015-04-05
Fluorescent nucleobase analogs have attracted much attention in recent years due to their potential applications in nucleic acids research. In this work, four new size-expanded RNA base analogs were computationally designed and their structural, electronic, and optical properties are investigated by means of DFT calculations. The results indicate that these analogs can form stable Watson-Crick base pairs with natural counterparts and they have smaller ionization potentials and HOMO-LUMO gaps than natural ones. Particularly, the electronic absorption spectra and fluorescent emission spectra are calculated. The calculated excitation maxima are greatly red-shifted compared with their parental and natural bases, allowing them to be selectively excited. In gas phase, the fluorescence from them would be expected to occur around 526, 489, 510, and 462 nm, respectively. The influences of water solution and base pairing on the relevant absorption spectra of these base analogs are also examined.
Activity-based differentiation of pathologists' workload in surgical pathology.
Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M
2009-06-01
Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in types and numbers of specimens handled or protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring the pathologists' workload that can take into account the changes mentioned above. The diagnostic process was analyzed and broken up into separate activities. The time needed to perform these activities was measured. Based on linear regression analysis, for each activity, the time needed was calculated as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparing to actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated to number of blocks and/or slides per specimen. Calculated workload per type of specimen was significantly correlated to the actually measured workload. Modeling pathologists' workload based on formulas that calculate workload per type of specimen as a function of the number of blocks and slides provides a basis for a comprehensive, yet flexible, activity-based costing system for pathology.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CALCULATION OF SHARE PRICES § 1645.1... means the number of shares of an investment fund upon which the calculation of a share price is based. Business day means any calendar day for which share prices are calculated. Forfeitures means amounts...
COMPUTER PROGRAM FOR CALCULATING THE COST OF DRINKING WATER TREATMENT SYSTEMS
This FORTRAN computer program calculates the construction and operation/maintenance costs for 45 centralized unit treatment processes for water supply. The calculated costs are based on various design parameters and raw water quality. These cost data are applicable to small size ...
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research developed a code for uncertainty analysis based on a statistical approach to assessing uncertain input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on the probability density function. The developed code is a Python script that couples with MCNPX for the criticality and burn-up calculations. The simulation models the PWR geometry, with MCNPX at a power of 54 MW and UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through dedicated NJOY calculations over a range of temperatures.
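The statistical approach can be sketched as Monte Carlo propagation of assumed input distributions through a toy response standing in for the MCNPX calculation; all coefficients, nominal values, and spreads below are invented for illustration:

```python
import random
import statistics

random.seed(42)

def k_eff(fuel_density, coolant_density, fuel_temp):
    """Toy linear response standing in for an MCNPX criticality run.

    The sensitivity coefficients are illustrative, not reactor physics.
    """
    return (1.0
            + 0.02 * (fuel_density - 10.4)      # g/cm^3
            - 0.01 * (coolant_density - 0.7)    # g/cm^3
            - 1e-5 * (fuel_temp - 900.0))       # K

# Sample the three uncertain inputs from assumed normal distributions
# and propagate each sample through the response.
samples = [k_eff(random.gauss(10.4, 0.05),
                 random.gauss(0.7, 0.01),
                 random.gauss(900.0, 15.0))
           for _ in range(5000)]

print(round(statistics.mean(samples), 4), round(statistics.stdev(samples), 4))
```

The sample standard deviation is the propagated output uncertainty; with a real transport code, each sample would be one full criticality/burn-up run.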
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id
2014-09-30
This research developed a code for uncertainty analysis based on a statistical approach to assessing uncertain input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on the probability density function. The developed code is a Python script that couples with MCNPX for the criticality and burn-up calculations. The simulation models the PWR geometry, with MCNPX at a power of 54 MW and UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through dedicated NJOY calculations over a range of temperatures.
NASA Astrophysics Data System (ADS)
Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.
2016-04-01
The article describes the calculation of magnetic fields in problems of diagnosing technical systems based on full-scale modeling experiments. Using the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. Much attention is also given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The presented research results allow this approach to be recommended for use in the method of fundamental solutions for full-scale modeling tests of technical systems.
NASA Technical Reports Server (NTRS)
Lan, C. Edward
1985-01-01
A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combination. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up are described. Three test cases are presented as guides for potential users of the code.
Synthesis, characterisation and DFT studies of three Schiff bases derived from histamine
NASA Astrophysics Data System (ADS)
Touafri, Lasnouni; Hellal, Abdelkader; Chafaa, Salah; Khelifa, Abdellah; Kadri, Abdelaziz.
2017-12-01
In this paper, we report first the synthesis and characterisation of three Schiff bases derived from histamine by condensation with various aldehydes. We then present a detailed DFT study, based on B3LYP/6-31G(d,p), of the geometrical structures and electronic properties of these compounds. The study was extended to a HOMO-LUMO analysis to calculate the energy gap (Δ), ionisation potential (I), electron affinity (A), global hardness (η), chemical potential (μ), electrophilicity (ω), electronegativity (χ) and polarisability (α). The calculated HOMO and LUMO energies reveal the charge transfer occurring within the molecule. On the basis of vibrational analyses, the thermodynamic properties of the title compounds were also calculated.
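The listed descriptors follow from the frontier-orbital energies via Koopmans' approximation; the HOMO/LUMO values below are illustrative, not the reported B3LYP/6-31G(d,p) results:

```python
# Global reactivity descriptors from frontier-orbital energies (eV)
# via Koopmans' approximation; the orbital energies are assumed values.
e_homo, e_lumo = -5.8, -1.6

I = -e_homo                  # ionisation potential
A = -e_lumo                  # electron affinity
gap = e_lumo - e_homo        # HOMO-LUMO energy gap
chi = (I + A) / 2            # electronegativity
mu = -chi                    # chemical potential
eta = (I - A) / 2            # global hardness
omega = mu ** 2 / (2 * eta)  # electrophilicity index

print(round(gap, 2), round(chi, 2), round(eta, 2), round(omega, 3))
```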
“Magnitude-based Inference”: A Statistical Review
Welsh, Alan H.; Knight, Emma J.
2015-01-01
ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
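The "chances of benefit/triviality/harm" at the heart of magnitude-based inference reduce to normal tail areas around the observed effect, which is one way to see the connection to the nonstandard tests the review identifies; the effect size, standard error, and smallest worthwhile change below are illustrative:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Illustrative numbers: observed effect, its standard error, and the
# smallest worthwhile change (SWC) defining the "trivial" zone.
effect, se, swc = 1.0, 0.8, 0.2

p_harmful = normal_cdf(-swc, effect, se)          # area below -SWC
p_beneficial = 1 - normal_cdf(swc, effect, se)    # area above +SWC
p_trivial = 1 - p_harmful - p_beneficial          # area in between

print(round(p_beneficial, 3), round(p_trivial, 3), round(p_harmful, 3))
```

As the review notes, these three areas are not posterior probabilities in a full Bayesian sense; they correspond to P values of shifted null hypotheses (or to an approximate flat-prior Bayesian calculation).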
The development of android - based children's nutritional status monitoring system
NASA Astrophysics Data System (ADS)
Suryanto, Agus; Paramita, Octavianti; Pribadi, Feddy Setio
2017-03-01
The calculation of BMI (Body Mass Index) is one method for determining a person's nutritional status. The BMI calculation is not yet widely understood or known by the public. In addition, people should know the importance of monitoring the development of a child's nutrition each month. Therefore, an Android-based application to determine the nutritional status of children was developed in this study. The study restricted the calculation to children aged 0-60 months. The application can run on a smartphone or tablet PC with the Android operating system, chosen because such devices are developing rapidly and many people own and use them. The aim of this study was to produce an Android app to calculate the nutritional status of children. This study used Research and Development (R&D), with a design approach based on experimental studies. The steps in this study included analyzing the Body Mass Index (BMI) formula and developing the initial application, including the design and construction of the display, using the Eclipse software. The study resulted in an Android application that can be used to calculate the nutritional status of children aged 0-60 months. The MSE (mean squared error) of the calculation relative to the body mass index formula was 0, and the MAPE percentage was 0%, showing that the application introduces no error in the BMI calculation. Smaller MSE and MAPE values indicate a higher level of accuracy.
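The BMI formula underlying the app is weight divided by height squared. A minimal sketch of that arithmetic; note that classifying a child's nutritional status additionally requires BMI-for-age reference tables (e.g. the WHO standards), which are not reproduced here, and the sample values are invented:

```python
# Body Mass Index: weight (kg) divided by the square of height (m).
# Classification thresholds for children depend on age- and sex-specific
# reference tables and are intentionally omitted from this sketch.

def bmi(weight_kg, height_m):
    """Return BMI in kg/m^2."""
    return weight_kg / height_m**2

# Illustrative values for a toddler (placeholders, not study data).
print(round(bmi(12.0, 0.85), 2))  # → 16.61
```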
Electrostatic effects in unfolded staphylococcal nuclease
Fitzkee, Nicholas C.; García-Moreno E, Bertrand
2008-01-01
Structure-based calculations of pK a values and electrostatic free energies of proteins assume that electrostatic effects in the unfolded state are negligible. In light of experimental evidence showing that this assumption is invalid for many proteins, and with increasing awareness that the unfolded state is more structured and compact than previously thought, a detailed examination of electrostatic effects in unfolded proteins is warranted. Here we address this issue with structure-based calculations of electrostatic interactions in unfolded staphylococcal nuclease. The approach involves the generation of ensembles of structures representing the unfolded state, and calculation of Coulomb energies to Boltzmann weight the unfolded state ensembles. Four different structural models of the unfolded state were tested. Experimental proton binding data measured with a variant of nuclease that is unfolded under native conditions were used to establish the validity of the calculations. These calculations suggest that weak Coulomb interactions are an unavoidable property of unfolded proteins. At neutral pH, the interactions are too weak to organize the unfolded state; however, at extreme pH values, where the protein has a significant net charge, the combined action of a large number of weak repulsive interactions can lead to the expansion of the unfolded state. The calculated pK a values of ionizable groups in the unfolded state are similar but not identical to the values in small peptides in water. These studies suggest that the accuracy of structure-based calculations of electrostatic contributions to stability cannot be improved unless electrostatic effects in the unfolded state are calculated explicitly. PMID:18227429
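The abstract describes Boltzmann-weighting unfolded-state ensemble members by their Coulomb energies. A minimal sketch of that weighting step, assuming energies in kcal/mol at room temperature; the energy values are made-up placeholders, not ensemble data from the study:

```python
# Boltzmann weighting of an ensemble of structures by energy.
# Weights are exp(-E/kT) normalized by the partition sum; the minimum
# energy is subtracted first for numerical stability.
import math

KT = 0.593  # kT in kcal/mol at ~298 K

def boltzmann_weights(energies, kt=KT):
    e_min = min(energies)
    factors = [math.exp(-(e - e_min) / kt) for e in energies]
    z = sum(factors)  # partition-function normalizer
    return [f / z for f in factors]

# Three hypothetical conformer energies (kcal/mol).
w = boltzmann_weights([-2.0, -1.0, 0.0])
print(w)  # lowest-energy structure carries the largest weight
```

Ensemble-averaged properties (such as pKa shifts) would then be weighted sums over the per-structure values using these weights.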
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We survey the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. The solver benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity, using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is replaced by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations, for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice requires computational work linear in L, i.e. O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.
Dyscalculia: neuroscience and education
Kaufmann, Liane
2010-01-01
Background Developmental dyscalculia is a heterogeneous disorder with largely dissociable performance profiles. Though our current understanding of the neurofunctional foundations of (adult) numerical cognition has increased considerably during the past two decades, there are still many unanswered questions regarding the developmental pathways of numerical cognition. Most studies on developmental dyscalculia are based upon adult calculation models which may not provide an adequate theoretical framework for understanding and investigating developing calculation systems. Furthermore, the applicability of neuroscience research to pedagogy has, so far, been limited. Purpose After providing an overview of current conceptualisations of numerical cognition and developmental dyscalculia, the present paper (1) reviews recent research findings that are suggestive of a neurofunctional link between fingers (finger gnosis, finger-based counting and calculation) and number processing, and (2) takes the latter findings as an example to discuss how neuroscience findings may impact on educational understanding and classroom interventions. Sources of evidence Finger-based number representations and finger-based calculation have deep roots in human ontology and phylogeny. Recently, accumulating empirical evidence supporting the hypothesis of a neurofunctional link between fingers and numbers has emerged from both behavioural and brain imaging studies. Main argument Preliminary but converging research supports the notion that finger gnosis and finger use seem to be related to calculation proficiency in elementary school children. Finger-based counting and calculation may facilitate the establishment of mental number representations (possibly by fostering the mapping from concrete non-symbolic to abstract symbolic number magnitudes), which in turn seem to be the foundations for successful arithmetic achievement. 
Conclusions Based on the findings illustrated here, it is plausible to assume that finger use might be an important and complementary aid (to more traditional pedagogical methods) to establish mental number representations and/or to facilitate learning to count and calculate. Clearly, future prospective studies are needed to investigate whether the explicit use of fingers in early mathematics teaching might prove to be beneficial for typically developing children and/or might support the mapping from concrete to abstract number representations in children with and without developmental dyscalculia. PMID:21258625
Dyscalculia: neuroscience and education.
Kaufmann, Liane
2008-06-01
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.
2016-12-01
Changes in D-region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE), a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospherically propagating VLF signals from distant artificial transmitters depend sensitively on the free-electron population created by precipitation of energetic charged particles. Those include both primary (electrons, protons and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principles charged-particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free-electron densities are based on the MCNP-6 (general-purpose Monte Carlo N-Particle) software, taking advantage of its coupled neutron/photon/electron transport capability and a novel library of cross sections for low-energy electron and photon interactions with matter. Cosmic-ray calculations of background ionization are based on source spectra obtained both from PAMELA direct cosmic-ray spectrum measurements and from the recently implemented MCNP-6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurements. Conversion from calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons is based on the total ionization cross sections for oxygen and nitrogen molecules from the National Institute of Standards and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Fei; Zhen, Zhao; Liu, Chun
Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method through the frequency-domain information of the sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) under common cloud speeds. Sometimes there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculations is extremely complex and difficult because cloud motion is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-time-scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive image pairs obtained through different synchronous rotation angles relative to the original images, using the FPCT method.
Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on their density and distance distribution. Third, the influence of different rotation-angle resolutions on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick and thin cloud conditions, indicated that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.
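The core FPCT step, locating the peak of the inverse FFT of the normalized cross-power spectrum, can be sketched in a few lines. This is only the basic phase-correlation shift estimate between two images, not the full IPSI rotation-and-centroid scheme; the test images are synthetic random fields, not sky images:

```python
# Fourier phase correlation: the peak of the inverse FFT of the normalized
# cross-power spectrum of two images gives their integer pixel displacement.
import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-12):
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + eps          # keep only the phase information
    corr = np.fft.ifft2(cross).real       # correlation surface with one pulse
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak coordinates to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, -3), axis=(0, 1))   # displace a by a known vector
print(phase_correlation_shift(b, a))  # → (5, -3)
```

As the abstract notes, this single-peak estimate is reliable only when cloud deformation between frames is small; the IPSI method addresses the ambiguous multi-peak case.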
Wang, Fei; Zhen, Zhao; Liu, Chun; ...
2017-12-18
Tree value system: description and assumptions.
D.G. Briggs
1989-01-01
TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2014 CFR
2014-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2012 CFR
2012-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
The Lα (λ = 121.6 nm) solar plage contrasts calculations.
NASA Astrophysics Data System (ADS)
Bruevich, E. A.
1991-06-01
The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model of the Lα solar flux, using "Prognoz-10" and SME daily smoothed values of the Lα solar flux, is applied. The values of contrast are discussed and compared with experimental values based on "Skylab" data.
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
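The arithmetic described in this regulation (a recent 12-month simple average of CPI values, the preceding 12-month average, and their ratio minus one) can be sketched directly. The CPI figures below are invented for illustration, not actual CPI-U data:

```python
# Annual limitation per the described method: Recent Average / Base Average - 1,
# where each average is a simple 12-month mean of monthly CPI values.

def annual_limitation(cpi_last_24_months):
    """cpi_last_24_months: oldest-first list of 24 monthly CPI values."""
    base = sum(cpi_last_24_months[:12]) / 12     # Base Average (earlier year)
    recent = sum(cpi_last_24_months[12:]) / 12   # Recent Average (later year)
    return recent / base - 1

# Hypothetical CPI series rising 0.2 points per month.
cpi = [100 + 0.2 * i for i in range(24)]
print(round(annual_limitation(cpi) * 100, 2), "%")  # → 2.37 %
```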
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
Relative role of different radii in the dynamics of 8B+58Ni reaction
NASA Astrophysics Data System (ADS)
Kaur, Amandeep; Sandhu, Kirandeep; Sharma, Manoj K.
2018-05-01
In the present work, we analyze the significance of three different radius terms in the framework of dynamical cluster-decay model (DCM) based calculations. In the majority of DCM-based calculations, the impact of the mass-dependent radius R(A) has been extensively analyzed. The other two factors on which the radius term may depend are the neutron-proton asymmetry and the charge of the decaying fragments. Hence, the asymmetry-dependent radius term R(I) and the charge-dependent radius term R(Z) are incorporated in DCM-based calculations to investigate their effect on the reaction dynamics involved. Here, we present an extension of an earlier work based on the decay of the 66As* compound nucleus by including the R(I) and R(Z) radii in addition to the R(A) term. The effect of replacing R(A) with R(I) and R(Z) is analyzed via the fragmentation structure, tunneling probabilities (P) and other barrier characteristics such as the barrier height (VB), barrier position (RB), and barrier turning point Ra. The roles of temperature, deformations and angular momentum are duly incorporated in the present calculations.
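For context, a mass-dependent radius of the R(A) type is commonly parametrized as a function of A^(1/3). The sketch below uses a generic Royer-type form with placeholder coefficients, and the I = (N − Z)/A asymmetry that enters R(I); these are illustrations of the functional dependences, not the specific DCM parametrizations of the paper:

```python
# Illustrative radius terms. Coefficients are generic placeholders (fm),
# not the values used in the DCM calculations described above.

def radius_mass(a, r0=1.28, r1=-0.76, r2=0.8):
    """Mass-dependent radius R(A) = r0*A^(1/3) + r1 + r2*A^(-1/3), in fm."""
    return r0 * a**(1 / 3) + r1 + r2 * a**(-1 / 3)

def isospin_asymmetry(n, z):
    """Neutron-proton asymmetry I = (N - Z)/A entering an R(I)-type term."""
    return (n - z) / (n + z)

print(round(radius_mass(58), 3))          # radius of an A=58 fragment, fm
print(round(isospin_asymmetry(30, 28), 4))  # I for 58Ni (N=30, Z=28)
```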
Pizzoli, Giuliano; Lobello, Maria Grazia; Carlotti, Benedetta; Elisei, Fausto; Nazeeruddin, Mohammad K; Vitillaro, Giuseppe; De Angelis, Filippo
2012-10-14
We report a combined spectrophotometric and computational investigation of the acid-base equilibria of the N3 solar cell sensitizer [Ru(dcbpyH2)2(NCS)2] (dcbpyH2 = 4,4'-dicarboxy-2,2'-bipyridine) in aqueous/ethanol solutions. The absorption spectra of N3 recorded at various pH values were analyzed by singular value decomposition techniques, followed by global fitting procedures, allowing us to identify four separate acid-base equilibria and their corresponding ground-state pKa values. DFT/TDDFT calculations were performed for the N3 dye in solution, investigating the possible relevant species obtained by sequential deprotonation of the dye's four carboxylic groups. TDDFT excited-state calculations provided UV-vis absorption spectra which agree nicely with the experimental spectral shapes at various pH values. The calculated pKa values are also in good agreement with experimental data, within <1 pKa unit. Based on the calculated energy differences, a tentative assignment of the N3 deprotonation pathway is reported.
Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B
2018-02-09
15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: the classical Tomasi polarizable continuum model (PCM), and PCM combined with explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). The best results are achieved with the PCM and SM + PCM models, yielding mean absolute errors of the calculated 15N NMR chemical shifts across the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, compared with 15.2 ppm in the gas phase, over a range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in the protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.
Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution
NASA Astrophysics Data System (ADS)
Plekhov, O. A.; Kostina, A. A.
2017-05-01
This work proposes a numerical method for lifetime assessment of metallic materials based on consideration of the energy balance at the crack tip. The method is based on evaluating the stored energy per loading cycle. To calculate the stored and dissipated parts of the deformation energy, an elasto-plastic phenomenological model of the energy balance in metals under deformation and failure processes was proposed. The key point of the model is a strain-type internal variable describing the energy storage process. This parameter, introduced on the basis of a statistical description of defect evolution in metals, is a second-order tensor with the meaning of an additional strain due to the initiation and growth of defects. The fatigue crack rate was calculated in the framework of a stationary-crack approach (several loading cycles were considered for every crack length to estimate the energy balance at the crack tip). The application of the proposed algorithm is illustrated by calculating the lifetime of a Ti-1Al-1Mn compact tension specimen under cyclic loading.
Saito, Norio; Cordier, Stéphane; Lemoine, Pierric; Ohsawa, Takeo; Wada, Yoshiki; Grasset, Fabien; Cross, Jeffrey S; Ohashi, Naoki
2017-06-05
The electronic and crystal structures of Cs2[Mo6X14] (X = Cl, Br, I) cluster-based compounds were investigated by density functional theory (DFT) simulations and experimental methods such as powder X-ray diffraction, ultraviolet-visible spectroscopy, and X-ray photoemission spectroscopy (XPS). The experimentally determined lattice parameters were in good agreement with the theoretically optimized ones, indicating the usefulness of DFT calculations for the structural investigation of these clusters. The calculated band gaps of these compounds reproduced those determined experimentally by UV-vis reflectance within an error of a few tenths of an eV. Core-level XPS and effective-charge analyses indicated that the bonding states of the halogens change according to their sites. The XPS valence spectra were fairly well reproduced by simulations based on the projected electron density of states weighted with Al Kα cross sections, suggesting that DFT calculations can predict the electronic properties of metal-cluster-based crystals with good accuracy.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating plane-wave basis method for solving the eigenvalue problems arising in electronic structure calculations. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Model Comparisons For Space Solar Cell End-Of-Life Calculations
NASA Astrophysics Data System (ADS)
Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel
2011-10-01
Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single- and multijunction solar cell technologies. Two general semi-empirical approaches are used to generate these EOL results: 1) the JPL equivalent fluence method (EQFLUX) and 2) the NRL displacement damage dose method (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.
Some computer graphical user interfaces in radiation therapy
Chow, James C L
2016-01-01
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the superficial X-ray treatment time calculator (SUPCALC), used in superficial X-ray radiation therapy; (2) the electron monitor unit calculator (EMUC), used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system DOSCTP, used to calculate 3D dose distributions using Monte Carlo simulation; and (5) the photon beam monitor unit calculator (PMUC), used in photon beam radiation therapy. One common feature of these GUIs is that the user-friendly interfaces are linked to complex formulas and algorithms based on various theories, which the user does not need to understand; the user only needs to input the required information, with help from graphical elements, to produce the desired results. SUPCALC is a superficial radiation treatment time calculator that uses the GUI technique to provide radiation therapists with a convenient way to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to predetermined dosimetric tables, the clinical user needs only to input the required drawing of the electron field in a computer graphics file format, the prescription dose, and the beam parameters for EMUC to calculate the monitor units required for the electron beam treatment. EMUC is based on a semi-experimental sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates a fluence map produced by a medical linear accelerator.
This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system based on computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed at different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy. Moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating the monitor units based on the patient's prescription dose in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance, and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic .NET or in the MATLAB GUI development tool GUIDE. In addition, all GUIs were verified and tested against measurements to ensure their accuracy was up to clinically acceptable levels for implementation. PMID:27027225
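A monitor-unit calculation of the kind PMUC automates reduces, in its simplest textbook form, to the prescribed dose divided by the calibration dose per MU times a chain of correction factors. The sketch below is a generic illustration under that assumption; the factor names (Sc, Sp, TPR) and values are placeholders, not PMUC's actual data or algorithm:

```python
# Simplified monitor-unit arithmetic: MU = prescribed dose / (dose per MU at
# calibration * product of correction factors). All factors are illustrative.

def monitor_units(prescribed_dose_cgy, dose_per_mu_cgy=1.0, **factors):
    effective = dose_per_mu_cgy
    for value in factors.values():
        effective *= value  # e.g. collimator scatter, phantom scatter, TPR
    return prescribed_dose_cgy / effective

# Hypothetical 200 cGy prescription with placeholder correction factors.
mu = monitor_units(200.0, Sc=0.99, Sp=1.01, TPR=0.85)
print(round(mu))  # → 235
```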
Application of DYNA3D in large scale crashworthiness calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benson, D.J.; Hallquist, J.O.; Igarashi, M.
1986-01-01
This paper presents an example of an automobile crashworthiness calculation. Based on our experiences with the example calculation, we make recommendations to those interested in performing crashworthiness calculations. The example presented in this paper was supplied by Suzuki Motor Co., Ltd., and provided a significant shakedown for the new large deformation shell capability of the DYNA3D code. 15 refs., 3 figs.
Numerical calculation of the Fresnel transform.
Kelly, Damien P
2014-04-01
In this paper, we address the problem of calculating Fresnel diffraction integrals using a finite number of uniformly spaced samples. General and simple sampling rules of thumb are derived that allow the user to calculate the distribution for any propagation distance. It is shown how these rules can be extended to fast-Fourier-transform-based algorithms to increase calculation efficiency. A comparison with other theoretical approaches is made.
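The FFT-based approach the abstract refers to can be illustrated with a single-FFT Fresnel propagator in one dimension. This is a generic sketch, not the paper's specific algorithm or sampling rules; the beam width, wavelength, and distance below are arbitrary illustration values:

```python
import numpy as np

def fresnel_single_fft(u1, dx1, wavelength, z):
    """Propagate a 1D field u1 over a distance z with the single-FFT
    Fresnel transform. The output grid spacing is fixed by the standard
    sampling rule dx2 = wavelength*z/(N*dx1), so each propagation
    distance gets its own output grid."""
    N = u1.size
    k = 2.0 * np.pi / wavelength
    x1 = (np.arange(N) - N // 2) * dx1
    dx2 = wavelength * z / (N * dx1)
    x2 = (np.arange(N) - N // 2) * dx2
    # Inner quadratic phase, centered FFT, then the outer quadratic phase.
    a = u1 * np.exp(1j * k * x1 ** 2 / (2.0 * z))
    A = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(a))) * dx1
    u2 = (np.exp(1j * k * z) / np.sqrt(1j * wavelength * z)
          * np.exp(1j * k * x2 ** 2 / (2.0 * z)) * A)
    return u2, x2

# Illustration: propagate a 0.5 mm Gaussian by 0.5 m at 633 nm.
N, dx1, wavelength, z = 1024, 10e-6, 633e-9, 0.5
x1 = (np.arange(N) - N // 2) * dx1
u1 = np.exp(-(x1 / 0.5e-3) ** 2)
u2, x2 = fresnel_single_fft(u1, dx1, wavelength, z)
e_in = np.sum(np.abs(u1) ** 2) * dx1                # input energy
e_out = np.sum(np.abs(u2) ** 2) * (x2[1] - x2[0])   # output energy
```

With the 1/sqrt(i*lambda*z) normalization, discrete Parseval's theorem makes the output energy match the input energy, which is a convenient self-check on the scaling.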
Examinations of electron temperature calculation methods in Thomson scattering diagnostics.
Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin
2012-10-01
Electron temperature from a Thomson scattering diagnostic is derived through an indirect calculation based on a theoretical model. The χ² test is commonly used in this calculation, and the reliability of the calculation method depends strongly on the noise level of the input signals. In simulations, the noise sensitivity of the χ² test is examined, and a scale factor test is proposed as an alternative method.
Using the Graphing Calculator--in Two-Dimensional Motion Plots.
ERIC Educational Resources Information Center
Brueningsen, Chris; Bower, William
1995-01-01
Presents a series of simple activities involving generalized two-dimensional motion topics to prepare students to study projectile motion. Uses a pair of motion detectors, each connected to a calculator-based-laboratory (CBL) unit interfaced with a standard graphics calculator, to explore two-dimensional motion. (JRH)
ERIC Educational Resources Information Center
Barber, Betsy; Ball, Rhonda
This project description is designed to show how graphing calculators and calculator-based laboratories (CBLs) can be used to explore topics in physics and health sciences. The activities address such topics as respiration, heart rate, and the circulatory system. Teaching notes and calculator instructions are included as are blackline masters. (MM)
Fang, D Q; Zhang, S L
2016-01-07
The band offsets of the ZnO/anatase TiO2 and GaN/ZnO heterojunctions are calculated using the density functional theory/generalized gradient approximation (DFT/GGA)-1/2 method, which takes into account the self-energy corrections and can give an approximate description of the quasiparticle characteristics of the electronic structure of semiconductors. We present the results of the ionization potential (IP)-based and interfacial offset-based band alignments. In the interfacial offset-based band alignment, to obtain the natural band offset, we use surface calculations to estimate the change of the reference level due to the interfacial strain. Based on the interface models and GGA-1/2 calculations, we find that the valence band maximum and conduction band minimum of ZnO lie 0.64 eV and 0.57 eV, respectively, above those of anatase TiO2, and 0.84 eV and 1.09 eV below those of GaN, in good agreement with the experimental data. However, a large discrepancy exists between the IP-based band offset and the calculated natural band offset, the mechanism of which is discussed. Our results clarify the band alignment of the ZnO/anatase TiO2 heterojunction and show good agreement with GW calculations for the GaN/ZnO heterojunction.
Effect of costing methods on unit cost of hospital medical services.
Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya
2007-04-01
To explore the variance in unit costs of hospital medical services due to the different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of the unit costs of medical services as a base case. After that, the unit costs were re-calculated using various methods. Finally, the variations between the results obtained from the various methods and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (a combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase in the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria for indirect cost allocation, the unit cost of medical services changed by a range of -6.99% to +4.05%. We also explored the effect on the unit cost of medical services in one department: across the various costing methods, including the departmental allocation methods, unit costs ranged from -85% to +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and more complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable method at the time of study.
These different costing methods should be standardized and developed as guidelines since they could affect implementation of the national health insurance scheme and health financing management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is the desired method for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems because it can compute quantities related to biological effects. However, MC simulation is typically too slow for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme that incorporates GPU-based MC into IMPT. Methods: A conventional approach to using MC in IMPT simply calls the MC dose engine repeatedly, once per spot dose calculation. This is not optimal, because computation is wasted on spots that turn out to have very small weights once the optimization problem is solved. GPU memory-writing conflicts, which occur at small beam sizes, also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively alternates between MC dose calculations and plan optimization. At each dose calculation step, particles are sampled from all spots together with a Metropolis algorithm, such that the number of particles per spot is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigates the memory-writing conflict problem. Results: We validated the proposed MC-based optimization scheme on a prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow has been developed.
The high efficiency makes it attractive for clinical use.
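The core idea of drawing particles from all spots at once, in proportion to the latest optimized intensities, can be illustrated in a much-simplified form with plain weighted sampling. The paper itself uses a Metropolis algorithm inside a GPU MC engine; the spot weights below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spot intensities from the latest optimization iterate.
weights = np.array([0.0, 2.0, 5.0, 0.5, 2.5])
p = weights / weights.sum()

# One batch of particle "birth spots", drawn in proportion to the
# current weights instead of running the MC engine once per spot.
n_particles = 100_000
spot_ids = rng.choice(weights.size, size=n_particles, p=p)
counts = np.bincount(spot_ids, minlength=weights.size)
# Zero-weight spots receive no particles, so no computation is wasted on them.
```

The per-spot particle counts track the weights, so spots that the optimizer has driven toward zero are skipped automatically at the next dose calculation step.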
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-01-01
In this paper, we propose a blood pressure calculation and associated measurement method using a fiber Bragg grating (FBG) sensor. There are several points on the surface of the human body at which the pulse can be measured, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure by means of a calibration curve, which is constructed by partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015
Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi
2017-11-01
While point source pollution has gradually been brought under control in recent years, non-point source pollution has become an increasingly prominent problem. Receiving waters are frequently polluted by the initial stormwater from separate stormwater systems and by wastewater entering stormwater pipes from sewage pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved urgently for the management of initial stormwater pollution. Accurate calculation of the intercepted runoff depth provides a solid foundation for selecting appropriately sized intercepting facilities in drainage and interception projects. This study establishes a model of a separate stormwater system for the Yishan Building watershed of Fuzhou City using InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the flow at each discharge outlet after rainfall. The intercepted runoff depth is calculated from the stormwater quality and from the environmental capacity of the receiving waters. The average intercepted runoff depth over six rainfall events is 4.1 mm when calculated from stormwater quality and 4.4 mm when calculated from the environmental capacity of the receiving waters. The intercepted runoff depth thus differs depending on the basis of the calculation, and its selection depends on the water quality control goal, the self-purification capacity of the water bodies, and other regional factors.
Hakin, A W; Hedwig, G R
2001-02-15
A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
NASA Astrophysics Data System (ADS)
Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy
2017-10-01
The article considers current questions in the technological modeling and calculation of a new facility for the purification of natural waters, the clarifier reactor, developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN), for its optimal operating mode. A calculation technique based on well-known hydraulic relations is presented, and a calculation example for a structure based on experimental data is considered. The maximum possible rate of the ascending flow of purified water was determined based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined with minimal expansion of the contact mass layer, which ensured the elimination of stagnant zones. The duration of the clarification cycle was refined from the parameters of the technological modeling by recalculating the maximum possible upward flow rate of clarified water, and the thickness of the contact mass layer was determined. Clarification reactors can be calculated in the same way for any other clarification conditions.
Pricing of premiums for equity-linked life insurance based on joint mortality models
NASA Astrophysics Data System (ADS)
Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.
2018-03-01
Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables has become less relevant in the calculation of premiums. To overcome this problem, we use a combined mortality model, which in this study is determined based on the 2011 Indonesian Mortality Table, to determine the probabilities of death and survival. In this research, we use a combined mortality model built from the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we calculate the value of the claim to be paid and the premium price numerically. By calculating equity-linked life insurance premiums accurately, it is expected that no party will be disadvantaged by inaccuracies in the calculation results.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh, defined by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurements and calculated numerically for a model seal segment at different levels of wear.
NASA Astrophysics Data System (ADS)
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on internal electro-optical distance ranging is established for large vertical energy-storage tanks. After analyzing the mathematical model for calculating the volume of a vertical tank, the key point-cloud processing algorithms, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
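The integration step described above (horizontal cross-sectional areas integrated along the vertical direction) can be sketched as follows. The per-level radii here come from an idealized cylinder, not the paper's point-cloud data, so the result can be checked against the analytic volume:

```python
import numpy as np

def tank_volume(heights, radii):
    """Tank volume from per-level radii: trapezoidal integration of the
    horizontal cross-sectional area A(h) = pi * r(h)**2 along height h."""
    h = np.asarray(heights, dtype=float)
    areas = np.pi * np.asarray(radii, dtype=float) ** 2
    return float(np.sum(0.5 * (areas[1:] + areas[:-1]) * np.diff(h)))

# Sanity check on a perfect cylinder: r = 10 m, 20 m tall.
h = np.linspace(0.0, 20.0, 101)
r = np.full_like(h, 10.0)
v = tank_volume(h, r)   # analytic value: pi * 10**2 * 20 m**3
```

In practice the radii would be estimated per horizontal slice of the filtered point cloud, and evaluating the same sum up to a given liquid level yields the tank's capacity table.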
NASA Astrophysics Data System (ADS)
Katayama-Yoshida, Hiroshi; Nakanishi, Akitaka; Uede, Hiroki; Takawashi, Yuki; Fukushima, Tetsuya; Sato, Kazunori
2014-03-01
Based upon ab initio electronic structure calculations, I will discuss the general rules for negative effective U systems: (1) exchange-correlation-induced negative effective U, caused by the stability of the exchange-correlation energy under Hund's rule in high-spin ground states of the d5 configuration, and (2) charge-excitation-induced negative effective U, caused by the stability of the chemical bond in the closed shells of the s2, p6, and d10 configurations. I will show calculated results for negative effective U systems such as hole-doped CuAlO2 and CuFeS2. Based on total energy calculations of the antiferromagnetic and ferromagnetic states, I will discuss the magnetic phase diagram and superconductivity upon hole doping. I will also discuss computational materials design methods for high-Tc superconductors using ab initio calculations that go beyond the LDA, together with multi-scale simulations.
First-principles calculations on thermodynamic properties of BaTiO3 rhombohedral phase.
Bandura, Andrei V; Evarestov, Robert A
2012-07-05
Calculations based on the linear combination of atomic orbitals have been performed for the low-temperature phase of BaTiO3 crystal. Structural and electronic properties, as well as phonon frequencies, were obtained using the hybrid PBE0 exchange-correlation functional. The calculated frequencies and total energies at different volumes have been used to determine the equation of state and the thermal contribution to the Helmholtz free energy within the quasiharmonic approximation. For the first time, the bulk modulus, volume thermal expansion coefficient, heat capacity, and Grüneisen parameters of the BaTiO3 rhombohedral phase have been estimated at zero pressure and temperatures from 0 to 200 K, based on the results of first-principles calculations. An empirical equation has been proposed to reproduce the temperature dependence of the calculated quantities. The agreement between the theoretical and experimental thermodynamic properties was found to be satisfactory. Copyright © 2012 Wiley Periodicals, Inc.
Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices
NASA Astrophysics Data System (ADS)
Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The
2018-01-01
ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
NASA Astrophysics Data System (ADS)
Kurihara, Osamu; Kim, Eunjoo; Kunishima, Naoaki; Tani, Kotaro; Ishikawa, Tetsuo; Furuyama, Kazuo; Hashimoto, Shozo; Akashi, Makoto
2017-09-01
A tool was developed to facilitate the calculation of early internal doses to residents involved in the Fukushima nuclear disaster, based on atmospheric transport and dispersion model (ATDM) simulations performed using the Worldwide version of the System for Prediction of Environmental Emergency Dose Information, 2nd version (WSPEEDI-II), together with personal behavior data containing the history of individuals' whereabouts after the accident. The tool generates hourly-averaged air concentration data for the simulation grid nearest to an individual's whereabouts using WSPEEDI-II datasets for the subsequent calculation of internal doses due to inhalation. This paper presents an overview of the developed tool and provides tentative comparisons between direct measurement-based and ATDM-based results for the internal doses received by 421 persons for whom personal behavior data were available.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system for electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient, closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides that can be analyzed individually; calculates spectral coefficients of the modal fields excited in the waveguides using a database of statistical impedance boundary conditions that incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model, based on those statistical parameters, from which predictions of communications capability may be made.
NASA Technical Reports Server (NTRS)
Carreno, Victor
2006-01-01
This document describes a method to demonstrate that a UAS operating in the NAS can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for the UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS achieves an equivalent level of safety for collision risk if the risk ratio is less than or equal to one. The probabilities of collision for the UAS and the manned aircraft are calculated using event/fault trees.
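A minimal numerical sketch of the risk-ratio criterion, with entirely hypothetical event-tree probabilities (the document gives no numbers, and real event/fault trees have many more branches):

```python
def collision_probability(p_encounter, p_detect, p_avoid):
    """Simple event-tree estimate: a collision requires an encounter
    followed by either a detection failure or, after detection, an
    avoidance failure."""
    p_fail = (1.0 - p_detect) + p_detect * (1.0 - p_avoid)
    return p_encounter * p_fail

# Entirely hypothetical inputs, per flight hour.
p_uas = collision_probability(1e-4, p_detect=0.95, p_avoid=0.90)
p_manned = collision_probability(1e-4, p_detect=0.98, p_avoid=0.95)
risk_ratio = p_uas / p_manned
equivalent_safety = risk_ratio <= 1.0  # the acceptance criterion
```

With these made-up inputs the ratio exceeds one, so this hypothetical UAS would not meet the equivalent-level-of-safety criterion.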
42 CFR 102.81 - Calculation of benefits for lost employment income.
Code of Federal Regulations, 2011 CFR
2011-10-01
... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...
42 CFR 102.81 - Calculation of benefits for lost employment income.
Code of Federal Regulations, 2013 CFR
2013-10-01
... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...
42 CFR 102.81 - Calculation of benefits for lost employment income.
Code of Federal Regulations, 2014 CFR
2014-10-01
... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...
42 CFR 102.81 - Calculation of benefits for lost employment income.
Code of Federal Regulations, 2012 CFR
2012-10-01
... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...
DOE Office of Scientific and Technical Information (OSTI.GOV)
HU TA
2009-10-26
We assess the steady-state flammability level under normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for the 177 tanks under various scenarios.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-04
... NMAC) addition of in subsections methodology for (A) and (B). fugitive dust control permits, revised... fee Fee Calculations and requirements for Procedures. fugitive dust control permits. 9/7/2004 Section... schedule based on acreage, add and update calculation methodology used to calculate non- programmatic dust...
42 CFR 102.81 - Calculation of benefits for lost employment income.
Code of Federal Regulations, 2010 CFR
2010-10-01
... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...
Remedial Instruction to Enhance Mathematical Ability of Dyscalculics
ERIC Educational Resources Information Center
Kumar, S. Praveen; Raja, B. William Dharma
2012-01-01
The ability to do arithmetic calculations is essential to school-based learning and skill development in an information-rich society. Arithmetic is a basic academic skill needed for learning, encompassing skills such as counting, calculating, and reasoning that are used to perform mathematical calculations. Unfortunately, many…
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
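The general shape of a simulation-based power calculation using the likelihood ratio test can be sketched as below. This is the brute-force variant (simulate, fit both nested models, repeat), with toy normal models standing in for the NONMEM population-PK likelihoods; the Monte Carlo Mapped Power method the paper extends avoids re-estimating every simulated dataset. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
CHI2_1_CRIT_95 = 3.841458820694124  # chi-square(1) critical value, alpha = 0.05

def lrt_power(n_per_arm, beta, sigma, n_sim=500):
    """Estimate power to detect a binary covariate effect by simulating
    datasets and applying a likelihood-ratio test between nested normal
    models (full: group-specific means; null: one common mean)."""
    hits = 0
    for _ in range(n_sim):
        x = np.repeat([0.0, 1.0], n_per_arm)
        y = 1.0 + beta * x + rng.normal(0.0, sigma, x.size)
        rss_full = (np.sum((y[:n_per_arm] - y[:n_per_arm].mean()) ** 2)
                    + np.sum((y[n_per_arm:] - y[n_per_arm:].mean()) ** 2))
        rss_null = np.sum((y - y.mean()) ** 2)
        lrt = x.size * np.log(rss_null / rss_full)  # -2 * log-likelihood ratio
        hits += int(lrt > CHI2_1_CRIT_95)
    return hits / n_sim

power = lrt_power(n_per_arm=30, beta=0.5, sigma=0.8)
```

Re-running this for increasing `n_per_arm` until the estimated power exceeds 80% mirrors the sample-size selection step in the workflow described above, after which candidate designs can be compared for cost.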
An Adaptive Nonlinear Basal-Bolus Calculator for Patients With Type 1 Diabetes
Boiroux, Dimitri; Aradóttir, Tinna Björk; Nørgaard, Kirsten; Poulsen, Niels Kjølstad; Madsen, Henrik; Jørgensen, John Bagterp
2016-01-01
Background: Bolus calculators help patients with type 1 diabetes to mitigate the effect of meals on their blood glucose by administering a large amount of insulin at mealtime. Intraindividual changes in a patient's physiology and nonlinearity in insulin-glucose dynamics pose a challenge to the accuracy of such calculators. Method: We propose a method based on a continuous-discrete unscented Kalman filter to continuously track the postprandial glucose dynamics and the insulin sensitivity. We augment the Medtronic Virtual Patient (MVP) model to simulate noise-corrupted data from a continuous glucose monitor (CGM). The basal rate is determined by calculating the steady state of the model and is adjusted once a day before breakfast. The bolus size is determined by optimizing the postprandial glucose values based on an estimate of the insulin sensitivity and states, as well as the announced meal size. Following meal announcements, the meal compartment and the meal time constant are estimated; otherwise, insulin sensitivity is estimated. Results: We compare the performance of a conventional linear bolus calculator with the proposed bolus calculator. The proposed basal-bolus calculator significantly improves the time spent in the glucose target range (P < .01) compared to the conventional bolus calculator. Conclusion: An adaptive nonlinear basal-bolus calculator can efficiently compensate for physiological changes. Further clinical studies will be needed to validate the results. PMID:27613658
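The conventional linear bolus calculator used as the comparison baseline typically follows a standard formula: carbohydrate dose plus glucose correction minus insulin on board. A sketch with illustrative parameter values (the paper does not specify these settings):

```python
def bolus_units(carbs_g, bg, target, icr, isf, iob=0.0):
    """Conventional linear bolus: carbohydrate dose (carbs / ICR) plus
    glucose correction ((BG - target) / ISF) minus insulin on board,
    floored at zero. ICR in g/U, BG/target/ISF in mmol/L (per U)."""
    return max(0.0, carbs_g / icr + (bg - target) / isf - iob)

# Illustrative settings: 60 g meal, glucose 9 mmol/L, target 6 mmol/L,
# ICR 10 g/U, ISF 2 mmol/L per U, 1 U of insulin still on board.
dose = bolus_units(60.0, 9.0, 6.0, 10.0, 2.0, iob=1.0)  # 6 + 1.5 - 1 = 6.5 U
```

The fixed ratios ICR and ISF are exactly what the adaptive scheme in the abstract replaces with continuously estimated insulin sensitivity.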
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows the user to build a precise voxel model consisting of pixel-based voxel cells as small as 0.4×0.4×2.0 mm³ in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining high-accuracy dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.
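The memory and run-time benefit of combining voxel sizes can be illustrated with a simple cell-count estimate. The region sizes and the coarse voxel size below are assumptions for illustration; only the 0.4×0.4×2.0 mm fine voxel size comes from the abstract:

```python
import math

def uniform_cells(shape_mm, voxel_mm):
    """Number of voxels needed to tile a box with a uniform grid."""
    return math.prod(math.ceil(s / v) for s, v in zip(shape_mm, voxel_mm))

# Whole-head region vs a small region of interest (illustrative sizes)
head = (200.0, 200.0, 200.0)   # mm
roi = (40.0, 40.0, 40.0)       # mm, modelled finely

fine = (0.4, 0.4, 2.0)         # fine voxel size from the paper
coarse = (4.0, 4.0, 10.0)      # assumed coarse size for the outer region

all_fine = uniform_cells(head, fine)
multistep = uniform_cells(roi, fine) + uniform_cells(head, coarse)
print(all_fine, multistep)     # the multistep model needs ~100x fewer cells
```

Fewer cells means fewer geometry lookups per Monte-Carlo history, which is where the speed-up comes from.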
Code of Federal Regulations, 2013 CFR
2013-01-01
... CONTROL (REGULATION Y) Pt. 225, App. A Appendix A to Part 225—Capital Adequacy Guidelines for Bank Holding... (Basle Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Code of Federal Regulations, 2011 CFR
2011-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
ERIC Educational Resources Information Center
Schulz, Andreas
2018-01-01
Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…
Code of Federal Regulations, 2014 CFR
2014-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Code of Federal Regulations, 2013 CFR
2013-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Code of Federal Regulations, 2010 CFR
2010-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Code of Federal Regulations, 2014 CFR
2014-01-01
... CONTROL (REGULATION Y) Pt. 225, App. A Appendix A to Part 225—Capital Adequacy Guidelines for Bank Holding... (Basle Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
[Study on spectrum analysis of X-ray based on rotational mass effect in special relativity].
Yu, Zhi-Qiang; Xie, Quan; Xiao, Qing-Quan
2010-04-01
Based on special relativity, the formation mechanism of characteristic X-rays has been studied, and the influence of the rotational mass effect on the X-ray spectrum has been given. A calculation formula for the X-ray wavelength based upon special relativity was derived. Error analysis was carried out systematically for the calculated values of the characteristic wavelength, and the rules governing the relative error were obtained. The calculated values are very close to the experimental values, and the influence of the rotational mass effect on the characteristic wavelength becomes more evident as the atomic number increases. The result of the study provides a useful reference for the spectral analysis of characteristic X-rays in applications.
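While the paper's derived wavelength formula is not reproduced here, the reason the effect grows with atomic number can be illustrated: in a hydrogen-like picture, a K-shell electron moves at a characteristic speed v ≈ Zαc, so its relativistic mass increase grows rapidly with Z. A rough sketch under that hydrogen-like assumption:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def gamma_k_shell(z):
    """Lorentz factor for an electron at the characteristic K-shell
    speed v = Z*alpha*c (hydrogen-like estimate, illustrative only)."""
    v_over_c = z * ALPHA
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for z in (13, 29, 47, 74):  # Al, Cu, Ag, W
    # Fractional mass increase: <0.5% for Al, ~19% for W
    print(z, gamma_k_shell(z) - 1.0)
```

The growing mass correction shifts the transition energies, and hence the characteristic wavelengths, increasingly for heavy elements, consistent with the trend reported in the abstract.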
Rotational stellar structures based on the Lagrangian variational principle
NASA Astrophysics Data System (ADS)
Yasutake, Nobutoshi; Fujisawa, Kotaro; Yamada, Shoichi
2017-06-01
A new method for multi-dimensional stellar structures is proposed in this study. For stellar evolution calculations, the Henyey method is now the de facto standard, but it basically assumes spherical symmetry. One of the difficulties in deformed stellar-evolution calculations is tracing the potentially complex movements of each fluid element. Our new method, by contrast, is well suited to following such movements, since it is based on Lagrangian coordinates. The scheme is also based on the variational principle, which has been applied to studies of pasta structures inside neutron stars. Our scheme could be a major breakthrough for evolution calculations of any type of deformed star: proto-planets, proto-stars, proto-neutron stars, etc.
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data on charge distributions and various event-by-event correlations among fragments. The calculations are confined to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.
Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).
Bag, Arijit; Ghorai, Pradip Kr
2016-05-01
To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare the results with experimental values and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to those calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shear, principal, and equivalent strains in equal-channel angular deformation
NASA Astrophysics Data System (ADS)
Xia, K.; Wang, J.
2001-10-01
The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
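The quoted numbers can be reproduced from the simple-shear deformation gradient for a single pass with an intersecting angle of π/2, for which the total shear is γ = 2 cot(Φ/2) = 2: the principal stretches of the left Cauchy-Green tensor give a maximum logarithmic strain of ln(1+√2) ≈ 0.881 at 22.5° (π/8). A minimal sketch:

```python
import numpy as np

gamma = 2.0  # total shear for a channel intersection angle of pi/2
F = np.array([[1.0, gamma],
              [0.0, 1.0]])     # simple-shear deformation gradient

# Left Cauchy-Green tensor and its eigen-decomposition
B = F @ F.T
evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order

# Maximum principal (logarithmic) strain and its direction
lam_max = np.sqrt(evals[-1])              # largest principal stretch, 1 + sqrt(2)
eps_max = np.log(lam_max)
angle = np.degrees(np.arctan(evecs[1, -1] / evecs[0, -1]))

print(round(eps_max, 3))   # ~0.881, matching the paper
print(round(angle, 1))     # ~22.5 deg (pi/8) from the reference axis
```

The angle here is measured from the coordinate axis of the shear frame; mapping it to the longitudinal direction of the exit channel follows the geometric convention in the paper.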
The influence of chemical mechanisms on PDF calculations of non-premixed turbulent flames
NASA Astrophysics Data System (ADS)
Pope, Stephen B.
2005-11-01
A series of calculations is reported of the Barlow & Frank non-premixed piloted jet flames D, E and F, with the aim of determining the level of description of the chemistry necessary to account accurately for the turbulence-chemistry interactions observed in these flames. The calculations are based on the modeled transport equation for the joint probability density function of velocity, turbulence frequency and composition (enthalpy and species mass fractions). Seven chemical mechanisms for methane are investigated, ranging from a five-step reduced mechanism to the 53-species GRI 3.0 mechanism. The results show that, for C-H-O species, accurate results are obtained with the GRI 2.11 and GRI 3.0 mechanisms, as well as with 12- and 15-step reduced mechanisms based on GRI 2.11. However, significantly inaccurate calculations result from use of the 5-step reduced mechanism (based on GRI 2.11), and from two different 16-species skeletal mechanisms. As has previously been observed, GRI 3.0 over-predicts NO by up to a factor of two, whereas NO is calculated reasonably accurately by GRI 2.11 and the 15-step reduced mechanism.
Development and evaluation of an audiology app for iPhone/iPad mobile devices.
Larrosa, Francisco; Rama-Lopez, Julio; Benitez, Jesus; Morales, Jose M; Martinez, Asuncion; Alañon, Miguel A; Arancibia-Tagle, Diego; Batuecas-Caletrio, Angel; Martinez-Lopez, Marta; Perez-Fernandez, Nicolas; Gimeno, Carlos; Ispizua, Angel; Urrutikoetxea, Alberto; Rey-Martinez, Jorge
2015-01-01
The application described in this study appears to be accurate and valid, thus allowing calculation of a hearing handicap and assessment of the pure-tone air conduction threshold with iPhone/iPad devices. To develop and evaluate a newly developed professional, computer-based hearing handicap calculator and a manual hearing sensitivity assessment test for the iPhone and iPad (AudCal). Multi-center prospective non-randomized validation study. One hundred and ten consecutive adult participants underwent two hearing evaluations, standard audiometry and a pure-tone air conduction test using AudCal with an iOS device. The hearing handicap calculation accuracy was evaluated by comparing AudCal with a web-based calculator. Hearing loss was found in 83 and 84 out of 220 standard audiometries and AudCal hearing tests (Cohen's Kappa = 0.89). The mean difference between AudCal and standard audiogram thresholds was -0.21 ± 6.38 dB HL. Excellent reliability and concordance between standard audiometry and the application's hearing loss assessment test were obtained (Cronbach's alpha = 0.96; intra-class correlation coefficient = 0.93). AudCal and the web-based calculator were perfectly correlated (Pearson's r = 1).
NASA Astrophysics Data System (ADS)
Joshi, Rachana; Pandey, Nidhi; Yadav, Swatantra Kumar; Tilak, Ragini; Mishra, Hirdyesh; Pokharia, Sandeep
2018-07-01
The hydrazino Schiff base (E)-4-amino-5-[N'-(2-nitro-benzylidene)-hydrazino]-2,4-dihydro-[1,2,4]triazole-3-thione was synthesized and structurally characterized by elemental analysis, FT-IR, Raman, 1H and 13C-NMR and UV-Vis studies. Density functional theory (DFT) based electronic structure calculations were performed at the B3LYP/6-311++G(d,p) level of theory. A comparative analysis of calculated vibrational frequencies with experimental vibrational frequencies was carried out and significant bands were assigned. The results indicate a good correlation (R2 = 0.9974) between experimental and theoretical IR frequencies. The experimental 1H and 13C-NMR resonance signals were also compared to the calculated values. The theoretical UV-Vis spectral studies were carried out using the time-dependent DFT method in the gas phase and the IEFPCM model for the solvent-field calculation. The geometrical parameters were calculated in the gas phase. Atomic charges at selected atoms were calculated by the Mulliken population analysis (MPA), Hirshfeld population analysis (HPA) and Natural population analysis (NPA) schemes. The molecular electrostatic potential (MEP) map was calculated to assign reactive sites on the surface of the molecule. The conceptual-DFT based global and local reactivity descriptors were calculated to obtain an insight into the reactivity behaviour. The frontier molecular orbital analysis was carried out to study the charge transfer within the molecule. The detailed natural bond orbital (NBO) analysis was performed to obtain an insight into the intramolecular conjugative electronic interactions. The title compound was screened for in vitro antifungal activity against four fungal strains and the results obtained are explained through in silico molecular docking studies.
Calculation of {alpha}/{gamma} equilibria in SA508 grade 3 steels for intercritical heat treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B.J.; Kim, H.D.; Hong, J.H.
1998-05-01
An attempt has been made to suggest an optimum temperature for intercritical heat treatment of an SA508 grade 3 steel for nuclear pressure vessels, based on thermodynamic calculation of the {alpha}/{gamma} phase equilibria. A thermodynamic database constructed for the Fe-Mn-Ni-Mo-Cr-Si-V-Al-C-N ten-component system and an empirical criterion that the amount of reformed austenite should be around 40 pct were used for thermodynamic calculation and derivation of the optimum heat-treatment temperature, respectively. The calculated optimum temperature, 720 C, was in good agreement with an experimentally determined temperature of 725 C obtained through an independent experimental investigation of the same steel. The agreement between the calculated and measured fraction of reformed austenite during the intercritical heat treatment was also confirmed. Based on the agreement between calculation and experiment, it could be concluded that thermodynamic calculations can be successfully applied to materials and/or process design as an additive tool to the already established technology, and that the currently constructed thermodynamic database for steel systems shows an accuracy that makes such applications possible.
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE LightSpeed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans.
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okutsu, N.; Shimamura, K.; Shimizu, E.
To elucidate the effect of radicals on DNA base pairs, we investigated the attacking mechanism of OH and H radicals on the G-C and A-T base pairs, using density functional theory (DFT) calculations in water approximated by the continuum solvation model. The DFT calculations revealed that the OH radical abstracts the hydrogen atom of an NH₂ group of the G or A base and induces a tautomeric reaction for an A-T base pair more significantly than for a G-C base pair. On the other hand, the H radical prefers to bind to the cytosine NH₂ group of the G-C base pair and induce a tautomeric reaction from G-C to G*-C*, whose activation free energy is considerably small (−0.1 kcal/mol) in comparison with that (42.9 kcal/mol) for the reaction of an A-T base pair. Accordingly, our DFT calculations elucidated that OH and H radicals have a significant effect on A-T and G-C base pairs, respectively. This finding will be useful for predicting the effect of radiation on the genetic information recorded in the base sequences of DNA duplexes.
NASA Technical Reports Server (NTRS)
Cheng, H. K.; Wong, Eric Y.; Dogra, V. K.
1991-01-01
Grad's thirteen-moment equations are applied to the flow behind a bow shock under the formalism of a thin shock layer. Comparison of this version of the theory with Direct Simulation Monte Carlo calculations of flows about a flat plate at finite attack angle has lent support to the approach as a useful extension of the continuum model for studying translational nonequilibrium in the shock layer. This paper reassesses the physical basis and limitations of the development with additional calculations and comparisons. The streamline correlation principle, which allows transformation of the 13-moment based system to one based on the Navier-Stokes equations, is extended to a three-dimensional formulation. The development yields a strip theory for planar lifting surfaces at finite incidences. Examples reveal that the lift-to-drag ratio is little influenced by planform geometry and varies with altitudes according to a 'bridging function' determined by correlated two-dimensional calculations.
NASA Astrophysics Data System (ADS)
Chan, GuoXuan; Wang, Xin
2018-04-01
We consider two typical approximations that are used in the microscopic calculations of double-quantum-dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracies of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation can only worsen in dimensions higher than one.
a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
NASA Astrophysics Data System (ADS)
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely used to describe the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has always been the focus of research and applications using fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is also tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.
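The paper's analytical expression is not reproduced here, but the underlying idea — that the pore-space fractal dimension is the power-law exponent of the cumulative pore size distribution, N(>r) ∝ r^(−Df) — can be sketched with synthetic data. The value Df = 1.6, the minimum radius and the sample size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
df_true, r_min = 1.6, 1.0

# Synthetic pore radii obeying the fractal law N(>r) ~ r^(-Df)
# (inverse-transform sampling of the Pareto-type distribution)
u = rng.random(50_000)
radii = r_min * (1.0 - u) ** (-1.0 / df_true)

# Count pores larger than r on a log-spaced grid and fit the log-log slope
r_grid = np.logspace(0.1, 1.5, 20)            # stay inside the sampled range
counts = np.array([(radii > r).sum() for r in r_grid])
slope, _ = np.polyfit(np.log(r_grid), np.log(counts), 1)

print(round(-slope, 2))  # ~1.6: the recovered pore-space fractal dimension
```

In practice the cumulative counts come from measured pore size distribution data (e.g. mercury intrusion), and the fitted slope plays the role of Df.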
NASA Astrophysics Data System (ADS)
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2017-04-01
A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)³ (M: number of basis functions).
NESSY: NLTE spectral synthesis code for solar and stellar atmospheres
NASA Astrophysics Data System (ADS)
Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.
2017-07-01
Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling of the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum. This calculation is in good agreement with the available observations.
NASA Astrophysics Data System (ADS)
Clamens, Olivier; Lecerf, Johann; Hudelot, Jean-Pascal; Duc, Bertrand; Cadiou, Thierry; Blaise, Patrick; Biard, Bruno
2018-01-01
CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA conditions. In order to produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) situated in transient rods inside the reactor core. The shapes of the power transients depend on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by conversion of the 3He gas density into units of reactivity. So, it is of utmost importance to properly characterize the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations and complemented with measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built based on the CFD calculations and validated against preliminary tests in the CABRI transient system. Studies also show that it is harder to predict the depressurization during the power transients, because neutron/3He capture reactions induce gas heating. This phenomenon can be studied by a multiphysics approach, in which reaction rates are calculated with a Monte Carlo code and the resulting heating effect is studied with the validated CFD simulation.
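The conversion of 3He gas density into units of reactivity mentioned above amounts to evaluating a worth curve. A minimal sketch with an entirely hypothetical worth table follows (the real curve comes from neutronics calculations for the CABRI core):

```python
import numpy as np

# Hypothetical worth curve: reactivity [pcm] inserted as a function of the
# remaining 3He number density (normalised to the initial fill)
density_grid = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
worth_pcm = np.array([4000.0, 3100.0, 2200.0, 1400.0, 650.0, 0.0])

def injected_reactivity(rel_density):
    """Convert the 3He density to injected reactivity by table interpolation."""
    return float(np.interp(rel_density, density_grid, worth_pcm))

# Density lags the pressure drop (per the CFD result), so converting a
# measured pressure directly would overestimate the early reactivity.
for rho in (1.0, 0.5, 0.1):
    print(rho, injected_reactivity(rho))
```

This is why the density evolution, not the pressure signal, must be mastered: the same worth table evaluated at the wrong density gives the wrong transient shape.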
An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks
NASA Astrophysics Data System (ADS)
Zhao, Peng-yuan; Huang, Xiao-ping
2018-03-01
The traditional spectral analysis method, which is based on a linear system assumption, introduces errors when calculating the fatigue damage of details in liquid cargo tanks, because of the nonlinear relationship between the dynamic stress and the ship acceleration. An improved spectral analysis method for the assessment of fatigue damage in details of a liquid cargo tank is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves of different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of each harmonic component with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated based on the PSD of stress. Taking an independent tank in an LNG carrier as an example, the improved spectral analysis method is shown to be much more accurate than the traditional one, by comparing the calculated damage results with those obtained by the time domain method. The proposed spectral analysis method is thus more accurate for calculating the fatigue damage of details in ship liquid cargo tanks.
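Once the stress PSD is available, the fatigue damage can be computed from it. A common narrow-band (Rayleigh) estimate combined with Miner's rule is sketched below; the S-N curve parameters and the PSD shape are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def _trapz(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def narrowband_damage(freqs, psd, hours, sn_c=1.0e12, sn_m=3.0):
    """Narrow-band (Rayleigh) fatigue damage estimate from a stress PSD,
    using Miner's rule with the S-N curve N = C * S^(-m). Illustrative."""
    m0 = _trapz(psd, freqs)                      # variance of the stress
    m2 = _trapz(psd * freqs ** 2, freqs)         # second spectral moment
    nu0 = np.sqrt(m2 / m0)                       # mean zero-crossing rate [Hz]
    t = hours * 3600.0
    # Expected damage for Rayleigh-distributed stress amplitudes
    return (nu0 * t / sn_c) * (np.sqrt(2.0 * m0)) ** sn_m \
        * gamma_fn(1.0 + sn_m / 2.0)

# Illustrative single-peaked stress PSD in MPa^2/Hz
f = np.linspace(0.01, 2.0, 400)
psd = 100.0 * np.exp(-((f - 0.5) / 0.1) ** 2)
d = narrowband_damage(f, psd, hours=10_000)
print(d)   # Miner damage fraction; failure is predicted when it reaches 1
```

The paper's contribution lies upstream of this step: constructing a PSD that correctly reflects the nonlinear stress response before any such damage formula is applied.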
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), along with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations tended to be weakest at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibration of FWI-based fire prediction models.
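The daily Drought Code update at the heart of these calculations follows the standard FWI System equations (Van Wagner, 1987); the sketch below reflects our reading of those equations and uses an illustrative July day-length factor:

```python
import math

def drought_code(dc_prev, temp_c, rain_mm, lf=6.4):
    """One daily Drought Code (DC) update of the Canadian FWI System,
    following the standard Van Wagner (1987) equations (sketch).
    lf is the month-dependent day-length factor (6.4 ~ July, N hemisphere)."""
    dc = dc_prev
    if rain_mm > 2.8:                          # rain phase
        rd = 0.83 * rain_mm - 1.27             # effective rainfall
        q0 = 800.0 * math.exp(-dc / 400.0)     # moisture equivalent of DC
        qr = q0 + 3.937 * rd                   # moisture after rain
        dc = max(0.0, 400.0 * math.log(800.0 / qr))
    # Drying phase: potential evapotranspiration
    pe = 0.36 * (max(temp_c, -2.8) + 2.8) + lf
    return dc + 0.5 * max(0.0, pe)

dc = 100.0
print(drought_code(dc, 20.0, 0.0))    # warm dry day: DC rises
print(drought_code(dc, 20.0, 20.0))   # heavy rain day: DC drops
```

Because the rain phase depends so directly on daily precipitation, the precipitation biases described in the abstract translate straight into DC biases like those over the Mato Grosso and Southeast Asia.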
Godwin, Zachary; Tan, James; Bockhold, Jennifer; Ma, Jason; Tran, Nam K
2015-06-01
We have developed a novel software application that provides a simple and interactive Lund-Browder diagram for automatic calculation of total body surface area (TBSA) burned, fluid formula recommendations, and serial wound photography on a smart device platform. The software was developed for the iPad (Apple, Cupertino, CA) smart device platform. Ten burns ranging from 5 to 95% TBSA were computer generated on a patient care simulator using Adobe Photoshop CS6 (Adobe, San Jose, CA). Burn clinicians first calculated the TBSA using a paper-based Lund-Browder diagram. Following a one-week "washout period", the same clinicians calculated TBSA using the smart device application. Simulated burns were presented in random order and clinicians were timed. Percent TBSA burned calculated by Peregrine and by the paper-based Lund-Browder diagram was similar (29.53 [25.57] vs. 28.99 [25.01], p=0.22, n=7). On average, Peregrine allowed users to calculate burn size significantly faster than the paper form (58.18 [31.46] vs. 90.22 [60.60] s, p<0.001, n=7). The smart device application also provided 5-megapixel photography capabilities and an acute burn resuscitation fluid calculator. We developed an innovative smart device application that makes accurate and rapid burn size assessment cost-effective and widely accessible. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
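The TBSA arithmetic behind such a calculator can be sketched with the simpler adult "rule of nines" (the app itself implements the age-adjusted Lund-Browder chart). The Parkland formula shown is the common resuscitation fluid formula, assumed here rather than confirmed by the abstract:

```python
# Adult "rule of nines" body-region percentages: a simplification of the
# age-adjusted Lund-Browder chart that the application implements
RULE_OF_NINES = {
    "head_neck": 9.0, "left_arm": 9.0, "right_arm": 9.0,
    "anterior_trunk": 18.0, "posterior_trunk": 18.0,
    "left_leg": 18.0, "right_leg": 18.0, "perineum": 1.0,
}

def tbsa_burned(fraction_burned_by_region):
    """Sum %TBSA from the fraction of each region burned (0.0-1.0)."""
    return sum(RULE_OF_NINES[region] * frac
               for region, frac in fraction_burned_by_region.items())

def parkland_fluid_ml(weight_kg, tbsa_pct):
    """Parkland formula: total crystalloid over the first 24 h (half of it
    in the first 8 h), 4 mL x kg x %TBSA."""
    return 4.0 * weight_kg * tbsa_pct

# Whole right arm plus half the anterior trunk: 9 + 9 = 18 %TBSA
burn = tbsa_burned({"right_arm": 1.0, "anterior_trunk": 0.5})
print(burn)                          # 18.0 %TBSA
print(parkland_fluid_ml(70.0, burn))  # 5040.0 mL over 24 h
```

The Lund-Browder version replaces the fixed percentages with age-dependent ones (the head's share shrinks and the legs' grow from infancy to adulthood), which is exactly the bookkeeping the app automates.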
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
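The random-sampling idea behind XSUSA-style propagation can be sketched in toy form: draw cross-section sets from a multivariate normal defined by covariance data, run each set through a (here, stand-in) lattice calculation, and take statistics of the outputs. All names and numbers are illustrative; this is not the HELIOS2 or XSUSA interface.

```python
# Toy sketch of random-sampling nuclear data uncertainty propagation.
import numpy as np

rng = np.random.default_rng(42)

# Two correlated "multigroup cross sections" with an assumed covariance matrix
mean_xs = np.array([1.20, 0.80])
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0009]])

def lattice_keff(xs):
    # Stand-in for a lattice calculation mapping cross sections to k-eff;
    # a linear response is assumed purely for illustration.
    return 1.0 + 0.05 * xs[0] - 0.03 * xs[1]

samples = rng.multivariate_normal(mean_xs, cov, size=500)
keffs = np.array([lattice_keff(s) for s in samples])
# Output uncertainty induced by the input covariance:
print(keffs.mean(), keffs.std(ddof=1))
```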
The advantage of calculating emission reduction with local emission factor in South Sumatera region
NASA Astrophysics Data System (ADS)
Buchari, Erika
2017-11-01
Greenhouse gases (GHG), which have different Global Warming Potentials, are usually expressed in CO2 equivalent. Germany succeeded in reducing CO2 emissions in the 1990s, while Japan has increased the load factor of public transport since 2001. The Indonesian National Medium Term Development Plan, 2015-2019, set a target of minimum 26% and maximum 41% national emission reduction by 2019. The Intergovernmental Panel on Climate Change (IPCC) defines three levels of accuracy in counting GHG emissions: tier 1, tier 2, and tier 3. In tier 1, the calculation is based on fuel used and average (default) emission factors obtained from statistical data, while in tier 2 the calculation is based on fuel used and local emission factors. Tier 3 is more accurate than tiers 1 and 2, with the calculation based on fuel use derived from a modelling method or from direct measurement. This paper aims to compare the tier 2 and tier 3 calculations for the South Sumatera region. The 2012 Regional Action Plan for Greenhouse Gases of South Sumatera estimates emissions for 2020 of about 6,569,000 tons per year without mitigation, while the tier 3 calculation gives about 6,229,858.468 tons per year. It was found that the tier 3 calculation is more accurate in terms of the fuel used by different vehicle types, so that mitigation actions can be planned more realistically.
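The tier 1 vs. tier 2 distinction reduces to which emission factor multiplies the fuel consumption. A minimal sketch, with purely illustrative factors (not official IPCC defaults or South Sumatera values):

```python
# Hedged sketch of tier 1 vs. tier 2 style emission estimates:
# emissions = fuel consumed x emission factor (IPCC-style accounting).
# All numbers below are illustrative placeholders, not official factors.
def co2_emissions_ton(fuel_tj, emission_factor_ton_per_tj):
    return fuel_tj * emission_factor_ton_per_tj

fuel_used_tj = 1000.0        # hypothetical regional fuel consumption, TJ
tier1_default_ef = 74.1      # hypothetical default factor, t CO2/TJ
tier2_local_ef = 71.5        # hypothetical locally measured factor, t CO2/TJ

tier1 = co2_emissions_ton(fuel_used_tj, tier1_default_ef)
tier2 = co2_emissions_ton(fuel_used_tj, tier2_local_ef)
print(tier1, tier2)  # the local factor shifts the regional total
```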
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value of the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into classic SDP and applied to a case study of Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
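The conditional-probability step can be illustrated with a single Archimedean copula. The sketch below uses a Clayton copula (one member of the Archimedean family; the paper uses three) and derives P(V <= v | U = u) as the partial derivative of C(u, v) with respect to u, from which a transition probability between discretized inflow classes follows. The parameter theta and the CDF values are illustrative.

```python
# Sketch of a copula-based transition probability (Clayton copula only).
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

def conditional_v_given_u(v, u, theta):
    """P(V <= v | U = u) = dC(u, v)/du for the Clayton copula."""
    return u**(-theta - 1.0) * (u**(-theta) + v**(-theta) - 1.0)**(-1.0/theta - 1.0)

# Transition probability between discretized inflow classes of two
# adjacent periods: P(v1 < V <= v2 | U = u), all values illustrative.
theta = 2.0
u = 0.6    # marginal CDF value of this period's inflow
p = conditional_v_given_u(0.8, u, theta) - conditional_v_given_u(0.4, u, theta)
print(round(p, 4))  # about 0.55
```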
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method is proposed to quantify the uncertainty of VCs via a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
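A minimal sketch of one of the two VC models on a Gauss synthetic surface, whose analytic volume is known (the integral of e^(-x^2-y^2) over the plane is pi), illustrates the TDR; the grid size and domain are illustrative.

```python
# Sketch: volume under a Gauss synthetic surface on a regular grid DEM
# via the trapezoidal double rule, compared to the analytic value.
import numpy as np

def volume_tdr(z, dx, dy):
    """Trapezoidal double rule over a regular grid of elevations z."""
    # trapezoidal weights: 1 interior, 1/2 edges, 1/4 corners
    w = np.ones_like(z)
    w[0, :] *= 0.5; w[-1, :] *= 0.5
    w[:, 0] *= 0.5; w[:, -1] *= 0.5
    return dx * dy * np.sum(w * z)

n = 201
x = np.linspace(-5, 5, n)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))      # Gaussian synthetic surface
dx = x[1] - x[0]

v = volume_tdr(Z, dx, dx)
print(v, np.pi)  # analytic volume over the plane is pi; truncation is tiny
```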
Health Disparities Calculator (HD*Calc) - SEER Software
Statistical software that generates summary measures to evaluate and monitor health disparities. Users can import SEER data or other population-based health data to calculate 11 disparity measurements.
NASA Astrophysics Data System (ADS)
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram
2015-12-01
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but require the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
Limited Range Sesame EOS for Ta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greeff, Carl William; Crockett, Scott; Rudin, Sven Peter
2017-03-30
A new Sesame EOS table for Ta has been released for testing. It is a limited range table covering T ≤ 26,000 K and ρ ≤ 37.53 g/cc. The EOS is based on earlier analysis using DFT phonon calculations to infer the cold pressure from the Hugoniot. The cold curve has been extended into compression using new DFT calculations. The present EOS covers expansion into the gas phase. It is a multi-phase EOS with distinct liquid and solid phases. A cold shear modulus table (431) is included. This is based on an analytic interpolation of DFT calculations.
Calculation of Hugoniot properties for shocked nitromethane based on the improved Tsien's EOS
NASA Astrophysics Data System (ADS)
Zhao, Bo; Cui, Ji-Ping; Fan, Jing
2010-06-01
We have calculated the Hugoniot properties of shocked nitromethane based on the improved Tsien's equation of state (EOS), optimized against "exact" numerical molecular dynamics data at high temperatures and pressures. Comparison of the results calculated with the improved Tsien's EOS against existing experimental data and direct simulations shows that the improved EOS performs very well in many respects. Because of its simple analytical form, the improved Tsien's EOS is a promising tool for studying condensed explosive detonation coupled with chemical reaction.
Icing Branch Current Research Activities in Icing Physics
NASA Technical Reports Server (NTRS)
Vargas, Mario
2009-01-01
Current development: A grid block transformation scheme has been developed that allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities. A simple ice crystal and sand particle bouncing scheme has been included. An SLD splashing model, based on that developed by William Wright for the LEWICE 3.2.2 software, has been added. A new area-based collection efficiency algorithm will be incorporated that calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.
Luber, Sandra
2017-03-14
We describe the calculation of Raman optical activity (ROA) tensors from density functional perturbation theory, which has been implemented into the CP2K software package. Using the mixed Gaussian and plane waves method, ROA spectra are evaluated in the double-harmonic approximation. Moreover, an approach for the calculation of ROA spectra by means of density functional theory-based molecular dynamics is derived and used to obtain an ROA spectrum via time correlation functions, which paves the way for the calculation of ROA spectra taking into account anharmonicities and dynamic effects at ambient conditions.
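The time-correlation route to a spectrum can be sketched generically: form the autocorrelation of a time series from the dynamics and Fourier transform it. The toy damped-cosine signal below stands in for an actual ROA-relevant property trajectory; the timestep and frequency are illustrative.

```python
# Sketch of obtaining a spectrum from a time correlation function, as in
# the molecular dynamics route described above (toy signal, not ROA data).
import numpy as np

dt = 0.5e-15                                   # assumed MD timestep, s
t = np.arange(4096) * dt
f0 = 3.0e13                                    # a 30 THz toy "vibration"
signal = np.cos(2 * np.pi * f0 * t) * np.exp(-t / 2e-13)

# autocorrelation, then its Fourier transform (Wiener-Khinchin route)
n = len(signal)
acf = np.correlate(signal, signal, mode="full")[n - 1:] / n
spectrum = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(n, d=dt)
peak = freqs[np.argmax(spectrum)]
print(peak)                                    # peaks near 3e13 Hz
```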
Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education
ERIC Educational Resources Information Center
Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.
2014-01-01
Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2013 CFR
2013-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
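The general-rule formula referenced above is APY = 100[(1 + Interest/Principal)^(365/Days in term) - 1]. A minimal sketch with hypothetical dollar figures; the special cases for stepped rates and certain time accounts are not covered here.

```python
# Sketch of the Appendix A general-rule APY formula (hypothetical figures).
def annual_percentage_yield(principal, interest, days_in_term):
    """APY = 100 * [(1 + Interest/Principal)^(365/Days) - 1]."""
    return 100.0 * ((1.0 + interest / principal) ** (365.0 / days_in_term) - 1.0)

# e.g. $1,000 earning $30.37 over a hypothetical 182-day term
apy = annual_percentage_yield(1000.0, 30.37, 182)
print(round(apy, 2))  # about 6.18
```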
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2011 CFR
2011-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2014 CFR
2014-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
40 CFR 74.22 - Actual SO 2 emissions rate.
Code of Federal Regulations, 2014 CFR
2014-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2011 CFR
2011-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO 2 emissions rate.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
Using Clinical Data Standards to Measure Quality: A New Approach.
D'Amore, John D; Li, Chun; McCrary, Laura; Niloff, Jonathan M; Sittig, Dean F; McCoy, Allison B; Wright, Adam
2018-04-01
Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards. This research evaluates the use of clinical document interchange standards as the basis for quality measurement. Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures. Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. The findings validate this approach to improving measure accuracy while maintaining measure certification. Organizations that report care quality should be aware of how the identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic with respect to the issues identified in this research. Schattauer GmbH Stuttgart.
Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope
NASA Astrophysics Data System (ADS)
Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.
2016-03-01
Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for a shaped Cassegrain antenna is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted by Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation method is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The method is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Papadakis, Michael
2005-01-01
Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.
NASA Technical Reports Server (NTRS)
Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard
1991-01-01
Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and the Diendorfer-Uman (DU) models with a channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other hand. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS models. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated based on the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then used with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than the calculation without considering parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
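The weighting described above amounts to averaging hourly incremental generation rates against the parking-activity distribution. A sketch with illustrative numbers:

```python
# Sketch of the weighted-average diurnal vapor generation described above:
# hourly incremental generation rates weighted by the distribution of
# parked vehicles. All numbers are illustrative placeholders.
hourly_rate_g = [0.2, 0.5, 0.9, 1.1, 0.7, 0.3]       # g/vehicle/h, derived
                                                      # from hourly temperature
parked_fraction = [0.30, 0.25, 0.15, 0.10, 0.10, 0.10]  # share of fleet parked

weighted = sum(r * f for r, f in zip(hourly_rate_g, parked_fraction))
unweighted = sum(hourly_rate_g) / len(hourly_rate_g)
print(weighted, unweighted)  # activity-weighted rate vs. plain average
```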
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations of a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
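A Godunov-type finite-volume update can be sketched in its simplest 1D form. The example below solves the inviscid Burgers equation with a Rusanov (local Lax-Friedrichs) flux as a stand-in for an exact Riemann solver; it is illustrative only, not the FlashFlow implementation.

```python
# Minimal sketch of a first-order Godunov-type finite-volume scheme for the
# 1D inviscid Burgers equation with a Rusanov flux (not FlashFlow itself).
import numpy as np

def flux(u):
    return 0.5 * u * u

def rusanov(ul, ur):
    a = np.maximum(np.abs(ul), np.abs(ur))        # local wave speed bound
    return 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)

def step(u, dx, dt):
    f = rusanov(u[:-1], u[1:])                    # interface fluxes
    un = u.copy()
    un[1:-1] -= dt / dx * (f[1:] - f[:-1])        # conservative update
    return un

n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                   # right-moving shock
t, dt = 0.0, 0.4 * dx                             # CFL-limited timestep
while t < 0.2:
    u = step(u, dx, dt)
    t += dt
shock_x = x[np.argmin(np.abs(u - 0.5))]
print(shock_x)  # shock speed is 0.5, so the front sits near x = 0.6
```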
Two- and three-photon ionization of hydrogen and lithium
NASA Technical Reports Server (NTRS)
Chang, T. N.; Poe, R. T.
1977-01-01
We present the detailed result of a calculation on two- and three-photon ionization of hydrogen and lithium based on a recently proposed calculational method. Our calculation has demonstrated that this method is capable of retaining the numerical advantages enjoyed by most of the existing calculational methods and, at the same time, circumventing their limitations. In particular, we have concentrated our discussion on the relative contribution from the resonant and nonresonant intermediate states.
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained by understanding and overcoming the limitations of current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined several factors that are potential sources of inconsistency in characterizing lung nodules: 1) different choices of parameters and algorithms in feature calculation, 2) two CT image dose levels, and 3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of varying these factors on the entropy texture features of lung nodules. CT images of 19 lung nodules from our lung cancer screening program were processed by a CAD tool, which provided contours. The radiomic features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. These results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
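One GLCM-based entropy feature can be sketched as follows; the quantization level count and pixel offset are exactly the kinds of free parameters whose variation the study examines, and the values here are illustrative.

```python
# Sketch of a GLCM-based entropy feature (single offset, illustrative
# quantization; real radiomics pipelines vary these parameters).
import numpy as np

def glcm_entropy(img, levels=8, offset=(0, 1)):
    # quantize intensities to `levels` gray levels
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    dr, dc = offset
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    # accumulate the gray-level co-occurrence matrix
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

rng = np.random.default_rng(0)
nodule = rng.normal(size=(32, 32))   # stand-in for a contoured nodule ROI
e = glcm_entropy(nodule)
print(e)
```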
TrackEtching - A Java based code for etched track profile calculations in SSNTDs
NASA Astrophysics Data System (ADS)
Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.
2017-09-01
A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts based on secondary wavelets has been used to numerically calculate the etched track profile as a function of the etching time. Provision for normal incidence and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of track etch rate as a function of the ion energy have been utilized. The best set of values of the parameters in the expressions can be obtained by comparing with available experimental data. The critical angle for track development can also be calculated using the present code.
NASA Astrophysics Data System (ADS)
Siahlo, Andrei I.; Poklonski, Nikolai A.; Lebedev, Alexander V.; Lebedeva, Irina V.; Popov, Andrey M.; Vyrko, Sergey A.; Knizhnik, Andrey A.; Lozovik, Yurii E.
2018-03-01
Single-layer and bilayer carbon and hexagonal boron nitride nanoscrolls as well as nanoscrolls made of bilayer graphene/hexagonal boron nitride heterostructure are considered. Structures of stable states of the corresponding nanoscrolls prepared by rolling single-layer and bilayer rectangular nanoribbons are obtained based on the analytical model and numerical calculations. The lengths of nanoribbons for which stable and energetically favorable nanoscrolls are possible are determined. Barriers to rolling of single-layer and bilayer nanoribbons into nanoscrolls and barriers to nanoscroll unrolling are calculated. Based on the calculated barriers nanoscroll lifetimes in the stable state are estimated. Elastic constants for bending of graphene and hexagonal boron nitride layers used in the model are found by density functional theory calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig
2016-05-14
We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever they confront a mathematical problem. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participation in the educational intervention of learning the Triangle Technique.
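The underlying arithmetic the technique teaches is a weight-based therapeutic range check. A sketch with a hypothetical drug range (illustrative only, not clinical guidance):

```python
# Sketch of a weight-based pediatric therapeutic range check.
# The mg/kg limits are hypothetical placeholders, NOT clinical guidance.
def therapeutic_range_mg(weight_kg, low_mg_per_kg, high_mg_per_kg):
    return weight_kg * low_mg_per_kg, weight_kg * high_mg_per_kg

def dose_is_safe(ordered_mg, weight_kg, low_mg_per_kg, high_mg_per_kg):
    lo, hi = therapeutic_range_mg(weight_kg, low_mg_per_kg, high_mg_per_kg)
    return lo <= ordered_mg <= hi

# 18 kg child, hypothetical range of 5-10 mg/kg per dose
rng_lo, rng_hi = therapeutic_range_mg(18.0, 5.0, 10.0)
safe = dose_is_safe(150.0, 18.0, 5.0, 10.0)
print(rng_lo, rng_hi, safe)  # 90.0 180.0 True
```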
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.
2010-12-15
The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film, in which the central layer is formed of a composite medium, is proposed to calculate the reflectance spectra of the system. In Bruggeman's effective medium approximation, the effective permittivity of the composite layer is calculated. The proposed model is used to calculate the thickness of the arsenic chalcogenide (AsS{sub 4}) antireflection layer. The optimal AsS{sub 4} layer thickness determined experimentally is close to the results of calculation, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
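The Bruggeman effective-medium condition for a two-component composite, f1*(eps1 - eps_eff)/(eps1 + 2*eps_eff) + (1 - f1)*(eps2 - eps_eff)/(eps2 + 2*eps_eff) = 0, can be solved numerically. The permittivities and fill fraction below are illustrative, not the PbSe/AsS4 values.

```python
# Sketch: effective permittivity of a two-component composite layer in the
# Bruggeman effective-medium approximation (illustrative values).
def bruggeman(eps1, eps2, f1):
    """Solve f1*(e1-ee)/(e1+2ee) + (1-f1)*(e2-ee)/(e2+2ee) = 0 by bisection."""
    g = lambda ee: (f1 * (eps1 - ee) / (eps1 + 2 * ee)
                    + (1 - f1) * (eps2 - ee) / (eps2 + 2 * ee))
    lo, hi = min(eps1, eps2), max(eps1, eps2)   # root is bracketed here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps_eff = bruggeman(6.0, 1.0, 0.5)   # 50/50 mix of eps = 6 and eps = 1
print(eps_eff)                       # about 2.816
```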
Aljasser, Faisal; Vitevitch, Michael S
2018-02-01
A number of databases (Storkel Behavior Research Methods, 45, 1159-1167, 2013) and online calculators (Vitevitch & Luce Behavior Research Methods, Instruments, and Computers, 36, 481-487, 2004) have been developed to provide statistical information about various aspects of language, and these have proven to be invaluable assets to researchers, clinicians, and instructors in the language sciences. The number of such resources for English is quite large and continues to grow, whereas the number of such resources for other languages is much smaller. This article describes the development of a Web-based interface to calculate phonotactic probability in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at http://phonotactic.drupal.ku.edu/.
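A positional-segment probability of the kind such calculators report can be sketched over a toy lexicon. Orthographic strings stand in for phonemic transcriptions here, and real calculators use log-frequency-weighted counts from a large corpus.

```python
# Sketch of positional-segment phonotactic probability over a toy lexicon.
from collections import Counter, defaultdict

corpus = ["kat", "kit", "bat", "bad", "tab"]   # toy "lexicon"

pos_counts = defaultdict(Counter)
pos_totals = Counter()
for word in corpus:
    for i, seg in enumerate(word):
        pos_counts[i][seg] += 1
        pos_totals[i] += 1

def positional_probability(word):
    """Sum over positions of P(segment | position) in the toy lexicon."""
    return sum(pos_counts[i][seg] / pos_totals[i] for i, seg in enumerate(word))

p = positional_probability("kad")   # 2/5 + 4/5 + 1/5
print(p)
```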
Sjostrom, Travis; Crockett, Scott
2015-09-02
The liquid regime equation of state of silicon dioxide (SiO2) is calculated via quantum molecular dynamics in the density range of 5 to 15 g/cc and with temperatures from 0.5 to 100 eV, including the α-quartz and stishovite phase Hugoniot curves. Below 8 eV, calculations are based on Kohn-Sham density functional theory (DFT); above 8 eV, a new orbital-free DFT formulation, presented here, based on matching Kohn-Sham DFT calculations is employed. Recent experimental shock data are found to be in very good agreement with the current results. Finally, both experimental and simulation data are used in constructing a new liquid regime equation of state table for SiO2.
An Improved Method of Pose Estimation for Lighthouse Base Station Extension.
Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang
2017-10-22
In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge space positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal.
WE-AB-BRA-06: 4DCT-Ventilation: A Novel Imaging Modality for Thoracic Surgical Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinogradskiy, Y; Jackson, M; Schubert, L
Purpose: The current standard-of-care imaging used to evaluate lung cancer patients for surgical resection is nuclear-medicine ventilation. Surgeons use nuclear-medicine images along with pulmonary function tests (PFTs) to calculate percent predicted postoperative (%PPO) PFT values by estimating the amount of functioning lung that would be lost with surgery. 4DCT-ventilation is an emerging imaging modality developed in radiation oncology that uses 4DCT data to calculate lung ventilation maps. We perform the first retrospective study to assess the use of 4DCT-ventilation for pre-operative surgical evaluation. The purpose of this work was to compare %PPO-PFT values calculated with 4DCT-ventilation and nuclear-medicine imaging. Methods: Sixteen lung cancer patients were retrospectively reviewed; all had undergone 4DCT and nuclear-medicine imaging and had Forced Expiratory Volume in 1 second (FEV1) acquired as part of a standard PFT. For each patient, 4DCT data sets, spatial registration, and a density-change-based model were used to compute 4DCT-ventilation maps. Both 4DCT and nuclear-medicine images were used to calculate %PPO-FEV1 = pre-operative FEV1 × (1 − fraction of total ventilation of resected lung). The fraction of ventilation resected was calculated assuming lobectomy and pneumonectomy. The %PPO-FEV1 values were compared between the 4DCT-ventilation-based and the nuclear-medicine-based calculations using correlation coefficients and average differences. Results: The correlations between %PPO-FEV1 values calculated with 4DCT-ventilation and nuclear-medicine imaging were 0.81 (p<0.01) and 0.99 (p<0.01) for pneumonectomy and lobectomy, respectively. The average differences between the 4DCT-ventilation-based and the nuclear-medicine-based %PPO-FEV1 values were small: 4.1±8.5% and 2.9±3.0% for pneumonectomy and lobectomy, respectively.
Conclusion: The high correlation results provide a strong rationale for a clinical trial translating 4DCT-ventilation to the surgical domain. Compared to nuclear-medicine imaging, 4DCT-ventilation is cheaper, does not require a radioactive contrast agent, provides a faster imaging procedure, and has improved spatial resolution. 4DCT-ventilation can reduce the cost and imaging time for patients while providing improved spatial accuracy and quantitative results for surgeons. YV discloses a grant from the State of Colorado.
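The %PPO-FEV1 relation quoted in this abstract can be written as a small helper function; the input values in the example below are hypothetical illustrations, not patient data from the study.

```python
def ppo_fev1(pre_op_fev1, resected_ventilation_fraction):
    """Percent-predicted postoperative FEV1 from the relation in the abstract:
    %PPO-FEV1 = pre-operative FEV1 * (1 - fraction of total ventilation resected)."""
    if not 0.0 <= resected_ventilation_fraction <= 1.0:
        raise ValueError("ventilation fraction must lie in [0, 1]")
    return pre_op_fev1 * (1.0 - resected_ventilation_fraction)

# Hypothetical example: pre-op FEV1 of 2.4 L, resected lobe carries 20% of ventilation
print(round(ppo_fev1(2.4, 0.20), 2))  # 1.92
```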
Calculation of nuclear spin-spin coupling constants using frozen density embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Götz, Andreas W., E-mail: agoetz@sdsc.edu; Autschbach, Jochen; Visscher, Lucas, E-mail: visscher@chem.vu.nl
2014-03-14
We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfactory completeness and fineness. In this study, we improve the ray-tracing-based numerical method to overcome these limitations. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; hence all the locus segments are calculated separately with the minimum-traveltime tree algorithm, by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. The approach is based on a known current wave-pair model representing the lightning current wave. The model describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The computed electric fields are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law), in contrast to the much more complicated partial differential equations derived from Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity only; combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.
NASA Astrophysics Data System (ADS)
Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.
2015-02-01
The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1, respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation-consistent polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined on this basis. The complete assignments were performed based on the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and compared with experimental results. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using Natural Bond Orbital (NBO) analysis. The first-order hyperpolarizability (β) and Molecular Electrostatic Potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as Fukui functions were calculated to explain the chemically reactive sites in the molecule.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-24
... penetration; underselling and price depression or suppression; lost sales and revenues; low capacity... calculated export price (``EP'') based on lost U.S. sales and offers for sale for major types of steel... calculated EP based on lost U.S. sales and offers for sale for major types of steel threaded rod for delivery...
Code of Federal Regulations, 2010 CFR
2010-10-01
... patient utilization calendar year as identified from Medicare claims is calendar year 2007. (4) Wage index... calculating the per-treatment base rate for 2011 are as follows: (1) Per patient utilization in CY 2007, 2008..., 2008 or 2009 to determine the year with the lowest per patient utilization. (2) Update of per treatment...
Code of Federal Regulations, 2014 CFR
2014-07-01
..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...
Code of Federal Regulations, 2012 CFR
2012-07-01
... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... emission data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... values from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as..., highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed...
Code of Federal Regulations, 2013 CFR
2013-07-01
... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What do I need to know about the base denomination for redemption value calculations? 351.16 Section 351.16 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average... values immediately preceding the Recent Average and dividing the sum by 12 (Base Average). Finally, the full year limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1...
Kanazawa, Yuki; Ehara, Masahiro; Sommerfeld, Thomas
2016-03-10
Low-lying π* resonance states of DNA and RNA bases have been investigated by the recently developed projected complex absorbing potential (CAP)/symmetry-adapted cluster-configuration interaction (SAC-CI) method using a smooth Voronoi potential as CAP. In spite of the challenging CAP applications to higher resonance states of molecules of this size, the present calculations reproduce resonance positions observed by electron transmission spectra (ETS) provided the anticipated deviations due to vibronic effects and limited basis sets are taken into account. Moreover, for the standard nucleobases, the calculated positions and widths qualitatively agree with those obtained in previous electron scattering calculations. For guanine, both keto and enol forms were examined, and the calculated values of the keto form agree clearly better with the experimental findings. In addition to these standard bases, three modified forms of cytosine, which serve as epigenetic or biomarkers, were investigated: formylcytosine, methylcytosine, and chlorocytosine. Last, a strong correlation between the computed positions and the observed ETS values is demonstrated, clearly suggesting that the present computational protocol should be useful for predicting the π* resonances of congeners of DNA and RNA bases.
An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.
Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B
2017-01-01
An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU) based radiation therapy treatment planning system for accurate and fast dose calculation in radiotherapy centers. A program was written for parallel execution on a GPU. The code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone-, lung-, and soft-tissue-equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. Compared with DOSXYZnrc in the virtual phantom, most of the voxels (>95%) showed <3% dose difference or 3 mm distance-to-agreement (DTA). Moreover, in the anthropomorphic phantom, <5% dose difference or 5 mm DTA was observed relative to the Mapcheck dose measurements. The fast calculation speed and high accuracy of the GPU-based Monte Carlo method in dose calculation may be useful in routine radiation therapy centers as the core and main component of a treatment planning verification system.
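The dose-difference / distance-to-agreement acceptance test mentioned above can be sketched in a simplified 1-D form; the function below is an illustrative stand-in for a full 3-D gamma-style analysis, with the tolerances passed in as assumed parameters rather than taken from the study.

```python
import numpy as np

def dd_dta_pass_rate(ref_dose, eval_dose, positions, dd_tol=0.03, dta_tol=3.0):
    """Fraction of points passing a dose-difference OR distance-to-agreement
    test on a 1-D profile (simplified stand-in for the 3%/3 mm criterion).
    dd_tol is relative to the reference maximum; dta_tol is in mm."""
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions = np.asarray(positions, dtype=float)
    dmax = ref_dose.max()
    passed = 0
    for i, (d_e, x) in enumerate(zip(eval_dose, positions)):
        # dose-difference test at the same point
        if abs(d_e - ref_dose[i]) <= dd_tol * dmax:
            passed += 1
            continue
        # DTA test: nearest reference position whose dose matches within tolerance
        match = np.abs(ref_dose - d_e) <= dd_tol * dmax
        if match.any() and np.min(np.abs(positions[match] - x)) <= dta_tol:
            passed += 1
    return passed / len(eval_dose)
```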
Uranium phase diagram from first principles
NASA Astrophysics Data System (ADS)
Yanilkin, Alexey; Kruglov, Ivan; Migdal, Kirill; Oganov, Artem; Pokatashkin, Pavel; Sergeev, Oleg
2017-06-01
The work is devoted to the investigation of the uranium phase diagram up to a pressure of 1 TPa and a temperature of 15 kK based on density functional theory. First, pseudopotential and full-potential calculations are compared for the different uranium phases. Second, the phase diagram at zero temperature is investigated by means of the USPEX program and pseudopotential calculations; stable and metastable structures with close energies are selected. To obtain the phase diagram at finite temperatures, a preliminary selection of stable phases is made via free-energy calculations based on the small-displacement method. For the remaining candidates, accurate free energies are obtained by means of the thermodynamic integration method (TIM). For this purpose, quantum molecular dynamics simulations are carried out at different volumes and temperatures. Interatomic potentials based on machine learning are developed in order to treat the large systems and long simulation times required for TIM. The potentials reproduce the free energy to within 1-5 meV/atom, which is sufficient for the prediction of phase transitions. The equilibrium curves of the different phases are obtained from the free energies. The melting curve is calculated by a modified Z-method with the developed potential.
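Once the per-λ ensemble averages ⟨dU/dλ⟩ are available from molecular dynamics, the thermodynamic-integration free-energy step reduces to a quadrature over the coupling path. A minimal sketch, assuming pre-computed averages and using the trapezoidal rule:

```python
def free_energy_difference(lambdas, mean_du_dlambda):
    """Thermodynamic integration: Delta F = integral over lambda of <dU/dlambda>.
    `lambdas` are the sampled coupling points (ascending), `mean_du_dlambda`
    the corresponding ensemble averages; trapezoidal quadrature is used."""
    lam = list(lambdas)
    g = list(mean_du_dlambda)
    return sum(0.5 * (g[i] + g[i + 1]) * (lam[i + 1] - lam[i])
               for i in range(len(lam) - 1))
```

For a linear integrand the trapezoidal rule is exact, which makes the helper easy to sanity-check before feeding it real simulation averages.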
NASA Astrophysics Data System (ADS)
Chen, Chun-Nan; Luo, Win-Jet; Shyu, Feng-Lin; Chung, Hsien-Ching; Lin, Chiun-Yan; Wu, Jhao-Ying
2018-01-01
Using a non-equilibrium Green’s function framework in combination with the complex energy-band method, an atomistic full-quantum model for solving quantum transport problems for a zigzag-edge graphene nanoribbon (zGNR) structure is proposed. For transport calculations, the mathematical expressions from the theory for zGNR-based device structures are derived in detail. The transport properties of zGNR-based devices are calculated and studied in detail using the proposed method.
The crustal structure in the transition zone between the western and eastern Barents Sea
NASA Astrophysics Data System (ADS)
Shulgin, Alexey; Mjelde, Rolf; Faleide, Jan Inge; Høy, Tore; Flueh, Ernst; Thybo, Hans
2018-04-01
We present a crustal-scale seismic profile in the Barents Sea based on new data. Wide-angle seismic data were recorded along a 600 km long profile at 38 ocean bottom seismometer and 52 onshore station locations. The modeling uses the joint refraction/reflection tomography approach where co-located multi-channel seismic reflection data constrain the sedimentary structure. Further, forward gravity modeling is based on the seismic model. We also calculate net regional erosion based on the calculated shallow velocity structure.
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction (SFD) algorithm. However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first strategy, which has been used in the existing parallel SFD algorithm on GPUs, suffers from redundant computation. Therefore, we designed a parallelization strategy based on graph theory.
The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
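As a minimal sketch of the flow-accumulation step itself, here is a sequential single-flow-direction (D8) version on a depression-free DEM; note that the paper parallelizes the multiple-flow-direction variant, which splits flow among several downslope neighbours instead of routing it all to the steepest one.

```python
import numpy as np

def flow_accumulation_sfd(dem):
    """Single-flow-direction (D8) flow accumulation on a depression-free DEM.
    Cells are processed from highest to lowest elevation; each cell passes its
    accumulated area (itself plus upstream inflow) to its steepest downslope
    neighbour, so no recursion is needed."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)        # each cell contributes one unit
    order = np.argsort(dem, axis=None)[::-1]    # highest elevation first
    for flat in order:
        r, c = divmod(int(flat), cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # drop per unit distance (diagonals are sqrt(2) away)
                    drop = (dem[r, c] - dem[nr, nc]) / (dr * dr + dc * dc) ** 0.5
                    if drop > best_drop:
                        best_drop, target = drop, (nr, nc)
        if target is not None:                  # pits/outlets keep their total
            acc[target] += acc[r, c]
    return acc
```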
Modeling of the metallic port in breast tissue expanders for photon radiotherapy.
Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui
2018-03-30
The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to 7.5 g/cm³, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated with the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penfold, S; Miller, A
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However, proton dose distributions depend not only on RStP but also on the relative scattering power (RScP) of patient tissues. RScP is approximated from material density, but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials and on stoichiometrically calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on the measured HU and manufacturer-quoted density of the tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration of ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues, the difference increasing with increasing HU. Differences in the density calibration table translated to a difference in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel-opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
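An HU-to-density lookup of the kind compared in this abstract is, at bottom, piecewise-linear interpolation over calibration points. The (HU, density) pairs below are illustrative placeholders, not the CIRS tissue-substitute or stoichiometric ICRU44 calibration.

```python
import numpy as np

# Illustrative calibration points (HU, g/cm^3). Real tables come from phantom
# scans or a stoichiometric fit; these numbers are placeholders for the sketch.
HU_TABLE = np.array([-1000.0, -100.0, 0.0, 300.0, 1200.0])
DENSITY_TABLE = np.array([0.001, 0.93, 1.00, 1.12, 1.85])

def hu_to_density(hu):
    """Piecewise-linear HU -> mass density lookup, clamped at the table ends."""
    return float(np.interp(hu, HU_TABLE, DENSITY_TABLE))
```

Two such tables (tissue-substitute vs. stoichiometric) would simply be two different `DENSITY_TABLE` arrays over the same HU axis, which is why their disagreement grows where the calibration points diverge, i.e. in high-HU bone.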
Refractive laser beam shaping by means of a functional differential equation based design approach.
Duerr, Fabian; Thienpont, Hugo
2014-04-07
Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has a poorly self-adaptive threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and filtering based on Euclidean distance are applied to the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local gradient-amplitude blocks to obtain threshold values for the image. The average of all calculated thresholds is then computed: half of this average is taken as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
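The threshold rule described above (half the mean of the block-wise Otsu thresholds as the high threshold, and half of that as the low threshold) can be sketched as follows; the partitioning of the gradient image into blocks is assumed to be done by the caller, and the Otsu routine is a generic histogram implementation, not the authors' code.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's threshold for a 1-D array of gradient magnitudes: pick the bin
    edge that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    total = hist.sum()
    mean_all = (hist * edges[:-1]).sum() / total
    best_t, best_var, w0, cum0 = edges[0], 0.0, 0.0, 0.0
    for i in range(bins):
        w0 += hist[i]
        cum0 += hist[i] * edges[i]
        if w0 == 0 or w0 == total:
            continue
        m0 = cum0 / w0                                  # class-0 mean
        m1 = (mean_all * total - cum0) / (total - w0)   # class-1 mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[i + 1]
    return best_t

def canny_thresholds(gradient_blocks):
    """High/low Canny thresholds per the rule in the abstract:
    high = mean(per-block Otsu thresholds) / 2, low = high / 2."""
    ts = [otsu_threshold(block.ravel()) for block in gradient_blocks]
    high = float(np.mean(ts)) / 2.0
    return high, high / 2.0
```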
Dynamic Magnification Factor in a Box-Shape Steel Girder
NASA Astrophysics Data System (ADS)
Rahbar-Ranji, A.
2014-01-01
The dynamic effect of moving loads on structures is treated as a dynamic magnification factor when resonance is not imminent. Studies have shown that magnification factors calculated from field measurements can be higher than the values specified in design codes. The main aim of the present paper is to investigate the applicability and accuracy of a rule-based expression for calculating the dynamic magnification factor for lifting appliances used in the marine industry. A steel box-shaped girder of a crane is considered, and a transient dynamic analysis using the computer code ANSYS is performed. The dynamic magnification factor is calculated for different loading conditions and compared with the rule-based equation. The effects of lifting speed, acceleration, damping ratio and cargo position are examined. It is found that the rule-based expression underestimates the dynamic magnification factor.
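The dynamic magnification factor compared against the rule value is the ratio of peak dynamic response to static response. A minimal single-degree-of-freedom sketch under an assumed suddenly applied constant load (for which the analytical undamped DMF is 2), not the girder model analysed in ANSYS:

```python
def dmf_step_load(omega_n=10.0, dt=1e-4, t_end=2.0):
    """Dynamic magnification factor of an undamped single-DOF oscillator under
    a suddenly applied constant unit load, by semi-implicit Euler stepping.
    Analytically u(t) = u_static * (1 - cos(omega_n * t)), so DMF -> 2."""
    u, v, u_max, t = 0.0, 0.0, 0.0, 0.0
    u_static = 1.0 / omega_n ** 2          # static deflection (unit mass, unit load)
    while t < t_end:
        a = 1.0 - omega_n ** 2 * u         # acceleration: (F - k*u) / m
        v += a * dt                        # update velocity first (symplectic)
        u += v * dt
        u_max = max(u_max, u)
        t += dt
    return u_max / u_static
```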
Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong
2007-04-01
Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from its more accurate cost calculation compared to traditional step-down costing, and from its potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
Propulsive efficiency of frog swimming with different feet and swimming patterns
Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu
2017-01-01
Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies owing to their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and its dynamic equation were established, and the hydrodynamic forces on the foot were computed by computational fluid dynamics. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two feet and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog's efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that the swimming pattern is the main factor determining swimming performance and efficiency.
Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin
2016-02-01
Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as the effluent flow rate in ml kg⁻¹ h⁻¹. Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment, deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, and prolong filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg⁻¹ h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose reduced from 41.0 ml kg⁻¹ h⁻¹ to 26.8 ml kg⁻¹ h⁻¹ with reduced variability that was significantly closer to the aim of 25 ml kg⁻¹ h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
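As a minimal illustration of the arithmetic behind such a calculator, the sketch below turns patient weight into flow settings using the abstract's 25 ml kg⁻¹ h⁻¹ effluent target and 15% filtration-fraction cap. The convective/diffusive split, the default blood flow, and the haematocrit are illustrative assumptions, not the authors' Excel logic.

```python
def crrt_settings(weight_kg, dose_ml_kg_h=25.0, max_filtration_fraction=0.15,
                  haematocrit=0.30, blood_flow_ml_min=150.0):
    """Turn a weight-based effluent dose into flow settings.

    Assumptions (not from the paper): post-dilution CVVHDF, filtration
    fraction taken as replacement flow / plasma flow, and the default
    blood flow and haematocrit shown here.
    """
    effluent_ml_h = dose_ml_kg_h * weight_kg
    plasma_flow_ml_h = blood_flow_ml_min * 60.0 * (1.0 - haematocrit)
    # Cap the convective (replacement-fluid) component so FF <= 15%.
    replacement_ml_h = min(effluent_ml_h,
                           max_filtration_fraction * plasma_flow_ml_h)
    # Deliver the remainder of the effluent dose diffusively as dialysate.
    dialysate_ml_h = effluent_ml_h - replacement_ml_h
    return {"effluent_ml_h": effluent_ml_h,
            "replacement_ml_h": replacement_ml_h,
            "dialysate_ml_h": dialysate_ml_h}
```

For an 80 kg patient this yields a 2000 ml h⁻¹ effluent target, with the convective portion capped by the filtration-fraction limit and the balance delivered as dialysate.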
Qiu, Rui; Li, Junli; Zhang, Zhan; Liu, Liye; Bi, Lei; Ren, Li
2009-02-01
A set of conversion coefficients from kerma free-in-air to organ-absorbed dose is presented for external monoenergetic photon beams from 10 keV to 10 MeV based on the Chinese mathematical phantom, a whole-body mathematical phantom model. The model was developed based on the methods of the Oak Ridge National Laboratory mathematical phantom series and data from the Chinese Reference Man and the Reference Asian Man. This work is carried out to obtain the conversion coefficients based on this model, which represents the characteristics of the Chinese population, as the anatomical parameters of the Chinese differ from those of Caucasians. Monte Carlo simulation with the MCNP code is carried out to calculate the organ dose conversion coefficients. Before the calculation, the effects of the physics model and tally type are investigated, considering both calculation efficiency and precision. The irradiation conditions in the calculation include anterior-posterior, posterior-anterior, right lateral, left lateral, rotational and isotropic geometries. Conversion coefficients from this study are compared with those recommended in Publication 74 of the International Commission on Radiological Protection (ICRP 74), since both sets of data were calculated with mathematical phantoms. Overall, consistency between the two sets of data is observed, and the difference for more than 60% of the data is below 10%. However, significant deviations are also found, mainly for the superficial organs (up to 65.9%) and the bone surface (up to 66%). The large difference in the dose conversion coefficients for the superficial organs at high photon energy can be ascribed to the kerma approximation used for the data in ICRP 74. Both anatomical variations between races and the calculation method contribute to the difference in the data for the bone surface.
Kim, Mingue; Eom, Youngsub; Lee, Hwa; Suh, Young-Woo; Song, Jong Suk; Kim, Hyo Myung
2018-02-01
To evaluate the accuracy of IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio. Nine hundred twenty-eight eyes from 928 reference subjects and 158 eyes from 158 cataract patients who underwent phacoemulsification surgery were enrolled. The adjusted corneal power of cataract patients was calculated using the fictitious refractive index obtained from the geometric mean posterior/anterior corneal curvature radii ratio of the reference subjects, and adjusted anterior and predicted posterior corneal curvature radii from conventional keratometry (K) using the posterior/anterior corneal curvature radii ratio. The median absolute error (MedAE) based on the adjusted corneal power was compared with that based on conventional K in the Haigis and SRK/T formulae. The geometric mean posterior/anterior corneal curvature radii ratio was 0.808, and the fictitious refractive index of the cornea for a single Scheimpflug camera was 1.3275. The mean difference between adjusted corneal power and conventional K was 0.05 diopter (D). The MedAE based on adjusted corneal power (0.31 D in the Haigis formula and 0.32 D in the SRK/T formula) was significantly smaller than that based on conventional K (0.41 D and 0.40 D, respectively; P < 0.001 for both). The percentage of eyes with refractive prediction error within ±0.50 D calculated using adjusted corneal power (74.7%) was significantly greater than that obtained using conventional K (62.7%) in the Haigis formula (P = 0.029). IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio provided more accurate refractive outcomes than calculation using conventional K.
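The fictitious index above plugs into the standard keratometric relation K = (n − 1)/r; a one-line sketch of that step (the paper's full adjustment also predicts the posterior radius from the 0.808 ratio, which is omitted here):

```python
def corneal_power(r_anterior_mm, n_fictitious=1.3275):
    """Keratometric corneal power in diopters from the anterior corneal
    radius (mm): K = (n - 1) / r with r in metres, using the study's
    fictitious refractive index 1.3275. The posterior-radius prediction
    (0.808 x anterior radius) from the paper is omitted in this sketch.
    """
    return (n_fictitious - 1.0) / (r_anterior_mm / 1000.0)
```

For a typical 7.7 mm anterior radius this gives about 42.5 D, slightly lower than the roughly 43.8 D obtained with the conventional keratometric index of 1.3375.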
NASA Astrophysics Data System (ADS)
Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin
2018-05-01
Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points, followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account.
Three-Dimensional Electron Beam Dose Calculations.
NASA Astrophysics Data System (ADS)
Shiu, Almon Sowchee
The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations for radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm; one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport and redefines the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made.
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site specific treatment planning problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Devlin, P; Bhagwat, M
Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as the criterion for a true positive that >10% of plan dwells had a distance to the prescription dose differing by >1 mm from the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
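For reference, the reported operating points follow the standard confusion-matrix definitions; a minimal sketch (the counts in the check below are illustrative stand-ins, not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```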
A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery.
Shirk, Steven D; Mitchell, Meghan B; Shaughnessy, Lynn W; Sherman, Janet C; Locascio, Joseph J; Weintraub, Sandra; Atri, Alireza
2011-11-11
With the recent publication of new criteria for the diagnosis of preclinical Alzheimer's disease (AD), there is a need for neuropsychological tools that take premorbid functioning into account in order to detect subtle cognitive decline. Using demographic adjustments is one method for increasing the sensitivity of commonly used measures. We sought to provide a useful online z-score calculator that yields estimates of percentile ranges and adjusts individual performance based on sex, age and/or education for each of the neuropsychological tests of the National Alzheimer's Coordinating Center Uniform Data Set (NACC UDS). In addition, we aimed to provide an easily accessible method of creating norms for other clinical researchers for their own unique data sets. Data from 3,268 clinically cognitively normal older UDS subjects from a cohort reported by Weintraub and colleagues (2009) were included. For all neuropsychological tests, z-scores were estimated by subtracting the raw score from the predicted mean and then dividing this difference score by the root mean squared error term (RMSE) for a given linear regression model. For each neuropsychological test, an estimated z-score was calculated for any raw score based on five different models that adjust for the demographic predictors of SEX, AGE and EDUCATION, either concurrently, individually or without covariates. The interactive online calculator allows the entry of a raw score and provides five corresponding estimated z-scores based on predictions from each corresponding linear regression model. The calculator produces percentile ranks and graphical output.
An interactive, regression-based, normative score online calculator was created to serve as an additional resource for UDS clinical researchers, especially for guiding interpretation of individual performances that appear to fall in borderline ranges; it may be of particular utility for operationalizing the subtle cognitive impairment described by the newly proposed criteria for Stage 3 preclinical Alzheimer's disease.
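The regression-based adjustment described above can be sketched as follows, oriented so that performance below the demographic prediction yields a negative z; the coefficient values in the check are hypothetical stand-ins for the published NACC UDS regression models:

```python
def uds_zscore(raw, coeffs, rmse, sex=None, age=None, education=None):
    """z = (raw - predicted) / RMSE, where 'predicted' comes from a linear
    regression on whichever of SEX, AGE, EDUCATION the chosen model
    adjusts for. Coefficients are supplied by the caller; the real values
    come from the NACC UDS regressions (any shown here are hypothetical).
    """
    predicted = coeffs["intercept"]
    for name, value in (("sex", sex), ("age", age), ("education", education)):
        if name in coeffs and value is not None:
            predicted += coeffs[name] * value
    return (raw - predicted) / rmse
```

A model without covariates reduces to z = (raw − intercept)/RMSE, matching the calculator's "no adjustment" option.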
NASA Technical Reports Server (NTRS)
Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.
1995-01-01
The quantum yield of photosynthesis (mol C/mol photons) was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways are presented here for the calculation of the photons absorbed (AP) by phytoplankton for the purpose of calculating phi. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the values reported in other studies. In deep waters, however, the simple nonspectral model may give quantum yield values much higher than theoretically possible.
Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk
2013-01-01
The Fetal Medicine Foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) to evaluate Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is that it uses the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ machines (Thermo Fisher Scientific) use the Fast Screen pre I plus software for the prenatal risk calculation. This software integrates the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a foetal abnormality. These files were first evaluated with the ViewPoint software based on MoM. The link between DoE and MoM was analyzed and the different calculated risks compared. The study shows that the Fast Screen pre I plus software gives the same risk results as the ViewPoint software, but yields significantly fewer false positive results.
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
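A minimal sketch of such a lumped-circuit model: the generator is reduced to flux conservation d(LI)/dt = −RI, with a coefficient scaling the inductive drive term to represent flux losses. The functional forms, the coefficient, and the explicit-Euler integration are illustrative assumptions, not the authors' equations or code; for R = 0 and coefficient α the model has the analytic solution I = I₀(L₀/L)^α, which the check below reproduces.

```python
def fcg_current(l_of_t, r_of_t, i0, t_end, dt=1e-5, flux_coeff=1.0):
    """Explicit-Euler integration of the lumped circuit
        L dI/dt = -(flux_coeff * dL/dt + R) * I,
    i.e. d(LI)/dt = -RI with an empirical flux-conservation coefficient
    on the inductive drive term (illustrative zero-dimensional form).
    l_of_t and r_of_t give inductance (H) and resistance (ohm) vs time.
    """
    t, i = 0.0, i0
    while t < t_end:
        L = l_of_t(t)
        dLdt = (l_of_t(t + dt) - L) / dt   # forward-difference dL/dt
        i += -(flux_coeff * dLdt + r_of_t(t)) * i / L * dt
        t += dt
    return i
```

With a decreasing L(t) (armature compression) and small R, the current multiplies by roughly (L₀/L_f) raised to the flux-conservation coefficient.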
Calculations of Hubbard U from first-principles
NASA Astrophysics Data System (ADS)
Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.
2006-09-01
The Hubbard U of the 3d transition metal series as well as SrVO3, YTiO3, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made a comparison with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
A point kernel algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan
2017-11-01
Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses, separated by low dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, have been based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
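The peak/valley geometry can be illustrated with a toy kernel superposition; the top-hat primary plus Gaussian scatter tail below are crude stand-ins for the paper's spectrum- and material-derived photon/electron kernels, and all widths and fractions are chosen arbitrarily:

```python
import math

def mrt_dose_profile(xs, beam_centres, beam_width=50.0,
                     scatter_sigma=100.0, scatter_fraction=0.3):
    """Relative dose at lateral positions xs (micrometres) for planar
    microbeams centred at beam_centres. Each beam contributes a top-hat
    'primary' kernel plus a Gaussian 'scatter' tail; both kernels are
    illustrative assumptions, not the paper's analytically derived ones.
    """
    def kernel(d):
        primary = 1.0 if abs(d) <= beam_width / 2.0 else 0.0
        scatter = scatter_fraction * math.exp(-0.5 * (d / scatter_sigma) ** 2)
        return primary + scatter
    return [sum(kernel(x - c) for c in beam_centres) for x in xs]
```

Even this toy model reproduces the qualitative MRT picture: dose at a beam centre far exceeds the valley dose midway between two beams.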
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qu, Weilu; Qiu, Weiting
2018-03-01
In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
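The aggregation step can be sketched with the discrete Choquet integral: scores are sorted ascending and each increment is weighted by the capacity of the coalition of attributes still at or above that level. The capacity values in the check are illustrative; in the paper they come from the maximum-Shapley-entropy step.

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of `values` (attribute -> score) with
    respect to a capacity (fuzzy measure) given as frozenset -> weight,
    where capacity[frozenset()] == 0 and capacity of the full attribute
    set == 1, and the capacity is monotone.
    """
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(values)
    for name, score in items:
        # Weight this score increment by the capacity of the attributes
        # whose scores are still at or above the current level.
        total += (score - prev) * capacity[frozenset(remaining)]
        prev = score
        remaining.discard(name)
    return total
```

For an additive capacity the Choquet integral collapses to the ordinary weighted mean, which makes a convenient sanity check.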
Guidelines for the analysis of free energy calculations
Klimovich, Pavel V.; Shirts, Michael R.; Mobley, David L.
2015-01-01
Free energy calculations based on molecular dynamics (MD) simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub at https://github.com/choderalab/pymbar-examples, that implements the analysis practices reviewed here for several reference simulation packages, and which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope these tools and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations. PMID:25808134
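As a reminder of the simplest estimator mentioned, thermodynamic integration computes ΔG = ∫₀¹ ⟨∂U/∂λ⟩ dλ; a minimal trapezoid-rule sketch, without the uncertainty analysis the reviewed tool performs:

```python
def thermodynamic_integration(lambdas, dudl_means):
    """Free energy difference by thermodynamic integration with the
    trapezoid rule. `lambdas` are the simulated coupling values
    (ascending); `dudl_means` the corresponding ensemble averages of
    dU/dlambda."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg
```

Because the rule is exact for integrands linear in λ, a linear ⟨dU/dλ⟩ profile makes a convenient check.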
PVWatts® Calculator: India (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The PVWatts® Calculator for India was released by the National Renewable Energy Laboratory in 2013. The online tool estimates the electricity production of grid-connected, roof- or ground-mounted crystalline silicon photovoltaic systems, and the monetary value of that production, based on a few simple inputs. This fact sheet provides a broad overview of the PVWatts® Calculator for India.
Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.
2000-01-01
The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface area-based calculations for the prediction of the binding of the Src SH2 domain to a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR) are also investigated. Here, we take into account the effects of proton linkage on binding, and test five different surface area-based models that include different treatments of the contributions from conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src SH2 domain with the range of tyrosyl phosphopeptides. Furthermore, the adoption of the different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable.
PMID:11106171
Assessment of Spanish Panel Reactive Antibody Calculator and Potential Usefulness.
Asensio, Esther; López-Hoyos, Marcos; Romón, Íñigo; Ontañón, Jesús; San Segundo, David
2017-01-01
The calculated panel reactive antibody (cPRA) values necessary for kidney donor-pair exchange and highly sensitized programs are estimated using different panel reactive antibody (PRA) calculators, based on sufficiently large donor samples, on the Eurotransplant (EUTR), United Network for Organ Sharing (UNOS), and Canadian Transplant Registry (CTR) websites. However, those calculators can vary depending on the ethnic group to which they are applied. Here, we develop a PRA calculator used in the Spanish Program of Transplant Access for Highly Sensitized patients (PATHI) and validate it against the EUTR, UNOS, and CTR calculators. The anti-human leukocyte antigen (HLA) antibody profile of 42 sensitized patients on the waiting list was defined, and cPRA was calculated with the different PRA calculators. Despite the different allelic frequencies derived from population differences in the donor panels underlying each calculator, no differences in cPRA between the four calculators were observed. The PATHI calculator includes anti-DQA1 antibody profiles in the cPRA calculation; however, no improvement in the total cPRA calculation of highly sensitized patients was demonstrated. The PATHI calculator provides cPRA results comparable with those from the EUTR, UNOS, and CTR calculators and serves as a tool for developing valid calculators in geographical and ethnic areas other than Europe, the USA, and Canada.
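Conceptually, a cPRA value is the percentage of a donor panel carrying at least one antigen unacceptable to the patient; a toy sketch over an explicit donor list (a simplification of mine; real calculators work from HLA haplotype frequency data):

```python
def cpra(donor_panel, unacceptable_antigens):
    """Percentage of panel donors carrying at least one unacceptable HLA
    antigen. `donor_panel` is a list of per-donor antigen tuples;
    `unacceptable_antigens` the patient's unacceptable antigens.
    Illustrative formulation only: registry calculators (EUTR, UNOS,
    CTR, PATHI) derive this from haplotype frequencies, not raw lists.
    """
    unacceptable = set(unacceptable_antigens)
    incompatible = sum(1 for donor in donor_panel if unacceptable & set(donor))
    return 100.0 * incompatible / len(donor_panel)
```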
NASA Astrophysics Data System (ADS)
Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming
2016-12-01
The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for the qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of the TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS recording video was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between two frames based on a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was considered the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those determined using the original method. PDOVP extraction results were improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC had become less pronounced, and the calculation accuracy was improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
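The final step reduces to a masked per-frame mean; a pure-Python sketch, assuming the frames have already been motion-corrected and the PDOVP mask extracted (the paper's optical-flow and subtraction steps):

```python
def time_intensity_curve(frames, mask):
    """Per-frame mean gray level over the perfusion region (PDOVP mask).
    `frames`: list of 2-D lists (motion-corrected images); `mask`: 2-D
    list of 0/1 marking perfused pixels. The motion correction and mask
    extraction themselves are outside this sketch."""
    rows, cols = len(mask), len(mask[0])
    npix = sum(sum(row) for row in mask)   # number of perfused pixels
    return [sum(frame[r][c] for r in range(rows) for c in range(cols)
                if mask[r][c]) / npix
            for frame in frames]
```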
NASA Astrophysics Data System (ADS)
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the rice. The height of the plant, the number of stems and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey needs a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis has been proposed which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends on the automatic binarization process alone, there is a possibility that the vegetation cover rate decreases even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed which is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
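The underlying quantity is simply the fraction of image pixels classified as rice plant; a sketch using the excess-green index as a stand-in for the paper's binarization (the actual thresholding scheme and the growth-hysteresis correction are not specified in the abstract):

```python
def vegetation_cover_rate(rgb_pixels, threshold=0.0):
    """Fraction of pixels classified as rice plant. Classification here
    uses the excess-green index ExG = 2g - r - b, an assumed stand-in
    for the paper's automatic binarization. `rgb_pixels` is a flat list
    of (r, g, b) tuples scaled to [0, 1]."""
    plant = sum(1 for r, g, b in rgb_pixels if 2.0 * g - r - b > threshold)
    return plant / len(rgb_pixels)
```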
NASA Astrophysics Data System (ADS)
Yu, Jun; Hao, Du; Li, Decai
2018-01-01
The phenomenon whereby an object whose density is greater than that of a magnetic fluid can be suspended stably in the fluid under a magnetic field is one of the peculiar properties of magnetic fluids. Examples of applications based on these peculiar properties are sensors and actuators, dampers, positioning systems and so on. Therefore, the calculation and measurement of the magnetic levitation force of magnetic fluid is of vital importance. This paper concerns the peculiar second-order buoyancy experienced by a magnet immersed in magnetic fluid. The expression for calculating the second-order buoyancy was derived, and a novel method for calculating and measuring the second-order buoyancy was proposed based on this expression. The second-order buoyancy was calculated by ANSYS and measured experimentally using the novel method. To verify the novel method, the second-order buoyancy was measured experimentally with a nonmagnetic rod stuck to the top surface of the magnet. The results of calculations and experiments show that the novel method for calculating the second-order buoyancy is correct and highly accurate. In addition, the main causes of error are studied in this paper, including the magnetic shielding of the magnetic fluid and the movement of the magnetic fluid in a nonuniform magnetic field.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fue...
Computation of Southern Pine Site Index Using a TI-59 Calculator
Robert M. Farrar
1983-01-01
A program is described that permits computation of site index in the field using a Texas Instruments model TI-59 programmable, hand-held, battery-powered calculator. Based on a series of equations developed by R.M. Farrar, Jr., for the site index curves in USDA Miscellaneous Publication 50, the program can accommodate any index base age, tree age, and height within...
USSR Report, International Affairs.
1986-04-07
inflation" was taken from the yearbook "Estudio Economico de America Latina," 1981. Santiago de Chile, 1983, p 60. 2. Calculated based on... Economico . Resena estadistica 1980-1983. Buenos Aires, 1983, p 69. 6. Calculated based on: International Financial Statistics. Yearbook, 1984, pp 76-79... Economico de America Latina, 1976. 1977, p 23; Estudio Economico de America Latina, 1980. 1981, pp 23-24. 9. Estudio economico de America Latina
RadShield: semiautomated shielding design using a floor plan driven graphical user interface
Wu, Dee H.; Yang, Kai; Rutel, Isaac B.
2016-01-01
The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations were validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations was derived giving the maximum air-kerma rate magnitude and location through a first-derivative root-finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach, and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm, respectively. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, the most sensitive calculation point on the floor plan for one of the barriers was not considered in the medical physicist's calculation and was revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology.
Visual inspection alone of the 2D X‐ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air‐kerma rate or barrier thickness. PACS number(s): 87.55.N, 87.52.‐g, 87.59.Bh, 87.57.‐s PMID:27685128
RadShield: semiautomated shielding design using a floor plan driven graphical user interface.
DeLorenzo, Matthew C; Wu, Dee H; Yang, Kai; Rutel, Isaac B
2016-09-08
The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations were validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations was derived giving the maximum air-kerma rate magnitude and location through a first-derivative root-finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach, and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm, respectively. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, the most sensitive calculation point on the floor plan for one of the barriers was not considered in the medical physicist's calculation and was revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology.
Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. © 2016 The Authors.
Giessner-Prettre, C; Ribas Prado, F; Pullman, B; Kan, L; Kast, J R; Ts'o, P O
1981-01-01
A FORTRAN computer program called SHIFTS is described. With SHIFTS, one can calculate the NMR chemical shifts of the proton resonances of single- and double-stranded nucleic acids of known sequence and predetermined conformation. The program can handle RNA and DNA for an arbitrary sequence drawn from a set of 4 out of the 6 base types A, U, G, C, I, and T. Data files for the geometrical parameters are available for the A-, A'-, B-, D- and S-conformations. The positions of all the atoms are calculated using a modified version of the SEQ program [1]. Then, based on this defined geometry, three chemical shift effects exerted by the atoms of the neighboring nucleotides on the protons of each monomeric unit are calculated separately: the ring current shielding effect; the local atomic magnetic susceptibility effect (including both diamagnetic and paramagnetic terms); and the polarization or electric field effect. Results of the program are compared with experimental results for a gamma (ApApGpCpUpU)2 helical duplex and with calculated results on this same helix based on model building of the A'-form and B-form and on a graphical procedure for evaluating the ring current effects.
GPU-based ultra-fast dose calculation using a finite size pencil beam model.
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B
2009-10-21
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
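The superposition at the heart of a finite-size pencil beam calculation is inherently data-parallel: each voxel's dose is an independent sum over beamlets, which is why the algorithm maps well to GPUs. A toy CPU sketch with a hypothetical Gaussian lateral kernel (clinical FSPB kernels are measured or tabulated, not Gaussian) might look like:

```python
import numpy as np

def fspb_dose(points, beamlet_centers, weights, sigma=2.5):
    """Toy finite-size pencil-beam superposition in a 2D plane.
    Every beamlet deposits a Gaussian lateral kernel; each point's dose
    is computed independently, which makes the work trivially parallel."""
    pts = np.asarray(points, dtype=float)
    dose = np.zeros(len(pts))
    for (cx, cy), w in zip(beamlet_centers, weights):
        d2 = (pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2
        dose += w * np.exp(-d2 / (2.0 * sigma ** 2))
    return dose
```

On a GPU, the outer loop over points (voxels) becomes the thread grid, with each thread accumulating the beamlet contributions for its own voxel.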
Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah
2012-01-01
This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the sources of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, of the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) had local Malaysian values or were directly matched to the international GI databases, and another 180 (33%) were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values are comparable to those reported in other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids the calculation of dietary GI and GL for Malaysian diets based on food records.
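The underlying arithmetic is standard: each food's GL is its GI times its available carbohydrate in grams divided by 100, and the dietary GI of a food record is the carbohydrate-weighted mean of the food GIs. A minimal sketch (a hypothetical helper, not the DietPLUS spreadsheet formulas):

```python
def meal_gi_gl(items):
    """Dietary GI and GL for a food record.
    items: list of (gi_value, available_carbohydrate_g) pairs."""
    total_carb = sum(carb for _, carb in items)
    # Dietary GI: carbohydrate-weighted mean of the per-food GI values.
    dietary_gi = sum(gi * carb for gi, carb in items) / total_carb
    # Dietary GL: sum of per-food loads, GL_i = GI_i * carb_i / 100.
    dietary_gl = sum(gi * carb / 100 for gi, carb in items)
    return dietary_gi, dietary_gl
```

For example, 50 g of carbohydrate from a GI-50 food plus 50 g from a GI-100 food gives a dietary GI of 75 and a dietary GL of 75.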
Data preparation techniques for a perinatal psychiatric study based on linked data.
Xu, Fenglian; Hilder, Lisa; Austin, Marie-Paule; Sullivan, Elizabeth A
2012-06-08
In recent years there has been an increase in the use of population-based linked data. However, little literature describes the method of linked data preparation. This paper describes methods for merging data, calculating a statistical variable (SV), recoding psychiatric diagnoses, and summarizing hospital admissions for a perinatal psychiatric study. The data preparation techniques described in this paper are based on linked birth data from the New South Wales (NSW) Midwives Data Collection (MDC), the Register of Congenital Conditions (RCC), the Admitted Patient Data Collection (APDC), and the Pharmaceutical Drugs of Addiction System (PHDAS). The master dataset is the meaningfully linked dataset that includes all, or the major, study data collections; it can be used to improve data quality, to calculate the SV, and can be tailored for different analyses. To identify hospital admissions in the periods before pregnancy, during pregnancy, and after birth, a statistical variable of time interval (SVTI) needs to be calculated. The methods and SPSS syntax for building a master dataset, calculating the SVTI, recoding the principal diagnoses of mental illness, and summarizing hospital admissions are described. Linked data preparation, including building the master dataset and calculating the SV, can improve data quality and enhance data function.
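As an illustration of the SVTI idea, the sketch below classifies a hospital admission relative to pregnancy using a date difference. The function name, the 40-week gestation default, and the classification rules are assumptions for illustration, not the paper's SPSS syntax.

```python
from datetime import date, timedelta

def classify_admission(admission, birth_date, gestation_weeks=40):
    """SVTI-style helper: days from birth (the statistical variable of
    time interval) plus a before/during/after-pregnancy classification."""
    conception = birth_date - timedelta(weeks=gestation_weeks)
    svti = (admission - birth_date).days  # negative before the birth
    if admission < conception:
        period = "before pregnancy"
    elif admission <= birth_date:
        period = "during pregnancy"
    else:
        period = "after birth"
    return svti, period
```

Applied to every linked admission record, this yields both the time-interval variable and the period flag used to summarize admissions per pregnancy.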
Sizable band gap in organometallic topological insulator
NASA Astrophysics Data System (ADS)
Derakhshan, V.; Ketabi, S. A.
2017-01-01
Based on first-principles calculations in which Ceperley-Alder (LSDA) and Perdew-Burke-Ernzerhof (GGA) exchange-correlation functionals were adopted, the electronic properties of an organometallic honeycomb lattice, a two-dimensional topological insulator, were calculated. In the presence of spin-orbit interaction, the bulk band gaps of organometallic lattices containing heavy atoms such as Au, Hg, Pt, and Tl were investigated. Our results show that the organometallic topological insulator based on mercury exhibits a wide bulk band gap of about 120 meV. Moreover, by fitting the conduction and valence bands to the band structure produced by density functional theory, the spin-orbit interaction parameters were extracted. Based on the calculated parameters, gapless edge states within the bulk insulating gap are indeed found for a finite-width strip of the two-dimensional organometallic topological insulator.
Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp
The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be obtained accurately using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment
NASA Astrophysics Data System (ADS)
Barnett, D. A., Jr.
1991-02-01
An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four locations and integral measurements at two locations in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base-case calculation using one-half-inch mesh spacing, finite-difference spatial differencing, an S(sub 16) quadrature, and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed with the spectral measurements to within 18 percent and with the integral measurements to within 24 percent. Variations on the base case using a few-group energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement, as did calculations using a linear nodal spatial differencing scheme with few-group cross sections. For the same mesh size, the nodal method required 2.2 times as much CPU time as the finite-difference method. A nodal calculation using a typical mesh spacing of 2 inches, with approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent yet required only 8 percent of the CPU time.
Bedside risk estimation of morbidly adherent placenta using simple calculator.
Maymon, R; Melcer, Y; Pekar-Zlotin, M; Shaked, O; Cuckle, H; Tovbin, J
2018-03-01
To construct a calculator for 'bedside' estimation of morbidly adherent placenta (MAP) risk based on ultrasound (US) findings. This retrospective study included all pregnant women with at least one previous cesarean delivery attending our US unit between December 2013 and January 2017. The examination was based on a scoring system that determines the probability of MAP. The study population included 471 pregnant women, 41 of whom (8.7%) were diagnosed with MAP. Based on the ROC curve, the most effective US criteria for detection of MAP were the presence of placental lacunae, obliteration of the utero-placental demarcation, and placenta previa. On multivariate logistic regression analysis, US findings of placental lacunae (OR = 3.5; 95% CI, 1.2-9.5; P = 0.01), obliteration of the utero-placental demarcation (OR = 12.4; 95% CI, 3.7-41.6; P < 0.0001), and placenta previa (OR = 10.5; 95% CI, 3.5-31.3; P < 0.0001) were associated with MAP. Combining these three parameters yielded a receiver operating characteristic curve with an area under the curve of 0.93 (95% CI, 0.87-0.97). Accordingly, we have constructed a simple calculator for 'bedside' estimation of MAP risk, mounted on the hospital's internet website ( http://www.assafh.org/Pages/PPCalc/index.html ). The risk estimation of MAP varies between 1.5 and 87%. The present calculator enables simple 'bedside' MAP estimation, facilitating accurate and adequate antenatal risk assessment.
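A logistic calculator of this kind turns reported odds ratios into a bedside probability via risk = 1/(1 + exp(-z)) with z = b0 + sum of ln(OR_i) * x_i over the binary ultrasound findings. The sketch below is hypothetical: the abstract does not report the fitted intercept, so B0 is a placeholder, and the function is illustrative rather than the published calculator.

```python
import math

# ln(OR) coefficients taken from the abstract's multivariate model.
LN_OR = {
    "lacunae": math.log(3.5),
    "obliteration": math.log(12.4),
    "previa": math.log(10.5),
}
B0 = -4.0  # placeholder intercept; the fitted value is not reported here

def map_risk(lacunae, obliteration, previa):
    """Logistic 'bedside' risk sketch: each argument is 0 or 1 for the
    absence/presence of the corresponding ultrasound finding."""
    z = (B0
         + LN_OR["lacunae"] * lacunae
         + LN_OR["obliteration"] * obliteration
         + LN_OR["previa"] * previa)
    return 1.0 / (1.0 + math.exp(-z))
```

With this placeholder intercept the risk runs from roughly 2% with no findings to roughly 89% with all three, broadly consistent with the 1.5-87% range quoted in the abstract.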
Calculations of dose distributions using a neural network model
NASA Astrophysics Data System (ADS)
Mathieu, R.; Martin, E.; Gschwind, R.; Makovicka, L.; Contassot-Vivier, S.; Bahi, J.
2005-03-01
The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to take a new approach to dosimetric calculation by employing neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconvenience, i.e., time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.
Development of a Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) and from two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations tended to differ most at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
Calculations of dose distributions using a neural network model.
Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J
2005-03-07
The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to take a new approach to dosimetric calculation by employing neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconvenience, i.e., time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.
Research on Signature Verification Method Based on Discrete Fréchet Distance
NASA Astrophysics Data System (ADS)
Fang, J. L.; Wu, W.
2018-05-01
This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication using a single signature feature. It addresses the heavy computational workload of extracting global-feature templates in online handwritten signature authentication and the problem of unreasonable signature-feature selection. In this experiment, the false recognition rate (FAR) and false rejection rate (FRR) of the signatures are calculated, and the average equal error rate (AEER) is computed. The feasibility of the combined-template scheme is verified by comparing the average equal error rates of the combined template and the original template.
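The discrete Fréchet distance underlying the template comparison can be computed with the standard Eiter-Mannila dynamic program; a minimal sketch over two point sequences (not the paper's multi-feature pipeline):

```python
import math

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines given as
    sequences of points (Eiter & Mannila coupling-distance DP)."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                # Advance on P, on Q, or on both; keep the cheapest coupling.
                ca[i][j] = max(min(ca[i - 1][j],
                                   ca[i - 1][j - 1],
                                   ca[i][j - 1]), d)
    return ca[n - 1][m - 1]
```

For verification, two parallel horizontal segments separated by one unit have discrete Fréchet distance exactly 1.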
NASA Astrophysics Data System (ADS)
Gololobova, E. G.; Gorichev, I. G.; Lainer, Yu. A.; Skvortsova, I. V.
2011-05-01
A procedure was proposed for calculating the acid-base equilibrium constants at an alumina/electrolyte interface from experimental data on the adsorption of singly charged ions (Na+, Cl-) at various pH values. The calculated constants (pK1° = 4.1, pK2° = 11.9, pK3° = 8.3, and pK4° = 7.7) are shown to agree with the values obtained from the experimental pH dependence of the electrokinetic potential and from potentiometric titration of Al2O3 suspensions.
Three-dimensional assessment of scoliosis based on ultrasound data
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Li, Hongjian; Yu, Bo
2015-12-01
In this study, an approach was proposed to assess the 3D scoliotic deformity based on ultrasound data. The 3D spine model was reconstructed by using a freehand 3D ultrasound imaging system. The geometric torsion was then calculated from the reconstructed spine model. A thoracic spine phantom set at a given pose was used in the experiment. The geometric torsion of the spine phantom calculated from the freehand ultrasound imaging system was 0.041 mm-1 which was close to that calculated from the biplanar radiographs (0.025 mm-1). Therefore, ultrasound is a promising technique for the 3D assessment of scoliosis.
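Geometric torsion along a reconstructed 3D spine curve is conventionally computed from the parametric formula tau = ((r' x r'') . r''') / |r' x r''|^2. A finite-difference sketch of that formula (an illustration, not the authors' implementation) is:

```python
import numpy as np

def geometric_torsion(points):
    """Torsion along a discrete 3D curve, sampled at uniform parameter
    steps, via nested finite-difference derivatives."""
    r = np.asarray(points, dtype=float)
    d1 = np.gradient(r, axis=0)      # ~ r'
    d2 = np.gradient(d1, axis=0)     # ~ r''
    d3 = np.gradient(d2, axis=0)     # ~ r'''
    cross = np.cross(d1, d2)
    denom = np.einsum('ij,ij->i', cross, cross)  # |r' x r''|^2
    num = np.einsum('ij,ij->i', cross, d3)       # (r' x r'') . r'''
    # Guard against straight segments where the denominator vanishes.
    with np.errstate(divide='ignore', invalid='ignore'):
        tau = np.where(denom > 1e-12, num / denom, 0.0)
    return tau
```

A unit helix (cos t, sin t, t) has constant torsion 1/2, which makes a convenient check for the discretization.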
Finite area combustor theoretical rocket performance
NASA Technical Reports Server (NTRS)
Gordon, Sanford; Mcbride, Bonnie J.
1988-01-01
Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than the IAC model.
On the binding of indeno[1,2-c]isoquinolines in the DNA-topoisomerase I cleavage complex.
Xiao, Xiangshu; Antony, Smitha; Pommier, Yves; Cushman, Mark
2005-05-05
An ab initio quantum mechanics calculation is reported which predicts the orientation of indenoisoquinoline 4 in the ternary cleavage complex formed from DNA and topoisomerase I (top1). The results of this calculation are consistent with the hypothetical structures previously proposed for the indenoisoquinoline-DNA-top1 ternary complexes based on molecular modeling, the crystal structure of a recently reported ternary complex, and the biological results obtained with a pair of diaminoalkyl-substituted indenoisoquinoline enantiomers. The results of these studies indicate that the pi-pi stacking interactions between the indenoisoquinolines and the neighboring DNA base pairs play a major role in determining binding orientation. The calculation of the electrostatic potential surface maps of the indenoisoquinolines and the adjacent DNA base pairs shows electrostatic complementarity in the observed binding orientation, leading to the conclusion that electrostatic attraction between the intercalators and the base pairs in the cleavage complex plays a major stabilizing role. On the other hand, the calculation of LUMO and HOMO energies of indenoisoquinoline 13b and neighboring DNA base pairs in conjunction with NBO analysis indicates that charge transfer complex formation plays a relatively minor role in stabilizing the ternary complexes derived from indenoisoquinolines, DNA, and top1. The results of these studies are important in understanding the existing structure-activity relationships for the indenoisoquinolines as top1 inhibitors and as anticancer agents, and they will be important in the future design of indenoisoquinoline-based top1 inhibitors.
Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru
2015-05-01
Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED), and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how this problem can be solved by the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.
Patient‐specific CT dosimetry calculation: a feasibility study
Xie, Huchen; Cheng, Jason Y.; Ning, Holly; Zhuge, Ying; Miller, Robert W.
2011-01-01
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and on calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation), for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were used to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were used to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans.
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%–20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient‐specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation. PACS numbers: 87.55.D‐, 87.57.Q‐, 87.53.Bn, 87.55.K‐ PMID:22089016
Nielsen, Jens E.; Gunner, M. R.; Bertrand García-Moreno, E.
2012-01-01
The pKa Cooperative http://www.pkacoop.org was organized to advance development of accurate and useful computational methods for structure-based calculation of pKa values and electrostatic energy in proteins. The Cooperative brings together laboratories with expertise and interest in theoretical, computational and experimental studies of protein electrostatics. To improve structure-based energy calculations it is necessary to better understand the physical character and molecular determinants of electrostatic effects. The Cooperative thus intends to foment experimental research into fundamental aspects of proteins that depend on electrostatic interactions. It will maintain a depository for experimental data useful for critical assessment of methods for structure-based electrostatics calculations. To help guide the development of computational methods the Cooperative will organize blind prediction exercises. As a first step, computational laboratories were invited to reproduce an unpublished set of experimental pKa values of acidic and basic residues introduced in the interior of staphylococcal nuclease by site-directed mutagenesis. The pKa values of these groups are unique and challenging to simulate owing to the large magnitude of their shifts relative to normal pKa values in water. Many computational methods were tested in this 1st Blind Prediction Challenge and critical assessment exercise. A workshop was organized in the Telluride Science Research Center to assess objectively the performance of many computational methods tested on this one extensive dataset. This volume of PROTEINS: Structure, Function, and Bioinformatics introduces the pKa Cooperative, presents reports submitted by participants in the blind prediction challenge, and highlights some of the problems in structure-based calculations identified during this exercise. PMID:22002877
Quantum chemical calculations for polymers and organic compounds
NASA Technical Reports Server (NTRS)
Lopez, J.; Yang, C.
1982-01-01
The relativistic effects of the orbiting electrons on a model compound were calculated. The computational method used was based on 'Modified Neglect of Differential Overlap' (MNDO). The compound tetracyanoplatinate was used since empirical measurements and calculations along "classical" lines had yielded many known properties. The purpose was to show that for large molecules relativistic effects could not be ignored and that these effects could be calculated to yield data in closer agreement with empirical measurements. Both the energy band structure and molecular orbitals are depicted.
Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.
Heislbetz, Sandra; Rauhut, Guntram
2010-03-28
A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.
Navy Nuclear Aircraft Carrier (CVN) Homeporting at Mayport: Background and Issues for Congress
2010-05-26
Available online at http://www.defenselink.mil/releases/release.aspx?releaseid=12600. Department of Defense, Quadrennial Defense Review Report, February 2010. Straight-line distances between the two locations were calculated with the "How Far Is It?" online distance calculator available at http://www.indo.com/cgi-bin/dist.
White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte
2014-06-01
The recently updated guidelines for dosimetry in brachytherapy in TG-186 recommend the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach in TG-43, particularly for low-energy brachytherapy sources. The Xoft Axxent is a low-energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is heterogeneous in both density and composition. Dosimetric calculations for seven APBI patients treated with the Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be represented in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries; these techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics. Changes in skin and PTV dose were highlighted. All simulated heterogeneous models showed a reduction in the DVH metrics that depends on the method of dose reporting and patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result.
Peak skin dose was also reduced by 10%-15% because the full backscatter assumed in TG-43 is absent near the skin surface. The balloon applicator also contributed to the reduced dose. Other ROIs showed differences depending on the method of dose reporting. TG-186-based calculations produce results that differ from TG-43 for the Axxent source, and the differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, the applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low-energy brachytherapy sources.
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
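The idea of decomposing the grid current into simple segments and summing their spectra can be illustrated numerically. The sketch below builds the idealized 120°-conduction line current of a six-pulse bridge (a generic stand-in, not the ITER PF circuit; the sampling and amplitudes are illustrative) and extracts harmonic magnitudes, recovering the characteristic 6k±1 harmonics with amplitude 1/n of the fundamental.

```python
import numpy as np

def six_pulse_current(theta, Id=1.0):
    """Idealized line current of a six-pulse bridge: +Id over 120 deg,
    -Id over 120 deg, zero elsewhere (a piecewise-constant waveform)."""
    th = np.mod(theta, 2*np.pi)
    i = np.zeros_like(th)
    i[(th > np.pi/6) & (th < 5*np.pi/6)] = Id
    i[(th > 7*np.pi/6) & (th < 11*np.pi/6)] = -Id
    return i

def harmonic_magnitudes(i, nmax=13):
    """Magnitudes of harmonic orders 1..nmax from one period of samples."""
    N = len(i)
    spec = np.abs(np.fft.rfft(i)) * 2.0 / N
    return spec[1:nmax+1]

theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)
h = harmonic_magnitudes(six_pulse_current(theta))
print(h[4] / h[0])  # 5th harmonic relative to fundamental, ~1/5
print(h[2] / h[0])  # triplen (3rd) harmonic, ~0
```

For this waveform the analytic Fourier series gives harmonics only at orders 6k±1 with relative magnitude 1/n, which the numerical spectrum reproduces.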
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
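A minimal version of such a Monte Carlo estimate can be sketched as follows; the burial-depth distribution here is an assumed lognormal chosen for illustration only, not the field statistics used by the authors.

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_probability(probe_depth_m, n_sim=100_000):
    """Fraction of simulated burial depths reachable by a probe of the
    given length. Burial depths are drawn from a lognormal with median
    ~1.0 m -- an assumed distribution for illustration."""
    depths = rng.lognormal(mean=0.0, sigma=0.5, size=n_sim)
    return np.mean(depths <= probe_depth_m)

# Detection probability grows with probing depth; a rescue planner could
# pick the smallest depth that exceeds a target probability.
for d in (1.0, 2.0, 3.0):
    print(d, round(detection_probability(d), 3))
```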
Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs
NASA Astrophysics Data System (ADS)
Simon, Victor; Johansson, Carl-Anders; Galar, Diego
2017-09-01
All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.
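As a small illustration of a macro-level indicator, the sketch below computes a windowed RMS of a synthetic current signal; the signal, window length, and load step are invented for the example.

```python
import numpy as np

def windowed_rms(signal, window):
    """RMS of the signal over consecutive non-overlapping windows --
    a simple macro-level KPI for load/energy consumption."""
    n = len(signal) // window
    chunks = signal[:n*window].reshape(n, window)
    return np.sqrt(np.mean(chunks**2, axis=1))

fs = 1000                                  # samples per second
t = np.arange(0, 2.0, 1/fs)
# 50 Hz current whose amplitude doubles at t = 1 s (a load step)
i = np.where(t < 1.0, 5.0, 10.0) * np.sin(2*np.pi*50*t)
kpi = windowed_rms(i, window=fs)           # one KPI value per second
print(kpi)                                 # ~[3.54, 7.07]
```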
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
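The residual-generation step can be illustrated with a deliberately simplified scalar model; the invention uses filters over multivariable linear models, so the gain model, noise level, and threshold below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Healthy system: y = 2.0*u + sensor noise. Fit the nominal gain from
# healthy input/output data (least squares).
u_train = rng.uniform(-1, 1, 500)
y_train = 2.0 * u_train + rng.normal(0, 0.05, 500)
a_hat = np.dot(u_train, y_train) / np.dot(u_train, u_train)

def residual(u, y):
    """Output error residual: measured output minus model prediction."""
    return y - a_hat * u

def is_faulty(u, y, threshold=0.3):
    """Flag a hypothesized failure when the residual exceeds a threshold
    (an assumed value; a real detector would account for noise statistics)."""
    return abs(residual(u, y)) > threshold

print(bool(is_faulty(1.0, 2.02)))   # healthy sample
print(bool(is_faulty(1.0, 1.2)))    # degraded gain
```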
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
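The primary-dose part of such pixel-based methods rests on density-scaled (radiological) path lengths. A toy one-dimensional version, with an assumed attenuation coefficient and invented voxel densities, shows why a low-density cavity raises primary transmission along the ray:

```python
import numpy as np

def primary_transmission(densities, voxel_cm, mu_water=0.05):
    """Primary-photon transmission along a ray through a 1D stack of
    voxels. Attenuation uses the radiological (density-scaled) path
    length; mu_water (cm^-1) is an assumed coefficient for illustration."""
    radiological_path = np.sum(np.asarray(densities) * voxel_cm)
    return np.exp(-mu_water * radiological_path)

water = [1.0] * 10                          # 10 cm of water
lung = [1.0]*4 + [0.25]*3 + [1.0]*3         # low-density cavity mid-ray
print(primary_transmission(water, 1.0))     # exp(-0.5) ~ 0.607
print(primary_transmission(lung, 1.0))      # less attenuation -> higher
```

Scatter dose, as the abstract emphasizes, is not captured by such first-order ray tracing alone, which is where the augmented first-scatter and residual multiple-scatter terms come in.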
Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.
2017-10-01
There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on finite-difference solutions of the nonlinear Boltzmann kinetic equation and on solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of method depends on the problem to be solved and on the parameters of the calculated flows. Qualitative theoretical arguments help to determine the range of parameters over which each method is effective; however, it is advisable to compare calculations of classical problems performed by the different methods with different parameters to obtain quantitative confirmation of this reasoning. The paper provides the results of calculations performed by the authors with the Direct Simulation Monte Carlo method and with finite-difference methods for solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are drawn on selecting a particular method for flow simulations in various ranges of flow parameters.
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Petersen, Philippe A D; Silva, Andreia S; Gonçalves, Marcos B; Lapolli, André L; Ferreira, Ana Maria C; Carbonari, Artur W; Petrilli, Helena M
2014-06-03
In this work, perturbed angular correlation (PAC) spectroscopy is used to study differences in the nuclear quadrupole interactions of Cd probes in DNA molecules of mice infected with the Y-strain of Trypanosoma cruzi. The possibility of investigating the local genetic alterations in DNA, which occur along generations of mice infected with T. cruzi, using hyperfine interactions obtained from PAC measurements and density functional theory (DFT) calculations in DNA bases is discussed. A comparison of DFT calculations with PAC measurements could determine the type of Cd coordination in the studied molecules. To the best of our knowledge, this is the first attempt to use DFT calculations and PAC measurements to investigate the local environment of Cd ions bound to DNA bases in mice infected with Chagas disease. The obtained results also allowed the detection of local changes occurring in the DNA molecules of different generations of mice infected with T. cruzi, opening the possibility of using this technique as a complementary tool in the characterization of complicated biological systems.
Renuga Devi, T S; Sharmi kumar, J; Ramkumaar, G R
2015-02-25
The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm(-1) and 4000-50 cm(-1), respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation-consistent polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined based on it. The complete assignments were performed based on the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. (1)H and (13)C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and compared with experimental results. Stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using Natural Bond Orbital (NBO) analysis. The first-order hyperpolarizability (β) and Molecular Electrostatic Potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as the Fukui functions were calculated to explain the chemically reactive sites in the molecule. Copyright © 2014 Elsevier B.V. All rights reserved.
Graphing Calculators, the CBL2[TM] and TI-Interactive[TM] in High School Science.
ERIC Educational Resources Information Center
Molnar, Bill
This collection of activities is designed to show how TI-Interactive[TM] and Calculator-based Laboratories (CBL) can be used to explore topics in high school science. The activities address such topics as specific heat, Boyle's Law, Newton's Law of Cooling, and Antarctic Ozone Levels. Teaching notes and calculator instructions are included as are…
Promoting Graphical Thinking: Using Temperature and a Graphing Calculator to Teach Kinetics Concepts
ERIC Educational Resources Information Center
Cortes-Figueroa, Jose E.; Moore-Russo, Deborah A.
2004-01-01
A combination of graphical thinking with chemical and physical theories in the classroom is encouraged by using the Calculator-Based Laboratory System (CBL) with a temperature sensor and graphing calculator. The theory of first-order kinetics is logically explained with the aid of the cooling or heating of the metal bead of the CBL's temperature…
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (i) Calculate the 5-cycle city and highway fuel economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise...
NASA Technical Reports Server (NTRS)
Gokoglu, S. A.; Chen, B. K.; Rosner, D. E.
1984-01-01
The computer program based on multicomponent chemically frozen boundary layer (CFBL) theory for calculating vapor and/or small particle deposition rates is documented. A specific application to perimeter-averaged Na2SO4 deposition rate calculations on a cylindrical collector is demonstrated. The manual includes typical program input and output for users.
Project Echo: System Calculations
NASA Technical Reports Server (NTRS)
Ruthroff, Clyde L.; Jakes, William C., Jr.
1961-01-01
The primary experimental objective of Project Echo was the transmission of radio communications between points on the earth by reflection from the balloon satellite. This paper describes system calculations made in preparation for the experiment and their adaptation to the problem of interpreting the results. The calculations include path loss computations, expected audio signal-to-noise ratios, and received signal strength based on orbital parameters.
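Path-loss computations for a passive spherical reflector follow the bistatic radar equation; the sketch below evaluates it with illustrative numbers. The transmit power, antenna gains, frequency, and slant ranges are assumptions for the example, not the actual Project Echo system parameters.

```python
import math

def echo_received_power_dbm(pt_w, gt_db, gr_db, freq_hz, r1_m, r2_m, radius_m):
    """Received power (dBm) for a link bounced off a passive conducting
    sphere, via the bistatic radar equation. The sphere's radar cross
    section in the optical region is pi * radius^2."""
    lam = 3e8 / freq_hz
    sigma = math.pi * radius_m**2
    gt, gr = 10**(gt_db/10), 10**(gr_db/10)
    pr = pt_w * gt * gr * lam**2 * sigma / ((4*math.pi)**3 * r1_m**2 * r2_m**2)
    return 10 * math.log10(pr * 1000)

# 10 kW, ~44 dB dishes, 2.39 GHz, ~1600 km slant ranges, 15 m balloon
# radius -- all illustrative values.
print(echo_received_power_dbm(1e4, 44, 44, 2.39e9, 1.6e6, 1.6e6, 15))
```

The R1²·R2² dependence (rather than R⁴ for a monostatic radar, or R² for a one-way link) is what made the received signal so weak and the signal-to-noise budgeting so critical.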
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shvetsov, N. K., E-mail: elmash@em.ispu.ru
2016-11-15
The results of calculations of the increase in losses in an induction motor with frequency control and different forms of the supply voltage are presented. The calculations were performed by an analytic method based on harmonic analysis of the supply voltage as well as numerical calculation of the electromagnetic processes by the finite-element method.
Lead optimization mapper: automating free energy calculations for lead optimization.
Liu, Shuai; Wu, Yujie; Lin, Teng; Abel, Robert; Redmann, Jonathan P; Summa, Christopher M; Jaber, Vivian R; Lim, Nathan M; Mobley, David L
2013-09-01
Alchemical free energy calculations hold increasing promise as an aid to drug discovery efforts. However, applications of these techniques in discovery projects have been relatively few, partly because of the difficulty of planning and setting up calculations. Here, we introduce lead optimization mapper, LOMAP, an automated algorithm to plan efficient relative free energy calculations between potential ligands within a substantial library of perhaps hundreds of compounds. In this approach, ligands are first grouped by structural similarity primarily based on the size of a (loosely defined) maximal common substructure, and then calculations are planned within and between sets of structurally related compounds. An emphasis is placed on ensuring that relative free energies can be obtained between any pair of compounds without combining the results of too many different relative free energy calculations (to avoid accumulation of error) and by providing some redundancy to allow for the possibility of error and consistency checking and provide some insight into when results can be expected to be unreliable. The algorithm is discussed in detail and a Python implementation, based on both Schrödinger's and OpenEye's APIs, has been made available freely under the BSD license.
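The planning idea, connecting all ligands through high-similarity edges and then adding redundant edges so errors can be cross-checked around cycles, can be sketched greedily. Everything here (the similarity scores, the minimum-degree redundancy rule) is a simplified stand-in for LOMAP's actual maximal-common-substructure rules:

```python
import itertools

def plan_perturbations(names, sim, min_degree=2):
    """Greedy sketch of LOMAP-style planning: connect all ligands with
    high-similarity edges (spanning tree), then add edges until every
    ligand has >= min_degree neighbours for redundancy/cycle closure.
    `sim` maps ligand pairs to a similarity in [0, 1]."""
    edges = sorted(sim, key=sim.get, reverse=True)
    parent = {n: n for n in names}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    plan, degree = [], {n: 0 for n in names}
    for a, b in edges:                      # spanning tree on best edges
        if find(a) != find(b):
            parent[find(a)] = find(b)
            plan.append((a, b)); degree[a] += 1; degree[b] += 1
    for a, b in edges:                      # redundancy pass
        if (a, b) not in plan and (degree[a] < min_degree or degree[b] < min_degree):
            plan.append((a, b)); degree[a] += 1; degree[b] += 1
    return plan

names = ["L1", "L2", "L3", "L4"]
sim = {pair: s for pair, s in zip(itertools.combinations(names, 2),
                                  [0.9, 0.2, 0.3, 0.8, 0.4, 0.7])}
print(plan_perturbations(names, sim))
```

Keeping every ligand on at least one cycle is what allows the consistency checks the abstract mentions: around a closed cycle, the relative free energies should sum to zero.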
Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia
2016-01-01
Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis. PMID:27792763
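Two frequency-based statistics of the kind such a toolkit computes, expected heterozygosity and Wright's Fst, illustrate how PopSc-style calculations start from allele frequencies rather than raw sequences. The formulas are the textbook ones; equal subpopulation sizes are assumed for the Fst sketch.

```python
def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2), from the allele frequencies at one locus
    (frequencies must sum to 1)."""
    return 1.0 - sum(p * p for p in freqs)

def fst(subpop_freqs):
    """Wright's Fst for one biallelic locus from per-subpopulation
    frequencies of one allele, assuming equal subpopulation sizes:
    Fst = (Ht - mean Hs) / Ht."""
    k = len(subpop_freqs)
    hs = sum(2 * p * (1 - p) for p in subpop_freqs) / k
    p_bar = sum(subpop_freqs) / k
    ht = 2 * p_bar * (1 - p_bar)
    return (ht - hs) / ht

print(expected_heterozygosity([0.5, 0.5]))  # 0.5
print(round(fst([0.2, 0.8]), 3))            # 0.36
```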
Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min
2011-01-01
We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA) (GP-GRNN) to improve the accuracy of density functional theory (DFT) calculations for the homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combined quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) reduce the root-mean-square (RMS) error of the calculated homolysis BDE of the 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1), respectively, for the B3LYP/6-31G(d) calculation. The newly developed GP-GRNN approach further reduces the RMS error to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G(d) can improve the accuracy of calculated homolysis BDE in quantum chemistry and can predict homolysis BDE that cannot be obtained experimentally.
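A GRNN is, at its core, a Gaussian-kernel-weighted average of training targets (Nadaraya-Watson regression). The sketch below learns a synthetic systematic error from stand-in descriptors; the descriptors, error model, and bandwidth are invented for illustration and do not reproduce the paper's GRA/PCA pipeline.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """Generalized regression neural network (Nadaraya-Watson form):
    each prediction is a Gaussian-kernel-weighted average of the
    training targets, with bandwidth sigma."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :])**2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))        # stand-in molecular descriptors
err = X[:, 0]**2 - 0.5 * X[:, 1]        # synthetic systematic DFT error
pred = grnn_predict(X, err, X[:5])      # correction at 5 training points
print(np.abs(pred - err[:5]).max())     # small in-sample residual
```

In the paper's setting, the learned quantity would be the deviation of B3LYP homolysis BDEs from reference values, so the GRNN output acts as an additive correction.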
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Chen, Jun; Zhang, Zhaojun; Shen, Xiangjian; Fu, Bina; Zhang, Dong H.
2018-04-01
We constructed a nine-dimensional (9D) potential energy surface (PES) for the dissociative chemisorption of H2O on a rigid Ni(100) surface using the neural network method based on roughly 110 000 energies obtained from extensive density functional theory (DFT) calculations. The resulting PES is accurate and smooth, based on the small fitting errors and the good agreement between the fitted PES and the direct DFT calculations. Time dependent wave packet calculations also showed that the PES is very well converged with respect to the fitting procedure. The dissociation probabilities of H2O initially in the ground rovibrational state from 9D quantum dynamics calculations are quite different from the site-specific results from the seven-dimensional (7D) calculations, indicating the importance of full-dimensional quantum dynamics to quantitatively characterize this gas-surface reaction. It is found that the validity of the site-averaging approximation with exact potential holds well, where the site-averaging dissociation probability over 15 fixed impact sites obtained from 7D quantum dynamics calculations can accurately approximate the 9D dissociation probability for H2O in the ground rovibrational state.
An Informal Overview of the Unitary Group Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnad, V.; Escher, J.; Kruse, M.
The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations, and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, it is seen that the complexity rises dramatically, and the UGA is not at this time a simpler alternative to Racah algebra-based approaches.
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
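The mechanics of the approach, fitting a cheap surrogate to a handful of expensive code runs and then querying the surrogate, can be shown with a one-variable quadratic; the "expensive code" below is a made-up factor-of-safety function, not the centrifuge-based model.

```python
import numpy as np

def fit_response_surface(x_runs, y_runs, degree=2):
    """Fit a polynomial response surface to a few expensive code runs."""
    return np.polynomial.polynomial.polyfit(x_runs, y_runs, degree)

def surrogate(coeffs, x):
    """Evaluate the fitted response surface -- cheap to call many times."""
    return np.polynomial.polynomial.polyval(x, coeffs)

def expensive_code(phi):
    """Stand-in for the long-running analysis: factor of safety as a
    function of friction angle (an invented relationship)."""
    return 0.8 + 0.04*phi - 0.0003*phi**2

phi_runs = np.array([10.0, 20.0, 30.0, 40.0])   # four code calculations
c = fit_response_surface(phi_runs, expensive_code(phi_runs))
print(round(surrogate(c, 25.0), 4))              # matches the code: 1.6125
```

In a Monte Carlo reliability analysis, `surrogate` would then be evaluated for thousands of sampled soil-parameter sets at negligible cost.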
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis reuses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis reuses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
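The re-use of common terms mentioned above can be illustrated by caching the coefficients of the Zernike radial polynomials, which recur across moment orders; this is the generic textbook formulation, not the patented pipeline.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def _coef(n, m, s):
    """Cached radial-polynomial coefficient. Caching these shared terms
    across moment orders is the kind of re-use the text refers to."""
    return ((-1)**s * factorial(n - s) //
            (factorial(s) * factorial((n + m)//2 - s) * factorial((n - m)//2 - s)))

def zernike_radial(n, m, rho):
    """Zernike radial polynomial R_n^m(rho), for n >= m >= 0, n - m even.
    The full moment integrates image intensity against R_n^m(rho)*e^{-i m theta}
    over the unit disk; only the radial part is shown here."""
    return sum(_coef(n, m, s) * rho**(n - 2*s) for s in range((n - m)//2 + 1))

print(zernike_radial(2, 0, 1.0))   # R_2^0(1) = 1
print(zernike_radial(2, 0, 0.0))   # R_2^0(0) = -1  (R_2^0 = 2*rho^2 - 1)
print(zernike_radial(4, 2, 1.0))   # R_n^m(1) = 1 for all valid n, m
```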
Ertl, P
1998-02-01
Easy-to-use, interactive, and platform-independent WWW-based tools are ideal for the development of chemical applications. By using newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of the synthetic organic chemist. At Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of the calculated substituent parameters are also discussed, including automatic design of molecules with desired properties and creation of targeted virtual combinatorial libraries.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from the panel method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth the time required for the base solution.
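The differencing procedure can be sketched generically: solve the same influence-matrix system for the base and perturbed geometries, then difference the loadings. Everything below is an invented toy stand-in for a panel method (the "influence matrix" and loading distribution are fabricated for illustration), not the NTRS code.

```python
import numpy as np

def panel_loading(span):
    """Toy 'panel method': solve an influence-matrix system for the
    loading on a wing of the given span (purely illustrative physics)."""
    n = 20
    y = np.linspace(-span / 2, span / 2, n)          # control points
    # invented symmetric influence matrix + freestream right-hand side
    A = 1.0 / (1.0 + np.abs(y[:, None] - y[None, :]))
    b = np.sqrt(np.maximum(0.0, 1.0 - (2.0 * y / span) ** 2))
    return np.linalg.solve(A, b)

dspan = 0.1
base = panel_loading(span=10.0)
pert = panel_loading(span=10.0 + dspan)   # perturbed planform
dload = pert - base                       # differenced perturbation in loading
grad = dload / dspan                      # finite-difference loading gradient
```

The time saving the abstract describes would come from reusing base-planform intermediates across the two solves, which this toy omits.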
Effect of wave function on the proton induced L XRP cross sections for 62Sm and 74W
NASA Astrophysics Data System (ADS)
Shehla, Kaur, Rajnish; Kumar, Anil; Puri, Sanjiv
2015-08-01
The Lk (k = ℓ, α, β, γ) X-ray production cross sections have been calculated for 74W and 62Sm at incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) subshell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR theory, in order to assess the influence of the wave function on the X-ray production (XRP) cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.
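The way subshell ionization cross sections combine with fluorescence yields (ω3) and Coster-Kronig factors (f_ij) into a production cross section can be made concrete for the Lα line. The relation below is the standard textbook one; the numerical inputs are placeholders, not the DF/DHS data sets used in the paper.

```python
# Standard relation for the L-alpha X-ray production cross section:
#   sigma(L-alpha) = [sigma_L3 + f23*sigma_L2 + (f13 + f12*f23)*sigma_L1] * w3 * F3a
# where sigma_Li are subshell ionization cross sections, f_ij the
# Coster-Kronig vacancy-transfer probabilities, w3 the L3 fluorescence
# yield, and F3a the fractional radiative width of the L-alpha line.
def sigma_l_alpha(s1, s2, s3, f12, f13, f23, w3, F3a):
    vacancies_l3 = s3 + f23 * s2 + (f13 + f12 * f23) * s1  # total L3 vacancies
    return vacancies_l3 * w3 * F3a

# placeholder inputs (cross sections in barns, yields dimensionless)
sig = sigma_l_alpha(s1=10.0, s2=25.0, s3=60.0,
                    f12=0.1, f13=0.3, f23=0.15, w3=0.25, F3a=0.8)
```

Swapping in DF- versus DHS-based parameter sets in such a relation is exactly how the wave-function sensitivity of the XRP cross sections is probed.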
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.
EuroFIR Guideline on calculation of nutrient content of foods for food business operators.
Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul
2018-01-01
This paper presents a Guideline for calculating nutrient content of foods by calculation methods for food business operators and presents data on compliance between calculated values and analytically determined values. In the EU, calculation methods are legally valid to determine the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and use in the Czech Republic is described and future application to other Member States is discussed. Limitations of calculation methods and the importance of high quality food composition data are discussed. Copyright © 2017. Published by Elsevier Ltd.
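The step-by-step recipe method reduces to a weighted sum over ingredients with retention factors and a yield correction; the arithmetic can be sketched as below. The ingredient data are invented for illustration, not EuroFIR compositional data.

```python
# Nutrient content of a cooked dish per 100 g, from ingredient composition:
# each ingredient's contribution is scaled by a retention factor (fraction
# of the nutrient kept after cooking), and the total is divided by the
# cooked yield weight of the dish.
def nutrient_per_100g(ingredients, yield_g):
    """ingredients: list of (grams, nutrient_per_100g_raw, retention_factor)."""
    total = sum(g * c / 100.0 * rf for g, c, rf in ingredients)
    return total / yield_g * 100.0

# invented example: vitamin C (mg) in a cooked vegetable dish
dish = [(200.0, 20.0, 0.7),   # 200 g of a vegetable, 20 mg/100 g, 70% retained
        (100.0, 5.0, 0.8)]    # 100 g of another ingredient
vit_c = nutrient_per_100g(dish, yield_g=250.0)  # water loss: 300 g -> 250 g
```

The choice of retention factors and of the yield measurement is exactly where the Guideline's harmonized procedure matters most.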
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To obtain the CMOs of proteins reliably, we are engaged in the research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the conventional file-based and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and the modified grid-free method for evaluation of the pure-XC term. With this method, the Coulomb, Fock-exchange, and pure-XC terms can all be obtained by simple linear-algebraic procedures in the SCF loop. We can therefore expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
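The Cholesky-based Coulomb build described above can be illustrated with plain linear algebra: decompose the two-electron-integral matrix (pq|rs) into Cholesky vectors and contract them with the density in two cheap steps. This is a toy sketch, not ProteinDF code — the "ERI" matrix is fabricated to be positive definite and lacks the full 8-fold symmetry of real integrals.

```python
import numpy as np

n = 6                                    # basis size (illustrative)
rng = np.random.default_rng(1)
# fabricate a symmetric positive-definite "ERI" matrix M[(pq),(rs)]
B = rng.random((n * n, n * n))
M = B @ B.T + n * n * np.eye(n * n)
eri = M.reshape(n, n, n, n)

D = rng.random((n, n))
D = 0.5 * (D + D.T)                      # symmetric density matrix

# conventional Coulomb build: J_pq = sum_rs (pq|rs) D_rs
J_ref = np.einsum('pqrs,rs->pq', eri, D)

# Cholesky route: M = L L^T, one n x n matrix per Cholesky vector
L = np.linalg.cholesky(M)
Lv = L.T.reshape(-1, n, n)               # Lv[a, p, q] = L[(pq), a]
gamma = np.einsum('aij,ij->a', Lv, D)    # first contraction, O(naux * n^2)
J_chol = np.einsum('aij,a->ij', Lv, gamma)
```

Both contractions are ordinary matrix-vector algebra, which is why the SCF loop can be handed wholesale to an optimized BLAS on a distributed-memory machine.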
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurov, A. N.; Bulyarskiy, S. V.; Risovaniy, V. D.
An analysis of available and promising developments is carried out in the field of power elements based on β decay. The possible fabrication technologies are described, and the efficiency of the power sources manufactured with them is calculated. The possibility of designing a self-charging supercapacitor based on carbon nanotubes is considered with the use of ⁶³Ni and ¹⁴C isotopes, and theoretical calculation confirms the promising nature of this line of research.
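The kind of efficiency estimate mentioned above starts from the isotope's specific activity and mean beta energy. A back-of-the-envelope sketch for ⁶³Ni follows; the half-life (~100.1 y) and mean beta energy (~17.4 keV) are quoted from memory and the conversion efficiency is an assumed placeholder — consult nuclear data tables for real work.

```python
import math

# Specific power of a beta source from first principles:
#   specific activity A = ln(2)/t_half * N_A/M,  power = A * E_mean * eff
N_A = 6.022e23          # Avogadro's number, 1/mol
EV = 1.602e-19          # J per eV
YEAR = 3.156e7          # seconds per year

t_half = 100.1 * YEAR   # 63Ni half-life (approximate)
M = 63.0                # molar mass, g/mol
E_mean = 17.4e3         # mean beta energy, eV (approximate)

A_spec = math.log(2) / t_half * N_A / M   # decays per second per gram
P_spec = A_spec * E_mean * EV             # W per gram of pure 63Ni
eff = 0.05                                # assumed conversion efficiency
P_out = P_spec * eff                      # usable electrical power per gram
```

The milliwatt-per-gram scale that falls out of this estimate is why such sources suit trickle-charging a supercapacitor rather than powering a load directly.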
US Air Force 1989 Research Initiation Program. Volume 2.
1992-06-25
University of Minnesota-Duluth Specialty: Inorganic Chemistry Specialty: Mechanics Dr. Satish Chandra Mr. Asad Yousuf Kansas State University Savannah...the Study Van der Waals forces in capillary tubes have previously been calculated by Philip [1977b]. His study was based on the Hamaker theory, which...important in condensed media, are not taken into account by the Hamaker theory. Calculations based on the Hamaker theory are often based on an unrealistic
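The Hamaker-theory calculation referenced in this fragment rests on the standard non-retarded result that two parallel half-spaces a distance d apart attract with pressure P = A_H / (6 π d³). A minimal sketch (the Hamaker constant value is a typical order of magnitude, not a fitted material constant):

```python
import math

def hamaker_pressure(A_H, d):
    """Non-retarded van der Waals attractive pressure (Pa) between two
    parallel half-spaces separated by d metres, per Hamaker theory."""
    return A_H / (6.0 * math.pi * d ** 3)

A_H = 1e-19                        # typical Hamaker constant, J (illustrative)
p1 = hamaker_pressure(A_H, 1e-9)   # 1 nm gap
p2 = hamaker_pressure(A_H, 2e-9)   # doubling the gap cuts the pressure 8-fold
```

The steep 1/d³ dependence is also why retardation and condensed-media screening effects, which Hamaker theory omits, matter in the regime the fragment criticizes.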
Van Belleghem, Griet; Devos, Stefanie; De Wit, Liesbet; Hubloue, Ives; Lauwaert, Door; Pien, Karen; Putman, Koen
2016-01-01
Injury severity scores are important in the context of developing European and national goals on traffic safety, health-care benchmarking, and improving patient communication. Various severity scores are available, mostly based on the Abbreviated Injury Scale (AIS) or the International Classification of Diseases (ICD). The aim of this paper is to compare the predictive value for in-hospital mortality among the various severity scores when only International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) codes are reported. To estimate severity scores based on the AIS lexicon, ICD-9-CM codes were converted with the ICD Programs for Injury Categorization (ICDPIC) and four AIS-based severity scores were derived: Maximum AIS (MaxAIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Exponential Injury Severity Score (EISS). Based on ICD-9-CM, six severity scores were calculated. Determined by the number of injuries taken into account and the means by which survival risk ratios (SRRs) were calculated, four different approaches were used to calculate the ICD-9-based Injury Severity Scores (ICISS). The Trauma Mortality Prediction Model (TMPM) was calculated with the ICD-9-CM-based model-averaged regression coefficients (MARC) for both the single worst injury and multiple injuries. Severity scores were compared via model discrimination and calibration. Model comparisons were performed separately for the severity scores based on the single worst injury and on multiple injuries. For the ICD-9-based scales, the estimated area under the receiver operating characteristic curve (AUROC) ranges between 0.94 and 0.96, while for the AIS-based scales it ranges between 0.72 and 0.76. The intercept in the calibration plots is not significantly different from 0 for MaxAIS, ICISS and TMPM. When only ICD-9-CM codes are reported, ICD-9-CM-based severity scores perform better than severity scores based on conversion to AIS. Copyright © 2015 Elsevier Ltd. All rights reserved.
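The ICISS variants compared above reduce to simple arithmetic over survival risk ratios (SRRs): the multiple-injury score multiplies the SRRs of all recorded injuries, while the single-worst-injury variant keeps only the lowest. A minimal sketch with an invented SRR table (the codes and values are illustrative, not published SRRs):

```python
from math import prod

def iciss(codes, srr):
    """ICD-9-based Injury Severity Score: product of the survival risk
    ratios of all recorded injuries (1.0 = certain survival)."""
    return prod(srr[c] for c in codes)

def iciss_worst(codes, srr):
    """Single-worst-injury variant: the lowest SRR alone."""
    return min(srr[c] for c in codes)

# invented SRR table and patient record (ICD-9-CM codes as strings)
srr = {'807.0': 0.95, '861.21': 0.80, '864.04': 0.70}
patient = ['807.0', '861.21', '864.04']
multi = iciss(patient, srr)        # 0.95 * 0.80 * 0.70
worst = iciss_worst(patient, srr)  # 0.70
```

How the SRR table itself is estimated (and whether it is derived from single- or multiple-injury patients) is what distinguishes the four ICISS approaches in the paper.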
Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI
NASA Astrophysics Data System (ADS)
Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen
2018-01-01
This paper focuses on cross calibrating the GaoFen-1 (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF): one based on the HJ-1A HSI image and the other on ground-measured reflectance. However, the HSI image and the ground-measured reflectance were acquired on dates different from the WFV and OLI overpasses. Three groups of regions of interest (ROIs) were chosen for cross calibration based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent across the different groups of ROIs and SBAF calculation methods. The uncertainty of the cross calibration was analyzed, and the influence of the SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of the SBAF was below 3% for bands 1 to 3. The two other large uncertainties in this cross calibration were atmospheric variation and low ground reflectance.
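An SBAF computation boils down to band-averaging a reflectance spectrum with each sensor's relative spectral response (RSR) and taking the ratio. The sketch below uses synthetic Gaussian RSRs and a synthetic target spectrum purely for illustration; the real calibration uses measured HSI or ground spectra and the published WFV/OLI RSR curves.

```python
import numpy as np

def band_average(refl, rsr):
    """RSR-weighted band-average reflectance (uniform wavelength grid,
    so the integration step cancels in the weighted mean)."""
    return np.sum(refl * rsr) / np.sum(rsr)

wl = np.linspace(400.0, 900.0, 501)                  # wavelength grid, nm
refl = 0.1 + 2e-4 * (wl - 400.0)                     # synthetic target spectrum
rsr_ref = np.exp(-0.5 * ((wl - 560.0) / 30.0) ** 2)  # reference band (OLI-like)
rsr_tgt = np.exp(-0.5 * ((wl - 580.0) / 40.0) ** 2)  # target band (WFV-like)

# SBAF converts the reference sensor's band reflectance into the target band
sbaf = band_average(refl, rsr_tgt) / band_average(refl, rsr_ref)
```

Because the SBAF depends on the spectrum of the calibration site, deriving it from different HSI images or ground spectra is what drives the sub-3% uncertainty quoted above.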
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.
Desai, Trunil S; Srivastava, Shireesh
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates, which can be used in strain analysis and design. Processing and analysis of labeling data for the calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit (EMU) framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft Windows 7 and 10, as well as in Linux Mint 18.2. The availability of free and open-source 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
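The Monte-Carlo uncertainty estimation mentioned above can be sketched generically: perturb the measurements with their noise, refit, and take the spread of the refitted parameters as the flux standard deviations. The linear "measurement model" below is an invented stand-in for an EMU-based flux model, not FluxPyt code.

```python
import numpy as np

rng = np.random.default_rng(7)
# stand-in linear measurement model:  y = X @ flux + noise
X = rng.random((30, 3))
true_flux = np.array([2.0, 1.0, 0.5])
sigma = 0.02
y = X @ true_flux + rng.normal(0.0, sigma, 30)

fit = np.linalg.lstsq(X, y, rcond=None)[0]   # best-fit "fluxes"

# Monte-Carlo: add fresh noise at the measurement level, refit, repeat
samples = []
for _ in range(500):
    y_mc = y + rng.normal(0.0, sigma, 30)
    samples.append(np.linalg.lstsq(X, y_mc, rcond=None)[0])
flux_sd = np.std(samples, axis=0)            # per-flux standard deviation
```

In real 13C-MFA the refit is a nonlinear least-squares over the EMU balance equations, but the resampling logic is the same.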
NASA Astrophysics Data System (ADS)
Cai, Kaicong; Zheng, Xuan; Du, Fenfen
2017-08-01
The spectroscopy of amide-I vibrations has been widely utilized for understanding the dynamical structure of polypeptides. For the modeling of amide-I spectra, two frequency maps were built for a β-peptide analogue (N-ethylpropionamide, NEPA) in a number of solvents using different schemes: a molecular-mechanics-force-field-based map (GM map) and a DFT-calculation-based map (GD map). During map parameterization, the electrostatic potentials on the amide unit originating from the solvents and the peptide backbone were correlated with the amide-I frequency shift from the gas phase to the solution phase. The GM map is easier to construct, with negligible computational cost, since the frequency calculations for the samples are purely force-field based, while the GD map uses sophisticated DFT calculations on representative solute-solvent clusters and brings insight into the electronic structures of solvated NEPA and its chemical environments. The results show that the amide-I frequencies predicted by the maps are sensitive to the solvation environment and exhibit characters specific to the map protocols, and the obtained vibrational parameters are in satisfactory agreement with experimental amide-I spectra of NEPA in the solution phase. Although the maps based on the different theoretical schemes have their own advantages and disadvantages, both show potential for interpreting the amide-I spectra of β-peptides.
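At bottom, a frequency map of the GM/GD type is a linear regression of the amide-I frequency shift on electrostatic potentials sampled at sites on the amide unit. The generic sketch below fits such a map to synthetic data; the four sites, the coefficient values, and the gas-phase frequency are all illustrative assumptions, not the published parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic training set: potentials at 4 amide-unit sites, 200 samples
phi = rng.normal(0.0, 0.01, (200, 4))                   # a.u., illustrative
true_coeff = np.array([800.0, -450.0, 300.0, -150.0])   # cm^-1 per a.u.
w_gas = 1717.0                                          # gas-phase frequency, cm^-1 (illustrative)
shift = phi @ true_coeff + rng.normal(0.0, 0.5, 200)    # noisy "reference" shifts

# fit the map:  shift ~ sum_i l_i * phi_i + const
A = np.hstack([phi, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, shift, rcond=None)

def predict_freq(phi_sites):
    """Map-predicted amide-I frequency for one set of site potentials."""
    return w_gas + phi_sites @ coef[:4] + coef[4]
```

The GM and GD schemes differ only in where the reference shifts come from (force-field versus DFT cluster calculations); the regression step is shared.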
NASA Astrophysics Data System (ADS)
Ma, Jing; Jiang, Nan; Li, Hui
Hydrogen-bonding interactions play an important role in solution. The non-classical nature of hydrogen bonding requires resource-demanding quantum mechanical (QM) calculations. The molecular mechanics (MM) method, with a much lower computational load, is applicable to large systems. Combining QM and MM is an efficient way to treat solutions. Taking advantage of the low-cost energy-based fragmentation QM approach (in which the molecule is divided into several subsystems, and a QM calculation is carried out on each subsystem embedded in the environment of the background charges of the distant parts), fragmentation-based QM/MM and polarization models have been implemented for the modeling of molecules in aqueous solution. Within the framework of the fragmentation-based QM/MM hybrid model, the solute is treated by the fragmentation QM calculation while the numerous solvent molecules are described by MM. In the polarization model, polarizability is introduced by allowing the partial charges and fragment-centered dipole moments to be variables, with values coming from the energy-based fragmentation QM calculations. Applications of these two methods to solvated long oligomers and cyclic peptides have demonstrated that the hydrogen-bonding interaction affects the dynamic change in the chain conformations of the backbone.
Theory study on the bandgap of antimonide-based multi-element alloys
NASA Astrophysics Data System (ADS)
An, Ning; Liu, Cheng-Zhi; Fan, Cun-Bo; Dong, Xue; Song, Qing-Li
2017-05-01
In order to meet the design requirements of high-performance antimonide-based optoelectronic devices, a spin-orbit splitting correction method for the bandgaps of Sb-based multi-element alloys is proposed. Based on an analysis of the band structure, a correction factor is introduced into the InxGa1-xAsySb1-y bandgap calculation that takes spin-orbit coupling fully into account. In addition, InxGa1-xAsySb1-y films with different compositions were grown on GaSb substrates by molecular beam epitaxy (MBE), and the corresponding bandgaps were obtained by photoluminescence (PL) to test the accuracy and reliability of the new method. The results show that the calculated values agree fairly well with the experimental results. To further verify the method, the bandgaps of a series of previously reported experimental samples were calculated. The error analysis reveals that the error rate α of the spin-orbit splitting correction method decreases to 2%, almost one order of magnitude smaller than that of the common method. This means the new method can calculate the bandgaps of antimonide multi-element alloys more accurately and has the merit of wide applicability. This work gives a reasonable interpretation of the reported results and is beneficial for tailoring antimonide properties and optoelectronic devices.
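The composition dependence behind such bandgap calculations is conventionally written as a weighted interpolation of the binary bandgaps with bowing terms, which a correction factor can then scale. The sketch below is a deliberately simplified bilinear-plus-single-bowing form with illustrative room-temperature binary gaps; it is not the paper's spin-orbit-corrected scheme.

```python
# In(x)Ga(1-x)As(y)Sb(1-y): bilinear interpolation of the four binary gaps
# with one empirical bowing term b (all parameter values illustrative).
EG = {'GaAs': 1.42, 'GaSb': 0.73, 'InAs': 0.35, 'InSb': 0.17}  # eV, ~300 K

def bandgap(x, y, b=0.6, alpha=1.0):
    """alpha plays the role of a correction factor scaling the bowing."""
    lin = (x * y * EG['InAs'] + x * (1 - y) * EG['InSb']
           + (1 - x) * y * EG['GaAs'] + (1 - x) * (1 - y) * EG['GaSb'])
    bow = alpha * b * (x * (1 - x) + y * (1 - y))
    return lin - bow

eg = bandgap(0.2, 0.18)   # a GaSb-rich composition
```

At the binary corners the bowing vanishes and the interpolation reproduces the input gaps exactly, which is a useful sanity check on any such scheme.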
TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, J; Park, J; Kim, L
2016-06-15
Purpose: The newly published medical physics practice guideline MPPG 5.a. has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles, and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreements are easily identifiable from the difference curve. Subtle discrepancies revealed the limitations of the measurements, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (>98% pass rate with a 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of the measurements were considered when comparing results. Conclusion: The extensive tests recommended by the MPPG encourage understanding of the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience shows how the suggested tests can be performed effectively to validate dose calculation models.
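The 3%/3 mm gamma pass rate quoted above can be made concrete with a one-dimensional sketch: for each evaluated point, search nearby reference points for the minimum combined dose-difference/distance-to-agreement metric. This is a generic textbook implementation with synthetic profiles, not the clinical software used in the study.

```python
import numpy as np

def gamma_1d(x, ref, meas, dta=3.0, dd=0.03):
    """1-D global gamma index: dta in the units of x (mm), dd as a
    fraction of the maximum reference dose."""
    d_norm = dd * ref.max()
    g = np.empty_like(meas)
    for i, (xi, mi) in enumerate(zip(x, meas)):
        dist = (x - xi) / dta            # distance term, scaled by 3 mm
        dose = (ref - mi) / d_norm       # dose term, scaled by 3% of max
        g[i] = np.sqrt(dist ** 2 + dose ** 2).min()
    return g

x = np.arange(0.0, 100.0, 1.0)                   # position, mm
ref = np.exp(-0.5 * ((x - 50.0) / 15.0) ** 2)    # reference profile
meas = np.exp(-0.5 * ((x - 50.5) / 15.0) ** 2)   # measurement, shifted 0.5 mm
gamma = gamma_1d(x, ref, meas)
pass_rate = np.mean(gamma <= 1.0)                # fraction passing 3%/3 mm
```

A point passes when gamma ≤ 1; a small spatial shift like the 0.5 mm here passes everywhere, which is exactly why gamma is more forgiving than a point-by-point dose difference in high-gradient regions.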