Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.
ERIC Educational Resources Information Center
Jackson, David P.
1996-01-01
Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…
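The position-based methods described in this abstract amount to differentiating sampled angular data twice. A minimal numerical sketch (the sampling rate and test signal below are hypothetical, not from the paper):

```python
import math

def angular_acceleration(theta, dt):
    """Estimate angular acceleration from sampled angular positions
    using a central second difference: (theta[i+1] - 2*theta[i] + theta[i-1]) / dt^2."""
    return [(theta[i + 1] - 2.0 * theta[i] + theta[i - 1]) / (dt * dt)
            for i in range(1, len(theta) - 1)]

# Small-angle pendulum check: theta(t) = theta0*cos(w*t) implies alpha(t) = -w^2 * theta(t)
omega, dt = 3.0, 0.001
samples = [0.1 * math.cos(omega * i * dt) for i in range(1000)]
alpha = angular_acceleration(samples, dt)
```

The recovered acceleration should track -omega^2 times the position samples, which is the comparison against theory the paper describes.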
Generalized contact and improved frictional heating in the material point method
NASA Astrophysics Data System (ADS)
Nairn, J. A.; Bardenhagen, S. G.; Smith, G. D.
2017-09-01
The material point method (MPM) has proved to be an effective particle method for computational mechanics modeling of problems involving contact, but all prior applications have been limited to Coulomb friction. This paper generalizes the MPM approach for contact to handle any friction law, with examples given for friction with adhesion or with a velocity-dependent coefficient of friction. Accounting for adhesion requires an extra calculation to evaluate contact area. Implementation of velocity-dependent laws usually needs numerical methods to find contacting forces. The friction process involves work, which can be converted into heat. This paper provides a new method for calculating frictional heating that accounts for interfacial acceleration during the time step. The acceleration term is small for many problems, but temporal convergence of heating effects for problems involving vibrations and high contact forces is improved by the new method. The new method requires few extra calculations and is therefore recommended for all simulations.
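The kind of friction law the generalized contact scheme admits can be sketched generically: a velocity-dependent coefficient feeding the usual heating rate q = mu(v) * N * |v|. The decay law and parameter values below are assumptions for illustration, not the MPM implementation from the paper:

```python
import math

def mu_velocity(v_slip, mu_s=0.6, mu_k=0.4, decay=10.0):
    """Hypothetical velocity-dependent coefficient: decays smoothly from a
    static value mu_s toward a kinetic value mu_k as sliding speed grows."""
    return mu_k + (mu_s - mu_k) * math.exp(-decay * abs(v_slip))

def frictional_heat_rate(normal_force, v_slip):
    """Frictional heating rate q = mu(v) * N * |v| at the contact."""
    return mu_velocity(v_slip) * normal_force * abs(v_slip)
```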
Propellant Mass Fraction Calculation Methodology for Launch Vehicles
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
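The calculation variants this abstract enumerates differ only in which masses enter the numerator and denominator. A schematic sketch of the three families mentioned (the bookkeeping conventions are a generic reading, not any particular entity's definition):

```python
def pmf_basic(loaded_propellant, inert_mass):
    """Fundamental pmf: loaded propellant over total vehicle mass."""
    return loaded_propellant / (loaded_propellant + inert_mass)

def pmf_usable(loaded_propellant, residuals, inert_mass):
    """Counts residuals and other unusable propellant against the vehicle:
    only usable propellant in the numerator, total loaded mass in the denominator."""
    usable = loaded_propellant - residuals
    return usable / (loaded_propellant + inert_mass)

def pmf_excluding_engines(loaded_propellant, inert_mass, engine_mass):
    """Excludes a large mass quantity (installed engine mass) from the inert total."""
    return loaded_propellant / (loaded_propellant + inert_mass - engine_mass)
```

Because the variants move mass between categories rather than changing the vehicle, comparisons between vehicles are only meaningful when the same variant is used for both.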
DOT National Transportation Integrated Search
2008-08-01
ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasos, P.; Perea, C.P.; Jodra, L.G.
1957-01-01
In order to calculate settling tanks, some tests on batch sedimentation were made, and with the data obtained the dimensions of the settling tank were found. The mechanism of sedimentation is first briefly described, and then the factors involved in the calculation of the dimensions and the sedimentation velocity are discussed. The Coe and Clevenger method and the Kynch method were investigated experimentally and compared. The application of the calculations is illustrated. It is shown that the two methods gave markedly different results. (J.S.R.)
2PI effective theory at next-to-leading order using the functional renormalization group
NASA Astrophysics Data System (ADS)
Carrington, M. E.; Friesen, S. A.; Meggison, B. A.; Phillips, C. D.; Pickering, D.; Sohrabi, K.
2018-02-01
We consider a symmetric scalar theory with quartic coupling in four dimensions. We show that the four-loop 2PI calculation can be done using a renormalization group method. The calculation involves one bare coupling constant which is introduced at the level of the Lagrangian, and it is therefore conceptually simpler than a standard 2PI calculation, which requires multiple counterterms. We explain how our method can be used to do the corresponding calculation at the 4PI level, which cannot be done with counterterms using any known method.
Incidence, prevalence, and hybrid approaches to calculating disability-adjusted life years
2012-01-01
When disability-adjusted life years are used to measure the burden of disease on a population in a time interval, they can be calculated in several different ways: from an incidence, pure prevalence, or hybrid perspective. I show that these calculation methods are not equivalent and discuss some of the formal difficulties each method faces. I show that if we don’t discount the value of future health, there is a sense in which the choice of calculation method is a mere question of accounting. Such questions can be important, but they don’t raise deep theoretical concerns. If we do discount, however, choice of calculation method can change the relative burden attributed to different conditions over time. I conclude by recommending that studies involving disability-adjusted life years be explicit in noting what calculation method is being employed and in explaining why that calculation method has been chosen. PMID:22967055
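The discounting point can be made concrete. With continuous discounting at rate r, Y years of healthy life count as (1 - e^(-rY))/r present-value years; at r = 0 the formula reduces to Y, which is why the choice of calculation perspective is then "mere accounting." A minimal sketch of the standard DALY components (the disability weight and durations below are hypothetical):

```python
import math

def discounted_years(duration, rate):
    """Present value of `duration` years of healthy life:
    (1 - e^(-r*Y)) / r, or simply Y when r = 0."""
    if rate == 0.0:
        return duration
    return (1.0 - math.exp(-rate * duration)) / rate

def daly(yll_duration, yld_duration, disability_weight, rate=0.0):
    """DALY = years of life lost + disability-weighted years lived with disability."""
    return (discounted_years(yll_duration, rate)
            + disability_weight * discounted_years(yld_duration, rate))
```

With rate > 0, burdens attributed to long-duration conditions shrink relative to short ones, which is the mechanism by which discounting makes the calculation methods diverge.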
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
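The contrast between the two routes shows up even in a first-order toy model: for a scalar state driven by white noise, dx = -a*x dt + b dW, the steady-state variance has the closed form b^2/(2a), while the frequency-domain route integrates |H(jw)|^2 numerically. A sketch of both (the scalar model is purely illustrative, not the instrument dynamics from the article):

```python
import math

def rms_closed_form(a, b):
    """Exact steady-state rms of dx = -a x dt + b dW (a > 0): sqrt(b^2 / (2a))."""
    return math.sqrt(b * b / (2.0 * a))

def rms_frequency_integral(a, b, n=200000, w_max=1000.0):
    """Prior-style route: variance = (1/pi) * integral_0^inf b^2/(a^2 + w^2) dw,
    approximated here by a midpoint rule on a truncated range."""
    dw = w_max / n
    var = sum(b * b / (a * a + ((i + 0.5) * dw) ** 2) for i in range(n)) * dw / math.pi
    return math.sqrt(var)
```

The two agree to within the truncation and quadrature error of the integral, which illustrates why an exact state-space expression is preferable when it exists.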
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
NASA Technical Reports Server (NTRS)
Martina, Albert P
1953-01-01
The methods of NACA Reports 865 and 1090 have been applied to the calculation of the rolling- and yawing-moment coefficients due to rolling for unswept wings with or without flaps or ailerons. The methods allow the use of nonlinear section lift data together with lifting-line theory. Two calculated examples are presented in simplified computing forms in order to illustrate the procedures involved.
Exclusive Reactions Involving Pions and Nucleons
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Tripathi, R. K.
2002-01-01
The HZETRN code requires inclusive cross sections as input. One of the methods used to calculate these cross sections requires knowledge of all exclusive processes contributing to the inclusive reaction. Conservation laws are used to determine all possible exclusive reactions involving strong interactions between pions and nucleons. Inclusive particle masses are subsequently determined and are needed in cross-section calculations for inclusive pion production.
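The conservation-law bookkeeping can be sketched as a filter over candidate final states, checking total electric charge and baryon number. A toy version over a few light hadrons (the particle table is deliberately abbreviated; the enumeration in the paper covers the full set of exclusive channels):

```python
from itertools import combinations_with_replacement

# (electric charge, baryon number) for a few light hadrons
PARTICLES = {
    "p": (1, 1), "n": (0, 1),
    "pi+": (1, 0), "pi-": (-1, 0), "pi0": (0, 0),
}

def conserves(initial, final):
    """True if total charge and baryon number match between the two states."""
    def totals(state):
        q = sum(PARTICLES[p][0] for p in state)
        b = sum(PARTICLES[p][1] for p in state)
        return q, b
    return totals(initial) == totals(final)

def allowed_two_body(initial):
    """Enumerate exclusive two-body final states obeying both conservation laws."""
    return [f for f in combinations_with_replacement(PARTICLES, 2)
            if conserves(initial, f)]
```

For pi- + p this toy table yields pi- p and pi0 n as the allowed two-body strong channels, the kind of enumeration that feeds the inclusive cross-section calculation.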
AC and DC conductivity due to hopping mechanism in double ion doped ceramics
NASA Astrophysics Data System (ADS)
Rizwana, Mahboob, Syed; Sarah, P.
2018-04-01
The Sr1-2xNaxNdxBi4Ti4O15 (x = 0.1, 0.2 and 0.4) system is prepared by a sol-gel method involving the Pechini process, a modified polymeric precursor method. Phase identification is done using X-ray diffraction. Conduction in the prepared materials involves different mechanisms and is explained through detailed AC and DC conductivity studies. AC conductivity studies carried out on the samples at different frequencies and temperatures give more information about electrical transport. The exponents used in the two-term power relation help us to understand the different hopping mechanisms involved at low as well as high frequencies. Activation energies at different temperatures and frequencies are calculated from the Arrhenius plots. The hopping frequency calculated from the measured data explains the hopping of charge carriers at different temperatures. DC conductivity studies help us to know the role of oxygen vacancies in conduction.
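The Arrhenius step reduces to a slope: for sigma = sigma0 * exp(-Ea / (kB*T)), the slope of ln(sigma) versus 1/T is -Ea/kB. A self-contained sketch with synthetic data (not the measured conductivities from the paper):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy_eV(temps_K, sigma):
    """Least-squares slope of ln(sigma) vs 1/T; Ea = -slope * k_B (in eV)."""
    x = [1.0 / t for t in temps_K]
    y = [math.log(s) for s in sigma]
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    return -slope * K_B_EV
```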
Calculation of unsteady transonic flows with mild separation by viscous-inviscid interaction
NASA Technical Reports Server (NTRS)
Howlett, James T.
1992-01-01
This paper presents a method for calculating viscous effects in two- and three-dimensional unsteady transonic flow fields. An integral boundary-layer method for turbulent viscous flow is coupled with the transonic small-disturbance potential equation in a quasi-steady manner. The viscous effects are modeled with Green's lag-entrainment equations for attached flow and an inverse boundary-layer method for flows that involve mild separation. The boundary-layer method is used stripwise to approximate three-dimensional effects. Applications are given for two-dimensional airfoils, aileron buzz, and a wing planform. Comparisons with inviscid calculations, other viscous calculation methods, and experimental data are presented. The results demonstrate that the present technique can economically and accurately calculate unsteady transonic flow fields that have viscous-inviscid interactions with mild flow separation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
San Fabián, J.; Omar, S.; García de la Vega, J. M., E-mail: garcia.delavega@uam.es
The effect of a fraction of Hartree-Fock exchange on the calculated spin-spin coupling constants involving fluorine through a hydrogen bond is analyzed in detail. Coupling constants calculated using wavefunction methods are revisited in order to get high-level calculations using the same basis set. Accurate MCSCF results are obtained using an additive approach. These constants and their contributions are used as a reference for density functional calculations. Within density functional theory, the Hartree-Fock exchange functional is split into short- and long-range using a modified version of the Coulomb-attenuating method with the SLYP functional as well as with the original B3LYP. Results support the difficulties for calculating hydrogen bond coupling constants using density functional methods when fluorine nuclei are involved. Coupling constants are very sensitive to the Hartree-Fock exchange and it seems that, contrary to other properties, it is important to include this exchange for short-range interactions. Best functionals are tested in two different groups of complexes: those related with anionic clusters of type [F(HF)_n]^- and those formed by difluoroacetylene and either one or two hydrogen fluoride molecules.
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure also includes the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.
NASA Technical Reports Server (NTRS)
Chaney, William S.
1961-01-01
A theoretical study has been made of molybdenum dioxide and molybdenum trioxide in order to extend the knowledge of factors involved in the oxidation of molybdenum. New methods were developed for calculating the lattice energies based on electrostatic valence theory, and the coulombic, polarization, van der Waals, and repulsion energies were calculated. The crystal structure was examined and structure details were correlated with lattice energy.
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antennas current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
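The per-cell nonlinear solve can be illustrated in isolation: at each time step the field update reduces, schematically, to a Thevenin-like scalar equation V + R*I_d(V) - V_oc = 0 for the load voltage, solved by Newton iteration. A sketch with a Shockley diode law (the circuit values are hypothetical, and in the real FDTD scheme this solve is coupled to the field-update equations of the cell):

```python
import math

def diode_load_voltage(v_oc, r, i_s=1e-12, v_t=0.02585, tol=1e-12):
    """Newton iteration on f(V) = V + R*I_s*(exp(V/v_t) - 1) - V_oc = 0,
    the kind of scalar nonlinear equation solved in each loaded cell per step."""
    v = 0.0
    for _ in range(200):
        e = math.exp(v / v_t)
        f = v + r * i_s * (e - 1.0) - v_oc
        df = 1.0 + r * i_s * e / v_t  # analytic derivative of f
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v
```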
A New Iterative Method to Calculate [pi]
ERIC Educational Resources Information Center
Dion, Peter; Ho, Anthony
2012-01-01
For at least 2000 years people have been trying to calculate the value of [pi], the ratio of the circumference to the diameter of a circle. People know that [pi] is an irrational number; its decimal representation goes on forever. Early methods were geometric, involving the use of inscribed and circumscribed polygons of a circle. However, real…
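The geometric approach mentioned above is easy to reproduce: starting from an inscribed hexagon and repeatedly doubling the number of sides, the half-perimeter converges to pi. A sketch of the classical Archimedes iteration (not the new method of the article), using the numerically stable form of the side-doubling recurrence:

```python
import math

def pi_archimedes(doublings=20):
    """Half-perimeter of a regular n-gon inscribed in a unit circle,
    doubling n starting from a hexagon.  The update
    s -> s / sqrt(2 + sqrt(4 - s^2)) is algebraically equal to
    sqrt(2 - sqrt(4 - s^2)) but avoids catastrophic cancellation."""
    n, s = 6, 1.0  # hexagon: side length equals the radius
    for _ in range(doublings):
        s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
        n *= 2
    return n * s / 2.0
```

With zero doublings this returns the hexagon estimate 3; twenty doublings already agree with pi to better than ten decimal places.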
A Multigroup Method for the Calculation of Neutron Fluence with a Source Term
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Clowdsley, M. S.
1998-01-01
Current research under this grant involves the development of a multigroup method for the calculation of low-energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high-altitude space planes, and satellite design. This is because certain radiation environments can cause damage to biological and electronic systems, involving both short-term and long-term effects. By having a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can be designed to protect both humans and electronic systems that are exposed to a known radiation environment. This is the goal of the current research efforts involving the multigroup method and the Green's function approach.
TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, S; Nazareth, D; Bellor, M
Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose in 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
Theoretical investigation of the gas-phase reactions of CrO(+) with ethylene.
Scupp, Thomas M; Dudley, Timothy J
2010-01-21
The potential energy surfaces associated with the reactions of chromium oxide cation (CrO(+)) with ethylene have been characterized using density functional, coupled-cluster, and multireference methods. Our calculations show that the most probable reaction involves the formation of acetaldehyde and Cr(+) via a hydride transfer involving the metal center. Our calculations support previous experimental hypotheses that a four-membered ring intermediate plays an important role in the reactivity of the system. We have also characterized a number of viable reaction pathways that lead to other products, including ethylene oxide. Due to the experimental observation that CrO(+) can activate carbon-carbon bonds, a reaction pathway involving C-C bond cleavage has also been characterized. Since many of the reactions involve a change in the spin state in going from reactants to products, locations of these spin surface crossings are presented and discussed. The applicability of methods based on Hartree-Fock orbitals is also discussed.
A graph-based semantic similarity measure for the gene ontology.
Alvarez, Marco A; Yan, Changhui
2011-12-01
Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like Gene Ontology Annotation (GOA) that annotate gene products using the GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure for semantic similarity independent of external databases of functional-annotation observations.
Zuend, Stephan J; Jacobsen, Eric N
2007-12-26
The mechanism of the enantioselective cyanosilylation of ketones catalyzed by tertiary amino-thiourea derivatives was investigated using a combination of experimental and theoretical methods. The kinetic analysis is consistent with a cooperative mechanism in which both the thiourea and the tertiary amine of the catalyst are involved productively in the rate-limiting cyanide addition step. Density functional theory calculations were used to distinguish between mechanisms involving thiourea activation of ketone or of cyanide in the enantioselectivity-determining step. The strong correlation obtained between experimental and calculated ee's for a range of substrates and catalysts provides support for the most favorable calculated transition structures involving amine-bound HCN adding to thiourea-bound ketone. The calculations suggest that enantioselectivity arises from direct interactions between the ketone substrate and the amino-acid derived portion of the catalyst. On the basis of this insight, more enantioselective catalysts with broader substrate scope were prepared and evaluated experimentally.
Approximate methods in gamma-ray skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faw, R.E.; Roseberry, M.L.; Shultis, J.K.
1985-11-01
Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.
CDC-reported assisted reproductive technology live-birth rates may mislead the public.
Kushnir, Vitaly A; Choi, Jennifer; Darmon, Sarah K; Albertini, David F; Barad, David H; Gleicher, Norbert
2017-08-01
The Centers for Disease Control and Prevention (CDC) publicly reports assisted reproductive technology live-birth rates (LBR) for each US fertility clinic under legal mandate. The 2014 CDC report excluded 35,406 of 184,527 (19.2%) autologous assisted reproductive technology cycles that involved embryo or oocyte banking from LBR calculations. This study calculated 2014 total clinic LBR for all patients utilizing autologous oocytes two ways: including all initiated assisted reproductive technology cycles or excluding banking cycles, as done by the CDC. The main limitation of this analysis is that the CDC report did not differentiate between cycles involving long-term banking of embryos or oocytes for fertility preservation and cycles involving short-term embryo banking. Twenty-seven of 458 (6%) clinics reported over 40% of autologous cycles involved banking, collectively performing 12% of all US assisted reproductive technology cycles. LBR in these outlier clinics, calculated by the CDC method, was higher than in the other 94% of clinics (33.1% versus 31.1%). However, recalculated LBR including banking cycles in the outlier clinics was lower than in the other 94% of clinics (15.5% versus 26.6%). LBR calculated by the two methods increasingly diverged based on the proportion of banking cycles performed by each clinic, reaching 4.5-fold, thereby potentially misleading the public. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
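The two calculation methods differ only in the denominator, which is what drives the divergence the study reports. A sketch with hypothetical clinic numbers (the live-birth count is invented for illustration; only the 40%-banking threshold for outlier clinics comes from the abstract):

```python
def lbr_excluding_banking(live_births, total_cycles, banking_cycles):
    """CDC-style rate: banking cycles removed from the denominator."""
    return live_births / (total_cycles - banking_cycles)

def lbr_all_cycles(live_births, total_cycles):
    """Rate over all initiated autologous cycles, banking included."""
    return live_births / total_cycles

# Hypothetical outlier clinic: 1000 initiated cycles, 400 of them banking
births, total, banking = 155, 1000, 400
```

For this hypothetical clinic the CDC-style rate is 155/600 (about 25.8%) versus 15.5% over all cycles, illustrating how the gap widens with the banking fraction.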
Partition of unity finite element method for quantum mechanical materials calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pask, J. E.; Sukumar, N.
The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
Multiple testing and power calculations in genetic association studies.
So, Hon-Cheong; Sham, Pak C
2011-01-01
Modern genetic association studies typically involve multiple single-nucleotide polymorphisms (SNPs) and/or multiple genes. With the development of high-throughput genotyping technologies and the reduction in genotyping cost, investigators can now assay up to a million SNPs for direct or indirect association with disease phenotypes. In addition, some studies involve multiple disease or related phenotypes and use multiple methods of statistical analysis. The combination of multiple genetic loci, multiple phenotypes, and multiple methods of evaluating associations between genotype and phenotype means that modern genetic studies often involve the testing of an enormous number of hypotheses. When multiple hypothesis tests are performed in a study, there is a risk of inflation of the type I error rate (i.e., the chance of falsely claiming an association when there is none). Several methods for multiple-testing correction are in popular use, and they all have strengths and weaknesses. Because no single method is universally adopted or always appropriate, it is important to understand the principles, strengths, and weaknesses of the methods so that they can be applied appropriately in practice. In this article, we review the three principal methods for multiple-testing correction and provide guidance for calculating statistical power.
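The interplay between correction and power can be sketched with the simplest correction, Bonferroni, under a normal approximation. This is a generic illustration (the per-test effect size z is hypothetical), not a substitute for the review's guidance:

```python
from statistics import NormalDist

def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / n_tests

def power_two_sided(effect_z, alpha, n_tests=1):
    """Power of a two-sided z-test at the Bonferroni-corrected level:
    P(|Z + effect_z| > z_crit) under the alternative."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - bonferroni_alpha(alpha, n_tests) / 2.0)
    return (1.0 - nd.cdf(z_crit - effect_z)) + nd.cdf(-z_crit - effect_z)
```

Raising the number of tests from one to a million tightens the threshold and visibly erodes power for a fixed effect, which is the trade-off the article discusses.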
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations for the Lagrange multiplier and repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
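As a loose illustration of this family of iterative schemes (a Picard-type successive approximation, not He's variational iteration algorithm itself), one can solve u'(t) = u(t), u(0) = 1 by repeatedly applying the integral operator u_{n+1}(t) = 1 + ∫₀ᵗ u_n(s) ds; the iterates converge to exp(t).

```python
import numpy as np

# Picard-type successive approximation for u'(t) = u(t), u(0) = 1,
# whose exact solution is exp(t). Each sweep evaluates
#   u_{n+1}(t) = 1 + integral_0^t u_n(s) ds
# with a cumulative trapezoidal rule on a uniform grid.
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
u = np.ones_like(t)                  # initial guess u_0 = 1
for _ in range(25):
    integrand = 0.5 * (u[1:] + u[:-1]) * dt
    u = 1.0 + np.concatenate(([0.0], np.cumsum(integrand)))

print(abs(u[-1] - np.e))             # small residual at t = 1
```

Each iteration adds roughly one more term of the exponential series, which is the sense in which such schemes "repeat calculations" unless reformulated, as the paper argues.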
Calculations of reliability predictions for the Apollo spacecraft
NASA Technical Reports Server (NTRS)
Amstadter, B. L.
1966-01-01
A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
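A toy version of the bounding idea for a series system (illustrative only, not the Apollo-specific equations): keeping only zero- and single-failure terms gives a cheap lower bound on mission success, the weakest subsystem alone gives an upper bound, and the exact product lies between them.

```python
# For a series system of independent subsystems with reliabilities R_i, the
# exact mission-success probability is the product of the R_i. A lower bound
# keeps only zero- and single-failure terms (Weierstrass inequality), and an
# upper bound is the weakest subsystem alone. Reliability values are invented.
reliabilities = [0.999, 0.995, 0.990, 0.998]

exact = 1.0
for r in reliabilities:
    exact *= r
lower = 1.0 - sum(1.0 - r for r in reliabilities)   # 1 - sum of unreliabilities
upper = min(reliabilities)

assert lower <= exact <= upper
print(lower, exact, upper)
```

For highly reliable subsystems the two bounds are tight, which is why truncating at one failure per subsystem, as the report does, yields a useful approximation.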
NASA Astrophysics Data System (ADS)
Yamazaki, Katsumi
In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variations of the magnetic fields in the stator and rotor cores. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis, taking into account the harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor to calculate its characteristics based on the equivalent circuit obtained by the proposed method. The calculated results are compared with experimental results to verify the accuracy.
Photonic band gap structure simulator
Chen, Chiping; Shapiro, Michael A.; Smirnova, Evgenya I.; Temkin, Richard J.; Sirigiri, Jagadishwar R.
2006-10-03
A system and method for designing photonic band gap structures. The system and method provide a user with the capability to produce a model of a two-dimensional array of conductors corresponding to a unit cell. The model involves a linear equation. Boundary conditions representative of conditions at the boundary of the unit cell are applied to a solution of the Helmholtz equation defined for the unit cell. The linear equation can be approximated by a Hermitian matrix. An eigenvalue of the Helmholtz equation is calculated. One computation approach involves calculating finite differences. The model can include a symmetry element, such as a center of inversion, a rotation axis, and a mirror plane. A graphical user interface is provided for the user's convenience. A display is provided to display to a user the calculated eigenvalue, corresponding to a photonic energy level in the Brillouin zone of the unit cell.
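The finite-difference eigenvalue computation can be sketched for the simplest case, a square unit cell with Dirichlet walls (a simplification; the patent's cell geometries and boundary conditions are more general). The exact smallest eigenvalue of the negative Laplacian on the unit square is 2π².

```python
import numpy as np

# Smallest eigenvalue of -Laplacian on the unit square with Dirichlet walls,
# via the standard 5-point finite-difference stencil; exact value is 2*pi^2.
n = 30                                   # interior grid points per side
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I)        # 2D Laplacian, (n*n) x (n*n), Hermitian
lam = np.linalg.eigvalsh(L)[0]           # lowest "photonic energy level"
print(lam, 2.0 * np.pi**2)
```

Refining the grid (larger n) drives the discrete eigenvalue toward the continuum value at second order in h, the usual behavior of the finite-difference approach the patent mentions.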
Computer program determines chemical composition of physical system at equilibrium
NASA Technical Reports Server (NTRS)
Kwong, S. S.
1966-01-01
FORTRAN 4 digital computer program calculates equilibrium composition of complex, multiphase chemical systems. This is a free energy minimization method with solution of the problem reduced to mathematical operations, without concern for the chemistry involved. Also certain thermodynamic properties are determined as byproducts of the main calculations.
Quantum-chemical Calculations in the Study of Antitumour Compounds
NASA Astrophysics Data System (ADS)
Luzhkov, V. B.; Bogdanov, G. N.
1986-01-01
The results of quantum-chemical calculations on antitumour preparations concerning the mechanism of their action at the electronic and molecular levels and structure-activity correlations are discussed in this review. Preparations whose action involves alkylating and free-radical mechanisms, complex-forming agents, and antimetabolites are considered. Modern quantum-chemical methods for calculations on biologically active substances are described. The bibliography includes 106 references.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
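The gamma-index comparison used in such evaluations can be illustrated with a minimal 1D implementation; the Gaussian profiles, 2%/1 mm criteria, and grid below are synthetic, not the study's data.

```python
import numpy as np

def gamma_index(x, ref, x_eval, eval_dose, dose_tol=0.02, dta=1.0):
    """Naive 1D gamma index: for each reference point, the minimum over the
    evaluated distribution of sqrt((dD/dose_tol)^2 + (dx/dta)^2).
    dose_tol is a fraction of the reference maximum; dta is in mm."""
    d_ref = np.asarray(ref, dtype=float)
    d_eval = np.asarray(eval_dose, dtype=float)
    dmax = d_ref.max()
    gam = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dd = (d_eval - di) / (dose_tol * dmax)   # dose difference term
        dx = (x_eval - xi) / dta                 # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

x = np.linspace(0, 50, 251)                      # positions in mm
ref = np.exp(-0.5 * ((x - 25.0) / 8.0) ** 2)     # synthetic dose profile
shifted = np.exp(-0.5 * ((x - 25.4) / 8.0) ** 2) # 0.4 mm misaligned copy
g = gamma_index(x, ref, x, shifted)
print((g <= 1.0).mean())                         # pass rate; 1.0 means all pass
```

A point passes when gamma ≤ 1; quoting the fraction of passing voxels above a dose threshold is exactly the kind of figure (e.g., 91.3% to 96.8%) reported in the abstract.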
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution drastically reduces the calculation time and the memory usage at no cost, compared with the numerical approach of Fourier transforming the kernel function with the fast Fourier transform. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
Numerical investigation of finite-volume effects for the HVP
NASA Astrophysics Data System (ADS)
Boyle, Peter; Gülpers, Vera; Harrison, James; Jüttner, Andreas; Portelli, Antonin; Sachrajda, Christopher
2018-03-01
It is important to correct for finite-volume (FV) effects in the presence of QED, since these effects are typically large due to the long range of the electromagnetic interaction. We recently made the first lattice calculation of electromagnetic corrections to the hadronic vacuum polarisation (HVP). For the HVP, an analytical derivation of FV corrections involves a two-loop calculation which has not yet been carried out. We instead calculate the universal FV corrections numerically, using lattice scalar QED as an effective theory. We show that this method gives agreement with known analytical results for scalar mass FV effects, before applying it to calculate FV corrections for the HVP. This method for numerical calculation of FV effects is also widely applicable to quantities beyond the HVP.
NASA Technical Reports Server (NTRS)
Diederich, Franklin W; Zlotnick, Martin
1955-01-01
Spanwise lift distributions have been calculated for nineteen unswept wings with various aspect ratios and taper ratios and with a variety of angle-of-attack or twist distributions, including flap and aileron deflections, by means of the Weissinger method with eight control points on the semispan. Also calculated were aerodynamic influence coefficients which pertain to a certain definite set of stations along the span, and several methods are presented for calculating aerodynamic influence functions and coefficients for stations other than those stipulated. The information presented in this report can be used in the analysis of untwisted wings or wings with known twist distributions, as well as in aeroelastic calculations involving initially unknown twist distributions.
Report to DHS on Summer Internship 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckwith, R H
2006-07-26
This summer I worked at Lawrence Livermore National Laboratory in a bioforensics collection and extraction research group under David Camp. The group is involved with researching efficiencies of various methods for collecting bioforensic evidence from crime scenes. The different methods under examination are a wipe, swab, HVAC filter and a vacuum. The vacuum is something that has particularly gone uncharacterized. My time was spent mostly on modeling and calculations work, but at the end of the summer I completed my internship with a few experiments to supplement my calculations. I had two major projects this summer. My first major project this summer involved fluid mechanics modeling of collection and extraction situations. This work examines different fluid dynamic models for the case of a micron spore attached to a fiber. The second project I was involved with was a statistical analysis of the different sampling techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig
2016-05-14
We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
Cascade flutter analysis with transient response aerodynamics
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Mahajan, Aparajit J.; Keith, Theo G., Jr.; Stefko, George L.
1991-01-01
Two methods for calculating linear frequency domain aerodynamic coefficients from a time marching full potential cascade solver are developed and verified. In the first method, the Influence Coefficient method, solutions to elemental problems are superposed to obtain the solutions for a cascade in which all blades are vibrating with a constant interblade phase angle. The elemental problem consists of a single blade in the cascade oscillating while the other blades remain stationary. In the second method, the Pulse Response method, the response to the transient motion of a blade is used to calculate influence coefficients. This is done by calculating the Fourier transforms of the blade motion and the response. Both methods are validated by comparison with the Harmonic Oscillation method and give accurate results. The aerodynamic coefficients obtained from these methods are used for frequency domain flutter calculations involving a typical section blade structural model. An eigenvalue problem is solved for each interblade phase angle mode, and the eigenvalues are used to determine aeroelastic stability. Flutter calculations are performed for two examples over a range of subsonic Mach numbers.
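The Pulse Response idea, recovering frequency-domain coefficients at every frequency from one transient run by dividing Fourier transforms of output and input, can be sketched with a toy discrete linear system; the impulse response and pulse shape below are invented for illustration.

```python
import numpy as np

# Pulse-response method in miniature: excite a linear system with one brief
# transient, then divide FFTs of the response and the input motion to recover
# the transfer function (the frequency-domain coefficients) in a single run.
h = np.array([0.5, 0.3, 0.15, 0.05])       # toy impulse response ("aerodynamics")
n = 256
x = np.zeros(n)
x[:4] = [1.0, 0.6, 0.2, 0.05]              # short broadband pulse (no spectral nulls)
y = np.convolve(x, h)[:n]                  # "time-marched" transient response
H_est = np.fft.rfft(y) / np.fft.rfft(x)    # estimated transfer function
H_true = np.fft.rfft(h, n)                 # exact transfer function
print(np.max(np.abs(H_est - H_true)))      # agreement at all frequencies
```

One transient computation yields the coefficients at all frequencies simultaneously, whereas the Harmonic Oscillation approach needs a separate run per frequency, which is the trade-off the paper exploits.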
A kinetic and thermochemical database for organic sulfur and oxygen compounds.
Class, Caleb A; Aguilera-Iparraguirre, Jorge; Green, William H
2015-05-28
Potential energy surfaces and reaction kinetics were calculated for 40 reactions involving sulfur and oxygen. This includes 11 H2O addition, 8 H2S addition, 11 hydrogen abstraction, 7 beta scission, and 3 elementary tautomerization reactions, which are potentially relevant in the combustion and desulfurization of sulfur compounds found in various fuel sources. Geometry optimizations and frequencies were calculated for reactants and transition states using B3LYP/CBSB7, and potential energies were calculated using CBS-QB3 and CCSD(T)-F12a/VTZ-F12. Rate coefficients were calculated using conventional transition state theory, with corrections for internal rotations and tunneling. Additionally, thermochemical parameters were calculated for each of the compounds involved in these reactions. With few exceptions, rate parameters calculated using the two potential energy methods agreed reasonably well, with calculated activation energies differing by less than 5 kJ mol⁻¹. The computed rate coefficients and thermochemical parameters are expected to be useful for kinetic modeling.
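A minimal sketch of conventional transition state theory with a Wigner tunneling correction, the kind of rate expression used here; the barrier height and imaginary frequency are illustrative values, not taken from the paper's database.

```python
import math

# Conventional TST with a Wigner tunneling correction:
#   k(T) = kappa(T) * (kB*T/h) * exp(-Ea / (R*T))
kB = 1.380649e-23          # Boltzmann constant, J/K
h_planck = 6.62607015e-34  # Planck constant, J*s
R = 8.314462               # gas constant, J/(mol*K)

def wigner_kappa(nu_imag_cm, T):
    """Wigner correction from the TS imaginary frequency (cm^-1)."""
    c = 2.99792458e10                      # speed of light, cm/s
    u = h_planck * c * nu_imag_cm / (kB * T)
    return 1.0 + u * u / 24.0

def tst_rate(Ea_kJ, T, nu_imag_cm=1000.0):
    return (wigner_kappa(nu_imag_cm, T)
            * (kB * T / h_planck)
            * math.exp(-Ea_kJ * 1e3 / (R * T)))

k_300, k_600 = tst_rate(80.0, 300.0), tst_rate(80.0, 600.0)
print(k_300, k_600)   # strong Arrhenius-type increase with temperature
```

Partition-function ratios and hindered-rotor corrections, which the paper also applies, would multiply the prefactor but leave this overall structure unchanged.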
Development and Application of a Parallel LCAO Cluster Method
NASA Astrophysics Data System (ADS)
Patton, David C.
1997-08-01
CPU intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C_14H_10) and tetracene (C_18H_12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations is analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.
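The dynamic load-balancing idea can be approximated by a greedy least-loaded assignment of work chunks; this is a simple static proxy for illustration, not the paper's algorithm, and the per-block cost estimates are invented.

```python
import heapq

def balance(costs, n_workers):
    """Greedy longest-processing-time scheduling: sort tasks by decreasing
    cost and always hand the next task to the least-loaded worker."""
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(n_workers)]
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, w = heapq.heappop(loads)
        assignment[w].append(i)
        heapq.heappush(loads, (load + costs[i], w))
    return assignment

costs = [5.0, 3.0, 3.0, 2.0, 2.0, 2.0, 1.0]   # per-mesh-block work estimates
parts = balance(costs, 3)
totals = sorted(sum(costs[i] for i in p) for p in parts)
print(totals)   # per-worker loads; ideal would be 6.0 each
```

A truly dynamic scheme would update the cost estimates from measured times on each SCF iteration and reassign work, but the least-loaded-first principle is the same.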
Method and device for predicting wavelength dependent radiation influences in thermal systems
Kee, Robert J.; Ting, Aili
1996-01-01
A method and apparatus for predicting the spectral (wavelength-dependent) radiation transport in thermal systems including interaction by the radiation with partially transmitting medium. The resulting model of the thermal system is used to design and control the thermal system. The predictions are well suited to be implemented in design and control of rapid thermal processing (RTP) reactors. The method involves generating a spectral thermal radiation transport model of an RTP reactor. The method also involves specifying a desired wafer time dependent temperature profile. The method further involves calculating an inverse of the generated model using the desired wafer time dependent temperature to determine heating element parameters required to produce the desired profile. The method also involves controlling the heating elements of the RTP reactor in accordance with the heating element parameters to heat the wafer in accordance with the desired profile.
Direct Simulation of Reentry Flows with Ionization
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1989-01-01
The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.
Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R
2015-04-14
We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reactions energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.
Recycling of car tires by means of Waterjet technologies
NASA Astrophysics Data System (ADS)
Holka, Henryk; Jarzyna, Tomasz
2017-03-01
An increasing number of used car tires poses a threat to the environment; therefore they need to be recycled. In this work, a decomposition method that involves applying a stream of water at very high pressure (up to 600 MPa) is presented. The method is based on the authors' own patent from 2010, and the results are drawn from two years of tests and calculations. This study includes many diagrams, images and calculations that have been used to develop the discussed method, which is competitive with currently used ones.
An Eulerian/Lagrangian method for computing blade/vortex impingement
NASA Technical Reports Server (NTRS)
Steinhoff, John; Senge, Heinrich; Yonghu, Wenren
1991-01-01
A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.
Rusakov, Yury Yu; Krivdin, Leonid B; Østerstrøm, Freja F; Sauer, Stephan P A; Potapov, Vladimir A; Amosova, Svetlana V
2013-08-21
This paper documents the very first example of a high-level correlated calculation of spin-spin coupling constants involving tellurium, taking into account relativistic effects, vibrational corrections and solvent effects for medium-sized organotellurium molecules. The ¹²⁵Te-¹H spin-spin coupling constants of tellurophene and divinyl telluride were calculated at the SOPPA and DFT levels, in good agreement with experimental data. A new full-electron basis set, av3z-J, for tellurium, derived from the "relativistic" Dyall's basis set, dyall.av3z, and specifically optimized for the correlated calculation of spin-spin coupling constants involving tellurium, was developed. The SOPPA method shows a much better performance compared to DFT if relativistic effects calculated within the ZORA scheme are taken into account. Vibrational and solvent corrections are next to negligible, while conformational averaging is of prime importance in the calculation of ¹²⁵Te-¹H spin-spin couplings. Based on the calculations performed at the SOPPA(CCSD) level, a marked stereospecificity of geminal and vicinal ¹²⁵Te-¹H spin-spin coupling constants, originating in the orientational lone pair effect of tellurium, has been established, which opens a new guideline in organotellurium stereochemistry.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
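A minimal HORRAT calculation, assuming the Horwitz equation PRSD(%) = 2^(1 − 0.5·log10 C) for the predicted among-laboratory relative standard deviation, with C the analyte mass fraction; the 1 ppm concentration and 18% observed RSD are invented for illustration.

```python
import math

def horwitz_prsd(mass_fraction):
    """Predicted among-laboratory RSD (%) from the Horwitz equation."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HORRAT = observed RSD / Horwitz-predicted RSD."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# e.g. an analyte at 1 ppm (C = 1e-6) with an observed among-lab RSD of 18%
print(horwitz_prsd(1e-6))     # predicted RSD: 16%
print(horrat(18.0, 1e-6))     # HORRAT = 1.125
```

HORRAT values near 1 indicate typical method performance; values much above about 2 are conventionally taken as a sign of problems with the method or the study.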
A Numerical Method of Calculating Propeller Noise Including Acoustic Nonlinear Effects
NASA Technical Reports Server (NTRS)
Korkan, K. D.
1985-01-01
Using the transonic flow field(s) generated by the NASPROP-E computer code for an eight blade SR3-series propeller, a theoretical method is investigated to calculate the total noise values and frequency content in the acoustic near and far field without using the Ffowcs Williams-Hawkings equation. The flow field is numerically generated using an implicit three dimensional Euler equation solver in weak conservation law form. Numerical damping is required by the differencing method for stability in three dimensions, and the influence of the damping on the calculated acoustic values is investigated. The acoustic near field is solved by integrating with respect to time the pressure oscillations induced at a stationary observer location. The acoustic far field is calculated from the near field primitive variables generated by the NASPROP-E computer code, using a method involving a perturbation velocity potential, as suggested by Hawkings, in the calculation of the acoustic pressure time-history at a specified far field observer location. The methodologies described are valid for calculating total noise levels and are applicable to any propeller geometry for which a flow field solution is available.
An Efficient numerical method to calculate the conductivity tensor for disordered topological matter
NASA Astrophysics Data System (ADS)
Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.
2015-03-01
We propose a new efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism where both diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of the Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort is in the calculation of the expansion coefficients. It also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
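The kernel polynomial machinery (Chebyshev moments plus a damping kernel) can be sketched for the simpler single-expansion case of a density of states; the Bastin-formula conductivity uses two coupled expansions of the same kind. The tight-binding chain and parameters below are illustrative.

```python
import numpy as np

# Kernel polynomial method in miniature: Chebyshev moments of a 1D
# tight-binding Hamiltonian, damped with the Jackson kernel, reconstruct its
# density of states. All effort is in computing the moments mu_n.
N, M = 64, 128                            # sites, number of Chebyshev moments
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
Ht = H / 2.5                              # rescale spectrum into (-1, 1)

T_prev, T_curr = np.eye(N), Ht.copy()
mu = np.zeros(M)
mu[0], mu[1] = 1.0, np.trace(T_curr) / N
for n in range(2, M):                     # recursion T_{n+1} = 2*Ht*T_n - T_{n-1}
    T_prev, T_curr = T_curr, 2.0 * Ht @ T_curr - T_prev
    mu[n] = np.trace(T_curr) / N

n_arr = np.arange(M)                      # Jackson damping kernel g_n
jackson = ((M - n_arr + 1) * np.cos(np.pi * n_arr / (M + 1))
           + np.sin(np.pi * n_arr / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# DOS in the variable theta, where E = cos(theta) and T_n(E) = cos(n*theta)
theta = np.linspace(1e-3, np.pi - 1e-3, 2000)
series = mu[0] * jackson[0] + 2.0 * np.sum(
    (jackson[1:] * mu[1:])[:, None] * np.cos(np.outer(n_arr[1:], theta)), axis=0)
rho = series / np.pi
dtheta = theta[1] - theta[0]
norm = 0.5 * dtheta * np.sum(rho[1:] + rho[:-1])   # trapezoid; integrates to ~1
print(norm)
```

Increasing M sharpens the reconstruction while the Jackson kernel suppresses Gibbs oscillations, which is the accuracy-versus-moments trade-off the abstract refers to.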
Fast, accurate semiempirical molecular orbital calculations for macromolecules
NASA Astrophysics Data System (ADS)
Dixon, Steven L.; Merz, Kenneth M., Jr.
1997-07-01
A detailed review of the semiempirical divide-and-conquer (D&C) method is given, including a new approach to subsetting, which involves dual buffer regions. Comparisons are drawn between this method and other semiempirical macromolecular schemes. D&C calculations are carried out using a basic 32 Mbyte memory workstation on a variety of peptide systems, including proteins containing up to 1960 atoms. Aspects of storage and SCF convergence are addressed, and parallelization of the D&C algorithm is discussed.
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
Calculation of forces on magnetized bodies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, John
1987-01-01
The methods described may be used with a high degree of confidence for calculations of magnetic traction forces normal to a surface. In this circumstance all models agree, and test cases have resulted in theoretically correct results. It is shown that the tangential forces are in practice negligible. The surface pole method is preferable to the virtual work method because of the necessity for more than one NASTRAN run in the latter case, and because distributed forces are obtained. The derivation of local forces from the Maxwell stress method involves an undesirable degree of manipulation of the problem and produces a result in contradiction of the surface pole method.
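The normal traction such calculations produce can be estimated directly from the Maxwell stress tensor as P = B²/(2μ₀) at a high-permeability surface; the field values and pole-face area below are illustrative numbers, not the report's NASTRAN models.

```python
import math

MU0 = 4.0e-7 * math.pi               # vacuum permeability, H/m

def traction_pressure(B_tesla):
    """Normal magnetic traction P = B^2 / (2*mu0) from the Maxwell stress tensor."""
    return B_tesla ** 2 / (2.0 * MU0)

area = 1.0e-4                        # a 1 cm^2 pole face
for B in (0.5, 1.0, 1.5):
    print(B, traction_pressure(B) * area)   # attractive force in newtons
```

At 1 T this gives roughly 4 × 10⁵ Pa of normal stress, consistent with the report's observation that the normal (traction) component dominates and tangential forces are negligible in practice.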
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is addressed. Leading eigenvalues of large matrices that arise from discretization are calculated, and an efficient multigrid method for solving these problems is presented. The resulting grid functions are used as initial approximations for appropriate eigenvalue problems. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a nonstandard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved on the coarse grid. This leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem are presented which demonstrate the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Ahmad, Zeeshan; Viswanathan, Venkatasubramanian
2016-08-01
Computationally-guided material discovery is being increasingly employed using a descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built in to several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby, an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify its uncertainty. The generation of ensemble of energies is only a post-processing step involving a perturbation of parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures using the developed method. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. 
Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid state Li-ion and Na-ion conductors.
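The ensemble procedure, fitting an energy-strain curve per ensemble member and reading the uncertainty off the spread of fitted coefficients, can be sketched with synthetic energies standing in for the non-self-consistent DFT evaluations; all numbers below are invented for illustration.

```python
import numpy as np

# Ensemble idea in miniature: each ensemble member supplies its own
# energy-vs-strain curve; a quadratic fit per member yields a curvature
# (proportional to an elastic modulus), and the spread of curvatures across
# the ensemble quantifies the uncertainty of that property.
rng = np.random.default_rng(42)
strains = np.linspace(-0.02, 0.02, 9)
true_curvature = 500.0                        # arbitrary energy units
n_ensemble = 200

curvatures = np.empty(n_ensemble)
for k in range(n_ensemble):
    member = true_curvature * (1.0 + 0.05 * rng.standard_normal())
    energies = 0.5 * member * strains**2 + 1e-5 * rng.standard_normal(strains.size)
    coeffs = np.polyfit(strains, energies, 2)  # quadratic fit per member
    curvatures[k] = 2.0 * coeffs[0]

print(curvatures.mean(), curvatures.std())    # central value and uncertainty
```

Because each member differs only by a perturbation of the functional parameters, the extra cost over a single fit is just the post-processing, which mirrors the efficiency argument made in the abstract.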
Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome
2013-06-01
A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples demonstrate that the method efficiently handles 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a gold-silver nanoparticle heterodimer calculated with MLFMM is compared with unmodified BEM calculations.
NASA Astrophysics Data System (ADS)
Dias, L. G.; Shimizu, K.; Farah, J. P. S.; Chaimovich, H.
2002-09-01
We propose a method, termed the generalized Born electronegativity equalization method (GBEEM), to estimate solvent-induced charge redistribution, and demonstrate its usefulness. The charges obtained by GBEEM for a representative series of small organic molecules were compared to PM3-CM1 charges in vacuum and in water. Linear regressions between the GBEEM and PM3-CM1 methods gave appropriate correlation coefficients and standard deviations (R = 0.94, SD = 0.15, F = 234, N = 32 in vacuum; R = 0.94, SD = 0.16, F = 218, N = 29 in water). To test the GBEEM response when intermolecular interactions are involved, we calculated a water dimer in dielectric water using both GBEEM and PM3-CM1; the results were similar. Hence, the method developed here is comparable to established calculation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jentschura, Ulrich D.; National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8401; Mohr, Peter J.
We describe the calculation of hydrogenic (one-loop) Bethe logarithms for all states with principal quantum numbers n ≤ 200. While, in principle, the calculation of the Bethe logarithm is a rather easy computational problem involving only the nonrelativistic (Schrödinger) theory of the hydrogen atom, certain calculational difficulties affect highly excited states, in particular states for which the principal quantum number is much larger than the orbital angular momentum quantum number. Two evaluation methods are contrasted. One is based on the calculation of the principal value of a specific integral over a virtual photon energy. The other relies directly on the spectral representation of the Schrödinger-Coulomb propagator. Selected numerical results are presented. The full set of values is available at arXiv.org/quant-ph/0504002.
Algorithms and physical parameters involved in the calculation of model stellar atmospheres
NASA Astrophysics Data System (ADS)
Merlo, D. C.
This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, for the degree of PhD in Astronomy. We analyze algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical composition, Gaunt factors, atomic cross sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We implement methods that modify the originally adopted temperature distribution in the atmosphere in order to obtain a constant energy flux throughout; we identify limitations and correct numerical instabilities. We integrate the transfer equation by directly solving the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions for the light elements. We also discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.
ERIC Educational Resources Information Center
Herron, J. Dudley, Ed.
1975-01-01
Describes methods for teaching the mole concept, including analogous calculations involving household objects, and the use of a cardboard wheel showing the interrelationships between moles, molecular weight, and the gaseous molar volume. (MLH)
Compatibility of Segments of Thermoelectric Generators
NASA Technical Reports Server (NTRS)
Snyder, G. Jeffrey; Ursell, Tristan
2009-01-01
A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations, in addition to predictions, of the phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, with this method the efficiency of a segmented thermoelectric generator can be estimated by evaluating equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with the introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties: two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of the reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature.
The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
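The compatibility factor admits a closed form, s = (√(1 + zT) − 1)/(αT), where α is the Seebeck coefficient and zT the figure of merit. A minimal sketch follows; the numerical values are illustrative stand-ins, not figures from the article.

```python
import math

def compatibility_factor(alpha, zT, T):
    """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha * T).

    alpha : Seebeck coefficient in V/K
    zT    : dimensionless figure of merit at temperature T
    T     : absolute temperature in K
    Returns s in 1/V.  Segments whose compatibility factors differ widely
    (roughly more than a factor of ~2) cannot both operate near their
    optimal reduced efficiency at the same current-to-heat ratio u.
    """
    return (math.sqrt(1.0 + zT) - 1.0) / (alpha * T)

# Illustrative values loosely resembling a Bi2Te3-type material:
# alpha = 200 uV/K, zT = 1 at T = 400 K.
s = compatibility_factor(200e-6, 1.0, 400.0)  # about 5.2 1/V
```

Comparing s across candidate segment materials at their operating temperatures is the screening step the compatibility framework above enables.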
New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
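The Metropolis idea summarized above can be made concrete with a short sketch (a generic illustration, not code from this chapter): sample an unnormalized 1D density and form a thermodynamic-style average from the chain.

```python
import math
import random

random.seed(1)

def metropolis(log_p, x0, step, n_samples, burn_in=1000):
    """Sample an unnormalized 1D density exp(log_p) with the Metropolis algorithm."""
    x, samples = x0, []
    for i in range(n_samples + burn_in):
        y = x + random.uniform(-step, step)      # symmetric proposal
        log_ratio = log_p(y) - log_p(x)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = y                                 # accept the move
        if i >= burn_in:
            samples.append(x)                     # a rejected move repeats x
    return samples

# Unit Gaussian target: the average <x^2> should approach 1.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 200_000)
mean_x2 = sum(x * x for x in samples) / len(samples)
```

The same accept/reject loop, with the energy difference of a trial configuration in place of `log_ratio`, is what the chapter applies to classical many-particle systems and to the traveling salesman problem.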
A study of the chiro-optical properties of Carvone
NASA Astrophysics Data System (ADS)
Lambert, Jason
2011-10-01
The intrinsic optical rotatory dispersion (IORD) and circular dichroism (CD) of the conformationally flexible carvone molecule have been investigated in 17 solvents and compared with results from calculations for the ``free'' (gas phase) molecule. The G3 method was used to determine the relative energies of the six conformers. The ORD of (R)-(-)-carvone at 589 nm was calculated using coupled cluster and density functional methods, including temperature-dependent vibrational corrections. Vibrational corrections are significant and are primarily associated with normal modes involving the stereogenic carbon atom and the carbonyl group, whose n→π* excitation plays a significant role in the chiroptical response of carvone. However, without the vibrational correction the calculated ORD is of opposite sign to that of the experiment for the CCSD and B3LYP methods. Calculations performed in solution using the PCM model were also opposite in sign to experiment when using the B3LYP density functional.
Relativistic semiempirical-core-potential calculations in Ca+,Sr+ , and Ba+ ions on Lagrange meshes
NASA Astrophysics Data System (ADS)
Filippin, Livio; Schiffmann, Sacha; Dohet-Eraly, Jérémy; Baye, Daniel; Godefroid, Michel
2018-01-01
Relativistic atomic structure calculations are carried out in alkaline-earth-metal ions using a semiempirical-core-potential approach. The systems are partitioned into frozen-core electrons and an active valence electron. The core orbitals are defined by a Dirac-Hartree-Fock calculation using the grasp2k package. The valence electron is described by a Dirac-like Hamiltonian involving a core-polarization potential to simulate the core-valence electron correlation. The associated equation is solved with the Lagrange-mesh method, which is an approximate variational approach having the form of a mesh calculation because of the use of a Gauss quadrature to calculate matrix elements. Properties involving the low-lying metastable
Calculating Potential Energy Curves with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2014-06-01
Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).
Experiments With Magnetic Vector Potential
ERIC Educational Resources Information Center
Skinner, J. W.
1975-01-01
Describes the experimental apparatus and method for the study of magnetic vector potential (MVP). Includes a discussion of inherent errors in the calculations involved, precision of the results, and further applications of MVP. (GS)
[Information value of "additional tasks" method to evaluate pilot's work load].
Gorbunov, V V
2005-01-01
The "additional task" method was used to evaluate pilot workload in prolonged flight. The quantitative workload criterion, calculated from the durations of the latent periods of motor responses, proved more informative for objectively evaluating the pilot's involvement in piloting functions than the other registered parameters.
Development of a neural network technique for KSTAR Thomson scattering diagnostics.
Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun
2016-11-01
Neural networks provide powerful approaches of dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Controlling tokamak plasmas in real time is essential to measure the plasma parameters in situ. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
S-matrix calculations of energy levels of sodiumlike ions
Sapirstein, J.; Cheng, K. T.
2015-06-24
A recent S-matrix-based QED calculation of energy levels of the lithium isoelectronic sequence is extended to the general case of a valence electron outside an arbitrary filled core. Emphasis is placed on modifications of the lithiumlike formulas required because more than one core state is present, and an unusual feature of the two-photon exchange contribution involving autoionizing states is discussed. Here, the method is illustrated with a calculation of the energy levels of sodiumlike ions, with results for 3s1/2, 3p1/2, and 3p3/2 energies tabulated for the range Z = 30-100. Comparison with experiment and other calculations is given, and prospects for extension of the method to ions with more complex electronic structure are discussed.
Simplex volume analysis for finding endmembers in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.
2015-05-01
Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is actually calculated. It turns out that calculating simplex volume is more complicated and involved than one might think. This paper investigates the issue from two different aspects: geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying base by height. The eigen-analysis approach takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with the latter arises when the matrix whose determinant is required is rank deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank; the drawback is that the volume is shrunk, so the volume found for a dimensionality-reduced simplex is not the real original simplex volume. The other is to use singular value decomposition (SVD) to find singular values for calculating simplex volume; the dilemma of this method is its numerical instability. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
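The Cayley-Menger route mentioned above can be sketched in a few lines (a generic implementation, not the paper's code): the volume of an n-simplex follows from a determinant of squared inter-vertex distances.

```python
import math
import numpy as np

def simplex_volume(vertices):
    """Volume of an n-simplex from its n+1 vertices via the Cayley-Menger determinant.

    V^2 = (-1)^(n+1) / (2^n * (n!)^2) * det(CM), where CM is the matrix of
    squared inter-vertex distances bordered by a row and column of ones.
    """
    v = np.asarray(vertices, dtype=float)
    n = v.shape[0] - 1
    d2 = np.sum((v[:, None, :] - v[None, :, :]) ** 2, axis=-1)  # squared distances
    cm = np.ones((n + 2, n + 2))
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    vol_sq = (-1) ** (n + 1) * np.linalg.det(cm) / (2 ** n * math.factorial(n) ** 2)
    return math.sqrt(max(vol_sq, 0.0))  # clamp tiny negative round-off

# Unit right triangle (area 1/2) and unit right tetrahedron (volume 1/6).
tri = simplex_volume([[0, 0], [1, 0], [0, 1]])
tet = simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

Because only inter-vertex distances enter, this route works in the full data dimensionality; the rank-deficiency problem discussed above shows up as an ill-conditioned determinant when the vertices are nearly degenerate.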
NASA Technical Reports Server (NTRS)
Jones, Robert T
1937-01-01
A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.
Bond additivity corrections for quantum chemistry methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method depend only on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the difference between DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
Besley, Nicholas A
2016-10-11
The computational cost of calculating K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through a severe integral screening procedure that includes only integrals involving the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced by constructing the TDDFT matrix within the excitation space of the absorbing core orbitals and by further truncating the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1988-01-01
The problem of calculating the stability of steady-state solutions of differential equations is treated. Leading eigenvalues (i.e., those having maximal real part) of the large matrices that arise from discretization are to be calculated. An efficient multigrid method for solving these problems is presented. The method begins by obtaining an initial approximation for the dominant subspace on a coarse level using a damped Jacobi relaxation. This proceeds until enough accuracy for the dominant subspace has been obtained. The resulting grid functions are then used as an initial approximation for appropriate eigenvalue problems. These problems are solved first on coarse levels, followed by refinement until a desired accuracy for the eigenvalues has been achieved. The method employs local relaxation on all levels together with a global change on the coarsest level only, designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a non-standard way in which the right-hand side of the coarse-grid equations involves unknown parameters to be solved for on the coarse grid. This in particular leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem demonstrate the effectiveness of the proposed method. Using an FMG algorithm, a solution to the level of discretization errors is obtained in just a few work units (less than 10), where a work unit is the work involved in one Jacobi relaxation on the finest level.
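The multigrid machinery above is beyond a short sketch, but its single-grid kernel, extracting a dominant eigenpair by repeated application of the operator, can be illustrated. This is generic power iteration, not the author's FAS scheme; it finds the eigenvalue of largest magnitude, which for the symmetric positive test matrix below coincides with the one of largest real part.

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=10_000, seed=42):
    """Dominant (largest-|lambda|) eigenpair of A by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)     # renormalize to avoid overflow
        lam_new = v @ A @ v           # Rayleigh-quotient eigenvalue estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

# 1D discrete Laplacian-like matrix: eigenvalues are 2 - 2*cos(k*pi/(N+1)).
N = 50
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
lam, _ = power_iteration(A)
```

The slow convergence of this iteration when eigenvalues cluster is exactly what the multigrid scheme above is designed to overcome, by separating eigenfunctions across grid levels.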
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew
2005-08-01
The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.
NASA Technical Reports Server (NTRS)
Sivells, James C; Westrick, Gertrude C
1952-01-01
A method is presented which allows the use of nonlinear section lift data in the calculation of the spanwise lift distribution of unswept wings with flaps or ailerons. This method is based upon lifting line theory and is an extension to the method described in NACA rep. 865. The mathematical treatment of the discontinuity in absolute angle of attack at the end of the flap or aileron involves the use of a correction factor which accounts for the inability of a limited trigonometric series to represent adequately the spanwise lift distribution. A treatment of the apparent discontinuity in maximum section lift coefficient is also described. Simplified computing forms containing detailed examples are given for both symmetrical and asymmetrical lift distributions. A few comparisons of calculated characteristics with those obtained experimentally are also presented.
Methods for calculating conjugate problems of heat transfer
NASA Astrophysics Data System (ADS)
Kalinin, E. K.; Dreitser, G. A.; Kostiuk, V. V.; Berlin, I. I.
Methods are examined for calculating various conjugate problems of heat transfer in channels and closed vessels in cases of single-phase and two-phase flow in steady and unsteady conditions. The single-phase-flow studies involve the investigation of gaseous and liquid heat-carriers in pipes, annular and plane channels, and pipe bundles in cases of cooling and heating. General relationships are presented for heat transfer in cases of film, transition, and nucleate boiling, as well as for boiling crises. Attention is given to methods for analyzing the filling and cooling of conduits and tanks by cryogenic liquids; and ways to intensify heat transfer in these conditions are examined.
Efficient Control Law Simulation for Multiple Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.
1998-10-06
In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law that involves only the distance and bearing to the closest neighbor robot is considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
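The paper's specific hierarchical method is not reproduced here, but the core idea, replacing O(N) closest-neighbor scans with O(log N) tree queries, can be sketched with a plain KD-tree (a generic sketch; the function names are illustrative).

```python
import math
import random

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def build_kdtree(points, depth=0):
    """Recursively split 2D points on alternating axes; O(N log^2 N) build."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid], axis,
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

def nearest(node, q, best=None):
    """Closest stored point to q, excluding q itself (a robot is not its own neighbor)."""
    if node is None:
        return best
    p, axis, left, right = node
    if p != q and (best is None or dist2(p, q) < dist2(best, q)):
        best = p
    near, far = (left, right) if q[axis] < p[axis] else (right, left)
    best = nearest(near, q, best)
    # Descend the far side only if the splitting plane is closer than the best hit.
    if best is None or (q[axis] - p[axis]) ** 2 < dist2(best, q):
        best = nearest(far, q, best)
    return best

def control_input(q, tree):
    """Distance and bearing to the closest neighbor, as in the decentralized law."""
    p = nearest(tree, q)
    return math.hypot(p[0] - q[0], p[1] - q[1]), math.atan2(p[1] - q[1], p[0] - q[0])

random.seed(0)
robots = [(random.random(), random.random()) for _ in range(500)]
tree = build_kdtree(robots)
```

Rebuilding the tree once per time step and issuing one O(log N) query per robot gives the O(N log N) per-step cost quoted above, versus O(N²) for all-pairs scans.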
Calculating Launch Vehicle Flight Performance Reserve
NASA Technical Reports Server (NTRS)
Hanson, John M.; Pinson, Robin M.; Beard, Bernard B.
2011-01-01
This paper addresses different methods for determining the amount of extra propellant (flight performance reserve, or FPR) that is necessary to reach orbit with a high probability of success. One approach involves assuming that the various influential parameters are independent and that the result behaves as a Gaussian. Alternatively, probabilistic models may be used to determine the vehicle and environmental models that will be available (estimated) for a launch-day go/no-go decision. High-fidelity closed-loop Monte Carlo simulation determines the amount of propellant used with each random combination of parameters that are still unknown at the time of launch. Using the results of the Monte Carlo simulation, several methods were used to calculate the FPR. The final chosen solution involves determining distributions for the pertinent outputs and running a separate Monte Carlo simulation to obtain a best estimate of the required FPR. This result differs sufficiently from those obtained using the other methods that the higher fidelity is warranted.
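The percentile logic of the Monte Carlo approach can be sketched with a stand-in usage model (the distribution below is synthetic; the real calculation draws each sample from a high-fidelity closed-loop trajectory simulation).

```python
import random
import statistics

random.seed(2)

def propellant_used():
    """Synthetic stand-in for one dispersed closed-loop trajectory (kg)."""
    winds = random.gauss(0.0, 25.0)                   # symmetric dispersion
    performance = max(0.0, random.gauss(0.0, 15.0))   # one-sided penalty -> skew
    return 1000.0 + winds + performance

samples = sorted(propellant_used() for _ in range(100_000))
nominal = statistics.median(samples)

# Size the reserve so ~99.87% of dispersed cases (a "3-sigma" success
# probability) still reach orbit.
p9987 = samples[int(0.9987 * len(samples))]
fpr = p9987 - nominal
```

For a skewed distribution like this one, the 99.87th percentile is not simply three sample standard deviations above the mean, which is why the independent-Gaussian assumption mentioned above can misestimate the reserve.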
Direct sampling for stand density index
Mark J. Ducey; Harry T. Valentine
2008-01-01
A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...
Composite membranes for fluid separations
Blume, Ingo; Peinemann, Klaus-Viktor; Pinnau, Ingo; Wijmans, Johannes G.
1992-01-01
A method for designing and making composite membranes having a microporous support membrane coated with a permselective layer. The method involves calculating the minimum thickness of the permselective layer such that the selectivity of the composite membrane is close to the intrinsic selectivity of the permselective layer. The invention also provides high performance membranes with optimized properties.
Composite membranes for fluid separations
Blume, Ingo; Peinemann, Klaus-Viktor; Pinnau, Ingo; Wijmans, Johannes G.
1991-01-01
A method for designing and making composite membranes having a microporous support membrane coated with a permselective layer. The method involves calculating the minimum thickness of the permselective layer such that the selectivity of the composite membrane is close to the intrinsic selectivity of the permselective layer. The invention also provides high performance membranes with optimized properties.
Composite membranes for fluid separations
Blume, Ingo; Peinemann, Klaus-Viktor; Pinnau, Ingo; Wijmans, Johannes G.
1990-01-01
A method for designing and making composite membranes having a microporous support membrane coated with a permselective layer. The method involves calculating the minimum thickness of the permselective layer such that the selectivity of the composite membrane is close to the intrinsic selectivity of the permselective layer. The invention also provides high performance membranes with optimized properties.
Development of a neural network technique for KSTAR Thomson scattering diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seung Hun, E-mail: leesh81@nfri.re.kr; Lee, J. H.; Yamada, I.
Neural networks provide powerful approaches of dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Controlling tokamak plasmas in real time is essential to measure the plasma parameters in situ. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
NASA Technical Reports Server (NTRS)
Riley, Donald R.
2016-01-01
Numerical values for some aerodynamic terms and stability derivatives for several different wings in unseparated inviscid incompressible flow were calculated using a discrete vortex method involving a limited number of horseshoe vortices. Both longitudinal and lateral-directional derivatives were calculated for steady conditions as well as for sinusoidal oscillatory motions. Variables included the number of vortices used and the chordwise location of the rotation axis/moment center. Frequencies considered were limited to the range of interest for vehicle dynamic stability (kb < 0.24). Comparisons of some calculated numerical results with experimental wind-tunnel measurements showed reasonable agreement in the low angle-of-attack range, considering the differences between the mathematical representation and the wind-tunnel models tested. Of particular interest was the presence of induced drag for the oscillatory condition.
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
Carcass Functions in Variational Calculations for Few-Body Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donchev, A.G.; Kalachev, S.A.; Kolesnikov, N.N.
For variational calculations of molecular and nuclear systems involving a few particles, it is proposed to use carcass basis functions that generalize exponential and Gaussian trial functions. It is shown that the matrix elements of the Hamiltonian are expressed in a closed form for a Coulomb potential, as well as for other popular particle-interaction potentials. The use of such carcass functions in two-center Coulomb problems reduces, in relation to other methods, the number of terms in a variational expansion by a few orders of magnitude at a commensurate or even higher accuracy. The efficiency of the method is illustrated by calculations of the three-particle Coulomb systems μμe, ppe, dde, and tte and the four-particle molecular systems H₂ and HeH⁺ of various isotopic composition. By considering the example of the ⁹ΛBe hypernucleus, it is shown that the proposed method can be used in calculating nuclear systems as well.
Xu, Shenghua; Liu, Jie; Sun, Zhiwei
2006-12-01
Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.
Estimating proportions in petrographic mixing equations by least-squares approximation.
Bryan, W B; Finger, L W; Chayes, F
1969-02-28
Petrogenetic hypotheses involving fractional crystallization, assimilation, or mixing of magmas may be expressed and tested as problems in least-squares approximation. The calculation uses all of the data and yields a unique solution for each model, thus avoiding the ambiguity inherent in graphical or trial-and-error procedures. The compositional change in the 1960 lavas of Kilauea Volcano, Hawaii, is used to illustrate the method of calculation.
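The least-squares formulation can be sketched numerically: a daughter lava is modeled as a linear mixture of end-member compositions, and the proportions are recovered in a single solve. All compositions and end members below are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Columns: hypothetical end members (parent magma, olivine, plagioclase);
# rows: oxide abundances (SiO2, Al2O3, MgO, CaO) in wt% -- illustrative only.
end_members = np.array([
    [50.0, 40.0, 48.0],
    [14.0,  0.1, 32.0],
    [ 7.0, 45.0,  0.2],
    [11.0,  0.3, 16.0],
])

# Construct a "daughter" lava as a known mixture, then recover the proportions.
true_props = np.array([0.90, 0.06, 0.04])
daughter = end_members @ true_props

# One least-squares solve yields a unique answer for the model,
# in contrast to graphical or trial-and-error procedures.
props, *_ = np.linalg.lstsq(end_members, daughter, rcond=None)
```

Because the system is overdetermined (four oxides, three end members), the residuals returned by the solve also measure how well the petrogenetic hypothesis fits the data.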
Singh, J; Thornton, J M
1990-02-05
Automated methods have been developed to determine the preferred packing arrangement between interacting protein groups. A suite of FORTRAN programs, SIRIUS, is described for calculating and analysing the geometries of interacting protein groups using crystallographically derived atomic co-ordinates. The programs involved in calculating the geometries search for interacting pairs of protein groups using a distance criterion, and then calculate the spatial disposition and orientation of the pair. The second set of programs is devoted to analysis. This involves calculating the observed and expected distributions of the angles and assessing the statistical significance of the difference between the two. A database of the geometries of the 400 combinations of side-chain to side-chain interaction has been created. The approach used in analysing the geometrical information is illustrated here with specific examples of interactions between side-chains, peptide groups and particular types of atom. At the side-chain level, an analysis of aromatic-amino interactions, and the interactions of peptide carbonyl groups with arginine residues is presented. At the atomic level the analyses include the spatial disposition of oxygen atoms around tyrosine residues, and the frequency and type of contact between carbon, nitrogen and oxygen atoms. This information is currently being applied to the modelling of protein interactions.
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
In this paper transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
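The per-cell nonlinear solve described above can be illustrated with a scalar sketch: an implicit update for the voltage across a diode-loaded gap, solved by Newton's method once per time step. The circuit values and the exponential diode model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Implicit update for the voltage across a diode-loaded FDTD cell at one time
# step (all circuit values are illustrative assumptions, not from the paper).
C, dt = 1e-12, 1e-12      # effective cell capacitance (F) and time step (s)
Is, Vt = 1e-14, 0.025     # diode saturation current (A), thermal voltage (V)
I_drive = 1e-3            # current forced by the incident field this step (A)
V_old = 0.0               # voltage from the previous time step (V)

def g(V):
    """Residual of the implicit update: capacitive + diode - drive current."""
    return C * (V - V_old) / dt + Is * (np.exp(V / Vt) - 1.0) - I_drive

def dg(V):
    return C / dt + (Is / Vt) * np.exp(V / Vt)

V = V_old
for _ in range(50):       # one Newton solve per nonlinear cell per time step
    step = g(V) / dg(V)
    V -= step
    if abs(step) < 1e-15:
        break
```

In a full FDTD code this solve replaces the ordinary linear field update only in cells containing a nonlinear load; all other cells use the standard explicit update.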
An Efficient Statistical Method to Compute Molecular Collisional Rate Coefficients
NASA Astrophysics Data System (ADS)
Loreau, Jérôme; Lique, François; Faure, Alexandre
2018-01-01
Our knowledge about the “cold” universe often relies on molecular spectra. A general property of such spectra is that the energy level populations are rarely at local thermodynamic equilibrium. Solving the radiative transfer thus requires the availability of collisional rate coefficients with the main colliding partners over the temperature range ∼10–1000 K. These rate coefficients are notoriously difficult to measure and expensive to compute. In particular, very few reliable collisional data exist for inelastic collisions involving reactive radicals or ions. In this Letter, we explore the use of a fast quantum statistical method to determine molecular collisional excitation rate coefficients. The method is benchmarked against accurate (but costly) rigid-rotor close-coupling calculations. For collisions proceeding through the formation of a strongly bound complex, the method is found to be highly satisfactory up to room temperature. Its accuracy decreases with decreasing potential well depth and with increasing temperature, as expected. This new method opens the way to the determination of accurate inelastic collisional data involving key reactive species such as {{{H}}}3+, H2O+, and H3O+ for which exact quantum calculations are currently not feasible.
A combined representation method for use in band structure calculations. 1: Method
NASA Technical Reports Server (NTRS)
Friedli, C.; Ashcroft, N. W.
1975-01-01
A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), augmented-plane-wave (APW), and orthogonalized-plane-wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.
NASA Astrophysics Data System (ADS)
Yannopapas, Vassilios; Paspalakis, Emmanuel
2018-07-01
We present a new theoretical tool for simulating optical trapping of nanoparticles in the presence of an arbitrary metamaterial design. The method is based on rigorously solving Maxwell's equations for the metamaterial via a hybrid discrete-dipole approximation/multiple-scattering technique and direct calculation of the optical force exerted on the nanoparticle by means of the Maxwell stress tensor. We apply the method to the case of a spherical polystyrene probe trapped within the optical landscape created by illuminating a plasmonic metamaterial consisting of periodically arranged tapered metallic nanopyramids. The developed technique is ideally suited for general optomechanical calculations involving metamaterial designs and can compete with purely numerical methods such as finite-difference or finite-element schemes.
Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas
2008-04-01
A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the block inversion of the impedance matrix (in which the two impedance matrices of the interfaces and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer, in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method to an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
Traino, A C; Marcatili, S; Avigo, C; Sollini, M; Erba, P A; Mariani, G
2013-04-01
Nonuniform activity within the target lesions and the critical organs constitutes an important limitation for dosimetric estimates in patients treated with tumor-seeking radiopharmaceuticals. The tumor control probability and the normal tissue complication probability are affected by the distribution of the radionuclide in the treated organ/tissue. In this paper, a straightforward method for calculating the absorbed dose at the voxel level is described. This new method takes into account a nonuniform activity distribution in the target/organ. The new method is based on the macroscopic S-values (i.e., the S-values calculated for the various organs, as defined in the MIRD approach), on the definition of the number of voxels, and on the raw-count 3D array, corrected for attenuation, scatter, and collimator resolution, in the lesion/organ considered. Starting from these parameters, the only mathematical operation required is to multiply the 3D array by a scalar value, thus avoiding all the complex operations involving the 3D arrays. A comparison with the MIRD approach, fully described in the MIRD Pamphlet No. 17, using S-values at the voxel level, showed a good agreement between the two methods for (131)I and for (90)Y. Voxel dosimetry is becoming more and more important when performing therapy with tumor-seeking radiopharmaceuticals. The method presented here does not require calculating the S-values at the voxel level, and thus bypasses the mathematical problems linked to the convolution of 3D arrays and to the voxel size. In the paper, the results obtained with this new simplified method as well as the possibility of using it for other radionuclides commonly employed in therapy are discussed. The possibility of using the correct density value of the tissue/organs involved is also discussed.
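The scalar-multiplication idea can be sketched as follows: the corrected count array is rescaled so that its mean equals the MIRD organ-level dose, making a single scalar multiplication the only operation applied to the 3D array. All numerical values below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical attenuation/scatter/collimator-corrected counts in a lesion
counts = np.random.default_rng(0).poisson(50.0, size=(16, 16, 16)).astype(float)

# Organ-level MIRD quantities (illustrative values, not from the paper)
S_organ = 2.0e-5                 # mean dose per unit cumulated activity, Gy/(Bq s)
A_cum = 5.0e9                    # cumulated activity in the organ, Bq s
D_mean = A_cum * S_organ         # classical MIRD mean organ dose, Gy

# Voxel dose map: the count array times a single scalar --
# no convolution of 3D arrays with voxel S-values is needed.
scale = D_mean / counts.mean()
dose = counts * scale            # Gy per voxel; mean equals D_mean
```

The voxel doses are thus proportional to the local corrected counts while reproducing the organ-level MIRD dose on average, which is the essence of the simplification described above.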
Automatic identification of abstract online groups
Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W
2014-04-15
Online abstract groups, in which members are not explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still, the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record whose feature-based distances from every n-dimensional surface exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
New approach to isometric transformations in oblique local coordinate systems of reference
NASA Astrophysics Data System (ADS)
Stępień, Grzegorz; Zalas, Ewa; Ziębka, Tomasz
2017-12-01
The research article describes a method of isometric transformation and determination of the exterior orientation of a measurement instrument. The method is based on designating a "virtual" translation of two relatively oblique orthogonal systems to a common point known in both systems. The relative angular orientation of the systems does not change, as each system is moved along its own axes. The next step is the determination of the three rotation angles (e.g. Tait-Bryan or Euler angles), transformation of the system rotated by the calculated angles, and moving the system back to the initial position of the primary coordinate system. This eliminates the translations from the calculations and makes it possible to compute the mutual rotation angles of the two orthogonal systems involved in the movement. The research article covers laboratory calculations for simulated data. The accuracy of the results is 10⁻⁶ m (10⁻³ relative to the accuracy of the input data), which confirmed the correctness of the assumed calculation method. In the following step the method was verified under field conditions, where its accuracy reached 0.003 m. The proposed method enables measurements with an oblique and uncentered instrument, e.g. a total station set over an unknown point, which is why the authors named it Total Free Station (TFS). The method may also be used for isometric transformations for photogrammetric purposes.
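The angle-determination step can be illustrated with a standard best-fit rotation between two point sets expressed in the two systems (the Kabsch/SVD construction), followed by extraction of Tait-Bryan angles. This is a generic sketch, not the authors' exact TFS algorithm, and all point coordinates are simulated.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R such that Q ≈ P @ R.T after centring both sets."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Simulated control points, known in the primary system
rng = np.random.default_rng(1)
P = rng.normal(size=(6, 3))

# True Tait-Bryan angles (roll a, pitch b, yaw c) and a translation
a, b, c = 0.1, -0.2, 0.3
ca, sa, cb, sb = np.cos(a), np.sin(a), np.cos(b), np.sin(b)
cc, sc = np.cos(c), np.sin(c)
Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
Rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
R_true = Rz @ Ry @ Rx
Q = P @ R_true.T + np.array([5.0, -2.0, 1.0])  # same points, secondary system

# Centring removes the translation; the SVD recovers the mutual rotation
R_est = kabsch(P, Q)
roll = np.arctan2(R_est[2, 1], R_est[2, 2])
pitch = -np.arcsin(R_est[2, 0])
yaw = np.arctan2(R_est[1, 0], R_est[0, 0])
```

Centring both point sets about the shared point plays the same role as the "virtual translation" in the method above: the unknown displacement drops out, leaving only the mutual rotation to be solved for.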
Wear Calculation Approach for Sliding - Friction Pairs
NASA Astrophysics Data System (ADS)
Springis, G.; Rudzitis, J.; Lungevics, J.; Berzins, K.
2017-05-01
Predicting the service life of a product depends critically on the choice of an adequate method. With the development of production technologies and of measuring devices of ever increasing precision, one can obtain the data needed for analytic calculations. Several theoretical wear calculation methods exist, but there is still no exact wear calculation model applicable to all wear processes, because of the difficulties posed by the variety of parameters involved in the wear of two or more surfaces. Analysing the wear prediction theories, which can be classified into definite groups, one can state that each of them has shortcomings that might affect the results, rendering theoretical calculations of limited use. The offered wear calculation method is based on theories from different branches of science. It includes a description of 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during wear using fatigue theory, and takes into account the material's physical and mechanical characteristics and the specific conditions of the product's service time. The proposed wear calculation model could be of value for predicting the service life of sliding friction pairs, thus allowing the best technologies to be chosen for many mechanical components.
Multiple steady states in atmospheric chemistry
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
1993-01-01
The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving Ox, HOx, NOx, and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
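The 'search' strategy can be sketched on a toy problem: Newton-Raphson iterations launched from a grid of initial guesses, with distinct converged roots collected as candidate steady states. The cubic below is an illustrative stand-in for a photochemical production-loss balance, not a real mechanism.

```python
import numpy as np

# Toy continuity equation dn/dt = f(n); steady states satisfy f(n) = 0.
# This cubic has three steady states, mimicking a multiple-solution set.
def f(n):
    return -(n - 1.0) * (n - 3.0) * (n - 5.0)

def df(n):
    return -(3.0 * n**2 - 18.0 * n + 23.0)

roots = []
for guess in np.linspace(0.0, 6.0, 25):    # wide range of initial guesses
    n = guess
    for _ in range(100):                    # Newton-Raphson iteration
        if abs(df(n)) < 1e-14:              # avoid division at critical points
            break
        step = f(n) / df(n)
        n -= step
        if abs(step) < 1e-12:
            break
    # keep converged solutions, discarding duplicates
    if abs(f(n)) < 1e-9 and not any(abs(n - r) < 1e-6 for r in roots):
        roots.append(n)
```

The 'hit or miss' character of the method is visible here: whether all three roots are found depends entirely on the spread of initial guesses, which is what motivates the globally convergent homotopy alternative.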
Uniform semiclassical sudden approximation for rotationally inelastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korsch, H.J.; Schinke, R.
1980-08-01
The infinite-order-sudden (IOS) approximation is investigated in the semiclassical limit. A simplified IOS formula for rotationally inelastic differential cross sections is derived involving a uniform stationary phase approximation for two-dimensional oscillatory integrals with two stationary points. The semiclassical analysis provides a quantitative description of the rotational rainbow structure in the differential cross section. The numerical calculation of semiclassical IOS cross sections is extremely fast compared to numerically exact IOS methods, especially if high Δj transitions are involved. Rigid rotor results for He-Na₂ collisions with Δj ≲ 26 and for K-CO collisions with Δj ≲ 70 show satisfactory agreement with quantal IOS calculations.
Improvements of CO2 and O2 Transmission Modeling for ASCENDS Mission Applications
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narashimha S.
2011-01-01
Simulations using the HITRAN database and other data have been carried out to select the optimum laser wavelengths for the measurement of CO2 and O2 concentrations for the ASCENDS mission. The accuracy set forth for the ASCENDS mission requires accurate line-by-line calculations involving the use of non-Voigt line shapes. To aid in achieving this goal, improved CO2 and O2 transmission calculation methods are being developed. In particular, line-by-line transmission modeling of CO2 was improved by implementing non-Voigt spectral lineshapes. Ongoing work involves extending this approach to the O2 molecule's 1.26-1.27 micron spectral band.
Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu
2011-04-01
A hospital director must estimate the revenues and expenses not only of the hospital as a whole but also of each clinical division in order to determine the proper management strategy. A new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS), introduced in 2003, has made the attribution of revenues and expenses to each clinical department very complicated because of the intricate interplay between the overall (blanket) component and fee-for-service (FFS) payments. Few reports have so far presented a programmatic method for the calculation of medical costs and financial balance. A simple method, based on personnel cost, has been devised for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital with 535 beds and 16 clinics without using the central hospital computer system.
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
NASA Astrophysics Data System (ADS)
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculations of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
Calculation of Expectation Values of Operators in the Complex Scaling Method
Papadimitriou, G.
2016-06-14
The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L² integrable basis to resolve the complex-rotated or complex-scaled Hamiltonian H(θ), with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. Furthermore, one of the consequences is that expectation values of operators in a resonance or scattering complex-scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to calculations of expectation values of quantum mechanical operators by using the regularized back-rotation technique, hence calculating the expectation value with the unrotated operator. The test cases involve a schematic two-body Gaussian model as well as applications using realistic interactions.
Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.
Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H
2015-11-01
Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimate of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool to understand these methods and realize the practical issues that may arise in such calculations.
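One widely used fluctuation-based estimator is Schlitter's upper bound, which turns the mass-weighted covariance matrix of Cartesian fluctuations into an entropy estimate. A minimal sketch on synthetic fluctuation data (not a real trajectory) is:

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
T = 300.0                # temperature, K

# Synthetic Cartesian fluctuations for 5 carbon-like atoms (15 DOF);
# a real application would use aligned coordinates from an MD trajectory.
rng = np.random.default_rng(2)
x = rng.normal(scale=2.0e-11, size=(2000, 15))               # metres
masses = np.repeat(np.full(5, 12.0 * 1.66053906660e-27), 3)  # kg

C = np.cov(x, rowvar=False)        # covariance matrix of atomic fluctuations
M_sqrt = np.diag(np.sqrt(masses))  # mass weighting
arg = np.eye(15) + (kB * T * np.e**2 / hbar**2) * M_sqrt @ C @ M_sqrt
sign, logdet = np.linalg.slogdet(arg)
S_upper = 0.5 * kB * logdet        # Schlitter upper bound on entropy, J/K
```

Using `slogdet` rather than the determinant itself avoids overflow for large covariance matrices, a practical issue of exactly the kind the review discusses.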
Udzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem
NASA Astrophysics Data System (ADS)
Lapin, A.; Romanenko, A.
2016-11-01
The article deals with an optimal control problem with a parabolic equation as the state problem. There are pointwise constraints on the state and control functions. The objective functional involves an observation given in the domain at each moment of time. Conditions for the convergence of the Udzawa-type iterative method are given. A parareal method is used to construct the preconditioner. The results of calculations are presented.
Improved and standardized method for assessing years lived with disability after injury
Polinder, S; Lyons, RA; Lund, J; Ditsuwan, V; Prinsloo, M; Veerman, JL; van Beeck, EF
2012-01-01
Objective: To develop a standardized method for calculating years lived with disability (YLD) after injury. Methods: The method developed consists of obtaining data on injury cases seen in emergency departments as well as injury-related hospital admissions, using the EUROCOST system to link the injury cases to disability information, and employing empirical data to describe functional outcomes in injured patients. Findings: Overall, 87 weights and proportions for 27 injury diagnoses involving lifelong consequences were included in the method. Almost all of the injuries investigated (96-100%) could be assigned to EUROCOST categories. The mean number of YLD per case of injury varied with the country studied. Use of the novel method resulted in estimated burdens of injury that were 3 to 8 times higher, in terms of YLD, than the corresponding estimates produced using the conventional methods employed in global burden of disease studies, which employ disability-adjusted life years. Conclusion: The novel method for calculating YLD after injury can be applied in different settings, overcomes some limitations of the method used to calculate the global burden of disease, and allows more accurate estimates of the population burden of injury. PMID:22807597
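The YLD bookkeeping reduces to simple arithmetic per diagnosis group: cases times the proportion with lifelong consequences times a disability weight times the expected duration. The table below is entirely illustrative and is not EUROCOST data.

```python
# diagnosis -> (annual cases, proportion with lifelong consequences,
#               disability weight, mean duration in years) -- illustrative
injuries = {
    "skull-brain injury": (1200, 0.10, 0.30, 40.0),
    "hip fracture":       (3000, 0.25, 0.15, 12.0),
    "ankle sprain":       (9000, 0.01, 0.05,  0.5),
}

def yld(cases, p_lifelong, weight, duration):
    """Years lived with disability contributed by one diagnosis group."""
    return cases * p_lifelong * weight * duration

total_yld = sum(yld(*row) for row in injuries.values())
```

In the full method, each diagnosis would additionally carry weights for temporary disability, which would enter the sum as extra terms of the same form.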
2017-01-01
In this work, a new protocol for the calculation of valence-to-core resonant X-ray emission (VtC RXES) spectra is introduced. The approach is based on the previously developed restricted open configuration interaction with singles (ROCIS) method and its parametrized version, based on a ground-state Kohn–Sham determinant (DFT/ROCIS) method. The ROCIS approach has the following features: (1) In the first step approximation, many-particle eigenstates are calculated in which the total spin is retained as a good quantum number. (2) The ground state with total spin S and excited states with spin S′ = S, S ± 1 are obtained. (3) These states have a qualitatively correct multiplet structure. (4) Quasi-degenerate perturbation theory is used to treat the spin–orbit coupling operator variationally at the many-particle level. (5) Transition moments are obtained between the relativistic many-particle states. The method has shown great potential in the field of X-ray spectroscopy, in particular in the field of transition-metal L-edges, which cannot be described correctly with particle–hole theories. In this work, the method is extended to the calculation of resonant VtC RXES [alternatively referred to as 1s-VtC resonant inelastic X-ray scattering (RIXS)] spectra. The complete Kramers–Heisenberg–Dirac equation is taken into account. Thus, state interference effects are treated naturally within this protocol. As a first application of this protocol, a computational study of the previously reported VtC RXES plane of a molecular manganese(V) complex is performed. Starting from conventional X-ray absorption spectra (XAS), we present a systematic study that involves calculations and electronic structure analysis of both the XAS and the non-resonant and resonant VtC XES spectra.
The very good agreement between theory and experiment, observed in all cases, allows us to unravel the complicated intensity mechanism of these spectroscopic techniques as a synergic function of state polarization and interference effects. In general, intense features in the RIXS spectra originate from absorption and emission processes that involve nonorthogonal transition moments. We also present a graphical method to determine the sign of the interference contributions. PMID:28920680
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O.
In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl, or p_cav p_dis) for ion chambers by Monte Carlo calculation. By "direct" we meant that the stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a ⁶⁰Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values upon the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes a cavity size in the Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty in the calculated P_repl values. For phantom materials of similar effective atomic number to air, such as water and graphite, this systematic uncertainty is at most 0.2% for most commonly used chambers in either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need to do a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.
2012-08-01
We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit, and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. 
of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperatures) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.
Calculation of Drug Solubilities by Pharmacy Students.
ERIC Educational Resources Information Center
Cates, Lindley A.
1981-01-01
A method of estimating the solubilities of drugs in water is reported that is based on a principle applied in quantitative structure-activity relationships. This procedure involves correlation of partition coefficient values using the octanol/water system and aqueous solubility. (Author/MLW)
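The correlation described above can be sketched as a simple linear relation between log aqueous solubility and the octanol/water log P. The slope, intercept, and data below are illustrative placeholders, not the coefficients or measurements from the article:

```python
import numpy as np

def fit_logS_logP(logP, logS):
    """Least-squares fit of log(aqueous solubility) against log(octanol/water
    partition coefficient), the kind of linear QSAR-style correlation
    described in the abstract."""
    slope, intercept = np.polyfit(logP, logS, 1)
    return slope, intercept

def predict_logS(logP, slope, intercept):
    """Estimate log solubility for a new compound from its log P."""
    return slope * logP + intercept

# Synthetic demonstration data (NOT drug measurements): an exactly
# linear relation logS = -1.05 * logP + 0.5.
logP = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
logS = -1.05 * logP + 0.5

slope, intercept = fit_logS_logP(logP, logS)
```

In practice the fitted coefficients would come from a training set of compounds with measured solubilities; prediction is then a single multiply-add per compound.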
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper develops a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. The research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs, and the road network as a whole. A sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. Compared to traditional traffic assignment, this method improves calculation efficiency and makes the application of vulnerability analysis to large real road networks possible. Finally, all of the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. The approach can be used as a decision-support tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
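The Q-profile construction that the abstract extends can be sketched for the simpler meta-analysis case (no covariates). Assuming study estimates y_i with known within-study variances v_i, the generalized Q statistic is profiled over candidate values of the between-study variance tau^2; a bracketing root-finder stands in here for the paper's Newton-Raphson procedure:

```python
import numpy as np
from scipy import optimize, stats

def q_stat(tau2, y, v):
    """Generalized Cochran Q statistic at a candidate between-study
    variance tau2, using inverse-variance weights 1/(v_i + tau2)."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)          # weighted mean at this tau2
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, level=0.95):
    """Q-profile confidence interval for tau2: Q(tau2) is decreasing in
    tau2, so the bounds are where it crosses the chi-square quantiles
    with k - 1 degrees of freedom."""
    k = len(y)
    upper_target = stats.chi2.ppf(1 - (1 - level) / 2, k - 1)  # gives lower bound
    lower_target = stats.chi2.ppf((1 - level) / 2, k - 1)      # gives upper bound

    def solve(target):
        f = lambda t2: q_stat(t2, y, v) - target
        if f(0.0) <= 0:        # Q never reaches the target: truncate at 0
            return 0.0
        return optimize.brentq(f, 0.0, 1e4)

    return solve(upper_target), solve(lower_target)
```

For heterogeneous data, Q(0) exceeds the upper chi-square quantile and both crossings exist; for homogeneous data the interval is truncated at zero, mirroring the behavior of the frequentist methods the abstract describes.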
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, T.; Rabitz, H.
1996-02-01
A general interpolation method for constructing smooth molecular potential energy surfaces (PESs) from ab initio data is proposed within the framework of the reproducing kernel Hilbert space and inverse problem theory. The general expression for an a posteriori error bound of the constructed PES is derived. It is shown that the method yields globally smooth potential energy surfaces that are continuous and possess derivatives up to second order or higher. Moreover, the method is amenable to correct symmetry properties and asymptotic behavior of the molecular system. Finally, the method is generic and can be easily extended from low-dimensional problems involving two and three atoms to high-dimensional problems involving four or more atoms. Basic properties of the method are illustrated by the construction of a one-dimensional potential energy curve of the He-He van der Waals dimer using the exact quantum Monte Carlo calculations of Anderson et al. [J. Chem. Phys. 99, 345 (1993)], a two-dimensional potential energy surface of the HeCO van der Waals molecule using recent ab initio calculations by Tao et al. [J. Chem. Phys. 101, 8680 (1994)], and a three-dimensional potential energy surface of the H3+ molecular ion using highly accurate ab initio calculations of Röhse et al. [J. Chem. Phys. 101, 2231 (1994)]. In the first two cases the constructed potentials clearly exhibit the correct asymptotic forms, while in the last case the constructed potential energy surface is in excellent agreement with that constructed by Röhse et al. using a low-order polynomial fitting procedure.
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty and Makabe-Morimura approximations are extremely poor compared with this approximation. 4 tables.
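The kind of normal-based approximation the paper compares can be illustrated as follows. The square-root form below is one common textbook variance-stabilizing transformation, not Kao's specific proposal:

```python
import math

def poisson_cdf(x, lam):
    """Exact Poisson CDF P(X <= x) by direct summation of terms,
    accumulating lam^k e^(-lam) / k! without explicit factorials."""
    term = total = math.exp(-lam)
    for k in range(1, x + 1):
        term *= lam / k
        total += term
    return total

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def normal_approx(x, lam):
    """Plain normal approximation with continuity correction."""
    return phi((x + 0.5 - lam) / math.sqrt(lam))

def sqrt_approx(x, lam):
    """A square-root-type transformation: stabilizes the variance of a
    Poisson variate before applying the normal table."""
    return phi(2 * (math.sqrt(x + 1) - math.sqrt(lam)))
```

Comparing these against the exact CDF for moderate means shows the ranking behavior the paper quantifies: both avoid factorials entirely, at the cost of approximation error that depends on the transformation chosen.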
Self-interaction correction in multiple scattering theory: application to transition metal oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daene, Markus W; Lueders, Martin; Ernst, Arthur
2009-01-01
We apply the self-interaction corrected (SIC) local spin density (LSD) approximation to transition metal monoxides, implemented locally in multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure, and in particular the magnetic moments and energy gaps, are discussed with reference to the earlier SIC results obtained within the LMTO-ASA band structure method, which involves transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR method can easily be extended to treat disordered alloys by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition
2011-01-01
Background Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion based method, GPU-Q-J, that is stable with single precision calculations and suitable for graphics processor units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used. Conclusions GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome and genome wide scale. PMID:21453553
ERIC Educational Resources Information Center
Molfenter, Sonja M.; Steele, Catriona M.
2014-01-01
Purpose: Traditional methods for measuring hyoid excursion from dynamic videofluoroscopy recordings involve calculating changes in position in absolute units (mm). This method shows a high degree of variability across studies but agreement that greater hyoid excursion occurs in men than in women. Given that men are typically taller than women, the…
Restricted Collision List method for faster Direct Simulation Monte-Carlo (DSMC) collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macrossan, Michael N., E-mail: m.macrossan@uq.edu.au
The ‘Restricted Collision List’ (RCL) method for speeding up the calculation of DSMC Variable Soft Sphere (VSS) collisions, with Borgnakke–Larsen (BL) energy exchange, is presented. The method cuts down considerably on the number of random collision parameters which must be calculated (deflection and azimuthal angles, and the BL energy exchange factors). A relatively short list of these parameters is generated, and the parameters required in any cell are selected from this list. The list is regenerated at intervals approximately equal to the smallest mean collision time in the flow, and the chance of any particle re-using the same collision parameters in two successive collisions is negligible. The results using this method are indistinguishable from those obtained with standard DSMC. The CPU time saving depends on how much of a DSMC calculation is devoted to collisions and how much is devoted to other tasks, such as moving particles and calculating particle interactions with flow boundaries. For one-dimensional calculations of flow in a tube, the new method saves 20% of the CPU time per collision for VSS scattering with no energy exchange. With RCL applied to rotational energy exchange, the CPU saving can be greater; for small values of the rotational collision number, for which most collisions involve some rotational energy exchange, the CPU time may be reduced by 50% or more.
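The core of the RCL idea is to draw collision parameters from a short pre-generated table by random index rather than sampling fresh values per collision. The sketch below uses uniform placeholder distributions for illustration only; it is not the author's code and omits the actual VSS deflection and BL energy-exchange sampling formulas:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_collision_list(n):
    """Pre-generate a restricted list of n random collision-parameter
    rows: cos(deflection angle) and azimuthal angle. In the RCL method
    this list is regenerated roughly once per smallest mean collision
    time in the flow."""
    return np.column_stack([
        rng.uniform(-1.0, 1.0, n),        # cos(chi), placeholder law
        rng.uniform(0.0, 2 * np.pi, n),   # azimuthal angle epsilon
    ])

def draw_parameters(table, k):
    """Select k parameter rows by random index instead of generating
    fresh random deviates for every collision: the source of the
    per-collision CPU saving."""
    return table[rng.integers(0, len(table), k)]
```

Because the index draw is cheap and the table is short, most of the cost of evaluating transcendental functions for each collision is amortized over many collisions, at the price of a vanishing chance of parameter re-use.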
NASA Technical Reports Server (NTRS)
Sohn, J. L.; Heinrich, J. C.
1990-01-01
The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.
NASA Technical Reports Server (NTRS)
Watkins, Charles E; Durling, Barbara J
1956-01-01
This report presents tabulated values of certain definite integrals that are involved in the calculation of near-field propeller noise when the chordwise forces are assumed to be either uniform or of a Dirac delta type. The tabulations cover a wide range of operating conditions and are useful for estimating propeller noise when either the concept of an effective radius or radial distributions of forces are considered. Use of the tabulations is illustrated by several examples of calculated results for some specific propellers.
Ab-initio Computation of the Electronic, transport, and Bulk Properties of Calcium Oxide.
NASA Astrophysics Data System (ADS)
Mbolle, Augustine; Banjara, Dipendra; Malozovsky, Yuriy; Franklin, Lashounda; Bagayoko, Diola
We report results from ab initio, self-consistent, local density approximation (LDA) calculations of the electronic and related properties of calcium oxide (CaO) in the rock salt structure. We employed the Ceperley and Alder LDA potential and the linear combination of atomic orbitals (LCAO) formalism. Our calculations are non-relativistic. We implemented the LCAO formalism following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). The BZW-EF method involves a methodical search for the optimal basis set that yields the absolute minima of the occupied energies, as required by density functional theory (DFT). Our calculated indirect band gap of 6.91 eV, from towards the L point, is in excellent agreement with the experimental value of 6.93-7.7 eV at room temperature (RT). We have also calculated the total (DOS) and partial (pDOS) densities of states as well as the bulk modulus. Our calculated bulk modulus is in excellent agreement with experiment. Work funded in part by the US Department of Energy (DOE), National Nuclear Security Administration (NNSA) (Award No. DE-NA0002630), the National Science Foundation (NSF) (Award No. 1503226), LaSPACE, and LONI-SUBR.
Shear, principal, and equivalent strains in equal-channel angular deformation
NASA Astrophysics Data System (ADS)
Xia, K.; Wang, J.
2001-10-01
The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
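The 0.881 strain and the pi/8 direction quoted above can be reproduced from the kinematics of simple shear. For a channel intersection angle of pi/2, ECAD imposes a total shear of gamma = 2; the logarithmic principal strain and its direction then follow from the left Cauchy-Green tensor of the simple-shear deformation gradient:

```python
import numpy as np

gamma = 2.0                                  # total shear for a pi/2 die
F = np.array([[1.0, gamma],
              [0.0, 1.0]])                   # simple-shear deformation gradient
B = F @ F.T                                  # left Cauchy-Green tensor

eigvals, eigvecs = np.linalg.eigh(B)         # ascending eigenvalues

# Largest principal stretch and its logarithmic (true) strain.
lam_max = np.sqrt(eigvals[-1])
principal_strain = np.log(lam_max)           # ~0.881

# Direction of the major principal axis from the exit-channel axis.
v = eigvecs[:, -1]
if v[0] < 0:
    v = -v                                   # fix the sign convention
angle_deg = np.degrees(np.arctan2(v[1], v[0]))  # ~22.5 degrees = pi/8
```

Analytically, the largest stretch squared is 3 + 2*sqrt(2) ≈ 5.828, giving ln(2.414) ≈ 0.881, and the eigenvector direction is exactly 22.5 degrees, matching the values stated in the abstract.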
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes, the symmetry program for determining point group symmetry of a molecule, and seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes of the torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s. References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
Electron Capture in Slow Collisions of Si4+ With Atomic Hydrogen
NASA Astrophysics Data System (ADS)
Joseph, D. C.; Gu, J. P.; Saha, B. C.
2009-10-01
In recent years, charge transfer involving Si4+ and H at low energies has drawn considerable attention both theoretically and experimentally due to its importance not only in astronomical environments but also in the modern semiconductor industry. Accurate information regarding the molecular structures and interactions is essential to understand the low-energy collision dynamics. Ab initio calculations are performed using the multireference single- and double-excitation configuration-interaction (MRD-CI) method to evaluate potential energies. State-selective cross sections are calculated using fully quantum and semi-classical molecular-orbital close coupling (MOCC) methods in the adiabatic representation. Detailed results will be presented at the conference.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.
Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André
2016-08-12
Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.
Vortex lattice prediction of subsonic aerodynamics of hypersonic vehicle concepts
NASA Technical Reports Server (NTRS)
Pittman, J. L.; Dillon, J. L.
1977-01-01
The vortex lattice method introduced by Lamar and Gloss (1975) was applied to the prediction of subsonic aerodynamic characteristics of hypersonic body-wing configurations. The reliability of the method was assessed through comparison of the calculated and observed aerodynamic performances of two National Hypersonic Flight Research Facility craft at Mach 0.2. The investigation indicated that a vortex lattice model involving 120 or more panel elements can give good results for the lift and induced drag coefficients of the craft, as well as for the pitching moment at angles of attack below 10 to 15 deg. Automated processes for calculating the local slopes of mean-camber surfaces may also render the method suitable for use in preliminary design phases.
Extended quantum jump description of vibronic two-dimensional spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, Julian; Falge, Mirjam; Keß, Martin
2015-06-07
We calculate two-dimensional (2D) vibronic spectra for a model system involving two electronic molecular states. The influence of a bath is simulated using a quantum-jump approach. We use a method introduced by Makarov and Metiu [J. Chem. Phys. 111, 10126 (1999)] which includes an explicit treatment of dephasing. In this way it is possible to characterize the influence of dissipation and dephasing on the 2D spectra using a wave-function-based method. The latter scales with the number of stochastic runs and the number of system eigenstates included in the expansion of the wave packets to be propagated with the stochastic method, and provides an efficient method for the calculation of the 2D spectra.
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
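The Bayesian construction the abstract refers to can be sketched for the small-count case with known mean background b: with a flat prior on the source strength s >= 0, the posterior is proportional to the Poisson likelihood, and the upper limit is the point where the normalized posterior's integral reaches the desired confidence. This is a generic sketch of that idea, not the paper's exact tabulation:

```python
import math

def bayesian_upper_limit(n_obs, b, cl=0.90, s_max=50.0, steps=100000):
    """Bayesian upper limit on the Poisson source strength s, given
    n_obs observed counts and known mean background b, with a flat
    prior on s >= 0. Posterior(s) ~ exp(-(s + b)) * (s + b)^n_obs;
    the limit U satisfies integral_0^U posterior = cl (trapezoidal
    quadrature on a uniform grid)."""
    ds = s_max / steps
    post = []
    for i in range(steps + 1):
        s = i * ds
        post.append(math.exp(-(s + b)) * (s + b) ** n_obs)
    total = sum(0.5 * (post[i] + post[i + 1]) * ds for i in range(steps))
    acc = 0.0
    for i in range(steps):
        acc += 0.5 * (post[i] + post[i + 1]) * ds
        if acc >= cl * total:
            return (i + 1) * ds
    return s_max
```

For zero observed counts and zero background the posterior is exp(-s), so the 90% limit is ln(10) ≈ 2.30, a standard check; adding background tightens the limit when counts are observed, which is the behavior that motivates the Bayesian treatment for low-count data.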
Greene, Samuel M; Shan, Xiao; Clary, David C
2015-12-17
Quantum mechanical methods for calculating rate constants are often intractable for reactions involving many atoms. Semiclassical transition state theory (SCTST) offers computational advantages over these methods but nonetheless scales exponentially with the number of degrees of freedom (DOFs) of the system. Here we present a method with more favorable scaling, reduced-dimensionality SCTST (RD SCTST), that treats only a subset of DOFs of the system explicitly. We apply it to three H abstraction and exchange reactions for which two-dimensional potential energy surfaces (PESs) have previously been constructed and evaluated using RD quantum scattering calculations. We differentiated these PESs to calculate harmonic frequencies and anharmonic constants, which were then used to calculate cumulative reaction probabilities and rate constants by RD SCTST. This method yielded rate constants in good agreement with quantum scattering results. Notably, it performed well for a heavy-light-heavy reaction, even though it does not explicitly account for corner-cutting effects. Recent extensions to SCTST that improve its treatment of deep tunneling were also evaluated within the reduced-dimensionality framework. The success of RD SCTST in this study suggests its potential applicability to larger systems.
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.
Reed, George H; Poyner, Russell R
2015-01-01
An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
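The core of Fourier deconvolution can be sketched in a few lines: divide the transforms of the spectrum and the broadening function, then filter to tame the noise amplification. This is a generic illustration with synthetic data, not the implementation described in the paper; the hard frequency cutoff is a simplifying assumption (practical work uses smoother apodization windows):

```python
import numpy as np

def fourier_deconvolve(spectrum, broadening, cutoff):
    """Divide in the Fourier domain, then low-pass filter to limit the
    noise amplification inherent in deconvolution."""
    H = np.fft.fft(spectrum) / np.fft.fft(broadening)
    freqs = np.fft.fftfreq(spectrum.size)
    H[np.abs(freqs) > cutoff] = 0.0   # hard cutoff; smoother windows are common
    return np.real(np.fft.ifft(H))

# Synthetic test: a single sharp line blurred by a (wrapped) Gaussian lineshape.
n = 256
line = np.zeros(n); line[100] = 1.0
x = np.arange(n)
g = np.exp(-0.5 * (np.minimum(x, n - x) / 5.0) ** 2)
g /= g.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(line) * np.fft.fft(g)))
sharpened = fourier_deconvolve(blurred, g, cutoff=0.05)
```

The deconvolved peak is narrower and taller than the blurred one, which is the resolution-enhancement effect the abstract describes.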
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, WET calculations have required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and the stopping-power-ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
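For a radiologically thin target, one common deterministic approximation expresses WET through the density ratio and the mass-stopping-power ratio. The sketch below assumes that simple thin-target form with illustrative numbers for aluminium; it is not the authors' formula set:

```python
def wet_thin_target(thickness_cm, rho_target, rho_water, msp_ratio):
    """Thin-target water equivalent thickness (assumed form):
    WET = t * (rho_target / rho_water) * (mass stopping power ratio)."""
    return thickness_cm * (rho_target / rho_water) * msp_ratio

# Example: 1 cm of aluminium, assuming a mass-stopping-power ratio of ~0.81.
print(wet_thin_target(1.0, 2.70, 1.0, 0.81))   # ≈ 2.19 cm of water
```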
Delving Deeper: Transforming Shapes Physically and Analytically
ERIC Educational Resources Information Center
Rathouz, Margaret; Novak, Christopher; Clifford, John
2013-01-01
Constructing formulas "from scratch" for calculating geometric measurements of shapes--for example, the area of a triangle--involves reasoning deductively and drawing connections between different methods (Usnick, Lamphere, and Bright 1992). Visual and manipulative models also play a role in helping students understand the underlying…
NASA Technical Reports Server (NTRS)
Howlett, James T.; Bland, Samuel R.
1987-01-01
A method is described for calculating unsteady transonic flow with viscous interaction by coupling a steady integral boundary-layer code with an unsteady, transonic, inviscid small-disturbance computer code in a quasi-steady fashion. Explicit coupling of the equations together with viscous-inviscid iterations at each time step yields converged solutions with computer times about double those required to obtain inviscid solutions. The accuracy and range of applicability of the method are investigated by applying it to four AGARD standard airfoils. The first-harmonic components of both the unsteady pressure distributions and the lift and moment coefficients have been calculated. Comparisons with inviscid calculations and experimental data are presented. The results demonstrate that accurate solutions for transonic flows with viscous effects can be obtained for flows involving moderate-strength shock waves.
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
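The response-surface workflow, fitting a cheap surrogate to a handful of expensive code runs and then sampling the surrogate, can be sketched as follows. The quadratic model, the stand-in "code", and all numbers are hypothetical:

```python
import numpy as np

def expensive_code(x):
    """Stand-in for the long-running analysis code (hypothetical response)."""
    return 1.5 + 0.8 * x - 0.3 * x ** 2

# Fit a quadratic response surface from a handful of code evaluations.
xs = np.array([-1.0, 0.0, 0.5, 1.0])
ys = np.array([expensive_code(x) for x in xs])
surface = np.poly1d(np.polyfit(xs, ys, 2))

# Repetitive statistical sampling now hits the cheap surrogate, not the code.
rng = np.random.default_rng(1)
samples = surface(rng.normal(0.0, 0.2, 10000))
print(samples.mean())
```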
Ericsson, Jonas; Husmark, Teodor; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth
2016-01-01
To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research. PMID:27505418
Constrained variation in Jastrow method at high density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, J.C.; Bishop, R.F.; Irvine, J.M.
1976-11-01
A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, by using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles ("homework" problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per F³, and is found to give ground-state energies considerably lower than those of Pandharipande. (AIP)
NASA Astrophysics Data System (ADS)
Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin
1997-06-01
A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
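The factor-once, solve-many pattern the abstract describes can be illustrated with a small tridiagonal (banded) system: the LU factorization is computed a single time and reused at every time step. This is a generic sketch, not the Newmark-Beta FDTD operator itself:

```python
import numpy as np

def tridiag_lu(a, b, c):
    """LU factorisation of a tridiagonal matrix (sub a, diag b, super c)."""
    n = len(b)
    l = np.zeros(n - 1)
    u = np.zeros(n)
    u[0] = b[0]
    for i in range(1, n):
        l[i - 1] = a[i - 1] / u[i - 1]
        u[i] = b[i] - l[i - 1] * c[i - 1]
    return l, u

def tridiag_solve(l, u, c, rhs):
    """Reuse the factorisation: forward substitution, then back substitution."""
    n = len(u)
    y = np.zeros(n)
    y[0] = rhs[0]
    for i in range(1, n):
        y[i] = rhs[i] - l[i - 1] * y[i - 1]
    x = np.zeros(n)
    x[-1] = y[-1] / u[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - c[i] * x[i + 1]) / u[i]
    return x

# Factorise once; the coefficient matrix never changes while time stepping.
n = 100
a = -np.ones(n - 1); b = 2.5 * np.ones(n); c = -np.ones(n - 1)
l, u = tridiag_lu(a, b, c)
for step in range(50):                 # one cheap solve per time step
    rhs = np.sin(0.01 * step) * np.ones(n)
    x = tridiag_solve(l, u, c, rhs)
```

The same idea scales to general banded-sparse matrices via a sparse LU library; reordering schemes such as RCM shrink the bandwidth before factorization.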
Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents
NASA Astrophysics Data System (ADS)
Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.
2017-12-01
We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L(q) is obtained by taking the limit noise amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
NASA Technical Reports Server (NTRS)
Lyusternik, L. A.
1980-01-01
The mathematics involved in numerically solving the plane boundary value problem for the Laplace equation by the grid method is developed. The approximate solution of a boundary value problem for the domain of the Laplace equation by the grid method consists of finding values of u at the grid nodes that satisfy the difference equation at the interior nodes (u = Du) and certain boundary value conditions at the boundary nodes.
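A minimal sketch of the grid method for the Laplace equation, assuming Dirichlet boundary values and Jacobi-style sweeps in which each interior node is replaced by the mean of its four neighbours:

```python
import numpy as np

# Discrete Laplace equation on a square grid: each interior node must equal
# the mean of its four neighbours; boundary nodes hold prescribed values.
n = 30
u = np.zeros((n, n))
u[0, :] = 1.0                          # boundary condition on one edge
for _ in range(2000):                  # Jacobi sweeps toward the fixed point
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
print(u[n // 2, n // 2])
```

At convergence the interior satisfies the discrete mean-value property, the grid analogue of harmonicity.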
Better Than Counting: Density Profiles from Force Sampling
NASA Astrophysics Data System (ADS)
de las Heras, Daniel; Schmidt, Matthias
2018-05-01
Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting events of particle occurrence at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, therefore reducing the computation time.
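The force-sampling idea can be illustrated on an exactly solvable toy system, an ideal gas in gravity, where a sum rule of the form kT dρ/dz = F(z) lets the density be recovered by spatially integrating a force histogram. The setup below is a hypothetical illustration, not the authors' simulations:

```python
import numpy as np

# Ideal-gas particles in gravity, drawn from the exact Boltzmann profile.
rng = np.random.default_rng(2)
kT, mg = 1.0, 1.0
z = rng.exponential(kT / mg, size=1_000_000)
z = z[z < 5.0]                                # keep a finite slab 0 <= z < 5

n_bins = 50
edges = np.linspace(0.0, 5.0, n_bins + 1)
dz = edges[1] - edges[0]

# Standard counting estimator of the density profile.
rho_count, _ = np.histogram(z, bins=edges, density=True)

# Force sampling: histogram the local force density (here simply -mg per
# particle), then integrate the sum rule kT * d(rho)/dz = F(z) spatially,
# with a trapezoid rule anchored at the first bin centre.
f_hist, _ = np.histogram(z, bins=edges, weights=np.full(z.size, -mg))
F = f_hist / (z.size * dz)                    # mean force density per particle
rho_force = rho_count[0] + (np.cumsum(F) - 0.5 * F[0] - 0.5 * F) * dz / kT
```

In this toy the force weight is constant, so the two estimators are algebraically related; in a real simulation the force histogram collects the actual per-particle forces, and that is where the variance reduction comes from.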
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
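The contrast between a linear Taylor approximation and a closed-form solution of the sensitivity equation can be seen on a toy spring-mass system (ω = √(k/m), with sensitivity dω/dm = −ω/2m). This stand-in model and its numbers are illustrative, not the paper's cantilever beam:

```python
import numpy as np

k = 1000.0                       # stiffness of a hypothetical spring-mass system
m0 = 2.0                         # nominal design value of the mass
w0 = np.sqrt(k / m0)             # nominal natural frequency

def exact(m):
    return np.sqrt(k / m)

def taylor(m):
    # linear Taylor series about m0: w ~ w0 + (dw/dm)|m0 * (m - m0)
    return w0 - 0.5 * (w0 / m0) * (m - m0)

def deb(m):
    # treat the sensitivity relation dw/dm = -w / (2 m) as a differential
    # equation and solve it in closed form: w(m) = w0 * sqrt(m0 / m)
    return w0 * np.sqrt(m0 / m)

m = 3.0                          # a 50% perturbation of the design variable
print(exact(m), taylor(m), deb(m))
```

For this simple model the differential-equation approximation happens to be exact, while the linear Taylor estimate degrades with the size of the perturbation.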
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
Unraveling Higher Education's Costs.
ERIC Educational Resources Information Center
Gordon, Gus; Charles, Maria
1998-01-01
The activity-based costing (ABC) method of analyzing institutional costs in higher education involves four procedures: determining the various discrete activities of the organization; calculating the cost of each; determining the cost drivers; tracing cost to the cost objective or consumer of each activity. Few American institutions have used the…
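The four ABC procedures map directly onto a small calculation: cost each activity, derive a rate per cost driver, and trace costs to the consumer. The activities, driver volumes, and usage figures below are invented for illustration:

```python
# Activity-based costing: activities -> activity costs -> driver rates ->
# costs traced to a cost objective. All figures are invented for illustration.
activity_cost = {"registration": 120_000.0, "advising": 80_000.0}
driver_volume = {"registration": 6_000, "advising": 2_000}   # e.g. students served

rates = {a: activity_cost[a] / driver_volume[a] for a in activity_cost}

# Usage of each activity by one hypothetical department (the cost objective).
usage = {"registration": 450, "advising": 300}
dept_cost = sum(rates[a] * usage[a] for a in usage)
print(rates, dept_cost)
```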
Accounting for the Benefits of Database Normalization
ERIC Educational Resources Information Center
Wang, Ting J.; Du, Hui; Lehmann, Constance M.
2010-01-01
This paper proposes a teaching approach to reinforce accounting students' understanding of the concept of database normalization. Unlike the conceptual approach shown in most AIS textbooks, this approach involves calculations and reconciliations with which accounting students are familiar because the methods are frequently used in…
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
Campbell, Anne A.; Porter, Wallace D.; Katoh, Yutai; ...
2016-01-14
Silicon carbide is used as a passive post-irradiation temperature monitor because the irradiation defects will anneal out above the irradiation temperature. The irradiation temperature is determined by measuring a property change after isochronal annealing, i.e., lattice spacing, dimensions, electrical resistivity, thermal diffusivity, or bulk density. However, such methods are time-consuming since the steps involved must be performed in a serial manner. This work presents the use of thermal expansion from continuous dilatometry to calculate the SiC irradiation temperature, which is an automated process requiring minimal setup time. Analysis software was written that performs the calculations to obtain the irradiation temperature and removes possible user-introduced error while standardizing the analysis. In addition, this method has been compared to an electrical resistivity and isochronal annealing investigation, and the results revealed agreement of the calculated temperatures. These results show that dilatometry is a reliable and less time-intensive process for determining irradiation temperature from passive SiC thermometry.
NASA Astrophysics Data System (ADS)
Campbell, Anne A.; Porter, Wallace D.; Katoh, Yutai; Snead, Lance L.
2016-03-01
Silicon carbide is used as a passive post-irradiation temperature monitor because the irradiation defects will anneal out above the irradiation temperature. The irradiation temperature is determined by measuring a property change after isochronal annealing, i.e., lattice spacing, dimensions, electrical resistivity, thermal diffusivity, or bulk density. However, such methods are time-consuming since the steps involved must be performed in a serial manner. This work presents the use of thermal expansion from continuous dilatometry to calculate the SiC irradiation temperature, which is an automated process requiring minimal setup time. Analysis software was written that performs the calculations to obtain the irradiation temperature and removes possible user-introduced error while standardizing the analysis. This method has been compared to an electrical resistivity and isochronal annealing investigation, and the results revealed agreement of the calculated temperatures. These results show that dilatometry is a reliable and less time-intensive process for determining irradiation temperature from passive SiC thermometry.
NASA Astrophysics Data System (ADS)
Cheng, Lan; Wang, Fan; Stanton, John F.; Gauss, Jürgen
2018-01-01
A scheme is reported for the perturbative calculation of spin-orbit coupling (SOC) within the spin-free exact two-component theory in its one-electron variant (SFX2C-1e) in combination with the equation-of-motion coupled-cluster singles and doubles method. Benchmark calculations of the spin-orbit splittings in ²Π and ²P radicals show that the accurate inclusion of scalar-relativistic effects using the SFX2C-1e scheme extends the applicability of the perturbative treatment of SOC to molecules that contain heavy elements. The contributions from relaxation of the coupled-cluster amplitudes are shown to be relatively small; significant contributions from correlating the inner-core orbitals are observed in calculations involving third-row and heavier elements. The calculation of term energies for the low-lying electronic states of the PtH radical, which serves to exemplify heavy transition-metal containing systems, further demonstrates the quality that can be achieved with the pragmatic approach presented here.
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
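The two techniques, overall finite differences on the whole analysis versus analytical differentiation of the governing equations with finite-differenced coefficient matrices, can be contrasted on a toy static problem K(t)u = f. The 2-DOF stiffness model and design variable are hypothetical:

```python
import numpy as np

def stiffness(t):
    """Hypothetical 2-DOF stiffness matrix depending on design variable t."""
    return np.array([[2.0 * t, -t], [-t, t]])

f = np.array([0.0, 1.0])
t0, h = 1.0, 1e-6

# Overall finite difference: repeat the whole analysis at perturbed designs.
u_plus = np.linalg.solve(stiffness(t0 + h), f)
u_minus = np.linalg.solve(stiffness(t0 - h), f)
sens_fd = (u_plus - u_minus) / (2.0 * h)

# Semianalytical: differentiate K u = f, so K du/dt = -(dK/dt) u, with the
# coefficient-matrix derivative approximated by finite differences.
u0 = np.linalg.solve(stiffness(t0), f)
dK = (stiffness(t0 + h) - stiffness(t0 - h)) / (2.0 * h)
sens_sa = np.linalg.solve(stiffness(t0), -dK @ u0)
print(sens_fd, sens_sa)
```

In a reduced-basis transient setting the same two patterns apply, with the modal equations of motion in place of the static system.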
Wave vector modification of the infinite order sudden approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sachs, J.G.; Bowman, J.M.
1980-10-15
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |nf − ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.
Wave vector modification of the infinite order sudden approximation
NASA Astrophysics Data System (ADS)
Sachs, Judith Grobe; Bowman, Joel M.
1980-10-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |nf − ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.
Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.
Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf
2013-05-28
Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations of the protein-ligand complex with an explicit solvent to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different both in absolute and relative terms, showing that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different, with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol⁻¹ after 2 ps simulations. Minimisations show even slower convergence and there are strong indications that the energies obtained from minimised structures are different from those obtained by MD.
The effective molarity (EM)--a computational approach.
Karaman, Rafik
2010-08-01
The effective molarities (EM) for 12 intramolecular SN2 processes involving the formation of substituted aziridines and substituted epoxides were computed using ab initio and DFT calculation methods. A strong correlation was found between the calculated effective molarity and the experimentally determined values. This result could open a door to obtaining EM values for intramolecular processes that are difficult to determine experimentally. Furthermore, the calculation results reveal that the driving forces for ring-closing reactions in the two different systems are proximity orientation of the nucleophile to the electrophile and the ground strain energies of the products and the reactants. Copyright 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongxing; Fang, Hengrui; Miller, Mitchell D.
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Calculating work in weakly driven quantum master equations: Backward and forward equations
NASA Astrophysics Data System (ADS)
Liu, Fei
2016-01-01
I present a technical report indicating that the two methods used for calculating characteristic functions for the work distribution in weakly driven quantum master equations are equivalent. One involves applying the notion of quantum jump trajectory [Phys. Rev. E 89, 042122 (2014), 10.1103/PhysRevE.89.042122], while the other is based on two energy measurements on the combined system and reservoir [Silaev et al., Phys. Rev. E 90, 022103 (2014), 10.1103/PhysRevE.90.022103]. These represent backward and forward methods, respectively, which adopt a very similar approach to that of the Kolmogorov backward and forward equations used in classical stochastic theory. The microscopic basis for the former method is also clarified. In addition, a previously unnoticed equality related to the heat is also revealed.
Exotic and excited-state radiative transitions in charmonium from lattice QCD
Dudek, Jozef J.; Edwards, Robert G.; Thomas, Christopher E.
2009-05-01
We compute, for the first time using lattice QCD methods, radiative transition rates involving excited charmonium states, states of high spin and exotics. Utilizing a large basis of interpolating fields we are able to project out various excited state contributions to three-point correlators computed on quenched anisotropic lattices. In the first lattice QCD calculation of the exotic 1⁻⁺ ηc1 radiative decay, we find a large partial width Γ(ηc1 → J/ψ γ) ∼ 100 keV. We find clear signals for electric dipole and magnetic quadrupole transition form factors in χc2 → J/ψ γ, calculated for the first time in this framework, and study transitions involving excited ψ and χc1,2 states. We calculate hindered magnetic dipole transition widths without the sensitivity to assumptions made in model studies and find statistically significant signals, including a non-exotic vector hybrid candidate Yhyb? → η…
Exploratory Lattice QCD Study of the Rare Kaon Decay K⁺ → π⁺νν̄.
Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T
2017-06-23
We report a first, complete lattice QCD calculation of the long-distance contribution to the K⁺ → π⁺νν̄ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and is being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
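The Monte Carlo approach described above can be sketched in a few lines: each non-sampling source of error is represented by a distribution rather than a point value, and repeated draws propagate all of them through the calculation at once. The error sources and ranges below are invented placeholders, not the paper's foodborne-illness inputs:

```python
import random

def incidence_estimate(rng):
    # Hypothetical error sources, each modeled as a distribution rather
    # than a point value (ranges are illustrative, not from the paper).
    cases_reported = rng.uniform(0.8e6, 1.2e6)  # surveillance count
    underreport = rng.uniform(5, 15)            # unreported-case multiplier
    misclass = rng.uniform(0.9, 1.1)            # misclassification correction
    return cases_reported * underreport * misclass

def monte_carlo(n=50_000, seed=0):
    rng = random.Random(seed)
    draws = sorted(incidence_estimate(rng) for _ in range(n))
    # Median and a 95% uncertainty interval from the simulated distribution.
    return draws[n // 2], draws[int(0.025 * n)], draws[int(0.975 * n)]

median, low, high = monte_carlo()
```

Reporting the resulting interval, rather than a single multiplied-out number, is exactly the honesty about precision the abstract argues for.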
COMPARISON OF CONVENTIONAL AND PROGRAMED INSTRUCTION IN TEACHING AVIONICS FUNDAMENTALS.
ERIC Educational Resources Information Center
LONGO, ALEXANDER A.; MAYO, G. DOUGLAS
THIS STUDY, PART OF A SERIES INVOLVING A VARIETY OF COURSE CONTENT AND TRAINING CONDITIONS, COMPARED PROGRAMED INSTRUCTION WITH CONVENTIONAL INSTRUCTION TO GAIN INFORMATION ABOUT THE GENERAL UTILITY OF PROGRAMED METHODS. THE PERFORMANCE OF 200 NAVY TRAINEES TAKING 26 HOURS OF CONVENTIONAL INSTRUCTION IN ELECTRICAL CALCULATIONS, DIRECT CURRENT…
Reducing maintenance costs in agreement with CNC machine tools reliability
NASA Astrophysics Data System (ADS)
Ungureanu, A. L.; Stan, G.; Butunoi, P. A.
2016-08-01
Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called Consequence of Failure Analysis (CFA), is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don E.; Marshall, William J.; Wagner, John C.
The U.S. Nuclear Regulatory Commission (NRC) Division of Spent Fuel Storage and Transportation recently issued Interim Staff Guidance (ISG) 8, Revision 3. This ISG provides guidance for burnup credit (BUC) analyses supporting transport and storage of pressurized water reactor (PWR) fuel in casks. Revision 3 includes guidance for addressing validation of criticality (k_eff) calculations crediting the presence of a limited set of fission products and minor actinides (FP&MA). Based on previous work documented in NUREG/CR-7109, recommendation 4 of ISG-8, Rev. 3, includes a recommendation to use 1.5 or 3% of the FP&MA worth to conservatively cover the bias due to the specified FP&MAs. This bias is supplementary to the bias and bias uncertainty resulting from validation of k_eff calculations for the major actinides in SNF and does not address extension to actinides and fission products beyond those identified herein. The work described in this report involves comparison of FP&MA worths calculated using SCALE and MCNP with ENDF/B-V, -VI, and -VII based nuclear data and supports use of the 1.5% FP&MA worth bias when either SCALE or MCNP is used for criticality calculations, provided the other conditions of recommendation 4 are met. The method used in this report may also be applied to demonstrate the applicability of the 1.5% FP&MA worth bias to other codes using ENDF/B-V, -VI, or -VII based nuclear data. The method involves use of the applicant's computational method to generate FP&MA worths for a reference SNF cask model using specified spent fuel compositions. The applicant's FP&MA worths are then compared to reference values provided in this report. The applicant's FP&MA worths should not exceed the reference results by more than 1.5% of the reference FP&MA worths.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2002-01-01
A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.
NASA Astrophysics Data System (ADS)
Bekas, C.; Curioni, A.
2010-06-01
Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and single processor performance properties are much needed. In this paper we present block versions of the Gram-Schmidt method and we show that they are excellent candidates for our purposes. We compare the new approach with the state of the art practice in planewave based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P Supercomputer. The new method achieves excellent sustained performance that surpasses 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32 768 compute cores), while it enables a more than twofold decrease in run time when compared with the best competing methodology.
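As a rough illustration of the idea (a minimal sketch, not the authors' Blue Gene implementation), a block Gram-Schmidt pass can be organized so that almost all the work is matrix-matrix products, which is what gives the block version its scalability and single-processor performance:

```python
import numpy as np

def block_gram_schmidt(V, block_size):
    """Orthonormalize the columns of V block-by-block.

    Each block is first orthogonalized against all previously processed
    blocks via matrix-matrix products (the BLAS-3 kernel that dominates
    the cost), then orthonormalized internally with a QR factorization.
    A second projection pass improves numerical orthogonality.
    """
    n, m = V.shape
    Q = np.zeros((n, m))
    for s in range(0, m, block_size):
        e = min(s + block_size, m)
        B = V[:, s:e].copy()
        if s > 0:
            B -= Q[:, :s] @ (Q[:, :s].T @ B)  # project out earlier blocks
            B -= Q[:, :s] @ (Q[:, :s].T @ B)  # reorthogonalization pass
        Q[:, s:e], _ = np.linalg.qr(B)
    return Q

rng = np.random.default_rng(0)
V = rng.standard_normal((200, 24))
Q = block_gram_schmidt(V, block_size=8)
err = np.linalg.norm(Q.T @ Q - np.eye(24))  # deviation from orthonormality
```

Larger blocks trade a little numerical robustness for more BLAS-3 work per synchronization, which is the relevant knob on massively parallel machines.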
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
Dračínský, Martin; Buděšínský, Miloš; Warżajtis, Beata; Rychlewska, Urszula
2012-01-12
Selected guaianolide type sesquiterpene lactones were studied combining solution and solid-state NMR spectroscopy with theoretical calculations of the chemical shifts in both environments and with the X-ray data. The experimental (1)H and (13)C chemical shifts in solution were successfully reproduced by theoretical calculations (with the GIAO method and DFT B3LYP 6-31++G**) after geometry optimization (DFT B3LYP 6-31G**) in vacuum. The GIPAW method was used for calculations of solid-state (13)C chemical shifts. The studied cases involved two polymorphs of helenalin, two pseudopolymorphs of 6α-hydroxydihydro-aromaticin and two cases of multiple asymmetric units in crystals: one in which the symmetry-independent molecules were connected by a series of hydrogen bonds (geigerinin) and the other in which the symmetry-independent molecules, deprived of any specific intermolecular interactions, differed in the conformation of the side chain (badkhysin). Geometrically different molecules present in the crystal lattices could be easily distinguished in the solid-state NMR spectra. Moreover, the experimental differences in the (13)C chemical shifts corresponding to nuclei in different polymorphs or in geometrically different molecules were nicely reproduced with the GIPAW calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, G.A.; Pack, R.T.
1978-02-15
A simple, direct derivation of the rotational infinite order sudden (IOS) approximation in molecular scattering theory is given. Connections between simple scattering amplitude formulas, choice of average partial wave parameter, and magnetic transitions are reviewed. Simple procedures for calculating cross sections for specific transitions are discussed and many older model formulas are given clear derivations. Total (summed over rotation) differential, integral, and transport cross sections, useful in the analysis of many experiments involving nonspherical molecules, are shown to be exceedingly simple: They are just averages over the potential angle of cross sections calculated using simple structureless spherical particle formulas and programs. In the case of vibrationally inelastic scattering, the IOSA, without further approximation, provides a well-defined way to get fully three dimensional cross sections from calculations no more difficult than collinear calculations. Integral, differential, viscosity, and diffusion cross sections for He-CO2 obtained from the IOSA and a realistic intermolecular potential are calculated as an example and compared with experiment. Agreement is good for the complete potential but poor when only its spherical part is used, so that one should never attempt to treat this system with a spherical model. The simplicity and accuracy of the IOSA make it a viable method for routine analysis of experiments involving collisions of nonspherical molecules.
Enhanced automated spiral bevel gear inspection
NASA Technical Reports Server (NTRS)
Frint, Harold K.; Glasow, Warren
1992-01-01
Presented here are the results of a manufacturing and technology program to define, develop, and evaluate an enhanced inspection system for spiral bevel gears. The method uses a multi-axis coordinate measuring machine which maps the working surface of the tooth and compares it with nominal reference values stored in the machine's computer. The enhanced technique features a means for automatically calculating corrective grinding machine settings, involving both first and second order changes, to control the tooth profile to within specified tolerance limits. This enhanced method eliminates the subjective decision making involved in the tooth patterning method, still in use today, which compares contact patterns obtained when the gear is set to run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.
Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.
2017-01-01
In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010 the MPAM-R was translated into Portuguese and a validation was performed; however, its psychometric properties were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the scores on the items of the factor. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, should have greater weight in the factor score, while items with lower factor loadings have less weight. The aims of the present study were to translate and validate a Portuguese version of the MPAM-R and to investigate agreement between two methods of calculating factor scores. Three hundred volunteers who had been involved in physical activity programs for at least 6 months were recruited. Confirmatory Factor Analysis of the 30 items indicated that this version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis and conceptually supported the five factors of the original proposal. When the two methods of calculating factor scores were compared, only the "Enjoyment" and "Appearance" factors showed agreement between methods. Thus, the Portuguese version of the MPAM-R can be used in the Brazilian context, and the new proposal for calculating factor scores seems promising. PMID:28293203
Pellis, Lorenzo; Ball, Frank; Trapman, Pieter
2012-01-01
The basic reproduction number R0 is one of the most important quantities in epidemiology. However, for epidemic models with explicit social structure involving small mixing units such as households, its definition is not straightforward and a wealth of other threshold parameters has appeared in the literature. In this paper, we use branching processes to define R0, we apply this definition to models with households or other more complex social structures and we provide methods for calculating it. PMID:22085761
New determination of the fine structure constant from the electron g value and QED.
Gabrielse, G; Hanneke, D; Kinoshita, T; Nio, M; Odom, B
2006-07-21
Quantum electrodynamics (QED) predicts a relationship between the dimensionless magnetic moment of the electron (g) and the fine structure constant (alpha). A new measurement of g using a one-electron quantum cyclotron, together with a QED calculation involving 891 eighth-order Feynman diagrams, determine alpha(-1)=137.035 999 710 (96) [0.70 ppb]. The uncertainties are 10 times smaller than those of nearest rival methods that include atom-recoil measurements. Comparisons of measured and calculated g test QED most stringently, and set a limit on internal electron structure.
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents the simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models: an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the Rayleigh waves' beam directivity. For a specific sensor configuration, Lorentz forces are calculated using the EM analytical method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driven point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line-coil on the Rayleigh waves' beam directivity is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
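To illustrate what numerical Laplace transform inversion involves, here is the Gaver-Stehfest algorithm, chosen as a stand-in since the abstract does not name the inversion method it uses, applied to a transform with a known inverse rather than a curing-curve transform:

```python
from math import factorial, log, exp

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_1..V_N (N must be even)."""
    M = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** M * factorial(2 * j)
                  / (factorial(M - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + M) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from real-axis samples of its transform F(s)."""
    V = stehfest_coeffs(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Sanity check on a known pair: F(s) = 1/(s+1) inverts to f(t) = exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The appeal matches the paper's point: once an inverter like this exists, trying a different model only requires supplying a different transform F, with no new integration code.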
Santos, Eliane Macedo Sobrinho; Santos, Hércules Otacílio; Dos Santos Dias, Ivoneth; Santos, Sérgio Henrique; Batista de Paula, Alfredo Maurício; Feltenberger, John David; Sena Guimarães, André Luiz; Farias, Lucyana Conceição
2016-01-01
Pathogenesis of odontogenic tumors is not well known, so it is important to identify the genetic deregulations and molecular alterations involved. This study aimed to investigate, through bioinformatic analysis, the possible genes involved in the pathogenesis of ameloblastoma (AM) and keratocystic odontogenic tumor (KCOT). Genes involved in the pathogenesis of AM and KCOT were identified in GeneCards. The gene list was expanded, and the gene interaction network was mapped using the STRING software. The "weighted number of links" (WNL) was calculated to identify "leader genes" (highest WNL). Genes were ranked by the K-means method and the Kruskal-Wallis test was used (P<0.001). A total interaction score (TIS) was also calculated using all interaction data generated by the STRING database, in order to assess global connectivity for each gene. The topological and ontological analyses were performed using Cytoscape software and the BinGO plugin. Literature review data were used to corroborate the bioinformatics findings. CDK1 was identified as the leader gene for AM; for KCOT, the leader genes were PCNA and TP53. Both tumors exhibit a power law behavior. Our topological analysis suggested leader genes possibly important in the pathogenesis of AM and KCOT, based on the clustering coefficient calculated for both odontogenic tumors (0.028 for AM, zero for KCOT). The results obtained in the scatter diagram suggest an important relationship of these genes with the molecular processes involved in AM and KCOT. Ontological analysis for AM and KCOT demonstrated different mechanisms. The bioinformatics analyses were corroborated by the literature review. These results suggest promising genes for a better understanding of the pathogenesis of AM and KCOT.
Time on Your Hands: Modeling Time
ERIC Educational Resources Information Center
Finson, Kevin; Beaver, John
2007-01-01
Building physical models relative to a concept can be an important activity to help students develop and manipulate abstract ideas and mental models that often prove difficult to grasp. One such concept is "time". A method for helping students understand the cyclical nature of time involves the construction of a Time Zone Calculator through a…
The Use of Binary Search Trees in External Distribution Sorting.
ERIC Educational Resources Information Center
Cooper, David; Lynch, Michael F.
1984-01-01
Suggests a new method of external distribution sorting, called tree partitioning, that involves use of a binary tree to split an incoming file into successively smaller partitions for internal sorting. The number of disc accesses during a tree-partitioning sort was calculated in a simulation using files extracted from British National Bibliography catalog files. (19…
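The tree-partitioning idea can be sketched as follows: pivot keys drawn from the incoming file act as a flattened binary search tree that routes each record into a partition small enough for an internal sort. The pivot-sampling scheme here is a placeholder, not the authors' design:

```python
from bisect import bisect_right

def tree_partition_sort(records, num_partitions=4):
    """Split the incoming file into ordered partitions via binary search
    against pivot keys (the array form of a balanced binary search tree),
    sort each partition internally, and concatenate."""
    # Crude pivot selection from a sample of the file (illustrative only).
    sample = sorted(records[:: max(1, len(records) // 32)])
    step = max(1, len(sample) // num_partitions)
    pivots = sample[step::step][: num_partitions - 1]
    partitions = [[] for _ in range(len(pivots) + 1)]
    for r in records:
        # bisect_right plays the role of descending the binary tree.
        partitions[bisect_right(pivots, r)].append(r)
    out = []
    for p in partitions:
        out.extend(sorted(p))  # the "internal sort" of each small partition
    return out
```

In a real external sort each partition would live on disc and only one would be in memory at a time; the routing step is what the simulated disc-access counts measure.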
Research on plasma turbulence involving binary particle collisions and collective effects
NASA Technical Reports Server (NTRS)
Sandri, G.
1972-01-01
Plasmas in which binary collisions are important are studied by means of nonadiabatic methods. Two- and three-body correlations are calculated to determine the one-particle distribution for the ionization model. The general dispersion analysis is summarized, and examples of the ionization model and of the static fluctuations are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-26
..., particulates, carbon monoxide and sulfur dioxide. The revision involves the deletion of obsolete, the adoption... Regulation refer to Colorado's Regulation 1. (vi) The initials SO2 mean or refer to sulfur dioxide, HC mean... modifies the method for calculating compliance with emission limits for petroleum refining and cement...
Calculating excess lifetime risk in relative risk models.
Vaeth, M; Pierce, D A
1990-01-01
When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate. PMID:2269245
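The kind of life-table computation involved can be illustrated with a toy calculation: under an age-constant relative risk model, the cancer hazard is multiplied by (1 + ERR) from the end of a latent period onward, and competing causes of death enter through the survival column. All hazards below are invented for illustration; they are not the A-bomb cohort rates:

```python
def lifetime_risk(err=0.0, max_age=100, latency=10, age_at_exposure=30):
    """Toy life-table calculation of lifetime cancer mortality risk.

    err is the age-constant excess relative risk applied to the cancer
    hazard from age_at_exposure + latency onward; other-cause mortality
    acts as a competing risk through the survival column.
    """
    surv = 1.0   # probability of being alive at the start of each age
    risk = 0.0   # cumulative probability of dying of cancer
    for age in range(max_age):
        h_cancer = 2e-4 * 1.03 ** age   # illustrative baseline cancer hazard
        h_other = 5e-4 * 1.07 ** age    # illustrative all-other-cause hazard
        if age >= age_at_exposure + latency:
            h_cancer *= 1.0 + err
        risk += surv * h_cancer
        surv *= 1.0 - h_cancer - h_other
    return risk

baseline = lifetime_risk(err=0.0)
exposed = lifetime_risk(err=0.5)
excess = exposed - baseline   # excess lifetime risk from the exposure
```

Varying the latency, the age at exposure, or the shape of the hazards in this loop reproduces, in miniature, the sensitivity questions the paper examines.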
Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Thomas, James L.
2009-01-01
An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear-response and frozen-electron-density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair-potentials and the parameterization protocol involves only calculations on properly chosen subsystems. They apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, G. Barratt, E-mail: barratt@mit.edu
Franck-Condon vibrational overlap integrals for the $\tilde{A}^1A_u \to \tilde{X}^1\Sigma_g^+$ transition in acetylene have been calculated in full dimension in the harmonic normal mode basis. The calculation uses the method of generating functions first developed for polyatomic Franck-Condon factors by Sharp and Rosenstock [J. Chem. Phys. 41(11), 3453–3463 (1964)], and previously applied to acetylene by Watson [J. Mol. Spectrosc. 207(2), 276–284 (2001)] in a reduced-dimension calculation. Because the transition involves a large change in the equilibrium geometry of the electronic states, two different types of corrections to the coordinate transformation are considered to first order: corrections for axis-switching between the Cartesian molecular frames and corrections for the curvilinear nature of the normal modes at large amplitude. The angular factor in the wave function for the out-of-plane component of the trans bending mode, $\nu_4''$, is treated as a rotation, which results in an Eckart constraint on the polar coordinates of the bending modes. To simplify the calculation, the other degenerate bending mode, $\nu_5''$, is integrated in the Cartesian basis and later transformed to the constrained polar coordinate basis, restoring the conventional $v$ and $l$ quantum numbers. An updated $\tilde{A}$-state harmonic force field obtained recently in the R. W. Field research group is evaluated. The results for transitions involving the gerade vibrational modes are in qualitative agreement with experiment. Calculated results for transitions involving ungerade modes are presented in Paper II of this series [G. B. Park, J. H. Baraban, and R. W. Field, "Full dimensional Franck-Condon factors for the acetylene $\tilde{A}^1A_u \to \tilde{X}^1\Sigma_g^+$ transition. II. Vibrational overlap factors for levels involving excitation in ungerade modes," J. Chem. Phys. 141, 134305 (2014)].
A Comparison of Three Theoretical Methods of Calculating Span Load Distribution on Swept Wings
NASA Technical Reports Server (NTRS)
VanDorn, Nicholas H.; DeYoung, John
1947-01-01
Three methods for calculating span load distribution, those developed by V. M. Falkner, Wm. Mutterperl, and J. Weissinger, have been applied to five swept wings. The angles of sweep ranged from -45 degrees to +45 degrees. These methods were examined to establish their relative accuracy and ease of application. Experimentally determined loadings were used as a basis for judging accuracy. For the convenience of readers, the computing forms and all information requisite to their application are included in appendixes. From the analysis it was found that the Weissinger method would be best suited to an over-all study of the effects of plan form on the span loading and associated characteristics of wings. The method gave good, but not best, accuracy and involved by far the least computing effort. The Falkner method gave the best accuracy but at a considerable expense in computing effort and hence appeared to be most useful for a detailed study of a specific wing. The Mutterperl method offered no advantages in accuracy or facility over either of the other methods and hence is not recommended for use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
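The control-variate mechanism can be demonstrated on a textbook problem (estimating E[exp(U)] for uniform U, rather than a transport PQI): a correlated quantity with a known mean is used to cancel most of the sampling noise.

```python
import random
from math import e

def mc_control_variate(n=50_000, seed=1):
    """Estimate E[exp(U)], U ~ Uniform(0,1), with U as the control variate.

    U has known mean 1/2 and is strongly correlated with exp(U); the
    optimally scaled deviation of its sample mean from 1/2 is subtracted
    to cancel sampling noise in the primary estimate.
    """
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    ys = [e ** u for u in us]
    y_bar = sum(ys) / n
    u_bar = sum(us) / n
    # Optimal coefficient beta = Cov(Y, U) / Var(U), estimated from the sample.
    cov = sum((y - y_bar) * (u - u_bar) for y, u in zip(ys, us)) / n
    var = sum((u - u_bar) ** 2 for u in us) / n
    beta = cov / var
    plain = y_bar                                # ordinary Monte Carlo estimate
    controlled = y_bar - beta * (u_bar - 0.5)    # control-variate estimate
    return plain, controlled

plain, controlled = mc_control_variate()  # exact answer is e - 1
```

In the report's setting, the control variate would be a cheap coarse-mesh (or simplified-model) PQI whose mean is known or computable accurately, playing the role U plays here.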
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.
Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
Multiscale examination and modeling of electron transport in nanoscale materials and devices
NASA Astrophysics Data System (ADS)
Banyai, Douglas R.
For half a century the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs get smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues which will bring an end to this incredible streak of exponential improvement of this type of transistor device, after which future improvements will have to come from employing fundamentally different transistor architecture rather than fine tuning and miniaturizing the metal-oxide-semiconductor field effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. For investigating the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device---sometimes consisting of hundreds of individual particles---and watch as a device 'turns on' and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra. 
This method combines an established method for approximating the geometric variation of the electronic density of states, with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
Minimum current principle and variational method in theory of space charge limited flow
NASA Astrophysics Data System (ADS)
Rokhlenko, A.
2015-10-01
In the spirit of the principle of least action, when a perturbation is applied to a physical system, the system reacts by modifying its state to "agree" with the perturbation through a minimal change of its initial state. In particular, electron field emission should produce the minimum current consistent with the boundary conditions. This current can be found theoretically by solving the corresponding equations using different techniques. We apply here the variational method for the current calculation, which can be quite effective even with a short set of trial functions. Progress toward a better result can be monitored through the total current, which should decrease when we are on the right track. Here, we present only illustrations for simple geometries of devices with electron flow. The development of these methods can be useful when the emitter and/or anode shapes make standard approaches difficult to apply. Although direct numerical calculations, including the particle-in-cell technique, are very effective, theoretical calculations can provide important insight into the general features of flow formation and can sometimes even be realized by simpler routines.
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
Stresses and elastic constants of crystalline sodium, from molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.
1985-02-01
The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages and to test for symmetry. 45 refs., 10 figs., 4 tabs.
Yang, Y Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin
2016-03-07
In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamics properties using molecular dynamics simulations, we developed and implemented a sampling strategy by combining the metadynamics with (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked this method in the calculation of ϕ - ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw
Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncated error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.
Quantifying cause-related mortality by weighting multiple causes of death
Moreno-Betancur, Margarita; Lamarche-Vadel, Agathe; Rey, Grégoire
2016-01-01
Objective: To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods: We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned, and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings: On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion: A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. PMID:27994280
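As a hedged illustration of the weighting idea: the sketch below uses uniform weights over all mentioned causes (only one of the three schemes the paper studies) and invented certificate data. The key property it demonstrates is that the weights on each certificate sum to 1, so every death still counts exactly once.

```python
from collections import Counter

def underlying_cause_counts(certificates):
    """Classic method: each death is assigned entirely to its underlying
    cause (taken here to be the first cause listed on the certificate)."""
    return Counter(causes[0] for causes in certificates)

def weighted_cause_counts(certificates):
    """Multiple-cause method with uniform weights: the weights on each
    certificate sum to 1, so the total number of deaths is unchanged."""
    counts = Counter()
    for causes in certificates:
        weight = 1.0 / len(causes)
        for cause in causes:
            counts[cause] += weight
    return counts

# Hypothetical certificates: underlying cause first, contributing causes after.
certs = [["heart disease", "diabetes"],
         ["cancer"],
         ["heart disease", "renal failure", "diabetes"]]
classic = underlying_cause_counts(certs)
weighted = weighted_cause_counts(certs)
```

Causes such as "diabetes" that never appear as the underlying cause get zero weight under the classic method but a non-zero share under the weighted method, which is exactly the effect the paper reports for contributing conditions.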
Application of a boundary element method to the study of dynamical torsion of beams
NASA Technical Reports Server (NTRS)
Czekajski, C.; Laroze, S.; Gay, D.
1982-01-01
During dynamic torsion of beam elements, consideration of nonuniform warping effects involves a more general technical formulation than that of Saint-Venant. Nonclassical torsion constants appear in addition to the well known torsional rigidity. The adaptation of the boundary integral element method to the calculation of these constants for general section shapes is described. The suitability of the formulation is investigated with some examples of thick as well as thin walled cross sections.
Calculation of Moment Matrix Elements for Bilinear Quadrilaterals and Higher-Order Basis Functions
2016-01-06
These methods are known as boundary integral equation (BIE) methods, and the present study falls into this category. The numerical solution of the BIE is … iterated integrals. The inner integral involves the product of the free-space Green's function for the Helmholtz equation multiplied by an appropriate …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Wan-Yee, E-mail: josephlau@surgery.cuhk.edu.hk; Kennedy, Andrew S.; Department of Biomedical Engineering, North Carolina State University, Raleigh, NC
Purpose: Selective internal radiotherapy (SIRT) with yttrium-90 (⁹⁰Y) resin microspheres can improve the clinical outcomes for selected patients with inoperable liver cancer. This technique involves intra-arterial delivery of β-emitting microspheres into hepatocellular carcinomas or liver metastases while sparing uninvolved structures. Its unique mode of action, including both ⁹⁰Y brachytherapy and embolization of neoplastic microvasculature, necessitates activity planning methods specific to SIRT. Methods and Materials: A panel of clinicians experienced in ⁹⁰Y resin microsphere SIRT was convened to integrate clinical experience with the published data to propose an activity planning pathway for radioembolization. Results: Accurate planning is essential to minimize potentially fatal sequelae such as radiation-induced liver disease while delivering tumoricidal ⁹⁰Y activity. Planning methods have included empiric dosing according to degree of tumor involvement, empiric dosing adjusted for the body surface area, and partition model calculations using Medical Internal Radiation Dose principles. It has been recommended that at least two of these methods be compared when calculating the microsphere activity for each patient. Conclusions: Many factors inform ⁹⁰Y resin microsphere SIRT activity planning, including the therapeutic intent, tissue and vasculature imaging, tumor and uninvolved liver characteristics, previous therapies, and localization of the microsphere infusion. The influence of each of these factors has been discussed.
NASA Technical Reports Server (NTRS)
Young, J. W.; Schy, A. A.; Johnson, K. G.
1977-01-01
An analytical method has been developed for predicting critical control inputs for which nonlinear rotational coupling may cause sudden jumps in aircraft response. The analysis includes the effect of aerodynamics which are nonlinear in angle of attack. The method involves the simultaneous solution of two polynomials in roll rate, whose coefficients are functions of angle of attack and the control inputs. Results obtained using this procedure are compared with calculated time histories to verify the validity of the method for predicting jump-like instabilities.
Using pyramids to define local thresholds for blob detection.
Shneier, M
1983-03-01
A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
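The procedure described above can be sketched roughly as follows. The 2×2 averaging pyramid, the single-brightest-spot rule, and the use of the coarsest-level mean as the local threshold are all simplifying assumptions for illustration, not details from the paper.

```python
def halve(img):
    """Reduce resolution by averaging non-overlapping 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1]
              + img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def detect_blob(img, levels=2):
    """Find the brightest spot at the coarsest pyramid level, derive a
    simple local threshold there, and apply it to the corresponding
    region of the original image."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(halve(pyramid[-1]))
    top = pyramid[-1]
    # The brightest low-resolution pixel is the candidate blob.
    bi, bj = max(((i, j) for i in range(len(top)) for j in range(len(top[0]))),
                 key=lambda p: top[p[0]][p[1]])
    # Very simple local threshold: the mean of the coarsest level.
    thresh = sum(map(sum, top)) / (len(top) * len(top[0]))
    scale = 2 ** levels
    region = [(i, j)
              for i in range(bi * scale, (bi + 1) * scale)
              for j in range(bj * scale, (bj + 1) * scale)
              if img[i][j] > thresh]
    return (bi, bj), thresh, region

# Synthetic 8x8 image with a bright 4x4 blob in the top-left corner.
img = [[10 if i < 4 and j < 4 else 1 for j in range(8)] for i in range(8)]
spot, thresh, region = detect_blob(img, levels=2)
```

The low-resolution spot at (0, 0) maps back to the 4×4 top-left region of the original image, and the threshold computed at the coarse level segments exactly the bright pixels there.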
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves the iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Calculations are presented both for examples with exact solutions and for cases in which the additional condition is specified with random errors. The results show the high efficiency of the iterative conjugate gradient method for the numerical solution of this inverse problem.
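The abstract does not include the solver itself. As a minimal sketch of the conjugate gradient iteration it relies on, here applied to a generic symmetric positive-definite linear system rather than the actual discretized inverse problem:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A (given as
    a list of rows) by the conjugate gradient method."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 example: exact solution is x = (1/11, 7/11).
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations for an n-dimensional system, which is why it is a natural engine for the repeated linear solves in an iterative boundary-condition search.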
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
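A minimal sketch of the "at a glance" idea, under the simplifying assumption that the normalised position autocorrelation function (NPAF) of the trapped bead decays as exp(-t/τ) with τ proportional to the fluid viscosity at fixed trap stiffness. The helper names and the synthetic data below are illustrative, not from the paper.

```python
import math

def lag_at_1_over_e(times, npaf):
    """Linearly interpolate the lag time at which a decaying NPAF first
    falls to 1/e."""
    target = 1.0 / math.e
    for (t0, a0), (t1, a1) in zip(zip(times, npaf),
                                  zip(times[1:], npaf[1:])):
        if a0 >= target > a1:
            return t0 + (t1 - t0) * (a0 - target) / (a0 - a1)
    raise ValueError("NPAF does not cross 1/e in the given range")

def relative_viscosity(times, npaf_sample, npaf_solvent):
    """With tau proportional to viscosity at fixed trap stiffness, the
    ratio of 1/e lag times gives the relative viscosity directly."""
    return lag_at_1_over_e(times, npaf_sample) / lag_at_1_over_e(times, npaf_solvent)

# Synthetic NPAFs: solvent with tau = 0.1 s, sample with tau = 0.3 s,
# i.e. a relative viscosity of 3.
times = [i * 0.01 for i in range(1000)]
solvent = [math.exp(-t / 0.1) for t in times]
sample = [math.exp(-t / 0.3) for t in times]
eta_rel = relative_viscosity(times, sample, solvent)
```

Reading the viscosity off a single characteristic lag time is what makes the graphical analysis quick compared with fitting a full rheological model.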
Bettens, Ryan P A
2003-01-15
Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive, since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve preferred computational speeds, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
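A Python dict can stand in for the hash table at the heart of the GLCHS idea. The sketch below is a simplification (no companion linked list for ordered traversal, and contrast and energy chosen as the example statistics); it accumulates only the non-zero co-occurring pairs, so the statistics never touch zero probabilities:

```python
from collections import defaultdict

def cooccurrence_features(img, dr=0, dc=1):
    """Count co-occurring grey-level pairs at displacement (dr, dc) in a
    hash table, storing only non-zero entries, then derive two common
    texture features from the resulting probabilities."""
    counts = defaultdict(int)
    h, w = len(img), len(img[0])
    total = 0
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dr, j + dc
            if 0 <= i2 < h and 0 <= j2 < w:
                counts[(img[i][j], img[i2][j2])] += 1
                total += 1
    # Unlike a full GLCM scan, these sums visit only non-zero probabilities.
    contrast = sum((a - b) ** 2 * c / total for (a, b), c in counts.items())
    energy = sum((c / total) ** 2 for c in counts.values())
    return contrast, energy

contrast, energy = cooccurrence_features([[0, 0, 1],
                                          [1, 1, 0]])
```

For an image quantized to G grey levels, a full GLCM holds G² cells however sparse it is, whereas the hash table holds one entry per distinct observed pair, which is the source of the reported computational gains.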
DSMC simulations of shock tube experiments for the dissociation rate of nitrogen
NASA Astrophysics Data System (ADS)
Bird, G. A.
2012-11-01
The DSMC method has been used to simulate the flows associated with several experiments that led to predictions of the dissociation rate in nitrogen. One involved optical interferometry to determine the density behind a strong shock wave, and the other involved measurement of the shock-tube end-wall pressure after the reflection of a similar shock wave. DSMC calculations for the un-reflected shock wave were made with the older TCE model, which converts rate coefficients to reaction cross-sections; with the newer Q-K model, which predicts the rates; and with a set of reaction cross-sections for nitrogen dissociation from QCT calculations. A comparison of the resulting density profiles with the measured profile provides a test of the validity of the DSMC chemistry models. The DSMC reaction rates were sampled directly in the DSMC calculation, both far downstream, where the flow is in equilibrium, and in the non-equilibrium region immediately behind the shock. This permits a critical evaluation of the data reduction procedures that were employed to deduce the dissociation rate from the measured quantities.
Carballa, Marta; Omil, Francisco; Lema, Juan M
2007-02-01
Two different methods are proposed to perform the mass balance calculations of micropollutants in sewage treatment plants (STPs). The first method uses the measured data in both liquid and sludge phase and the second one uses the solid-water distribution coefficient (Kd) to calculate the concentrations in the sludge from those measured in the liquid phase. The proposed methodologies facilitate the identification of the main mechanisms involved in the elimination of micropollutants. Both methods are applied for determining mass balances of selected pharmaceutical and personal care products (PPCPs) and their results are discussed. In that way, the fate of 2 musks (galaxolide and tonalide), 3 pharmaceuticals (ibuprofen, naproxen, and sulfamethoxazole), and 2 natural estrogens (estrone and 17beta-estradiol) has been investigated along the different water and sludge treatment units of a STP. Ibuprofen, naproxen, and sulfamethoxazole are biologically degraded in the aeration tank (50-70%), while musks are equally sorbed to the sludge and degraded. In contrast, estrogens are not removed in the STP studied. About 40% of the initial load of pharmaceuticals passes through the plant unaltered, with the fraction associated to sludge lower than 0.5%. In contrast, between 20 and 40% of the initial load of musks leaves the plant associated to solids, with less than 10% present in the final effluent. The results obtained show that the conclusions concerning the efficiency of micropollutants removal in a particular STP may be seriously affected by the calculation method used.
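The second method above estimates sludge-phase concentrations from liquid-phase measurements via the distribution coefficient Kd. A hedged sketch with invented numbers follows; the unit choices and the "degraded by difference" closure are assumptions for illustration, not the paper's exact procedure.

```python
def sludge_concentration(c_liquid_ug_per_L, kd_L_per_kg):
    """Estimate the sorbed concentration (ug per kg of dry solids) from the
    measured liquid-phase concentration using the solid-water distribution
    coefficient Kd."""
    return kd_L_per_kg * c_liquid_ug_per_L

def mass_balance_fractions(c_in, c_out, flow_L_per_d, c_sludge, sludge_kg_per_d):
    """Split the influent load (ug/d) of a micropollutant into the fractions
    leaving in the effluent, sorbed to wasted sludge, and (by difference)
    degraded."""
    load_in = c_in * flow_L_per_d
    load_effluent = c_out * flow_L_per_d
    load_sludge = c_sludge * sludge_kg_per_d
    return {"effluent": load_effluent / load_in,
            "sludge": load_sludge / load_in,
            "degraded": (load_in - load_effluent - load_sludge) / load_in}

# Invented example: 2.0 ug/L in, 0.8 ug/L out, flow 1e6 L/d,
# Kd = 500 L/kg, 1000 kg/d of wasted sludge.
c_sludge = sludge_concentration(0.8, 500.0)
fractions = mass_balance_fractions(2.0, 0.8, 1.0e6, c_sludge, 1000.0)
```

Closing the balance this way is what lets a single set of liquid-phase measurements, plus Kd, apportion removal between sorption and degradation.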
NASA Technical Reports Server (NTRS)
Pegg, D. J.; Elston, S. B.; Griffin, P. M.; Forester, J. P.; Thoe, R. S.; Peterson, R. S.; Sellin, I. A.; Hayden, H. C.
1976-01-01
The beam-foil time-of-flight method has been used to investigate radiative lifetimes and transition rates involving allowed intrashell transitions within the L shell of highly ionized sulfur. The results for these transitions, which can be particularly correlation-sensitive, are compared with current calculations based upon multiconfigurational models.
ERIC Educational Resources Information Center
Satake, Eiki; Vashlishan Murray, Amy
2015-01-01
This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
Score Calculation in Informatics Contests Using Multiple Criteria Decision Methods
ERIC Educational Resources Information Center
Skupiene, Jurate
2011-01-01
The Lithuanian Informatics Olympiad is a problem solving contest for high school students. The work of each contestant is evaluated in terms of several criteria, where each criterion is measured according to its own scale (but the same scale for each contestant). Several jury members are involved in the evaluation. This paper analyses the problem…
NASA Astrophysics Data System (ADS)
Kleinman, Leonard
2001-03-01
The history of pseudopotentials from 1934 to the present time will be discussed. The speaker's personal involvement will be described but not to the neglect of the many others who have made huge contributions to the field. We end with the question, 'Is it possible that pseudopotential calculations could be more accurate than those made using the full potential augmented plane wave method?'.
Pretest Predictions for Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Sun; H. Yang; H.N. Kalia
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will develop during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) decisions regarding testing set-up and performance; (2) assessing how best to scale the test phenomena measured; and (3) validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.
Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering
Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.
2011-01-01
Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761
An Exact Formula for Calculating Inverse Radial Lens Distortions
Drap, Pierre; Lefèvre, Julien
2016-01-01
This article presents a new approach to calculating the inverse of radial distortions. The method provides a model of reverse radial distortion, currently modeled by a polynomial expression, by proposing another polynomial expression whose new coefficients are a function of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using a second polynomial expression deduced from the first, is attractive in terms of performance, reuse of existing software, and bridging between existing software tools that do not consider distortion from the same point of view. PMID:27258288
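A minimal numerical sketch of the idea, assuming a one-term distortion model r_d = r(1 + k1*r^2) rather than the paper's general formula: the truncated series inverse r ≈ r_d - k1*r_d^3 + 3*k1^2*r_d^5 (a standard power-series inversion) is checked against a Newton-iteration reference.

```python
# Illustrative one-term radial distortion and two ways to invert it.
def distort(r, k1):
    return r * (1.0 + k1 * r * r)

def undistort_newton(rd, k1, iters=20):
    # Reference inverse via Newton iteration on distort(r) - rd = 0.
    r = rd
    for _ in range(iters):
        f = distort(r, k1) - rd
        df = 1.0 + 3.0 * k1 * r * r
        r -= f / df
    return r

def undistort_series(rd, k1):
    # Truncated power-series inverse of r_d = r + k1*r^3:
    # r ~ rd - k1*rd**3 + 3*k1**2*rd**5, coefficients functions of k1 only.
    return rd - k1 * rd**3 + 3.0 * k1**2 * rd**5

k1, r_true = 0.05, 0.4
rd = distort(r_true, k1)
print(abs(undistort_newton(rd, k1) - r_true) < 1e-9)   # True
print(abs(undistort_series(rd, k1) - r_true) < 1e-4)   # True
```

The series inverse needs no iteration at run time, which is the performance and software-reuse advantage the abstract points to.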
Undergraduate paramedic students cannot do drug calculations
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculations issues’. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
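As context for the error taxonomy above, a typical prehospital calculation reduces to a single proportion once the equation is correctly formulated (the conceptual step the study found most error-prone). The function and the ampoule figures below are illustrative, not items from the study's questionnaire.

```python
# Hypothetical example of a common drug calculation: volume to draw up
# to deliver a desired dose from a stock ampoule.
def volume_to_draw_ml(desired_dose_mg, stock_dose_mg, stock_volume_ml):
    # Conceptual step: volume / stock_volume = desired_dose / stock_dose
    return desired_dose_mg / stock_dose_mg * stock_volume_ml

# 2.5 mg required from a 10 mg in 10 mL ampoule:
print(volume_to_draw_ml(2.5, 10.0, 10.0))  # 2.5 (mL)
```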
Esque, Jeremy; Cecchini, Marco
2015-04-23
The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.
NASA Astrophysics Data System (ADS)
Derevianko, Andrei; Porsev, Sergey G.
2005-03-01
We consider evaluation of matrix elements with the coupled-cluster method. Such calculations formally involve an infinite number of terms, and we devise a method of partial summation (dressing) of the resulting series. Our formalism is built upon an expansion of the product C†C of cluster amplitudes C into a sum of n-body insertions. We consider two types of insertions: particle (hole) line insertion and two-particle (two-hole) random-phase-approximation-like insertion. We demonstrate how to “dress” these insertions and formulate iterative equations. We illustrate the dressing equations in the case when the cluster operator is truncated at single and double excitations. Using univalent systems as an example, we upgrade coupled-cluster diagrams for matrix elements with the dressed insertions and highlight a relation to pertinent fourth-order diagrams. We illustrate our formalism with relativistic calculations of the hyperfine constant A(6s) and the 6s1/2-6p1/2 electric-dipole transition amplitude for the Cs atom. Finally, we augment the truncated coupled-cluster calculations with otherwise omitted fourth-order diagrams. The resulting analysis for Cs is complete through the fourth order of many-body perturbation theory and reveals an important role of triple and disconnected quadruple excitations.
NASA Astrophysics Data System (ADS)
Sangeetha, M.; Mathammal, R.
2018-02-01
The ionic cocrystals of 5-amino-2-naphthalene sulfonate · ammonium ions (ANSA-·NH4+) were grown by the slow evaporation method and examined in detail for pharmaceutical applications. The crystal structure and intermolecular interactions were studied from single-crystal X-ray diffraction analysis and the Hirshfeld surfaces. The 2D fingerprint plots displayed the inter-contacts possible in the ionic crystal. A computational DFT method was employed to determine the structural, physical and chemical properties. The molecular geometries obtained from the X-ray studies were compared with the optimized geometrical parameters calculated using the DFT/6-31 + G(d,p) method. The band gap energy calculated from the UV-Visible spectral analysis and the HOMO-LUMO energy gap are compared. The theoretical UV-Visible calculations helped in determining the type of electronic transition taking place in the title molecule. The maximum absorption bands and the transitions involved in the molecule indicate the possible drug reaction. Non-linear optical properties were characterized experimentally from SHG efficiency measurements, and the NLO parameters are also calculated from the optimized structure. The reactive sites within the molecule are detailed from the MEP surface maps. The molecular docking studies demonstrate the structure-activity of the ionic cocrystal as an anti-cancer drug candidate.
Petti, S; Renzini, G
1994-03-01
The percentage of anaerobic micro-organisms in the subgingival microflora represents a simple microbiological index which reflects not only the state of periodontal health but also the risks to it. The present study aimed to compare two different methods of calculating this index. The study was performed in 45 subjects with moderate gingivitis provoked by the previous application of dental fixtures anchored to both arches. A sample of subgingival microflora was collected from each patient at the level of the vestibular gingival sulcus of the first upper right molar. This was then vortexed, diluted and inoculated in three series of plates. Walker's culture medium was used. The total bacterial count was evaluated by incubating the first series of plates in anaerobiosis; the anaerobic bacterial count was calculated by subtracting from the total the count of facultative aerobic-anaerobic micro-organisms, which in turn was obtained using two methods: the first (method AE) consisted of incubating another series of plates in aerobiosis; the second (method M) involved incubating the last series of plates in anaerobiosis, with metronidazole added to the culture medium in a solution of 2.5 mg/l. The plates were then kept at 37 degrees C for seven days. The mean percentage of anaerobic micro-organisms, given by the percentage ratio between the anaerobic and total counts, for the 45 cases studied was as follows: using method AE, 57.8 +/- 26.3%; using method M, 40.2 +/- 27.2%. Both figures come close to that proposed and calculated using a much more sophisticated method by Slots, namely 41.5 +/- 19.2% in the event of gingivitis. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena
2017-09-01
The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and the two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice manifests computational work linear in L, O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.
Exact special twist method for quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro
2016-12-01
We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performances of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performances in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.
NASA Technical Reports Server (NTRS)
Bovina, T. A.; Zviagin, Y. V.; Markelov, N. V.; Chudetskiy, Y. V.
1986-01-01
A method is presented for calculating the heating and erosion of blunt bodies made of graphite in a high-enthalpy flow of dissociated air, assuming chemical equilibrium on the surface and taking account of the thermal effects of combustion and sublimation of graphite. The analysis involves the use of a finite difference scheme to solve an equation of unsteady heat conduction. Attention is given to the equilibrium vaporization of C, C2 and C3 molecules. The calculations agree well with experimental data for a wide range of temperatures and stagnation pressures.
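The unsteady heat-conduction step can be sketched with an explicit finite-difference scheme. The one-dimensional fragment below omits the surface chemistry (combustion and sublimation heat effects) that the paper couples to conduction, and all material numbers are placeholders, not graphite properties.

```python
# One-dimensional explicit (FTCS) sketch of unsteady heat conduction,
# T_t = alpha * T_xx, with a fixed hot-wall boundary standing in for the
# aerothermal surface condition.
def step(T, alpha, dx, dt):
    r = alpha * dt / dx**2               # stability requires r <= 0.5
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
    return Tn

T = [300.0] * 21
T[0] = 2000.0                            # heated-surface boundary condition [K]
for _ in range(100):
    T = step(T, alpha=1e-5, dx=1e-3, dt=0.04)
# after 100 steps, heat has diffused a few cells in from the heated face
```

The choice dt = 0.04 s keeps r = 0.4, inside the explicit-scheme stability limit; the real calculation would add surface recession and the vaporization species balance at the hot boundary.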
Analytic approximation for random muffin-tin alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, R.; Gray, L.J.; Kaplan, T.
1983-03-15
The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and relatively easy to implement.
On the mode I fracture analysis of cracked Brazilian disc using a digital image correlation method
NASA Astrophysics Data System (ADS)
Abshirini, Mohammad; Soltani, Nasser; Marashizadeh, Parisa
2016-03-01
Mode I fracture of a centrally cracked Brazilian disc was investigated experimentally using a digital image correlation (DIC) method. Experiments were performed on PMMA polymer specimens subjected to diametric-compression load. The displacement fields were determined by a correlation between the reference and deformed images captured before and during loading. The stress intensity factors were calculated from the displacement fields using Williams' equation and a least-squares algorithm. Parameters affecting the accuracy of the SIF calculation, such as the number of terms in Williams' equation and the region of analysis around the crack, are discussed. The DIC results were compared with numerical results available in the literature, and very good agreement between them was observed. By extending the tests up to the critical state, the mode I fracture toughness was determined by analyzing the image of the specimen captured at the moment before fracture. The results showed that digital image correlation is a reliable technique for the calculation of the fracture toughness of brittle materials.
Transient-Free Operations With Physics-Based Real-time Analysis and Control
NASA Astrophysics Data System (ADS)
Kolemen, Egemen; Burrell, Keith; Eggert, William; Eldon, David; Ferron, John; Glasser, Alex; Humphreys, David
2016-10-01
In order to understand and predict disruptions, the two most common methods currently employed in tokamak analysis are the time-consuming "kinetic EFITs," which are done offline with significant human involvement, and the search for correlations with global precursors using various parameterization techniques. We are developing automated "kinetic EFITs" at DIII-D to enable calculation of the stability as the plasma evolves close to the disruption. This allows us to quantify the probabilistic nature of the stability calculations and provides a stability metric for all possible linear perturbations to the plasma. This study also provides insight into how the control system can avoid the unstable operating space, which is critical for high-performance operations close to stability thresholds at ITER. A novel, efficient ideal stability calculation method and a new real-time CER acquisition system are being developed, and a new 77-core server has been installed on the DIII-D PCS to enable experimental use. Sponsored by US DOE under DE-SC0015878 and DE-FC02-04ER54698.
Branduardi, Davide; Faraldo-Gómez, José D
2013-09-10
The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D-mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.
Branduardi, Davide; Faraldo-Gómez, José D.
2014-01-01
The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D-mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.
PMID:24729762
Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J
2016-01-01
Background: Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence. Methods to calculate normative adherence: Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level. Three illustrative cases: Case A is an example of inhaled therapy under-prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence. Conclusion: Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness.
The value of these indices can be tested empirically in clinical trials in which there is careful definition of treatment regimens related to key patient characteristics, alongside accurate measurement of health outcomes. PMID:27284242
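The two adjustments described above can be sketched in a few lines; the daily-dose numbers below are invented for illustration and are not the paper's regimens.

```python
# Illustrative normative-adherence calculation: denominator raised to the
# recommended regimen, daily numerator capped so overuse cannot inflate it.
def normative_adherence(daily_doses_taken, prescribed_per_day, recommended_per_day):
    denominator = max(prescribed_per_day, recommended_per_day)   # denominator adjustment
    capped = [min(d, denominator) for d in daily_doses_taken]    # cap each day at 100%
    return 100.0 * sum(capped) / (denominator * len(daily_doses_taken))

# Overuse day (4 doses against a 2-dose regimen) counts as 100%, not 200%:
print(normative_adherence([4, 1, 2], prescribed_per_day=2, recommended_per_day=2))
```

With an under-prescribed regimen (1 dose/day prescribed, 2 recommended), perfect use of the prescription yields only 50% normative adherence, reproducing the effect described for Cases A and B.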
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves the sinusoidal and aleatory displacement by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the retrieved displacement using the C obtained with the proposed method is comparable to that obtained with joint estimation of C and α. Moreover, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
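For readers unfamiliar with the smoothing step, a minimal scalar Kalman filter for a random-walk state model is sketched below. It is not the paper's estimator (which operates on the unwrapped interferometric phase), and the noise variances q and r are arbitrary placeholders.

```python
# Minimal scalar Kalman filter sketch: random-walk state, noisy measurements.
def kalman_1d(measurements, q=1e-4, r=1e-2):
    x, p = 0.0, 1.0                      # initial state estimate and variance
    out = []
    for z in measurements:
        p += q                           # predict: random-walk process noise
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return out

# Fed a constant true displacement, the estimate converges to it.
est = kalman_1d([1.0] * 100)
```

The same predict/update structure, with the unwrapped phase as the measurement, is what suppresses the peak/valley mis-location errors mentioned in the abstract.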
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-19
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
Ab initio atomic recombination reaction energetics on model heat shield surfaces
NASA Technical Reports Server (NTRS)
Senese, Fredrick; Ake, Robert
1992-01-01
Ab initio quantum mechanical calculations on small hydration complexes involving the nitrate anion are reported. The self-consistent field method with accurate basis sets has been applied to compute completely optimized equilibrium geometries, vibrational frequencies, thermochemical parameters, and stable site labilities of complexes involving 1, 2, and 3 waters. The most stable geometries in the first hydration shell involve in-plane waters bridging pairs of nitrate oxygens with two equal and bent hydrogen bonds. A second extremely labile local minimum involves out-of-plane waters with a single hydrogen bond and lies about 2 kcal/mol higher. The potential in the region of the second minimum is extremely flat and qualitatively sensitive to changes in the basis set; it does not correspond to a true equilibrium structure.
Aerodynamic Performance Predictions of Single and Twin Jet Afterbodies
NASA Technical Reports Server (NTRS)
Carlson, John R.; Pao, S. Paul; Abdol-Hamid, Khaled S.; Jones, William T.
1995-01-01
The multiblock three-dimensional Navier-Stokes method PAB3D was utilized by the Component Integration Branch (formerly Propulsion Aerodynamics Branch) at the NASA-Langley Research Center in an international study sponsored by AGARD Working Group #17 for the assessment of the state-of-the-art of propulsion-airframe integration testing techniques and CFD prediction technologies. Three test geometries from ONERA involving fundamental flow physics and four geometries from NASA-LaRC involving realistic flow interactions of wing, body, tail, and jet plumes were chosen by the Working Group. An overview of results on four (1 ONERA and 3 LaRC) of the seven test cases is presented. External static pressures, integrated pressure drag and total drag were calculated for the Langley test cases and jet plume velocity profiles and turbulent viscous stresses were calculated for the ONERA test case. Only selected data from these calculations are presented in this paper. The complete data sets calculated by the participants will be presented in an AGARD summary report. Predicted surface static pressures compared favorably with experimental data for the Langley geometries. Predicted afterbody drag compared well with experiment. Predicted nozzle drag was typically low due to over-compression of the flow near the trailing edge. Total drag was typically high. Predicted jet plume quantities on the ONERA case compared generally well with data.
[Calculation of workers' health care costs].
Rydlewska-Liszkowska, Izabela
2006-01-01
In different health care systems, there are different schemes of organization and principles of financing activities aimed at ensuring the health and safety of the working population. Regardless of the scheme and the range of health care provided, economists strive for rationalization of costs (including their reduction). This applies both to employers, who include workers' health care costs in the indirect costs of manufacturing market products, and to health care institutions, which provide health care services. In practice, new methods of setting costs of workers' health care facilitate regular cost control, acquisition of detailed information about costs, and better adjustment of information to planning and control needs in individual health care institutions. For economic institutions and institutions specialized in workers' health care, a traditional cost-effect calculation focused on setting costs of individual products (services) is useful only if costs are relatively low and the output of simple products is not very high. But when products form aggregates of numerous actions, like those involved in occupational medicine services, the method of activity-based costing (ABC), representing the process approach, is much more useful. According to this approach, costs are attributed to the product according to resources used during different activities involved in its production. The calculation of costs proceeds through allocation of all direct costs for specific processes in a given institution. Indirect costs are settled on the basis of resources used during the implementation of individual tasks involved in the process of making a new product. In this method, the so-called map of the processes/actions that make up the manufactured product, and of their interrelations, is of particular importance. Advancements in cost-effect calculation for the management of health care institutions depend on their managerial needs.
Current trends in this regard primarily depend on treating all cost reference subjects as cost objects and taking account of all their interrelations. Final products, specific assignments, resources and activities may all be regarded as cost objects. The ABC method is characterized by a very high informative value in terms of setting prices of products in the area of workers' health care. It also facilitates the assessment of costs of individual activities under a multidisciplinary approach to health care and the setting of costs for varied products. The ABC method provides precise data on the consumption of resources, such as human labor or various materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flampouri, S; Li, Z; Hoppe, B
2015-06-15
Purpose: To develop a treatment planning method for passively-scattered involved-node proton therapy of mediastinal lymphoma robust to breathing and cardiac motions. Methods: Beam-specific planning treatment volumes (bsPTV) are calculated for each proton field to incorporate pertinent uncertainties. Geometric margins are added laterally to each beam while margins for range uncertainty due to setup errors, breathing, and calibration curve uncertainties are added along each beam. The calculation of breathing motion and deformation effects on proton range includes all 4DCT phases. The anisotropic water equivalent margins are translated to distances on the average 4DCT. Treatment plans are designed so each beam adequately covers the corresponding bsPTV. For targets close to the heart, cardiac motion effects on dose maps are estimated by using a library of anonymous ECG-gated cardiac CTs (cCT). The cCT, originally contrast-enhanced, are partially overridden to allow meaningful proton dose calculations. Targets similar to the treatment targets are drawn on one or more cCT sets matching the anatomy of the patient. Plans based on the average cCT are calculated on individual phases, then deformed to the average and accumulated. When clinically significant dose discrepancies occur between planned and accumulated doses, the patient plan is modified to reduce the cardiac motion effects. Results: We found that bsPTVs as planning targets create dose distributions similar to the conventional proton planning distributions, while they are a valuable tool for visualization of the uncertainties. For large targets with variability in motion and depth, integral dose was reduced because of the anisotropic margins. In most cases, heart motion has a clinically insignificant effect on target coverage. Conclusion: A treatment planning method was developed and used for proton therapy of mediastinal lymphoma.
The technique incorporates bsPTVs that compensate for all common sources of uncertainty, together with an estimation of the effects of cardiac motion that is not commonly performed.
Head-and-neck IMRT treatments assessed with a Monte Carlo dose calculation engine.
Seco, J; Adams, E; Bidmead, M; Partridge, M; Verhaegen, F
2005-03-07
IMRT is frequently used in the head-and-neck region, which contains materials of widely differing densities (soft tissue, bone, air-cavities). Conventional methods of dose computation for these complex, inhomogeneous IMRT cases involve significant approximations. In the present work, a methodology for the development, commissioning and implementation of a Monte Carlo (MC) dose calculation engine for intensity modulated radiotherapy (MC-IMRT) is proposed which can be used by radiotherapy centres interested in developing MC-IMRT capabilities for research or clinical evaluations. The method proposes three levels for developing, commissioning and maintaining an MC-IMRT dose calculation engine: (a) development of an MC model of the linear accelerator, (b) validation of the MC model for IMRT and (c) periodic quality assurance (QA) of the MC-IMRT system. The first step, level (a), in developing an MC-IMRT system is to build a model of the linac that correctly predicts standard open field measurements for percentage depth-dose and off-axis ratios. Validation of MC-IMRT, level (b), can be performed in a Rando phantom and in a homogeneous water equivalent phantom. Ultimately, periodic quality assurance of the MC-IMRT system is needed to verify the MC-IMRT dose calculation system, level (c). Once the MC-IMRT dose calculation system is commissioned, it can be applied to more complex clinical IMRT treatments. The MC-IMRT system implemented at the Royal Marsden Hospital was used for IMRT calculations for a patient undergoing treatment for primary disease with nodal involvement in the head-and-neck region (primary treated to 65 Gy and nodes to 54 Gy), while sparing the spinal cord, brain stem and parotid glands. Preliminary MC results predict a decrease of approximately 1-2 Gy in the median dose of both the primary tumour and nodal volumes (compared with both pencil beam and collapsed cone).
This is possibly due to the large air-cavity (the larynx of the patient) situated in the centre of the primary PTV and the approximations present in the dose calculation.
NASA Astrophysics Data System (ADS)
Araki, Samuel J.
2016-11-01
In the plumes of Hall thrusters and ion thrusters, high energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In plume simulations, it is particularly important to accurately perform computations of ion-atom elastic collisions in determining the plume current profile and assessing the integration of spacecraft components. The existing models are capable of accurate calculation but are not fast, and the collision calculation can become a bottleneck of plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with an ab initio spin-orbit-free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and randomly determined impact parameter. In order to further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that the initial target atom velocity does not play a significant role in typical electric propulsion plume simulations, so the sampling process is unnecessary. With these implementations, the computational run-time to perform a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high fidelity models.
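The table-lookup step described above can be sketched as follows. The grid and table values here are synthetic stand-ins (in a real simulation the table would come from the classical deflection calculation with the ab initio potential); the impact parameter is sampled uniformly over the disk defined by an assumed cut-off.

```python
import numpy as np

# Hypothetical precomputed table: scattering angle chi(E, b) on a regular grid
# of relative energy E and impact parameter b. Values are synthetic; a real
# table would be filled from classical-trajectory calculations.
E_grid = np.linspace(1.0, 100.0, 50)      # eV
b_grid = np.linspace(0.1, 10.0, 40)       # Angstrom
chi_table = np.pi / (1.0 + np.outer(E_grid, b_grid) / 50.0)

def lookup_angle(E, b):
    """Bilinear interpolation of the scattering angle from the 2D table."""
    i = int(np.clip(np.searchsorted(E_grid, E) - 1, 0, len(E_grid) - 2))
    j = int(np.clip(np.searchsorted(b_grid, b) - 1, 0, len(b_grid) - 2))
    tE = (E - E_grid[i]) / (E_grid[i + 1] - E_grid[i])
    tb = (b - b_grid[j]) / (b_grid[j + 1] - b_grid[j])
    return ((1 - tE) * (1 - tb) * chi_table[i, j]
            + tE * (1 - tb) * chi_table[i + 1, j]
            + (1 - tE) * tb * chi_table[i, j + 1]
            + tE * tb * chi_table[i + 1, j + 1])

rng = np.random.default_rng(0)
b_max = 10.0                                # assumed cut-off impact parameter
b_sample = b_max * np.sqrt(rng.random())    # uniform over the disk pi*b_max^2
angle = lookup_angle(25.0, b_sample)        # scattering angle for one collision
```

The square-root sampling gives impact parameters distributed uniformly over the collision cross-section disk, which is the usual choice in Monte Carlo collision routines.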
NASA Astrophysics Data System (ADS)
Owens, A. R.; Kópházi, J.; Eaton, M. D.
2017-12-01
In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants results in better-conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
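The eigenvalue computation at the heart of this approach can be illustrated on a deliberately simplified 1D analogue (an assumption for illustration, not the paper's discontinuous Galerkin setting): the optimal constant C in the discrete trace inequality ||v||²_boundary ≤ C ||v||²_element over the P1 finite element space on [0, h] is the largest eigenvalue of a generalised eigenvalue problem pairing a boundary ("face") matrix with a volumetric matrix.

```python
import numpy as np

# P1 (linear) finite element on [0, h]: basis phi1 = 1 - x/h, phi2 = x/h.
h = 1.0
M = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # volumetric mass matrix
F = np.eye(2)                                     # traces at x = 0 and x = h

# Generalised eigenvalue problem F v = lambda M v; since M is SPD we can
# solve the equivalent standard problem for M^{-1} F.
lam = np.linalg.eigvals(np.linalg.solve(M, F)).real
C_trace = lam.max()   # optimal constant for this discrete inequality (= 6/h)
```

For h = 1 the eigenvalues are 2 and 6, so C_trace = 6, attained by the odd mode v = (1, -1); an interior penalty parameter would then be chosen to scale with this constant.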
Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph
2016-10-15
The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
Calculation of vitrinite reflectance from thermal histories: A comparison of some methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, D.W.; Issler, D.R.
1993-04-01
Vitrinite reflectance values (%Ro) calculated from commonly used methods are compared with respect to time invariant temperatures and constant heating rates. Two monofunctional methods, one involving a time-temperature index to vitrinite reflectance correlation (TTI-%Ro) and the other a %Ro-to-depth correlation, yield vitrinite reflectance values that are similar to those calculated by recently published Arrhenius-based methods, such as EASY%Ro. The approximate agreement between these methods supports the perception that the EASY%Ro algorithm is the most accurate method for the prediction of vitrinite reflectances throughout the range of organic maturity normally encountered. However, calibration of these methods against vitrinite reflectance data from two basin sequences with well-documented geologic histories indicates that, although the EASY%Ro method has wide applicability, it slightly overestimates vitrinite reflectances in strata of low to medium maturity up to a %Ro value of 0.9%. The two monofunctional methods may be more accurate for prediction of vitrinite reflectances in similar sequences of low maturity. An older, but previously widely accepted TTI-%Ro correlation consistently overestimates vitrinite reflectances with respect to other methods. Underestimation of paleogeothermal gradients in the original calibration of time-temperature history to vitrinite reflectance may have introduced a systematic bias to the TTI-%Ro correlation used in this method. Also, incorporation of TAI (thermal alteration index) data and its conversion to %Ro-equivalent values may have introduced inaccuracies.
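The Arrhenius-based family of methods can be sketched as follows. The conversion-to-reflectance mapping %Ro = exp(-1.6 + 3.7 F) follows the published EASY%Ro form, but the activation energies, weights, and pre-exponential factor below are illustrative assumptions, not the published parameter set.

```python
import numpy as np

# Sketch of an Arrhenius-based vitrinite reflectance model: maturation is a
# set of parallel first-order reactions with distributed activation energies.
R = 1.987e-3                       # gas constant, kcal/(mol K)
A = 1.0e13 * 3.156e7               # pre-exponential factor, 1/yr (assumed)
E = np.array([42.0, 48.0, 54.0, 60.0])   # kcal/mol (assumed subset)
w = np.array([0.25, 0.35, 0.25, 0.15])   # stoichiometric weights (assumed)

def calc_ro(times_myr, temps_c):
    """Integrate the parallel reactions over a piecewise-linear T(t) history
    (times in Myr, temperatures in deg C) and map conversion to %Ro."""
    t = np.asarray(times_myr, dtype=float) * 1.0e6   # yr
    T = np.asarray(temps_c, dtype=float) + 273.15    # K
    I = np.zeros_like(E)                             # integral of k_i dt
    for k in range(len(t) - 1):
        Tm = 0.5 * (T[k] + T[k + 1])                 # midpoint temperature
        I += A * np.exp(-E / (R * Tm)) * (t[k + 1] - t[k])
    F = np.sum(w * (1.0 - np.exp(-I)))               # fraction converted
    return np.exp(-1.6 + 3.7 * F)                    # EASY%Ro-style mapping

# Constant-rate heating from 20 C to 120 C over 100 Myr.
ro = calc_ro([0.0, 100.0], [20.0, 120.0])
```

With this structure, a hotter or longer burial history converts more of the reaction population and therefore yields a higher predicted %Ro, which is the behaviour the comparisons in the abstract exercise.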
A comparison of the environmental impact of different AOPs: risk indexes.
Giménez, Jaime; Bayarri, Bernardí; González, Óscar; Malato, Sixto; Peral, José; Esplugas, Santiago
2014-12-31
Today, environmental impact associated with pollution treatment is a matter of great concern. A method is proposed for evaluating environmental risk associated with Advanced Oxidation Processes (AOPs) applied to wastewater treatment. The method is based on the type of pollution (wastewater, solids, air or soil) and on materials and energy consumption. An Environmental Risk Index (E), constructed from numerical criteria provided, is presented for environmental comparison of processes and/or operations. The Operation Environmental Risk Index (EOi) for each of the unit operations involved in the process and the Aspects Environmental Risk Index (EAj) for process conditions were also estimated. Relative indexes were calculated to evaluate the risk of each operation (E/NOP) or aspect (E/NAS) involved in the process, and the percentage of the maximum achievable for each operation and aspect was found. A practical application of the method is presented for two AOPs: photo-Fenton and heterogeneous photocatalysis with suspended TiO2 in Solarbox. The results report the environmental risks associated with each process, so that AOPs tested and the operations involved with them can be compared.
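The aggregation of the indexes can be sketched as below. All operation names, scores, and the per-item maximum are hypothetical placeholders; the paper's actual numerical criteria are not reproduced here.

```python
# Each unit operation gets an Operation Environmental Risk Index (EOi) and
# each process condition an Aspects Environmental Risk Index (EAj); the
# overall index E is their sum, and relative indexes normalise by the number
# of operations (NOP) and aspects (NAS). Scores below are assumed values.
EOi = {"photoreactor": 12.0, "pumping": 4.0, "reagent_prep": 6.0}
EAj = {"energy_use": 8.0, "chemical_consumption": 5.0}

E = sum(EOi.values()) + sum(EAj.values())   # overall Environmental Risk Index
E_per_operation = E / len(EOi)              # E/NOP
E_per_aspect = E / len(EAj)                 # E/NAS

E_max = 20.0 * (len(EOi) + len(EAj))        # assumed maximum score per item
percent_of_max = 100.0 * E / E_max          # percentage of achievable maximum
```

Computing the same quantities for two processes (e.g. photo-Fenton versus TiO2 photocatalysis) then allows the operation-by-operation comparison the abstract describes.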
ERIC Educational Resources Information Center
Sangwin, Christopher J.; Jones, Ian
2017-01-01
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Contribution of relativistic quantum chemistry to electron’s electric dipole moment for CP violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, M., E-mail: minoria@tmu.ac.jp; Gopakumar, G., E-mail: gopakumargeetha@gmail.com; Hada, M., E-mail: hada@tmu.ac.jp
The search for the electric dipole moment of the electron (eEDM) is important because it is a probe of Charge Conjugation-Parity (CP) violation. It can also shed light on new physics beyond the standard model. It is not possible to measure the eEDM directly. However, the interaction energy involving the effective electric field (E_eff) acting on an electron in a molecule and the eEDM can be measured. This quantity can be combined with E_eff, which is calculated by relativistic molecular orbital theory, to determine the eEDM. Previous calculations of E_eff were not sufficiently accurate in the treatment of relativistic or electron correlation effects. We therefore developed a new method to calculate E_eff based on a four-component relativistic coupled-cluster theory. We demonstrated our method for the YbF molecule, one of the promising candidates for the eEDM search. Using a very large basis set and without freezing any core orbitals, we obtain a value of 23.1 GV/cm for E_eff in YbF with an estimated error of less than 10%. The error is assessed by comparison of our calculations and experiments for two properties relevant for E_eff, the permanent dipole moment and the hyperfine coupling constant. Our method paves the way to calculate properties of various kinds of molecules which can be described by a single-reference wave function.
A new Morse-oscillator based Hamiltonian for H3+: Calculation of line strengths
NASA Astrophysics Data System (ADS)
Jensen, Per; Špirko, V.
1986-07-01
In two recent publications [V. Špirko, P. Jensen, P. R. Bunker, and A. Čejchan, J. Mol. Spectrosc. 112, 183-202 (1985); P. Jensen, V. Špirko, and P. R. Bunker, J. Mol. Spectrosc. 115, 269-293 (1986)], we have described the development of Morse oscillator adapted rotation-vibration Hamiltonians for equilateral triangular X3 and Y2X molecules, and we have used these Hamiltonians to calculate the rotation-vibration energies for H3+ and its X3+ and Y2X+ isotopes from ab initio potential energy functions. The present paper presents a method for calculating rotation-vibration line strengths of H3+ and its isotopes using an ab initio dipole moment function [G. D. Carney and R. N. Porter, J. Chem. Phys. 60, 4251-4264 (1974)] together with the energies and wavefunctions obtained by diagonalization of the Morse oscillator adapted Hamiltonians. We use this method for calculating the vibrational transition moments involving the lowest vibrational states of H3+, D3+, H2D+, and D2H+. Further, we calculate the line strengths of the low-J transitions in the rotational spectra of H3+ in the vibrational ground state and in the ν1 and ν2 states. We hope that the calculations presented will facilitate the search for further rotation-vibration transitions of H3+ and its isotopes.
An efficient method for the computation of Legendre moments.
Yap, Pew-Thian; Paramesran, Raveendran
2005-12-01
Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the image reconstructed from these moments has lower error than that of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than that of the proposed method.
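The exact-integration idea can be sketched directly: because the antiderivative of a Legendre polynomial satisfies ∫P_n dx = (P_{n+1} - P_{n-1})/(2n+1) for n ≥ 1, the integral of P_n over each pixel interval is available in closed form, and the moments follow by summing pixel values against these exact integrals. The implementation below is a minimal sketch of that principle, not the paper's code.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def pixel_integrals(order, n_pix):
    """Exact integrals of P_n over each pixel interval of [-1, 1],
    using  int P_n dx = (P_{n+1} - P_{n-1}) / (2n + 1)  for n >= 1."""
    edges = np.linspace(-1.0, 1.0, n_pix + 1)
    out = np.empty((order + 1, n_pix))
    for n in range(order + 1):
        if n == 0:
            out[n] = np.diff(edges)                  # int of P_0 = 1
        else:
            anti = (Legendre.basis(n + 1)(edges)
                    - Legendre.basis(n - 1)(edges)) / (2 * n + 1)
            out[n] = np.diff(anti)
    return out

def legendre_moments(image, order):
    """Exact Legendre moments of an image treated as constant per pixel."""
    ny, nx = image.shape
    Ix = pixel_integrals(order, nx)
    Iy = pixel_integrals(order, ny)
    n = np.arange(order + 1)
    norm = np.outer(2 * n + 1, 2 * n + 1) / 4.0      # (2q+1)(2p+1)/4
    return norm * (Iy @ image @ Ix.T)                # entry [q, p]

img = np.ones((8, 8))
L = legendre_moments(img, 3)
```

For a constant unit image only the (0, 0) moment survives (it equals 1), since all higher Legendre polynomials integrate to zero over [-1, 1]; this is a quick sanity check on the exactness of the pixel integrals.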
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
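The 4DVar idea can be illustrated on a toy scalar model rather than the multi-phase-field equations: minimise a cost that penalises misfit to a background guess and to observations along the trajectory, using the adjoint (here trivial, since the model is linear) to supply the gradient. The dynamics, error variances, and step size below are all assumed for illustration.

```python
import numpy as np

# Toy 4DVar: estimate the initial state x0 of x_{t+1} = a * x_t from noisy
# observations of the whole trajectory.
a, T = 0.9, 20                                   # assumed model dynamics
rng = np.random.default_rng(1)
x0_true = 2.0
truth = x0_true * a ** np.arange(T + 1)
obs = truth + 0.05 * rng.standard_normal(T + 1)  # synthetic observations

xb, B, R = 0.0, 4.0, 0.05 ** 2   # background state and error variances

def cost_and_grad(x0):
    x = x0 * a ** np.arange(T + 1)               # forward model run
    innov = x - obs
    J = 0.5 * (x0 - xb) ** 2 / B + 0.5 * np.sum(innov ** 2) / R
    # Adjoint/sensitivity of the linear model: dx_t/dx0 = a**t
    g = (x0 - xb) / B + np.sum(innov * a ** np.arange(T + 1)) / R
    return J, g

x0 = xb
for _ in range(200):                             # plain gradient descent
    J, g = cost_and_grad(x0)
    x0 -= 1e-4 * g                               # small fixed step
```

In the multi-phase-field setting the forward run and adjoint are far more expensive, but the structure of the minimisation, and the way observation quality and quantity shape the analysis and its uncertainty, is the same.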
The calculation of aquifer chemistry in hot-water geothermal systems
Truesdell, Alfred H.; Singers, Wendy
1974-01-01
The temperature and chemical conditions (pH, gas pressure, and ion activities) in a geothermal aquifer supplying a producing bore can be calculated from the enthalpy of the total fluid (liquid + vapor) produced and chemical analyses of water and steam separated and collected at known pressures. Alternatively, if a single water phase exists in the aquifer, the complete analysis (including gases) of a sample collected from the aquifer by a downhole sampler is sufficient to determine the aquifer chemistry without a measured value of the enthalpy. The assumptions made are that the fluid is produced from a single aquifer and is homogeneous in enthalpy and chemical composition. These calculations of aquifer chemistry, which involve large amounts of ancillary information and many iterations, require computer methods. A computer program in PL-1 to perform these calculations is available from the National Technical Information Service as document PB-219 376.
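One elementary step in this kind of calculation can be sketched as follows: the steam mass fraction at the separator follows from an enthalpy balance, and the analyte concentrations in the separated phases are recombined to total-fluid values. All numbers are assumed for illustration; the full program iterates this together with temperature-dependent equilibria.

```python
# Enthalpy balance at the separator (assumed values, kJ/kg):
H_total = 1200.0      # measured total-discharge enthalpy
h_liquid = 763.0      # saturated liquid enthalpy at separator pressure
h_vapour = 2778.0     # saturated vapour enthalpy at separator pressure

y = (H_total - h_liquid) / (h_vapour - h_liquid)   # steam mass fraction

# Recombine a non-volatile analyte (assumed analyses, mg/kg):
cl_water = 800.0      # Cl in the separated water phase
cl_steam = 0.0        # chloride does not partition into steam
cl_total = (1.0 - y) * cl_water + y * cl_steam     # total-fluid chloride
```

The same mass-weighted recombination, applied to every dissolved species and gas, reconstructs the single-fluid composition that the equilibrium (pH, gas pressure, ion activity) calculations then operate on.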
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain methods, for simplicity, neglected the important physics of steady loading in the analyses. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations for calculating the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle.
A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
NASA Technical Reports Server (NTRS)
Frohberg, M. G.; Betz, G.
1982-01-01
A method was tested for measuring the enthalpies of mixing of liquid metallic alloying systems, involving the combination of two samples in the electromagnetic field of an induction coil. The heat of solution is calculated from the pyrometrically measured temperature effect, the heat capacity of the alloy, and the heat content of the added sample. The usefulness of the method was tested experimentally with iron-copper and niobium-silicon systems. This method should be especially applicable to high-melting alloys, for which conventional measurements have failed.
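The energy balance behind this measurement can be sketched as below. All numerical values, and the sign convention, are assumptions for illustration rather than data from the study: the observed sensible-heat change of the bath is corrected by the heat content of the added sample to isolate the enthalpy of solution.

```python
# Illustrative energy balance for the drop-and-mix method (assumed values):
m_bath = 0.010        # kg, levitated liquid bath mass
cp_alloy = 800.0      # J/(kg K), heat capacity of the resulting alloy
delta_T = -3.5        # K, pyrometrically measured temperature change
H_added = 15.0        # J, heat content of the added (cooler) sample relative
                      # to the bath temperature

# Sensible-heat change of the bath observed pyrometrically:
Q_observed = m_bath * cp_alloy * delta_T       # J

# Separating the heat absorbed in warming the addition leaves the enthalpy
# of solution (sign convention assumed: negative = heat absorbed on mixing):
H_solution = Q_observed + H_added              # J
```

With these illustrative numbers the bath loses 28 J, of which 15 J went into heating the added sample, leaving -13 J attributed to the mixing itself.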
NASA Astrophysics Data System (ADS)
Greffrath, Fabian; Prieler, Robert; Telle, Rainer
2014-11-01
A new method for the experimental estimation of radiant heat emittance at high temperatures has been developed which involves aero-acoustic levitation of samples, laser heating and contactless temperature measurement. Radiant heat emittance values are determined from the time dependent development of the sample temperature, which requires analysis of both the radiant and convective heat transfer towards the surroundings by means of fluid dynamics calculations. First results for the emittance of a corundum sample obtained with this method are presented in this article and found to be in good agreement with literature values.
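The inverse problem described here can be illustrated with a lumped cooling model under assumed values: generate a synthetic free-cooling temperature trace with a known emittance, then recover the emittance by least-squares matching. In the real method the convective coefficient comes from the fluid dynamics calculation; here it is simply an assumed constant.

```python
import numpy as np

# Lumped cooling model:  m c dT/dt = -eps*sigma*A*(T^4 - T_amb^4) - h*A*(T - T_amb)
sigma = 5.670e-8                    # Stefan-Boltzmann constant, W/(m^2 K^4)
A, m, c = 2.8e-5, 1.0e-4, 1200.0    # sample area, mass, heat capacity (assumed)
h, T_amb = 50.0, 300.0              # convective coefficient, ambient (assumed)
eps_true = 0.85                     # emittance used for the synthetic data

def cool(eps, T0=2300.0, dt=1e-3, n=2000):
    """Explicit-Euler integration of the cooling curve."""
    T = np.empty(n)
    T[0] = T0
    for i in range(1, n):
        q = eps * sigma * A * (T[i-1]**4 - T_amb**4) + h * A * (T[i-1] - T_amb)
        T[i] = T[i-1] - dt * q / (m * c)
    return T

data = cool(eps_true)                       # synthetic "measured" trace
cands = np.linspace(0.5, 1.0, 101)          # candidate emittances
errs = [np.sum((cool(e) - data) ** 2) for e in cands]
eps_fit = cands[int(np.argmin(errs))]       # least-squares estimate
```

A 1D scan suffices for this sketch; with measured data one would use a proper optimiser and the CFD-derived convective loss in place of the constant h.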
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Møller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets.
Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
3D Multi-Level Non-LTE Radiative Transfer for the CO Molecule
NASA Astrophysics Data System (ADS)
Berkner, A.; Schweitzer, A.; Hauschildt, P. H.
2015-01-01
The photospheres of cool stars are both rich in molecules and an environment where the assumption of LTE cannot be upheld under all circumstances. Unfortunately, detailed 3D non-LTE calculations involving molecules are hardly feasible with current computers. For this reason, we present our implementation of the super level technique, in which molecular levels are combined into super levels, to reduce the number of unknowns in the rate equations and, thus, the computational effort and memory requirements involved, and show the results of our first tests against the 1D implementation of the same method.
Electronic propensity rules in Li-H+ collisions involving initial and/or final oriented states
NASA Astrophysics Data System (ADS)
Salas, P. J.
2000-12-01
Electronic excitation and capture processes are studied in collisions involving systems with only one active electron, such as the alkali-metal (Li)-proton system, in the medium-energy region (0.1-15 keV). Using the semiclassical impact parameter method, the probabilities and the orientation parameter are calculated for transitions between initial and/or final oriented states. The results show a strong asymmetry in the probabilities depending on the orientation of the initial and/or final states. An intuitive view of the processes, by means of the concepts of propensity and velocity matching rules, is provided.
Comparison of Minimally and More Invasive Methods of Determining Mixed Venous Oxygen Saturation.
Smit, Marli; Levin, Andrew I; Coetzee, Johan F
2016-04-01
To investigate the accuracy of a minimally invasive, 2-step, lookup method for determining mixed venous oxygen saturation compared with conventional techniques. Single-center, prospective, nonrandomized, pilot study. Tertiary care hospital, university setting. Thirteen elective cardiac and vascular surgery patients. All participants received intra-arterial and pulmonary artery catheters. Minimally invasive oxygen consumption and cardiac output were measured using a metabolic module and lithium-calibrated arterial waveform analysis (LiDCO; LiDCO, London), respectively. For the minimally invasive method, Step 1 involved these minimally invasive measurements, and arterial oxygen content was entered into the Fick equation to calculate mixed venous oxygen content. Step 2 used an oxyhemoglobin curve spreadsheet to look up mixed venous oxygen saturation from the calculated mixed venous oxygen content. The conventional "invasive" technique used pulmonary artery intermittent thermodilution cardiac output, direct sampling of mixed venous and arterial blood, and the "reverse-Fick" method of calculating oxygen consumption. LiDCO overestimated thermodilution cardiac output by 26%. Pulmonary artery catheter-derived oxygen consumption underestimated metabolic module measurements by 27%. Mixed venous oxygen saturation differed between techniques; the calculated values underestimated the direct measurements by between 12% and 26.3%, this difference being statistically significant. The magnitude of the differences between the minimally invasive and invasive techniques was too great for the former to act as a surrogate of the latter and could adversely affect clinical decision making. Copyright © 2016 Elsevier Inc. All rights reserved.
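The 2-step calculation can be sketched as follows. The numbers are illustrative, and Step 2 is simplified to the linear haemoglobin-bound relation (the study used a spreadsheet lookup on the full oxyhemoglobin dissociation curve, and the dissolved-oxygen term is neglected here).

```python
# Assumed minimally invasive measurements and blood values:
vo2 = 250.0     # mL/min, oxygen consumption from the metabolic module
co = 5.0        # L/min, cardiac output from arterial waveform analysis
hb = 14.0       # g/dL, haemoglobin concentration
sao2 = 0.98     # arterial oxygen saturation

# Arterial oxygen content (haemoglobin-bound term only), mL O2/dL:
cao2 = 1.34 * hb * sao2

# Step 1: rearranged Fick equation gives mixed venous oxygen content.
cvo2 = cao2 - vo2 / (co * 10.0)     # factor 10 converts dL to L

# Step 2: map content back to saturation (linearised dissociation relation).
svo2 = cvo2 / (1.34 * hb)
```

With these assumed inputs the calculated saturation lands near the normal 65-75% range; the study's point is that errors in the measured VO2 and cardiac output propagate directly through Step 1 into this value.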
Design optimization of natural laminar flow bodies in compressible flow
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1992-01-01
An optimization method has been developed to design axisymmetric body shapes such as fuselages, nacelles, and external fuel tanks with increased transition Reynolds numbers in subsonic compressible flow. The new design method involves a constraint minimization procedure coupled with analysis of the inviscid and viscous flow regions and linear stability analysis of the compressible boundary-layer. In order to reduce the computer time, Granville's transition criterion is used to predict boundary-layer transition and to calculate the gradients of the objective function, and linear stability theory coupled with the e^n method is used to calculate the objective function at the end of each design iteration. Use of a method to design an axisymmetric body with extensive natural laminar flow is illustrated through the design of a tiptank of a business jet. For the original tiptank, boundary layer transition is predicted to occur at a transition Reynolds number of 6.04 x 10^6. For the designed body shape, a transition Reynolds number of 7.22 x 10^6 is predicted using compressible linear stability theory coupled with the e^n method.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
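For a single-species occupancy model the equilibrium state and its sensitivities can be checked numerically. The toy model below (colonization probability gamma, extinction probability eps) is far simpler than the multistate models treated in the paper, but it illustrates sensitivity of an equilibrium quantity to lower-level parameters.

```python
def equilibrium_occupancy(gamma, eps):
    """Stationary occupancy of a two-state patch Markov chain:
    psi* = gamma / (gamma + eps)."""
    return gamma / (gamma + eps)

def sensitivity(gamma, eps, h=1e-6):
    """Central finite-difference sensitivities of psi* with respect to the
    colonization and extinction parameters."""
    d_gamma = (equilibrium_occupancy(gamma + h, eps)
               - equilibrium_occupancy(gamma - h, eps)) / (2 * h)
    d_eps = (equilibrium_occupancy(gamma, eps + h)
             - equilibrium_occupancy(gamma, eps - h)) / (2 * h)
    return d_gamma, d_eps
```

The numerical values can be verified against the analytic derivatives, d psi*/d gamma = eps/(gamma+eps)^2 and d psi*/d eps = -gamma/(gamma+eps)^2.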
Process modelling for space station experiments
NASA Technical Reports Server (NTRS)
Rosenberger, Franz; Alexander, J. Iwan D.
1988-01-01
The work performed during the first year (1 Oct. 1987 to 30 Sept. 1988) involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied. One involves growth from solution in a closed container; the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area. Work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.
NASA Astrophysics Data System (ADS)
Merker, L.; Costi, T. A.
2012-08-01
We introduce a method to obtain the specific heat of quantum impurity models via a direct calculation of the impurity internal energy requiring only the evaluation of local quantities within a single numerical renormalization group (NRG) calculation for the total system. For the Anderson impurity model we show that the impurity internal energy can be expressed as a sum of purely local static correlation functions and a term that involves also the impurity Green function. The temperature dependence of the latter can be neglected in many cases, thereby allowing the impurity specific heat C_imp to be calculated accurately from local static correlation functions; specifically via C_imp = ∂E_ionic/∂T + (1/2) ∂E_hyb/∂T, where E_ionic and E_hyb are the energies of the (embedded) impurity and the hybridization energy, respectively. The term involving the Green function can also be evaluated in cases where its temperature dependence is non-negligible, adding an extra term to C_imp. For the nondegenerate Anderson impurity model, we show by comparison with exact Bethe ansatz calculations that the results recover accurately both the Kondo induced peak in the specific heat at low temperatures as well as the high-temperature peak due to the resonant level. The approach applies to multiorbital and multichannel Anderson impurity models with arbitrary local Coulomb interactions. An application to the Ohmic two-state system and the anisotropic Kondo model is also given, with comparisons to Bethe ansatz calculations. The approach could also be of interest within other impurity solvers, for example, within quantum Monte Carlo techniques.
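A minimal numerical sketch of the central idea, computing C(T) = dE/dT by finite differences. A two-level (Schottky) system stands in for the NRG-computed impurity energies; this substitution is an assumption for illustration only, but it reproduces the generic specific-heat peak structure the abstract describes.

```python
import math

def internal_energy(T, gap=1.0):
    """Toy two-level (Schottky) internal energy standing in for the locally
    evaluated impurity energies of the paper: E(T) = gap / (1 + exp(gap/T))."""
    return gap / (1.0 + math.exp(gap / T))

def specific_heat(T, h=1e-5):
    """C(T) = dE/dT by central finite difference, mirroring the structure of
    C_imp = dE_ionic/dT + (1/2) dE_hyb/dT in the abstract."""
    return (internal_energy(T + h) - internal_energy(T - h)) / (2 * h)
```

The resulting C(T) vanishes at low and high temperature and peaks near T of order the level splitting, the qualitative behavior against which a real impurity solver would be checked.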
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
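For the two-terminal case the abstract describes as accurate and convenient, a standard textbook example is a coaxial (annular) pair of terminals, unrolled into a strip by w = log z. This sketch is an illustration of the classical two-terminal technique, not of the paper's multiple-terminal method.

```python
import cmath
import math

def annulus_potential(z, r1, r2, v1=0.0, v2=1.0):
    """Electrostatic potential between coaxial terminals at |z| = r1 (held at
    v1) and |z| = r2 (held at v2). The map w = log(z) sends the annulus to a
    strip where the potential is linear in Re(w); pulling back gives the
    familiar logarithmic potential."""
    w = cmath.log(z)
    t = (w.real - math.log(r1)) / (math.log(r2) - math.log(r1))
    return v1 + (v2 - v1) * t
```

At the geometric-mean radius the potential is exactly halfway between the terminal voltages, a quick sanity check on the mapping.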
Langley, Robin S; Cotoni, Vincent
2010-04-01
Large sections of many types of engineering construction can be considered to constitute a two-dimensional periodic structure, with examples ranging from an orthogonally stiffened shell to a honeycomb sandwich panel. In this paper, a method is presented for computing the boundary (or edge) impedance of a semi-infinite two-dimensional periodic structure, a quantity which is referred to as the direct field boundary impedance matrix. This terminology arises from the fact that none of the waves generated at the boundary (the direct field) are reflected back to the boundary in a semi-infinite system. The direct field impedance matrix can be used to calculate elastic wave transmission coefficients, and also to calculate the coupling loss factors (CLFs), which are required by the statistical energy analysis (SEA) approach to predicting high frequency vibration levels in built-up systems. The calculation of the relevant CLFs enables a two-dimensional periodic region of a structure to be modeled very efficiently as a single subsystem within SEA, and also within related methods, such as a recently developed hybrid approach, which couples the finite element method with SEA. The analysis is illustrated by various numerical examples involving stiffened plate structures.
Efficient calculation of luminance variation of a luminaire that uses LED light sources
NASA Astrophysics Data System (ADS)
Goldstein, Peter
2007-09-01
Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.
Students' mental models on the solubility and solubility product concept
NASA Astrophysics Data System (ADS)
Rahmi, Chusnur; Katmiati, Siti; Wiji; Mulyani, Sri
2017-05-01
This study aims to obtain information on the profile of students' mental models of the solubility and solubility product concept. A descriptive qualitative method was employed, with grade XI students of a senior high school in Bandung as participants. Data were collected with a prediction-observation-explanation diagnostic test of mental models (TDM-POE). On the concept of precipitate formation in a reaction, the results revealed that 30% of students could not explain precipitate formation at either the submicroscopic or the symbolic level even though the macroscopic phenomenon had been shown; 26% could explain precipitate formation from the relation between Qsp and Ksp, but could neither explain the interactions of the particles involved in the reaction nor calculate Qsp; 26% could explain precipitate formation from the relation between Qsp and Ksp and identify the particles involved, but did not know which interactions occurred and were unable to calculate Qsp; and 18% could explain precipitate formation from the relation between Qsp and Ksp and describe the interactions of the particles involved, but could not calculate Qsp. On the effect of adding a common ion and of decreasing pH on solubility, 96% of students could not explain either effect at the submicroscopic or symbolic level even though the macroscopic phenomenon had been shown, while 4% could explain only the effect of adding a common ion in terms of the shift of the chemical equilibrium and predict the effect of decreasing pH on solubility.
However, they could not calculate the solubility before and after adding the common ion, nor explain it at the submicroscopic level, whether through the shift of the solubility equilibrium or by comparing calculated solubilities before and after decreasing the pH. Overall, the present study showed that most students hold incomplete mental models of the solubility and solubility product concept. Based on these findings, it is recommended that teachers improve students' learning activities.
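The Qsp-versus-Ksp reasoning the students were tested on reduces to a short calculation: form the ion product from the mixed concentrations and compare it with Ksp. The AgCl example below uses a textbook Ksp value and is not taken from the study.

```python
def predicts_precipitate(conc_ions, stoich, ksp):
    """Qsp = product of ion concentrations raised to their stoichiometric
    powers; a precipitate is predicted to form when Qsp > Ksp."""
    q = 1.0
    for c, n in zip(conc_ions, stoich):
        q *= c ** n
    return q, q > ksp

# Mixing so that [Ag+] = [Cl-] = 1e-4 M; textbook Ksp(AgCl) ~ 1.8e-10
q, forms = predicts_precipitate([1e-4, 1e-4], [1, 1], 1.8e-10)
```

Here Qsp = 1e-8 exceeds Ksp, so precipitation is predicted, which is the symbolic-level argument the strongest student group could make.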
Regnier, D.; Litaize, O.; Serot, O.
2015-12-23
Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma rays, and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a statistical framework often called the "Hauser–Feshbach" method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high-spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high-spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method: the bias between the two techniques can reach 1.5 γ/cascade. Lastly, the uncertainty of these calculations arising from the lack of knowledge of nuclear structure is estimated via the FSP method.
Pari, Sangavi; Wang, Inger A; Liu, Haizhou; Wong, Bryan M
2017-03-22
Advanced oxidation processes that utilize highly oxidative radicals are widely used in water reuse treatment. In recent years, the application of the sulfate radical (SO4•−) as a promising oxidant for water treatment has gained increasing attention. To understand the efficiency of SO4•− in the degradation of organic contaminants in wastewater effluent, it is important to be able to predict the reaction kinetics of various SO4•−-driven oxidation reactions. In this study, we utilize density functional theory (DFT) and high-level wavefunction-based methods (including computationally intensive coupled-cluster methods) to explore the activation energies of SO4•−-driven oxidation reactions on a series of benzene-derived contaminants. These high-level calculations encompass a wide set of reactions, including 110 forward/reverse reactions and 5 different computational methods in total. Based on the high-level coupled-cluster quantum calculations, we find that the popular M06-2X DFT functional is significantly more accurate for OH− additions than for SO4•− reactions. Most importantly, we highlight some of the limitations and deficiencies of other computational methods, and we recommend the use of high-level quantum calculations to spot-check environmental chemistry reactions that may lie outside the training set of the M06-2X functional, particularly for water oxidation reactions that involve SO4•− and other inorganic species.
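Activation energies like those computed here are commonly converted to rate constants downstream via transition-state theory. The Eyring-equation sketch below is a standard follow-on calculation, not part of the paper's workflow; the example free-energy values are illustrative.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_kj_mol, T=298.15):
    """Transition-state-theory rate constant from an activation free energy:
    k = (kB*T/h) * exp(-dG‡ / (R*T)), with dG‡ supplied in kJ/mol."""
    return (KB * T / H) * math.exp(-delta_g_kj_mol * 1000.0 / (R * T))
```

A zero barrier recovers the attempt frequency kB*T/h (about 6.2e12 per second at 298 K), and each additional ~5.7 kJ/mol of barrier costs roughly a factor of ten in rate at room temperature.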
Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module
NASA Astrophysics Data System (ADS)
Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge
2017-04-01
A proof of concept of using a thermoelectric module to measure both thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful for screening purposes or high-throughput measurements in its current state. This method establishes a new application for thermoelectric modules as thermal-property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available in the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity facilities.
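The core of the data reduction, integrating the area of the impedance spectrum and applying an empirical exponential calibration, can be sketched as follows. The integration variable (log frequency) and the calibration constants a and b are placeholders to be fitted against reference materials, not values from the paper.

```python
import math

def integral_area(freqs_hz, neg_imag_z_ohm):
    """Trapezoidal-rule area under -Im(Z) plotted against log10(frequency),
    a simple proxy for the 'integral area' of the impedance spectrum."""
    area = 0.0
    for i in range(1, len(freqs_hz)):
        dx = math.log10(freqs_hz[i]) - math.log10(freqs_hz[i - 1])
        area += 0.5 * (neg_imag_z_ohm[i] + neg_imag_z_ohm[i - 1]) * dx
    return area

def thermal_conductivity(area, a=10.0, b=0.5):
    """Hypothetical exponential calibration k = a * exp(-b * area); a and b
    would be fitted from measurements on reference materials."""
    return a * math.exp(-b * area)
```

With the placeholder constants, a triangular spectrum of unit area maps to a conductivity of about 6 W/(m·K); a real calibration would replace a and b with fitted values.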
Calculation of the Full Scattering Amplitude without Partial Wave Decomposition II
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE) can be reduced to a 2d partial differential equation (pde), and was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation. The resultant equation can be reduced to a pair of coupled pde's, to which the finite element method can still be applied. The resultant scattering amplitudes, both singlet and triplet, as a function of angle can be calculated for various energies. The results are in excellent agreement with converged partial wave results.
A non-discrete method for computation of residence time in fluid mechanics simulations.
Esmaily-Moghadam, Mahdi; Hsia, Tain-Yen; Marsden, Alison L
2013-11-01
Cardiovascular simulations provide a promising means to predict risk of thrombosis in grafts, devices, and surgical anatomies in adult and pediatric patients. Although the pathways for platelet activation and clot formation are not yet fully understood, recent findings suggest that thrombosis risk is increased in regions of flow recirculation and high residence time (RT). Current approaches for calculating RT are typically based on releasing a finite number of Lagrangian particles into the flow field and calculating RT by tracking their positions. However, special care must be taken to achieve temporal and spatial convergence, often requiring repeated simulations. In this work, we introduce a non-discrete method in which RT is calculated in an Eulerian framework using the advection-diffusion equation. We first present the formulation for calculating residence time in a given region of interest using two alternate definitions. The physical significance and sensitivity of the two measures of RT are discussed and their mathematical relation is established. An extension to a point-wise value is also presented. The methods presented here are then applied in a 2D cavity and two representative clinical scenarios, involving shunt placement for single ventricle heart defects and Kawasaki disease. In the second case study, we explored the relationship between RT and wall shear stress, a parameter of particular importance in cardiovascular disease.
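A one-dimensional reduction of the Eulerian idea: treat residence time as an advected scalar with a unit source inside the region of interest. The sketch below drops the diffusion term and uses first-order upwinding for clarity; the paper solves the full advection-diffusion equation in realistic 3-D geometries.

```python
def residence_time_1d(u, dx, n, roi):
    """Steady 1-D advected residence time: u * dtau/dx = 1 inside the region
    of interest roi = (x_min, x_max), 0 outside; tau = 0 at the inlet.
    Marching downstream is exact first-order upwinding for this problem."""
    tau = [0.0] * (n + 1)
    for i in range(1, n + 1):
        source = 1.0 if roi[0] <= i * dx <= roi[1] else 0.0
        tau[i] = tau[i - 1] + dx * source / u
    return tau
```

For a uniform velocity u through an ROI of length L, the outlet value recovers the expected transit time L/u, the non-discrete analogue of averaging many Lagrangian particle passage times.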
NASA Astrophysics Data System (ADS)
Wilson, John P.
2012-01-01
This article examines how the methods and data sources used to generate DEMs and calculate land surface parameters have changed over the past 25 years. The primary goal is to describe the state-of-the-art for a typical digital terrain modeling workflow that starts with data capture, continues with data preprocessing and DEM generation, and concludes with the calculation of one or more primary and secondary land surface parameters. The article first describes some of ways in which LiDAR and RADAR remote sensing technologies have transformed the sources and methods for capturing elevation data. It next discusses the need for and various methods that are currently used to preprocess DEMs along with some of the challenges that confront those who tackle these tasks. The bulk of the article describes some of the subtleties involved in calculating the primary land surface parameters that are derived directly from DEMs without additional inputs and the two sets of secondary land surface parameters that are commonly used to model solar radiation and the accompanying interactions between the land surface and the atmosphere on the one hand and water flow and related surface processes on the other. It concludes with a discussion of the various kinds of errors that are embedded in DEMs, how these may be propagated and carried forward in calculating various land surface parameters, and the consequences of this state-of-affairs for the modern terrain analyst.
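Among the primary land surface parameters derived directly from DEMs are slope and aspect. A minimal sketch using Horn's third-order finite differences on a 3x3 window follows; this is one common convention among the several the article surveys, and the row ordering (north to south) and aspect convention are assumptions of this sketch.

```python
import math

def slope_aspect(z, cell):
    """Slope (degrees) and aspect (degrees clockwise from north) of the centre
    cell of a 3x3 DEM window z, rows ordered north to south, using Horn's
    weighted finite differences."""
    dzdx = ((z[0][2] + 2 * z[1][2] + z[2][2])
            - (z[0][0] + 2 * z[1][0] + z[2][0])) / (8 * cell)
    dzdy = ((z[2][0] + 2 * z[2][1] + z[2][2])
            - (z[0][0] + 2 * z[0][1] + z[0][2])) / (8 * cell)
    slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    aspect = math.degrees(math.atan2(dzdy, -dzdx)) % 360.0
    return slope, aspect
```

A plane rising one unit per cell toward the east yields the expected 45-degree slope, a useful unit check before running a full DEM.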
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method in which the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity; for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
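The overall finite-difference idea can be illustrated on a single-degree-of-freedom stand-in for the reduced-basis model: re-analyze a perturbed design and difference the responses. This toy static problem is an assumption for illustration; the study treats large-order transient problems.

```python
def displacement(force, k):
    """Single-DOF stand-in for the (reduced-basis) structural analysis:
    static displacement u = F / k for stiffness design variable k."""
    return force / k

def overall_finite_difference(force, k, rel_step=1e-6):
    """Overall finite-difference sensitivity du/dk: repeat the analysis for
    perturbed designs k +/- dk and central-difference the responses."""
    dk = k * rel_step
    return (displacement(force, k + dk) - displacement(force, k - dk)) / (2 * dk)
```

The result can be checked against the analytic sensitivity du/dk = -F/k^2, the kind of verification that exposed the fixed-mode accuracy problems in the study.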
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding
2012-08-15
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'super sampling', involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system, which triples the angular sampling of the gantry angles used during optimization and dose calculation, removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.
Development of Scatterometer-Derived Surface Pressures
NASA Astrophysics Data System (ADS)
Hilburn, K. A.; Bourassa, M. A.; O'Brien, J. J.
2001-12-01
SeaWinds scatterometer-derived wind fields can be used to estimate surface pressure fields. The method to be used has been developed and tested with Seasat-A and NSCAT wind measurements. The method involves blending two dynamically consistent values of vorticity. Geostrophic relative vorticity is calculated from an initial-guess surface pressure field (AVN analysis in this case). Relative vorticity is calculated from SeaWinds winds, adjusted to a geostrophic value, and then blended with the initial guess. An objective method is then applied to minimize the differences between the initial-guess field and the scatterometer field, subject to regularization. The long-term goal of this project is to derive research-quality pressure fields from the SeaWinds winds for the Southern Ocean from the Antarctic ice sheet to 30 deg S. The intermediate goal of this report involves generation of pressure fields over the northern hemisphere for testing purposes. Specifically, two issues need to be addressed. First, the most appropriate initial-guess field will be determined: the pure AVN analysis or the previously assimilated pressure field. The independent comparison data to be used in answering this question will involve data near land, ship data, and ice data that were not included in the AVN analysis. Second, the smallest number of pressure observations required to anchor the assimilated field will be determined. This study will use Neumann (derivative) boundary conditions on the region of interest. Such boundary conditions only determine the solution to within a constant that must be determined by a number of anchoring points. The smallness of the number of anchoring points will demonstrate the viability of the general use of the scatterometer as a barometer over the oceans.
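The scatterometer side of the blend starts from relative vorticity computed from gridded winds. A central-difference sketch follows; the [row][column] grid indexing and uniform spacing are assumptions of this illustration, not details from the report.

```python
def relative_vorticity(u, v, dx, dy, i, j):
    """Relative vorticity zeta = dv/dx - du/dy from gridded wind components,
    central differences at interior grid point (i, j); u[j][i] and v[j][i]
    are indexed [row][column] with row j increasing with y."""
    dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
    dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
    return dvdx - dudy
```

A solid-body rotation u = -omega*y, v = omega*x gives the exact value zeta = 2*omega, a convenient analytic check before applying the stencil to real scatterometer winds.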
Patient Dose In Diagnostic Radiology: When & How?
NASA Astrophysics Data System (ADS)
Lassen, Margit; Gorson, Robert O.
1980-08-01
Different situations are discussed in which it is of value to know radiation dose to the patient in diagnostic radiology. Radiation dose to specific organs is determined using the Handbook on Organ Doses published by the Bureau of Radiological Health of the Food and Drug Administration; the method is applied to a specific case. In this example dose to an embryo is calculated in examinations involving both fluoroscopy and radiography. In another example dose is determined to a fetus in late pregnancy using tissue air ratios. Patient inquiries about radiation dose are discussed, and some answers are suggested. The reliability of dose calculations is examined.
Validation of a new reference depletion calculation for thermal reactors
NASA Astrophysics Data System (ADS)
Canbakan, Axel
Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Even though their aim is to correct cross-section deviations, they introduce a non-negligible error into evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacity, this method is starting to show its limits in precision and can be replaced by a subgroup method. Originally used for fast-reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation of the subgroup method that is as precise as possible, first without burnup and then with an isotopic depletion study. In the end, users interested in implementing a subgroup method in their scheme for pressurized water reactors can rely on this thesis to justify their modelling choices. Moreover, other parameters are validated to suggest a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a method-of-characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, the thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated, as it offers interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first is to offer a new reference lattice scheme for pressurized water reactors to DRAGON5 users.
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and the use of a MOC-based flux calculation in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only when a 361-energy-group mesh is used; (2) MOC with a linear source in each geometric region gives better results than MOC with a constant-source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a subgroup projection method for DRAGON5.
NASA Astrophysics Data System (ADS)
Andriopoulos, K.; Dimas, S.; Leach, P. G. L.; Tsoubelis, D.
2009-08-01
Complete symmetry groups enable one to characterise fully a given differential equation. By considering the reversal of an approach based upon complete symmetry groups we construct new classes of differential equations which have the equations of Bateman, Monge-Ampère and Born-Infeld as special cases. We develop a symbolic algorithm to decrease the complexity of the calculations involved.
ERIC Educational Resources Information Center
Casey, James B.
1998-01-01
Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…
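The costing procedure described lends itself to a simple spreadsheet-style computation: sum direct labor across staff levels, then add the indirect costs. A minimal sketch of that arithmetic (all figures and category names below are hypothetical, not from the article):

```python
# Illustrative cost model (hypothetical figures): total cost of distributing
# tax forms = direct labor across staff levels + space + utilities + PR.

def distribution_cost(staff_hours, hourly_rates, space_sqft, cost_per_sqft,
                      utility_share, pr_cost):
    """Sum direct and indirect costs; staff_hours/hourly_rates map level -> value."""
    labor = sum(staff_hours[level] * hourly_rates[level] for level in staff_hours)
    space = space_sqft * cost_per_sqft
    return labor + space + utility_share + pr_cost

total = distribution_cost(
    staff_hours={"librarian": 20, "clerk": 60},     # hours spent on the program
    hourly_rates={"librarian": 30.0, "clerk": 15.0},
    space_sqft=100, cost_per_sqft=2.0,              # floor space devoted to forms
    utility_share=50.0, pr_cost=75.0)
print(total)  # 600 + 900 + 200 + 50 + 75 = 1825.0
```

The point of the exercise, as the article argues, is that each cost component is itemized explicitly rather than folded into a single guess.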
Interatomic potentials in condensed matter via the maximum-entropy principle
NASA Astrophysics Data System (ADS)
Carlsson, A. E.
1987-09-01
A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a "reference medium." Illustrations are given for Al-Cu alloys and a model transition metal.
Optimised effective potential for ground states, excited states, and time-dependent phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, E.K.U.
1996-12-31
(1) The optimized effective potential method is a variant of the traditional Kohn-Sham scheme. In this variant, the exchange-correlation energy E_xc is an explicit functional of single-particle orbitals. The exchange-correlation potential, given as usual by the functional derivative v_xc = δE_xc/δρ, then satisfies an integral equation involving the single-particle orbitals. This integral equation is solved semi-analytically using a scheme recently proposed by Krieger, Li and Iafrate. If the exact (Fock) exchange-energy functional is employed together with the Colle-Salvetti orbital functional for the correlation energy, the mean absolute deviation of the resulting ground-state energies from the exact nonrelativistic values is CT mH for the first-row atoms, as compared to 4.5 mH in a state-of-the-art CI calculation. The proposed scheme is thus significantly more accurate than the conventional Kohn-Sham method, while the numerical effort involved is about the same as for an ordinary Hartree-Fock calculation. (2) A time-dependent generalization of the optimized-potential method is presented and applied to the linear-response regime. Since time-dependent density functional theory leads to a formally exact representation of the frequency-dependent linear density response, and since the latter, as a function of frequency, has poles at the excitation energies of the fully interacting system, the formalism is suitable for the calculation of excitation energies. A simple additive correction to the Kohn-Sham single-particle excitation energies will be deduced and first results for atomic and molecular singlet and triplet excitation energies will be presented. (3) Beyond the regime of linear response, the time-dependent optimized-potential method is employed to describe atoms in strong femtosecond laser pulses. Ionization yields and harmonic spectra will be presented and compared with experimental data.
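The integral equation mentioned in the abstract has a standard form in the OEP literature; a sketch in common notation (not reproduced from this particular paper) is:

```latex
% Exchange-correlation potential as a functional derivative:
%   v_{xc}(\mathbf r) = \delta E_{xc}[\{\varphi_j\}] / \delta \rho(\mathbf r).
% For an orbital-dependent E_xc this leads to the OEP integral equation
\sum_{i=1}^{N}\int d^3r'\,
  \bigl[v_{xc}(\mathbf r') - u_{xc,i}(\mathbf r')\bigr]\,
  G_i(\mathbf r,\mathbf r')\,\varphi_i(\mathbf r')\,\varphi_i^{*}(\mathbf r)
  \;+\; \mathrm{c.c.} \;=\; 0,
\qquad
u_{xc,i}(\mathbf r) = \frac{1}{\varphi_i^{*}(\mathbf r)}
  \frac{\delta E_{xc}}{\delta \varphi_i(\mathbf r)},
\qquad
G_i(\mathbf r,\mathbf r') = \sum_{k\neq i}
  \frac{\varphi_k(\mathbf r)\,\varphi_k^{*}(\mathbf r')}
       {\varepsilon_i - \varepsilon_k}.
```

The Krieger-Li-Iafrate scheme referred to in the abstract replaces the energy denominators in G_i by a constant, which turns the integral equation into an algebraically solvable one.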
Casadesús, Ricard; Moreno, Miquel; González-Lafont, Angels; Lluch, José M; Repasky, Matthew P
2004-01-15
In this article a wide variety of computational approaches (molecular mechanics force fields, semiempirical formalisms, and hybrid methods, namely ONIOM calculations) have been used to calculate the energy and geometry of the supramolecular system 2-(2'-hydroxyphenyl)-4-methyloxazole (HPMO) encapsulated in beta-cyclodextrin (beta-CD). The main objective of the present study has been to examine the performance of these computational methods when describing the short-range H...H intermolecular interactions between guest (HPMO) and host (beta-CD) molecules. The analyzed molecular mechanics methods do not produce unphysical short H...H contacts, but it is obvious that their applicability to the study of supramolecular systems is rather limited. For the semiempirical methods, MNDO is found to generate more reliable geometries than AM1, PM3 and the two recently developed schemes PDDG/MNDO and PDDG/PM3. MNDO results give only one slightly short H...H distance, whereas the NDDO formalisms with modifications of the Core Repulsion Function (CRF) via Gaussians exhibit a large number of short to very short and unphysical H...H intermolecular distances. In contrast, the PM5 method, which is the successor to PM3, gives very promising results. Our ONIOM calculations indicate that the unphysical optimized geometries from PM3 are retained when this semiempirical method is used as the low-level layer in a QM:QM formulation. On the other hand, ab initio methods involving good enough basis sets, at least for the high-level layer in a hybrid ONIOM calculation, behave well, but they may be too expensive in practice for most supramolecular chemistry applications. Finally, the performance of the evaluated computational methods has also been tested by evaluating the energetic difference between the two most stable conformations of the host(beta-CD)-guest(HPMO) system. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 25: 99-105, 2004
Deng, Nanjie; Flynn, William F; Xia, Junchao; Vijayan, R S K; Zhang, Baofeng; He, Peng; Mentes, Ahmet; Gallicchio, Emilio; Levy, Ronald M
2016-09-01
We describe binding free energy calculations in the D3R Grand Challenge 2015 for blind prediction of the binding affinities of 180 ligands to Hsp90. The present D3R challenge was built around experimental datasets involving Heat shock protein (Hsp) 90, an ATP-dependent molecular chaperone which is an important anticancer drug target. The Hsp90 ATP binding site is known to be a challenging target for accurate calculations of ligand binding affinities because of the ligand-dependent conformational changes in the binding site, the presence of ordered waters, and the broad chemical diversity of ligands that can bind at this site. Our primary focus here is to distinguish binders from nonbinders. Large-scale absolute binding free energy calculations covering over 3000 protein-ligand complexes were performed using the BEDAM method, starting from docked structures generated by Glide docking. Although the ligand dataset in this study resembles an intermediate- to late-stage lead optimization project, while the BEDAM method is mainly developed for early-stage virtual screening of hit molecules, the BEDAM binding free energy scoring has resulted in a moderate enrichment of ligand screening against this challenging drug target. Results show that using a statistical-mechanics-based free energy method like BEDAM, starting from docked poses, offers better enrichment than classical docking scoring functions and rescoring methods like Prime MM-GBSA for the Hsp90 data set in this blind challenge. Importantly, among the three methods tested here, only the mean value of the BEDAM binding free energy scores is able to separate the large group of binders from the small group of nonbinders with a gap of 2.4 kcal/mol. None of the three methods that we have tested provided accurate ranking of the affinities of the 147 active compounds. We discuss the possible sources of errors in the binding free energy calculations.
The study suggests that BEDAM can be used strategically to discriminate binders from nonbinders in virtual screening and to more accurately predict the ligand binding modes prior to the more computationally expensive FEP calculations of binding affinity.
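The separation criterion described above, a gap between the mean scores of the binder and nonbinder groups, can be sketched in a few lines. The scores below are invented for illustration and are not the challenge data:

```python
# Hedged sketch (made-up scores): checking whether the mean of a binding free
# energy score separates binders from nonbinders, as in the gap the abstract
# reports for BEDAM (more negative score = more favorable binding).

def mean(xs):
    return sum(xs) / len(xs)

binder_scores = [-9.1, -8.4, -10.2, -7.9]   # hypothetical kcal/mol
nonbinder_scores = [-5.0, -4.2, -5.8]       # hypothetical kcal/mol

# A positive gap means the binder group scores are, on average, more favorable.
gap = mean(nonbinder_scores) - mean(binder_scores)
print(round(gap, 2))  # 3.9
```

A single summary statistic like this is of course much coarser than per-ligand ranking, which is consistent with the abstract's observation that none of the methods ranked the 147 actives accurately.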
Group Additivity Determination for Oxygenates, Oxonium Ions, and Oxygen-Containing Carbenium Ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellon, Lauren D.; Sung, Chun-Yi; Robichaud, David J.
Bio-oil produced from biomass fast pyrolysis often requires catalytic upgrading over zeolite catalysts to remove oxygen and acidic species. The elementary reactions in the mechanism for this process involve carbenium and oxonium ions. In order to develop a detailed kinetic model for the catalytic upgrading of biomass, rate constants are required for these elementary reactions. The parameters in the Arrhenius equation can be related to thermodynamic properties through structure-reactivity relationships, such as the Evans-Polanyi relationship. For this relationship, enthalpies of formation of each species are required, which can be reasonably estimated using group additivity. However, the literature previously lacked group additivity values for oxygenates, oxonium ions, and oxygen-containing carbenium ions. In this work, 71 group additivity values for these types of groups were regressed, 65 of which had not been reported previously and six of which were newly estimated based on regression in the context of the 65 new groups. Heats of formation, based on atomization-enthalpy calculations for a set of reference molecules and on isodesmic reactions for a small set of larger species with available experimental data, were used to demonstrate the accuracy of the Gaussian-4 quantum mechanical method in estimating enthalpies of formation for species involving the moieties of interest. Isodesmic reactions for a total of 195 species were constructed from the reference molecules to calculate enthalpies of formation, which were used to regress the group additivity values. The results showed an average deviation of 1.95 kcal/mol between the values calculated from Gaussian-4 and isodesmic reactions and those calculated from the newly regressed group additivity values. Importantly, the new groups enhance the database of group additivity values, especially for those involving oxonium ions.
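The group additivity estimate itself is a simple sum over the groups present in a molecule. A minimal Benson-style sketch (the group labels and values below are hypothetical placeholders, not the regressed values from this work):

```python
# Minimal group additivity sketch: dHf is estimated as the sum of the
# contributions of the groups present in a molecule. Values are illustrative
# placeholders in kcal/mol, NOT the group additivity values from the paper.
GROUP_VALUES = {
    "C-(C)(H)3": -10.2,
    "C-(C)2(H)2": -4.9,
    "C-(C)(O)(H)2": -8.1,
    "O-(C)(H)": -37.9,
}

def enthalpy_of_formation(group_counts):
    """dHf ~= sum over groups g of n_g * GAV_g."""
    return sum(n * GROUP_VALUES[g] for g, n in group_counts.items())

# e.g. a simple alcohol sketched as one methyl, one C-(C)(O)(H)2, one O-(C)(H)
dhf = enthalpy_of_formation({"C-(C)(H)3": 1, "C-(C)(O)(H)2": 1, "O-(C)(H)": 1})
print(round(dhf, 1))  # -56.2
```

Regressing the group values, as done in the paper, amounts to a least-squares fit of such sums to a reference set of quantum-chemical enthalpies of formation.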
Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.
Suk, Heejun
2016-07-01
MT3DMS, a modular three-dimensional multispecies transport model, has long been popular in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at the old time level. However, this calculation is approximate because it does not involve backward tracking in MMOC and HMOC, nor does it allow forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitations of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme, under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of MT3DMS to deal with mass transport problems in all flow regimes. © 2016, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Glushkov, A. V.; Khetselius, O. Yu; Agayar, E. V.; Buyadzhi, V. V.; Romanova, A. V.; Mansarliysky, V. F.
2017-10-01
We present a new and effective approach to analysing and modelling natural air ventilation in the atmosphere of an industrial city, based on the Arakawa-Schubert and Glushkov models, modified to calculate the current entrainment of the ensemble of clouds, together with advanced mathematical methods for modelling unsteady turbulence in the urban area. For the first time, methods of a plane complex field and spectral expansion algorithms are applied to calculate the air circulation for the cloud-layer arrays penetrating the territory of the industrial city. We have also taken into account the mechanisms of transformation of cloud-system advection over the territory of the urban area. Test computations of the air ventilation characteristics are presented for the city of Odessa. All the above methods and models, together with standard monitoring and management systems, can be considered as a basis for a comprehensive "Green City" construction technology.
An automated method to find transition states using chemical dynamics simulations.
Martínez-Núñez, Emilio
2015-02-05
A procedure to automatically find the transition states (TSs) of a molecular system (MS) is proposed. It has two components: high-energy chemical dynamics simulations (CDS), and an algorithm that analyzes the geometries along the trajectories to find reactive pathways. Two levels of electronic structure calculations are involved: a low level (LL) is used to integrate the trajectories and also to optimize the TSs, and a higher level (HL) is used to reoptimize the structures. The method has been tested in three MSs: formaldehyde, formic acid (FA), and vinyl cyanide (VC), using MOPAC2012 and Gaussian09 to run the LL and HL calculations, respectively. Both the efficacy and efficiency of the method are very good, with around 15 TS structures optimized every 10 trajectories, which gives a total of 7, 12, and 83 TSs for formaldehyde, FA, and VC, respectively. The use of CDS makes it a powerful tool to unveil possible nonstatistical behavior of the system under study. © 2014 Wiley Periodicals, Inc.
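The trajectory-analysis step described above rests on detecting frames where bonding changes. A toy sketch of that idea, flagging candidate reactive events by comparing distance-based connectivity between snapshots (coordinates and the cutoff are invented for illustration, and real implementations use element-dependent covalent radii):

```python
# Sketch of the reactive-pathway detection idea: build a bond set from a
# distance cutoff for each trajectory frame, and flag frames where the
# connectivity changes. Geometries and the 1.8 A cutoff are illustrative.
import math  # math.dist requires Python 3.8+

def connectivity(coords, cutoff=1.8):
    """Set of bonded atom pairs: interatomic distance below cutoff (angstrom)."""
    bonds = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                bonds.add((i, j))
    return bonds

frame_a = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (3.0, 0.0, 0.0)]
frame_b = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (1.1, 1.2, 0.0)]  # atom 2 approached

# symmetric difference = bonds formed or broken between the two frames
changed = connectivity(frame_a) ^ connectivity(frame_b)
print(sorted(changed))  # [(0, 2), (1, 2)]
```

In the published procedure, geometries bracketing such a connectivity change would then seed a low-level TS optimization, followed by reoptimization at the higher level.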
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods for assessing system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfers and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
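The kind of first-cut optimization described can be illustrated with the classic power-limited (Stuhlinger-style) payload fraction model, maximized over exhaust velocity. This is a generic textbook model with invented inputs, not the paper's specific approximations:

```python
# Rough sketch of a first-cut electric propulsion sizing: payload fraction
# f_L = exp(-dv/ve) - (ve^2/v_ch^2)*(1 - exp(-dv/ve)),
# with characteristic velocity v_ch^2 = 2*eta*t/alpha (Stuhlinger-style model).
# All numerical inputs below are illustrative assumptions.
import math

def payload_fraction(ve, dv, alpha, eta, trip_time):
    v_ch2 = 2.0 * eta * trip_time / alpha   # (m/s)^2
    r = math.exp(-dv / ve)                  # rocket-equation mass ratio
    return r - (ve * ve / v_ch2) * (1.0 - r)

dv = 6.0e3                            # m/s, illustrative mission delta-v
alpha = 0.02                          # kg/W power-plant specific mass (assumed)
eta, trip_time = 0.7, 300 * 86400.0   # thruster efficiency, 300-day trip

# brute-force scan for the exhaust velocity that maximizes payload fraction
best_ve = max(range(5000, 100001, 500),
              key=lambda v: payload_fraction(v, dv, alpha, eta, trip_time))
print(best_ve, payload_fraction(best_ve, dv, alpha, eta, trip_time))
```

The trade captured here is the one the abstract names: higher specific impulse saves propellant but grows the power-plant mass, so an optimum exhaust velocity exists for a given trip time and specific mass.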
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2009-09-01
Development of a successful method for forecasting storm winds, including squalls and tornadoes, and heavy rainfalls, which often result in human and material losses, would allow proper measures to be taken against the destruction of buildings and for the protection of people. A successful forecast well in advance (from 12 to 48 hours) makes it possible to reduce the losses. Prediction of the phenomena involved was, until recently, a very difficult problem for forecasters. The existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal precipitation and wind velocities V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of the phenomena involved on a number of atmospheric parameters (predictors). A statistical decisive rule for the alternative and probabilistic forecast of these events was obtained in accordance with the concept of "perfect prognosis", using data from objective analysis. For this purpose, separate training samples for the presence and absence of storm wind and heavy rainfall were assembled automatically, each including the values of forty physically substantiated potential predictors. An empirical statistical method was then used that involves diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. In this way the most informative predictors for these phenomena were selected without loss of information. The statistical decisive rules U(X) for diagnosis and prognosis of the phenomena involved were calculated for the chosen informative vector-predictor. The Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion were used for predictor selection.
Successful development of hydrodynamic models for short-term forecasting, and the improvement of 36-48 h forecasts of pressure, temperature and other parameters, allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 150x150 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To convert to the alternative (yes/no) forecast, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Peirce-Obukhov criterion T, the skill of these automated statistical forecasts of squalls and tornadoes 36-48 hours ahead, and of heavy rainfalls in the warm season, for the territory of Italy, Spain and the Balkan countries is T = 1 - a - b = 0.54-0.78 in the author's experiments. Many examples of very successful forecasts of summer storm wind and heavy rainfalls over the territories of Italy and Spain are presented in this report. The same decisive rules were also applied to the forecast of these phenomena during the cold period of this year. This winter, heavy snowfalls in Spain and Italy and storm winds over these territories were observed very often, and our forecasts were successful.
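A decision rule of the Mahalanobis type used above can be sketched as follows: classify a predictor vector to whichever class mean it is closer to in the Mahalanobis metric. The class means, covariance, and test points below are invented toy values, not the forty-predictor operational rule:

```python
# Toy sketch of an alternative (yes/no) Mahalanobis decision rule: forecast
# the event iff the predictor vector x is closer, in Mahalanobis distance,
# to the "event" class mean than to the "no event" mean. Data are invented.
import numpy as np

def mahalanobis2(x, mu, cov_inv):
    """Squared Mahalanobis distance (x - mu)^T Sigma^-1 (x - mu)."""
    d = x - mu
    return float(d @ cov_inv @ d)

mu_event = np.array([2.0, 1.5])       # mean predictor vector, event cases
mu_no_event = np.array([0.0, 0.0])    # mean predictor vector, non-event cases
cov_inv = np.linalg.inv(np.array([[1.0, 0.3],
                                  [0.3, 1.0]]))  # shared covariance, assumed

def forecast(x):
    return mahalanobis2(x, mu_event, cov_inv) < mahalanobis2(x, mu_no_event, cov_inv)

print(forecast(np.array([1.8, 1.2])), forecast(np.array([0.2, -0.1])))  # True False
```

The probabilistic variant of the forecast would replace the hard comparison by posterior probabilities derived from the same two distances.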
Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L
2017-02-01
The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated to the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). Later, CALB was adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in CALB solution) by means of Atomic Emission by Inductive Coupling Plasma (AE-ICP). Four different protocols were applied combining AE-ICP and classical Bradford assays, besides Carbon, Hydrogen and Nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in CALB solution was quantified. These errors were calculated considering as "true protein content values" the results of the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
2010-01-01
Background Detection of nerve involvement originating in the spine is a primary concern in the assessment of spine symptoms. Magnetic resonance imaging (MRI) has become the diagnostic method of choice for this detection. However, the agreement between MRI and other diagnostic methods for detecting nerve involvement has not been fully evaluated. The aim of this diagnostic study was to evaluate the agreement between nerve involvement visible in MRI and findings of nerve involvement detected in a structured physical examination and a simplified pain drawing. Methods Sixty-one consecutive patients referred for MRI of the lumbar spine were - without knowledge of MRI findings - assessed for nerve involvement with a simplified pain drawing and a structured physical examination. Agreement between findings was calculated as overall agreement, the p value for McNemar's exact test, specificity, sensitivity, and positive and negative predictive values. Results MRI-visible nerve involvement was significantly less common than, and showed weak agreement with, physical examination and pain drawing findings of nerve involvement in corresponding body segments. In spine segment L4-5, where most findings of nerve involvement were detected, the mean sensitivity of MRI-visible nerve involvement to a positive neurological test in the physical examination ranged from 16-37%. The mean specificity of MRI-visible nerve involvement in the same segment ranged from 61-77%. Positive and negative predictive values of MRI-visible nerve involvement in segment L4-5 ranged from 22-78% and 28-56% respectively. Conclusion In patients with long-standing nerve root symptoms referred for lumbar MRI, MRI-visible nerve involvement significantly underestimates the presence of nerve involvement detected by a physical examination and a pain drawing. 
A structured physical examination and a simplified pain drawing may reveal that many patients with "MRI-invisible" lumbar symptoms need treatment aimed at nerve involvement. Factors other than present MRI-visible nerve involvement may be responsible for findings of nerve involvement in the physical examination and the pain drawing. PMID:20831785
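The agreement statistics named above all derive from a 2x2 table of paired findings. A small sketch with invented counts (not the study's data):

```python
# Sketch of the agreement statistics in the abstract, computed from a 2x2
# table of MRI-visible nerve involvement vs. a reference finding (physical
# examination / pain drawing). Counts are invented for illustration.

def diagnostic_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "overall_agreement": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # MRI positive among reference positives
        "specificity": tn / (tn + fp),   # MRI negative among reference negatives
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=10, fp=5, fn=30, tn=16)
print({k: round(v, 2) for k, v in m.items()})
```

A large `fn` cell, as in this toy table, is exactly the pattern the study reports: MRI misses much of the nerve involvement that the examination and pain drawing detect, driving sensitivity down.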
Cheesman, Andrew; Harvey, Jeremy N; Ashfold, Michael N R
2008-11-13
Accurate potential energy surface calculations are presented for many of the key steps involved in diamond chemical vapor deposition on the [100] surface (in its 2 x 1 reconstructed and hydrogenated form). The growing diamond surface was described by using a large (approximately 1500 atoms) cluster model, with the key atoms involved in chemical steps being described by using a quantum mechanical (QM, density functional theory, DFT) method and the bulk of the atoms being described by molecular mechanics (MM). The resulting hybrid QM/MM calculations are more systematic and/or at a higher level of theory than previous work on this growth process. The dominant process for carbon addition, in the form of methyl radicals, is predicted to be addition to a surface radical site, opening of the adjacent C-C dimer bond, insertion, and ultimate ring closure. Other steps such as insertion across the trough between rows of dimer bonds or addition to a neighboring dimer leading to formation of a reconstruction on the next layer may also contribute. Etching of carbon can also occur; the most likely mechanism involves loss of a two-carbon moiety in the form of ethene. The present higher-level calculations confirm that migration of inserted carbon along both dimer rows and chains should be relatively facile, with barriers of approximately 150 kJ mol (-1) when starting from suitable diradical species, and that this step should play an important role in establishing growth of smooth surfaces.
Olah, George A; Surya Prakash, G K; Rasul, Golam
2008-07-16
The structures and energies of the carbocations C4H7+ and C5H9+ were calculated using ab initio methods. The 13C NMR chemical shifts of the carbocations were calculated using the GIAO-CCSD(T) method. The π,σ-delocalized bisected cyclopropylcarbinyl cation, 1, and the nonclassical bicyclobutonium ion, 2, were found to be the minima for C4H7+ at the MP2/cc-pVTZ level. At the MP4(SDTQ)/cc-pVTZ//MP2/cc-pVTZ + ZPE level, structure 2 is 0.4 kcal/mol more stable than structure 1. The 13C NMR chemical shifts of 1 and 2 were calculated by the GIAO-CCSD(T) method. Based on relative energies and 13C NMR chemical shift calculations, an equilibrium involving 1 and 2 in superacid solutions is most likely responsible for the experimentally observed 13C NMR chemical shifts, with the latter as the predominant equilibrating species. The α-methylcyclopropylcarbinyl cation, 4, and the nonclassical bicyclobutonium ion, 5, were found to be the minima for C5H9+ at the MP2/cc-pVTZ level. At the MP4(SDTQ)/cc-pVTZ//MP2/cc-pVTZ + ZPE level, ion 5 is 5.9 kcal/mol more stable than structure 4. The calculated 13C NMR chemical shifts of 5 agree rather well with the experimental values for C5H9+.
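The equilibrium argument above, that the observed shift of two rapidly interconverting ions is their population-weighted average, is a standard Boltzmann calculation. A sketch (the shift values are illustrative; only the 0.4 kcal/mol gap echoes the abstract):

```python
# Sketch of the equilibrium-averaging argument: the observed 13C shift of two
# rapidly interconverting species is the Boltzmann-weighted mean of their
# individual shifts. The 108/58 ppm shifts are illustrative placeholders.
import math

def boltzmann_average(shift1, shift2, dE_kcal, T=298.15):
    """dE_kcal = E1 - E2 (> 0 means species 2 is the more stable one)."""
    R = 1.987204e-3                     # gas constant, kcal/(mol K)
    w1 = math.exp(-dE_kcal / (R * T))   # population of 1 relative to 2
    return (w1 * shift1 + shift2) / (w1 + 1.0)

# two species separated by 0.4 kcal/mol, species 2 more stable
print(round(boltzmann_average(108.0, 58.0, 0.4), 1))
```

With such a small energy gap both species remain substantially populated, so the averaged shift lies between the two limits while leaning toward the more stable ion, consistent with the "predominant equilibrating species" wording.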
Wagner, Pablo; Standard, Shawn C; Herzenberg, John E
The multiplier method (MM) is frequently used to predict limb-length discrepancy and the timing of epiphysiodesis. The traditional MM uses complex formulae and requires a calculator. A mobile application was developed in an attempt to simplify and streamline these calculations. We compared the accuracy and speed of the traditional pencil-and-paper technique with those of the Multiplier App (MA). After attending a training lecture and a hands-on workshop on the MM and MA, 30 resident surgeons were asked to apply the traditional MM and the MA during different weeks of their rotations. They were randomized as to the method they applied first. Subjects performed calculations for 5 clinical exercises that involved congenital and developmental limb-length discrepancies and timing of epiphysiodesis. The amount of time required to complete the exercises and the accuracy of the answers were evaluated for each subject. The test subjects answered 60% of the questions correctly using the traditional MM and 80% correctly using the MA (P=0.001). The average amount of time to complete the 5 exercises with the MM and MA was 22 and 8 minutes, respectively (P<0.0001). Several reports state that the traditional MM is quick and easy to use. Nevertheless, even in the most experienced hands, performing the calculations in clinical practice can be time-consuming. Errors may result from choosing the wrong formulae and from performing the calculations by hand. Our data show that the MA is simpler, more accurate, and faster than the traditional MM from a practical standpoint. Level II.
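The simplest of the multiplier calculations, prediction of a congenital discrepancy at skeletal maturity, is a single multiplication; the complexity the abstract refers to comes from the developmental and epiphysiodesis-timing formulae built on top of it. A sketch of the core step (the multiplier value is illustrative, not taken from the published age/sex tables):

```python
# Sketch of the core multiplier calculation for a congenital limb-length
# discrepancy: predicted discrepancy at maturity = current discrepancy x M,
# where M is the age- and sex-specific multiplier. M = 1.8 is an assumed
# value for illustration, not an entry from the published tables.

def discrepancy_at_maturity(current_discrepancy_cm, multiplier):
    return current_discrepancy_cm * multiplier

print(discrepancy_at_maturity(2.0, 1.8))  # 3.6 (cm)
```

An app can embed the multiplier tables and formula selection, which is precisely where the study found hand calculation to be error-prone.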
NASA Astrophysics Data System (ADS)
Czarnecki, S.; Williams, S.
2017-12-01
The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector efficiency and maximum accelerating voltage. In order to test the method, experiments were performed in which low-energy electrons were incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was examined). Comparison of the results with theoretical values suggests that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.
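For context, the theoretical effective atomic number of a compound is commonly defined by a power-law average over its constituent elements. A sketch of that definition (the exponent m = 2.94 is one conventional choice for photon interactions; values vary in the literature and this is not necessarily the definition used in the paper):

```python
# Common power-law definition of effective atomic number (sketch):
#   Z_eff = ( sum_i a_i * Z_i**m )**(1/m),
# where a_i is the electron fraction of element i. The exponent m = 2.94 is
# one conventional literature choice, assumed here for illustration.

def z_eff(elements, m=2.94):
    """elements: list of (electron_fraction, Z) pairs."""
    return sum(a * z**m for a, z in elements) ** (1.0 / m)

# e.g. water: 2 of 10 electrons from H (Z=1), 8 of 10 from O (Z=8)
print(round(z_eff([(0.2, 1), (0.8, 8)]), 2))
```

For water this yields roughly 7.4, in line with commonly quoted values; mineral targets such as malachite, pyrite, and galena would be handled the same way from their stoichiometry.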
Kartal, Mehmet E.
2013-01-01
The contour method is one of the most prevalent destructive techniques for residual stress measurement. Up to now, the method has involved the use of the finite-element (FE) method to determine the residual stresses from the experimental measurements. This paper presents analytical solutions, obtained for a semi-infinite strip and a finite rectangle, which can be used to calculate the residual stresses directly from the measured data; thereby, eliminating the need for an FE approach. The technique is then used to determine the residual stresses in a variable-polarity plasma-arc welded plate and the results show good agreement with independent neutron diffraction measurements. PMID:24204187
9Be scattering with microscopic wave functions and the continuum-discretized coupled-channel method
NASA Astrophysics Data System (ADS)
Descouvemont, P.; Itagaki, N.
2018-01-01
We use microscopic 9Be wave functions defined in an α+α+n multicluster model to compute 9Be+target scattering cross sections. The parameter sets describing 9Be are generated in the spirit of the stochastic variational method, and the optimal solution is obtained by superposing Slater determinants and diagonalizing the Hamiltonian. The 9Be three-body continuum is approximated by square-integrable wave functions. The 9Be microscopic wave functions are then used in a continuum-discretized coupled-channel (CDCC) calculation of 9Be+208Pb and 9Be+27Al elastic scattering. Without any parameter fitting, we obtain fair agreement with experiment. For a heavy target, the influence of 9Be breakup is important, while it is weaker for light targets. This result confirms previous nonmicroscopic CDCC calculations. One of the main advantages of the microscopic CDCC is that it is based on nucleon-target interactions only; there is no adjustable parameter. The present work represents a first step towards more ambitious calculations involving heavier Be isotopes.
Fast Fourier transform discrete dislocation dynamics
NASA Astrophysics Data System (ADS)
Graham, J. T.; Rollett, A. D.; LeSar, R.
2016-12-01
Discrete dislocation dynamics simulations have been generally limited to modeling systems described by isotropic elasticity. Effects of anisotropy on dislocation interactions, which can be quite large, have generally been ignored because of the computational expense involved when including anisotropic elasticity. We present a different formalism of dislocation dynamics in which the dislocations are represented by the deformation tensor, which is a direct measure of the slip in the lattice caused by the dislocations and can be considered as an eigenstrain. The stresses arising from the dislocations are calculated with a fast Fourier transform (FFT) method, from which the forces are determined and the equations of motion are solved. Use of the FFTs means that the stress field is only available at the grid points, which requires some adjustments/regularizations to be made to the representation of the dislocations and the calculation of the force on individual segments, as is discussed hereinafter. A notable advantage of this approach is that there is no computational penalty for including anisotropic elasticity. We review the method and apply it in a simple dislocation dynamics calculation.
Fukunishi, Yoshifumi
2010-01-01
For fragment-based drug development (FBDD), both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, naïve protein-compound docking calculations show poor accuracy in docking-pose prediction, so post-processing of the docking results is necessary; several post-processing methods have recently been proposed. In FBDD, the compounds are smaller than those used in conventional drug screening, which makes the protein-compound docking calculation difficult; a method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful for reducing the procedures involved in the chemical modification of the hit fragment, and several methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.
Kim, Seung-Cheol; Kim, Eun-Soo
2009-02-20
In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principle fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
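The run-length grouping step described above can be sketched as follows. This is a hypothetical minimal illustration of collapsing adjacent object points with the same value into (value, run-length) pairs, so an N-point run can later be matched to an N-point fringe pattern; `run_length_encode` is an assumed name, not the authors' code.

```python
def run_length_encode(points):
    """Collapse adjacent equal values into (value, run_length) pairs."""
    runs = []
    for p in points:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

# A row of object points with spatial redundancy:
row = [5, 5, 5, 2, 2, 9]
assert run_length_encode(row) == [(5, 3), (2, 2), (9, 1)]
```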
An iterative method for analysis of hadron ratios and spectra in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Choi, Suk; Lee, Kang Seog
2016-04-01
A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low-momentum cut-off, without assuming Lorentz-boost invariance of the rapidity distribution. It is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √s_NN = 2.76 TeV. To correctly restrict the resonance contribution to the small rapidity interval measured, we consider only ratios of hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fit to both the ratios and the transverse momentum spectra is excellent. Moreover, the calculated ratios involving strange baryons, obtained with the fitted parameters, agree with the data surprisingly well.
Barfield, A; Melo, J; Coutinho, E; Alvarez-Sanchez, F; Faundes, A; Brache, V; Leon, P; Frick, J; Bartsch, G; Weiske, W H; Brenner, P; Mishell, D; Bernstein, G; Ortiz, A
1979-08-01
A potential male contraceptive approach was evaluated in clinical trials involving monthly injections of depot medroxyprogesterone acetate and either subdermal implants of testosterone propionate or monthly injections of testosterone enanthate. Pregnancies occurred in partners of 9 men with recent sperm counts of 10 million/ml or below. In 5 of the 9 instances, the sperm counts were less than 1 million/ml. It appears that male contraceptive methods involving spermatogenic suppression may require attainment and maintenance of azoospermia. The pregnancy rate cannot be calculated, because the extent of other contraceptive use is uncertain. There were no spontaneous abortions. 6 pregnancies were carried to term, and all progeny were normal, based on physical examination at birth or 3 months after birth.
Interference method for obtaining the potential flow past an arbitrary cascade of airfoils
NASA Technical Reports Server (NTRS)
Katzoff, S; Finn, Robert S; Laurence, James C
1947-01-01
A procedure is presented for obtaining the pressure distribution on an arbitrary airfoil section in cascade in a two-dimensional, incompressible, and nonviscous flow. The method considers directly the influence on a given airfoil of the rest of the cascade and evaluates this interference by an iterative process, which appeared to converge rapidly in the cases tried (about unit solidity, stagger angles of 0 degree and 45 degrees). Two variations of the basic interference calculations are described. One, which is accurate enough for most purposes, involves the substitution of sources, sinks, and vortices for the interfering airfoils; the other, which may be desirable for the final approximation, involves a contour integration. The computations are simplified by the use of a chart presented by Betz in a related paper. Illustrated examples are included.
Laser-induced plasma characterization through self-absorption quantification
NASA Astrophysics Data System (ADS)
Hou, JiaJia; Zhang, Lei; Zhao, Yang; Yan, Xingyu; Ma, Weiguang; Dong, Lei; Yin, Wangbao; Xiao, Liantuan; Jia, Suotang
2018-07-01
A method is proposed to quantify the degree of self-absorption of spectral lines, from which plasma characteristics including electron temperature, elemental concentration ratio, and absolute species number density can be deduced directly. Since no spectral intensity is involved in the calculation, the analysis results are independent of self-absorption effects, and no additional spectral efficiency calibration is required. To evaluate its practicality, the limitations and precision of the method are also discussed. Experimental results on an aluminum-lithium alloy prove that the proposed method is qualified for semi-quantitative measurements and fast plasma characteristics diagnostics.
NASA Astrophysics Data System (ADS)
Antsiferov, SV; Sammal, AS; Deev, PV
2018-03-01
To determine the stress-strain state of multilayer support in vertical shafts, including deviation of the tubing-ring cross sections from the design shape, the authors propose an analytical method based on the mechanics of underground structures, treating the support and the surrounding rock mass as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding elasticity problem, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computation. Examples of the calculation are given.
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, several papers address tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculations with vectors and the inversion of matrices. In this paper, we propose a simple method based on a scalar entity, the coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
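The scalar idea can be illustrated with the standard global F statistic computed from R² alone; this is a sketch of the underlying identity for a single complete data set, not the authors' multiple-imputation pooling rules.

```python
def f_from_r2(r2, n, k):
    """Global F statistic for a regression with k predictors and n
    observations, computed from the coefficient of determination alone:
    F = (R^2 / k) / ((1 - R^2) / (n - k - 1))."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

# R^2 = 0.5 with n = 103 observations and k = 2 predictors:
# F = 0.25 / (0.5 / 100) = 50.
assert abs(f_from_r2(0.5, 103, 2) - 50.0) < 1e-9
```

Because only the scalar R² enters, no vector arithmetic or matrix inversion is needed, which is the computational advantage the abstract highlights.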
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of current nuclear data library covariances, such as those in ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL, on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full-core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. First, although this study is not expected to lead to similar results between the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and offers some perspective on the range of validity of these uncertainties. Second, it allows us to draw a picture of the current state of knowledge, using existing nuclear data library covariances and current methods.
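The random-sampling propagation scheme can be sketched generically as below: inputs are drawn from a covariance matrix, a response is evaluated per sample, and the spread of the responses is the reported uncertainty. The mean vector, covariance matrix, and the toy response function standing in for a full lattice/transport calculation are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
mean = np.array([1.0, 2.0])            # nominal nuclear data parameters (toy)
cov = np.array([[0.01, 0.002],
                [0.002, 0.04]])        # their covariance matrix (toy)

def response(x):
    # stand-in for a full transport calculation of e.g. k-infinity
    return 1.0 + 0.3 * x[0] - 0.1 * x[1]

# Draw perturbed input sets and evaluate the response for each one.
samples = rng.multivariate_normal(mean, cov, size=5000)
values = np.array([response(s) for s in samples])

# The sample spread is the propagated nuclear data uncertainty.
k_mean, k_std = values.mean(), values.std(ddof=1)
```

For this linear toy response the analytic standard deviation is √(0.3²·0.01 + 0.1²·0.04 − 2·0.3·0.1·0.002) ≈ 0.0344, which the sampled estimate reproduces.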
Yago, Tomoaki; Wakasa, Masanobu
2015-04-21
A practical method to calculate time evolutions of magnetic field effects (MFEs) on photochemical reactions involving radical pairs is developed on the basis of the theory of the chemically induced dynamic spin polarization proposed by Pedersen and Freed. In theory, the stochastic Liouville equation (SLE), including the spin Hamiltonian, diffusion motions of the radical pair, chemical reactions, and spin relaxations, is solved by using the Laplace and the inverse Laplace transformation technique. In our practical approach, time evolutions of the MFEs are successfully calculated by applying the Miller-Guy method instead of the final value theorem to the inverse Laplace transformation process. In particular, the SLE calculations are completed in a short time when the radical pair dynamics can be described by the chemical kinetics consisting of diffusions, reactions and spin relaxations. The SLE analysis with a short calculation time enables one to examine various parameter sets for fitting the experimental data. Our study demonstrates that simultaneous fitting of the time evolution of the MFE and of the magnetic field dependence of the MFE provides valuable information on the diffusion motions of radical pairs in nano-structured materials such as micelles, where the lifetimes of radical pairs are longer than hundreds of nanoseconds and the magnetic field dependence of the spin relaxations plays a major role in the generation of the MFE.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron.
Gaul, Konstantin; Berger, Robert
2017-07-07
A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron
NASA Astrophysics Data System (ADS)
Gaul, Konstantin; Berger, Robert
2017-07-01
A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
A constrained modulus reconstruction technique for breast cancer assessment.
Samani, A; Bishop, J; Plewes, D B
2001-09-01
A reconstruction technique for the breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image, and that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative; each iteration involves modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be performed by two rigid compression plates, so the stress is calculated using the finite element method with the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, the modulus is updated element by element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
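The element-by-element Hooke's-law update can be sketched as below with illustrative numbers; in the actual technique the stresses come from a finite element solve under the compression-plate boundary conditions, and the update is repeated until convergence.

```python
def update_moduli(stresses, strains):
    """One modulus-update iteration: Hooke's law E = stress / strain,
    applied element by element."""
    return [s / e for s, e in zip(stresses, strains)]

# Toy values: uniform calculated stress (kPa) and measured quasi-static strains.
calculated_stress = [10.0, 10.0, 10.0]
measured_strain = [0.010, 0.002, 0.005]

moduli = update_moduli(calculated_stress, measured_strain)
# Stiffer (suspicious) elements strain less under the same stress,
# so they come back with a higher modulus (here 5000 vs 1000 kPa).
assert all(abs(m - e) < 1e-9 for m, e in zip(moduli, [1000.0, 5000.0, 2000.0]))
```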
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Calawerts, William M; Lin, Liyu; Sprott, JC; Jiang, Jack J
2016-01-01
Objective/Hypothesis: The purpose of this paper is to introduce rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that rate of divergence would provide an objective measure that can quantify all four voice types. Study Design: 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate-of-divergence analysis methods. Methods: We developed a new parameter, rate of divergence, which uses a modified version of Wolf's algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points over the next three points in the time series, followed in three time-delayed embedding dimensions. This measure was compared to existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. Results: There was a direct relationship between voice type and rate of divergence. The calculation is especially effective at differentiating between type 3 and type 4 voices (p < 0.001), and is as effective as existing methods at differentiating type 1, type 2, and type 3 signals. Conclusion: The rate-of-divergence calculation introduced is an objective measure that can be used to distinguish between all four voice types based on the amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. PMID:26920858
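The core divergence idea can be expressed as a simplified sketch, not Wolf's full algorithm or the authors' exact parameter choices: embed the signal with a time delay, find each point's nearest neighbor, and average how their separation grows over the next few steps. The embedding dimension, delay, and horizon below are assumed values.

```python
import numpy as np

def rate_of_divergence(x, dim=3, tau=1, horizon=3):
    """Mean separation growth of nearest-neighbor pairs in a time-delay
    embedding, followed 'horizon' steps ahead (illustrative parameters)."""
    n = len(x) - (dim - 1) * tau - horizon
    emb = np.array([x[i:i + dim * tau:tau]
                    for i in range(len(x) - (dim - 1) * tau)])
    growths = []
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        j = int(np.argmin(d))              # nearest neighbor in the embedding
        d0 = max(d[j], 1e-12)              # guard against zero separation
        dh = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
        growths.append(dh / d0)
    return float(np.mean(growths))
```

A perfectly constant signal yields zero divergence, while irregular (disordered) signals yield larger values, which is the monotone relationship with voice type that the abstract reports.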
NASA Astrophysics Data System (ADS)
Demissie, Taye B.
2017-11-01
The NMR chemical shifts and indirect spin-spin coupling constants of 12 molecules containing 29Si, 73Ge, 119Sn, and 207Pb [X(CCMe)4, Me2X(CCMe)2, and Me3XCCH] are presented. The results are obtained from non-relativistic as well as two- and four-component relativistic density functional theory (DFT) calculations. The scalar and spin-orbit relativistic contributions as well as the total relativistic corrections are determined. The main relativistic effect in these molecules is due not to spin-orbit coupling but rather to the scalar relativistic contraction of the s-shells. The correlation between the calculated and experimental indirect spin-spin coupling constants showed that the four-component relativistic DFT approach using the PBE0 hybrid exchange-correlation functional (based on the Perdew-Burke-Ernzerhof exchange and correlation functionals) gives results in good agreement with experimental values. The indirect spin-spin coupling constants calculated using the spin-orbit zeroth order regular approximation together with the hybrid PBE0 functional and the specially designed J-coupling (JCPL) basis sets are in good agreement with the results obtained from the four-component relativistic calculations. For the coupling constants involving the heavy atoms, the relativistic corrections are of the same order of magnitude as the non-relativistically calculated results. Based on comparisons of the calculated results with available experimental values, the best results for all the chemical shifts and for the experimentally unavailable indirect spin-spin coupling constants of all the molecules are reported, in the hope that these accurate results will serve to benchmark future DFT calculations. The present study also demonstrates that the four-component relativistic DFT method has reached a level of maturity that makes it a convenient and accurate tool for calculating indirect spin-spin coupling constants of "large" molecular systems involving heavy atoms.
Non-local transport in turbulent MHD convection
NASA Technical Reports Server (NTRS)
Miesch, Mark; Brandenburg, Axel; Zweibel, Ellen; Toomre, Juri
1995-01-01
The nonlocal, non-diffusive transport of passive scalars in turbulent magnetohydrodynamic (MHD) convection is investigated using transilient matrices. These matrices describe the probability that a tracer particle beginning at one position in a flow will be advected to another position after some time. A method for calculating these matrices from simulation data, which involves following the trajectories of passive tracer particles and computing their transport statistics, is presented. The method is applied to study transport in several simulations of turbulent, rotating, three-dimensional, compressible, penetrative MHD convection. Transport coefficients and other diagnostics are used to quantify the transport, which is found to resemble advection more closely than diffusion. Some of the results have direct relevance to other physical problems, such as light-element depletion in solar-type stars: the large kurtosis found for downward-moving particles at the base of the convection zone implies several extreme events.
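Building a transilient matrix from tracer start and end positions can be sketched as below: bin both sets of positions, count transitions, and normalize each row so entry (i, j) reads as the probability of moving from bin i to bin j. The bin edges, particle data, and function name are illustrative, not the authors' code.

```python
import numpy as np

def transilient_matrix(z_start, z_end, edges):
    """Row-normalized transition-count matrix between position bins."""
    i = np.digitize(z_start, edges) - 1
    j = np.digitize(z_end, edges) - 1
    nbins = len(edges) - 1
    counts = np.zeros((nbins, nbins))
    np.add.at(counts, (i, j), 1)                    # count i -> j transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)

# Toy trajectories: particles drift a little from their starting depth.
rng = np.random.default_rng(1)
z0 = rng.uniform(0, 1, 1000)
z1 = np.clip(z0 + 0.1 * rng.standard_normal(1000), 0, 0.999)
T = transilient_matrix(z0, z1, np.linspace(0, 1, 11))
assert np.allclose(T.sum(axis=1), 1.0)              # rows are probabilities
```

Purely diffusive transport concentrates weight near the diagonal; heavy off-diagonal tails (large kurtosis) are the signature of the advective, nonlocal transport the abstract describes.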
Calculation of x-ray scattering patterns from nanocrystals at high x-ray intensity
Abdullah, Malik Muhammad; Jurek, Zoltan; Son, Sang-Kil; Santra, Robin
2016-01-01
We present a generalized method to describe the x-ray scattering intensity of the Bragg spots in a diffraction pattern from nanocrystals exposed to intense x-ray pulses. Our method involves the subdivision of a crystal into smaller units. In order to calculate the dynamics within every unit, we employ a Monte Carlo/molecular dynamics/ab initio hybrid framework using real-space periodic boundary conditions. By combining all the units, we simulate the diffraction pattern of a crystal larger than the transverse x-ray beam profile, a situation commonly encountered in femtosecond nanocrystallography experiments with focused x-ray free-electron laser radiation. Radiation damage is not spatially uniform and depends on the fluence associated with each specific region inside the crystal. To investigate the effects of uniform and non-uniform fluence distribution, we have used two different spatial beam profiles, Gaussian and flattop. PMID:27478859
NASA Astrophysics Data System (ADS)
Dogra, Sugandha; Singh, Jasveer; Lodh, Abhishek; Dilawar Sharma, Nita; Bandyopadhyay, A. K.
2011-02-01
This paper reports the behavior of a well-characterized pneumatic piston gauge in the pressure range up to 8 MPa through simulation using the finite element method (FEM). Experimentally, the effective area of this piston gauge has been estimated by cross-floating to obtain A0 and λ. The FEM technique addresses this problem through simulation and optimization with standard commercial software (ANSYS), where the material properties of the piston and cylinder, dimensional measurements, etc. are used as input parameters. The simulation provides the effective area Ap as a function of pressure in the free-deformation mode. From these data, one can estimate Ap versus pressure and thereby A0 and λ. Further, we have carried out a similar theoretical calculation of Ap using the conventional method involving the Dadson as well as the Johnson-Newhall equations. A comparison of these results with the experimental results has been carried out.
Higher-order gravitational lensing reconstruction using Feynman diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, Elizabeth E.; Manohar, Aneesh V.; Yadav, Amit P.S.
2014-09-01
We develop a method for calculating the correlation structure of the Cosmic Microwave Background (CMB) using Feynman diagrams, when the CMB has been modified by gravitational lensing, Faraday rotation, patchy reionization, or other distorting effects. This method is used to calculate the bias of the Hu-Okamoto quadratic estimator in reconstructing the lensing power spectrum up to O(φ⁴) in the lensing potential φ. We consider both the diagonal noise TT TT, EB EB, etc. and, for the first time, the off-diagonal noise TT TE, TB EB, etc. The previously noted large O(φ⁴) term in the second-order noise is identified to come from a particular class of diagrams. It can be significantly reduced by a reorganization of the φ expansion. These improved estimators have almost no bias for the off-diagonal case involving only one B component of the CMB, such as EE EB.
Ab initio ONIOM-molecular dynamics (MD) study on the deamination reaction by cytidine deaminase.
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-08-23
We applied the ONIOM-molecular dynamics (MD) method to the hydrolytic deamination of cytidine by cytidine deaminase, which is an essential step of the activation process of the anticancer drug inside the human body. The direct MD simulations were performed for a realistic model of cytidine deaminase, calculating the energy and its gradient on the fly with the ab initio ONIOM method. The ONIOM-MD calculations, which include thermal motion, show that the neighboring amino acid residue is an important factor in the environmental effects and significantly affects not only the geometry and energy of the substrate trapped in the pocket of the active site but also the elementary steps of the catalytic reaction. We successfully simulate the second half of the catalytic cycle, which has been considered to involve the rate-determining step, and reveal that the rate-determining step is the release of the NH3 molecule.
NASA Technical Reports Server (NTRS)
Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot
1992-01-01
An improved computational technique for calculating the electromagnetic force field, power absorption, and deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetric bodies. Computed results are presented to represent the behavior of levitation-melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared with both analytical solutions and the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, actually predicting the deformed shape of the specimens, breaks new ground and should be the major usefulness of the proposed method.
The He + H̄ → He p̄ + e⁺ rearrangement
NASA Astrophysics Data System (ADS)
Todd, Allan C.; Armour, Edward A. G.
2006-06-01
In this paper, we present a summary of our work in progress on calculating cross sections for the He + H̄ → He p̄ + e⁺ rearrangement process in He-H̄ scattering. This has involved a study of the He p̄ system within the Born-Oppenheimer (BO) approximation using the Rayleigh-Ritz variational method. This work has been reported in [A.C. Todd, E.A.G. Armour, J. Phys. B 38 (2005) 3367] and is summarised here. Similar calculations are in progress for the He + H̄ entrance channel. We intend to use the entrance-channel and rearrangement-channel wave functions to obtain the cross sections for the rearrangement using the distorted-wave Born approximation T-matrix method described elsewhere in these proceedings [E.A.G. Armour, S. Jonsell, Y. Liu, A.C. Todd, these Proceedings, doi:10.1016/j.nimb.2006.01.049].
The development and application of CFD technology in mechanical engineering
NASA Astrophysics Data System (ADS)
Wei, Yufeng
2017-12-01
Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by numerical calculation and graphical display on a computer. The fidelity with which a numerical method captures the complexity of the physical problem, and the precision of the numerical solution, are directly related to computer hardware such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied to water conservancy engineering, environmental engineering, and industrial engineering. This paper summarizes the development of CFD, its theoretical basis, and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
Structural Code Considerations for Solar Rooftop Installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred
2014-12-01
Residential rooftop solar panel installations are limited in part by the high cost of structure-related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods used to calculate stresses on a roof structure involve simplifying assumptions that reduce a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding an analysis based on a single rafter or the top chord of a truss; consequently, the analysis can be overly conservative. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural calculations.
Development of car theft crime index in peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zulkifli, Malina; Ismail, Noriszura; Razali, Ahmad Mahir; Kasim, Maznah Mat
2014-06-01
Vehicle theft is classified as property crime and is the most frequently reported crime in Malaysia. The rising number of vehicle thefts requires proper control by the relevant authorities, especially through planning and implementation of strategic and effective measures. Nevertheless, the effort to control this crime would be much easier if there were an indicator or index specific to vehicle theft. This study aims to build a crime index specific to vehicle theft. The development of the vehicle theft index proposed in this study requires three main steps: the first involves identification of criteria related to vehicle theft; the second requires calculation of the degrees of importance, or weights, of the criteria, which involves application of correlation and entropy methods; and the final step involves building the vehicle theft index using the method of linear combination, or the weighted arithmetic average. The results show that the two methods used for determining the weights of the vehicle theft index give similar results. The information generated can be used as a primary source for local authorities to plan strategies for the reduction of vehicle theft and for insurance companies to determine premium rates of automobile insurance.
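The entropy-weighting and weighted-average steps can be sketched as below, with illustrative district-by-criterion data; `entropy_weights` and `theft_index` are assumed names, not the authors' code, and the entropy method here is the standard textbook form rather than the paper's exact variant.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights: criteria whose values vary more across districts
    (lower entropy) receive larger weights. X must be strictly positive."""
    P = X / X.sum(axis=0)                          # share of each district per criterion
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # normalized entropy in [0, 1]
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

def theft_index(X, w):
    """Weighted arithmetic average (linear combination) of the criteria."""
    return X @ w

# Toy data: 3 districts x 2 theft-related criteria.
X = np.array([[120., 30.],
              [80., 25.],
              [200., 90.]])
w = entropy_weights(X)
idx = theft_index(X / X.max(axis=0), w)   # normalize criteria before combining
assert abs(w.sum() - 1.0) < 1e-12
```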
NASA Technical Reports Server (NTRS)
Laufer, A. H.; Gardner, E. P.; Kwok, T. L.; Yung, Y. L.
1983-01-01
The rate coefficients, including Arrhenius parameters, have been computed for a number of chemical reactions involving hydrocarbon species for which experimental data are not available and which are important in planetary atmospheric models. The techniques used to calculate the kinetic parameters include the Troe and semiempirical bond energy-bond order (BEBO) or bond strength-bond length (BSBL) methods.
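Once the Arrhenius parameters A and Ea are estimated (whether by the Troe, BEBO, or BSBL techniques mentioned above), the rate coefficient at any temperature follows from k(T) = A·exp(−Ea/(R·T)). As a worked illustration with invented A and Ea values (not from the paper):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Arrhenius rate coefficient k(T) = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# A barrierless reaction (Ea = 0) proceeds at k = A regardless of temperature:
assert arrhenius(1.0e-11, 0.0, 200.0) == 1.0e-11
# With a positive barrier, the rate grows with temperature:
assert arrhenius(1.0e-11, 20000.0, 300.0) > arrhenius(1.0e-11, 20000.0, 200.0)
```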
HEMP 3D: A finite difference program for calculating elastic-plastic flow, appendix B
NASA Astrophysics Data System (ADS)
Wilkins, Mark L.
1993-05-01
The HEMP 3D program can be used to solve problems in solid mechanics involving dynamic plasticity and time dependent material behavior and problems in gas dynamics. The equations of motion, the conservation equations, and the constitutive relations listed below are solved by finite difference methods following the format of the HEMP computer simulation program formulated in two space dimensions and time.
Semi-quantitative spectrographic analysis and rank correlation in geochemistry
Flanagan, F.J.
1957-01-01
The rank correlation coefficient, rs, which involves less computation than the product-moment correlation coefficient, r, can be used to indicate the degree of relationship between two elements. The method is applicable in situations where the assumptions underlying normal-distribution correlation theory may not be satisfied. Semi-quantitative spectrographic analyses which are reported as grouped or partly ranked data can be used to calculate rank correlations between elements. © 1957.
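A sketch of the computation the abstract describes: Spearman's rs on data containing ties (such as grouped spectrographic classes) assigns tied values their midranks before correlating the ranks. This is a generic illustration, not Flanagan's original procedure.

```python
def midranks(xs):
    """Rank the values in xs, assigning tied values the average
    (mid) rank of their group."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rs(x, y):
    """Spearman rank correlation: Pearson correlation of the midranks."""
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

For perfectly monotone data rs is 1 (or -1 for a reversed ordering), and grouped data simply produce many ties handled by the midranks.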
Stewart, James J P
2016-11-01
A new method for predicting the energy contributions to substrate binding and to specificity has been developed. Conventional global optimization methods do not permit the subtle effects responsible for these properties to be modeled with sufficient precision to allow confidence to be placed in the results, but by making simple alterations to the model, the precisions of the various energies involved can be improved from about ±2 kcal mol⁻¹ to ±0.1 kcal mol⁻¹. This technique was applied to the oxidized nucleotide pyrophosphohydrolase enzyme MTH1. MTH1 is unusual in that the binding and reaction sites are well separated, an advantage from a computational chemistry perspective, as it allows the energetics involved in docking to be modeled without the need to consider any issues relating to reaction mechanisms. In this study, two types of energy terms were investigated: the noncovalent interactions between the binding site and the substrate, and those responsible for discriminating between the oxidized nucleotide 8-oxo-dGTP and the normal dGTP. Both of these were investigated using the semiempirical method PM7 in the program MOPAC. The contributions of the individual residues to both the binding energy and the specificity of MTH1 were calculated by simulating the effect of mutations. Where comparisons were possible, all calculated results were in agreement with experimental observations. This technique provides fresh insight into the binding mechanism that enzymes use for discriminating between possible substrates.
Developing a Conceptually Equivalent Type 2 Diabetes Risk Score for Indian Gujaratis in the UK
Patel, Naina; Stone, Margaret; Barber, Shaun; Gray, Laura; Davies, Melanie; Khunti, Kamlesh
2016-01-01
Aims. To apply and assess the suitability of a model consisting of commonly used cross-cultural translation methods to achieve a conceptually equivalent Gujarati language version of the Leicester self-assessment type 2 diabetes risk score. Methods. Implementation of the model involved multiple stages, including pretesting of the translated risk score by conducting semistructured interviews with a purposive sample of volunteers. Interviews were conducted on an iterative basis to enable findings to inform translation revisions and to elicit volunteers' ability to self-complete and understand the risk score. Results. The pretest stage was an essential component involving recruitment of a diverse sample of 18 Gujarati volunteers, many of whom gave detailed suggestions for improving the instructions for the calculation of the risk score and BMI table. Volunteers found the standard and level of Gujarati accessible and helpful in understanding the concept of risk, although many of the volunteers struggled to calculate their BMI. Conclusions. This is the first time that a multicomponent translation model has been applied to the translation of a type 2 diabetes risk score into another language. This project provides an invaluable opportunity to share learning about the transferability of this model for translation of self-completed risk scores in other health conditions. PMID:27703985
NASA Astrophysics Data System (ADS)
Ye, Qian; Lin, Haoze
2017-07-01
Though extensively used in calculating the optical force and torque acting on a material object illuminated by a laser, the Maxwell stress tensor (MST) method follows the electromagnetic linear and angular momentum balance that is usually derived in most textbooks for a continuous volume charge distribution in free space, if not resorting to the application of Noether’s theorem in electrodynamics. To cast the conservation laws into a physically appealing form involving the current densities of linear and angular momentum, on which the MST method is based, the divergence theorem is employed to transform a volume integral into a surface integral. When a material object of finite volume is put into the field, it brings about a discontinuity of the field across its surface, due to the presence of induced surface charge and surface current. Ambiguity arises among students about whether the divergence theorem can still be directly used without any justification. By taking into account the effect of the induced surface charge and current, we present a simple pedagogical derivation of the MST method for calculating the optical force and torque on an object immersed in a monochromatic optical field, without resorting to Noether’s theorem. Although the results turn out to be identical to those given in the standard textbooks, our derivation avoids the direct use of the divergence theorem on a discontinuous function.
Schut, T C; Hesselink, G; de Grooth, B G; Greve, J
1991-01-01
We have developed a computer program based on the geometrical optics approach proposed by Roosen to calculate the forces on dielectric spheres in focused laser beams. We have explicitly taken into account the polarization of the laser light and the divergence of the laser beam. The model can be used to evaluate the stability of optical traps in a variety of different optical configurations. Our calculations explain the experimental observation by Ashkin that a stable single-beam optical trap, without the help of the gravitation force, can be obtained with a strongly divergent laser beam. Our calculations also predict a different trap stability in the directions orthogonal and parallel to the polarization direction of the incident light. Different experimental methods were used to test the predictions of the model for the gravity trap. A new method for measuring the radiation force along the beam axis in both the stable and unstable regions is presented. Measurements of the radiation force on polystyrene spheres with diameters of 7.5 and 32 microns in a TEM00-mode laser beam showed a good qualitative correlation with the predictions and a slight quantitative difference. The validity of the geometrical approximations involved in the model is discussed for spheres of different sizes and refractive indices.
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
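For reference, the S-system (power-law) form being modeled, dX_i/dt = α_i Π_j X_j^g_ij − β_i Π_j X_j^h_ij, can be integrated with a simple explicit Euler step. This is a generic sketch of the formalism only, not the authors' Jacobian-based linearization method.

```python
def s_system_step(X, alpha, g, beta, h, dt):
    """One explicit Euler step of an S-system:
    dX_i/dt = alpha_i * prod_j X_j**g[i][j] - beta_i * prod_j X_j**h[i][j]."""
    def power_term(rate, exps):
        prod = rate
        for xj, e in zip(X, exps):
            prod *= xj ** e
        return prod
    return [xi + dt * (power_term(a, gi) - power_term(b, hi))
            for xi, a, gi, b, hi in zip(X, alpha, g, beta, h)]
```

For a single metabolite with constant production (g = 0) and first-order degradation (h = 1), each step moves the concentration toward the steady state X = α/β.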
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bockris, J.O.; Devanathan, M.A.V.
The galvanostatic double charging method was applied to determine the coverage of Ni cathodes with adsorbed atomic H in 2 N NaOH solutions. Anodic current densities were varied from 0.05 to 1.8 amp/sq cm. The plateau indicating absence of readsorption was between 0.6 and 1.8 amp/sq cm, for a constant cathodic c.d. of 10⁻⁴ amp/sq cm. The adsorbed H was determined over cathodic c.d.'s ranging from 10⁻⁶ to 10⁻¹ amp/sq cm at a constant anodic c.d. of 1 amp/sq cm, and the coverage was calculated. The mechanism of the H evolution reaction was elucidated. The rate-determining step is discharge from a water molecule followed by rapid Tafel recombination. The rate constants for these processes, and the rate constant for the ionisation, calculated with the extrapolated value of coverage for the reversible H electrode, were determined. A modification of the Tafel equation which takes into account both coverage and ionisation is in harmony with the results. A new method for the determination of coverage, suitable for corrodible metals, is described; it involves measuring the rate of permeation of H by electrochemical techniques, which enhances the sensitivity of the method. (Author)
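For orientation, the classic Tafel relation referenced above links overpotential logarithmically to current density. A minimal sketch of that baseline form follows; the paper's modified equation, which additionally accounts for coverage and ionisation, is not reproduced here.

```python
import math

def tafel_overpotential(i, i0, b):
    """Classic Tafel relation: eta = b * log10(i / i0), with exchange
    current density i0 and Tafel slope b (volts per decade of current)."""
    return b * math.log10(i / i0)
```

Each decade of current density above i0 adds one Tafel slope b to the overpotential.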
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
Single-sample method for the estimation of glomerular filtration rate in children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.; Bagchi, A.; Tepe, P.G.
1987-03-01
A method for the determination of the glomerular filtration rate (GFR) in children which involves the use of a single plasma sample (SPS) after the injection of a radioactive indicator such as radioiodine-labeled diatrizoate (Hypaque) has been developed. This is analogous to previously published SPS techniques for effective renal plasma flow (ERPF) in adults and children and SPS GFR techniques in adults. As a reference standard, GFR was calculated from compartment analysis of injected radiopharmaceuticals (Sapirstein method). Theoretical volumes of distribution at various times after injection (Vt) were calculated by dividing the total injected counts (I) by the plasma concentration (Ct), expressed per liter, determined by counting an aliquot of plasma in a well-type scintillation counter. Errors of predicting GFR from the various Vt values were determined as the standard error of estimate (Sy.x) in ml/min. They were found to be relatively high early after injection and to fall to a nadir of 3.9 ml/min at 91 min. The Sy.x-Vt relationship was examined in linear, quadratic, and exponential form, but the simpler linear relationship was found to yield the lowest error. Other data calculated from the compartment analysis of the reference plasma disappearance curves are presented, but at this time they have apparently little clinical relevance.
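A sketch of the single-sample arithmetic described above: the apparent distribution volume is Vt = I / Ct, and GFR is then predicted from Vt through the fitted linear relationship. The regression coefficients below are placeholders for illustration, not the published fit.

```python
def distribution_volume_liters(injected_counts, plasma_counts_per_ml):
    """Vt = I / Ct, with the plasma concentration Ct converted
    from counts/ml to counts/liter."""
    return injected_counts / (plasma_counts_per_ml * 1000.0)

def predict_gfr(vt_liters, slope, intercept):
    """Linear prediction GFR ~= slope * Vt + intercept (ml/min);
    slope and intercept must come from a fitted reference dataset."""
    return slope * vt_liters + intercept
```

With 10^6 injected counts and 100 counts/ml in the sample, the apparent volume is 10 liters; the linear map then turns that volume into a GFR estimate.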
A multi-physics study of Li-ion battery material Li1+xTi2O4
NASA Astrophysics Data System (ADS)
Jiang, Tonghu; Falk, Michael; Siva Shankar Rudraraju, Krishna; Garikipati, Krishna; van der Ven, Anton
2013-03-01
Recently, lithium ion batteries have been subject to intense scientific study due to growing demand arising from their utilization in portable electronics, electric vehicles, and other applications. Most cathode materials in lithium ion batteries involve a two-phase process during charging and discharging, and the rate of these processes is typically limited by the slow interface mobility. We have modeled how lithium diffusion in the interface region affects the motion of the phase boundary. We have developed a multi-physics computational method suitable for predicting the time evolution of the driven interface. In this method, we calculate formation energies and migration energy barriers by ab initio methods, which are then approximated by cluster expansions. Monte Carlo calculations are further employed to obtain thermodynamic and kinetic information, e.g., anisotropic interfacial energies and mobilities, which are used to parameterize continuum modeling of the charging and discharging processes. We test this methodology on spinel Li1+xTi2O4. Elastic effects are incorporated into the calculations to determine the effect of variations in modulus and strain on stress concentrations and failure modes within the material. We acknowledge support by the National Science Foundation Cyber Discovery and Innovation Program under Award No. 1027765.
NASA Astrophysics Data System (ADS)
Brandt, C.; Thakur, S. C.; Tynan, G. R.
2016-04-01
Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of the method introduced here is its capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics, and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understanding the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
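A simplified sketch of delay-based velocimetry: estimate the lag between two probe time series by maximizing their cross-correlation, then convert the lag to a velocity using the probe spacing. The published method uses cross-phase analysis of three non-collinear points per vector; this two-point, time-domain version only illustrates the underlying idea.

```python
def estimate_delay(a, b):
    """Lag (in samples) at which series b best aligns with series a,
    found by brute-force maximization of the cross-correlation."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = sum(a[i] * b[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

def velocity(spacing, lag, dt):
    """Propagation velocity from probe spacing and time delay lag*dt."""
    return spacing / (lag * dt)
```

For a structure that reaches the second probe a few samples later, the estimated lag times the sampling interval gives the transit time, and spacing over transit time gives the speed.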
Neutral Kaon Mixing from Lattice QCD
NASA Astrophysics Data System (ADS)
Bai, Ziyuan
In this work, we report the lattice calculation of two important quantities which emerge from second-order K0-K̄0 mixing: ΔM_K and ε_K. The RBC-UKQCD collaboration has performed the first calculation of ΔM_K with unphysical kinematics [1]. We now extend this calculation to near-physical and physical ensembles. In these physical or near-physical calculations, the two-pion energies are below the kaon threshold, and we have to examine the two-pion intermediate-state contribution to ΔM_K, as well as the enhanced finite-volume corrections arising from these two-pion intermediate states. We also report the first lattice calculation of the long-distance contribution to the indirect CP violation parameter ε_K. This calculation involves the treatment of a short-distance, ultraviolet divergence that is absent in the calculation of ΔM_K, and we report our techniques for correcting this divergence on the lattice. In this calculation, we used unphysical quark masses on the same ensemble that we used in [1]. Therefore, rather than providing a physical result, this calculation demonstrates the technique for calculating ε_K and provides an approximate understanding of the size of the long-distance contributions. Various new techniques are employed in this work, such as the use of All-Mode-Averaging (AMA), All-to-All (A2A) propagators, and the super-jackknife method in analyzing the data.
Eulerian adaptive finite-difference method for high-velocity impact and penetration problems
NASA Astrophysics Data System (ADS)
Barton, P. T.; Deiterding, R.; Meiron, D.; Pullin, D.
2013-05-01
Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the locally non-smooth finite deformations involved, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high-velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.
NASA Astrophysics Data System (ADS)
Ouyang, Lizhi
A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals method was carried out using a combined computational and theoretical approach. For high-performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us port the programs of the method to the national supercomputer facilities. The program was upgraded from Fortran 77 to Fortran 90 and given dynamic memory allocation. A preliminary parallel High Performance Fortran version of the program has been developed as well, although further scalability improvements are needed for it to be of real benefit. In order to circumvent the difficulties of the analytical force calculation in the method, we developed a geometry optimization scheme using the finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful General Utility Lattice Program, which offers many desired features such as multiple optimization schemes and use of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program. Their optimized geometries were in excellent agreement with the experimental data. For nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex Vitamin B12 derivative, the OHCbl crystal. In order to overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring of the OHCbl model was consistent with the large open-ring pi bond.
One interesting finding of the calculation was that the Co-OH bond is weak. This, together with ongoing projects studying different Vitamin B12 derivatives, might help us answer questions about the Co-C cleavage of the B12 coenzyme, which is involved in many important B12 enzymatic reactions.
Hispanic children and the obesity epidemic: Exploring the role of abuelas
Pulgarón, Elizabeth R.; Patiño-Fernández, Anna Maria; Sanchez, Janine; Carrillo, Adriana; Delamater, Alan
2014-01-01
Objective This study evaluated the proportion of Hispanic children who have grandparents involved in caretaking and whether grandparents’ involvement has a negative impact on feeding practices, children's physical activity, and body mass index (BMI). Method One hundred ninety-nine children and their parents were recruited at an elementary school. Parents completed a questionnaire regarding the involvement of their children's grandparents as caretakers and the feeding and physical activity practices of those grandparents when with the child. Children's height and weight were measured and zBMI scores were calculated. Results Forty-three percent of parents reported that a grandparent was involved in their child's caretaking. Grandparents served a protective role on zBMI for youth of Hispanic descent, except for the Cuban subgroup. There was no relationship between grandparent involvement and feeding and physical activity behaviors. Conclusions In some cases grandparents may serve a protective function against childhood obesity. These results highlight the need for future research on grandparents and children's health, especially among Hispanic subgroups. PMID:24059275
NASA Technical Reports Server (NTRS)
Stallcop, James R.; Partridge, Harry; Levin, Eugene; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
Collision integrals are fundamental quantities required to determine the transport properties of the environment surrounding aerospace vehicles in the upper atmosphere. These collision integrals can be determined as a function of temperature from the potential energy curves describing the atomic and molecular collisions. Ab initio calculations provide a practical method of computing the required interaction potentials. In this work we will discuss recent advances in scattering calculations with an emphasis on the accuracy that is obtainable. Results for interactions of the atoms and ionized atoms of nitrogen and oxygen will be reviewed and their application to the determination of transport properties, such as diffusion and viscosity coefficients, will be examined.
Chemical element transport in stellar evolution models
Cassisi, Santi
2017-01-01
Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints. PMID:28878972
Chemical element transport in stellar evolution models.
Salaris, Maurizio; Cassisi, Santi
2017-08-01
Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints.
Sensitivity analysis of complex coupled systems extended to second and higher order derivatives
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
In the design of engineering systems, "what if" questions often arise, such as: what will be the change in aircraft payload if the wing aspect ratio is increased by 10 percent? Answers to such questions are commonly sought by incrementing the pertinent variable and reevaluating the major disciplinary analyses involved. These analyses are contributed by engineering disciplines that are usually coupled, as are aerodynamics, structures, and performance in the context of the question above. Such "what if" questions can be answered precisely by computing derivatives. A method for calculating the first derivatives was developed previously. An algorithm is presented here for calculating the second and higher order derivatives.
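The brute-force alternative the abstract mentions, incrementing a variable and reevaluating the analyses, amounts to finite differencing. A generic sketch of first- and second-derivative estimates by central differences (this is the baseline approach, not the paper's analytic method):

```python
def first_derivative(f, x, h=1e-5):
    """Central-difference estimate of df/dx; one extra analysis
    evaluation on each side of the nominal design point."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def second_derivative(f, x, h=1e-3):
    """Central-difference estimate of d2f/dx2, reusing the nominal
    evaluation f(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
```

Each derivative costs additional full reevaluations of the coupled analyses and is sensitive to the step size h, which is exactly why analytic sensitivity methods are attractive for coupled systems.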
CCKT Calculation of e-H Total Cross Sections
NASA Technical Reports Server (NTRS)
Bhatia, Aaron K.; Schneider, B. I.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2002-01-01
We are in the process of carrying out calculations of e-H total cross sections using the complex-correlation Kohn-T (CCKT) method. In an earlier paper we described the methodology more completely, but confined calculations to the elastic scattering region, with definitive, precision results for S-wave phase shifts. Here we extend the calculations to the (low) continuum (1 < k² < 3) using a Green's function formulation. This avoids having to solve integro-differential equations; rather, we evaluate indefinite integrals involving appropriate Green's functions and the (complex) optical potential to find the scattering function u(r). From the asymptotic form of u(r) we extract T_L, which is a complex number. From T_L, the elastic and total cross sections follow: σ_L(elastic) = (4π/k²)(2L+1)|T_L|² and σ_L(total) = (4π/k²)(2L+1) Im(T_L).
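The standard partial-wave cross-section formulas can be sketched numerically. As a consistency check, for purely elastic scattering T_L = e^{iδ} sin δ, and the optical theorem makes the elastic and total expressions coincide. A generic sketch under that convention, not specific to the CCKT calculation:

```python
import cmath
import math

def sigma_elastic(k, L, T):
    """Partial-wave elastic cross section: (4*pi/k^2)(2L+1)|T_L|^2."""
    return 4.0 * math.pi / k**2 * (2 * L + 1) * abs(T) ** 2

def sigma_total(k, L, T):
    """Partial-wave total cross section via the optical theorem:
    (4*pi/k^2)(2L+1) Im(T_L)."""
    return 4.0 * math.pi / k**2 * (2 * L + 1) * T.imag
```

When the optical potential is complex (open inelastic channels), Im(T_L) exceeds |T_L|² and the difference measures the absorption cross section.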
Poirier, Bill; Salam, A
2004-07-22
In a previous paper [J. Theo. Comput. Chem. 2, 65 (2003)], one of the authors (B.P.) presented a method for solving the multidimensional Schrodinger equation, using modified Wilson-Daubechies wavelets and a simple phase space truncation scheme. Unprecedented numerical efficiency was achieved, enabling a ten-dimensional calculation of nearly 600 eigenvalues to be performed using direct matrix diagonalization techniques. In a second paper [J. Chem. Phys. 121, 1690 (2004)], and in this paper, we extend and elaborate upon the previous work in several important ways. The second paper focuses on construction and optimization of the wavelet functions, from theoretical and numerical viewpoints, and also examines their localization. This paper deals with their use in representations and eigenproblem calculations, which are extended to 15-dimensional systems. Even higher dimensionalities are possible using more sophisticated linear algebra techniques. This approach is ideally suited to rovibrational spectroscopy applications, but can be used in any context where differential equations are involved.
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time-harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and the elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation: numerical precision issues in the CONEAX stiffness calculation cause large errors in the system matrices for nearly cylindrical cones.
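The intensity computation described, pressures plus pressure derivatives, follows from the time-harmonic momentum equation v = -∇p/(iωρ), with time-averaged intensity I = ½ Re(p v*). A one-dimensional sketch of that step (generic, not the NASTRAN implementation):

```python
def acoustic_intensity_1d(p, dpdx, omega, rho):
    """Time-averaged acoustic intensity for a time-harmonic field:
    I = 0.5 * Re(p * conj(v)), with particle velocity
    v = -dpdx / (1j * omega * rho) from the momentum equation."""
    v = -dpdx / (1j * omega * rho)
    return 0.5 * (p * v.conjugate()).real
```

For a plane wave p(x) = exp(-ikx) this reduces to the textbook result I = |p|² / (2ρc), directed along the propagation axis.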
Spectral method for the static electric potential of a charge density in a composite medium
NASA Astrophysics Data System (ADS)
Bergman, David J.; Farhi, Asaf
2018-04-01
A spectral representation for the static electric potential field in a two-constituent composite medium is presented. A theory is developed for calculating the quasistatic eigenstates of Maxwell's equations for such a composite. The local physical potential field produced in the system by a given source charge density is expanded in this set of orthogonal eigenstates for any position r. The source charges can be located anywhere, i.e., inside any of the constituents. This is shown to work even if the eigenfunctions are normalized in an infinite volume. If the microstructure consists of a cluster of separate inclusions in a uniform host medium, then the quasistatic eigenstates of all the separate isolated inclusions can be used to calculate the eigenstates of the total structure as well as the local potential field. Once the eigenstates are known for a given host and a given microstructure, then calculation of the local field only involves calculating three-dimensional integrals of known functions and solving sets of linear algebraic equations.
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well suited for large-scale calculations. The approach is particularly adapted to setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
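The core numerical idea, expanding a function of the spectrum in Chebyshev polynomials, can be sketched on a scalar Fermi-like function. In CheSS the same expansion is applied to sparse matrices via the Chebyshev recurrence; this toy version only shows the coefficient computation and Clenshaw evaluation, and the smoothing parameter is a made-up example value.

```python
import math

def cheb_coeffs(f, order, npts=400):
    """Chebyshev expansion coefficients of f on [-1, 1],
    computed by Gauss-Chebyshev quadrature."""
    c = []
    for k in range(order + 1):
        s = sum(f(math.cos(math.pi * (j + 0.5) / npts))
                * math.cos(k * math.pi * (j + 0.5) / npts)
                for j in range(npts))
        c.append(2.0 * s / npts)
    c[0] *= 0.5
    return c

def cheb_eval(c, x):
    """Evaluate sum_k c_k T_k(x) with the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]
```

A smooth function on a narrow spectral window needs only a modest polynomial order, which is exactly the "small spectral width" regime where the library is reported to excel.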
On the origin independence of the Verdet tensor†
NASA Astrophysics Data System (ADS)
Caputo, M. C.; Coriani, S.; Pelloni, S.; Lazzeretti, P.
2013-07-01
The condition for invariance under a translation of the coordinate system of the Verdet tensor and the Verdet constant, calculated via quantum chemical methods using gaugeless basis sets, is expressed by a vanishing sum rule involving a third-rank polar tensor. The sum rule is, in principle, satisfied only in the ideal case of optimal variational electronic wavefunctions. In general, it is not fulfilled in non-variational calculations and variational calculations allowing for the algebraic approximation, but it can be satisfied for reasons of molecular symmetry. Group-theoretical procedures have been used to determine (i) the total number of non-vanishing components and (ii) the unique components of both the polar tensor appearing in the sum rule and the axial Verdet tensor, for a series of symmetry groups. Test calculations at the random-phase approximation level of accuracy for water, hydrogen peroxide and ammonia molecules, using basis sets of increasing quality, show a smooth convergence to zero of the sum rule. Verdet tensor components calculated for the same molecules converge to limit values, estimated via large basis sets of gaugeless Gaussian functions and London orbitals.
Using DFT Methods to Study Activators in Optical Materials
Du, Mao-Hua
2015-08-17
Density functional theory (DFT) calculations of various activators (ranging from transition metal ions, rare-earth ions, ns2 ions, to self-trapped and dopant-bound excitons) in phosphors and scintillators are reviewed. As a single-particle ground-state theory, DFT calculations cannot reproduce the experimentally observed optical spectra, which involve transitions between multi-electronic states. However, DFT calculations can generally provide sufficiently accurate structural relaxation and distinguish different hybridization strengths between an activator and its ligands in different host compounds. This is important because the activator-ligand interaction often governs the trends in luminescence properties in phosphors and scintillators, and can be used to search for new materials. DFT calculations of the electronic structure of the host compound and the positions of the activator levels relative to the host band edges in scintillators are also important for finding optimal host-activator combinations for high light yields and fast scintillation response. Mn4+ activated red phosphors, scintillators activated by Ce3+, Eu2+, Tl+, and excitons are shown as examples of using DFT calculations in phosphor and scintillator research.
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929
NASA Technical Reports Server (NTRS)
Eskins, Jonathan
1988-01-01
The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, are presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg are compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.
Monte Carlo based electron treatment planning and cutout output factor calculations
NASA Astrophysics Data System (ADS)
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.
NASA Astrophysics Data System (ADS)
Moghadasi, Jalil; Yousefi, Fakhri; Papari, Mohammad Mehdi; Faghihi, Mohammad Ali; Mohsenipour, Ali Asghar
2009-09-01
It is the purpose of this paper to extract unlike intermolecular potential energies of five carbon dioxide-based binary gas mixtures, namely CO2-He, CO2-Ne, CO2-Ar, CO2-Kr, and CO2-Xe, from viscosity data and compare the calculated potentials with other model potentials reported in the literature. Then, dilute transport properties consisting of viscosity, diffusion coefficient, thermal diffusion factor, and thermal conductivity of the aforementioned mixtures are calculated from the extracted potential energies and compared with literature data. Rather accurate correlations for the viscosity coefficients of these mixtures, covering the temperature range 200 K < T < 3273.15 K, are reproduced from the present unlike intermolecular potential energies. Our estimated accuracies for the viscosity are within ±2%. In addition, the calculated potential energies are used to present smooth correlations for other transport properties. The accuracies of the binary diffusion coefficients are of the order of ±3%. Finally, the unlike interaction energies and the calculated low-density viscosities have been employed to calculate high-density viscosities using the Vesovic-Wakeham method.
Reddy, M Rami; Erion, Mark D
2009-12-01
Molecular dynamics (MD) simulations in conjunction with the thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations, using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging the results of three shorter simulations started from different configurations.
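The estimator behind such perturbation calculations is Zwanzig's exponential formula, ΔF = -kT ln⟨exp(-ΔU/kT)⟩_A, averaged over configurations sampled in the reference state. A minimal sketch of just this averaging step (the MD machinery itself is of course absent; the kT default assumes kcal/mol near room temperature):

```python
import numpy as np

def fep_free_energy(dU, kT=0.593):
    """Zwanzig free-energy perturbation estimate.

    dU: array of U_B - U_A evaluated on configurations sampled in state A.
    Returns dF = -kT * ln <exp(-dU/kT)>_A, computed with a log-sum-exp
    shift for numerical stability.
    """
    x = -np.asarray(dU, dtype=float) / kT
    m = x.max()
    return -kT * (m + np.log(np.mean(np.exp(x - m))))
```

For a Gaussian distribution of ΔU with mean μ and variance σ², the estimate should converge to μ - σ²/(2kT), a useful sanity check on convergence.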
Homayoon, D; Dahlhoff, P; Augustin, M
2017-12-15
Uncertainty regarding the suitable amount of prescribed ointment and its application by patients may cause insufficient or uneconomic health care provision. To address this issue, standardized methods and experts' knowledge on the suitable amount, together with coherent patient education on the application of topicals, are needed. Presented are current data on routine care and scientific evidence on the prescribed amount of topical agents as well as their application by patients in dermatological care. A literature review was conducted via PubMed using the following keywords as individual and pooled search terms: "local therapy", "topical treatment", "prescription", "amount of ointment needed", "involved area", "BSA", "finger-tip-unit", "Rule of Hand", "calculated dosage" and "rule of nines". We included original studies by manually screening titles and abstracts according to relevance. The search strategy identified 19 clinical trials. The fingertip unit (FTU) is the most frequently used measurement for accurate application of external agents. The appropriate prescribed amount is calculated as the required amount of topical agent per involved surface area. There is still a need for clarification as to the extent to which the optimal amount of ointment is prescribed and advice on its application is given in routine care. The FTU combined with the "Rule of Hand" is an adequate measurement for patients' guidance on self-application.
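As a worked example of the FTU arithmetic: a commonly quoted rule of thumb is that one fingertip unit is about 0.5 g and covers roughly two adult hand areas, so a prescription amount follows from the involved area, the application frequency, and the treatment duration. The defaults below encode that rule of thumb for illustration only; they are not a clinical standard.

```python
def ointment_amount_g(hand_areas, applications_per_day, days,
                      g_per_ftu=0.5, hands_per_ftu=2.0):
    """Rule-of-Hand estimate of the total ointment to prescribe (grams).

    hand_areas: involved surface area measured in adult hand areas.
    The g_per_ftu and hands_per_ftu defaults are the common textbook
    values, used here as illustrative assumptions.
    """
    ftus_per_application = hand_areas / hands_per_ftu
    return ftus_per_application * g_per_ftu * applications_per_day * days
```

For example, a lesion of four hand areas treated twice daily for 14 days comes to 28 g under these assumptions.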
Reliability Driven Space Logistics Demand Analysis
NASA Technical Reports Server (NTRS)
Knezevic, J.
1995-01-01
Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems which are the basis for the determination of the demands for logistics resources needed during the operational life or mission of space systems. The applicability of the method presented is demonstrated through several examples.
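The link from reliability measures to logistics demand can be sketched with a standard Poisson spares model: if an item fails at a constant rate, expected demand over a mission is λt, and the stock level is the smallest count whose Poisson CDF meets a target fill rate. This is a textbook illustration of the idea, not the paper's specific method.

```python
import math

def spares_for_fill_rate(failure_rate, mission_hours, target=0.95):
    """Smallest stock level s with P(demand <= s) >= target,
    where demand ~ Poisson(failure_rate * mission_hours).

    Accumulates the Poisson CDF term by term:
    P(k) = P(k-1) * lam / k, starting from P(0) = exp(-lam).
    """
    lam = failure_rate * mission_hours
    s, term = 0, math.exp(-lam)
    cdf = term
    while cdf < target:
        s += 1
        term *= lam / s
        cdf += term
    return s
```

With a failure rate of 0.001 per hour over a 2000-hour mission (expected demand 2), a 95% fill rate calls for 5 spares under this model.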
Pseudorapidity configurations in collisions between gold nuclei and track-emulsion nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V., E-mail: lugovoi@uzsci.net
2010-07-15
A method of parametrically invariant quantities is developed for studying pseudorapidity configurations in nucleus-nucleus collisions involving a large number of secondary particles. In simple models where the spectrum of pseudorapidities depends on three parameters, the shape of the spectrum may differ strongly from the shape of pseudorapidity configurations in individual events. Pseudorapidity configurations in collisions between gold nuclei of energy 10.6 GeV per nucleon and track-emulsion nuclei are contrasted against those in random stars calculated theoretically. An investigation of pseudorapidity configurations in individual events is an efficient method for verifying theoretical models.
Study of diatomic molecules. 2: Intensities. [optical emission spectroscopy of ScO
NASA Technical Reports Server (NTRS)
Femenias, J. L.
1978-01-01
The theory of perturbations, giving the diatomic effective Hamiltonian, is used for calculating actual molecular wave functions and intensity factors involved in transitions between states arising from Hund's coupling cases a, b, intermediate a-b, and c tendency. The Herman and Wallis corrections are derived, without any knowledge of the analytical expressions of the wave functions, and generalized to transitions between electronic states of arbitrary symmetry and multiplicity. A general method for studying perturbed intensities is presented using primarily modern spectroscopic numerical approaches. The method is used in the study of the ScO optical emission spectrum.
Fast correlation method for passive-solar design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wray, W.O.; Biehl, F.A.; Kosiewicz, C.E.
1982-01-01
A passive-solar design manual for single-family detached residences and dormitory-type buildings is being developed. The design procedure employed in the manual is a simplification of the original monthly solar load ratio (SLR) method. The new SLR correlations involve a single constant for each system. The correlation constant appears as a scale factor permitting the use of a universal performance curve for all passive systems. Furthermore, by providing location-dependent correlations between the annual solar heating fraction (SHF) and the minimum monthly SHF, we have eliminated the need to perform an SLR calculation for each month of the heating season.
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem involving remotely sensed data. We introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is studied using PCA and wavelet-based fusion. We then propose utilizing a geostatistically based interpolation method called cokriging as a new approach for image fusion.
Method of controlling a resin curing process. [for fiber reinforced composites
NASA Technical Reports Server (NTRS)
Webster, Charles Neal (Inventor); Scott, Robert O. (Inventor)
1989-01-01
The invention relates to an analytical technique for controlling the curing process of fiber-reinforced composite materials that are formed using thermosetting resins. The technique, the percent gel method, involves development of a time-to-gel equation as a function of temperature. From this equation a rate-of-gel equation is determined, and percent gel is calculated as the product of rate-of-gel and time. Percent gel accounting is used to control the proper pressure application point in an autoclave cure process to achieve desired properties in a production composite part.
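The percent-gel accounting can be sketched numerically: given a time-to-gel law t_gel(T), the rate of gel is 1/t_gel evaluated along the measured temperature history, and percent gel is 100 times its time integral. The Arrhenius form and its constants below are hypothetical, chosen only to make the sketch runnable; they are not from the patent.

```python
import numpy as np

def percent_gel(times_min, temps_K, A=2.0e-8, Ea_over_R=9000.0):
    """Percent-gel accounting along a cure temperature history.

    Assumes a hypothetical Arrhenius time-to-gel law
    t_gel(T) = A * exp(Ea_over_R / T) in minutes. Rate-of-gel is
    1 / t_gel, integrated over time with the trapezoidal rule.
    """
    t = np.asarray(times_min, dtype=float)
    rate = 1.0 / (A * np.exp(Ea_over_R / np.asarray(temps_K, dtype=float)))
    dt = np.diff(t)
    return 100.0 * float(np.sum(0.5 * (rate[1:] + rate[:-1]) * dt))
```

In use, the running total would be monitored during the autoclave run and pressure applied when it reaches the target percent gel for the layup.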
Bertilson, Bo C; Brosjö, Eva; Billing, Hans; Strender, Lars-Erik
2010-09-10
Detection of nerve involvement originating in the spine is a primary concern in the assessment of spine symptoms. Magnetic resonance imaging (MRI) has become the diagnostic method of choice for this detection. However, the agreement between MRI and other diagnostic methods for detecting nerve involvement has not been fully evaluated. The aim of this diagnostic study was to evaluate the agreement between nerve involvement visible in MRI and findings of nerve involvement detected in a structured physical examination and a simplified pain drawing. Sixty-one consecutive patients referred for MRI of the lumbar spine were - without knowledge of MRI findings - assessed for nerve involvement with a simplified pain drawing and a structured physical examination. Agreement between findings was calculated as overall agreement, the p value for McNemar's exact test, specificity, sensitivity, and positive and negative predictive values. MRI-visible nerve involvement was significantly less common than, and showed weak agreement with, physical examination and pain drawing findings of nerve involvement in corresponding body segments. In spine segment L4-5, where most findings of nerve involvement were detected, the mean sensitivity of MRI-visible nerve involvement to a positive neurological test in the physical examination ranged from 16-37%. The mean specificity of MRI-visible nerve involvement in the same segment ranged from 61-77%. Positive and negative predictive values of MRI-visible nerve involvement in segment L4-5 ranged from 22-78% and 28-56% respectively. In patients with long-standing nerve root symptoms referred for lumbar MRI, MRI-visible nerve involvement significantly underestimates the presence of nerve involvement detected by a physical examination and a pain drawing. A structured physical examination and a simplified pain drawing may reveal that many patients with "MRI-invisible" lumbar symptoms need treatment aimed at nerve involvement. 
Factors other than present MRI-visible nerve involvement may be responsible for findings of nerve involvement in the physical examination and the pain drawing.
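The agreement measures reported in this study (overall agreement, sensitivity, specificity, and the predictive values) all derive from a single 2x2 table of paired positive/negative findings. A minimal helper showing the definitions (McNemar's exact test is omitted; the variable names are generic, not the study's notation):

```python
def agreement_stats(tp, fp, fn, tn):
    """Agreement measures for two binary findings summarized in a
    2x2 table: tp/fp/fn/tn counted with one finding as reference."""
    total = tp + fp + fn + tn
    return {
        "overall_agreement": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

For example, a low tp count relative to fn yields the low sensitivities (16-37%) reported for MRI-visible involvement against the physical examination.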
Risk analysis of chemical, biological, or radionuclear threats: implications for food security.
Mohtadi, Hamid; Murshid, Antu Panini
2009-09-01
If the food sector is attacked, the likely agents will be chemical, biological, or radionuclear (CBRN). We compiled a database of international terrorist/criminal activity involving such agents. Based on these data, we calculate the likelihood of a catastrophic event using extreme value methods. At present, the probability of an event leading to 5,000 casualties (fatalities and injuries) is between 0.1 and 0.3. However, pronounced, nonstationary patterns within our data suggest that the "reoccurrence period" for such attacks is decreasing every year. Similar disturbing trends are evident in a broader data set, which is nonspecific as to the methods or means of attack. While at present the likelihood of CBRN events is quite low, given an attack, the probability that it involves CBRN agents increases with the number of casualties. This is consistent with evidence of "heavy tails" in the distribution of casualties arising from CBRN events.
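The extreme-value step can be sketched with a peaks-over-threshold estimator: casualties above a threshold are modeled with a fitted tail, and the exceedance probability of a catastrophic level is extrapolated from it. The exponential tail below is the simplest special case of the generalized Pareto fit; the paper's actual model, including its nonstationarity, is richer.

```python
import numpy as np

def tail_exceedance_prob(casualties, threshold, level):
    """Peaks-over-threshold estimate of P(casualties > level).

    Fits an exponential tail to the excesses over the threshold
    (maximum-likelihood scale = mean excess) and combines it with the
    empirical probability of exceeding the threshold.
    """
    x = np.asarray(casualties, dtype=float)
    excesses = x[x > threshold] - threshold
    p_u = excesses.size / x.size          # empirical P(X > threshold)
    beta = excesses.mean()                # MLE scale of the exponential tail
    return p_u * np.exp(-(level - threshold) / beta)
```

A heavy-tailed (power-law) variant would replace the exponential with a generalized Pareto distribution with positive shape parameter.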
Stochastic determination of matrix determinants
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations (matrices) acting on the data are often not accessible directly but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of the determinant of a linear operator. Our method rests upon a reformulation of the log-determinant as an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
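The probing idea can be illustrated with a Hutchinson-style estimator: tr(log A) = log det A is sampled with random vectors, and log(A)z is built from matrix-vector products alone. The truncated Mercator series used below is a simplification standing in for the paper's integral representation, and it requires the symmetric positive definite operator's spectrum to lie in (0, 2); the names are illustrative.

```python
import numpy as np

def stochastic_logdet(matvec, n, num_probes=50, order=100, rng=None):
    """Stochastic estimate of log det A using only matrix-vector products.

    Hutchinson probing of tr(log A) with Rademacher vectors, where
    log(A) z is evaluated via the series log(A) = -sum_k (I - A)^k / k,
    valid for SPD A with spectrum in (0, 2).
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        v = z.copy()
        acc = np.zeros(n)
        for k in range(1, order + 1):
            v = v - matvec(v)                  # v becomes (I - A)^k z
            acc += v / k
        total += -z @ acc                      # estimate of z^T log(A) z
    return total / num_probes
```

In practice the operator would first be rescaled so its spectrum fits the convergence region, and the probe count controls the variance of the estimate.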
Computational inhibitor design against malaria plasmepsins.
Bjelic, S; Nervall, M; Gutiérrez-de-Terán, H; Ersmark, K; Hallberg, A; Aqvist, J
2007-09-01
Plasmepsins are aspartic proteases involved in the degradation of the host cell hemoglobin that is used as a food source by the malaria parasite. Plasmepsins are highly promising as drug targets, especially when combined with the inhibition of falcipains that are also involved in hemoglobin catabolism. In this review, we discuss the mechanism of plasmepsins I-IV in view of the interest in transition state mimetics as potential compounds for lead development. Inhibitor development against plasmepsin II as well as relevant crystal structures are summarized in order to give an overview of the field. Application of computational techniques, especially binding affinity prediction by the linear interaction energy method, in the development of malarial plasmepsin inhibitors has been highly successful and is discussed in detail. Homology modeling and molecular docking have been useful in the current inhibitor design project, and the combination of such methods with binding free energy calculations is analyzed.
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
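The combination step itself is just a fixed linear combination of component energies: Gaussian-2/3-style schemes add and subtract levels with unit coefficients, while multicoefficient methods fit noninteger coefficients. A trivial sketch (the energies in the test are hypothetical values in hartrees, not results from the paper):

```python
def multilevel_energy(energies, coefficients):
    """Combine component energies E_i with coefficients c_i: E = sum c_i E_i.

    For a unit-coefficient additivity scheme one might use, e.g.,
    E ~ E[high/small basis] + E[low/large basis] - E[low/small basis];
    multicoefficient methods instead fit the c_i against reference data.
    """
    assert len(energies) == len(coefficients)
    return sum(c * e for c, e in zip(coefficients, energies))
```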
Stated Preference Economic Development Model
2015-02-01
calculated the public benefit associated with Petroglyph by extracting the value for day hikes from the first study, the added value of rock art from the...2002. There is a lack of data and methods to determine the net social benefit of this aid. Additionally, currently available data are insufficient to...properly prioritize the usage and award of this aid. SPED involved the creation of tools that estimate the net social benefit of projects using
Methods of Single Station and Limited Data Analysis and Forecasting
1985-08-15
example using real data. Discusses modifications of SSA technique in certain climatological regimes and describes some statistical tech- niques for SSA of... caster has access to radar or satellite observations, or any computer products during the period of his isolation. Where calculations are involved, it is...chapters of the text will deal with special topics such as modifications of the SSA technique that must be considered for certain clima- tological regimes
Fuel quality/processing study. Volume 3: Fuel upgrading studies
NASA Technical Reports Server (NTRS)
Jones, G. E., Jr.; Bruggink, P.; Sinnett, C.
1981-01-01
The methods used to calculate the refinery selling prices for the turbine fuels of low quality are described. Detailed descriptions and economics of the upgrading schemes are included. These descriptions include flow diagrams showing the interconnection between processes and the stream flows involved. Each scheme is in a complete, integrated, stand alone facility. Except for the purchase of electricity and water, each scheme provides its own fuel and manufactures, when appropriate, its own hydrogen.
Time and Frequency Activities at the National Physical Laboratory
1999-12-01
TWSTFT ) time transfers are routinely forwarded to BIPM. The TWSTFT and GPS common-view measurements are used in the calculation of TAI. During recent...accuracy time and frequency dissemination methods in the UK. Two-Way Satellite Time and Frequency Transfer ( TWSTFT ) has been under development at NPL...since 1992, and regular TWSTFT sessions began in 1993. NPL was heavily involved in the early TWSTFT work, in particular studies of closing errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moignier, C; Huet, C; Barraux, V
Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and calculated dose distributions obtained with the MultiPlan Raytracing and MC algorithms as well as with the PENELOPE MC model for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
Caffrey, Emily A; Johansen, Mathew P; Higley, Kathryn A
2015-10-01
Radiological dosimetry for nonhuman biota typically relies on calculations that utilize the Monte Carlo simulations of simple, ellipsoidal geometries with internal radioactivity distributed homogeneously throughout. In this manner it is quick and easy to estimate whole-body dose rates to biota. Voxel models are detailed anatomical phantoms that were first used for calculating radiation dose to humans, which are now being extended to nonhuman biota dose calculations. However, if simple ellipsoidal models provide conservative dose-rate estimates, then the additional labor involved in creating voxel models may be unnecessary for most scenarios. Here we show that the ellipsoidal method provides conservative estimates of organ dose rates to small mammals. Organ dose rates were calculated for environmental source terms from Maralinga, the Nevada Test Site, Hanford and Fukushima using both the ellipsoidal and voxel techniques, and in all cases the ellipsoidal method yielded more conservative dose rates by factors of 1.2-1.4 for photons and 5.3 for beta particles. Dose rates for alpha-emitting radionuclides are identical for each method as full energy absorption in source tissue is assumed. The voxel procedure includes contributions to dose from organ-to-organ irradiation (shown here to comprise 2-50% of total dose from photons and 0-93% of total dose from beta particles) that is not specifically quantified in the ellipsoidal approach. Overall, the voxel models provide robust dosimetry for the nonhuman mammals considered in this study, and though the level of detail is likely extraneous to demonstrating regulatory compliance today, voxel models may nevertheless be advantageous in resolving ongoing questions regarding the effects of ionizing radiation on wildlife.
Variational Dirac-Hartree-Fock calculation of the Breit interaction
NASA Astrophysics Data System (ADS)
Goldman, S. P.
1988-04-01
The calculation of the retarded version of the Breit interaction in the context of the VDHF method is discussed. With the use of Slater-type basis functions, all the terms involved can be calculated in closed form. The results are expressed as an expansion in powers of one-electron energy differences and linear combinations of hypergeometric functions. Convergence is fast and high accuracy is obtained with a small number of terms in the expansion, even for high values of the nuclear charge. An added advantage is that the lowest order cancellations occurring in the retardation terms are accounted for exactly a priori. A comparison of the number of terms in the total expansion needed for an accuracy of 12 significant digits in the total energy, as well as a comparison of the results with and without retardation and in the local potential approximation, are presented for the carbon isoelectronic sequence.
Calculation of Thomson scattering spectral fits for interpenetrating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swadling, G. F., E-mail: george.swadling@imperial.ac.uk; Lebedev, S. V., E-mail: george.swadling@imperial.ac.uk; Burdiak, G. C.
2014-12-15
Collective mode optical Thomson scattering has been used to investigate the interactions of radially convergent ablation flows in tungsten wire arrays. These experiments were carried out at the Magpie pulsed power facility at Imperial College, London. Analysis of the scattered spectra has provided direct evidence of ablation stream interpenetration on the array axis, and has also revealed a previously unobserved axial deflection of the ablation streams towards the anode as they approach the axis. It has been suggested that this deflection is caused by the presence of a static magnetic field, advected with the ablation streams, stagnated and accrued around the axis. Analysis of the Thomson scattering spectra involved the calculation and fitting of the multi-component, non-relativistic, Maxwellian spectral density function S(k, ω). The method used to calculate the fits to the data is discussed in detail.
NASA Technical Reports Server (NTRS)
Mcmillan, O. J.; Mendenhall, M. R.; Perkins, S. C., Jr.
1984-01-01
Work is described dealing with two areas which are dominated by the nonlinear effects of vortex flows. The first area concerns the stall/spin characteristics of a general aviation wing with a modified leading edge. The second area concerns the high-angle-of-attack characteristics of high performance military aircraft. For each area, the governing phenomena are described as identified with the aid of existing experimental data. Existing analytical methods are reviewed, and the most promising method for each area is used to perform some preliminary calculations. Based on these results, the strengths and weaknesses of the methods are defined, and research programs are recommended to improve the methods through a better understanding of the flow mechanisms involved.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
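The spectral route from image to grain size can be sketched roughly as follows: take the 2-D power spectrum of the image and read the dominant spatial wavelength off its radially averaged profile. This is only a simplified illustration of the idea; the published algorithm's exact spectral statistics and calibration-free scaling are not reproduced here, and the spectral-centroid estimate and `px_per_unit` scaling are assumptions of this sketch.

```python
import numpy as np

def mean_grain_size(image, px_per_unit=1.0):
    """Estimate a characteristic grain size as the dominant wavelength of the
    radially averaged 2-D power spectrum (illustrative sketch only)."""
    img = np.asarray(image, dtype=float)
    img = img - img.mean()                      # remove the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int).ravel()
    counts = np.bincount(r)
    sums = np.bincount(r, weights=power.ravel())
    valid = counts > 0
    freqs = np.arange(len(counts))[valid] / max(nx, ny)   # cycles per pixel
    radial = sums[valid] / counts[valid]                   # mean power per radius
    keep = freqs > 0                                       # drop the DC bin
    f_mean = (freqs[keep] * radial[keep]).sum() / radial[keep].sum()
    return (1.0 / f_mean) / px_per_unit        # dominant wavelength ~ grain size
```

For a synthetic image with an 8-pixel periodicity, the estimate recovers a size close to 8 pixels.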
McGuigan, John A S; Kay, James W; Elder, Hugh Y
2016-09-01
In Ca(2+) and Mg(2+) buffer solutions the ionised concentrations ([X(2+)]) are either calculated or measured. Calculated values vary by up to a factor of seven due to the following four problems: 1) There is no agreement amongst the tabulated constants in the literature. These constants usually have to be corrected for ionic strength and temperature. 2) The ionic strength correction entails the calculation of the single ion activity coefficient, which involves non-thermodynamic assumptions; the data for temperature correction are not always available. 3) Measured pH is in terms of activity, i.e. pHa. pHa measurements are complicated by the change in the liquid junction potential at the reference electrode, making an accurate conversion from H(+) activity to H(+) concentration uncertain. 4) Ligands such as EGTA bind water and are not 100% pure. Ligand purity has to be measured, even when the [X(2+)] are calculated. The calculated [X(2+)] in buffers are so inconsistent that calculation is not an option. Until standards are available, the [X(2+)] in the buffers must be measured. The Ligand Optimisation Method is an accurate and independently verified method of doing this (McGuigan & Stumpff, Anal. Biochem. 436, 29, 2013). Lack of standards means it is not possible to compare the published [Ca(2+)] in the nmolar range, or the apparent constant (K') values for Ca(2+) and Mg(2+) binding to intracellular ligands, amongst different laboratories. Standardisation of Ca(2+)/Mg(2+) buffers is now essential. The parameters to achieve this are proposed.
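For a single 1:1 Ca:ligand equilibrium, the calculated free concentration the abstract refers to follows from a quadratic mass balance. A minimal sketch, assuming one apparent dissociation constant Kd that has already been corrected for pH, ionic strength and temperature (exactly the corrections the abstract identifies as the main source of disagreement):

```python
import math

def free_ca(ca_total, ligand_total, kd):
    """Free [Ca2+] for a 1:1 Ca:ligand equilibrium (all in the same units).

    Mass balance: Ca_total = [Ca] + [CaL], L_total = [L] + [CaL],
    with Kd = [Ca][L]/[CaL], gives the positive root of
    [Ca]^2 + (Kd + L_total - Ca_total)[Ca] - Kd*Ca_total = 0.
    """
    b = kd + ligand_total - ca_total
    return (-b + math.sqrt(b * b + 4.0 * kd * ca_total)) / 2.0
```

At half-saturation of the ligand (e.g. 5 mM total Ca in 10 mM EGTA) the free concentration equals Kd, as expected.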
Semiconductor material and method for enhancing solubility of a dopant therein
Sadigh, Babak; Lenosky, Thomas J.; Rubia, Tomas Diaz; Giles, Martin; Caturla, Maria-Jose; Ozolins, Vidvuds; Asta, Mark; Theiss, Silva; Foad, Majeed; Quong, Andrew
2003-09-09
A method for enhancing the equilibrium solubility of boron and indium in silicon. The method involves first-principles quantum mechanical calculations to determine the temperature dependence of the equilibrium solubility of two important p-type dopants in silicon, namely boron and indium, under various strain conditions. The equilibrium thermodynamic solubility of size-mismatched impurities, such as boron and indium in silicon, can be raised significantly if the silicon substrate is strained appropriately. For example, for boron, a 1% compressive strain raises the equilibrium solubility by 100% at 1100 °C; and for indium, a 1% tensile strain at 1100 °C corresponds to an enhancement of the solubility by 200%.
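The quoted numbers imply a large strain-solubility coupling. As a rough illustration, if strain lowers the dopant formation energy linearly, an Arrhenius-style estimate gives the enhancement factor. The linear coupling and the energy scale below are assumptions used to reproduce the patent's 1%-strain, 100%-enhancement example, not its first-principles calculation.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def solubility_enhancement(dE_per_strain_eV, strain, temp_k):
    """Ratio S(strain)/S(0), assuming the dopant formation energy drops
    linearly with strain (illustrative assumption):
    enhancement = exp(dE_per_strain * strain / kT)."""
    return math.exp(dE_per_strain_eV * strain / (K_B_EV * temp_k))
```

Under these assumptions, doubling the solubility at 1373 K (1100 °C) with 1% strain requires the formation energy to fall by about kT·ln 2 ≈ 0.082 eV per percent strain.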
A Semiconductor Material And Method For Enhancing Solubility Of A Dopant Therein
Sadigh, Babak; Lenosky, Thomas J.; Diaz de la Rubia, Tomas; Giles, Martin; Caturla, Maria-Jose; Ozolins, Vidvuds; Asta, Mark; Theiss, Silva; Foad, Majeed; Quong, Andrew
2005-03-29
A method for enhancing the equilibrium solubility of boron and indium in silicon. The method involves first-principles quantum mechanical calculations to determine the temperature dependence of the equilibrium solubility of two important p-type dopants in silicon, namely boron and indium, under various strain conditions. The equilibrium thermodynamic solubility of size-mismatched impurities, such as boron and indium in silicon, can be raised significantly if the silicon substrate is strained appropriately. For example, for boron, a 1% compressive strain raises the equilibrium solubility by 100% at 1100 °C; and for indium, a 1% tensile strain at 1100 °C corresponds to an enhancement of the solubility by 200%.
Automatic creation of object hierarchies for ray tracing
NASA Technical Reports Server (NTRS)
Goldsmith, Jeffrey; Salmon, John
1987-01-01
Various methods for evaluating generated trees are proposed. The use of the hierarchical extent method of Rubin and Whitted (1980) to find the objects that will be hit by a ray is examined. This method employs tree searching; the construction of a tree of bounding volumes in order to determine the objects that will be hit by a ray is discussed. A tree generation algorithm, which uses a heuristic tree search strategy, is described. The effects of shuffling and sorting the input data are investigated. The cost of inserting an object into the hierarchy during tree construction is estimated. The steps involved in estimating the number of intersection calculations are presented.
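The insertion-cost idea can be illustrated with a surface-area heuristic: the chance that an arbitrary ray must visit a node scales with its bounding box's surface area, so a cheap proxy for the cost of inserting an object under a node is the growth in that area. This is a sketch in the spirit of Goldsmith and Salmon's heuristic, not their exact bookkeeping.

```python
def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    dx, dy, dz = (hi[i] - lo[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def merged(lo1, hi1, lo2, hi2):
    """Smallest axis-aligned box enclosing two boxes."""
    return ([min(lo1[i], lo2[i]) for i in range(3)],
            [max(hi1[i], hi2[i]) for i in range(3)])

def insertion_cost(node_lo, node_hi, obj_lo, obj_hi):
    """Growth of the node's bounding-box surface area when the object is
    inserted beneath it -- a proxy for the extra probability that a random
    ray must test this subtree."""
    lo, hi = merged(node_lo, node_hi, obj_lo, obj_hi)
    return surface_area(lo, hi) - surface_area(node_lo, node_hi)
```

An object already inside the node's box costs nothing; one that enlarges the box is penalized in proportion to the added area.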
Method for enhancing the solubility of boron and indium in silicon
Sadigh, Babak; Lenosky, Thomas J.; Diaz de la Rubia, Tomas; Giles, Martin; Caturla, Maria-Jose; Ozolins, Vidvuds; Asta, Mark; Theiss, Silva; Foad, Majeed; Quong, Andrew
2002-01-01
A method for enhancing the equilibrium solubility of boron and indium in silicon. The method involves first-principles quantum mechanical calculations to determine the temperature dependence of the equilibrium solubility of two important p-type dopants in silicon, namely boron and indium, under various strain conditions. The equilibrium thermodynamic solubility of size-mismatched impurities, such as boron and indium in silicon, can be raised significantly if the silicon substrate is strained appropriately. For example, for boron, a 1% compressive strain raises the equilibrium solubility by 100% at 1100.degree. C.; and for indium, a 1% tensile strain at 1100.degree. C., corresponds to an enhancement of the solubility by 200%.
NASA Astrophysics Data System (ADS)
Gonzalez Lazo, Eduardo; Cruz Inclán, Carlos M.; Rodríguez Rodríguez, Arturo; Guzmán Martínez, Fernando; Abreu Alfonso, Yamiel; Piñera Hernández, Ibrahin; Leyva Fabelo, Antonio
2017-09-01
A first approach to evaluating the influence of point defects such as vacancies on atom displacement threshold energy values Td in BaTiO3 is attempted. For this purpose, molecular dynamics (MD) methods were applied, based on previous Td calculations for an ideal tetragonal crystalline structure. This is an important issue in achieving more realistic simulations of radiation damage effects in BaTiO3 ceramic materials, including irradiated samples suffering severe radiation damage from high-fluence exposures. In addition to the atom displacement events initiated by a single primary knock-on atom (PKA), a new mechanism was introduced: the simultaneous excitation of two close primary knock-on atoms in BaTiO3, which might take place under high-flux irradiation. Accordingly, two different BaTiO3 Td MD calculation trials were performed. First, single PKA excitations were considered in a defective BaTiO3 tetragonal crystalline structure, consisting of a 2×2×2 BaTiO3 perovskite-like supercell containing vacancies on Ba and O atomic positions under the requirements of electrical charge balance. Alternatively, double PKA excitations in a perfect BaTiO3 tetragonal unit cell were also simulated. On this basis, the corresponding PKA defect formation probability functions were calculated along the principal crystal directions and compared with those we previously calculated and reported for an ideal BaTiO3 tetragonal crystal structure. As a general result, the Td values in the present calculations are lower than those calculated for single PKA excitation in an ideal BaTiO3 crystal structure.
Lomond, Jasmine S; Tong, Anthony Z
2011-01-01
Analysis of dissolved methane, ethylene, acetylene, and ethane in water is crucial in evaluating anaerobic activity and investigating the sources of hydrocarbon contamination in aquatic environments. A rapid chromatographic method based on phase equilibrium between water and its headspace is developed for these analytes. The new method requires minimal sample preparation and no special apparatus except those associated with gas chromatography. Instead of Henry's Law used in similar previous studies, partition coefficients are used for the first time to calculate concentrations of dissolved hydrocarbon gases, which considerably simplifies the calculation involved. Partition coefficients are determined to be 128, 27.9, 1.28, and 96.3 at 30°C for methane, ethylene, acetylene, and ethane, respectively. It was discovered that the volume ratio of gas-to-liquid phase is critical to the accuracy of the measurements. The method performance can be readily improved by reducing the volume ratio of the two phases. Method validation shows less than 6% variation in accuracy and precision except at low levels of methane where interferences occur in ambient air. Method detection limits are determined to be in the low ng/L range for all analytes. The performance of the method is further tested using environmental samples collected from various sites in Nova Scotia.
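The back-calculation from a measured headspace concentration follows from a simple mass balance, and shows directly why the gas-to-liquid volume ratio matters. The convention K = C_gas/C_liquid below is an assumption for illustration; the paper defines its own partition coefficients (e.g. 128 for methane at 30°C).

```python
def dissolved_conc(c_gas, K, v_gas, v_liq):
    """Original dissolved concentration back-calculated from the equilibrium
    headspace concentration.

    Mass balance C0*Vl = Cl*Vl + Cg*Vg with K = Cg/Cl rearranges to
    C0 = Cg * (1/K + Vg/Vl), so a smaller gas-to-liquid volume ratio
    reduces the amplification of headspace measurement error.
    """
    return c_gas * (1.0 / K + v_gas / v_liq)
```

For a weakly soluble gas (large K), the Vg/Vl term dominates, which is consistent with the abstract's finding that the phase volume ratio is critical to accuracy.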
Pretest Predictions for Phase II Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiming Sun
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, and concrete pipe walls that will develop during the Phase II ventilation tests involving various test conditions. The results will be used as inputs for validating the numerical approach for modeling continuous ventilation, and to support the repository subsurface design. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the Phase II ventilation tests, and to describe the numerical methods that are used to calculate the effects of continuous ventilation. The calculation is limited to thermal effects only. This engineering work activity is conducted in accordance with the ''Technical Work Plan for: Subsurface Performance Testing for License Application (LA) for Fiscal Year 2001'' (CRWMS M&O 2000d). This technical work plan (TWP) includes an AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', activity evaluation (CRWMS M&O 2000d, Addendum A) that has determined this activity is subject to the YMP quality assurance (QA) program. The calculation is developed in accordance with the AP-3.12Q procedure, ''Calculations''. Additional background information regarding this activity is contained in the ''Development Plan for Ventilation Pretest Predictive Calculation'' (DP) (CRWMS M&O 2000a).
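The dominant thermal mechanism, ventilating air warming toward the wall temperature as it flows along the drift, can be caricatured with a steady one-dimensional advective heat balance. The model below is purely illustrative (the actual pretest predictions use a full numerical model) and all parameter names are placeholders.

```python
import math

def air_temperature_profile(t_in, t_wall, h, perim, mdot, cp, length, n=50):
    """Steady 1-D advection model: mdot*cp*dT/dx = h*P*(T_wall - T), giving
    an exponential approach of the air temperature to the wall temperature.

    t_in, t_wall : inlet air and wall temperatures
    h            : wall heat transfer coefficient (W/m^2/K)
    perim        : heated perimeter (m), mdot: mass flow (kg/s),
    cp           : air heat capacity (J/kg/K), length: duct length (m)
    Returns a list of (x, T) pairs.
    """
    tau = h * perim / (mdot * cp)              # inverse decay length, 1/m
    xs = [length * i / (n - 1) for i in range(n)]
    return [(x, t_wall + (t_in - t_wall) * math.exp(-tau * x)) for x in xs]
```

With a hot wall, the profile rises monotonically from the inlet temperature toward (but never past) the wall temperature.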
NASA Astrophysics Data System (ADS)
Perekhodtseva, Elvira V.
2010-05-01
Development of a successful method for forecasting storm winds, including squalls and tornadoes, which often result in human and material losses, would allow proper measures to be taken against the destruction of buildings and for the protection of people. A successful forecast well in advance (from 12 to 48 hours) makes it possible to reduce the losses. Until recently, prediction of these phenomena was a very difficult problem for forecasters. The existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal wind velocities V > 25 m/s, hence the main tools of objective forecasting are statistical methods using the dependence of the phenomena involved on a number of atmospheric parameters (predictors). A statistical decisive rule for the alternative and probability forecast of these events was obtained in accordance with the concept of "perfect prognosis", using data from objective analysis. For this purpose, separate training samples for the presence and absence of storm wind and rainfall were automatically assembled, containing the values of forty physically substantiated potential predictors. An empirical statistical method was then used that involved diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. Thus the most informative predictors for these phenomena were selected without losing information. The statistical decisive rules for diagnosis and prognosis of the phenomena involved, U(X), were calculated for the chosen informative vector-predictor. The Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion were used for the selection of predictors.
The successful development of hydrodynamic models for short-term forecasting, and the improvement of 36-48 h forecasts of pressure, temperature and other parameters, allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 75×75 km grid, together with the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. In order to apply the alternative forecast to the European part of Russia and to Europe, the author proposes empirical threshold values specified for this phenomenon and an advance period of 36 hours. According to the Pirsey-Obukhov criterion T, the skill of this hydrometeorological-statistical method for forecasting storm winds and tornadoes 36-48 hours ahead in the warm season over the European part of Russia and Siberia is T = 1 − a − b = 0.54-0.78, based on independent and author experiments during 2004-2009. Many examples of very successful forecasts for the territory of Europe and Russia are presented in this report. The same decisive rules were also applied to the forecast of these phenomena during the cold period of 2009-2010. In the first month of 2010, many cases of storm wind with heavy snowfall were observed, and were successfully forecast, over the territory of France, Italy and Germany.
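The Mahalanobis-distance decision rule mentioned above amounts to assigning a case to whichever class mean is closer in the metric of the predictor covariance. A minimal two-class sketch, assuming a pooled covariance matrix and placeholder class labels:

```python
import numpy as np

def mahalanobis_rule(x, mean_event, mean_no_event, cov):
    """Alternative (yes/no) forecast: pick the class whose mean is closer to
    the predictor vector x in Mahalanobis distance d^2 = (x-m)^T C^-1 (x-m).
    Pooled covariance and the class labels are assumptions of this sketch."""
    inv = np.linalg.inv(cov)
    def d2(m):
        diff = x - m
        return float(diff @ inv @ diff)
    return "storm" if d2(mean_event) < d2(mean_no_event) else "no storm"
```

In practice the covariance would be estimated from the training samples of presence and absence that the abstract describes.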
A new smoothing function to introduce long-range electrostatic effects in QM/MM calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Dong; Department of Chemistry, University of Wisconsin, Madison, Wisconsin 53706; Duke, Robert E.
2015-07-28
A new method to account for long range electrostatic contributions is proposed and implemented for quantum mechanics/molecular mechanics long range electrostatic correction (QM/MM-LREC) calculations. This method involves the use of the minimum image convention under periodic boundary conditions and a new smoothing function for energies and forces at the cutoff boundary for the Coulomb interactions. Compared to conventional QM/MM calculations without long-range electrostatic corrections, the new method effectively includes effects on the MM environment in the primary image from its replicas in the neighborhood. QM/MM-LREC offers three useful features including the avoidance of calculations in reciprocal space (k-space), with the concomitant avoidance of having to reproduce (analytically or approximately) the QM charge density in k-space, and the straightforward availability of analytical Hessians. The new method is tested and compared with results from smooth particle mesh Ewald (PME) for three systems including a box of neat water, a double proton transfer reaction, and the geometry optimization of the critical point structures for the rate limiting step of the DNA dealkylase AlkB. As with other smoothing or shifting functions, relatively large cutoffs are necessary to achieve comparable accuracy with PME. For the double-proton transfer reaction, the use of a 22 Å cutoff shows a close reaction energy profile and geometries of stationary structures with QM/MM-LREC compared to conventional QM/MM with no truncation. Geometry optimization of stationary structures for the hydrogen abstraction step by AlkB shows some differences between QM/MM-LREC and the conventional QM/MM. These differences underscore the necessity of the inclusion of the long-range electrostatic contribution.
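A generic example of a cutoff smoothing function is the classic switching polynomial, which takes an interaction smoothly from 1 to 0 between an inner and an outer radius with a continuous first derivative. This is not the QM/MM-LREC function itself, only an illustration of the kind of smoothing at a cutoff boundary that the abstract describes.

```python
def switch(r, r_on, r_off):
    """Classic polynomial switching function: 1 for r <= r_on, 0 for
    r >= r_off, and a smooth cubic-in-r^2 blend in between, continuous in
    both value and first derivative at the boundaries."""
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    r2, on2, off2 = r * r, r_on * r_on, r_off * r_off
    return (off2 - r2) ** 2 * (off2 + 2.0 * r2 - 3.0 * on2) / (off2 - on2) ** 3
```

Multiplying a Coulomb pair energy by `switch(r, r_on, r_off)` removes the energy and force discontinuities that a hard truncation would introduce.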
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Albert F., E-mail: wagner@anl.gov; Dawes, Richard; Continetti, Robert E.
The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO2 on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined, with the best qualitative agreement with experiment occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but when matched at the saddle point have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment. This surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared but only the small curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP.
The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.
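The survival-fraction model described above, an effective one-dimensional barrier sampled once per H(D)-OCO oscillation, can be sketched with a simple barrier-transmission formula. The parabolic-barrier (Kemble) expression below is a stand-in for the Eckart and SCT treatments actually compared in the paper.

```python
import math

def parabolic_tunneling(E, V0, hbar_omega):
    """Transmission probability through an inverted-parabola barrier
    (Kemble formula): P = 1 / (1 + exp(2*pi*(V0 - E)/(hbar*|omega|))).
    Energies in any consistent units; illustrative stand-in only."""
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * (V0 - E) / hbar_omega))

def survival_fraction(p_tunnel, n_oscillations):
    """Survival fraction in the once-per-oscillation model: each stretch
    oscillation is an independent chance to tunnel away."""
    return (1.0 - p_tunnel) ** n_oscillations
```

At E = V0 the transmission is exactly 1/2, and survival decays geometrically with the number of oscillations, which is the qualitative behavior the model above relies on.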
NASA Astrophysics Data System (ADS)
Lee, Choonik; Jung, Jae Won; Pelletier, Christopher; Pyakuryal, Anil; Lamart, Stephanie; Kim, Jong Oh; Lee, Choonsik
2015-03-01
Organ dose estimation for retrospective epidemiological studies of late effects in radiotherapy patients involves two challenges: radiological images to represent patient anatomy are not usually available for patient cohorts who were treated years ago, and efficient dose reconstruction methods for large-scale patient cohorts are not well established. In the current study, we developed methods to reconstruct organ doses for radiotherapy patients by using a series of computational human phantoms coupled with a commercial treatment planning system (TPS) and a radiotherapy-dedicated Monte Carlo transport code, and performed illustrative dose calculations. First, we developed methods to convert the anatomy and organ contours of the pediatric and adult hybrid computational phantom series to Digital Imaging and Communications in Medicine (DICOM)-image and DICOM-structure files, respectively. The resulting DICOM files were imported to a commercial TPS for simulating radiotherapy and dose calculation for in-field organs. The conversion process was validated by comparing electron densities relative to water and organ volumes between the hybrid phantoms and the DICOM files imported in TPS, which showed agreements within 0.1 and 2%, respectively. Second, we developed a procedure to transfer DICOM-RT files generated from the TPS directly to a Monte Carlo transport code, x-ray Voxel Monte Carlo (XVMC), for more accurate dose calculations. Third, to illustrate the performance of the established methods, we simulated a whole brain treatment for the 10-year-old male phantom and a prostate treatment for the adult male phantom. Radiation doses to selected organs were calculated using the TPS and XVMC, and compared to each other. Organ average doses from the two methods matched within 7%, whereas maximum and minimum point doses differed by up to 45%.
The dosimetry methods and procedures established in this study will be useful for the reconstruction of organ dose to support retrospective epidemiological studies of late effects in radiotherapy patients.
Improved techniques for predicting spacecraft power
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.
1987-01-01
Radioisotope Thermoelectric Generators (RTGs) are going to supply power for the NASA Galileo and Ulysses spacecraft now scheduled to be launched in 1989 and 1990. The duration of the Galileo mission is expected to be over 8 years. This brings the total RTG lifetime to 13 years. In 13 years, the RTG power drops more than 20 percent leaving a very small power margin over what is consumed by the spacecraft. Thus it is very important to accurately predict the RTG performance and be able to assess the magnitude of errors involved. The paper lists all the error sources involved in the RTG power predictions and describes a statistical method for calculating the tolerance.
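A statistical tolerance of the kind described can be illustrated by a Monte Carlo combination of independent relative error sources applied to an exponential power-decay model. Both the decay model and the error magnitudes below are illustrative assumptions, not the paper's actual error budget or statistical method.

```python
import math
import random

def rtg_power_tolerance(p0, decay_per_year, years, rel_sigmas, n=20000, seed=1):
    """Monte Carlo spread of predicted RTG power: a nominal exponential decay
    p0*exp(-decay*t) perturbed by several independent Gaussian relative
    error sources (one sigma each in rel_sigmas). Returns (mean, std)."""
    random.seed(seed)
    samples = []
    for _ in range(n):
        p = p0 * math.exp(-decay_per_year * years)
        for s in rel_sigmas:                     # apply each error source
            p *= 1.0 + random.gauss(0.0, s)
        samples.append(p)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var)
```

With a decay rate chosen so that power drops about 20% over 13 years, two error sources of 1% and 2% combine to roughly a ±2% (1σ) tolerance on the predicted end-of-mission power.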
Initial values for the integration scheme to compute the eigenvalues for propagation in ducts
NASA Technical Reports Server (NTRS)
Eversman, W.
1977-01-01
A scheme for the calculation of eigenvalues in the problem of acoustic propagation in a two-dimensional duct is described. The computation method involves changing the coupled transcendental nonlinear algebraic equations into an initial value problem involving a nonlinear ordinary differential equation. The simplest approach is to use the hardwall eigenvalues as initial values and to integrate away from them as the admittance varies linearly from zero to its actual value. The approach leads to a powerful root finding routine capable of computing the transverse and axial wave numbers for two-dimensional ducts for any frequency, lining admittance, and Mach number without requiring initial guesses or starting points.
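The initial-value reformulation can be sketched for a model eigenvalue equation of a duct lined on one wall, κ·tan κ = ikβ: start at the hard-wall root κ = nπ and integrate dκ/dβ as the admittance β ramps linearly from zero. The model equation, its normalization, and the final Newton polish are assumptions of this sketch, not the paper's exact formulation.

```python
import cmath

def duct_eigenvalue(n, k, beta, steps=400):
    """Continuation from the hard-wall root kappa = n*pi (n >= 1) of the
    model equation F(kappa) = kappa*tan(kappa) - i*k*beta = 0.

    The admittance is ramped from 0 to beta with Euler steps on
    d(kappa)/d(beta) = i*k / F'(kappa), then a few Newton iterations
    polish the root at the final admittance."""
    kappa = complex(n * cmath.pi)
    dbeta = beta / steps
    for _ in range(steps):
        dF = cmath.tan(kappa) + kappa / cmath.cos(kappa) ** 2
        kappa += (1j * k / dF) * dbeta
    for _ in range(5):                          # Newton polish
        F = kappa * cmath.tan(kappa) - 1j * k * beta
        dF = cmath.tan(kappa) + kappa / cmath.cos(kappa) ** 2
        kappa -= F / dF
    return kappa
```

Because the starting point is the exact hard-wall eigenvalue, no initial guess is needed, which is the essential point of the scheme.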
Osipiuk, Jerzy; Mulligan, Rory; Bargassa, Monireh; Hamilton, John E; Cunningham, Mark A; Joachimiak, Andrzej
2012-06-01
The crystal structure of SO1698 protein from Shewanella oneidensis was determined by a SAD method and refined to 1.57 Å. The structure is a β sandwich that unexpectedly consists of two polypeptides; the N-terminal fragment includes residues 1-116, and the C-terminal one includes residues 117-125. Electron density also displayed the Lys-98 side chain covalently linked to Asp-116. The putative active site residues involved in self-cleavage were identified; point mutants were produced and characterized structurally and in a biochemical assay. Numerical simulations utilizing molecular dynamics and hybrid quantum/classical calculations suggest a mechanism involving activation of a water molecule coordinated by a catalytic aspartic acid.
Panos, Joseph A.; Hoffman, Joshua T.; Wordeman, Samuel C.; Hewett, Timothy E.
2016-01-01
Background Correction of neuromuscular impairments after anterior cruciate ligament injury is vital to successful return to sport. Frontal plane knee control during landing is a common measure of lower-extremity neuromuscular control and asymmetries in neuromuscular control of the knee can predispose injured athletes to additional injury and associated morbidities. Therefore, this study investigated the effects of anterior cruciate ligament injury on knee biomechanics during landing. Methods Two-dimensional frontal plane video of single leg drop, cross over drop, and drop vertical jump dynamic movement trials was analyzed for twenty injured and reconstructed athletes. The position of the knee joint center was tracked in ImageJ software for 500 milliseconds after landing to calculate medio-lateral knee motion velocities and determine normal fluency, the number of times per second knee velocity changed direction. The inverse of this calculation, analytical fluency, was used to associate larger numerical values with fluent movement. Findings Analytical fluency was decreased in involved limbs for single leg drop trials (P=0.0018). Importantly, analytical fluency for single leg drop differed compared to cross over drop trials for involved (P<0.001), but not uninvolved limbs (P=0.5029). For involved limbs, analytical fluency values exhibited a stepwise trend in relative magnitudes. Interpretation Decreased analytical fluency in involved limbs is consistent with previous studies. Fluency asymmetries observed during single leg drop tasks may be indicative of aberrant landing strategies in the involved limb. Analytical fluency differences in unilateral tasks for injured limbs may represent neuromuscular impairment as a result of injury. PMID:26895446
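The fluency calculation reduces to counting sign changes of the frame-to-frame medio-lateral velocity. A minimal sketch, with no smoothing (the study's exact tracking and filtering are not reproduced):

```python
def analytical_fluency(positions, fps):
    """Analytical fluency: inverse of the number of medio-lateral velocity
    direction changes per second, so smoother movement gives a larger value.

    positions: frame-by-frame medio-lateral knee coordinates
    fps:       video frame rate (frames per second)
    """
    vel = [(b - a) * fps for a, b in zip(positions, positions[1:])]
    changes = sum(1 for v1, v2 in zip(vel, vel[1:]) if v1 * v2 < 0)
    duration = (len(positions) - 1) / fps
    normal_fluency = changes / duration          # direction changes per second
    return float("inf") if changes == 0 else 1.0 / normal_fluency
```

A perfectly monotone trajectory has no direction changes (infinite analytical fluency), while an oscillating one is penalized in proportion to its oscillation rate.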
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, J.; Yu, T.; Papajak, E.
2011-01-01
Many methods for correcting harmonic partition functions for the presence of torsional motions employ some form of one-dimensional torsional treatment to replace the harmonic contribution of a specific normal mode. However, torsions are often strongly coupled to other degrees of freedom, especially other torsions and low-frequency bending motions, and this coupling can make assigning torsions to specific normal modes problematic. Here, we present a new class of methods, called multi-structural (MS) methods, that circumvents the need for such assignments by instead adjusting the harmonic results by torsional correction factors that are determined using internal coordinates. We present three versions of the MS method: (i) MS-AS based on including all structures (AS), i.e., all conformers generated by internal rotations; (ii) MS-ASCB based on all structures augmented with explicit conformational barrier (CB) information, i.e., including explicit calculations of all barrier heights for internal-rotation barriers between the conformers; and (iii) MS-RS based on including all conformers generated from a reference structure (RS) by independent torsions. In the MS-AS scheme, one has two options for obtaining the local periodicity parameters, one based on consideration of the nearly separable limit and one based on strongly coupled torsions. The latter involves assigning the local periodicities on the basis of Voronoi volumes. The methods are illustrated with calculations for ethanol, 1-butanol, and 1-pentyl radical as well as two one-dimensional torsional potentials. The MS-AS method is particularly interesting because it does not require any information about conformational barriers or about the paths that connect the various structures.
Zheng, Jingjing; Yu, Tao; Papajak, Ewa; Alecu, I M; Mielke, Steven L; Truhlar, Donald G
2011-06-21
Many methods for correcting harmonic partition functions for the presence of torsional motions employ some form of one-dimensional torsional treatment to replace the harmonic contribution of a specific normal mode. However, torsions are often strongly coupled to other degrees of freedom, especially other torsions and low-frequency bending motions, and this coupling can make assigning torsions to specific normal modes problematic. Here, we present a new class of methods, called multi-structural (MS) methods, that circumvents the need for such assignments by instead adjusting the harmonic results by torsional correction factors that are determined using internal coordinates. We present three versions of the MS method: (i) MS-AS based on including all structures (AS), i.e., all conformers generated by internal rotations; (ii) MS-ASCB based on all structures augmented with explicit conformational barrier (CB) information, i.e., including explicit calculations of all barrier heights for internal-rotation barriers between the conformers; and (iii) MS-RS based on including all conformers generated from a reference structure (RS) by independent torsions. In the MS-AS scheme, one has two options for obtaining the local periodicity parameters, one based on consideration of the nearly separable limit and one based on strongly coupled torsions. The latter involves assigning the local periodicities on the basis of Voronoi volumes. The methods are illustrated with calculations for ethanol, 1-butanol, and 1-pentyl radical as well as two one-dimensional torsional potentials. The MS-AS method is particularly interesting because it does not require any information about conformational barriers or about the paths that connect the various structures.
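In outline, a multi-structural partition function is a sum over conformers of harmonic-oscillator partition functions, each weighted by a Boltzmann factor and a torsional correction factor. The sketch below shows only this skeleton; the actual local-periodicity correction factors of the MS methods are far more involved.

```python
import math

def ms_partition_function(conf_energies_j, conf_q_harm, torsion_factors, temp_k):
    """Skeleton of a multi-structural partition function:
    Q_MS = sum_j f_j * Q_j^HO * exp(-U_j / kT),
    where U_j are conformer energies (J, per molecule, relative to the
    lowest conformer), Q_j^HO harmonic partition functions, and f_j the
    torsional correction factors (all placeholders in this sketch)."""
    kb = 1.380649e-23  # Boltzmann constant, J/K
    beta = 1.0 / (kb * temp_k)
    return sum(q * f * math.exp(-beta * u)
               for u, q, f in zip(conf_energies_j, conf_q_harm, torsion_factors))
```

Two degenerate conformers simply double the partition function, while a high-energy conformer contributes almost nothing at room temperature.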
Castro-Alvarez, Alejandro; Carneros, Héctor; Sánchez, Dani; Vilarrasa, Jaume
2015-12-18
While B3LYP, M06-2X, and MP2 calculations predict the ΔG° values for exchange equilibria between enamines and ketones with similar acceptable accuracy, the M06-2X/6-311+G(d,p) and MP2/6-311+G(d,p) methods are required for enamine formation reactions (for example, for enamine 5a, arising from 3-methylbutanal and pyrrolidine). Stronger disagreement was observed when calculated energies of hemiaminals (N,O-acetals) and aminals (N,N-acetals) were compared with experimental equilibrium constants, which are reported here for the first time. Although it is known that the B3LYP method does not provide a good description of the London dispersion forces, while M06-2X and MP2 may overestimate them, it is shown here how large the gaps are and that at least single-point calculations at the CCSD(T)/6-31+G(d) level should be used for these reaction intermediates; CCSD(T)/6-31+G(d) and CCSD(T)/6-311+G(d,p) calculations afford ΔG° values in some cases quite close to MP2/6-311+G(d,p) while in others closer to M06-2X/6-311+G(d,p). The effect of solvents is similarly predicted by the SMD, CPCM, and IEFPCM approaches (with energy differences below 1 kcal/mol).
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
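The Rayleigh-quotient approach to approximate reanalysis amounts to evaluating the quotient of the modified matrix with the unmodified eigenvector. The sketch below simplifies further by using the right eigenvector in place of the left one, which is exact only for normal matrices; the abstract's generalized schemes for non-hermitian problems use both.

```python
import numpy as np

def rayleigh_eigenvalue_estimate(A_new, v_old):
    """Approximate an eigenvalue of a modified matrix A_new from an
    eigenvector v_old of the unmodified matrix, without re-solving the
    eigenproblem: lambda ~ (v* A_new v) / (v* v)."""
    v = v_old / np.linalg.norm(v_old)
    return (v.conj() @ A_new @ v) / (v.conj() @ v)
```

The error is second order in the design change, which is why such quotients give useful reanalysis estimates without new eigensolves.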
Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations
König, Gerhard; Brooks, Bernard R.
2014-01-01
Background Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. Methods The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. Results We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. Conclusions The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. 
General Significance The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. PMID:25218695
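The harmonic-oscillator test case mentioned in the Results can be reproduced with a toy free energy perturbation (Zwanzig reweighting) in numpy; the force constants and sample count here are arbitrary choices for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
k1, k2 = 1.0, 4.0              # force constants of the two harmonic states

# Sample state 1, reweight to state 2 with Zwanzig's free energy perturbation:
# dA = -kT * ln < exp(-(U2 - U1)/kT) >_1
x = rng.normal(0.0, np.sqrt(kT / k1), size=200_000)
dU = 0.5 * (k2 - k1) * x**2            # potential energy difference U2 - U1
dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))

# Analytical free energy difference between classical harmonic oscillators
dA_exact = 0.5 * kT * np.log(k2 / k1)
```

The estimate converges only because the two harmonic distributions overlap well, which is exactly the phase-space-overlap requirement the abstract emphasizes.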
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gómez-Carrasco, Susana; Godard, Benjamin; Lique, François
The rate constants required to model the OH⁺ observations in different regions of the interstellar medium have been determined using state-of-the-art quantum methods. First, state-to-state rate constants for the H₂(v = 0, J = 0, 1) + O⁺(⁴S) → H + OH⁺(X ³Σ⁻, v′, N) reaction have been obtained using a quantum wave packet method. The calculations have been compared with time-independent results to assess the accuracy of reaction probabilities at collision energies of about 1 meV. The good agreement between the simulations and the existing experimental cross sections in the 0.01-1 eV energy range shows the quality of the results. The calculated state-to-state rate constants have been fitted to an analytical form. Second, the Einstein coefficients of OH⁺ have been obtained for all astronomically significant rovibrational bands involving the X ³Σ⁻ and/or A ³Π electronic states. For this purpose, the potential energy curves and electric dipole transition moments for seven electronic states of OH⁺ were calculated with ab initio methods at the highest level, including spin-orbit terms, and the rovibrational levels have been calculated including the empirical spin-rotation and spin-spin terms. Third, the state-to-state rate constants for inelastic collisions between He and OH⁺(X ³Σ⁻) have been calculated using a time-independent close coupling method on a new potential energy surface. All these rates have been implemented in detailed chemical and radiative transfer models. Applications of these models to various astronomical sources show that inelastic collisions dominate the excitation of the rotational levels of OH⁺. In the models considered, the excitation resulting from the chemical formation of OH⁺ increases the line fluxes by about 10% or less, depending on the density of the gas.
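Fitting computed state-to-state rate constants "to an analytical form" is commonly done with the Arrhenius-Kooij expression k(T) = α (T/300)^β exp(−γ/T); whether the authors used exactly this form is an assumption here. Since ln k is linear in (ln α, β, γ), ordinary least squares suffices:

```python
import numpy as np

def kooij(T, alpha, beta, gamma):
    """Arrhenius-Kooij rate law k(T) = alpha * (T/300)^beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

# Synthetic "computed" rate constants (parameters invented, not the paper's)
T = np.linspace(20.0, 300.0, 30)
k = kooij(T, 1.2e-9, 0.15, 8.0)

# ln k = ln(alpha) + beta * ln(T/300) - gamma / T is linear in the parameters
M = np.column_stack([np.ones_like(T), np.log(T / 300.0), -1.0 / T])
coef, *_ = np.linalg.lstsq(M, np.log(k), rcond=None)
alpha_fit, beta_fit, gamma_fit = np.exp(coef[0]), coef[1], coef[2]
```

Fitting in log space also keeps the rate constants positive by construction, a natural choice for quantities spanning many orders of magnitude.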
NASA Astrophysics Data System (ADS)
Kinefuchi, K.; Funaki, I.; Shimada, T.; Abe, T.
2012-10-01
Under certain conditions during rocket flights, ionized exhaust plumes from solid rocket motors may interfere with radio frequency transmissions. To understand the relevant physical processes involved in this phenomenon and to establish a prediction process for in-flight attenuation levels, we attempted to measure the microwave attenuation caused by rocket exhaust plumes in a sea-level static firing test of a full-scale solid propellant rocket motor. The microwave attenuation level was calculated by a coupled simulation of the inviscid-frozen-flow computational fluid dynamics of the exhaust plume and a detailed analysis of microwave transmission, applying a frequency-dependent finite-difference time-domain method with the Drude dispersion model. The calculated microwave attenuation level agreed well with the experimental results, except in the case of interference downstream of the Mach disk in the exhaust plume. It was concluded that the coupled estimation method based on the physics of the frozen plasma flow with Drude dispersion would be suitable for actual flight conditions, although the mixing and afterburning in the plume should be considered depending on the flow conditions.
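The Drude dispersion model at the core of the coupled simulation gives a complex plasma permittivity from which microwave attenuation follows directly. A minimal sketch (the plume parameters are invented, not the firing-test values):

```python
import numpy as np

C = 2.998e8          # speed of light, m/s
QE = 1.602e-19       # electron charge, C
ME = 9.109e-31       # electron mass, kg
EPS0 = 8.854e-12     # vacuum permittivity, F/m

def drude_attenuation_db_per_m(f, n_e, nu):
    """Plane-wave attenuation (dB/m) in a collisional plasma with electron
    density n_e (1/m^3) and collision frequency nu (1/s), Drude model."""
    w = 2.0 * np.pi * f
    wp2 = n_e * QE**2 / (EPS0 * ME)           # plasma frequency squared
    eps = 1.0 - wp2 / (w**2 - 1j * nu * w)    # complex relative permittivity
    k = (w / C) * np.sqrt(eps)                # complex wavenumber
    return 8.686 * abs(k.imag)                # Np/m converted to dB/m

# Illustrative plume-like parameters (assumed for this sketch)
att = drude_attenuation_db_per_m(2.3e9, n_e=1e17, nu=5e9)
```

When the plasma frequency exceeds the signal frequency the wave becomes evanescent and the attenuation rises sharply, which is the regime relevant to plume interference.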
Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions
Kaur, Parminder; O’Connor, Peter B.
2008-01-01
Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of isotope ratios of a particular element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed tools for such a high-precision analysis, which have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from the whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
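Under the simplifying assumption that only carbon contributes to the isotopic envelope, the distribution is binomial and the 13C/12C ratio can be read off the first two isotopologue peaks. A toy numpy sketch, not the paper's full procedure:

```python
import numpy as np
from math import comb

def carbon_isotope_distribution(n_c, p13):
    """Isotopologue abundances for a molecule with n_c carbons, assuming
    carbon is the only polyisotopic element (a deliberate simplification)."""
    return np.array([comb(n_c, k) * p13**k * (1.0 - p13)**(n_c - k)
                     for k in range(n_c + 1)])

def ratio_from_first_two_peaks(n_c, I0, I1):
    """Invert I1/I0 = n_c * p/(1-p); the result IS the 13C/12C ratio."""
    return I1 / (n_c * I0)

p13 = 0.0107                       # approximate natural 13C abundance
dist = carbon_isotope_distribution(50, p13)
r_est = ratio_from_first_two_peaks(50, dist[0], dist[1])
```

Real measurements must correct for H, N, O and S isotopes and for noise in the peak intensities, which is exactly where the precision limits discussed in the abstract arise.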
System and method of designing a load bearing layer of an inflatable vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary R. (Inventor)
2007-01-01
A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p-2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the "frozen" meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting "frozen nucleon approximation", the inclusive 2p-2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
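The computational payoff of the frozen nucleon approximation is that the response reduces to a single integral over the emission angle of one outgoing nucleon, which a short Gauss-Legendre quadrature handles; the integrand below is a toy placeholder for the real 2p-2h response density:

```python
import numpy as np

def frozen_response(angular_density, n=40):
    """The only integral left in the frozen-nucleon approximation: a 1-D
    Gauss-Legendre quadrature over cos(theta) of one emitted nucleon."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
    return float(np.sum(weights * angular_density(nodes)))

# Toy angular density standing in for the real 2p-2h response integrand
toy_density = lambda c: 1.0 + 0.3 * c**2
R = frozen_response(toy_density)       # exact value: 2 + 0.3*(2/3) = 2.2
```

A 40-point rule evaluates the integrand 40 times per kinematic point, versus the thousands of evaluations a multidimensional phase-space integral would need, which is the speed-up that makes event-generator use practical.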
The frozen nucleon approximation in two-particle two-hole response functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p-2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the "frozen" meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting "frozen nucleon approximation", the inclusive 2p-2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
NASA Astrophysics Data System (ADS)
Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.
2017-11-01
This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over the pixels, is then performed on the GPU to reduce the time cost. Compared with the Gauss method and the Lucy-Richardson method, the proposed method gives the best image restoration. The method has been evaluated using a Hopkinson bar loading system: in comparison with the blurry image, it successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurements.
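The Lucy-Richardson baseline the paper compares against can be sketched in 1-D with plain numpy; the PSF and signal here are toy stand-ins for the estimated motion-blur kernel:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """Minimal 1-D Richardson-Lucy deconvolution with a known PSF."""
    est = np.full_like(blurred, blurred.mean())   # flat positive initial guess
    psf_mirror = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid division by zero
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# A sharp feature blurred by a uniform 5-pixel motion kernel
psf = np.ones(5) / 5.0
truth = np.zeros(64)
truth[30] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative update preserves positivity and total intensity, which is why the algorithm is popular for motion-blur restoration despite its slow convergence.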
The historical bases of the Rayleigh and Ritz methods
NASA Astrophysics Data System (ADS)
Leissa, A. W.
2005-11-01
Rayleigh's classical book Theory of Sound was first published in 1877. In it are many examples of calculating fundamental natural frequencies of free vibration of continuum systems (strings, bars, beams, membranes, plates) by assuming the mode shape, and setting the maximum values of potential and kinetic energy in a cycle of motion equal to each other. This procedure is well known as "Rayleigh's Method." In 1908, Ritz laid out his famous method for determining frequencies and mode shapes, choosing multiple admissible displacement functions, and minimizing a functional involving both potential and kinetic energies. He then demonstrated it in detail in 1909 for the completely free square plate. In 1911, Rayleigh wrote a paper congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. Subsequently, hundreds of research articles and many books have appeared which use the method, some calling it the "Ritz method" and others the "Rayleigh-Ritz method." The present article examines the method in detail, as Ritz presented it, and as Rayleigh claimed to have used it. It concludes that, although Rayleigh did solve a few problems which involved minimization of a frequency, these solutions were not by the straightforward, direct method presented by Ritz and used subsequently by others. Therefore, Rayleigh's name should not be attached to the method.
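Ritz's direct method as described above, with assumed admissible functions and a minimized Rayleigh quotient, reduces to a small generalized eigenproblem. A numpy sketch for the fundamental frequency of a fixed-fixed string (exact answer π):

```python
import numpy as np

# Ritz method for a fixed-fixed string on [0, 1]; exact fundamental freq: pi.
# Admissible functions phi_k = x^k * (1 - x) satisfy both end conditions.
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
phis = [x**k * (1.0 - x) for k in (1, 2, 3)]
dphis = [k * x**(k - 1) * (1.0 - x) - x**k for k in (1, 2, 3)]

n = len(phis)
K = np.empty((n, n))   # "stiffness": integral of phi_i' * phi_j'
M = np.empty((n, n))   # "mass": integral of phi_i * phi_j
for a in range(n):
    for b in range(n):
        K[a, b] = np.sum(dphis[a] * dphis[b]) * dx
        M[a, b] = np.sum(phis[a] * phis[b]) * dx

# Minimizing the Rayleigh quotient leads to the eigenproblem K c = lambda M c
lam = np.min(np.linalg.eigvals(np.linalg.solve(M, K)).real)
freq = np.sqrt(lam)
```

With a single admissible function this collapses to Rayleigh's method; adding functions and minimizing over the coefficients is exactly the extension Ritz contributed.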
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method have been developed for the simultaneous estimation of a two-component drug mixture of itopride hydrochloride and rabeprazole sodium in a combined capsule dosage form. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two analytical wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C(18) column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. The results of the analysis were validated statistically and by recovery studies.
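The simultaneous-equation (Vierordt) method amounts to solving a small linear system built from Beer's law; the absorptivity matrix below uses invented placeholder values, not the paper's calibration data:

```python
import numpy as np

# Vierordt simultaneous-equation method: two wavelengths, two analytes.
# Absorptivity matrix (rows: wavelengths, cols: drugs); the numbers are
# INVENTED placeholders, not calibration data from the paper.
E = np.array([[285.0, 38.0],
              [62.0, 410.0]])
c_true = np.array([0.0012, 0.0008])   # concentrations, g / 100 mL

A_mix = E @ c_true                    # Beer's law: absorbances are additive
c_est = np.linalg.solve(E, A_mix)     # solving the simultaneous equations
```

The method works well only when the absorptivity matrix is well conditioned, i.e. the two drugs' spectra differ enough at the chosen wavelengths.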
Quantitative Estimation of Itopride Hydrochloride and Rabeprazole Sodium from Capsule Formulation
Pillai, S.; Singhvi, I.
2008-01-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method have been developed for the simultaneous estimation of a two-component drug mixture of itopride hydrochloride and rabeprazole sodium in a combined capsule dosage form. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two analytical wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. The results of the analysis were validated statistically and by recovery studies. PMID:21394269
A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.
Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton
2015-06-11
We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of the configurational spaces of MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures for intermediate states between an MM and a QM/MM description, thereby taking into account a large fraction of the electronic polarization of the quantum system, while being able to use thermodynamic integration to compute the free energy of the transition between MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules used in this study. This approach requires large-scale DFT calculations, as the full QM systems involved the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
Age estimation using exfoliative cytology and radiovisiography: A comparative study
Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya
2017-01-01
Introduction: Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique involving the simple, pain-free collection of intact cells from the oral cavity for microscopic examination. Objective: The study was undertaken with an aim to estimate the age of an individual from the average cell size of their buccal smears, calculated using image analysis morphometric software, and from the pulp–tooth area ratio in the mandibular canine of the same individual using radiovisiography (RVG). Materials and Methods: Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and the area ratio was calculated. The estimated age was then calculated using regression analysis. Results: The paired t-test between chronological age and the age estimated by cell size and by pulp–tooth area ratio was statistically nonsignificant (P > 0.05). Conclusion: In the present study, age estimated by pulp–tooth area ratio and EC yielded good results. PMID:29657491
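The final step, estimating age by regression on average cell size and pulp-tooth area ratio, can be sketched with ordinary least squares; the coefficients and data below are synthetic illustrations, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Synthetic predictors: average buccal cell size and pulp-tooth area ratio
cell_size = rng.normal(2400.0, 200.0, n)     # um^2, invented scale
pt_ratio = rng.normal(0.12, 0.03, n)         # dimensionless, invented scale
# Hypothetical ground-truth relation plus noise (for illustration only)
age = 80.0 - 0.01 * cell_size - 150.0 * pt_ratio + rng.normal(0.0, 2.0, n)

# Multiple linear regression: age ~ 1 + cell_size + pt_ratio
X = np.column_stack([np.ones(n), cell_size, pt_ratio])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
age_hat = X @ beta
```

A nonsignificant paired t-test between chronological and predicted age, as reported above, corresponds to the fitted residuals having no systematic bias.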
Green's function approach to the Kondo effect in nanosized quantum corrals
NASA Astrophysics Data System (ADS)
Li, Q. L.; Wang, R.; Xie, K. X.; Li, X. X.; Zheng, C.; Cao, R. X.; Miao, B. F.; Sun, L.; Wang, B. G.; Ding, H. F.
2018-04-01
We present a theoretical study of the Kondo effect for a magnetic atom placed inside nanocorrals using Green's function calculations. Based on the standard mapping of the Anderson impurity model to a one-dimensional chain model, we formulate a weak-coupling theory to study Anderson impurities in a hosting bath with a surface state. Further taking into account the multiple scattering effect of the surrounding atoms, our calculations show that the Kondo resonance width of an atom placed at the center of the nanocorral can be significantly tuned by the corral size, in good agreement with recent experiments [Q. L. Li et al., Phys. Rev. B 97, 035417 (2018), 10.1103/PhysRevB.97.035417]. The method can also be applied to an atom placed at an arbitrary position inside the corral, where our calculation shows that the Kondo resonance width also oscillates as a function of its separation from the corral center. The prediction is further confirmed by low-temperature scanning tunneling microscopy studies, where a one-to-one correspondence is found. The good agreement with the experiments validates the generality of the method for systems where multiple adatoms are involved.
ARBAN-A new method for analysis of ergonomic effort.
Holzmann, P
1982-06-01
ARBAN is a method for the ergonomic analysis of work, including work situations that involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that require specific ergonomics knowledge are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator in the process of analysis are so designed that they can be handled with systematic common sense. The ARBAN analysis method contains four steps: 1. Recording the workplace situation on video or film. 2. Coding the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body, as well as on different parts of the body separately. These are presented as 'ergonomic stress/time curves', where heavy-load situations appear as peaks of the curve. The work cycle may also be divided into different tasks, whose stress and duration patterns can be compared. The integrals of the curves are calculated for single-figure comparison of different tasks as well as different work situations.
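Steps 3-4 of ARBAN reduce to integrating the coded stress/time curve so that tasks can be compared by a single figure. A small sketch with invented stress codes:

```python
import numpy as np

def stress_integral(t, s):
    """Single-figure ergonomic load: trapezoidal area under a stress/time curve."""
    return float(np.sum(0.5 * (s[:-1] + s[1:]) * np.diff(t)))

# Two coded tasks sampled at closely spaced 'frozen' situations (invented codes)
t = np.arange(0.0, 10.0, 0.5)
lifting = 2.0 + 3.0 * np.exp(-((t - 5.0) / 1.5) ** 2)  # peaked heavy-load phase
seated = np.full_like(t, 2.5)                           # steady moderate posture
```

The peaked task scores higher overall despite its lower baseline, which is exactly the kind of comparison the single-figure integral is designed to make.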
Boukabache, Hamza; Escriba, Christophe; Zedek, Sabeha; Medale, Daniel; Rolet, Sebastien; Fourniols, Jean Yves
2012-10-11
The work reported in this paper describes the implementation of a new methodology for active structural health monitoring of recent aircraft parts made from carbon-fiber-reinforced polymer. The diagnosis is based on a new embedded method that measures the local high-frequency impedance spectrum of the structure through the calculation of the electro-mechanical impedance of a piezoelectric patch pasted non-permanently onto its surface. This paper covers the laboratory-based E/M impedance method development, its implementation on a CPU with limited resources, and a comparison with experimental testing data needed to demonstrate the feasibility of flaw detection in composite materials and to answer the question of the method's reliability. The different development steps are presented and the integration issues are discussed. Furthermore, we present the unique advantages that reconfigurable electronics through System-on-Chip (SoC) technology brings to system scaling and flexibility. At the end of this article, we demonstrate the capability of a basic network of sensors mounted onto a real composite aircraft part specimen to capture its local impedance spectrum signature and to diagnose different delamination sizes through comparison with a baseline.
Boukabache, Hamza; Escriba, Christophe; Zedek, Sabeha; Medale, Daniel; Rolet, Sebastien; Fourniols, Jean Yves
2012-01-01
The work reported in this paper describes the implementation of a new methodology for active structural health monitoring of recent aircraft parts made from carbon-fiber-reinforced polymer. The diagnosis is based on a new embedded method that measures the local high-frequency impedance spectrum of the structure through the calculation of the electro-mechanical impedance of a piezoelectric patch pasted non-permanently onto its surface. This paper covers the laboratory-based E/M impedance method development, its implementation on a CPU with limited resources, and a comparison with experimental testing data needed to demonstrate the feasibility of flaw detection in composite materials and to answer the question of the method's reliability. The different development steps are presented and the integration issues are discussed. Furthermore, we present the unique advantages that reconfigurable electronics through System-on-Chip (SoC) technology brings to system scaling and flexibility. At the end of this article, we demonstrate the capability of a basic network of sensors mounted onto a real composite aircraft part specimen to capture its local impedance spectrum signature and to diagnose different delamination sizes through comparison with a baseline. PMID:23202013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mille, M; Lee, C; Failla, G
Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that which is calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry.
The goal is for the dosimetry to be sufficiently accurate for use in retrospective epidemiological investigations.
Numerical Simulation of the Ground Response to the Tire Load Using Finite Element Method
NASA Astrophysics Data System (ADS)
Valaskova, Veronika; Vlcek, Jozef
2017-10-01
The response of the pavement to the excitation caused by a moving vehicle is one of the current problems of civil engineering practice. The load from the vehicle is transferred to the pavement structure through the contact area of the tires. Experimental studies show a nonuniform distribution of pressure in this area; the non-uniformity is caused by the flexible nature and shape of the tire and is influenced by the tire inflation. Several tire load patterns, including uniform distribution and point load, were involved in the numerical modelling using the finite element method. The applied tire loads were based on the tire contact forces of the Tatra 815 lorry. Two procedures were selected for the calculations. The first was based on simplifying the vehicle to a half-part model; the characteristics of the vehicle model were verified by experiment and by a numerical model in the software ADINA, in which the vehicle behaviour during the ride was investigated. The second step involved applying the calculated contact forces for the front axle as the load on a multi-layered half-space representing the pavement structure. This procedure was realized in the software Plaxis and considered various stress patterns for the load; an axisymmetric model was established, and the response of the ground to the vehicle load was then analyzed. The paper presents the results of the investigation of the contact pressure distribution and the corresponding reaction of the pavement to various load distribution patterns. The results show differences in some calculated quantities for different load patterns, which need to be verified experimentally, with the ground response also being observed.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR, and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed, including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
NASA Astrophysics Data System (ADS)
Zúñiga, César; Oyarzún, Diego P.; Martin-Transaco, Rudy; Yáñez-S, Mauricio; Tello, Alejandra; Fuentealba, Mauricio; Cantero-López, Plinio; Arratia-Pérez, Ramiro
2017-11-01
In this work, a new fac-Re(CO)3(PyCOOH)2Cl complex has been prepared from the isonicotinic acid ligand. The complex was characterized by structural (single-crystal X-ray diffraction), elemental analysis and spectroscopic (FTIR, NMR, UV-vis spectroscopy) methods. DFT and TDDFT calculations were performed to obtain the electronic transitions involved in its UV-vis spectrum. The excitation energies agree with the experimental results. The TDDFT calculations suggest that the experimental mixed absorption bands at 270 and 314 nm can be assigned to (MLCT-LLCT)/MLCT transitions. The Natural Bond Orbital (NBO) approach has enabled studying the effects of bonding interactions; the E(2) energies confirm the occurrence of ICT (intra-molecular charge transfer) within the molecule.
Implementing Shared Memory Parallelism in MCBEND
NASA Astrophysics Data System (ADS)
Bird, Adam; Long, David; Dobson, Geoff
2017-09-01
MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
NASA Astrophysics Data System (ADS)
Cukras, Janusz; Antušek, Andrej; Holka, Filip; Sadlej, Joanna
2009-06-01
Extensive ab initio calculations of the static electric properties of molecular ions of general formula RgH+ (Rg = He, Ne, Ar, Kr, Xe) have been performed using the finite field method and the coupled cluster CCSD(T) approach. Relativistic effects were taken into account by the Douglas-Kroll-Hess approximation. The numerical stability and reliability of the calculated values have been tested using the systematic sequence of Dunning's cc-pVXZ-DK and ANO-RCC-VQZP basis sets. The influence of the ZPE and the pure vibrational contribution is discussed. The αzz component shows an increasing trend along the RgH+ series, while the relativistic effect on αzz leads to a small increase of this molecular parameter.
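The finite field method referred to above extracts αzz from field-dressed energies by a central difference; here is a sketch with a model energy function whose dipole and polarizability are assumed values, not the CCSD(T) results:

```python
import numpy as np

def alpha_zz(energy, F=1e-3):
    """Finite-field polarizability via the central difference
    alpha = -(E(+F) + E(-F) - 2 E(0)) / F^2 (atomic units assumed)."""
    return -(energy(F) + energy(-F) - 2.0 * energy(0.0)) / F**2

# Model field-dependent energy with an assumed dipole mu and polarizability alpha:
# E(F) = E0 - mu*F - 0.5*alpha*F^2 (E0 set to zero for simplicity)
mu, alpha = 0.5, 12.0
E_model = lambda F: -mu * F - 0.5 * alpha * F**2
```

In practice the field strength F must balance finite-difference truncation error against the numerical noise of the underlying electronic-structure energies, which is why basis-set stability tests like those in the abstract matter.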
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE), which can be reduced to a 2d partial differential equation (pde), was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation, which is reducible to a pair of coupled pde's. The resultant scattering amplitudes, both singlet and triplet, calculated as a function of energy are in excellent agreement with converged partial wave results.
Hierro, Luis A; Gómez-Álvarez, Rosario; Atienza, Pedro
2014-01-01
In studies on the redistributive, vertical, and horizontal effects of health care financing, the sum of the contributions calculated for each financial instrument does not equal the total effects. As a consequence, the final calculations tend to be overestimated or underestimated. The solution proposed here involves the adaptation of the Shapley value to achieve additive results for all the effects and reveals the relative contributions of different instruments to the change of whole-system equity. An understanding of this change would help policy makers attain equitable health care financing. We test the method with the public finance and private payments of health care systems in Denmark and the Netherlands. Copyright © 2013 John Wiley & Sons, Ltd.
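The additivity property the authors exploit can be sketched generically. In this illustrative sketch the instrument names and the toy "equity effect" set function are invented, not taken from the paper: each instrument's Shapley contribution is its marginal effect averaged over all orderings, and the contributions sum exactly to the whole-system effect.

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Shapley value of each player for a set function `value`
    defined on frozensets of players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                s = frozenset(s)
                # weight of coalitions of size k in a random ordering
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy, non-additive "equity effect" of sets of financing instruments
# (invented numbers for illustration).
effects = {frozenset(): 0.0,
           frozenset({"tax"}): 0.030,
           frozenset({"premium"}): -0.010,
           frozenset({"tax", "premium"}): 0.025}
phi = shapley(["tax", "premium"], lambda s: effects[s])
```

The key check is additivity: sum(phi.values()) equals the whole-system effect, which is exactly the property that makes the per-instrument contributions consistent with the total.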
NASA Astrophysics Data System (ADS)
Abyar, Fatemeh; Farrokhpour, Hossein
2014-11-01
The photoelectron spectra of some well-known steroids, important in biology, were calculated in the gas phase. The selected steroids were 5α-androstane-3,11,17-trione, 4-androstane-3,11,17-trione, cortisol, cortisone, corticosterone, dexamethasone, estradiol and cholesterol. The calculations were performed employing the symmetry-adapted cluster/configuration interaction (SAC-CI) method using the 6-311++G(2df,pd) basis set. The population ratios of the conformers of each steroid were calculated and used for simulating the photoelectron spectrum of each steroid. It was found that more than one conformer contributes to the photoelectron spectra of some steroids. To confirm the calculated photoelectron spectra, they were compared with their corresponding experimental spectra. There were no experimental gas-phase He I photoelectron spectra in the literature for some of the steroids of this work, and their calculated spectra can reveal part of the intrinsic characteristics of these molecules in the gas phase. The canonical molecular orbitals involved in the ionization of each steroid were calculated at the HF/6-311++G(d,p) level of theory. The spectral bands of each steroid were assigned by natural bonding orbital (NBO) calculations. Knowing the electronic structures of steroids helps us to understand their biological activities and find which sites of a steroid become active when a modification is performed along a biological pathway.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanguineti, Giuseppe; Califano, Joseph; Stafford, Edward
Purpose: To assess the risk of ipsilateral subclinical neck nodal involvement for early T-stage/node-positive oropharyngeal squamous cell carcinoma. Methods and Materials: Patients undergoing multilevel upfront neck dissection (ND) at Johns Hopkins Hospital within the last 10 years for early clinical T-stage (cT1-2) node-positive (cN+) oropharyngeal squamous cell carcinoma were identified. Pathologic involvement of Levels IB-V was determined. For each nodal level, the negative predictive value of imaging results was computed by using sensitivity/specificity data for computed tomography (CT). This was used to calculate 1 - negative predictive value, or the risk that a negative level on CT harbors subclinical disease. Results: One hundred three patients met the criteria. Radical ND was performed in 14.6%; modified radical ND, in 70.9%; and selective ND, in 14.6%. Pathologic positivity rates were 9.5%, 91.3%, 40.8%, 18.0%, and 3.3% for Levels IB-V, respectively. Risks of subclinical disease despite negative CT imaging results were calculated as 3.1%, 76.3%, 17.5%, 6.3%, and 1.0% for Levels IB-V, respectively. Conclusions: Levels IB and V are at very low (<5%) risk of involvement, even ipsilateral to pathologically proven neck disease; this can guide radiation planning. Levels II and III should be included in high-risk volumes regardless of imaging results, and Level IV should be included within the lowest risk volume.
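The "risk despite negative imaging" quantity used here follows from Bayes' rule. A minimal sketch of the 1 - NPV calculation, where the sensitivity, specificity and prevalence values below are placeholders rather than the paper's CT data:

```python
def risk_if_negative(sensitivity, specificity, prevalence):
    """1 - NPV: probability that a level negative on imaging still
    harbors disease, given test characteristics and the pathologic
    positivity rate (prevalence) of that nodal level."""
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    npv = true_neg / (true_neg + false_neg)
    return 1.0 - npv

# Placeholder CT characteristics and a 40% pathologic positivity rate.
risk = risk_if_negative(sensitivity=0.80, specificity=0.90, prevalence=0.40)
print(risk)
```

The paper applies this level by level, pairing published CT sensitivity/specificity with the observed pathologic positivity rate of each level.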
NASA Astrophysics Data System (ADS)
Cronin, Nigel J.; Clegg, Peter J.
2005-04-01
Microwave Endometrial Ablation (MEA) is a technique that can be used for the treatment of abnormal uterine bleeding. The procedure involves sweeping a specially designed microwave applicator throughout the uterine cavity to achieve an ideally uniform depth of tissue necrosis of between 5 and 6 mm. We have performed a computer analysis of the MEA procedure in which finite element analysis was used to determine the SAR pattern around the applicator. This was followed by a Green's function based solution of the bioheat equation to determine the resulting induced temperatures. The method developed is applicable to situations involving a moving microwave source, as used in MEA. The validity of the simulation was verified by measurements in a tissue phantom material using a purpose-built applicator and a calibrated pulling device. From the calculated temperatures, the depth of necrosis was assessed through integration of the resulting rates of cell death estimated using the Arrhenius equation. The Arrhenius parameters used were derived from published data on BHK cells. Good agreement was seen between the calculated depths of cell necrosis and those found in human in-vivo testing.
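The Arrhenius assessment of necrosis can be sketched as a damage integral Ω(t) = ∫ A·exp(−Ea/(R·T)) dt over the temperature history, with necrosis conventionally taken at Ω ≥ 1. The A and Ea values below are generic thermal-damage placeholders, not the BHK-derived parameters used in the paper.

```python
from math import exp

R_GAS = 8.314  # J/(mol K)

def arrhenius_damage(temps_k, dt, a=3.1e98, ea=6.28e5):
    """Accumulate the Arrhenius damage integral over a sampled
    temperature history temps_k (kelvin), time step dt (seconds).
    a (1/s) and ea (J/mol) are placeholder damage parameters."""
    return sum(a * exp(-ea / (R_GAS * t)) * dt for t in temps_k)

# 60 s at body temperature accrues negligible damage;
# 60 s at 60 C drives Omega far past the necrosis threshold of 1.
baseline = arrhenius_damage([310.15] * 60, dt=1.0)
heated = arrhenius_damage([333.15] * 60, dt=1.0)
```

In the paper the temperature history at each depth comes from the bioheat solution; the necrosis depth is the deepest point where Ω reaches unity.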
NASA Technical Reports Server (NTRS)
Payne, L. L.
1982-01-01
The strength of the bond between optically contacted quartz surfaces was investigated. The Gravity Probe-B (GP-B) experiment to test the theories of general relativity requires extremely precise measurements. The quartz components of the instruments to make these measurements must be held together in a very stable unit. Optical contacting is suggested as a possible method of joining these components. The fundamental forces involved in optical contacting are reviewed, and calculations of these forces are related to the results obtained in experiments.
Energy system contribution to 400-metre and 800-metre track running.
Duffield, Rob; Dawson, Brian; Goodman, Carmel
2005-03-01
As a wide range of values has been reported for the relative energetics of 400-m and 800-m track running events, this study aimed to quantify the respective aerobic and anaerobic energy contributions to these events during track running. Sixteen trained 400-m (11 males, 5 females) and 11 trained 800-m (9 males, 2 females) athletes participated in this study. The participants performed (on separate days) a laboratory graded exercise test and multiple race time-trials. The relative energy system contribution was calculated by multiple methods based upon measures of race VO2, accumulated oxygen deficit (AOD), blood lactate and estimated phosphocreatine degradation (lactate/PCr). The aerobic/anaerobic energy system contribution (AOD method) to the 400-m event was calculated as 41/59% (male) and 45/55% (female). For the 800-m event, an increased aerobic involvement was noted with a 60/40% (male) and 70/30% (female) respective contribution. Significant (P < 0.05) negative correlations were noted between race performance and anaerobic energy system involvement (lactate/PCr) for the male 800-m and female 400-m events (r = - 0.77 and - 0.87 respectively). These track running data compare well with previous estimates of the relative energy system contributions to the 400-m and 800-m events. Additionally, the relative importance and speed of interaction of the respective metabolic pathways has implications for training for these events.
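Once race VO2 and the accumulated oxygen deficit are expressed in common oxygen-equivalent units, the AOD-based split reduces to simple arithmetic. A minimal sketch with invented oxygen equivalents (not the study's measurements):

```python
def energy_split(aerobic_o2_eq, anaerobic_o2_eq):
    """Relative aerobic/anaerobic contribution (percent) from
    oxygen-equivalent energy totals, as in the AOD method."""
    total = aerobic_o2_eq + anaerobic_o2_eq
    aero = 100.0 * aerobic_o2_eq / total
    return aero, 100.0 - aero

# Invented oxygen equivalents (litres O2) for a 400-m effort.
aero_pct, anaero_pct = energy_split(4.1, 5.9)
print(f"aerobic {aero_pct:.0f}% / anaerobic {anaero_pct:.0f}%")
```

The study's lactate/PCr method differs only in how the anaerobic term is estimated; the final percentage split is computed the same way.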
Gluing for Raman lidar systems using the lamp mapping technique.
Walker, Monique; Venable, Demetrius; Whiteman, David N
2014-12-20
In the context of combined analog and photon counting (PC) data acquisition in a lidar system, glue coefficients are defined as constants used for converting an analog signal into a virtual PC signal. The coefficients are typically calculated using lidar profile data taken under clear, nighttime conditions since, in the presence of clouds or high solar background, it is difficult to obtain accurate glue coefficients from lidar backscattered data. Here we introduce a new method in which we use the lamp mapping technique (LMT) to determine glue coefficients in a manner that does not require atmospheric profiles to be acquired and permits accurate glue coefficients to be calculated when adequate lidar profile data are not available. The LMT involves scanning a halogen lamp over the aperture of a lidar receiver telescope such that the optical efficiency of the entire detection system is characterized. The studies shown here involve two Raman lidar systems: the first from Howard University and the second from NASA/Goddard Space Flight Center. The glue coefficients determined using the LMT and the lidar backscatter method agreed within 1.2% for the water vapor channel and within 2.5% for the nitrogen channel for both lidar systems. We believe this to be the first instance of the use of laboratory techniques for determining the glue coefficients for lidar data analysis.
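Whatever the source of the calibration data (atmospheric profiles or the LMT), the gluing step itself is a linear mapping from analog signal to virtual photon counts, fitted over the range where both channels respond linearly. A sketch with synthetic data; the coefficient values are invented for illustration:

```python
def fit_glue(analog, counts):
    """Least-squares fit counts ~ a*analog + b over the linear range,
    returning the glue coefficients (a, b). Plain normal equations."""
    n = len(analog)
    sx = sum(analog)
    sy = sum(counts)
    sxx = sum(x * x for x in analog)
    sxy = sum(x * y for x, y in zip(analog, counts))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic linear-range data: counts = 250*analog + 3 (invented values).
analog = [0.1 * i for i in range(1, 20)]
counts = [250.0 * x + 3.0 for x in analog]
a, b = fit_glue(analog, counts)
virtual_pc = [a * x + b for x in analog]  # glued (virtual PC) signal
```

The practical difficulty the paper addresses is obtaining clean paired samples for this fit, which is where the lamp scan replaces atmospheric profiles.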
Fourth-order self-energy contribution to the two loop Lamb shift
NASA Astrophysics Data System (ADS)
Palur Mallampalli, Subrahmanyam
1998-11-01
The calculation of the two loop Lamb shift in hydrogenic ions involves the numerical evaluation of ten Feynman diagrams. In this thesis, four fourth-order Feynman diagrams including the pure self-energy contributions are evaluated using exact Dirac-Coulomb propagators, so that higher order binding corrections can be extracted by comparing with the known terms in the Z/alpha expansion. The entire calculation is performed in Feynman gauge. One of the vacuum polarization diagrams is evaluated in the Uehling approximation. At low Z, it is seen to be perturbative in Z/alpha, while new predictions for high Z are made. The calculation of the three self-energy diagrams is reorganized into four terms, which we call the PO, M, F and P terms. The PO term is separately gauge invariant while the latter three form a gauge invariant set. The PO term is shown to exhibit the most non-perturbative behavior yet encountered in QED at low Z, so much so that even at Z = 1, the complete result is of the opposite sign as that of the leading term in its Z/alpha expansion. At high Z, we agree with an earlier calculation. The analysis of ultraviolet divergences in the two loop self-energy is complicated by the presence of subdivergences. All divergences except the self-mass are shown to cancel. The self-mass is then removed by a self-mass counterterm. Parts of the calculation are shown to contain reference state singularities, that finally cancel. A numerical regulator to handle these singularities is described. The M term, an ultraviolet finite quantity, is defined through a subtraction scheme in coordinate space. Being computationally intensive, it is evaluated only at high Z, specifically Z = 83 and 92. The F term involves the evaluation of several Feynman diagrams with free electron propagators. These are computed for a range of values of Z.
The P term, also ultraviolet finite, involves Dirac-Coulomb propagators that are best defined in coordinate space, as well as functions associated with the one loop self-energy that are best defined in momentum space. Possible methods of evaluating the P term are discussed.
WE-A-BRE-01: Debate: To Measure or Not to Measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, J; Miften, M; Mihailidis, D
2014-06-15
Recent studies have highlighted some of the limitations of patient-specific pre-treatment IMRT QA measurements with respect to assessing plan deliverability. Pre-treatment QA measurements are frequently performed with detectors in phantoms that do not involve any patient heterogeneities or with an EPID without a phantom. Other techniques have been developed where measurement results are used to recalculate the patient-specific dose volume histograms. Measurements continue to play a fundamental role in understanding the initial and continued performance of treatment planning and delivery systems. Less attention has been focused on the role of computational techniques in a QA program such as calculation with independent dose calculation algorithms or recalculation of the delivery with machine log files or EPID measurements. This session will explore the role of pre-treatment measurements compared to other methods such as computational and transit dosimetry techniques. Efficiency and practicality of the two approaches will also be presented and debated. The speakers will present a history of IMRT quality assurance and debate each other regarding which types of techniques are needed today and for future quality assurance. Examples will be shared of situations where overall quality needed to be assessed with calculation techniques in addition to measurements. Elements where measurements continue to be crucial such as for a thorough end-to-end test involving measurement will be discussed. Operational details that can reduce the gamma tool effectiveness and accuracy for patient-specific pre-treatment IMRT/VMAT QA will be described. Finally, a vision for the future of IMRT and VMAT plan QA will be discussed from a safety perspective.
Learning Objectives: Understand the advantages and limitations of measurement and calculation approaches for pre-treatment measurements for IMRT and VMAT planning. Learn about the elements of a balanced quality assurance program involving modulated techniques. Learn how to use tools and techniques such as an end-to-end test to enhance your IMRT and VMAT QA program.
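One computational workhorse referenced in the debate is the gamma comparison. A minimal 1D gamma-index sketch, not any clinical implementation: the dose profiles, grid spacing and 3%/3 mm criteria below are illustrative assumptions.

```python
def gamma_1d(ref, evaluated, spacing, dose_tol, dist_tol):
    """1D global gamma index: for each reference point, minimise the
    combined dose-difference/distance-to-agreement metric over all
    evaluated points. Gamma <= 1 means the point passes."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(evaluated):
            dd = (d_ev - d_ref) / dose_tol          # dose axis, scaled
            dx = (j - i) * spacing / dist_tol       # distance axis, scaled
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        gammas.append(best)
    return gammas

ref = [0.0, 0.5, 1.0, 0.5, 0.0]
shifted = [0.0, 0.0, 0.5, 1.0, 0.5]   # same profile, shifted one pixel
g = gamma_1d(ref, shifted, spacing=1.0, dose_tol=0.03, dist_tol=3.0)
# A pure 1 mm shift passes a 3%/3 mm criterion (all gamma <= 1).
```

The "operational details" mentioned above (grid resolution, normalization, search radius, low-dose thresholding) all enter through how ref, evaluated and the tolerances are prepared before this core loop.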
Advanced Neutronics Tools for BWR Design Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Hfaiedh, N.; Letellier, R.
2006-07-01
This paper summarizes the developments implemented in the new APOLLO2.8 neutronics tool to meet the required target accuracy in LWR applications, particularly void effects and pin-by-pin power map in BWRs. The Method Of Characteristics was developed to allow efficient LWR assembly calculations in 2D-exact heterogeneous geometry; resonant reaction calculation was improved by the optimized SHEM-281 group mesh, which avoids resonance self-shielding approximation below 23 eV, and the new space-dependent method for resonant mixtures that accounts for resonance overlapping. Furthermore, a new library CEA2005, processed from JEFF3.1 evaluations involving feedback from Critical Experiments and LWR P.I.E., is used. The specific '2005-2007 BWR Plan' established to demonstrate the validation/qualification of this neutronics tool is described. Some results from the validation process are presented: the comparison of APOLLO2.8 results to reference Monte Carlo TRIPOLI4 results on specific BWR benchmarks emphasizes the ability of the deterministic tool to calculate the BWR assembly multiplication factor within 200 pcm accuracy for void fractions varying from 0 to 100%. The qualification process against the BASALA mock-up experiment stresses APOLLO2.8/CEA2005 performances: pin-by-pin power is always predicted within 2% accuracy, and the reactivity worth of B4C or Hf cruciform control blades, as well as Gd pins, is predicted within 1.2% accuracy. (authors)
JTHERGAS: Thermodynamic Estimation from 2D Graphical Representations of Molecules
Blurock, Edward; Warth, V.; Grandmougin, X.; Bounaceur, R.; Glaude, P.A.; Battin-Leclerc, F.
2013-01-01
JTHERGAS is a versatile calculator (implemented in Java) to estimate thermodynamic information from two-dimensional graphical representations of molecules and radicals involving covalent bonds, based on the Benson additivity method. The versatility of JTHERGAS stems from its inherent philosophy that all the fundamental data used in the calculation should be visible, to see exactly where the final values came from, and modifiable, to account for new data that can appear in the literature. The main use of this method is within automatic combustion mechanism generation systems, where fast estimation of a large number and variety of chemical species is needed. The implementation strategy is based on meta-atom definitions and substructure analysis, allowing a highly extensible database without modification of the core algorithms. Several interfaces for the database and the calculations are provided, from terminal line commands, to graphical interfaces, to web services. The first-order estimation of thermodynamics is based on summing the contributions of each heavy-atom bonding description. Second-order corrections due to steric hindrance and ring strain are made. Automatic estimates of contributions due to internal, external and optical symmetries are also made. The thermodynamic data for radicals are calculated by taking the difference due to the loss of a hydrogen radical, taking into account changes in symmetry, spin, rotations, vibrations and steric hindrance. The software is public domain and is based on standard libraries such as CDK and CML. PMID:23761949
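The first-order additivity step can be sketched in a few lines. The group identifiers and contribution values below are illustrative Benson-style placeholders, not entries from the JTHERGAS database:

```python
# Illustrative Benson-style group contributions to gas-phase
# enthalpy of formation (kJ/mol). Placeholder values for the sketch.
GROUPS = {
    "C-(C)(H)3": -42.2,    # primary carbon bonded to one carbon
    "C-(C)2(H)2": -20.6,   # secondary carbon bonded to two carbons
}

def benson_dhf(group_counts):
    """First-order estimate: sum the contribution of each
    heavy-atom bonding description times its multiplicity."""
    return sum(GROUPS[g] * n for g, n in group_counts.items())

# n-butane decomposes into 2 x CH3 groups and 2 x CH2 groups.
dhf_butane = benson_dhf({"C-(C)(H)3": 2, "C-(C)2(H)2": 2})
```

In the full method this sum is then corrected for ring strain, steric hindrance and symmetry, and radical values follow by differencing against the parent molecule, as described in the abstract.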
Teif, Vladimir B
2007-01-01
The transfer matrix methodology is proposed as a systematic tool for the statistical-mechanical description of DNA-protein-drug binding involved in gene regulation. We show that a genetic system of several cis-regulatory modules is calculable using this method, considering explicitly the site-overlapping, competitive, cooperative binding of regulatory proteins, their multilayer assembly and DNA looping. In the methodological section, the matrix models are solved for the basic types of short- and long-range interactions between DNA-bound proteins, drugs and nucleosomes. We apply the matrix method to gene regulation at the O(R) operator of phage lambda. The transfer matrix formalism allowed the description of the lambda-switch at a single-nucleotide resolution, taking into account the effects of a range of inter-protein distances. Our calculations confirm previously established roles of the contact CI-Cro-RNAP interactions. Concerning long-range interactions, we show that while the DNA loop between the O(R) and O(L) operators is important at the lysogenic CI concentrations, the interference between the adjacent promoters P(R) and P(RM) becomes more important at small CI concentrations. A large change in the expression pattern may arise in this regime due to anticooperative interactions between DNA-bound RNA polymerases. The applicability of the matrix method to more complex systems is discussed.
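The transfer matrix idea can be sketched on the simplest case it generalizes: a 1D lattice of binding sites with nearest-neighbour cooperativity, where the partition function is a product of 2x2 matrices. The statistical weights below are invented for the example; the paper's matrices additionally encode site overlap, multilayer assembly and looping.

```python
def partition_function(n_sites, k, w):
    """Z for n binding sites: state 0 = free, state 1 = bound with
    statistical weight k, and cooperativity factor w between bound
    neighbours. T[s][s2] carries the weight of site s2 given state s."""
    t = [[1.0, k],
         [1.0, k * w]]
    vec = [1.0, k]  # weights of the first site's two states
    for _ in range(n_sites - 1):
        vec = [vec[0] * t[0][0] + vec[1] * t[1][0],
               vec[0] * t[0][1] + vec[1] * t[1][1]]
    return sum(vec)

def brute_force(n_sites, k, w):
    """Direct enumeration over all 2**n configurations, for checking."""
    z = 0.0
    for conf in range(2 ** n_sites):
        bits = [(conf >> i) & 1 for i in range(n_sites)]
        weight = k ** sum(bits)
        weight *= w ** sum(bits[i] * bits[i + 1]
                           for i in range(n_sites - 1))
        z += weight
    return z
```

The matrix product evaluates in O(n) what enumeration does in O(2^n), which is what makes single-nucleotide-resolution models of an operator region tractable.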
Teif, Vladimir B.
2007-01-01
The transfer matrix methodology is proposed as a systematic tool for the statistical–mechanical description of DNA–protein–drug binding involved in gene regulation. We show that a genetic system of several cis-regulatory modules is calculable using this method, considering explicitly the site-overlapping, competitive, cooperative binding of regulatory proteins, their multilayer assembly and DNA looping. In the methodological section, the matrix models are solved for the basic types of short- and long-range interactions between DNA-bound proteins, drugs and nucleosomes. We apply the matrix method to gene regulation at the OR operator of phage λ. The transfer matrix formalism allowed the description of the λ-switch at a single-nucleotide resolution, taking into account the effects of a range of inter-protein distances. Our calculations confirm previously established roles of the contact CI–Cro–RNAP interactions. Concerning long-range interactions, we show that while the DNA loop between the OR and OL operators is important at the lysogenic CI concentrations, the interference between the adjacent promoters PR and PRM becomes more important at small CI concentrations. A large change in the expression pattern may arise in this regime due to anticooperative interactions between DNA-bound RNA polymerases. The applicability of the matrix method to more complex systems is discussed. PMID:17526526
Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
NASA Astrophysics Data System (ADS)
Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.
2016-03-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
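The kernel splitting at the heart of multilevel summation can be sketched for the 1/r Coulomb kernel: beyond a cutoff a only the smooth long-range part remains, while inside the cutoff a softened even polynomial takes over, keeping the split exact for r >= a. The particular Taylor-style softening polynomial below is one common choice, used here purely as an illustration.

```python
def gamma_smooth(rho):
    """Softened kernel: equals 1/rho for rho >= 1 and a C^2-matched
    even polynomial in rho inside (one standard MSM-style choice)."""
    if rho >= 1.0:
        return 1.0 / rho
    r2 = rho * rho
    # Matches value (1), slope (-1) and curvature (2) of 1/rho at rho = 1.
    return 15.0 / 8.0 - 5.0 / 4.0 * r2 + 3.0 / 8.0 * r2 * r2

def split_coulomb(r, a):
    """Split 1/r into a short-range part (zero beyond a, handled
    directly) and a smooth long-range part (interpolated from grids)."""
    long_range = gamma_smooth(r / a) / a
    short_range = 1.0 / r - long_range
    return short_range, long_range
```

The hierarchy in the method comes from applying this split recursively with growing cutoffs, each smoother remainder being interpolated from a coarser grid; the B-spline work in the paper concerns the accuracy of that interpolation.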
Red Blood Cell Mechanical Fragility Test for Clinical Research Applications.
Ziegler, Luke A; Olia, Salim E; Kameneva, Marina V
2017-07-01
Red blood cell (RBC) susceptibility to mechanically induced hemolysis, or RBC mechanical fragility (MF), is an important parameter in the characterization of erythrocyte membrane health. The rocker bead test (RBT) and associated calculated mechanical fragility index (MFI) is a simple method for the assessment of RBC MF. Requiring a minimum of 15.5 mL of blood and necessitating adjustment of hematocrit (Ht) to a "standard" value (40%), the current RBT is not suitable for use in most studies involving human subjects. To address these limitations, we propose a 6.5 mL reduced volume RBT and corresponding modified MFI (MMFI) that does not require prior Ht adjustment. This new method was assessed by i) correlating it with the existing test, ii) quantifying the effect of Ht on MFI, and iii) validating it by reexamining the protective effect of plasma proteins on RBC MF. The reduced volume RBT strongly correlated (r = 0.941) with the established large volume RBT at matched Hts, and an equation was developed to calculate MMFI: a numerical estimation (R2 = 0.923) of the MFI that would be obtained with the reduced volume RBT at "standard" (40%) Ht. An inversely proportional relationship was found between plasma protein concentration and RBC MF using the MMFI-reduced volume method, supporting previous literature findings. The new reduced volume RBT and modified MFI will allow for the measurement of RBC MF in clinical and preclinical studies involving humans or small animals. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Meirovitch, Hagai
2010-01-01
The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P(i)(B), while the value of P(i)(B) is not provided directly; therefore, it is difficult to obtain the absolute entropy, S approximately -ln P(i)(B), and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members have rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea also applies to bulk systems such as fluids or magnets. This approach has led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated.
Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions. Copyright © 2009 John Wiley & Sons, Ltd.
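The central idea above, entropy from the product of construction transition probabilities, can be sketched on a toy chain (the two-state TPs below are invented for the example): each construction has probability P equal to the product of its TPs, and S is estimated as -(ln P) averaged over sampled constructions.

```python
import random
from math import log

def sequence_log_prob(seq, tps):
    """ln P of a constructed sequence: sum of ln(TP) over the steps.
    tps[None] holds the first-step probabilities."""
    lp = log(tps[None][seq[0]])
    for a, b in zip(seq, seq[1:]):
        lp += log(tps[a][b])
    return lp

def entropy_estimate(tps, n_steps, n_samples, rng):
    """Monte Carlo estimate of S = -<ln P> over step-by-step
    constructions drawn from the transition probabilities."""
    total = 0.0
    for _ in range(n_samples):
        state = rng.choices(list(tps[None]),
                            weights=tps[None].values())[0]
        seq = [state]
        for _ in range(n_steps - 1):
            state = rng.choices(list(tps[state]),
                                weights=tps[state].values())[0]
            seq.append(state)
        total += -sequence_log_prob(seq, tps)
    return total / n_samples

# Toy uniform two-state chain: every 5-step sequence has P = 2**-5,
# so the entropy is exactly 5*ln(2) regardless of sampling.
tps = {None: {"A": 0.5, "B": 0.5},
       "A": {"A": 0.5, "B": 0.5},
       "B": {"A": 0.5, "B": 0.5}}
s_est = entropy_estimate(tps, n_steps=5, n_samples=20,
                         rng=random.Random(42))
```

In HSMC(D) the TPs are not given in advance but reconstructed for existing MC/MD configurations; the bookkeeping of S from their product is the same.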
Swerts, Ben; Chibotaru, Liviu F; Lindh, Roland; Seijo, Luis; Barandiaran, Zoila; Clima, Sergiu; Pierloot, Kristin; Hendrickx, Marc F A
2008-04-01
In this article, we present a fragment model potential approach for the description of the crystalline environment as an extension of the use of embedding ab initio model potentials (AIMPs). The biggest limitation of the embedding AIMP method is the spherical nature of its model potentials. This poses problems as soon as the method is applied to crystals containing strongly covalently bonded structures with highly nonspherical electron densities. The newly proposed method addresses this problem by keeping the full electron density as its model potential, thus allowing one to group sets of covalently bonded atoms into fragments. The implementation in the MOLCAS 7.0 quantum chemistry package of the new method, which we call the embedding fragment ab initio model potential method (embedding FAIMP), is reported here, together with results of CASSCF/CASPT2 calculations. The developed methodology is applied to two test problems: (i) the investigation of the lowest ligand field states (2)A1 and (2)B1 of the Cr(V) defect in the YVO4 crystal and (ii) the investigation of the lowest ligand field and ligand-metal charge transfer (LMCT) states at the Mn(II) substitutional impurity doped into CaCO3. Comparison with similar calculations involving AIMPs for all environmental atoms, including those from covalently bonded units, shows that the FAIMP treatment of the YVO4 units surrounding the CrO4(3-) cluster increases the excitation energy (2)B1 → (2)A1 by ca. 1000 cm(-1) at the CASSCF level of calculation. In the case of the Mn(CO3)6(10-) cluster, the FAIMP treatment of the CO3(2-) units of the environment gives smaller corrections, of ca. 100 cm(-1), for the ligand-field excitation energies, which is explained by the larger ligands of this cluster. However, the correction for the energy of the lowest LMCT transition is found to be ca. 600 cm(-1) for the CASSCF and ca. 1300 cm(-1) for the CASPT2 calculation.
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Assessment of computational prediction of tail buffeting
NASA Technical Reports Server (NTRS)
Edwards, John W.
1990-01-01
Assessments of the viability of computational methods and the computer resource requirements for the prediction of tail buffeting are made. Issues involved in the use of Euler and Navier-Stokes equations in modeling vortex-dominated and buffet flows are discussed and the requirement for sufficient grid density to allow accurate, converged calculations is stressed. Areas in need of basic fluid dynamics research are highlighted: vorticity convection, vortex breakdown, dynamic turbulence modeling for free shear layers, unsteady flow separation for moderately swept, rounded leading-edge wings, and vortex flows about wings at high subsonic speeds. An estimate of the computer run time for a buffeting response calculation for a full span F-15 aircraft indicates that an improvement in computer and/or algorithm efficiency of three orders of magnitude is needed to enable routine use of such methods. Attention is also drawn to significant uncertainties in the estimates, in particular with regard to nonlinearities contained within the modeling and the question of the repeatability or randomness of buffeting response.
Technique for Evaluating the Erosive Properties of Ablative Internal Insulation Materials
NASA Technical Reports Server (NTRS)
McComb, J. C.; Hitner, J. M.
1989-01-01
A technique for determining the average erosion rate versus Mach number of candidate internal insulation materials was developed for flight motor applications in 12 inch I.D. test firing hardware. The method involved the precision mounting of a mechanical measuring tool within a conical test cartridge fabricated from either a single insulation material or two non-identical materials, each of which constituted one half of the test cartridge cone. Comparison of the internal radii measured at nine longitudinal locations and between eight and thirty-two azimuths, depending on the regularity of the erosion pattern before and after test firing, permitted calculation of the average erosion rate and Mach number. Systematic criteria were established for identifying erosion anomalies such as the formation of localized ridges and for excluding such anomalies from the calculations. The method is discussed and results presented for several asbestos-free materials developed in-house for the internal motor case insulation in solid propellant rocket motors.
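The averaging-with-exclusion step can be sketched numerically: given pre- and post-fire internal radii over the azimuths at one station, the mean erosion rate is the mean radius change divided by burn time, with anomalous azimuths (localized ridges) excluded. The deviation criterion and all numbers below are invented placeholders, not the paper's systematic criteria.

```python
def average_erosion_rate(r_pre, r_post, burn_time, max_dev=3.0):
    """Mean erosion rate (radius units per second) at one station.
    Azimuths whose radius change deviates from the median by more
    than max_dev times the median absolute deviation are excluded
    as anomalies (placeholder criterion for the sketch)."""
    deltas = [post - pre for pre, post in zip(r_pre, r_post)]
    srt = sorted(deltas)
    median = srt[len(srt) // 2]
    mad = sorted(abs(d - median) for d in deltas)[len(deltas) // 2]
    keep = [d for d in deltas
            if mad == 0 or abs(d - median) <= max_dev * mad]
    return sum(keep) / (len(keep) * burn_time)

# Eight azimuths at one station, one ridge anomaly
# (invented numbers; inches and seconds).
pre = [6.00] * 8
post = [6.10, 6.11, 6.09, 6.10, 6.55, 6.10, 6.11, 6.09]  # 6.55: ridge
rate = average_erosion_rate(pre, post, burn_time=50.0)
```

Repeating this per longitudinal station, each of which sees a different local Mach number, yields the erosion-rate-versus-Mach-number curve the technique is after.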
Optimum quantum receiver for detecting weak signals in PAM communication systems
NASA Astrophysics Data System (ADS)
Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar
2017-09-01
This paper deals with the modeling of an optimum quantum receiver for pulse amplitude modulation (PAM) communication systems. The information-bearing sequence {I_k}_{k=0}^{N-1} is estimated using the maximum likelihood (ML) method. The ML method is based on quantum mechanical measurements of an observable X in the Hilbert space of the quantum system at discrete times, when the Hamiltonian of the system is perturbed by an operator obtained by modulating a potential V with a PAM signal derived from the information-bearing sequence {I_k}_{k=0}^{N-1}. The measurement process at each time instant causes collapse of the system state to an eigenstate of the observable. The probabilities of obtaining the different measurement outcomes are calculated using the perturbed evolution operator combined with the collapse postulate. For the given probability densities, calculation of the mean square error evaluates the performance of the receiver. Finally, we present an example involving the estimation of an information-bearing sequence that modulates a quantum electromagnetic field incident on a quantum harmonic oscillator.
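The classical-channel analogue of the ML decision and MSE scoring can be sketched as follows. This is not the paper's quantum formalism: under additive Gaussian noise, ML detection of a PAM symbol reduces to picking the nearest constellation point. All values are illustrative.

```python
# Classical-channel analogue only (the paper's receiver is quantum): under
# additive Gaussian noise, ML detection of a PAM symbol reduces to picking
# the nearest constellation point, and performance is scored by the mean
# square error of the estimated sequence. All values are illustrative.

def ml_detect(measurements, constellation):
    """Nearest-point (maximum-likelihood) decision for each sample."""
    return [min(constellation, key=lambda s: (m - s) ** 2) for m in measurements]

def mean_square_error(estimates, truth):
    return sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth)

pam4 = [-3, -1, 1, 3]
true_seq = [1, -3, 3, -1]
received = [1.2, -2.7, 2.6, -0.4]   # noisy samples
est = ml_detect(received, pam4)
```

In the quantum setting the outcome probabilities come from the perturbed evolution operator rather than a Gaussian density, but the estimator-plus-MSE structure is the same.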
Estimating zero-g flow rates in open channels having capillary pumping vanes
NASA Astrophysics Data System (ADS)
Srinivasan, Radhakrishnan
2003-02-01
In vane-type surface tension propellant management devices (PMD) commonly used in satellite fuel tanks, the propellant is transported along guiding vanes from a reservoir at the inlet of the device to a sump at the outlet from where it is pumped to the satellite engine. The pressure gradient driving this free-surface flow under zero-gravity (zero-g) conditions is generated by surface tension and is related to the differential curvatures of the propellant-gas interface at the inlet and outlet of the PMD. A new semi-analytical procedure is prescribed for accurately calculating the extremely small fuel flow rates under reasonably idealized conditions. Convergence of the algorithm is demonstrated by detailed numerical calculations. Owing to the substantial cost and the technical hurdles involved in accurately estimating these minuscule flow rates by either direct numerical simulation or by experimental methods which simulate zero-g conditions in the lab, it is expected that the proposed method will be an indispensable tool in the design and operation of satellite fuel tanks.
Stucke, Kathrin; Kieser, Meinhard
2012-12-10
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
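The flavor of the underlying normal-approximation calculation can be sketched for the simplest case. This is a generic two-arm sketch with invented parameter values, not the paper's three-arm method, which additionally exploits the correlation structure of the test statistics and optimizes the allocation ratio.

```python
# A generic two-arm, normal-approximation sample-size sketch only; the
# paper's three-arm method accounts for the correlation of the involved
# test statistics and optimal allocation, giving smaller total sizes.
from statistics import NormalDist
from math import ceil

def n_per_group(sigma, margin, alpha=0.025, power=0.8):
    """Per-group n for a two-arm non-inferiority test of means, 1:1 allocation."""
    z = NormalDist().inv_cdf
    return ceil(2 * (sigma * (z(1 - alpha) + z(power)) / margin) ** 2)

n = n_per_group(sigma=1.0, margin=0.5)   # classic textbook configuration
```

With sigma = 1 and a margin of 0.5 this gives the familiar 63 patients per group at 80% power.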
Stiffness optimization of non-linear elastic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. As more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who retained positive opinions was used to determine positive influence. The corresponding influential users who had the maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, indicating better performance compared with the baseline methods.
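The plain independent cascade (IC) model that IMIC-OC builds on can be sketched as follows; the opinion-balancing dynamics of IMIC-OC itself are not reproduced here, and all names are illustrative.

```python
# Sketch of the plain independent cascade (IC) model that IMIC-OC extends;
# the opinion-balancing dynamics of IMIC-OC itself are not reproduced here.
import random

def simulate_cascade(graph, seeds, p=0.1, rng=random):
    """graph: dict node -> list of neighbours. Returns the activated set."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        newly = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node gets one chance to activate v
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly.append(v)
        frontier = newly
    return active

def estimate_spread(graph, seeds, p=0.1, runs=1000, seed=0):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(seed)
    return sum(len(simulate_cascade(graph, seeds, p, rng)) for _ in range(runs)) / runs
```

A greedy influence-maximization loop would repeatedly add to the seed set whichever node gives the largest marginal gain in `estimate_spread`.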
Compensation of flare-induced CD changes in EUVL
Bjorkholm, John E [Pleasanton, CA; Stearns, Daniel G [Los Altos, CA; Gullikson, Eric M [Oakland, CA; Tichenor, Daniel A [Castro Valley, CA; Hector, Scott D [Oakland, CA
2004-11-09
A method for compensating for flare-induced critical dimension (CD) changes in photolithography. Changes in the flare level result in undesirable CD changes. The method, when used in extreme ultraviolet (EUV) lithography, essentially eliminates the unwanted CD changes. The method is based on the recognition that the intrinsic level of flare for an EUV camera (the flare level for an isolated sub-resolution opaque dot in a bright field mask) is essentially constant over the image field. The method involves calculating the flare and its variation over the area of a patterned mask that will be imaged, and then using mask biasing to largely eliminate the CD variations that the flare and its variations would otherwise cause. This method would be difficult to apply to optical or DUV lithography, since the intrinsic flare for those lithographies is not constant over the image field.
Speech-Message Extraction from Interference Introduced by External Distributed Sources
NASA Astrophysics Data System (ADS)
Kanakov, V. A.; Mironov, N. A.
2017-08-01
This study addresses the extraction of a speech signal originating from a given spatial point and the calculation of the intelligibility of the extracted voice message. The problem is solved by a method that decreases the influence of interfering speech-message sources on the extracted signal. This method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test objects during the studies. It is shown that an increase in the number of microphones improves the intelligibility of the extracted speech message.
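The coordinate-dependent-delay idea described above is, in its simplest form, delay-and-sum beamforming, which can be sketched as follows (integer sample delays, invented data):

```python
# Minimal delay-and-sum sketch of the method described: each recording
# channel is advanced by the propagation delay (in whole samples) from the
# chosen spatial point to that microphone, then the channels are averaged.
# Signals from the focus point add coherently; interferers do not.

def delay_and_sum(channels, delays_samples):
    """channels: equal-length lists of samples; one integer delay per channel."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays_samples):
        for i in range(n):
            j = i + d
            if 0 <= j < n:
                out[i] += ch[j]
    return [x / len(channels) for x in out]

# A pulse reaching microphone 2 one sample later than microphone 1:
mic1 = [0.0, 1.0, 0.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0]
focused = delay_and_sum([mic1, mic2], [0, 1])
```

Adding more microphones sharpens the spatial focus, which is consistent with the reported intelligibility improvement.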
Graphical evaluation of complexometric titration curves.
Guinon, J L
1985-04-01
A graphical method, based on logarithmic concentration diagrams, for construction, without any calculations, of complexometric titration curves is examined. The titration curves obtained for different kinds of unidentate, bidentate and quadridentate ligands clearly show why only chelating ligands are usually used in titrimetric analysis. The method has also been applied to two practical cases where unidentate ligands are used: (a) the complexometric determination of mercury(II) with halides and (b) the determination of cyanide with silver, which involves both a complexation and a precipitation system; for this purpose construction of the diagrams for the HgCl(2)/HgCl(+)/Hg(2+) and Ag(CN)(2)(-)/AgCN/CN(-) systems is considered in detail.
Slicken 1.0: Program for calculating the orientation of shear on reactivated faults
NASA Astrophysics Data System (ADS)
Xu, Hong; Xu, Shunshan; Nieto-Samaniego, Ángel F.; Alaniz-Álvarez, Susana A.
2017-07-01
The slip vector on a fault is an important parameter in the study of the movement history of a fault and its faulting mechanism. Although many graphical programs exist to represent the shear stress (or slickenline) orientations on faults, programs that quantitatively calculate the orientation of fault slip from a given stress field are scarce. Consequently, we developed Slicken 1.0, a software program to rapidly calculate the orientation of maximum shear stress on any fault plane. For this direct method of calculating the resolved shear stress on a planar surface, the input data are the unit vector normal to the involved plane, the unit vectors of the three principal stress axes, and the stress ratio. The advantage of this program is that the vertical or horizontal principal stresses are not necessarily required. Owing to its nimble design in Java SE 8.0, it runs on most operating systems with the corresponding Java VM. The program will be practical for geoscience students, geologists and engineers, and will help address a deficiency in field, structural and engineering geology.
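The direct calculation the program automates can be sketched from the stated inputs (this is a generic Cauchy-traction sketch in Python, not Slicken's Java code; names are illustrative): resolve the traction on the plane in the principal frame, then subtract its normal component to get the shear vector.

```python
# Sketch of the direct resolved-shear calculation: given the three principal
# stress directions (unit vectors), their magnitudes, and a fault-plane
# normal, form the traction and take its in-plane (shear) component.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def resolved_shear(normal, axes, magnitudes):
    """axes: the three principal stress unit vectors; magnitudes: sigma1..3."""
    # Traction on the plane: t = sum_i sigma_i (n . s_i) s_i
    t = [0.0, 0.0, 0.0]
    for s, mag in zip(axes, magnitudes):
        c = mag * dot(normal, s)
        t = [ti + c * si for ti, si in zip(t, s)]
    tn = dot(t, normal)  # normal-stress component
    # Subtract the normal part; what remains lies in the plane (the shear).
    return [ti - tn * ni for ti, ni in zip(t, normal)]

# A 45-degree plane between s1 (x) and s3 (z), principal stresses 3, 2, 1:
n = [1 / math.sqrt(2), 0.0, 1 / math.sqrt(2)]
axes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
tau = resolved_shear(n, axes, [3.0, 2.0, 1.0])
```

For this 45-degree plane the shear magnitude equals (sigma1 - sigma3)/2 = 1, the classical maximum, which is a handy sanity check.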
Effects of relativity on RTEX in collisions of U{sup q+} with light targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Mau Hsiung.
1990-11-07
We have calculated the resonant transfer and excitation cross sections in collisions of U{sup q+} (q = 82, 89, 90) ions with H{sub 2}, He and C in the impulse approximation using the multi-configuration Dirac-Fock method. The calculations were carried out in intermediate coupling with configuration interaction. Quantum electrodynamic and finite nuclear size corrections were included in the calculations of transition energies. The Auger rates were calculated including the contributions from the Coulomb as well as the transverse Breit interactions. For U{sup 89+} and U{sup 90+}, the effects of relativity not only shift the peak positions but also change the peak structure. The total dielectronic recombination strength was found to increase by 50% due to the effects of relativity. The present theoretical RTEX cross sections for U{sup 90+} in hydrogen agree well with experiment. For U{sup 82+}, the Breit interaction was found to have little effect on the RTEX cross sections involving L-shell excitation. However, the spin-orbit interaction can still make significant changes in the peak structure. 24 refs., 4 figs.
Risk Assessment During the Final Phase of an Uncontrolled Re-Entry
NASA Astrophysics Data System (ADS)
Gaudel, A.; Hourtolle, C.; Goester, J. F.; Fuentes, N.
2013-09-01
As the French national space agency, CNES is empowered to monitor compliance with the technical regulations of the French Space Operation Act (FSOA) and to take all necessary measures to ensure the safety of people, property, public health and the environment for all space operations involving French responsibility at the international level. To this end, CNES developed ELECTRA, which calculates the risk for the ground population involved in three types of events: rocket launching, controlled re-entry and uncontrolled re-entry. For the first two cases, ELECTRA takes into account degraded cases due to a premature stop of propulsion. Major evolutions were implemented recently in ELECTRA to meet new users' requirements, such as the risk assessment during the final phase of an uncontrolled re-entry, which can be combined with the computed risk for each country involved by impacts. The purpose of this paper is to provide an overview of the ELECTRA method and its main functionalities, and then to highlight these recent improvements.
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis); however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can easily be approximated by differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluate the efficiency and robustness of three iteration methods (Picard, Newton, and Newton-Krylov) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
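The matrix-free product at the heart of the Newton-Krylov approach can be sketched as follows: the Jacobian-vector product J(u)v is approximated by a finite difference of the nonlinear residual F, so the 19-point stencil matrix never has to be assembled. The toy residual below stands in for the discretized Richards' equation and is purely illustrative.

```python
# Jacobian-free Newton-Krylov building block: approximate J(u) @ v with one
# extra residual evaluation, J(u)v ~ (F(u + eps*v) - F(u)) / eps.

def jacobian_vector_product(F, u, v, eps=1e-7):
    Fu = F(u)
    Fv = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fv, Fu)]

def residual(u):
    # Toy nonlinear residual standing in for the discretized flow equations.
    return [u[0] ** 2, u[0] * u[1]]

# Exact Jacobian-vector product at u=(1,2), v=(1,1) is [2, 3].
jv = jacobian_vector_product(residual, [1.0, 2.0], [1.0, 1.0])
```

Each Krylov iteration pays for one such residual evaluation, which is the "additional cost" the abstract refers to.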
Methods for determination of inorganic substances in water and fluvial sediments
Fishman, Marvin J.; Friedman, Linda C.
1985-01-01
Chapter A1 of the laboratory manual contains methods used by the Geological Survey to analyze samples of water, suspended sediments, and bottom material for their content of inorganic constituents. Included are methods for determining the concentration of dissolved constituents in water, total recoverable and total concentrations of constituents in water-suspended sediment samples, and recoverable and total concentrations of constituents in samples of bottom material. Essential definitions are included in the introduction to the manual, along with a brief discussion of the use of significant figures in calculating and reporting analytical results. Quality control in the water-analysis laboratory is discussed, including accuracy and precision of analyses, the use of standard reference water samples, and the operation of an effective quality assurance program. Methods for sample preparation and pretreatment are also given. A brief discussion of the principles of the analytical techniques involved and their particular application to water and sediment analysis is presented. The analytical methods involving these techniques are arranged alphabetically according to constituent. For each method given, the general topics covered are application, principle of the method, interferences, apparatus and reagents required, a detailed description of the analytical procedure, reporting of results, units and significant figures, and analytical precision data, when available. More than 125 methods are given for the determination of 70 different inorganic constituents and physical properties of water, suspended sediment, and bottom material.
Measuring the degree of integration for an integrated service network
Ye, Chenglin; Browne, Gina; Grdisa, Valerie S; Beyene, Joseph; Thabane, Lehana
2012-01-01
Background Integration involves the coordination of services provided by autonomous agencies and improves the organization and delivery of multiple services for target patients. Current measures generally do not distinguish between agencies’ perception and expectation. We propose a method for quantifying the agencies’ service integration. Using the data from the Children’s Treatment Network (CTN), we aimed to measure the degree of integration for the CTN agencies in York and Simcoe. Theory and methods We quantified the integration by the agreement between perceived and expected levels of involvement and calculated four scores from different perspectives for each agency. We used the average score to measure the global network integration and examined the sensitivity of the global score. Results Most agencies’ integration scores were <65%. As measured by the agreement between every other agency’s perception and expectation, the overall integration of CTN in Simcoe and York was 44% (95% CI: 39%–49%) and 52% (95% CI: 48%–56%), respectively. The sensitivity analysis showed that the global scores were robust. Conclusion Our method extends existing measures of integration and possesses a good extent of validity. We can also apply the method in monitoring improvement and linking integration with other outcomes. PMID:23593050
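The agreement idea behind the scores can be sketched as follows. This is a hypothetical illustration: the study derives four perspective-specific scores per agency and a global average, while only the basic perceived-versus-expected agreement calculation is shown here, with invented involvement levels.

```python
# Hypothetical sketch of scoring integration as agreement between perceived
# and expected involvement levels (one of the four perspectives described).

def integration_score(perceived, expected):
    """perceived/expected: dicts partner agency -> involvement level."""
    matches = sum(1 for k in expected if perceived.get(k) == expected[k])
    return matches / len(expected)

perceived = {"A": 2, "B": 1, "C": 3, "D": 0}
expected = {"A": 2, "B": 2, "C": 3, "D": 1}
score = integration_score(perceived, expected)  # fraction of partners in agreement
```

Averaging such scores over all agencies gives a global network value of the kind reported (44% and 52% for the two networks studied).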
NASA Astrophysics Data System (ADS)
Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao
2016-06-01
Sub-gap density of states (DOS) is a key parameter affecting the electrical characteristics of semiconductor-material-based transistors in integrated circuits. Previous spectroscopic methodologies for DOS extraction include static methods, temperature-dependent spectroscopy and photonic spectroscopy. However, these can involve many assumptions and calculations, or introduce temperature or optical perturbations into the intrinsic distribution of the DOS along the bandgap of the material. A direct and simpler method is developed to extract the DOS distribution from amorphous oxide-based thin-film transistors (TFTs) based on dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors, such as temperature and laborious numerical analysis, than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge, in the order of 10^17-10^21/(cm^3·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling and the improvement of their electrical properties, thus broadening their applications.
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, which calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation.
In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
NASA Astrophysics Data System (ADS)
Bakkari, Karim; Fersi, Riadh; Kebir Hlil, El; Bessais, Lotfi; Thabet Mliki, Najeh
2018-03-01
First-principles calculations combining density functional theory and the full-potential linearized augmented plane wave (FP-LAPW) method are performed to investigate the electronic and magnetic structure of Pr2Co7 in its two polymorphic forms, (2:7 H) and (2:7 R), for the first time. This type of calculation was also performed for the PrCo5 and PrCo2 intermetallics. We have computed the valence density of states (DOS) separately for spin-up and spin-down states in order to investigate the electronic band structure. This is governed by the strong contribution of the partial DOS of the 3d-Co bands compared to the partial DOS of the 4f-Pr bands. Such a high ferromagnetic state is discussed in terms of the strong spin polarization observed in the total DOS. The magnetic moments carried by the Co and Pr atoms located in the several sites of all compounds are computed. These results indicate that cobalt atoms make the dominant contribution to the magnetic moments. The notable difference in the atomic moments of the Pr and Co atoms between different structural slabs is explained in terms of the magnetic characteristics of the PrCo2 and PrCo5 compounds and the local chemical environments of the Pr and Co atoms in the different structural slabs of Pr2Co7. From spin-polarized calculations we have simulated the 3d and 4f band populations to estimate the local magnetic moments. These results are in accordance with the magnetic moments calculated using the FP-LAPW method. In addition, the exchange interactions J_ij are calculated and used as input for M(T) simulations. Using the data obtained from the electronic structure calculations, the appropriate Padé table is applied to simulate the magnetization M(T) and to estimate the mean-field Curie temperature. We report fairly good agreement between the ab initio calculations of the magnetization and Curie temperature and the experimental data.
NASA Astrophysics Data System (ADS)
Sikarwar, Nidhi
The noise produced by the low bypass ratio turbofan engines used to power fighter aircraft is a problem for communities near military bases and for personnel working in close proximity to the aircraft. For example, carrier deck personnel are subject to noise exposure that can result in noise-induced hearing loss, which in turn results in over a billion dollars of disability payments by the Veterans Administration. Several methods have been proposed to reduce the jet noise at the source. These methods include microjet injection of air or water downstream of the jet exit, chevrons, and corrugated nozzle inserts. The last method involves the insertion of corrugated seals into the diverging section of a military-style convergent-divergent jet nozzle (to replace the existing seals). This has been shown to reduce both the broadband shock-associated noise and the mixing noise in the peak noise radiation direction. However, the original inserts were designed to be effective for a take-off condition where the jet is over-expanded. The nozzle performance would be expected to degrade at other conditions, such as in cruise at altitude. A new method has been proposed to achieve the same effects as corrugated seals, but using fluidic inserts. This involves injection of air, at relatively low pressures and total mass flow rates, into the diverging section of the nozzle. These "fluidic inserts" deflect the flow in the same way as the mechanical inserts. The fluidic inserts represent an active control method, since the injectors can be modified or turned off depending on the jet operating conditions. Noise reductions in the peak noise direction of 5 to 6 dB have been achieved, and broadband shock-associated noise is effectively suppressed. There are multiple parameters to be considered in the design of the fluidic inserts, including the number and location of the injectors and the pressures and mass flow rates to be used.
These could be optimized on an ad hoc basis with multiple experiments or numerical simulations. Alternatively, an inverse design method can be used. An adjoint optimization method can be used to achieve the optimum blowing rate. It is shown that the method works for both geometry optimization and active control of the flow in order to deflect the flow in desirable ways. An adjoint optimization method is described. It is used to determine the blowing distribution in the diverging section of a convergent-divergent nozzle that gives a desired pressure distribution in the nozzle. Both the direct and adjoint problems and their associated boundary conditions are developed. The adjoint method is used to determine the blowing distribution required to minimize the shock strength in the nozzle, to achieve a known target pressure, and to achieve close to an ideally expanded flow pressure. A multi-block structured solver is developed to calculate the flow solution and associated adjoint variables. Two- and three-dimensional calculations are performed for the internal and external nozzle domains. A two-step MacCormack scheme based on a predictor-corrector technique is used for some calculations. Four- and five-stage Runge-Kutta schemes are also used to march in pseudo-time. A modified Runge-Kutta scheme is used to accelerate the convergence to a steady state. Second-order artificial dissipation is added to stabilize the calculations. The steepest descent method is used for the optimization of the blowing velocity, after the gradients of the cost function with respect to the blowing velocity are calculated using the adjoint method. Several examples are given of the optimization of blowing using the adjoint method.
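The outer optimization loop described is ordinary steepest descent once the adjoint solve supplies the gradient. A self-contained toy sketch, with a quadratic cost standing in for the nozzle cost function and all names illustrative:

```python
# Generic steepest-descent loop of the kind described: in the actual
# application the gradient of the cost with respect to the control (the
# blowing velocities) comes from an adjoint solve; here a toy quadratic
# cost is used so the example is self-contained.

def steepest_descent(grad, x0, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy cost J(x) = sum (x_i - t_i)^2 with target t; its gradient is 2(x - t).
target = [1.0, -2.0, 0.5]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, target)]
xopt = steepest_descent(grad, [0.0, 0.0, 0.0])
```

The attraction of the adjoint approach is that one adjoint solve yields the whole gradient vector, regardless of how many blowing parameters there are.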
Systematic Approach to Calculate the Concentration of Chemical Species in Multi-Equilibrium Problems
ERIC Educational Resources Information Center
Baeza-Baeza, Juan Jose; Garcia-Alvarez-Coque, Maria Celia
2011-01-01
A general systematic approach is proposed for the numerical calculation of multi-equilibrium problems. The approach involves several steps: (i) the establishment of balances involving the chemical species in solution (e.g., mass balances, charge balance, and stoichiometric balance for the reaction products), (ii) the selection of the unknowns (the…
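A minimal numerical instance of the balance-based approach can be sketched for a weak monoprotic acid HA (illustrative values Ka = 1.8e-5, C = 0.1 M): the mass balance and the equilibrium expressions are combined into a single charge-balance residual in [H+], which is then solved numerically.

```python
# Balance-based equilibrium sketch: charge balance [H+] = [A-] + [OH-],
# with [A-] = Ka*C/(Ka + [H+]) from the mass balance and acid equilibrium,
# and [OH-] = Kw/[H+]. The residual is monotone in [H+], so bisection works.

def hydrogen_ion(Ka, C, Kw=1e-14, lo=1e-12, hi=1.0):
    def residual(h):
        # charge balance residual: [H+] - [A-] - [OH-]
        return h - Ka * C / (Ka + h) - Kw / h
    for _ in range(200):
        mid = (lo + hi) / 2
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

h = hydrogen_ion(Ka=1.8e-5, C=0.1)   # about 1.33e-3 M, i.e. pH ~ 2.9
```

Multi-equilibrium problems add one balance equation per species or reaction and solve the resulting system (e.g., with a multivariate Newton iteration) in the same spirit.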
2013-01-01
Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. The ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. An ROR < 1 indicated a larger beneficial effect in small trials. Small and large trials were compared in methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs < 1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257
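The ROR calculation can be sketched as follows, assuming simple inverse-variance pooling of log odds ratios within each size class (the study's exact meta-analytic model may differ); the 2x2 counts below are invented for illustration.

```python
# Sketch of a ratio-of-odds-ratios calculation: pool odds ratios separately
# for small and large trials (inverse-variance weights on the log scale)
# and take the ratio. ROR < 1 means small trials report larger benefits.
from math import log, exp

def pooled_log_or(trials):
    """trials: list of (a, b, c, d) = events/non-events, treatment/control."""
    num = den = 0.0
    for a, b, c, d in trials:
        log_or = log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # approximate variance of log OR
        num += log_or / var
        den += 1 / var
    return num / den

def ratio_of_odds_ratios(small_trials, large_trials):
    return exp(pooled_log_or(small_trials) - pooled_log_or(large_trials))

ror = ratio_of_odds_ratios([(5, 45, 10, 40)], [(50, 450, 60, 440)])
```

With these invented counts the small trial shows a stronger apparent benefit, so the ROR comes out below 1, mirroring the pooled 0.60 reported in the study.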
Filatov, Michael; Martínez, Todd J.; Kim, Kwang S.
2017-08-14
An extended variant of the spin-restricted ensemble-referenced Kohn-Sham (REKS) method, the REKS(4,4) method, designed to describe the ground electronic states of strongly multireference systems, is modified to enable the calculation of excited states within the time-independent variational formalism. The new method, the state-interaction state-averaged REKS(4,4), i.e., SI-SA-REKS(4,4), is capable of describing several excited states of a molecule involving double bond cleavage, polyradical character, or multiple chromophoric units. We demonstrate that the new method correctly describes the ground and the lowest singlet excited states of a molecule (ethylene) undergoing double bond cleavage. The applicability of the new method for excitonic states is illustrated with π stacked ethylene and tetracene dimers. We conclude that the new method can describe a wide range of multireference phenomena.
Stress-intensity factors for small surface and corner cracks in plates
NASA Technical Reports Server (NTRS)
Raju, I. S.; Atluri, S. N.; Newman, J. C., Jr.
1988-01-01
Three-dimensional finite-element and finite-element alternating methods were used to obtain the stress-intensity factors for small surface- and corner-cracked plates subjected to remote tension and bending loads. The crack-depth-to-crack-length ratios (a/c) ranged from 0.2 to 1 and the crack-depth-to-plate-thickness ratios (a/t) ranged from 0.05 to 0.2. The performance of the finite-element alternating method was studied on these crack configurations. A study of the computational effort involved in the finite-element alternating method showed that several crack configurations could be analyzed with a single rectangular mesh idealization, whereas the conventional finite-element method requires a different mesh for each configuration. The stress-intensity factors obtained with the finite-element alternating method agreed well (within 5 percent) with those calculated from the finite-element method with singularity elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filatov, Michael; Martínez, Todd J.; Kim, Kwang S.
An extended variant of the spin-restricted ensemble-referenced Kohn-Sham (REKS) method, the REKS(4,4) method, designed to describe the ground electronic states of strongly multireference systems is modified to enable calculation of excited states within the time-independent variational formalism. The new method, the state-interaction state-averaged REKS(4,4), i.e., SI-SA-REKS(4,4), is capable of describing several excited states of a molecule involving double bond cleavage, polyradical character, or multiple chromophoric units. We demonstrate that the new method correctly describes the ground and the lowest singlet excited states of a molecule (ethylene) undergoing double bond cleavage. The applicability of the new method for excitonic states is illustrated with π-stacked ethylene and tetracene dimers. We conclude that the new method can describe a wide range of multireference phenomena.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of tissues mostly through ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H., .OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may react chemically with neighboring species and with molecules of the medium; radiation chemistry is therefore of great importance in radiation biology. Because the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. At present, many simulations of radiation chemistry use the Independent Reaction Time (IRT) method, a very fast technique for calculating radiochemical yields, but one that does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion-controlled and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions agree with the predictions of classical reaction kinetics theory, an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
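To illustrate the IRT idea, the reaction time of an isolated pair undergoing a fully diffusion-controlled reaction (Smoluchowski boundary condition) can be sampled in closed form by inverting the pair's Green's-function reaction probability. This is a minimal sketch with names of our own choosing; the codes discussed in the paper also handle partially diffusion-controlled and interacting cases:

```python
import math
import random

def erfc_inv(y, tol=1e-12):
    """Invert erfc on (0, 1] by bisection (erfc is decreasing on [0, inf))."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_reaction_time(r0, R, D, rng):
    """IRT sample for a fully diffusion-controlled pair.

    r0: initial separation, R: reaction radius, D: relative diffusion
    coefficient. The pair ultimately reacts with probability W_inf = R/r0;
    conditional on reacting, the time solves
        W(t) = (R/r0) * erfc((r0 - R) / sqrt(4 D t)) = u.
    Returns math.inf when the pair escapes without reacting.
    """
    u = rng()
    w_inf = R / r0
    if u >= w_inf:                    # pair diffuses apart forever
        return math.inf
    x = erfc_inv(u / w_inf)           # (r0 - R) / sqrt(4 D t) = x
    return (r0 - R) ** 2 / (4.0 * D * x * x)
```

Averaging over many samples, the fraction of finite reaction times converges to R/r0, which is the classical Smoluchowski escape result.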
Reda, Ibrahim
2013-10-29
Implementations of the present disclosure involve an apparatus and method to measure the long-wave irradiance of the atmosphere or a long-wave source. The apparatus may involve a thermopile, a concentrator, and a temperature controller. The incoming long-wave irradiance may be reflected from the concentrator to a thermopile receiver located at the bottom of the concentrator to receive the reflected long-wave irradiance. In addition, the thermopile may be thermally connected to a temperature controller to control the device temperature. Through use of the apparatus, the long-wave irradiance of the atmosphere may be calculated from several measurements provided by the apparatus. In addition, the apparatus may provide an international standard of pyrgeometer calibration that is traceable to the International System of Units (SI) rather than to a blackbody atmospheric simulator.
Osipiuk, Jerzy; Mulligan, Rory; Bargassa, Monireh; Hamilton, John E.; Cunningham, Mark A.; Joachimiak, Andrzej
2012-01-01
The crystal structure of SO1698 protein from Shewanella oneidensis was determined by a SAD method and refined to 1.57 Å. The structure is a β sandwich that unexpectedly consists of two polypeptides; the N-terminal fragment includes residues 1–116, and the C-terminal one includes residues 117–125. Electron density also displayed the Lys-98 side chain covalently linked to Asp-116. The putative active site residues involved in self-cleavage were identified; point mutants were produced and characterized structurally and in a biochemical assay. Numerical simulations utilizing molecular dynamics and hybrid quantum/classical calculations suggest a mechanism involving activation of a water molecule coordinated by a catalytic aspartic acid. PMID:22493430
Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies, and no laboratory experiments were performed. One part of the research involved the study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for determination of thermal properties. Metal foam materials were also studied, in which both conduction and radiation heat transfer are present. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.
Selective Catalytic Combustion Sensors for Reactive Organic Analysis
NASA Technical Reports Server (NTRS)
Innes, W. B.
1971-01-01
Sensors involving a vanadia-alumina catalyst bed-thermocouple assembly satisfy requirements for simple, reproducible, and rapid continuous analysis of reactive organics. Responses generally increase with temperature up to 400 C and increase to a maximum with flow rate per catalyst volume. Selectivity decreases with temperature. Response time decreases with flow rate and increases with catalyst volume. At chosen optimum conditions, the calculated response, which is additive and linear, agrees better with photochemical reactivity than other methods for various automotive sources, and the response to vehicle exhaust is insensitive to flow rate. Applications to the measurement of total reactive organics in vehicle exhaust, as well as to gas chromatography detection, illustrate its utility. The approach appears generally applicable to high-thermal-effect reactions involving first-order kinetics.
First-principles calculations of novel materials
NASA Astrophysics Data System (ADS)
Sun, Jifeng
Computational materials simulation is becoming increasingly important as a branch of materials science. Depending on the scale of the system, there are many simulation methods: first-principles (ab initio) calculation, molecular dynamics, mesoscale methods, and continuum methods. Among them, first-principles calculation, which involves density functional theory (DFT) and is based on quantum mechanics, has become a reliable tool in condensed matter physics. DFT is a single-electron approximation for solving the many-body problem. Strictly speaking, both DFT and Hartree-Fock (HF) based approaches belong to first-principles calculation, since both aim at solving the Schrodinger equation of the many-body system using the self-consistent field (SCF) method and calculating ground state properties. The difference is that DFT approximates the exchange-correlation terms, in some cases introducing parameters from experiments or from other calculations, whereas HF calculates the exchange term exactly but neglects the correlation term entirely. In this dissertation, DFT-based first-principles calculations were performed for all of the novel materials introduced. Specifically, DFT and the rationale behind the related properties (e.g., electronic, optical, defect, thermoelectric, magnetic) are introduced in Chapter 2. Chapters 3 through 5 study several representative materials. In particular, a new semiconducting oxytelluride, Ba2TeO, is studied in Chapter 3. Our calculations indicate a direct semiconducting character with a band gap value of 2.43 eV, which agrees well with the optical experiment (˜2.93 eV). Moreover, the optical and defect properties of Ba2TeO are also systematically investigated with a view to understanding its potential as an optoelectronic or transparent conducting material.
We find relatively modest band masses for both electrons and holes, suggesting potential transport applications. Optical properties show an infrared absorption when doped, which could be useful for combining wavelength-filtering and transparent conducting functions. Furthermore, our defect calculations show that Ba2TeO is intrinsically p-type conducting under Ba-poor conditions. However, the spontaneous formation of donor defects may constrain the p-type transport properties and would need to be addressed to enable applications. Chapter 4 is mainly devoted to the thermoelectric properties of the well-known phase change material Ge2Sb2Te5 (GST). GST has been used in data storage for more than a decade because of its fast phase switching between the metastable crystalline (cubic) and amorphous phases. It also exhibits interesting thermoelectric properties, and we present a systematic study of the two crystalline phases (hexagonal and cubic) and the amorphous phase. We find a high Seebeck coefficient over a broad range of doping concentrations, for both n-type and p-type carriers, at and below room temperature (300 K), for both the cubic and amorphous phases. This finding is of interest for further experimental understanding of the thermoelectric properties and, ultimately, for device applications. Several magnetic materials involving lanthanide elements are reported in Chapter 5. First, the electronic and magnetic properties of the BaLn2O4 (Ln = La-Lu, Y) family of compounds are studied. The series has been synthesized for the first time in single crystalline form, using a molten metal flux. They crystallize in the CaV2O4 structure type with primitive orthorhombic symmetry (space group Pnma, #62). Our calculations show an insulating character with band gaps ranging from 3 eV to 4.5 eV for the three representative compounds BaLa2O4, BaGd2O4, and BaLu2O4. Moreover, the superexchange magnetism is also studied. Secondly, a strongly correlated cerium system is investigated.
As expected, we find a density of states of 15 states eV^-1 stemming from the Ce 4f orbitals at the Fermi energy, which indicates intermetallic heavy-fermion behavior. The Fermi surface calculation shows a nesting feature that might be useful for further understanding the antiferromagnetism. Thirdly, DFT calculations of another lanthanide oxide involving a transition element, LaMo16O44, are also presented. This material crystallizes in a complicated structure consisting of MoO6 magnetic clusters. The band structure calculations indicate a spin-polarized half-metal feature arising from the different crystallographic sites of Mo, since La occurs as trivalent with an empty f shell and thus contributes nothing to the magnetic moment. Last but not least, we studied the electronic properties of another newly found lanthanide-containing oxytelluride, Ba3Yb2O5Te. We find an insulating behavior with a direct band-gap value of 1.9 eV using the DFT+U methodology.
A priori mesh grading for the numerical calculation of the head-related transfer functions
Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr
2017-01-01
Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM as a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects including meshes of three human listeners and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A-priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception based predictions of sound-localization performance, and computational costs. PMID:28239186
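The six-elements-per-wavelength rule translates directly into a maximum element edge length per frequency, and grading lets that edge length grow with distance from the pinna. A small sketch of both ideas; the linear grading function and its parameters here are illustrative stand-ins, not the specific grading functions evaluated in the study:

```python
# Speed of sound in air (m/s) at roughly room temperature (assumed value).
C_SOUND = 343.0

def max_edge_length(f_hz, elems_per_wavelength=6):
    """Largest element edge length (m) satisfying the rule of thumb of
    `elems_per_wavelength` boundary elements per acoustic wavelength."""
    wavelength = C_SOUND / f_hz
    return wavelength / elems_per_wavelength

def graded_edge_length(dist_m, f_hz, d_ref=0.03, growth=4.0):
    """Target edge length for an a priori graded mesh: honor the
    elements-per-wavelength rule near the pinna (dist_m = 0) and let the
    elements grow linearly, up to `growth` times the base size, beyond
    the reference distance d_ref (hypothetical parameters)."""
    base = max_edge_length(f_hz)
    factor = 1.0 + (growth - 1.0) * min(dist_m / d_ref, 1.0)
    return base * factor
```

At 20 kHz the uniform rule demands edges of about 2.9 mm over the whole head, which is what makes full-range BEM so expensive; the graded mesh keeps that resolution only where the geometry matters acoustically.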
Energy distribution among reaction products. VI - F + H2, D2.
NASA Technical Reports Server (NTRS)
Polanyi, J. C.; Woodall, K. B.
1972-01-01
Study of the F + H2 reaction, which is of special theoretical interest since it is one of the simplest examples of an exothermic chemical reaction. The FH2 system involves only 11 electrons, and the computation of a potential-energy hypersurface to chemical accuracy may now be within the reach of ab initio calculations. The 'arrested relaxation' variant of the infrared chemiluminescence method is used to obtain the initial vibrational, rotational and translational energy distributions in the products of exothermic reactions.
Electron transmission through a class of anthracene aldehyde molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petreska, Irina, E-mail: irina.petreska@pmf.ukim.mk; Ohanesjan, Vladimir, E-mail: ohanesjan.vladimir@gmail.com; Pejov, Ljupco, E-mail: ljupcop@pmf.ukim.mk
2016-03-25
Transmission of electrons via metal-molecule-metal junctions involving rotor-stator anthracene aldehyde molecules is investigated. Two model barriers with input parameters evaluated from accurate ab initio calculations are proposed, and the transmission coefficients are obtained using the quasiclassical approximation. The transmission coefficients then enter the integral for the net current, utilizing Simmons' method. Conformational dependence of the tunneling processes is evident, and the presence of the side groups enhances the functionality of future single-molecule-based electronic devices.
Sensitivity, Specificity, PPV, and NPV for Predictive Biomarkers
2015-01-01
Molecularly targeted cancer drugs are often developed with companion diagnostics that attempt to identify which patients will have better outcome on the new drug than the control regimen. Such predictive biomarkers are playing an increasingly important role in precision oncology. For diagnostic tests, sensitivity, specificity, positive predictive value, and negative predictive value are usually used as performance measures. This paper discusses these indices for predictive biomarkers, provides methods for their calculation with survival or response endpoints, and describes assumptions involved in their use. PMID:26109105
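The basic index calculations can be sketched as follows. Because PPV and NPV depend on prevalence, a prevalence-adjusted form via Bayes' rule is included; function and variable names are our own:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 table of counts
    (tp/fp/fn/tn = true/false positives and negatives)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

def adjusted_predictive_values(sens, spec, prevalence):
    """PPV and NPV recomputed by Bayes' rule for a target population with
    a given prevalence, rather than the study-sample prevalence."""
    p = prevalence
    ppv = sens * p / (sens * p + (1.0 - spec) * (1.0 - p))
    npv = spec * (1.0 - p) / (spec * (1.0 - p) + (1.0 - sens) * p)
    return ppv, npv
```

When the supplied prevalence equals the sample prevalence, the adjusted values reduce to the raw-count PPV and NPV, which is a handy sanity check.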
Consequence Assessment Methods for Incidents Involving Releases From Liquefied Natural Gas Carriers
2004-05-13
…the downwind direction. The Thomas (1965) correlation is used to calculate flame length. Flame tilt is estimated using an empirical correlation from…
From TNO (1997):
• Thomas (1963) correlation for flame length
• For an experimental LNG pool fire of 16.8-m diameter, a mass burning flux of … m, flame length ranged from 50 to 78 m, and tilt angle from 27 to 35 degrees
From Rew (1996):
• Work included a review of recent developments in…
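The Thomas flame-length correlation referenced above has the well-known dimensionless form L/D = 42 [m″/(ρ_a √(gD))]^0.61 for a pool of diameter D, mass burning flux m″, and ambient air density ρ_a. A sketch follows; the burning-flux value used in the test is a hypothetical placeholder, since the excerpt truncates the experimental value:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
RHO_AIR = 1.2     # ambient air density, kg/m^3 (assumed)

def thomas_flame_length(diameter_m, burning_flux_kg_m2s):
    """Mean flame length (m) from the Thomas correlation for pool fires:
        L / D = 42 * (m'' / (rho_a * sqrt(g * D)))**0.61
    where m'' is the mass burning flux in kg/(m^2 s)."""
    x = burning_flux_kg_m2s / (RHO_AIR * math.sqrt(G * diameter_m))
    return 42.0 * diameter_m * x ** 0.61
```

With a flux on the order of 0.1 kg/(m^2 s), typical of large hydrocarbon pool fires, a 16.8-m pool yields a flame length of a few tens of metres, the same order as the 50-78 m range quoted in the excerpt (which also includes tilt and other effects).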
NASA Astrophysics Data System (ADS)
Britvin, Sergey N.; Rumyantsev, Andrey M.; Zobnina, Anastasia E.; Padkina, Marina V.
2017-02-01
Molecular structure of 1,4-diazabicyclo[3.2.1]octane, a parent ring of TAN1251 family of alkaloids, is herein characterized for the first time in comparison with the structure of nortropane (8-azabicyclo[3.2.1]octane), the parent framework of tropane ring system. The methods of study involve X-ray structural analysis, DFT geometry optimizations with infrared frequency calculations followed by natural bond orbital (NBO) analysis, and vibrational analysis of infrared spectrum.
Fatigue and fracture: Overview
NASA Technical Reports Server (NTRS)
Halford, G. R.
1984-01-01
A brief overview of the status of the fatigue and fracture programs is given. The programs involve the development of appropriate analytic material behavior models for cyclic stress-strain-temperature-time behavior, cyclic crack initiation, and cyclic crack propagation. The underlying thrust of these programs is the development and verification of workable engineering methods for calculating, in advance of service, the local cyclic stress-strain response at the critical life-governing location in hot section components, and the resultant crack initiation and crack growth lifetimes.
Domingues Franceschini, Marston Héracles; Bartholomeus, Harm; van Apeldoorn, Dirk; Suomalainen, Juha; Kooistra, Lammert
2017-10-02
The authors would like to correct Figure 13 and Table A2, as well as the text related to the data presented in both of them, as indicated below, considering that an error in the calculations involving Equation (2), described in the Section 2.8 of the Materials and Methods Section, resulted in the communication of incorrect values [...].
Research in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Murman, Earll M.
1987-01-01
The numerical integration of quasi-one-dimensional unsteady flow problems involving finite-rate chemistry is discussed; the problems are expressed in terms of conservative-form Euler and species conservation equations. Hypersonic viscous calculations for delta wing geometries are also examined. The conical Navier-Stokes equations model was selected in order to investigate the effects of viscous-inviscid interactions; the more complete three-dimensional model is beyond the available computing resources. The flux vector splitting method with van Leer's MUSCL differencing is being used. Preliminary results were computed for several conditions.
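As a minimal illustration of MUSCL differencing with van Leer's limiter, here it is applied to scalar linear advection on a periodic grid rather than to the Euler/species system of the report, so this is a deliberately simplified sketch:

```python
def van_leer_limiter(r):
    """van Leer's smooth flux limiter, phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + abs(r)) / (1.0 + abs(r))

def muscl_step(u, cfl):
    """One explicit step of u_t + a u_x = 0 (a > 0, periodic grid) using
    first-order upwind fluxes on MUSCL-reconstructed left interface states.
    cfl = a * dt / dx; the scheme is TVD for cfl <= 0.5."""
    n = len(u)
    eps = 1e-30
    uL = []                                   # left state at interface i+1/2
    for i in range(n):
        du_minus = u[i] - u[i - 1]            # periodic via Python indexing
        du_plus = u[(i + 1) % n] - u[i]
        r = du_minus / (du_plus + eps)        # slope ratio
        uL.append(u[i] + 0.5 * van_leer_limiter(r) * du_plus)
    # upwind update: the flux at interface i+1/2 is a * uL[i]
    return [u[i] - cfl * (uL[i] - uL[i - 1]) for i in range(n)]
```

Advecting a square wave shows the two properties that make limited MUSCL schemes attractive for chemistry-coupled flows: exact discrete conservation and no new extrema.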
Drama in Dynamics: Boom, Splash, and Speed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Netzloff, Heather Marie
2004-12-19
The full nature of chemistry and physics cannot be captured by static calculations alone. Dynamics calculations allow the simulation of time-dependent phenomena. This facilitates both comparisons with experimental data and the prediction and interpretation of details not easily obtainable from experiments. Simulations thus provide a direct link between theory and experiment, between microscopic details of a system and macroscopic observed properties. Many types of dynamics calculations exist. The most important distinction between the methods, and the decision of which method to use, can be described in terms of the size and type of molecule/reaction under consideration and the type and level of accuracy required in the final properties of interest. These considerations must be balanced against available computational codes and resources, as simulations that mimic "real life" may require many time steps. As indicated in the title, the theme of this thesis is dynamics. The goal is to utilize the best type of dynamics for the system under study while trying to perform dynamics in the most accurate way possible. For a quantum chemist, this involves some level of first-principles calculations by default. Very accurate calculations of small molecules and molecular systems are now possible with relatively high-level ab initio quantum chemistry. For example, a quantum chemical potential energy surface (PES) can be developed "on the fly" with dynamic reaction path (DRP) methods. In this way a classical trajectory is developed without prior knowledge of the PES. In order to treat solvation processes and the condensed phase, large numbers of molecules are required, especially in predicting bulk behavior.
The Effective Fragment Potential (EFP) method for solvation decreases the cost of a fully quantum mechanical calculation by dividing a chemical system into an ab initio region that contains the solute and an "effective fragment" region that contains the remaining solvent molecules. But, despite the reduced cost relative to fully QM calculations, the EFP method, due to its complex, QM-based potential, does require more computation time than simple interaction potentials, especially when the method is used for large scale molecular dynamics simulations. Thus, the EFP method was parallelized to facilitate these calculations within the quantum chemistry program GAMESS. The EFP method provides relative energies and structures that are in excellent agreement with the analogous fully quantum results for small water clusters. The ability of the method to predict bulk water properties with a comparable accuracy is assessed by performing EFP molecular dynamics simulations. Molecular dynamics simulations can provide properties that are directly comparable with experimental results, for example radial distribution functions. The molecular PES is a fundamental starting point for chemical reaction dynamics. Many methods can be used to obtain a PES; for example, assuming a global functional form for the PES or, as mentioned above, performing "on-the-fly" dynamics with ab initio or semi-empirical calculations at every molecular configuration. But as the size of the system grows, using electronic structure theory to build a PES and, therefore, study reaction dynamics becomes virtually impossible. The program Grow builds a PES as an interpolation of ab initio data; the goal is to produce an accurate PES with the smallest number of ab initio calculations. The Grow-GAMESS interface was developed to obtain the ab initio data from GAMESS. Classical or quantum dynamics can be performed on the resulting surface.
The interface includes the novel capability to build multi-reference PESs; these types of calculations are applicable to problems ranging from atmospheric chemistry to photochemical reaction mechanisms in organic and inorganic chemistry to fundamental biological phenomena such as photosynthesis.
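The PES interpolation used by Grow is, at heart, a modified Shepard scheme: the surface is a distance-weighted average of local expansions around the ab initio data points. The bare-bones sketch below uses zeroth-order "expansions" (single energies) for clarity; the real method uses second-order Taylor expansions in internal coordinates, and all names here are our own:

```python
def shepard_pes(x, data_points, power=4):
    """Inverse-distance-weighted (Shepard) interpolation of energies.

    data_points: list of (geometry_vector, energy) pairs, e.g. from
    ab initio calculations; x: geometry at which the PES is evaluated.
    Weights decay as distance**(-power), so nearby data dominate.
    """
    weights, values = [], []
    for xi, ei in data_points:
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        if d2 == 0.0:                  # evaluation exactly on a data point
            return ei
        weights.append(d2 ** (-power / 2.0))
        values.append(ei)
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, values)) / total
```

The appeal for "grow as you go" PES construction is that each new ab initio point improves the surface locally without any global refit.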
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability, with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). Obtaining accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
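The kernel splitting at the heart of multilevel summation can be illustrated for the 1/r Coulomb kernel: a smoothing radius a separates a short-range part that vanishes beyond a from a slowly varying remainder suitable for grid interpolation. A minimal two-level sketch; the particular C^1 even polynomial used for smoothing is one common choice, not necessarily the one in this article:

```python
def smoothed_kernel(r, a):
    """Slowly varying long-range part of 1/r: equals 1/r for r >= a and a
    C^1 even polynomial in (r/a)^2 inside the smoothing radius a."""
    if r >= a:
        return 1.0 / r
    s = (r / a) ** 2
    # value and first derivative match 1/r at r = a
    return (1.875 - 1.25 * s + 0.375 * s * s) / a

def split_kernel(r, a):
    """Split 1/r into (short-range, long-range) parts. The short-range part
    vanishes identically beyond a, so it can be summed with a cutoff; the
    long-range part is smooth enough to interpolate from a coarse grid."""
    long_part = smoothed_kernel(r, a)
    short_part = 1.0 / r - long_part
    return short_part, long_part
```

In the full method this splitting is applied recursively, each level carrying a smoother, longer-range remainder onto a coarser grid; the B-spline work in the article concerns how accurately those grid interpolations represent the smooth parts.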
Areal Feature Matching Based on Similarity Using Critic Method
NASA Astrophysics Data System (ADS)
Kim, J.; Yu, K.
2015-10-01
In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, i.e., matching a simple entity with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to the two datasets using polygon pairs for which the building name is the same. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate the inclusion function between such polygons; when its value exceeds 0.4, the polygons are aggregated into single polygons using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs for which the shape similarity exceeds 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map in South Korea. The visual evaluation showed that polygons were well detected by the proposed method, and the statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.
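The CRITIC weighting used to combine the similarity measures can be sketched as follows: each criterion's weight is proportional to its standard deviation (contrast intensity) times its total lack of correlation with the other criteria (conflict). Variable names are ours, and the sketch assumes every criterion column is non-constant:

```python
import math

def critic_weights(matrix):
    """CRITIC objective weights for the columns (criteria) of a decision
    matrix (rows = alternatives). Each criterion j gets information content
    C_j = sigma_j * sum_k (1 - r_jk), where sigma_j is the standard
    deviation of the min-max-scaled column and r_jk the Pearson correlation
    between columns j and k; the weights are the normalized C_j."""
    m = len(matrix[0])
    cols = []
    for j in range(m):                       # min-max scale each column
        col = [row[j] for row in matrix]
        lo, hi = min(col), max(col)
        cols.append([(v - lo) / (hi - lo) for v in col])

    def mean(xs):
        return sum(xs) / len(xs)

    def std(xs):
        mu = mean(xs)
        return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

    def corr(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (len(xs) * std(xs) * std(ys))

    C = [std(cols[j]) * sum(1.0 - corr(cols[j], cols[k]) for k in range(m))
         for j in range(m)]
    total = sum(C)
    return [c / total for c in C]
```

A criterion that is both highly variable and weakly correlated with the others (here it would be one of the three similarity measures) receives the largest weight, which is exactly the behavior the matching pipeline relies on.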