Sample records for computer model calculations

  1. Vehicle - Bridge interaction, comparison of two computing models

    NASA Astrophysics Data System (ADS)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents it as a lumped-mass model with one degree of freedom. The mid-span dynamic deflections of the bridge are calculated for both computing models, and the results are mutually compared and quantitatively evaluated.
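
    The lumped-mass variant lends itself to a compact illustration: a single-degree-of-freedom oscillator driven by the first-mode projection of a constant force crossing the span. The sketch below makes that concrete; all parameter values are invented for illustration and are not taken from the paper.

      # Hypothetical 1-DOF lumped-mass bridge under a moving constant force.
      import numpy as np
      from scipy.integrate import solve_ivp

      L_span = 30.0        # span length [m] (assumed)
      m, k = 1.2e4, 2.0e6  # modal mass [kg] and stiffness [N/m] (assumed)
      zeta = 0.02          # damping ratio (assumed)
      c = 2.0 * zeta * np.sqrt(k * m)
      P, v = 1.0e5, 20.0   # vehicle weight [N] and speed [m/s] (assumed)

      def rhs(t, y):
          x, xdot = y
          # First-mode projection of a constant force moving across the span
          F = P * np.sin(np.pi * v * t / L_span) if v * t <= L_span else 0.0
          return [xdot, (F - c * xdot - k * x) / m]

      sol = solve_ivp(rhs, (0.0, L_span / v), [0.0, 0.0], max_step=1e-3)
      print("max mid-span deflection [m]:", sol.y[0].max())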

  2. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In dendritic growth simulation, computational efficiency and attainable problem scale strongly affect the usefulness of the three-dimensional phase-field model. Seeking a high-performance calculation method that improves computational efficiency and expands the problem scale is therefore of great significance to research on material microstructure. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to perform quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy under coupled multi-physical processes. The acceleration achieved by different numbers of GPU nodes at different calculation scales is explored. On the foundation of the multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication, and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculations show that the multi-GPU model improves the computational efficiency of the three-dimensional phase-field model markedly, reaching 13 times the speed of a single GPU, and that the problem scale can be expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI communication with GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
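
    The overlap scheme the authors propose can be sketched in mpi4py as a Python stand-in for their MPI+CUDA code: post non-blocking halo exchanges, update the interior while messages are in flight, then finish the edges. Everything below (array sizes, the diffusion-style stencil) is illustrative, not the paper's kernel.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      up, down = (rank - 1) % size, (rank + 1) % size

      n = 1024
      phi = np.random.rand(n + 2)                  # local slab + 2 ghost cells
      send_lo, send_hi = phi[1:2].copy(), phi[-2:-1].copy()
      recv_lo, recv_hi = np.empty(1), np.empty(1)

      # 1. Start the ghost-cell exchange without blocking.
      reqs = [comm.Isend(send_lo, dest=up),   comm.Isend(send_hi, dest=down),
              comm.Irecv(recv_lo, source=up), comm.Irecv(recv_hi, source=down)]

      # 2. Update interior points while the messages are in flight.
      interior = phi[2:-2] + 0.1 * (phi[1:-3] - 2.0 * phi[2:-2] + phi[3:-1])

      # 3. Finish communication, then update the two edge points.
      MPI.Request.Waitall(reqs)
      edge_lo = phi[1] + 0.1 * (recv_lo[0] - 2.0 * phi[1] + phi[2])
      edge_hi = phi[-2] + 0.1 * (phi[-3] - 2.0 * phi[-2] + recv_hi[0])
      phi[2:-2], phi[1], phi[-2] = interior, edge_lo, edge_hi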

  3. Computational comparison of quantum-mechanical models for multistep direct reactions

    NASA Astrophysics Data System (ADS)

    Koning, A. J.; Akkermans, J. M.

    1993-02-01

    We have carried out a computational comparison of all existing quantum-mechanical models for multistep direct (MSD) reactions. The various MSD models, including the so-called Feshbach-Kerman-Koonin, Tamura-Udagawa-Lenske and Nishioka-Yoshida-Weidenmüller models, have been implemented in a single computer system. All model calculations thus use the same set of parameters and the same numerical techniques; only one adjustable parameter is employed. The computational results have been compared with experimental energy spectra and angular distributions for several nuclear reactions, namely, 90Zr(p,p') at 80 MeV, 209Bi(p,p') at 62 MeV, and 93Nb(n,n') at 25.7 MeV. In addition, the results have been compared with the Kalbach systematics and with semiclassical exciton model calculations. All quantum MSD models provide a good fit to the experimental data. In addition, they reproduce the systematics very well and are clearly better than semiclassical model calculations. We furthermore show that the calculated predictions do not differ very strongly between the various quantum MSD models, leading to the conclusion that the simplest MSD model (the Feshbach-Kerman-Koonin model) is adequate for the analysis of experimental data.

  4. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
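
    The explicit MacCormack scheme named here is a two-step predictor-corrector; a minimal 1-D linear-advection version (not VNAP2 itself, which applies the scheme to the 2-D Navier-Stokes equations) looks like this:

      import numpy as np

      nx, a = 200, 1.0                        # grid points, wave speed
      dx = 1.0 / nx
      dt = 0.5 * dx / a                       # CFL number of 0.5
      x = np.linspace(0.0, 1.0, nx)
      u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial pulse, periodic domain

      for _ in range(100):
          # Predictor: forward difference
          up = u - a * dt / dx * (np.roll(u, -1) - u)
          # Corrector: backward difference on the predicted field
          u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))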

  5. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  6. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  7. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    NASA Technical Reports Server (NTRS)

    Glass, Christopher E.

    1990-01-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  8. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    NASA Astrophysics Data System (ADS)

    Glass, Christopher E.

    1990-08-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  9. Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts

    PubMed Central

    Wang, Xianlong; Wang, Chengfei; Zhao, Hui

    2012-01-01

    Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to inadequacy of the molecular geometry prediction, structures determined by single-crystal X-ray diffraction were used to build the isolated-molecule models for calculating the chemical shifts. The results were compared with those obtained using geometries calculated at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ by tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate for calculating the chemical shifts, while an accurate molecular geometry is more critical. PMID:23203134
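
    The paper's two error measures reduce to a linear fit between computed shieldings and observed shifts plus a root-mean-square difference; schematically (the four data points below are placeholders, not values from the study):

      import numpy as np

      sigma_calc = np.array([510.2, 495.8, 502.3, 488.1])  # shieldings [ppm]
      delta_exp  = np.array([0.0,   15.1,   8.0,  22.5])   # exp. shifts [ppm]

      # Shielding and shift are anti-correlated: delta ~ slope*sigma + intercept
      slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)
      delta_pred = slope * sigma_calc + intercept
      rmsd = np.sqrt(np.mean((delta_pred - delta_exp) ** 2))
      print(f"slope={slope:.3f}, RMSD={rmsd:.2f} ppm")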

  10. 75 FR 75961 - Notice of Implementation of the Wind Erosion Prediction System for Soil Erodibility System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...

  11. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
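
    A common way to capture the dominant input-output relationships the abstract mentions is a truncated singular value decomposition of model snapshots; the sketch below shows that generic step only (the paper's coupled-physics reduction is more elaborate).

      import numpy as np

      rng = np.random.default_rng(0)
      modes = rng.standard_normal((5000, 5))         # 5 hidden directions
      snapshots = (modes @ rng.standard_normal((5, 40))
                   + 0.01 * rng.standard_normal((5000, 40)))

      U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1    # retained rank
      basis = U[:, :r]

      y_full = snapshots @ rng.standard_normal(40)   # a new full-order result
      y_rom = basis @ (basis.T @ y_full)             # rank-r reconstruction
      print(r, np.linalg.norm(y_full - y_rom) / np.linalg.norm(y_full))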

  12. Investigations on the sensitivity of the computer code TURBO-2D

    NASA Astrophysics Data System (ADS)

    Amon, B.

    1994-12-01

    The two-dimensional computer model TURBO-2D for the calculation of two-phase flow was used to calculate the cold injection of fuel into a model chamber. The influence of the input parameters on the sensitivity of the obtained results was investigated. In addition, calculations were performed using experimental injection pressure data and the corresponding averaged injection parameters, and the results were compared.

  13. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Nigg, D. W.; Wheeler, F. J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level are computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.

  14. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level are computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.

  15. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE PAGES

    Dytrych, T.; Maris, P.; Launey, K. D.; ...

    2016-06-22

    We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  16. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dytrych, T.; Maris, Pieter; Launey, K. D.

    2016-06-09

    We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  17. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose: To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods: The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using computer-scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results: The more complex the calculated IOL, the lower the residual wavefront error. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may make the method practicable for integration into a device. Conclusions: The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by calculating customized aspheric IOLs.
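
    The "real ray tracing using Snell's law" step has a standard vector form; a minimal sketch (the surface normal, ray direction, and indices below are placeholders, not data from the model):

      import numpy as np

      def refract(d, n, n1, n2):
          """Refract unit direction d at a surface with unit normal n (n1 -> n2).
          Returns None on total internal reflection."""
          mu = n1 / n2
          cos_i = -np.dot(n, d)
          sin2_t = mu**2 * (1.0 - cos_i**2)
          if sin2_t > 1.0:
              return None
          cos_t = np.sqrt(1.0 - sin2_t)
          return mu * d + (mu * cos_i - cos_t) * n

      d = np.array([0.0, np.sin(0.2), -np.cos(0.2)])  # incoming ray direction
      n = np.array([0.0, 0.0, 1.0])                   # surface normal
      print(refract(d, n, 1.0, 1.336))                # air into aqueous humor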

  18. Accelerating three-dimensional FDTD calculations on GPU clusters for electromagnetic field simulation.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2012-01-01

    Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use over 100 GB of GPU memory.
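
    The kernel being distributed across GPUs is the Yee update; its 1-D serial skeleton is short (sizes, source, and the free-space medium here are illustrative, not the anatomical model):

      import numpy as np

      nz, nsteps = 2000, 1000
      c0, dz = 3.0e8, 1.0e-3
      dt = 0.5 * dz / c0                      # Courant-stable time step
      eta0 = 376.73                           # free-space impedance [ohm]
      Ex, Hy = np.zeros(nz), np.zeros(nz - 1)

      for n in range(nsteps):
          Hy += (c0 * dt / dz) * (Ex[1:] - Ex[:-1]) / eta0
          Ex[1:-1] += (c0 * dt / dz) * (Hy[1:] - Hy[:-1]) * eta0
          Ex[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source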

  19. Survey of Turbulence Models for the Computation of Turbulent Jet Flow and Noise

    NASA Technical Reports Server (NTRS)

    Nallasamy, N.

    1999-01-01

    The report presents an overview of jet noise computation utilizing the computational fluid dynamic solution of the turbulent jet flow field. The jet flow solution obtained with an appropriate turbulence model provides the turbulence characteristics needed for the computation of jet mixing noise. A brief account of turbulence models that are relevant for jet noise computation is presented. The jet flow solutions that have been directly used to calculate jet noise are first reviewed. Then, the turbulent jet flow studies that compute the turbulence characteristics usable for noise calculations are summarized. In particular, flow solutions obtained with the k-ε model, the algebraic Reynolds stress model, and the Reynolds stress transport equation model are reviewed. Since small-scale jet mixing noise predictions can be improved by utilizing anisotropic turbulence characteristics, turbulence models that can provide the Reynolds stress components must now be considered for jet flow computations. In this regard, algebraic stress models and Reynolds stress transport models are good candidates. Reynolds stress transport models involve more modeling and computational effort and time compared to algebraic stress models. Hence, it is recommended that an algebraic Reynolds stress model (ASM) be implemented in flow solvers to compute the Reynolds stress components.

  20. Computational model for fuel component supply into a combustion chamber of LRE

    NASA Astrophysics Data System (ADS)

    Teterev, A. V.; Mandrik, P. A.; Rudak, L. V.; Misyuchenko, N. I.

    2017-12-01

    A 2D-3D computational model for calculating the flow inside the jet injectors that feed fuel components to the combustion chamber of a liquid rocket engine is described. The model is based on the gas-dynamic calculation of a compressible medium. The model software provides calculation of both one- and two-component injectors. Flow simulation in two-component injectors is realized using a scheme of separate supply of “gas-gas” or “gas-liquid” fuel components. An algorithm for converting a continuous liquid medium into a “cloud” of drops is described. Application areas of the developed model are discussed, along with the results of 2D simulations of injectors used to obtain correction factors for the fuel-supply calculation formulas.

  1. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
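
    The simplest member of the sampler family BCM implements is random-walk Metropolis; a self-contained sketch of that general algorithm (this is not BCM's interface, and the toy posterior is invented):

      import numpy as np

      def log_post(theta):
          return -0.5 * np.sum((theta - 1.0) ** 2)   # toy Gaussian posterior

      rng = np.random.default_rng(1)
      theta, lp = np.zeros(2), log_post(np.zeros(2))
      samples = []
      for _ in range(10000):
          prop = theta + 0.5 * rng.standard_normal(2)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept test
              theta, lp = prop, lp_prop
          samples.append(theta)
      print(np.mean(samples, axis=0))                # ~ [1, 1]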

  2. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low-precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact computation in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low-precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided the inexactness is restricted to the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.

  3. On the computation of molecular surface correlations for protein docking using fourier techniques.

    PubMed

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
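
    The circular-versus-linear distinction is easy to see numerically: an unpadded FFT product wraps around, while padding both signals to length N1 + N2 - 1 recovers the linear correlation. A small sketch:

      import numpy as np

      a = np.array([1.0, 2.0, 3.0, 4.0])
      b = np.array([1.0, 0.0, 1.0, 0.0])

      circular = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

      n = len(a) + len(b) - 1                  # padded length: no wrap-around
      linear = np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n))).real

      print(circular)   # contains wrapped values
      print(linear)     # matches np.correlate(a, b, "full") up to ordering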

  4. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  5. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    NASA Astrophysics Data System (ADS)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired; gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A benchmark calculation demonstrates the improved agreement of the total inventory of the chemical elements included in the RMC fuel model with an ORIGEN-S calculation. ORIGEN-S is the Oak Ridge isotope generation and depletion computer program. The Gibbs energy minimizer requires a chemical database containing coefficients from which the Gibbs energy of pure compounds, gas and liquid mixtures, and solid solutions can be calculated. The RMC model of irradiated uranium dioxide fuel has been converted into the required format. The Gibbs energy minimizer has been incorporated into a new model of fission-product vaporization from the fuel surface. Release fractions calculated with the new code have been compared to results calculated with SOURCE IST 2.0P11 and to results of tests used in the validation of SOURCE 2.0. The new code shows improvements in agreement with experimental releases for a number of nuclides. Of particular significance is the better agreement between experimental and calculated release fractions for 140La. The improved agreement reflects the inclusion in the RMC model of the solubility of lanthanum (III) oxide (La2O3) in the fuel matrix. Calculated lanthanide release fractions from earlier computer programs were a challenge to environmental qualification analysis of equipment for some accident scenarios. The new prototype computer program would alleviate this concern.
    Keywords: Nuclear Engineering; Material Science; Thermodynamics; Radioactive Material; Gibbs Energy Minimization; Actinide Generation and Depletion; Fission-Product Generation and Depletion.

  6. VNAP2: A Computer Program for Computation of Two-dimensional, Time-dependent, Compressible, Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Cline, M. C.

    1981-01-01

    A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.

  7. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as functions of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  8. Analytical effective tensor for flow-through composites

    DOEpatents

    Sviercoski, Rosangela De Fatima [Los Alamos, NM]

    2012-06-19

    A machine, method, and computer-usable medium for modeling the average flow of a substance through a composite material. The modeling includes an analytical calculation of an effective tensor K^a suitable for use with a variety of media. The analytical calculation corresponds to an approximation to the tensor K and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle in a defined Cartesian system, then using this angle in a rotation formula to compute the off-diagonal values and determine their sign.

  9. Validation of the MCNP computational model for neutron flux distribution with the neutron activation analysis measurement

    NASA Astrophysics Data System (ADS)

    Tiyapun, K.; Chimtin, M.; Munsorn, S.; Somchit, S.

    2015-05-01

    The objective of this work is to demonstrate a method for validating the predictions of calculation methods for the neutron flux distribution in the irradiation tubes of the TRIGA research reactor (TRR-1/M1) using an MCNP computer model. The reaction rates used in the experiment include the 27Al(n, α)24Na and 197Au(n, γ)198Au reactions. Aluminium (99.9 wt%) and gold (0.1 wt%) foils, as well as gold foils covered with cadmium, were irradiated in 9 locations in the core, referred to as CT, C8, C12, F3, F12, F22, F29, G5, and G33. The experimental results were compared to calculations performed using MCNP with a detailed geometrical model of the reactor core. The experimental and calculated normalized reaction rates in the reactor core are in good agreement for both reactions, showing that the material and geometrical properties of the reactor core are modelled very well. The results indicated that the difference between the experimental measurements and the calculation using the MCNP geometrical model was below 10%. In conclusion, the MCNP computational model used to calculate the neutron flux and reaction rate distributions in the reactor core can be used for other reactor core parameters, including neutron spectra, dose rates, power peaking factors, and optimization of research reactor utilization in the future, with confidence in the accuracy and reliability of the calculation.

  10. Discrete element weld model, phase 2

    NASA Technical Reports Server (NTRS)

    Prakash, C.; Samonds, M.; Singhal, A. K.

    1987-01-01

    A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.

  11. SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge

    USGS Publications Warehouse

    Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.

    2010-01-01

    A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
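
    The heart of a Thornthwaite-Mather-style balance is one daily bookkeeping step: add precipitation, subtract evapotranspiration, and count storage above field capacity as recharge. A schematic version only (SWB's actual code also handles interception, snowmelt, and runoff routing):

      def swb_daily(storage, capacity, precip, pet):
          """One daily step; all quantities in consistent depth units (mm)."""
          available = storage + precip
          et = min(pet, available)        # actual ET limited by available water
          storage = available - et
          recharge = max(0.0, storage - capacity)  # surplus becomes recharge
          return min(storage, capacity), recharge

      s, r = swb_daily(storage=80.0, capacity=100.0, precip=35.0, pet=5.0)
      print(s, r)   # 100.0 mm retained, 10.0 mm of recharge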

  12. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  13. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    NASA Technical Reports Server (NTRS)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.

  14. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.

  15. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  16. Advanced earth observation spacecraft computer-aided design software: Technical, user and programmer guide

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.; Krauze, L. D.

    1983-01-01

    NASA's IDEAS computer-aided design system is a tool for interactive preliminary design and analysis of Large Space Systems (LSS). Nine analysis modules were either modified or created. These modules include the capabilities of automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.

  17. Computing the Power-Density Spectrum for an Engineering Model

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1982-01-01

    A computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
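
    That algorithm fits in a few lines: estimate the autocovariance, taper it with a lag window, and transform. A sketch with synthetic data standing in for an ACSL variable; normalization conventions vary.

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.standard_normal(4096)
      x = x - x.mean()

      m = 256                                  # maximum lag retained
      acov = np.array([np.dot(x[:x.size - k], x[k:]) / x.size
                       for k in range(m)])
      window = np.hanning(2 * m)[m:]           # one-sided smoothing window
      pds = 2.0 * np.fft.rfft(acov * window).real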

  18. CELSS scenario analysis: Breakeven calculations

    NASA Technical Reports Server (NTRS)

    Mason, R. M.

    1980-01-01

    A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.

  19. The DART dispersion analysis research tool: A mechanistic model for predicting fission-product-induced swelling of aluminum dispersion fuels. User's guide for mainframe, workstation, and personal computer applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.

    1995-08-01

    This report describes the primary physical models that form the basis of the DART mechanistic computer model for calculating fission-product-induced swelling of aluminum dispersion fuels; the calculated results are compared with test data. In addition, DART calculates irradiation-induced changes in the thermal conductivity of the dispersion fuel, as well as fuel restructuring due to aluminum-fuel reaction, amorphization, and recrystallization. Input instructions for execution on mainframe, workstation, and personal computers are provided, as is a description of DART output. The theory of fission gas behavior and its effect on fuel swelling is discussed. The behavior of these fission products in both crystalline and amorphous fuel and in the presence of irradiation-induced recrystallization and crystalline-to-amorphous phase change phenomena is presented, as are models for these irradiation-induced processes.

  20. On some methods for improving time of reachability sets computation for the dynamic system control problem

    NASA Astrophysics Data System (ADS)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms operating on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables that demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  1. Comparison of model results transporting the odd nitrogen family with results transporting separate odd nitrogen species

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Jackman, Charles H.; Stolarski, Richard S.

    1989-01-01

    A fast two-dimensional residual circulation stratospheric family transport model, designed to minimize computer requirements, is developed. The model was used to calculate the ambient and perturbed atmospheres in which odd nitrogen species are transported as a family, and the results were compared with calculations in which HNO3, N2O5, ClONO2, and HO2NO2 are transported separately. It was found that ozone distributions computed by the two models for a present-day atmosphere are nearly identical. Good agreement was also found between calculated species concentrations and the ozone response, indicating the general applicability of the odd-nitrogen family approximations.

  2. Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2004-01-01

    We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  3. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.

  4. Computer program for Stirling engine performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.

    1983-01-01

    The thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer to support its development as a possible alternative to the automobile spark ignition engine. The computer model is documented. The documentation includes a user's manual, symbols list, a test case, comparison of model predictions with test results, and a description of the analytical equations used in the model.

  5. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  6. IAQ MODEL FOR WINDOWS - RISK VERSION 1.0 USER MANUAL

    EPA Science Inventory

    The manual describes the use of the computer model, RISK, to calculate individual exposure to indoor air pollutants from sources. The model calculates exposure due to individual, as opposed to population, activity patterns and source use. The model also provides the capability to...

  7. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and with available experimental results.

  8. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux-trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-staged flux-trapping generator is simulated using this computer code. Good agreement is achieved in comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux-trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
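
    Once the zero-dimensional model prescribes the generator inductance L(t), the equivalent-circuit idea reduces to one flux-balance ODE, d[(Lg + Lload)·i]/dt = -R·i. A sketch with invented parameters (the paper's L(t), loss coefficients, and staging are more detailed):

      import numpy as np
      from scipy.integrate import solve_ivp

      L0, L_load, R = 5.0e-6, 0.1e-6, 1.0e-3   # [H], [H], [ohm] (assumed)
      T = 20.0e-6                               # armature burn time [s]

      def L_gen(t):                             # swept-out generator inductance
          return L0 * max(1.0 - t / T, 0.02)

      def dLdt(t):
          return -L0 / T if t < 0.98 * T else 0.0

      def rhs(t, y):
          # d/dt[(L_gen + L_load) i] + R i = 0
          return [-(dLdt(t) + R) * y[0] / (L_gen(t) + L_load)]

      sol = solve_ivp(rhs, (0.0, T), [10.0e3], max_step=T / 2000)
      print("current gain:", sol.y[0, -1] / sol.y[0, 0])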

  9. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.

  10. Evaluation of protective shielding thickness for diagnostic radiology rooms: theory and computer simulation.

    PubMed

    Costa, Paulo R; Caldas, Linda V E

    2002-01-01

    This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves for polychromatic radiation at a specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by applying Archer's model to the computer-simulated data. The parameters for the best fit to the model using primary and secondary radiation data from different radiographic procedures were determined, resulting in an optimized model for shielding calculation for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
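
    Archer's model gives broad-beam transmission in closed form and inverts analytically for barrier thickness. A sketch (the alpha, beta, gamma fitting coefficients below are placeholders; real values depend on kVp, waveform, and filtration):

      import numpy as np

      def transmission(x, a, b, g):
          """Archer model: transmission through a barrier of thickness x."""
          return ((1.0 + b / a) * np.exp(a * g * x) - b / a) ** (-1.0 / g)

      def thickness(B, a, b, g):
          """Invert the Archer model for the thickness giving transmission B."""
          return np.log((B ** (-g) + b / a) / (1.0 + b / a)) / (a * g)

      a, b, g = 2.3, 15.9, 0.55            # illustrative fit coefficients
      x = thickness(1.0e-3, a, b, g)       # barrier for a 1000-fold reduction
      print(x, transmission(x, a, b, g))   # round-trip check returns 1.0e-3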

  11. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
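
    A minimal sketch of the idea: regress measured values on model predictions; a slope different from 1 or a nonzero intercept indicates systematic error, while the residual scatter estimates the random error. The data below are synthetic.

```python
import numpy as np
from scipy import stats

# Regress measured rolling force on model predictions (synthetic data).
rng = np.random.default_rng(0)
predicted = rng.uniform(5, 25, 200)                      # model forces, MN
measured = 1.03 * predicted - 0.4 + rng.normal(0, 0.5, 200)

res = stats.linregress(predicted, measured)
print(f"slope={res.slope:.3f} (ideal 1), intercept={res.intercept:.3f} (ideal 0)")
print(f"R^2={res.rvalue**2:.3f}")  # a simple predictive-ability characteristic
```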

  12. PHYSICOCHEMICAL PROPERTY CALCULATIONS

    EPA Science Inventory

    Computer models have been developed to estimate a wide range of physical-chemical properties from molecular structure. The SPARC modeling system approaches calculations as site specific reactions (pKa, hydrolysis, hydration) and `whole molecule' properties (vapor pressure, boilin...

  13. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture gamma dose is computed using a simple point-kernel single-scatter method.

  14. Computer Series, 37: Bits and Pieces, 14.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1983-01-01

    Thirteen computer/calculator programs (available from authors) are described. These include: representation of molecules as 3-D models; animated 3-D graphical display of line drawings of molecules; principles of Fourier-transform nuclear magnetic resonance; tutorial program for pH calculation; balancing chemical reactions using a hand-held…

  15. Ridge: a computer program for calculating ridge regression estimates

    Treesearch

    Donald E. Hilt; Donald W. Seegrist

    1977-01-01

    Least-squares coefficients for multiple-regression models may be unstable when the independent variables are highly correlated. Ridge regression is a biased estimation procedure that produces stable estimates of the coefficients. Ridge regression is discussed, and a computer program for calculating the ridge coefficients is presented.
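
    The core calculation is the ridge estimator β̂ = (XᵀX + kI)⁻¹Xᵀy, where the penalty k stabilizes the coefficients when the columns of X are nearly collinear. A short sketch follows (not the original FORTRAN program, just the central formula):

```python
import numpy as np

# Ridge estimator: beta = (X'X + k I)^-1 X'y.
def ridge_coefficients(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
# Two nearly collinear predictors make the least-squares fit unstable.
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([2.0, -1.0]) + rng.normal(0, 0.1, 100)

for k in (0.0, 0.1, 1.0):                 # k = 0 recovers least squares
    print(k, ridge_coefficients(X, y, k))
```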

  16. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate the surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer model (VLIDORT) can be used to simulate top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve on the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
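
    A toy sketch of the emulation idea, assuming a stand-in function in place of VLIDORT and illustrative layer sizes: train a small multilayer perceptron on sampled nodes, then check its error on held-out points.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# rtm() is a toy nonlinear "radiance" of geometry + reflectance inputs,
# standing in for an expensive radiative transfer model call.
def rtm(x):
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + x[:, 2] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (5000, 3))     # "smart sampled" training nodes
net = MLPRegressor((64, 64), max_iter=2000, random_state=0)
net.fit(X_train, rtm(X_train))

X_test = rng.uniform(0, 1, (1000, 3))
err = net.predict(X_test) - rtm(X_test)
print(f"NN emulator RMS error: {np.sqrt(np.mean(err**2)):.4f}")
```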

  17. 46 CFR 162.060-26 - Land-based testing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (iv) The manufacturer of the BWMS must demonstrate by using mathematical modeling, computational fluid dynamics modeling, and/or by calculations, that any downscaling will not affect the ultimate functioning... mathematical and computational fluid dynamics modeling) must be clearly identified in the Experimental Design...

  18. 46 CFR 162.060-26 - Land-based testing requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (iv) The manufacturer of the BWMS must demonstrate by using mathematical modeling, computational fluid dynamics modeling, and/or by calculations, that any downscaling will not affect the ultimate functioning... mathematical and computational fluid dynamics modeling) must be clearly identified in the Experimental Design...

  19. 46 CFR 162.060-26 - Land-based testing requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (iv) The manufacturer of the BWMS must demonstrate by using mathematical modeling, computational fluid dynamics modeling, and/or by calculations, that any downscaling will not affect the ultimate functioning... mathematical and computational fluid dynamics modeling) must be clearly identified in the Experimental Design...

  20. Models of MOS and SOS devices

    NASA Technical Reports Server (NTRS)

    Gassaway, J. D.; Mahmood, Q.; Trotter, J. D.

    1980-01-01

    Quarterly report describes progress in three programs: dc sputtering machine for aluminum and aluminum alloys; two dimensional computer modeling of MOS transistors; and development of computer techniques for calculating redistribution diffusion of dopants in silicon on sapphire films.

  1. The calculation of theoretical chromospheric models and the interpretation of the solar spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1994-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.

  2. The calculation of theoretical chromospheric models and the interpretation of solar spectra from rockets and spacecraft

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1993-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.

  3. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by 1H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine atom-accurate structures of the inclusion complexes. 1H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity to be the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparing calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until the calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy, and that quantitative ROESY analysis is a promising method for this. Moreover, the study also validates that quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios are used instead of absolute intensities.

  4. A Stirling engine computer model for performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R.; Jefferies, K.; Miao, D.

    1978-01-01

    To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.

  5. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations to within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
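
    The abstract does not give the analytical acceleration model; a plausible minimal form, with invented constants, balances per-GPU compute time against fixed memory costs and communication overhead that grows with the number of GPUs.

```python
# Hypothetical sketch of an analytical multi-GPU timing model (constants are
# illustrative, not the paper's measured values): compute work divides across
# GPUs while interconnect overhead grows with GPU count.
def predicted_time(n_gpus, t_compute=100.0, t_mem=5.0, t_link=0.5):
    return t_compute / n_gpus + t_mem + t_link * n_gpus

for n in (1, 2, 4, 8, 14):
    print(f"{n:2d} GPUs: {predicted_time(1) / predicted_time(n):.2f}x speedup")
```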

  6. Comparison of measured temperatures, thermal stresses and creep residues with predictions on a built-up titanium structure

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.

    1987-01-01

    Temperature, thermal stresses, and residual creep stresses were studied by comparing laboratory values measured on a built-up titanium structure with values calculated from finite-element models. Several such models were used to examine the relationship between computational thermal stresses and thermal stresses measured on a built-up structure. Element suitability, element density, and computational temperature discrepancies were studied to determine their impact on measured and calculated thermal stress. The optimum number of elements is established from a balance between element density and suitable safety margins, such that the answer is acceptably safe yet is economical from a computational viewpoint. It is noted that situations exist where relatively small excursions of calculated temperatures from measured values result in far more than proportional increases in thermal stress values. Measured residual stresses due to creep significantly exceeded the values computed by the piecewise linear elastic strain analogy approach. The most important element in the computation is the correct definition of the creep law. Computational methodology advances in predicting residual stresses due to creep require significantly more viscoelastic material characterization.

  7. A computer program for the calculation of laminar and turbulent boundary layer flows

    NASA Technical Reports Server (NTRS)

    Dwyer, H. A.; Doss, E. D.; Goldman, A. L.

    1972-01-01

    The results are presented of a study to produce a computer program to calculate laminar and turbulent boundary layer flows. The program is capable of calculating the following types of flow: (1) incompressible or compressible, (2) two dimensional or axisymmetric, and (3) flows with significant transverse curvature. Also, the program can handle a large variety of boundary conditions, such as blowing or suction, arbitrary temperature distributions and arbitrary wall heat fluxes. The program has been specialized to the calculation of equilibrium air flows and all of the thermodynamic and transport properties used are for air. For the turbulent transport properties, the eddy viscosity approach has been used. Although the eddy viscosity models are semi-empirical, the model employed in the program has corrections for pressure gradients, suction and blowing and compressibility. The basic method of approach is to put the equations of motion into a finite difference form and then solve them by use of a digital computer. The program is written in FORTRAN IV and requires small amounts of computer time on most scientific machines. For example, most laminar flows can be calculated in less than one minute of machine time, while turbulent flows usually require three or four minutes.

  8. Models of the solvent-accessible surface of biopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.E.

    1996-09-01

    Many biopolymers such as proteins, DNA, and RNA have been studied because they have important biomedical roles and may be good targets for therapeutic action in treating diseases. This report describes how plastic models of the solvent-accessible surface of biopolymers were made. Computer files containing sets of triangles were calculated, then used on a stereolithography machine to make the models. Small (2 in.) models were made to test whether the computer calculations were done correctly. Also, files of the type (.stl) required by any ISO 9001 rapid prototyping machine were written onto a CD-ROM for distribution to American companies.

  9. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  10. The application of the large particles method of numerical modeling of the process of carbonic nanostructures synthesis in plasma

    NASA Astrophysics Data System (ADS)

    Abramov, G. V.; Gavrilov, A. N.

    2018-03-01

    The article deals with the numerical solution of a mathematical model of particle motion and interaction in multicomponent plasma, using the electric arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires significant machine resources and computation time. Applying the large particles method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with Nvidia CUDA allows all general-purpose computation to be organized on the graphics processor. A comparative analysis of different approaches to parallelizing the computations was carried out, and the algorithm using shared memory was chosen to preserve the accuracy of the solution. A numerical study of the influence of the particle density within a macroparticle on the motion parameters and on the total number of particle collisions in the plasma was carried out for different synthesis modes. A rational range for the coherence coefficient of particles in a macroparticle is computed.

  11. Arc program documentation

    NASA Technical Reports Server (NTRS)

    Mcmillan, J. D.

    1976-01-01

    A description of the input and output files and the data control cards for the altimeter residual computation (ARC) computer program is given. The program acts as the final altimeter preprocessor before the data is reformatted for external users. It calculates all parameters necessary for the computation of the altimeter observation residuals and the sea surface height. Mathematical models used for calculating tropospheric refraction, geoid height, tide height, ephemeris, and orbit geometry are described.

  12. Proceedings: Conference on Computers in Chemical Education and Research, Dekalb, Illinois, 19-23 July 1971.

    ERIC Educational Resources Information Center

    1971

    Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical and mechanical calculations, and…

  13. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields, volume 3

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The computer programs developed to calculate the shock wave precursor and the method of using them are described. The method calculates the precursor flow field in nitrogen gas, including the effects of emission and absorption of radiation on the energy and composition of the gas. The radiative transfer is calculated including the effects of absorption and emission through line as well as continuum processes in the shock layer, and through continuum processes only in the precursor. The effects of local thermodynamic nonequilibrium in the shock layer and precursor regions are also included in the radiative transfer calculations. The three computer programs utilized by this computational scheme to calculate the precursor flow field solution for a given shock layer flow field are discussed.

  14. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, the structural load-deflection equation characteristics, the storage allocation plan, and the computer hardware capabilities. It thereby provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of the resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  15. Time-Dependent Modeling of Underwater Explosions by Convolving Similitude Source with Bandlimited Impulse from the CASS/GRAB Model

    DTIC Science & Technology

    2015-06-30

    …calculated with a high degree of accuracy, leading to intensive computations and long computational times when dealing with range-depth fields… …can be obtained using similitude analysis; it allows the comparison of differing explosive weights and provides the means to scale the pressure and energy…

  16. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. [Charlottesville, VA]; Riha, David S. [San Antonio, TX]; Thacker, Ben H. [San Antonio, TX]

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  17. Computer modelling of BaY2F8: defect structure, rare earth doping and optical behaviour

    NASA Astrophysics Data System (ADS)

    Amaral, J. B.; Couto Dos Santos, M. A.; Valerio, M. E. G.; Jackson, R. A.

    2005-10-01

    BaY2F8, when doped with rare earth elements, is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a computational technique, which combines atomistic modelling and crystal field calculations, in a study of rare earth doping of the material. Atomistic modelling is used to calculate the intrinsic defect structure and the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Energy levels are then calculated for the Dy3+-substituted material, and comparisons with the results of recent experimental work are made.

  18. An Evaluation of the Scattering Law for Light and Heavy Water in ENDF-6 Format, Based on Experimental Data and Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Márquez Damián, J. I.; Granada, J. R.; Malaspina, D. C.

    2014-04-01

    In this work we present an evaluation in ENDF-6 format of the scattering law for light and heavy water, computed using the LEAPR module of NJOY99. The models used in this evaluation are based on experimental data on light water dynamics measured by Novikov, partial structure factors obtained by Soper, and molecular dynamics calculations performed with GROMACS using a reparameterized version of the flexible SPC model by Toukan and Rahman. The models use the Egelstaff-Schofield diffusion equation for translational motion, and a continuous spectrum calculated from the velocity autocorrelation function computed with GROMACS. The scattering law for H in H2O is computed using the incoherent approximation, and the scattering laws for D and O in D2O are computed using the Sköld approximation for coherent scattering. The calculations show significant improvement over ENDF/B-VI and ENDF/B-VII when compared with measurements of the total cross section, differential scattering experiments, and quasi-elastic neutron scattering (QENS) experiments.

  19. Hydrostatic calculations of axisymmetric flow and its stability for the AGCE model

    NASA Technical Reports Server (NTRS)

    Miller, T. L.; Gall, R. L.

    1981-01-01

    Baroclinic waves in the atmospheric general circulation experiment (AGCE) apparatus were determined by the use of numerical hydrostatic primitive equation models. The calculation is accomplished by using an axisymmetric primitive equation model to compute, for a given set of experimental parameters, a steady state axisymmetric flow, and then testing this axisymmetric flow for stability using a linear primitive equation model. Some axisymmetric flows are presented together with preliminary stability calculations.

  20. An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Conforti, Patrick; Hawes, Fred

    2015-05-01

    A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies. One must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing a LBL option directly within MODTRAN will insure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principle calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region. That is, the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented. This will include initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.
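
    To make the binned bookkeeping concrete, the sketch below sums Lorentzian lines centered within the 0.2 cm-1 window around each 0.1 cm-1 bin and adds a constant stand-in for the pre-computed line-tail contribution; the line list, profile, and tail value are toy placeholders, not MODTRAN's Padé approximant tables.

```python
import numpy as np

# Toy stand-in for a pressure-broadened line shape.
def lorentz(nu, nu0, S, gamma=0.05):
    return S * gamma / (np.pi * ((nu - nu0) ** 2 + gamma ** 2))

line_centers = np.sort(np.random.default_rng(0).uniform(1000, 1010, 500))
line_strengths = np.full(500, 1e-3)
tail_term = 0.02            # pretend pre-computed tail contribution, per bin

def bin_absorption(bin_lo, n_sub=10):
    # Sub-grid of wavenumbers within the 0.1 cm-1 bin [bin_lo, bin_lo + 0.1].
    nu = bin_lo + 0.1 * (np.arange(n_sub) + 0.5) / n_sub
    # Explicit LBL sum only over lines within 0.05 cm-1 of the bin edges,
    # i.e. a 0.2 cm-1 window; everything farther is charged to the tail term.
    near = (line_centers >= bin_lo - 0.05) & (line_centers <= bin_lo + 0.15)
    k = sum(lorentz(nu, c, s) for c, s in
            zip(line_centers[near], line_strengths[near]))
    return k + tail_term

print(bin_absorption(1005.0))
```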

  1. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely known global environmental problems, and it has altered watershed hydrological processes in time and space, especially in large rivers. Hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and therefore requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize in the space and time dimensions: based on a distributed hydrological model, they calculate the natural features in order, grid by grid (unit by unit, or basin by basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility: it makes full use of computing and storage resources when these are limited, and its efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
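
    A minimal sketch of unit-level parallelism of the sort described, assuming independent sub-basin computations whose outflows are then accumulated downstream; runoff() is a hypothetical stand-in for the distributed model's per-unit calculation.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

# Hypothetical per-unit hydrological computation (a fake workload here).
def runoff(seed):
    rng = np.random.default_rng(seed)
    return float(np.sum(rng.uniform(0, 5, 1000)))

if __name__ == "__main__":
    # Independent sub-basin units are computed in parallel...
    with ProcessPoolExecutor() as pool:
        unit_flows = list(pool.map(runoff, range(16)))
    # ...and accumulated downstream in topological order (a plain sum here).
    print("basin outflow:", sum(unit_flows))
```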

  2. Investigation of Molecule-Surface Interactions With Overtone Absorption Spectroscopy and Computational Methods

    DTIC Science & Technology

    2010-11-01

    …method at a fraction of the computational cost. The overtone frequency serves as the bridge between the molecule-surface interaction model and… …the computational cost of utilizing higher levels of theory such as MP2. The second task is the calculation of absorption frequencies as a function… …the methyl C-H bonds, and the carbon and hydrogen atomic masses, respectively. The calculation of the fundamental and overtone…

  3. Computer modelling of the optical behaviour of rare earth dopants in BaY2F8

    NASA Astrophysics Data System (ADS)

    Jackson, R. A.; Valerio, M. E. G.; Couto Dos Santos, M. A.; Amaral, J. B.

    2005-01-01

    BaY2F8, when doped with rare earth elements is a material of interest in the development of solid-state laser systems, especially for use in the infrared region. This paper presents the application of a new computational technique, which combines atomistic modelling and crystal field calculations in a study of rare earth doping of the material. Atomistic modelling is used to calculate the symmetry and detailed geometry of the dopant ion-host lattice system, and this information is then used to calculate the crystal field parameters, which are an important indicator in assessing the optical behaviour of the dopant-crystal system. Comparisons with the results of recent experimental work on this material are made.

  4. ATHENA 3D: A finite element code for ultrasonic wave propagation

    NASA Astrophysics Data System (ADS)

    Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.

    2014-04-01

    The understanding of wave propagation phenomena requires the use of robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming; however, advances in processor speed and memory allow them to be more and more competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media and, in particular, heterogeneous and anisotropic materials like welds. It is based on solving the elastodynamic equations in the calculation zone, expressed in terms of stress and particle velocities. A particularity of the code is that the discretization of the calculation domain uses a regular Cartesian 3D mesh, while a defect of complex geometry can be described using a separate (2D) mesh through the fictitious domains method. This combines the rapidity of regular-mesh computation with the capability of modelling arbitrarily shaped defects. Furthermore, the calculation domain is discretized with a quasi-explicit time evolution scheme, so that only small local linear systems have to be solved. The final step in reducing the computation time is that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. Performance in terms of calculation time is also presented for both local computer and computation cluster use.

  5. Concentrator optical characterization using computer mathematical modelling and point source testing

    NASA Technical Reports Server (NTRS)

    Dennison, E. W.; John, S. L.; Trentelman, G. F.

    1984-01-01

    The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray tracing computer programs to calculate the mathematical model is described. These ray tracing programs can include any type of optical configuration from simple paraboloids to array of spherical facets and can be adapted to microcomputers or larger computers, which can graphically display real-time comparison of calculated and measured data.

  6. Critical Analysis of Cluster Models and Exchange-Correlation Functionals for Calculating Magnetic Shielding in Molecular Solids.

    PubMed

    Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2015-11-10

    Calculations of the principal components of magnetic-shielding tensors in crystalline solids require the inclusion of the effects of lattice structure on the local electronic environment to obtain significant agreement with experimental NMR measurements. We assess periodic (GIPAW) and GIAO/symmetry-adapted cluster (SAC) models for computing magnetic-shielding tensors by calculations on a test set containing 72 insulating molecular solids, with a total of 393 principal components of chemical-shift tensors from 13C, 15N, 19F, and 31P sites. When clusters are carefully designed to represent the local solid-state environment and when periodic calculations include sufficient variability, both methods predict magnetic-shielding tensors that agree well with experimental chemical-shift values, demonstrating the correspondence of the two computational techniques. At the basis-set limit, we find that the small differences in the computed values have no statistical significance for three of the four nuclides considered. Subsequently, we explore the effects of additional DFT methods available only with the GIAO/cluster approach, particularly the use of hybrid-GGA functionals, meta-GGA functionals, and hybrid meta-GGA functionals that demonstrate improved agreement in calculations on symmetry-adapted clusters. We demonstrate that meta-GGA functionals improve computed NMR parameters over those obtained by GGA functionals in all cases, and that hybrid functionals improve computed results over the respective pure DFT functional for all nuclides except 15N.

  7. Input guide for computer programs to generate thermodynamic data for air and Freon CF4

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Penny, M. M.; Baker, L. R., Jr.

    1975-01-01

    FORTRAN computer programs were developed to calculate the thermodynamic properties of Freon 14 and air for isentropic expansion from given plenum conditions. Thermodynamic properties for air are calculated with equations derived from the Beattie-Bridgeman nonstandard equation of state and, for Freon 14, with equations derived from the Redlich-Kwong nonstandard equation of state. These two gases are used in scale model testing of rocket nozzle flow fields, which requires simulation of the prototype plume shape with a cold flow test approach. The utility of the computer programs for analytical prediction of flow fields is enhanced by arranging card or tape output of the data in a format compatible with a method-of-characteristics computer program.

  8. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods are also described when the speed of computation is an important consideration.
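
    The second- versus fourth-order trade-off can be illustrated with central-difference stencils on a toy derivative; the paper's solver is for the magnetostatic field, and this sketch only shows the convergence orders.

```python
import numpy as np

# Second-order central difference for f'(x).
def d2(f, h):
    return (f[2:] - f[:-2]) / (2 * h)

# Fourth-order central difference for f'(x).
def d4(f, h):
    return (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)

# Differentiate sin(x) on progressively finer uniform grids and compare
# the maximum error of each scheme against the exact derivative cos(x).
for n in (32, 64, 128):
    x = np.linspace(0, 2 * np.pi, n)
    h = x[1] - x[0]
    f = np.sin(x)
    e2 = np.max(np.abs(d2(f, h) - np.cos(x[1:-1])))
    e4 = np.max(np.abs(d4(f, h) - np.cos(x[2:-2])))
    print(f"n={n:4d}  2nd-order err={e2:.2e}  4th-order err={e4:.2e}")
```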

  9. Temperature Calculations in the Coastal Modeling System

    DTIC Science & Technology

    2017-04-01

    …(tide) and river discharge at model boundaries, wave radiation stress, and wind forcing over a model computational domain. Physical processes calculated… …calculated in the CMS using the following meteorological parameters: solar radiation, cloud cover, air temperature, wind speed, and surface water temperature… …during a clear (i.e., cloudless) sky (W m-2); CLDC is the cloud cover fraction (0-1.0); SWR is the surface reflection coefficient; and SHDf is the…

  10. Towards quantum chemistry on a quantum computer.

    PubMed

    Lanyon, B P; Whitfield, J D; Gillett, G G; Goggin, M E; Almeida, M P; Kassal, I; Biamonte, J D; Mohseni, M; Powell, B J; Barbieri, M; Aspuru-Guzik, A; White, A G

    2010-02-01

    Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.

  11. CORY: A Computer Program for Determining Dimension Stock Yields

    Treesearch

    Charles C Brunner; Marshall S. White; Fred M. Lamb; James G. Schroeder

    1989-01-01

    CORY is a computer program that calculates random-width, fixed-length cutting yields and best sawing sequences for either rip- or crosscut-first operations. It differs from other yield calculating programs by evaluating competing cuttings through conflict resolution models. Comparisons with Program YIELD resulted in a 9 percent greater cutting volume and a 98 percent...

  12. Modeling the complete Otto cycle: Preliminary version. [computer programming

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1977-01-01

    A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.

  13. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
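
    A crude emulation of the low-precision experiment, assuming the one-level Lorenz '96 system with the tendency rounded to float16; this stands in for the paper's bit-flip fault emulator and, for brevity, applies the reduced precision to the whole tendency rather than only the small scales.

```python
import numpy as np

# One-level Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
def l96_tendency(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, steps, dt=0.01, precision=np.float64):
    x = x0.copy()
    for _ in range(steps):                       # forward Euler for brevity
        x = x + dt * l96_tendency(x).astype(precision)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(0).normal(size=40)
x_double = integrate(x0, 500)
x_half = integrate(x0, 500, precision=np.float16)  # emulated low precision
print("RMS difference after 5 time units:",
      np.sqrt(np.mean((x_double - x_half) ** 2)))
```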

  14. Assessment of three-dimensional inviscid codes and loss calculations for turbine aerodynamic computations

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1984-01-01

    An assessment of several three dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three dimensional viscous analysis technique.

  15. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion uses nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning, and can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.
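
    The inversion pattern (a forward model wrapped in nonlinear least squares) can be sketched as follows; the toy forward model is a Mogi-type point source rather than the elastic half-space dislocation model Simplex actually uses, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: vertical surface displacement from a point source at
# (x0, y0, depth) with a lumped strength parameter (Mogi-like form).
def forward(p, x, y):
    x0, y0, depth, strength = p
    r2 = (x - x0) ** 2 + (y - y0) ** 2 + depth ** 2
    return strength * depth / r2 ** 1.5

# Synthetic "observed" deformation on a surface grid, with noise added.
x, y = np.meshgrid(np.linspace(-10, 10, 15), np.linspace(-10, 10, 15))
x, y = x.ravel(), y.ravel()
true_p = np.array([1.0, -2.0, 4.0, 50.0])
obs = forward(true_p, x, y) + 0.001 * np.random.default_rng(0).normal(size=x.size)

# Nonlinear multiparameter estimation: minimize the model-data residuals.
fit = least_squares(lambda p: forward(p, x, y) - obs, x0=[0, 0, 3, 10])
print("estimated parameters:", fit.x)
```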

  16. A Hybrid Approach To Tandem Cylinder Noise

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2004-01-01

    Aeolian tone generation from tandem cylinders is predicted using a hybrid approach. A standard computational fluid dynamics (CFD) code is used to compute the unsteady flow around the cylinders, and the acoustics are calculated using the acoustic analogy. The CFD code is nominally second order in space and time and includes several turbulence models, but the SST k-omega model is used for most of the calculations. Significant variation is observed between laminar and turbulent cases, and with changes in the turbulence model. A two-dimensional implementation of the Ffowcs Williams-Hawkings (FW-H) equation is used to predict the far-field noise.

  17. Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications

    NASA Technical Reports Server (NTRS)

    Thompson, W. R.; Zollweg, John A.; Gabis, David H.

    1992-01-01

    A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which underlies calculations of condensation in Titan's tropospheric clouds. The model imposes consistency constraints on the experimental equilibrium data and embodies temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of the condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.
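
    The Raoult's-law bound mentioned above is the ideal-solution estimate x_N2 = p_N2 / p_sat,N2(T). A back-of-envelope sketch with approximate Antoine constants for N2 follows; the constants and conditions are illustrative values, not the paper's thermodynamic model.

```python
# Saturation pressure of N2 from an Antoine-type fit, log10(P/bar) = A - B/(T + C);
# constants are rough published values for the 63-126 K range.
def p_sat_n2(T_kelvin):
    A, B, C = 3.7362, 264.651, -6.788
    return 10 ** (A - B / (T_kelvin + C))

T, p_n2 = 80.0, 1.0     # illustrative temperature (K) and N2 partial pressure (bar)
x_raoult = p_n2 / p_sat_n2(T)   # ideal-solution N2 mole fraction in the liquid
print(f"Raoult's-law N2 mole fraction at {T} K: {x_raoult:.2f}")
```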

  18. Two reduced form air quality modeling techniques for rapidly calculating pollutant mitigation potential across many sources, locations and precursor emission types

    EPA Science Inventory

    Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...

  19. Assessment of computational prediction of tail buffeting

    NASA Technical Reports Server (NTRS)

    Edwards, John W.

    1990-01-01

    Assessments of the viability of computational methods and the computer resource requirements for the prediction of tail buffeting are made. Issues involved in the use of Euler and Navier-Stokes equations in modeling vortex-dominated and buffet flows are discussed and the requirement for sufficient grid density to allow accurate, converged calculations is stressed. Areas in need of basic fluid dynamics research are highlighted: vorticity convection, vortex breakdown, dynamic turbulence modeling for free shear layers, unsteady flow separation for moderately swept, rounded leading-edge wings, vortex flows about wings at high subsonic speeds. An estimate of the computer run time for a buffeting response calculation for a full span F-15 aircraft indicates that an improvement in computer and/or algorithm efficiency of three orders of magnitude is needed to enable routine use of such methods. Attention is also drawn to significant uncertainties in the estimates, in particular with regard to nonlinearities contained within the modeling and the question of the repeatability or randomness of buffeting response.

  20. EOSPEC: a complementary toolbox for MODTRAN calculations

    NASA Astrophysics Data System (ADS)

    Dion, Denis

    2016-09-01

    For more than a decade, Defence Research and Development Canada (DRDC) has been developing a library of computer models for the calculation of atmospheric effects on EO-IR sensor performance. The library, called EOSPEC-LIB (EO-IR Sensor PErformance Computation LIBrary), has been designed as a complement to MODTRAN, the radiative transfer code developed by the Air Force Research Laboratory and Spectral Sciences Inc. in the USA. The library comprises modules for the definition of atmospheric conditions, including aerosols, and provides modules for the calculation of turbulence and fine refraction effects. SMART (Suite for Multi-resolution Atmospheric Radiative Transfer), a key component of EOSPEC, allows one to perform fast computations of transmittances and radiances using MODTRAN through a wide-band correlated-k computational approach. In its most recent version, EOSPEC includes a MODTRAN toolbox whose functions help generate, in a format compatible with MODTRAN 5 and 6, atmospheric and aerosol profiles, user-defined refracted optical paths, and inputs for configuring the MODTRAN sea radiance (BRDF) model. The paper gives an overall description of the EOSPEC features and capabilities. EOSPEC provides augmented capabilities for computations in the lower atmosphere and in maritime environments.

  1. Quantum chemical determination of Young's modulus of lignin. Calculations on a β-O-4' model compound

    Treesearch

    Thomas Elder

    2007-01-01

    The calculation of Young's modulus of lignin has been examined by subjecting a dimeric model compound to strain, coupled with the determination of energy and stress. The computational results, derived from quantum chemical calculations, are in agreement with available experimental results. Changes in geometry indicate that modifications in dihedral angles occur in...

  2. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Richard; De Leon, Phillip L.

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM-UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15x while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  3. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGES

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM-UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15x while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
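
    A toy sketch of the tree-structured pruning idea: score a frame against a coarse top layer of merged Gaussians, then evaluate only the children of the best-scoring nodes instead of every UBM component. The sizes, unit variances, and beam width are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_top, kids = 20, 8, 64            # feature dim, top-layer nodes, children each
top_mu = rng.normal(size=(N_top, D))  # merged (reduced) Gaussians, one per node
child_mu = top_mu[:, None, :] + 0.3 * rng.normal(size=(N_top, kids, D))

# Log-density up to a constant, for unit-variance diagonal Gaussians.
def log_gauss(x, mu):
    return -0.5 * np.sum((x - mu) ** 2, axis=-1)

def best_components(x, beam=2):
    top_scores = log_gauss(x, top_mu)
    shortlist = np.argsort(top_scores)[-beam:]    # prune: keep best top nodes
    cand = child_mu[shortlist].reshape(-1, D)     # beam*kids instead of N_top*kids
    return shortlist, int(np.argmax(log_gauss(x, cand)))

# Evaluates N_top + beam*kids Gaussians instead of N_top*kids.
print(best_components(rng.normal(size=D)))
```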

  4. 3D gain modeling of LMJ and NIF amplifiers

    NASA Astrophysics Data System (ADS)

    LeTouze, Geoffroy; Cabourdin, Olivier; Mengue, J. F.; Guenet, Mireille; Grebot, Eric; Seznec, Stephane E.; Jancaitis, Kenneth S.; Marshall, Christopher D.; Zapata, Luis E.; Erlandson, A. E.

    1999-07-01

    A 3D ray-trace model has been developed to predict the performance of flashlamp-pumped laser amplifiers. The computer program, written in C++, includes a graphical display option using the Open Inventor library, as well as a parser and a loader allowing the user to easily model complex multi-segment amplifier systems. It runs both on a workstation cluster at LLNL and on the T3E Cray at CEA. We discuss how we have reduced the required computation time without changing precision by optimizing the parameters that set the discretization level of the calculation. As an example, the sample of calculation points is chosen to fit the pumping profile through the thickness of the amplifier slabs. We show the difference in pump rates between our latest model and those produced by our earlier 2.5D code AmpModel. We also present the results of calculations which model surfaces and other 3D effects, such as top and bottom reflector positions and reflectivities, which could not be included in the 2.5D model. This new computer model also includes a full 3D calculation of the amplified spontaneous emission rate in the laser slab, as opposed to the 2.5D model, which tracked only the variation of the gain across the transverse dimensions of the slab. We present the impact of this evolution of the model on the predicted stimulated decay rate and the resulting gain distribution. Comparisons with the most recent AmpLab experimental results are presented for the different typical NIF and LMJ configurations.

  5. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
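    The exact combination described above amounts to mixing the per-scenario, per-GMPM conditional distributions with their deaggregation weights. A hedged numerical sketch (array names and shapes are assumptions) using the law of total variance:

```python
import numpy as np

def exact_cs(weights, mu_cond, sigma_cond):
    """Combine per-scenario/GMPM conditional spectra into an exact CS.

    weights    : deaggregation probabilities p_j (summing to 1)
    mu_cond    : (n_scenarios, n_periods) conditional mean lnSa
    sigma_cond : (n_scenarios, n_periods) conditional std of lnSa
    """
    w = np.asarray(weights)[:, None]
    mu = np.sum(w * mu_cond, axis=0)                  # total mean
    # law of total variance: within-scenario + between-scenario spread
    var = np.sum(w * (sigma_cond ** 2 + mu_cond ** 2), axis=0) - mu ** 2
    return mu, np.sqrt(var)

p = np.array([0.7, 0.3])                              # two causal scenarios
mu = np.array([[-1.0, -2.0], [-0.5, -1.5]])           # toy lnSa means
sd = np.array([[0.3, 0.4], [0.3, 0.4]])               # toy lnSa stds
print(exact_cs(p, mu, sd))
```

    The between-scenario term is what a single-scenario approximation drops, which is plausibly one reason the approximate and exact targets diverge when the deaggregation is broad.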

  6. Ballistics Modeling for Non-Axisymmetric Hypervelocity Smart Bullets

    DTIC Science & Technology

    2014-06-03

    …can in principle come from experiments or computational fluid dynamics (CFD) calculations. CFD calculations are carried out for a standard bullet (0.308” 168 grain…). [Excerpt; the remainder of the record is table-of-contents residue covering spin and pitch damping, the Magnus moment, and CFD modeling of a standard bullet.]

  7. Fast computation of high energy elastic collision scattering angle for electric propulsion plume simulation

    NASA Astrophysics Data System (ADS)

    Araki, Samuel J.

    2016-11-01

    In the plumes of Hall thrusters and ion thrusters, high energy ions experience elastic collisions with slow neutral atoms. These collisions involve a process of momentum exchange, altering the initial velocity vectors of the collision pair. In addition to the momentum exchange process, ions and atoms can exchange electrons, resulting in slow charge-exchange ions and fast atoms. In these simulations, it is particularly important to accurately perform computations of ion-atom elastic collisions in determining the plume current profile and assessing the integration of spacecraft components. The existing models are currently capable of accurate calculation but are not fast enough such that the calculation can be a bottleneck of plume simulations. This study investigates methods to accelerate an ion-atom elastic collision calculation that includes both momentum- and charge-exchange processes. The scattering angles are pre-computed through a classical approach with ab initio spin-orbit free potential and are stored in a two-dimensional array as functions of impact parameter and energy. When performing a collision calculation for an ion-atom pair, the scattering angle is computed by a table lookup and multiple linear interpolations, given the relative energy and randomly determined impact parameter. In order to further accelerate the calculations, the number of collision calculations is reduced by properly defining two cut-off cross-sections for the elastic scattering. In the MCC method, the target atom needs to be sampled; however, it is confirmed that initial target atom velocity does not play a significant role in typical electric propulsion plume simulations such that the sampling process is unnecessary. With these implementations, the computational run-time to perform a collision calculation is reduced significantly compared to previous methods, while retaining the accuracy of the high fidelity models.
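    A sketch of the pre-computed table lookup (the bilinear scheme is an assumption consistent with the "table lookup and multiple linear interpolations" described; grid and variable names are invented):

```python
import numpy as np

def scatter_angle(energy, impact, e_grid, b_grid, table):
    """Bilinear lookup of a pre-computed scattering angle chi(E, b).

    e_grid, b_grid : sorted 1-D grids of relative energy and impact parameter
    table          : table[i, j] = chi(e_grid[i], b_grid[j])
    """
    i = int(np.clip(np.searchsorted(e_grid, energy) - 1, 0, len(e_grid) - 2))
    j = int(np.clip(np.searchsorted(b_grid, impact) - 1, 0, len(b_grid) - 2))
    te = (energy - e_grid[i]) / (e_grid[i + 1] - e_grid[i])
    tb = (impact - b_grid[j]) / (b_grid[j + 1] - b_grid[j])
    return ((1 - te) * (1 - tb) * table[i, j] + te * (1 - tb) * table[i + 1, j]
            + (1 - te) * tb * table[i, j + 1] + te * tb * table[i + 1, j + 1])

e = np.array([1.0, 10.0, 100.0])          # relative energy grid (toy)
b = np.array([0.0, 1.0, 2.0])             # impact parameter grid (toy)
chi = np.array([[3.1, 1.2, 0.3],
                [2.0, 0.8, 0.2],
                [1.0, 0.4, 0.1]])          # pre-computed angles (toy)
print(scatter_angle(5.0, 0.5, e, b, chi))
```

    In the simulation loop the relative energy is fixed by the collision pair and the impact parameter is drawn at random, so each collision costs one lookup plus a few interpolations instead of a trajectory integration.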

  8. An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-06-01

    The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of the radiosensitive tissues and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole-body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  9. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.

  10. Computer program for a four-cylinder-Stirling-engine controls simulation

    NASA Technical Reports Server (NTRS)

    Daniels, C. J.; Lorenzo, C. F.

    1982-01-01

    A transient simulation computer program for a four-cylinder Stirling engine is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computer calculation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow: (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short-circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady-state and transient results are presented.

  11. Technical Note: spektr 3.0-A computational tool for x-ray spectrum modeling and analysis.

    PubMed

    Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H

    2016-08-01

    A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies of 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. The median percent difference in photon counts between a TASMICS and a TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage differences arising in the low- and high-energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported close agreement between measured and calculated spectra, with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open-source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
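    A hedged sketch of the output-matching idea (single Al filter only; the toolkit's optimizer also fits W filtration, and all names and numbers below are illustrative toys, not spektr's API):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def filtered_output(t_al, fluence, mu_al, k):
    """Beer-Lambert Al filtration of a spectrum, summed to a scalar
    output (e.g. mR/mAs); mu_al in 1/mm, t_al in mm."""
    return np.sum(fluence * np.exp(-mu_al * t_al) * k)

def fit_filtration(measured, fluence, mu_al, k, t_max=20.0):
    """Find the Al thickness whose computed output matches a measurement."""
    loss = lambda t: (filtered_output(t, fluence, mu_al, k) - measured) ** 2
    return minimize_scalar(loss, bounds=(0.0, t_max), method='bounded').x

E = np.arange(20.0, 121.0)                  # keV bins (toy)
phi = np.exp(-(E - 60.0) ** 2 / 800.0)      # toy unfiltered spectrum
mu = 0.5 * (30.0 / E) ** 3                  # crude attenuation trend, 1/mm
k = E                                        # toy exposure-per-photon factor
target = 0.5 * filtered_output(0.0, phi, mu, k)
print(f"fitted Al thickness: {fit_filtration(target, phi, mu, k):.2f} mm")
```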

  12. Calculation of recirculating flow behind flame-holders

    NASA Astrophysics Data System (ADS)

    Zeng, Q.; Sheng, Y.; Zhou, Q.

    1985-10-01

    The applicability of the standard k-epsilon turbulence model to the numerical calculation of recirculating flow is discussed. Many computations of recirculating flows behind bluff bodies used as flame-holders in the afterburner of an aeroengine have been completed. The blocking-off method used to treat the inclined walls of the flame-holder gives good results. In isothermal recirculating flows the flame-holder wall is assumed to be isolated; it is therefore possible to remove the inactive zone from the calculation domain in programming to save computer time. The computation for a V-shaped flame-holder exhibits an interesting phenomenon: the recirculation zone extends into the cavity of the flame-holder.

  13. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  14. Calculation of transonic aileron buzz

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Bailey, H. E.

    1979-01-01

    An implicit finite-difference computer code that uses a two-layer algebraic eddy viscosity model and exact geometric specification of the airfoil has been used to simulate transonic aileron buzz. The calculated results, which were performed on both the Illiac IV parallel computer processor and the Control Data 7600 computer, are in essential agreement with the original expository wind-tunnel data taken in the Ames 16-Foot Wind Tunnel just after World War II. These results and a description of the pertinent numerical techniques are included.

  15. Process air quality data

    NASA Technical Reports Server (NTRS)

    Butler, C. M.; Hogge, J. E.

    1978-01-01

    Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness of fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model and (3) perform some goodness of fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continuation of the development of research on processing air quality data.

  16. Calculation of Macrosegregation in an Ingot

    NASA Technical Reports Server (NTRS)

    Poirier, D. R.; Maples, A. L.

    1986-01-01

    Report describes both two-dimensional theoretical model of macrosegregation (separating into regions of discrete composition) in solidification of binary alloy in chilled rectangular mold and interactive computer program embodying model. Model evolved from previous ones limited to calculating effects of interdendritic fluid flow on final macrosegregation for given input temperature field under assumption of no fluid in bulk melt.

  17. On the analysis of para-ammonia observations

    NASA Technical Reports Server (NTRS)

    Kuiper, T. B. H.

    1994-01-01

    The intensities and optical depths of the (1, 1), (2, 2), and (2, 1) inversion transitions of ammonia can be calculated quite accurately without solving the equations of statistical equilibrium. A two-temperature partition function suffices. The excitation of the K-ladders can be approximated by using a temperature obtained from a two-level model with the (2, 1) and (1, 1) levels. Distribution of populations between the ladders is described with the kinetic temperature. This enables one to compute the (1, 1) and (2, 1) inversion transition excitation temperatures and optical depths. To compute the (2, 2) brightness temperatures, the fractional population of the (2, 2) doublet is computed from the population of the (1, 1) doublet using the 'true rotation temperature,' which is calculated using a three-level model with the (2, 1), (2, 2), and (1, 1) levels. In spite of some iterative steps, the calculation is quite fast.
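    As a worked illustration of the Boltzmann step that relates the (2, 2) population to the (1, 1) population (the energy gap and weight ratio below are generic placeholders, not values taken from the paper):

```python
import numpy as np

def doublet_ratio(t_rot, delta_e_k=41.0, g_ratio=5.0 / 3.0):
    """Population of the (2,2) doublet relative to (1,1), assuming a
    Boltzmann distribution at the rotation temperature t_rot (K).
    delta_e_k : (2,2)-(1,1) energy separation expressed in kelvin (toy)
    g_ratio   : ratio of statistical weights g(2,2)/g(1,1) (toy)
    """
    return g_ratio * np.exp(-delta_e_k / t_rot)

print(doublet_ratio(20.0))   # cold cloud: most population stays in (1,1)
```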

  18. User's manual for PANDA II: A computer code for calculating equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerley, G.I.

    1991-07-18

    PANDA is an interactive computer code that is used to compute equations of state (EOS) for many classes of materials over a wide range of densities and temperatures. The first step in the development of a general EOS model is to determine the EOS for a one-component system, consisting of a single solid or fluid phase and a single chemical species. The results of several such calculations can then be combined to construct EOS for multiphase and multicomponent systems. For one-component solids and fluids, PANDA offers a variety of options for modeling various contributions to the EOS: the zero-Kelvin isotherm, lattice vibrations, fluid degrees of freedom, thermal electronic excitation and ionization, and molecular vibrational and rotational degrees of freedom. Two options are available for computing EOS for multicomponent systems from separate EOS for the individual species and phases. The phase transition model is used for a system of immiscible phases, each having the same chemical composition. In the mixture model, the components can be either miscible or immiscible and can have different chemical compositions; mixtures can be either inert or reactive. PANDA provides over 50 commands that are used to define the EOS models, to make calculations and compare the models to experimental data, and to generate and maintain tabular EOS libraries for use in hydrocodes and other applications. Versions of the code are available for the Cray (UNICOS and CTSS), SUN (UNIX), and VAX (VMS) machines, and a small version is available for personal computers (DOS). This report describes the EOS models, use of the commands, and several sample problems. 92 refs., 7 figs., 10 tabs.

  19. National Stormwater Calculator User's Guide – VERSION 1.1

    EPA Science Inventory

    This document is the user's guide for running EPA's National Stormwater Calculator (http://www.epa.gov/nrmrl/wswrd/wq/models/swc/). The National Stormwater Calculator is a simple to use tool for computing small site hydrology for any location within the US.

  20. Radiative Heating Methodology for the Huygens Probe

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth

    2007-01-01

    The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.

  1. Recent advances in QM/MM free energy calculations using reference potentials.

    PubMed

    Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L

    2015-05-01

    Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.

  2. Multiplexed Predictive Control of a Large Commercial Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.

    2008-01-01

    Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.

  3. Cyclic injection, storage, and withdrawal of heated water in a sandstone aquifer at St. Paul, Minnesota: Analysis of thermal data and nonisothermal modeling of short-term test cycles

    USGS Publications Warehouse

    Miller, Robert T.; Delin, G.N.

    1994-01-01

    A three-dimensional, anisotropic, nonisothermal, ground-water-flow, and thermal-energy-transport model was constructed to simulate the four short-term test cycles. The model was used to simulate the entire short-term testing period of approximately 400 days. The only model properties varied during model calibration were longitudinal and transverse thermal dispersivities, which, for final calibration, were simulated as 3.3 and 0.33 meters, respectively. The model was calibrated by comparing model-computed results to (1) measured temperatures at selected altitudes in four observation wells, (2) measured temperatures at the production well, and (3) calculated thermal efficiencies of the aquifer. Model-computed withdrawal-water temperatures were within an average of about 3 percent of measured values and model-computed aquifer-thermal efficiencies were within an average of about 5 percent of calculated values for the short-term test cycles. These data indicate that the model accurately simulated thermal-energy storage within the Franconia-Ironton-Galesville aquifer.

  4. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles, across wider ranges of model parameters, and at larger spatial and temporal scales than have previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.

  5. Preliminary Modulus and Breakage Calculations on Cellulose Models

    USDA-ARS?s Scientific Manuscript database

    The Young’s modulus of polymers can be calculated by stretching molecular models with the computer. The molecule is stretched and the derivative of the changes in stored potential energy for several displacements, divided by the molecular cross-section area, is the stress. The modulus is the slope o...

  6. Operational procedure for computer program for design point characteristics of a gas generator or a turbojet lift engine for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1972-01-01

    The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance of the model engine and its components. The program was written in FORTRAN 4 language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.

  7. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  8. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  9. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.

  10. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  11. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  12. Computation of Southern Pine Site Index Using a TI-59 Calculator

    Treesearch

    Robert M. Farrar

    1983-01-01

    A program is described that permits computation of site index in the field using a Texas Instruments model TI-59 programmable, hand-held, battery-powered calculator. Based on a series of equations developed by R.M. Farrar, Jr., for the site index curves in USDA Miscellaneous Publication 50, the program can accommodate any index base age, tree age, and height within...

  13. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    PubMed

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for different structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. For comparison of the permeability, the present computational models were matched well with previous models, which justify our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability was almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
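    The permeability step follows directly from Darcy's law applied to each resolved flow solution; a minimal sketch with assumed SI inputs:

```python
def darcy_permeability(flow_rate, viscosity, length, area, dp):
    """k = Q * mu * L / (A * dP), rearranged from Darcy's law
    Q = k * A * dP / (mu * L).  Consistent SI units give k in m^2."""
    return flow_rate * viscosity * length / (area * dp)

# toy case: a 100-um cube of fibre network passing 1e-13 m^3/s of water
# (viscosity 1e-3 Pa*s) under a 10 Pa pressure drop
print(darcy_permeability(1e-13, 1e-3, 1e-4, 1e-8, 10.0))   # -> 1e-13 m^2
```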

  14. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To conquer the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the functions for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed block is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong-scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times by using 4 and 16 GPUs. Next, we examined a weak-scaling test where the model sizes (numbers of grids) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number of cores. Finally, we applied the GPU calculation to a simulation of the 2011 Tohoku-oki earthquake. The model was constructed using a slip model from inversion of strong-motion data (Suzuki et al., 2012) and a geological- and geophysical-based velocity structure model comprising all of the Tohoku and Kanto regions as well as the large source area, consisting of about 1.9 billion grids. The overall characteristics of the observed velocity seismograms for periods longer than 8 s were successfully reproduced (Maeda et al., 2012 AGU meeting). The turnaround time for the 50 thousand-step calculation (corresponding to 416 s of seismograms) using 100 GPUs was 52 minutes, which is fairly short, especially considering that this is the performance for a realistic and complex model.
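    The two-direction decomposition reduces to each GPU updating its own block and swapping one-plane-deep halos with its neighbours every time step. A serial numpy sketch of that pattern (the scalar second-order wave stencil and array layout are stand-ins for GMS's actual discontinuous-grid FDM kernels):

```python
import numpy as np

def step(u_now, u_prev, r2):
    """u_next = 2*u_now - u_prev + r2*lap(u_now), with r2 = (c*dt/h)**2.
    Arrays carry one ghost plane per face, filled by the exchange below."""
    lap = (-6.0 * u_now[1:-1, 1:-1, 1:-1]
           + u_now[2:, 1:-1, 1:-1] + u_now[:-2, 1:-1, 1:-1]
           + u_now[1:-1, 2:, 1:-1] + u_now[1:-1, :-2, 1:-1]
           + u_now[1:-1, 1:-1, 2:] + u_now[1:-1, 1:-1, :-2])
    u_next = u_now.copy()
    u_next[1:-1, 1:-1, 1:-1] = (2.0 * u_now[1:-1, 1:-1, 1:-1]
                                - u_prev[1:-1, 1:-1, 1:-1] + r2 * lap)
    return u_next

def exchange_x(left, right):
    """Stand-in for the MPI/GPU halo exchange between x-neighbour blocks."""
    left[-1] = right[1]    # left's right ghost <- right's first interior plane
    right[0] = left[-2]    # right's left ghost <- left's last interior plane

a, b = np.zeros((10, 10, 10)), np.zeros((10, 10, 10))
a[5, 5, 5] = 1.0                         # point source in the left block
exchange_x(a, b)
a = step(a, np.zeros_like(a), r2=0.25)
```

    Communication volume scales with block faces while work scales with block volume, which is consistent with the nearly linear weak-scaling curve reported out to a thousand GPUs.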

  15. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads (POSIX threads, where "POSIX" signifies a portable operating system for UNIX). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.

  16. A computational fluid dynamics simulation of a supersonic chemical oxygen-iodine laser

    NASA Astrophysics Data System (ADS)

    Waichman, K.; Rybalkin, V.; Katz, A.; Dahan, Z.; Barmashenko, B. D.; Rosenwaks, S.

    2007-05-01

    The dissociation of I2 molecules at the optical axis of a supersonic chemical oxygen-iodine laser (COIL) was studied via detailed measurements and three-dimensional computational fluid dynamics calculations. Comparing the measurements and the calculations enabled critical examination of previously proposed dissociation mechanisms and suggestion of a mechanism consistent with the experimental and theoretical results. The gain, I2 dissociation fraction and temperature at the optical axis, calculated using Heidner's model (R.F. Heidner III et al., J. Phys. Chem. 87, 2348 (1983)), are much lower than those measured experimentally. Agreement with the experimental results was reached by using Heidner's model supplemented by Azyazov-Heaven's model (V.N. Azyazov and M.C. Heaven, AIAA J. 44, 1593 (2006)), where I2(A') and vibrationally excited O2(a1Δ) are significant dissociation intermediates.

  17. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: Random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of Gaussian angular distribution requires an extremely large computation and memory. Thus, our RW model adopts spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides those variables such as MCS, stopping power, energy spectrum after energy absorption etc., which have been extensively discussed in literature, the following variables were found to be critical in our RW model: (1) inverse squared law that can significantly reduce the computation burden and memory, (2) non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded by a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
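    The gamma criterion used in those comparisons can be sketched in one dimension (global normalization; parameter names and the toy curves are assumptions):

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, x, dd=0.02, dta=2.0, cutoff=0.10):
    """1-D global gamma test, e.g. 2%/2 mm with a 10% low-dose cutoff.
    dose_eval, dose_ref : doses sampled on common positions x (mm)."""
    d_max = dose_ref.max()
    keep = np.where(dose_ref >= cutoff * d_max)[0]    # ignore low-dose points
    gam = np.empty(keep.size)
    for n, i in enumerate(keep):
        dd_term = (dose_eval - dose_ref[i]) / (dd * d_max)
        dx_term = (x - x[i]) / dta
        gam[n] = np.sqrt(dx_term ** 2 + dd_term ** 2).min()
    return float((gam <= 1.0).mean())

x = np.linspace(0.0, 100.0, 401)                      # depth, mm
ref = np.exp(-((x - 50.0) / 10.0) ** 2)               # toy dose curve
ev = np.exp(-((x - 50.5) / 10.0) ** 2)                # slightly shifted
print(gamma_pass_rate(ev, ref, x))                    # -> 1.0
```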

  18. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers are considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  19. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner bases (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
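    A minimal illustration of a Groebner-basis computation in a Boolean ring using sympy (the toy network polynomials are invented; the key ingredients are working modulo 2 and adjoining the idempotency relations v² + v, which is what keeps intermediate degrees from growing):

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
# Toy steady-state conditions of a Boolean network, as polynomials over GF(2)
system = [x * y + z, y + z + 1]
# Idempotency relations force every variable to be Boolean (v^2 = v)
boolean_ring = [v ** 2 + v for v in (x, y, z)]
G = groebner(system + boolean_ring, x, y, z, modulus=2, order='lex')
print(G)   # common zeros of the basis are the network's fixed points
```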

  20. HyPEP FY06 Report: Models and Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOE report

    2006-09-01

    The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts and its cost models will enable HyPEP to be well suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, the methods used in this study, and the models and computational strategies developed for the first-year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results in the following years.

  1. Predicting solar radiation based on available weather indicators

    NASA Astrophysics Data System (ADS)

    Sauer, Frank Joseph

    Solar radiation prediction models are complex and require software that is not available to the household investor, yet the processing power of a normal desktop or laptop computer is sufficient to calculate comparable models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, location restrictions, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and hidden Markov models (HMMs). Clustering helps limit the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly more computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough to be simply understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure. The high temperature and the sky coverage are already available through the local or preferred source of weather information. By using the next day's prediction for high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation where no other universal model exists for the average household.
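    The cluster-then-HMM pipeline can be sketched with a plain forward pass (every probability below is an invented toy number; real tables would be estimated from local weather history):

```python
import numpy as np

def most_likely_bin(pi, A, B, obs):
    """HMM forward algorithm over discrete observations; returns the most
    probable hidden radiation bin after the observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()              # normalize to avoid underflow
    return int(np.argmax(alpha))

# 3 hidden radiation bins (low/mid/high), 4 observable weather clusters
pi = np.array([0.4, 0.4, 0.2])            # initial bin probabilities
A = np.array([[0.7, 0.2, 0.1],            # day-to-day bin transitions
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.5, 0.3, 0.1, 0.1],       # P(cluster | bin)
              [0.2, 0.4, 0.3, 0.1],
              [0.1, 0.1, 0.3, 0.5]])
# cluster labels from (high temperature, sky coverage) for three days
print(most_likely_bin(pi, A, B, obs=[0, 1, 3]))
```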

  2. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
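    The spectral derivative at the heart of the scheme reduces to multiplication by ik in Fourier space; a 1-D sketch:

```python
import numpy as np

def spectral_dx(f, dx):
    """First spatial derivative of a periodic signal via the FFT."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.shape[0], d=dx)   # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
err = np.abs(spectral_dx(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(f"max error: {err:.1e}")    # near machine precision: spectral accuracy
```

    In three dimensions the same multiplication is applied along each axis of a 3-D FFT, which is the all-to-all communication bottleneck the authors describe.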

  3. Early differentiation of the Moon: Experimental and modeling studies

    NASA Technical Reports Server (NTRS)

    Longhi, J.

    1986-01-01

    Major accomplishments include the mapping out of liquidus boundaries of lunar and meteoritic basalts at low pressure; the refinement of computer models that simulate low pressure fractional crystallization; the development of a computer model to calculate high pressure partial melting of the lunar and Martian interiors; and the proposal of a hypothesis of early lunar differentiation based upon terrestrial analogs.

  4. Simulation and optimization of a dc SQUID with finite capacitance

    NASA Astrophysics Data System (ADS)

    de Waal, V. J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with calculations of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now, noise calculations for dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As these capacitances limit the performance of the SQUID, a good optimization must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit, and we implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for inductance-temperature products of LT = 1.2 and 5 nH K. Within a range of β and βc between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  5. A computer program for calculating relative-transmissivity input arrays to aid model calibration

    USGS Publications Warehouse

    Weiss, Emanuel

    1982-01-01

    A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
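    A hedged sketch of such a relative-transmissivity calculation (the viscosity fit and the ρg constant are generic textbook approximations, not the documented program's own coefficients, and the overburden-pressure dependence is omitted):

```python
import numpy as np

RHO_G = 9.8e3   # rho*g for water, Pa/m (approximate)

def viscosity_water(temp_c):
    """Rough empirical water viscosity in Pa*s (illustrative fit only)."""
    return 2.414e-5 * 10.0 ** (247.8 / (temp_c + 133.15))

def transmissivity(permeability, thickness, temp_c):
    """T = k * (rho*g / mu) * b, evaluated per model cell."""
    return permeability * RHO_G / viscosity_water(temp_c) * thickness

k = np.full((2, 2), 1e-12)                      # permeability, m^2
b = np.array([[10.0, 12.0], [8.0, 15.0]])       # aquifer thickness, m
t = np.array([[15.0, 25.0], [35.0, 45.0]])      # temperature, deg C
print(transmissivity(k, b, t))                  # m^2/s, one value per cell
```

    Warmer, less viscous water raises the effective transmissivity of a cell even when permeability and thickness are unchanged, which is the physical trend the program is meant to capture.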

  6. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core CPU, multi-core CPU, and graphical processing unit (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. The large high-resolution data grids in our studies employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on-the-fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is that the observation point cannot be located directly above a line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired, and can be used for fast forward calculations of 3D geologic interpretations for data from airborne, space, and submarine gravity and FTG instrumentation.
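
    The line-element approximation itself has a simple closed form. In the hedged sketch below, each grid cell is collapsed to a vertical line mass whose vertical attraction is evaluated at a station; the FTG components, the clipmap caching, and the paper's exact formulation are not reproduced.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gz_line(x, y, rho, dx, dy, z_top, z_bot, station):
    """Vertical gravity (m/s^2) at `station` from one vertical line element.

    The cell (dx by dy, density rho) is collapsed to a line of linear mass
    density rho*dx*dy running from z_top down to z_bot (z positive up). The
    closed form is singular if the station sits on the line itself, hence
    the location test mentioned in the abstract.
    """
    sx, sy, sz = station
    r2 = (x - sx) ** 2 + (y - sy) ** 2
    lam = rho * dx * dy
    return G * lam * (1.0 / np.sqrt(r2 + (z_top - sz) ** 2)
                      - 1.0 / np.sqrt(r2 + (z_bot - sz) ** 2))

# One station, one 10 m x 10 m column of rock from 1 m to 100 m depth:
print(gz_line(0.0, 0.0, 2670.0, 10.0, 10.0, -1.0, -100.0, (50.0, 0.0, 0.0)))
```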

  7. TERRA: a computer code for simulating the transport of environmentally released radionuclides through agriculture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.

    1984-11-01

    TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (>100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.
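
    As one concrete illustration of the intermediate steps named above, the sketch below computes a Chamberlain-type interception fraction, f = 1 - exp(-μB), from standing crop biomass B; whether TERRA uses exactly this form, and the constants used, are assumptions on our part.

```python
import math

# f = 1 - exp(-mu * B): fraction of depositing material intercepted by
# vegetation with standing biomass B (kg/m^2, dry). mu is an empirical
# foliage constant; both values here are illustrative.
def interception_fraction(biomass_kg_m2, mu=2.8):
    return 1.0 - math.exp(-mu * biomass_kg_m2)

print(interception_fraction(0.5))  # pasture-like biomass -> ~0.75
```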

  8. A comprehensive analytical model of rotorcraft aerodynamics and dynamics. Part 2: User's manual

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1980-01-01

    The use of a computer program for a comprehensive analytical model of rotorcraft aerodynamics and dynamics is described. The program calculates the loads and motion of helicopter rotors and airframe. First the trim solution is obtained, then the flutter, flight dynamics, and/or transient behavior can be calculated. Either a new job can be initiated or further calculations can be performed for an old job.

  9. Predicting the Shifts of Absorption Maxima of Azulene Derivatives Using Molecular Modeling and ZINDO CI Calculations of UV-Vis Spectra

    ERIC Educational Resources Information Center

    Patalinghug, Wyona C.; Chang, Maharlika; Solis, Joanne

    2007-01-01

    The deep blue color of azulene is drastically changed by the addition of substituents such as CH[subscript 3], F, or CHO. Computational semiempirical methods using ZINDO CI are used to model azulene and azulene derivatives and to calculate their UV-vis spectra. The calculated spectra are used to show the trends in absorption band shifts upon…

  10. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description covers composition ranges typical for coating alloys and hence allows prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.

  11. Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.

    PubMed

    Henry, R; Tiselj, I; Snoj, L

    2015-03-01

    A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations. Copyright © 2014 Elsevier Ltd. All rights reserved.
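
    For readers unfamiliar with the unit, the 100 pcm figure is a difference expressed in multiples of 10^-5 of keff; a minimal sketch of the conversion, assuming the common definition:

```python
# Minimal sketch: keff differences expressed in pcm (1 pcm = 1e-5 of k).
def diff_pcm(k_a, k_b):
    return (k_a - k_b) * 1.0e5

print(diff_pcm(1.00100, 1.00000))  # -> 100.0 pcm
```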

  12. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution; this failure to converge was the result of using variable time-stepping with large-scale separations present in the flow. Calculations obtained using constant time-stepping (time-accurate) show smaller variations in flow properties than the steady-state solutions. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  13. Thermal/structural modeling of a large scale in situ overtest experiment for defense high level waste at the Waste Isolation Pilot Plant Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.; Stone, C.M.; Krieg, R.D.

    Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described, and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.

  14. Quantum-assisted biomolecular modelling.

    PubMed

    Harris, Sarah A; Kendon, Vivien M

    2010-08-13

    Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to evaluate fully. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation, and speculate on the future importance of quantum-assisted biomolecular modelling.

  15. Management Sciences Division Annual Report (10th)

    DTIC Science & Technology

    1993-01-01

    of the Weapon System Management Information System (WSMIS). The Aircraft Sustainability Model (ASM) is the computational technique employed by... provisioning. We enhanced the capabilities of RBIRD by using the Aircraft Sustainability Model (ASM) for the spares calculation. ASM offers many... ASM for several years to compute spares for war. It is also fully compatible with the Air Force's peacetime spares computation system (D041). This

  16. Thermophysics Characterization of Multiply Ionized Air Plasma Absorption of Laser Radiation

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Rhodes, Robert; Turner, Jim (Technical Monitor)

    2002-01-01

    The impact of multiple ionization of air plasma on the inverse Bremsstrahlung absorption of laser radiation is investigated for air-breathing laser propulsion. Thermochemical properties of multiply ionized air plasma species are computed for temperatures up to 200,000 deg K using a hydrogenic approximation of the electronic partition function, and those for neutral air molecules are also updated for temperatures up to 50,000 deg K using available literature data. Three formulas for absorption are calculated and a general formula is recommended for multiple-ionization absorption calculations. The plasma composition required for the absorption calculation is obtained by increasing the degree of ionization sequentially, up to quadruple ionization, with a series of thermal equilibrium computations. The calculated second-ionization absorption coefficient agrees reasonably well with available data. The importance of multiple-ionization modeling is demonstrated by the finding that the area under the quadruple-ionization absorption curve is twice that of single ionization. This work benefits the computational plasma aerodynamics modeling of laser lightcraft performance.

  17. GPU-based Green's function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models.

    PubMed

    Yang, Yiqun; Urban, Matthew W; McGough, Robert J

    2018-05-15

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.
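
    A heavily simplified, far-field sketch of the Green's-function superposition idea is given below: displacement as a sum of retarded, geometrically spread contributions from sampled push locations. This is our illustration only; the paper's elastic and viscoelastic Green's functions and GPU kernels are far more involved, and all constants here are invented.

```python
import numpy as np

def shear_response(obs, sources, amps, t, c_s=2.0, mu=3.0e3):
    """Sum retarded, 1/r-spread contributions at one observation point."""
    u = np.zeros_like(t)
    for src, a in zip(sources, amps):
        r = np.linalg.norm(obs - src)
        tau = t - r / c_s                                  # retarded time (ms)
        u += a * np.exp(-0.5 * ((tau - 0.1) / 0.05) ** 2) / (4.0 * np.pi * mu * r)
    return u

t = np.linspace(0.0, 5.0, 500)                             # ms
push = np.array([[0.0, 0.0, z] for z in np.linspace(-2.0, 2.0, 21)])  # mm
u = shear_response(np.array([5.0, 0.0, 0.0]), push, np.ones(21), t)
```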

  18. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    NASA Astrophysics Data System (ADS)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated numerically on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation method based on the sparsity of the CGH of a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using a sparse FFT (sFFT). We observe that the CGH of one layer of a 3D object is sparse, so the dominant CGH is rapidly generated from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
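
    A minimal sketch of the layer-based idea, using standard angular-spectrum propagation with NumPy's dense FFT in place of the paper's sparse FFT (sFFT); grid size, pixel pitch, and wavelength are illustrative.

```python
import numpy as np

def propagate_layer(layer, z, wavelength, dx):
    """Angular-spectrum propagation of one depth layer to the CGH plane."""
    n = layer.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # crude evanescent cut-off
    return np.fft.ifft2(np.fft.fft2(layer) * np.exp(1j * kz * z))

# CGH as the sum of all propagated depth layers of a (sparse) point cloud:
n, dx, wl = 256, 8e-6, 532e-9
layers = {0.05: np.zeros((n, n), complex), 0.06: np.zeros((n, n), complex)}
layers[0.05][128, 128] = 1.0
layers[0.06][100, 150] = 1.0
cgh = sum(propagate_layer(l, z, wl, dx) for z, l in layers.items())
```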

  19. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
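
    The sketch below illustrates a row-wise domain decomposition for a purely local step (slope), including the ghost-row exchange that the global algorithms also rely on; it assumes mpi4py and invented array sizes, and does not reproduce the paper's workflow.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Local strip of the DEM plus one ghost row on each side (sizes invented).
dem = np.random.rand(1024 // size + 2, 1024)

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL
# Exchange ghost rows so edge cells see their true neighbours.
comm.Sendrecv(dem[1], dest=up, recvbuf=dem[-1], source=down)
comm.Sendrecv(dem[-2], dest=down, recvbuf=dem[0], source=up)

gy, gx = np.gradient(dem[1:-1])                  # slope is a purely local step
slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
```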

  20. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strenge, D.L.; Peloquin, R.A.

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
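
    A hedged sketch of such a dose pipeline: an atmospheric χ/Q, an acute release, and an inhalation dose conversion factor. The Hanford dispersion model and HADOC's data library are not reproduced; every constant below is illustrative.

```python
import math

def chi_over_q(x_m, u=3.0):
    """Centerline, ground-level chi/Q (s/m^3) for a Gaussian-plume ground release."""
    sig_y, sig_z = 0.08 * x_m, 0.06 * x_m   # crude neutral-stability spreads
    return 1.0 / (math.pi * sig_y * sig_z * u)

release_bq = 1.0e9          # acute release
dcf_sv_per_bq = 2.0e-8      # nuclide-specific inhalation factor (library value)
breathing_m3_s = 3.3e-4     # adult breathing rate

time_integrated_conc = chi_over_q(1000.0) * release_bq  # Bq*s/m^3 at 1 km
dose_sv = time_integrated_conc * breathing_m3_s * dcf_sv_per_bq
```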

  1. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456
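
    A hedged sketch of how a local dielectric constant can screen a Coulomb term, with ε estimated additively from nearby atomic polarizabilities; the mixing rule, cutoff, and constants below are our assumptions, not protCAD's.

```python
import numpy as np

COULOMB_K = 332.0636  # kcal*angstrom/(mol*e^2)

def local_epsilon(point, atoms, alphas, cutoff=8.0):
    """Toy additive estimate of the dielectric around `point`."""
    d = np.linalg.norm(atoms - point, axis=1)
    return 1.0 + alphas[d < cutoff].sum()

def screened_coulomb(qi, qj, ri, rj, atoms, alphas):
    eps = local_epsilon(0.5 * (ri + rj), atoms, alphas)
    return COULOMB_K * qi * qj / (eps * np.linalg.norm(ri - rj))

atoms = np.random.rand(50, 3) * 20.0   # invented coordinates (angstrom)
alphas = np.full(50, 0.05)             # invented per-atom polarizability weights
print(screened_coulomb(0.5, -0.5, atoms[0], atoms[1], atoms, alphas))
```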

  2. The web system for operative description of air quality in the city

    NASA Astrophysics Data System (ADS)

    Barth, A. A.; Starchenko, A. V.; Fazliev, A. Z.

    2009-04-01

    Development and implementation of an information-computational system (ICS) is described. The system is oriented toward collective use of computing facilities in order to determine air quality on the basis of a photochemical model. The ICS has been implemented on the basis of the middleware of the ATMOS web portal [1, 2]. The data and calculation layer of this ICS includes:
    - A mathematical model of pollution transport based on transport differential equations, which describes propagation, scattering and chemical transformation of pollutants in the atmosphere [3]. The model may use averaged data values for the city or forecast results obtained with the CHASER model [4].
    - An atmospheric boundary layer model (ABLM) [3], used for operative numerical prediction of the meteorological parameters (speed and direction of the wind, humidity and temperature of the air) that the impurity transport model needs in order to operate. The model may use assimilated meteorological measurement data (including land-based observations and the results of remote sensing of the vertical structure of the atmosphere) or weather forecast results obtained with the semi-Lagrangian model [5].
    - Applications for manipulation of data: an application for downloading parameters of the atmospheric surface layer and remote sensing of the vertical structure of the atmosphere from the web sites http://meteo.infospace.ru and http://weather.uwyo.edu; an application for uploading these data into the ICS database; and an application for transforming the uploaded data into the internal data format of the system.
    At present this ICS is a part of the "Climate" web site located in the ATMOS portal [6]. The database is based on the data schemes providing the calculation in the ICS workflow. The applications manipulating the data work in automatic regime. The workflow oriented on computation of physical parameters contains: an application for the calculation of geostrophic wind components on the basis of the Ekman equations; applications for the solution of the equations derived from the ABL and impurity transport models; and an application for the representation of calculation results in tabular and graphical forms. The "Cyberia" cluster [7] located at Tomsk State University is used for computation of the impurity transport equations.
    References:
    [1] Gordov E.P., Lykosov V.N., Fazliev A.Z. Web portal on environmental sciences "ATMOS" // Advances in Geosciences, 2006, v. 8, p. 33-38.
    [2] ATMOS web portal: http://atmos.iao.ru/middleware/
    [3] Belikov D.A., Starchenko A.V. Numerical investigation of secondary air pollution formation near an industrial center // Computational Technologies, 2005, v. 10, special issue (Proceedings of CITES 2005, Tomsk, 13-23 March 2005), part 2, p. 99-105.
    [4] Sudo K., Takahashi M., Kurokawa J., Akimoto H. CHASER: A global chemical model of the troposphere. Model description // J. Geophys. Res., 2002, v. 107(D17), p. 4339.
    [5] Tolstykh M.A., Fadeev R.Y. Semi-Lagrangian variable-resolution weather prediction model and its further development // Computational Technologies, 2006, v. 11, special issue, p. 176-184.
    [6] ATMOS "Climate" web site: http://climate.atmos.math.tsu.ru/
    [7] Tomsk State University, Interregional computational center: http://skif.tsu.ru

  3. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on an example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.

  4. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques. These include graded meshes, which made the computation 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, resulting in more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer analysis of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.

    1997-06-01

    A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to a Eulerian grid and then mapping the computed stress tensors back to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of types, sizes, and densities of particles, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.
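
    The particle-to-grid mapping at the heart of the method can be sketched in one dimension: particle volumes are deposited on a grid with linear weights, a grid stress is evaluated, and its gradient is interpolated back to the particles. The toy stress closure below is ours, not the MP-PIC model's.

```python
import numpy as np

def particle_to_grid(xp, vol_p, nx, dx):
    """Deposit particle volume on a 1-D grid with linear (tent) weights."""
    theta = np.zeros(nx)
    i = np.clip((xp / dx).astype(int), 0, nx - 2)
    w = xp / dx - i                      # weight toward the right-hand node
    np.add.at(theta, i, (1.0 - w) * vol_p / dx)
    np.add.at(theta, i + 1, w * vol_p / dx)
    return theta, i, w

nx, dx = 64, 1.0 / 64
xp = np.random.rand(10000)               # particle positions in [0, 1)
theta, i, w = particle_to_grid(xp, np.full(10000, 5e-5), nx, dx)

tau = theta**2 / np.maximum(0.6 - theta, 1e-3)   # toy particle-stress closure
grad_tau = np.gradient(tau, dx)
accel_p = -((1.0 - w) * grad_tau[i] + w * grad_tau[i + 1])  # back to particles
```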

  6. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. With a little extra computational cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, unlike other migration methods. The ratio of these amplitudes is helpful in computing quantities such as reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, an upwind scheme is used to calculate travel times, and a Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to reduce computational cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
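
    The upwind travel-time step can be illustrated with a first-order fast-sweeping eikonal solver, a standard stand-in for the finite-difference travel-time calculation described above (the WKBJ amplitude and Crank-Nicolson steps are omitted, and the grid and velocity are invented).

```python
import numpy as np

def sweep_traveltimes(v, src, h=1.0, n_sweeps=8):
    """First-arrival travel times on a 2-D grid by Gauss-Seidel sweeping."""
    T = np.full(v.shape, np.inf)
    T[src] = 0.0
    s = 1.0 / v                          # slowness
    ny, nx = v.shape
    orders = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    for k in range(n_sweeps):
        di, dj = orders[k % 4]
        for i in range(ny)[::di]:
            for j in range(nx)[::dj]:
                a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                if a > b:
                    a, b = b, a          # a is the smaller upwind neighbour
                if np.isinf(a):
                    continue
                if b - a >= s[i, j] * h:
                    t = a + s[i, j] * h  # one-sided (upwind) update
                else:
                    t = 0.5 * (a + b + np.sqrt(2.0 * (s[i, j] * h) ** 2 - (a - b) ** 2))
                T[i, j] = min(T[i, j], t)
    return T

T = sweep_traveltimes(np.full((64, 64), 2.0), (32, 32))  # constant 2 km/s medium
```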

  7. Point charge representation of multicenter multipole moments in calculation of electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1993-01-01

    Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
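
    The defining constraint of such a model, that the point charges reproduce the molecular multipole moments, can be sketched as a small linear fit; the geometry and target moments below are invented for illustration, not fitted CO values.

```python
import numpy as np

# Charges at atomic sites plus one bond-midpoint site (angstrom).
pos = np.array([[0.0, 0.0, 0.0],     # "C"
                [1.128, 0.0, 0.0],   # "O"
                [0.564, 0.0, 0.0]])  # bond midpoint
target_q = 0.0                       # total charge (e), invented
target_mu = np.array([0.06, 0.0, 0.0])  # dipole (e*angstrom), invented

# Constraint rows: sum(q) = Q, sum(q_i * r_i) = mu.
A = np.vstack([np.ones(len(pos)), pos.T])
b = np.concatenate([[target_q], target_mu])
q, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm exact solution here
print(q, A @ q - b)                        # residual ~ 0: moments reproduced
```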

  8. PROGRAM VSAERO: A computer program for calculating the non-linear aerodynamic characteristics of arbitrary configurations: User's manual

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1982-01-01

    VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.

  9. Transient Solid Dynamics Simulations on the Sandia/Intel Teraflop Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attaway, S.; Brown, K.; Gardner, D.

    1997-12-31

    Transient solid dynamics simulations are among the most widely used engineering calculations. Industrial applications include vehicle crashworthiness studies, metal forging, and powder compaction prior to sintering. These calculations are also critical to defense applications including safety studies and weapons simulations. The practical importance of these calculations and their computational intensiveness make them natural candidates for parallelization. This has proved to be difficult, and existing implementations fail to scale to more than a few dozen processors. In this paper we describe our parallelization of PRONTO, Sandia's transient solid dynamics code, via a novel algorithmic approach that utilizes multiple decompositions for different key segments of the computations, including the material contact calculation. This latter calculation is notoriously difficult to perform well in parallel, because it involves dynamically changing geometry, global searches for elements in contact, and unstructured communications among the compute nodes. Our approach scales to at least 3600 compute nodes of the Sandia/Intel Teraflop computer (the largest set of nodes to which we have had access to date) on problems involving millions of finite elements. On this machine we can simulate models using more than ten million elements in a few tenths of a second per timestep, and solve problems more than 3000 times faster than a single-processor Cray Jedi.

  10. K-TIF: a two-fluid computer program for downcomer flow dynamics. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amsden, A.A.; Harlow, F.H.

    1977-10-01

    The K-TIF computer program has been developed for numerical solution of the time-varying dynamics of steam and water in a pressurized water reactor downcomer. The current status of physical and mathematical modeling is presented in detail. The report also contains a complete description of the numerical solution technique, a full description and listing of the computer program, instructions for its use, with a sample printout for a specific test problem. A series of calculations, performed with no change in the modeling parameters, shows consistent agreement with the experimental trends over a wide range of conditions, which gives confidence to the calculations as a basis for investigating the complicated physics of steam-water flows in the downcomer.

  11. Standardized input for Hanford environmental impact statements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.

    1981-05-01

    Models and computer programs for simulating the environmental behavior of radionuclides and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.

  12. Young’s modulus calculations for cellulose Iß by MM3 and quantum mechanics

    USDA-ARS?s Scientific Manuscript database

    Quantum mechanics (QM) and molecular mechanics (MM) calculations were performed to elucidate Young’s moduli for a series of cellulose Iß models. Computations using the second generation empirical force field MM3 with a disaccharide cellulose model, 1,4'-O-dimethyl-ß-cellobioside (DMCB), and an analo...

  13. Estimating the weight of Douglas-fir tree boles and logs with an iterative computer model.

    Treesearch

Dale R. Waddell; Dale L. Weyermann; Michael B. Lambert

    1987-01-01

    A computer model that estimates the green weights of standing trees was developed and validated for old-growth Douglas-fir. The model calculates the green weight for the entire bole, for the bole to any merchantable top, and for any log length within the bole. The model was validated by estimating the bias and accuracy of an independent subsample selected from the...

  14. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

    Finite element models for estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are the two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm, in which the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole brain mesh is iteratively calculated by geometric extension of a local load vector computed by FED. While the processing time of the FED-based method, including registration and finite element analysis, was about 70 s, the computation time of CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  16. The Application of COMSOL Multiphysics Package on the Modelling of Complex 3-D Lithospheric Electrical Resistivity Structures - A Case Study from the Proterozoic Orogenic belt within the North China Craton

    NASA Astrophysics Data System (ADS)

    Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.

    2017-12-01

    At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured mesh gridding cannot be well adapted to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate in calculating complex and irregular 3-D regions and places lower requirements on function smoothness. However, the complexity of mesh gridding and limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics full-coupling simulation package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures. Using an adaptive unstructured grid, the calculation is much faster. In order to improve the discretization of the computing area, we use the combination of Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward-modelling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution compared with the previously published 3-D model based on the finite difference method. In conclusion, the COMSOL Multiphysics package is suitable for modeling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds and could be a good complement to existing finite-difference inversion algorithms.

  17. Generalization techniques to reduce the number of volume elements for terrain effect calculations in fully analytical gravitational modelling

    NASA Astrophysics Data System (ADS)

    Benedek, Judit; Papp, Gábor; Kalmár, János

    2018-04-01

    Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs about twice the runtime of the rectangular prism computations. Although the "more detailed, the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field, on a significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes "less" can be equivalent to "more" in the statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it works also for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. In favourable situations the efficiency of the static approaches may provide more than a 90% reduction in computation time without loss of reliability of the calculated gravity field parameters.

  18. A coolant flow simulation in fast reactor wire-wrapped assembly

    NASA Astrophysics Data System (ADS)

    Volkov, V. Yu.; Belova, O. V.; Krutikov, A. A.; Skibin, A. P.

    2013-06-01

    A CFD model of a 19-rod wire-wrapped fuel assembly is developed. The effects that the size of the computational mesh and the type of turbulence model have on the pressure drop between the inlet and outlet of the computational region are investigated. The possibility of shifting from low-Reynolds to high-Reynolds turbulence models is substantiated; such a shift allows the mesh size in the computational region to be reduced by approximately a factor of 18. The obtained results are in good agreement with empirical dependences and international calculations.

  19. Computational model for noncontact atomic force microscopy: energy dissipation of cantilever.

    PubMed

    Senda, Yasuhiro; Blomqvist, Janne; Nieminen, Risto M

    2016-09-21

    We propose a computational model for noncontact atomic force microscopy (AFM) in which the atomic force between the cantilever tip and the surface is calculated using a molecular dynamics method, and the macroscopic motion of the cantilever is modeled by an oscillating spring. The movement of atoms in the tip and surface is connected with the oscillating spring using a recently developed coupling method. In this computational model, the oscillation energy is dissipated, as observed in AFM experiments. We attribute this dissipation to the hysteresis and nonconservative properties of the interatomic force that acts between the atoms in the tip and sample surface. The dissipation rate strongly depends on the parameters used in the computational model.
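
    A dimensionless sketch of the coupled system: the cantilever as a driven, damped spring, with the molecular-dynamics tip-sample force replaced by a Lennard-Jones-style closed form purely for illustration; all parameters are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q, z0, drive_amp = 300.0, 3.0, 0.005   # quality factor, rest gap, drive level

def tip_force(z):
    """Lennard-Jones-style tip-sample force (dimensionless stand-in for MD)."""
    return 12.0 / z**13 - 12.0 / z**7

def rhs(t, y):
    z, v = y
    # Unit-mass spring of unit stiffness, driven at its natural frequency.
    return [v, -(z - z0) - v / Q + tip_force(z) + drive_amp * np.cos(t)]

sol = solve_ivp(rhs, (0.0, 400.0), [z0, 0.0], max_step=0.05)
# Dissipation appears as net work done by tip_force over an oscillation cycle.
```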

  20. A new computational approach to simulate pattern formation in Paenibacillus dendritiformis bacterial colonies

    NASA Astrophysics Data System (ADS)

    Tucker, Laura Jane

    Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis in agar plates forms two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach to solving a continuum chiral model was not known. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.

  1. Desktop computer graphics for RMS/payload handling flight design

    NASA Technical Reports Server (NTRS)

    Homan, D. J.

    1984-01-01

    A computer program, the Multi-Adaptive Drawings, Renderings and Similitudes (MADRAS) program, is discussed. The modeling program, developed for a desktop computer system (the Hewlett-Packard 9845/C), is written in BASIC and uses modular construction of objects while generating both wire-frame and hidden-line drawings from any viewpoint. The dimensions and placement of objects are user-definable. Once the hidden-line calculations are made for a particular viewpoint, the viewpoint may be rotated in pan, tilt, and roll without further hidden-line calculations. The use and results of this program are discussed.

  2. Modeling of Pressure Drop During Refrigerant Condensation in Pipe Minichannels

    NASA Astrophysics Data System (ADS)

    Sikora, Małgorzata; Bohdal, Tadeusz

    2017-12-01

    Investigation of refrigerant condensation in pipe minichannels is a very challenging and complicated issue. Because of the multitude of influencing factors, mathematical and computer modeling is very important: it allows calculations to be performed for many different refrigerants under different flow conditions. The large number of experimental results published in the literature allows experimental verification of the correctness of such models. In this work a mathematical model for the calculation of flow resistance during condensation of refrigerants in a pipe minichannel is presented. The model was developed on the basis of conservation equations. The results of the calculations were verified against the authors' own experimental results.

  3. Computer Model for Sizing Rapid Transit Tunnel Diameters

    DOT National Transportation Integrated Search

    1976-01-01

    A computer program was developed to assist the determination of minimum tunnel diameters for electrified rapid transit systems. Inputs include vehicle shape, walkway location, clearances, and track geometrics. The program written in FORTRAN IV calcul...

  4. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W

    2008-01-01

    Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers therefore tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density through the adiabatic invariants, including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly by the long computing time of accurate L* values; without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles worth of data suddenly becomes feasible.
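
    The surrogate idea itself is compact: train a small network to map readily available inputs to the expensively integrated L* value, then evaluate it in microseconds. The feature set, architecture, and synthetic training target below are illustrative, not those of LANL*.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 4))           # e.g. scaled position, Kp, Dst, time
y = 4.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1]   # stand-in for slow L* integrations

surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000).fit(X, y)
l_star_fast = surrogate.predict(X[:10])   # evaluated in microseconds per point
```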

  5. Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.

    PubMed

    Allen, Edward J

    2014-06-01

    Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
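
    As an example of such an approximation, the sketch below applies an Euler-Maruyama scheme to a stochastic Hutchinson-type delay equation; the coefficients and parameters are illustrative, not those derived in the paper.

```python
import numpy as np

# dx = r*x(t)*(1 - x(t - tau)/K) dt + sigma*x(t) dW: a stochastic Hutchinson
# equation with constant history, stepped with Euler-Maruyama.
r, K, tau, sigma = 1.0, 10.0, 1.0, 0.2
dt, T = 0.01, 20.0
n, lag = int(T / dt), int(tau / dt)

rng = np.random.default_rng(1)
x = np.full(n + lag + 1, 5.0)             # x(t) = 5 on [-tau, 0] (history)
for i in range(lag, n + lag):
    drift = r * x[i] * (1.0 - x[i - lag] / K)
    x[i + 1] = x[i] + drift * dt + sigma * x[i] * np.sqrt(dt) * rng.standard_normal()
```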

  6. QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas

    2008-10-01

    Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt et al., Angew. Chem. Int. Ed. 47, 924 (2008); C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone. Principles, Applications and Supporting Communities (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de

  7. A STUDY OF PREDICTED BONE MARROW DISTRIBUTION ON CALCULATED MARROW DOSE FROM EXTERNAL RADIATION EXPOSURES USING TWO SETS OF IMAGE DATA FOR THE SAME INDIVIDUAL

    PubMed Central

    Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George

    2010-01-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment, as compared to the mass in VIP-Man, by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body. PMID:19430219
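
    The two distribution methods compare as follows in a toy calculation; the segment masses and ICRP-style cellularity factors below are invented for illustration and are not the paper's values.

```python
# Method 1: one homogeneous red/yellow mix everywhere.
# Method 2: per-segment ICRP-style cellularity factors.
total_marrow = {"skull": 150.0, "ribs": 200.0, "femora": 250.0}  # grams (invented)
cellularity = {"skull": 0.38, "ribs": 0.70, "femora": 0.25}      # red fraction (invented)
homogeneous_fraction = 0.5

red_homog = {b: m * homogeneous_fraction for b, m in total_marrow.items()}
red_icrp = {b: m * cellularity[b] for b, m in total_marrow.items()}
print(red_homog)  # {'skull': 75.0, 'ribs': 100.0, 'femora': 125.0}
print(red_icrp)   # {'skull': 57.0, 'ribs': 140.0, 'femora': 62.5}
```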

  8. A study of predicted bone marrow distribution on calculated marrow dose from external radiation exposures using two sets of image data for the same individual.

    PubMed

    Caracappa, Peter F; Chao, T C Ephraim; Xu, X George

    2009-06-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body.

  9. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy-related modeling. A boron neutron capture therapy model was analyzed, comparing COG calculated results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing the neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied to BNCT-related research problems.

  10. Workshop on Engineering Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A. (Editor); Liou, W. W. (Editor); Shabbir, A. (Editor); Shih, T.-H. (Editor)

    1992-01-01

    Discussed here is the future direction of various levels of engineering turbulence modeling related to computational fluid dynamics (CFD) computations for propulsion. For each level of computation, there are a few turbulence models which represent the state of the art for that level. However, it is important to know their capabilities as well as their deficiencies in order to help engineers select and implement the appropriate models in their real-world engineering calculations. This will also help turbulence modelers perceive the future directions for improving turbulence models. The focus is on one-point closure models (i.e., from algebraic models to higher-order moment closure schemes and partial differential equation methods) which can be applied to CFD computations. However, other schemes that are helpful in developing one-point closure models are also discussed.

  11. Modeling macro- and microstructures of Gas-Metal-Arc Welded HSLA-100 steel

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Debroy, T.

    1999-06-01

    Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, “finger” penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstätten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of the fundamental principles from both transport phenomena and phase transformation theory.

  12. NEURAL NETWORK MODELLING OF CARDIAC DOSE CONVERSION COEFFICIENT FOR ARBITRARY X-RAY SPECTRA.

    PubMed

    Kadri, O; Manai, K

    2016-12-01

    In this article, an approach to computing the dose conversion coefficients (DCCs) is described for the computational voxel phantom 'High-Definition Reference Korean-Man' (HDRK-Man) using artificial neural networks (ANN). For this purpose, the voxel phantom was implemented into the Monte Carlo (MC) transport toolkit GEANT4, and the DCCs for more than 30 tissues and organs, due to a broad parallel beam of monoenergetic photons with energy ranging from 15 to 150 keV in steps of 5 keV, were calculated. To study the influence of patient size on DCC values, the DCC calculation was performed, for a representative body-size population, using five different sizes covering the range of 80-120% magnification of the original HDRK-Man. The focus of the present study was on the computation of the DCC for the human heart. ANN calculations and MC simulation results were compared, and good agreement was observed, showing that ANNs can be used as an efficient tool for modelling DCCs for the computational voxel phantom. The ANN approach appears to be a significant advance over the time-consuming MC methods for DCC calculation.
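
    A sketch of the surrogate-model idea, assuming synthetic (photon energy, body scale) → DCC data in place of the GEANT4 results and a generic scikit-learn network rather than the authors' architecture:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Fit an ANN to (energy, scale) -> DCC pairs that would normally come from
    # Monte Carlo runs. The "DCC" values here are a synthetic smooth surface,
    # purely for illustration -- not GEANT4/HDRK-Man results.
    rng = np.random.default_rng(1)
    energy = rng.uniform(15, 150, 500)           # keV, the paper's energy range
    scale  = rng.uniform(0.8, 1.2, 500)          # 80-120 % body magnification
    X = np.column_stack([energy, scale])
    y = np.log(energy) / scale + 0.05 * rng.normal(size=500)  # fake DCC surface

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    model.fit(X, y)
    print("surrogate DCC at 60 keV, scale 1.0:", model.predict([[60.0, 1.0]])[0])
    ```

    Once trained, the surrogate replaces a fresh MC run per query, which is the source of the speedup the abstract reports.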

  13. CFD Models of a Serpentine Inlet, Fan, and Nozzle

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Arend, D. J.; Castner, R. S.; Slater, J. W.; Truax, P. P.

    2010-01-01

    Several computational fluid dynamics (CFD) codes were used to analyze the Versatile Integrated Inlet Propulsion Aerodynamics Rig (VIIPAR) located at NASA Glenn Research Center. The rig consists of a serpentine inlet, a rake assembly, inlet guide vanes, a 12-in. diameter tip-turbine driven fan stage, exit rakes or probes, and an exhaust nozzle with a translating centerbody. The analyses were done to develop computational capabilities for modeling inlet/fan interaction and to help interpret experimental data. Three-dimensional Reynolds averaged Navier-Stokes (RANS) calculations of the fan stage were used to predict the operating line of the stage, the effects of leakage from the turbine stream, and the effects of inlet guide vane (IGV) setting angle. Coupled axisymmetric calculations of a bellmouth, fan, and nozzle were used to develop techniques for coupling codes together and to investigate possible effects of the nozzle on the fan. RANS calculations of the serpentine inlet were coupled to Euler calculations of the fan to investigate the complete inlet/fan system. Computed wall static pressures along the inlet centerline agreed reasonably well with experimental data but computed total pressures at the aerodynamic interface plane (AIP) showed significant differences from the data. Inlet distortion was shown to reduce the fan corrected flow and pressure ratio, and was not completely eliminated by passage through the fan

  14. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    PubMed

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s per excitation source.

  15. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated from the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye tracking database and on one other database (DML-ITRACK-3D).
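
    The fusion step has a simple algebraic shape; a sketch with random placeholder maps and made-up uncertainty values standing in for the Gestalt-derived weights:

    ```python
    import numpy as np

    # Combine spatial and temporal saliency maps with uncertainty-derived
    # weights. Maps and uncertainties are placeholders; in the paper the
    # weights come from proximity/continuity/common-fate measures.
    rng = np.random.default_rng(8)
    s_spatial  = rng.random((48, 64))       # e.g., DCT feature-contrast map
    s_temporal = rng.random((48, 64))       # e.g., planar + depth motion contrast

    u_sp, u_tmp = 0.2, 0.5                  # uncertainty estimates (lower = trusted)
    w_sp = (1 / u_sp) / (1 / u_sp + 1 / u_tmp)   # inverse-uncertainty weighting
    s_final = w_sp * s_spatial + (1 - w_sp) * s_temporal
    s_final = (s_final - s_final.min()) / np.ptp(s_final)   # normalize to [0, 1]
    print("fused saliency map:", s_final.shape, "max =", s_final.max())
    ```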

  16. Thermal modeling of lesion growth with radiofrequency ablation devices

    PubMed Central

    Chang, Isaac A; Nguyen, Uyen D

    2004-01-01

    Background Temperature is a frequently used parameter to describe the predicted size of lesions computed by computational models. In many cases, however, temperature correlates poorly with lesion size. Although many studies have been conducted to characterize the relationship between time-temperature exposure of tissue heating and cell damage, to date these relationships have not been employed in a finite element model. Methods We present an axisymmetric two-dimensional finite element model that calculates cell damage in tissues and compare lesion sizes using common tissue damage and iso-temperature contour definitions. The model accounts for both temperature-dependent changes in the electrical conductivity of tissue as well as tissue damage-dependent changes in local tissue perfusion. The model is validated against data from excised porcine liver tissues. Results The data demonstrate that the size of thermal lesions is grossly overestimated when calculated using traditional temperature isocontours of 42°C and 47°C. The computational model results predicted lesion dimensions that were within 5% of the experimental measurements. Conclusion When modeling radiofrequency ablation problems, temperature isotherms may not be representative of actual tissue damage patterns. PMID:15298708
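
    The damage-integral alternative to isotherms is commonly formalized with the Arrhenius model, Ω(t) = ∫ A·exp(−Ea/(R·T(t))) dt, with tissue considered ablated when Ω ≥ 1. A sketch with generic liver-like parameters, not the authors' fitted values:

    ```python
    import numpy as np

    A, Ea, R = 7.39e39, 2.577e5, 8.314          # 1/s, J/mol, J/(mol K) -- generic liver-like
    t = np.linspace(0.0, 120.0, 1201)           # 2 minutes, 0.1 s steps
    T = 310.0 + 20.0 * (1 - np.exp(-t / 20.0))  # K: heating toward ~57 degC

    # cumulative Arrhenius damage integral
    omega = np.cumsum(A * np.exp(-Ea / (R * T)) * np.gradient(t))
    ablated = t[np.argmax(omega >= 1.0)] if omega[-1] >= 1.0 else None
    print("damage threshold reached at t =", ablated, "s")
    ```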

  17. Recent advances in QM/MM free energy calculations using reference potentials

    PubMed Central

    Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.

    2015-01-01

    Background Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480
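
    The reference-potential idea can be reduced to a one-line free energy perturbation from the cheap to the expensive surface, ΔA(low→high) = −kT·ln⟨exp(−(E_high − E_low)/kT)⟩_low. A toy sketch with harmonic stand-ins for the MM and QM/MM energies:

    ```python
    import numpy as np

    # Sample with a cheap low-level potential, then correct the free energy to
    # the expensive high-level surface via Zwanzig's perturbation formula.
    rng = np.random.default_rng(2)
    kT = 0.596  # kcal/mol at ~300 K

    x = rng.normal(0.0, np.sqrt(kT / 2.0), 100_000)   # Boltzmann samples of E_low = x^2
    E_low, E_high = x**2, 1.1 * x**2 + 0.2            # "high level" differs slightly

    dA = -kT * np.log(np.mean(np.exp(-(E_high - E_low) / kT)))
    print(f"FEP correction low->high: {dA:.3f} kcal/mol")
    ```

    For these harmonic toys the exact answer is 0.2 + (kT/2)·ln(1.1) ≈ 0.228 kcal/mol, so the estimate can be checked directly.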

  18. Determination of Scaled Wind Turbine Rotor Characteristics from Three Dimensional RANS Calculations

    NASA Astrophysics Data System (ADS)

    Burmester, S.; Gueydon, S.; Make, M.

    2016-09-01

    Previous studies have shown the importance of 3D effects when calculating the performance characteristics of a scaled-down turbine rotor [1-4]. In this paper the results of 3D RANS (Reynolds-Averaged Navier-Stokes) computations by Make and Vaz [1] are taken to calculate 2D lift and drag coefficients. These coefficients are assigned to FAST (a Blade Element Momentum Theory (BEMT) tool from NREL) as input parameters. Then, the rotor characteristics (power and thrust coefficients) are calculated using BEMT. This coupling of RANS and BEMT has previously been applied by other parties and is termed here the RANS-BEMT coupled approach. Here the approach is compared to measurements carried out in a wave basin at MARIN applying Froude-scaled wind, and to the direct 3D RANS computation. Data from both a model-scale and a full-scale wind turbine are used for the validation and verification. The flow around a turbine blade at full scale has a more 2D character than the flow around a turbine blade at model scale (Make and Vaz [1]). Since BEMT assumes 2D flow behaviour, the results of the RANS-BEMT coupled approach agree better with the results of the CFD (Computational Fluid Dynamics) simulation at full scale than at model scale.
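
    For orientation, the BEMT half of such a coupling is a short fixed-point iteration once 2D polars are available. A sketch for one blade annulus with placeholder geometry and flat-plate-like coefficients, where the coupled approach would insert RANS-derived polars instead:

    ```python
    import numpy as np

    B, R, r, chord = 3, 1.0, 0.7, 0.08          # blades, tip radius, local radius, chord
    U, omega, twist = 2.0, 20.0, np.deg2rad(4)  # inflow m/s, rotor rad/s, local twist
    sigma = B * chord / (2 * np.pi * r)         # local solidity

    def polars(alpha):
        # crude thin-airfoil stand-in; a RANS-derived polar would go here
        return 2 * np.pi * alpha, 0.01 + 0.02 * alpha**2   # Cl, Cd

    a, ap = 0.3, 0.0                            # axial / tangential induction
    for _ in range(200):
        phi = np.arctan2(U * (1 - a), omega * r * (1 + ap))  # inflow angle
        Cl, Cd = polars(phi - twist)
        Cn = Cl * np.cos(phi) + Cd * np.sin(phi)             # normal coefficient
        Ct = Cl * np.sin(phi) - Cd * np.cos(phi)             # tangential coefficient
        a  = 1.0 / (4 * np.sin(phi)**2 / (sigma * Cn) + 1)
        ap = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * Ct) - 1)

    print(f"converged induction factors: a = {a:.3f}, a' = {ap:.4f}")
    ```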

  19. Digital Maps, Matrices and Computer Algebra

    ERIC Educational Resources Information Center

    Knight, D. G.

    2005-01-01

    The way in which computer algebra systems, such as Maple, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in…

  20. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
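
    The nested-simulation burden being avoided is easiest to see in a conjugate toy model, where the inner posterior step is analytic. This sketch simulates the distribution of the preposterior mean directly; it illustrates the target quantity, not the moment-matching estimator itself, and all numbers are invented:

    ```python
    import numpy as np

    # Toy EVSI: two decisions (adopt vs. not), incremental net benefit (INB)
    # with a normal prior, and a proposed trial of n_new observations.
    rng = np.random.default_rng(7)
    mu0, sd0, sd_obs, n_new = 500.0, 1000.0, 2000.0, 50

    enb_current = max(mu0, 0.0)                            # best decision now
    post_means = []
    for _ in range(20_000):                                # outer loop: future trials
        theta = rng.normal(mu0, sd0)                       # true INB for this world
        xbar = rng.normal(theta, sd_obs / np.sqrt(n_new))  # summary of the new data
        w = sd0**2 / (sd0**2 + sd_obs**2 / n_new)          # conjugate posterior weight
        post_means.append(w * xbar + (1 - w) * mu0)        # preposterior mean draw

    evsi = np.mean(np.maximum(post_means, 0.0)) - enb_current
    print(f"toy EVSI estimate: {evsi:.1f} (monetary units)")
    ```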

  1. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  2. FASTGRASS: A mechanistic model for the prediction of Xe, I, Cs, Te, Ba, and Sr release from nuclear fuel under normal and severe-accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Zawadzki, S.A.

    The primary physical/chemical models that form the basis of the FASTGRASS mechanistic computer model for calculating fission-product release from nuclear fuel are described. Calculated results are compared with test data and the major mechanisms affecting the transport of fission products during steady-state and accident conditions are identified.

  3. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
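
    A minimal stepping-stone estimate of a log marginal likelihood, the building block of the Bayes factors discussed here, on a conjugate normal toy model where the exact answer is available for checking; the power schedule and sample sizes are arbitrary:

    ```python
    import numpy as np

    # Model: x ~ N(mu, 1) with prior mu ~ N(0, 1); estimate log Z = log p(x).
    rng = np.random.default_rng(3)
    x = rng.normal(0.7, 1.0, size=50)
    n, xbar = len(x), np.mean(x)

    betas = np.linspace(0.0, 1.0, 33) ** 3        # power-posterior schedule
    logZ = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # sample mu from the power posterior prior(mu) * L(mu)^b0 (conjugate normal)
        var = 1.0 / (1.0 + b0 * n)
        mu = rng.normal(var * b0 * n * xbar, np.sqrt(var), size=2000)
        loglik = (-0.5 * np.sum((x[None, :] - mu[:, None]) ** 2, axis=1)
                  - 0.5 * n * np.log(2 * np.pi))
        # stepping stone: average the likelihood raised to the power increment
        logZ += np.logaddexp.reduce((b1 - b0) * loglik) - np.log(len(mu))

    post_var = 1.0 / (1.0 + n)
    exact = (-0.5 * n * np.log(2 * np.pi) + 0.5 * np.log(post_var)
             - 0.5 * np.sum(x**2) + 0.5 * (n * xbar) ** 2 * post_var)
    print(f"stepping-stone logZ = {logZ:.3f}, exact = {exact:.3f}")
    ```

    Repeating the estimate for a second model and differencing the two log marginal likelihoods gives the log Bayes factor; the direct path-based estimators assessed in the paper avoid that differencing step.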

  4. Higgs boson mass in the standard model at two-loop order and beyond

    DOE PAGES

    Martin, Stephen P.; Robertson, David G.

    2014-10-01

    We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.

  5. Development of a computational model for predicting solar wind flows past nonmagnetic terrestrial planets

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1983-01-01

    A computational model for the determination of the detailed plasma and magnetic field properties of the global interaction of the solar wind with nonmagnetic terrestrial planetary obstacles is described. The theoretical method is based on an established single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of supersonic, super-Alfvenic solar wind flow past terrestrial ionospheres.

  6. Patient-Specific Modeling of Hemodynamics: Supporting Surgical Planning in a Fontan Circulation Correction.

    PubMed

    van Bakel, Theodorus M J; Lau, Kevin D; Hirsch-Romano, Jennifer; Trimarchi, Santi; Dorfman, Adam L; Figueroa, C Alberto

    2018-04-01

    Computational fluid dynamics (CFD) is a modeling technique that enables calculation of the behavior of fluid flows in complex geometries. In cardiovascular medicine, CFD methods are being used to calculate patient-specific hemodynamics for a variety of applications, such as disease research, noninvasive diagnostics, medical device evaluation, and surgical planning. This paper provides a concise overview of the methods to perform patient-specific computational analyses using clinical data, followed by a case study where CFD-supported surgical planning is presented in a patient with Fontan circulation complicated by unilateral pulmonary arteriovenous malformations. In closing, the challenges for implementation and adoption of CFD modeling in clinical practice are discussed.

  7. Quantum Computation Using Optically Coupled Quantum Dot Arrays

    NASA Technical Reports Server (NTRS)

    Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowhury, V. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    A solid state model for quantum computation has potential advantages in terms of the ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits), and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, individual qubits are comprised of two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near field interaction using the NSOM technology. Controlled rotations involving two or more qubits are performed via common cavity mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using the NSOM technology by calculating the photon density at the tip, and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable, if not longer, than many other proposed models.

  8. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for arbitrary sizes, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
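
    A sketch of the partitioned computation, with a naive contiguous row split standing in for a real hypergraph partition (which would minimize the x-entries each part needs) and a serial loop standing in for the CUDA workers:

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random

    # Row-partitioned sparse matrix-vector multiply on a rectangular matrix,
    # the case that motivates hypergraph (rather than graph) partitioning.
    rng = np.random.default_rng(4)
    A = sparse_random(1000, 800, density=0.01, format="csr", random_state=4)
    x = rng.normal(size=800)

    parts = np.array_split(np.arange(1000), 4)     # 4 row blocks ~ 4 devices
    y = np.empty(1000)
    for rows in parts:
        y[rows] = A[rows, :] @ x                   # each block is independent

    assert np.allclose(y, A @ x)
    print("partitioned SpMV matches monolithic result")
    ```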

  9. Dynamic modeling of spacecraft in a collisionless plasma

    NASA Technical Reports Server (NTRS)

    Katz, I.; Parks, D. E.; Wang, S. S.; Wilson, A.

    1977-01-01

    A new computational model is described which can simulate the charging of complex geometrical objects in three dimensions. Two sample calculations are presented. In the first problem, the capacitance to infinity of a complex object similar to a satellite with solar array paddles is calculated. The second problem concerns the dynamical charging of a conducting cube partially covered with a thin dielectric film. In this calculation, the photoemission results in differential charging of the object.

  10. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converter were derived. Derivations of state-space equations and transfer functions are presented. Computer lists for the converters are presented, and the input parameters justified.

  11. Analytical modeling of helicopter static and dynamic induced velocity in GRASP

    NASA Technical Reports Server (NTRS)

    Kunz, Donald L.; Hodges, Dewey H.

    1987-01-01

    The methodology used by the General Rotorcraft Aeromechanical Stability Program (GRASP) to model the characteristics of the flow through a helicopter rotor in hovering or axial flight is described. Since the induced flow plays a significant role in determining the aeroelastic properties of rotorcraft, the computation of the induced flow is an important aspect of the program. Because of the combined finite-element/multibody methodology used as the basis for GRASP, the implementation of induced velocity calculations presented an unusual challenge to the developers. To preserve the modelling flexibility and generality of the code, it was necessary to depart from the traditional methods of computing the induced velocity. This is accomplished by calculating the actuator disc contributions to the rotor loads in a separate element called the air mass element, and then performing the calculations of the aerodynamic forces on individual blade elements within the aeroelastic beam element.

  12. Challenging Density Functional Theory Calculations with Hemes and Porphyrins.

    PubMed

    de Visser, Sam P; Stillman, Martin J

    2016-04-07

    In this paper we review recent advances in computational chemistry, focusing on the chemical description of heme proteins and synthetic porphyrins that act both as mimics of natural processes and in technological applications. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol(-1)). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of the electronic structures of complex systems. The case studies focus on the calculations of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties.

  13. Atomistic calculations of interface elastic properties in noncoherent metallic bilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mi Changwen; Jun, Sukky; Kouris, Demitris A.

    2008-02-15

    The paper describes theoretical and computational studies associated with the interface elastic properties of noncoherent metallic bicrystals. Analytical forms of interface energy, interface stresses, and interface elastic constants are derived in terms of interatomic potential functions. Embedded-atom method potentials are then incorporated into the model to compute these excess thermodynamics variables, using energy minimization in a parallel computing environment. The proposed model is validated by calculating surface thermodynamic variables and comparing them with preexisting data. Next, the interface elastic properties of several fcc-fcc bicrystals are computed. The excess energies and stresses of interfaces are smaller than those on free surfaces of the same crystal orientations. In addition, no negative values of interface stresses are observed. Current results can be applied to various heterogeneous materials where interfaces assume a prominent role in the systems' mechanical behavior.

  14. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  15. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
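
    For reference, the serial form of the flow-accumulation step, using single-flow-direction (D8) routing on a tiny depression-free DEM for readability; the paper parallelizes a multiple-flow-direction variant of this computation:

    ```python
    import numpy as np

    dem = np.array([[9., 8., 7.],
                    [8., 6., 5.],
                    [7., 5., 3.]])
    rows, cols = dem.shape
    nbrs = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]

    def downstream(r, c):
        """Steepest-descent neighbor, or None for an outlet."""
        drops = [(dem[r,c] - dem[r+dr,c+dc], (r+dr, c+dc))
                 for dr, dc in nbrs
                 if 0 <= r+dr < rows and 0 <= c+dc < cols]
        drop, cell = max(drops)
        return cell if drop > 0 else None

    acc = np.ones_like(dem)                 # each cell contributes itself
    order = sorted(np.ndindex(rows, cols), key=lambda rc: -dem[rc])
    for r, c in order:                      # high to low: donors before receivers
        d = downstream(r, c)
        if d is not None:
            acc[d] += acc[r, c]
    print(acc)
    ```

    Processing cells in decreasing elevation order is what a GPU implementation must parallelize; cells at the same "level" are independent and can be handled in one kernel launch.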

  16. Multi-keV x-ray sources from metal-lined cylindrical hohlraums

    NASA Astrophysics Data System (ADS)

    Jacquet, L.; Girard, F.; Primout, M.; Villette, B.; Stemmler, Ph.

    2012-08-01

    As multi-keV x-ray sources, plastic hohlraums with inner walls coated with titanium, copper, and germanium were fired on Omega in September 2009. For all the targets, the measured and calculated multi-keV x-ray power time histories are in good qualitative agreement. Under the same irradiation conditions, measured multi-keV x-ray conversion rates are ˜6%-8% for titanium, ˜2% for copper, and ˜0.5% for germanium. For titanium and copper hohlraums, the measured conversion rates are about two times higher than those given by hydroradiative computations. Conversely, for the germanium hohlraum, rather good agreement is found between measured and computed conversion rates. To explain these findings, multi-keV integrated emissivities calculated with RADIOM [M. Busquet, Phys. Fluids 85, 4191 (1993)], the nonlocal-thermal-equilibrium atomic physics model used in our computations, have been compared to emissivities obtained from several other models. These comparisons provide an attractive way to explain the discrepancies between experimental and calculated quantitative results.

  17. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model is simplified in order to reduce the calculation time. Both simulation models are validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5 simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of different irradiation sites than the thermal column, e.g., the beam tubes.

  18. 50 Years of Army Computing From ENIAC to MSRC

    DTIC Science & Technology

    2000-09-01

    processing capability. The scientific visualization program was started in 1984 to provide tools and expertise to help researchers graphically...and materials, forces modeling, nanoelectronics, electromagnetics and acoustics, signal and image processing, and simulation and modeling. The ARL...mechanical and electrical calculating equipment, punch card data processing equipment, analog computers, and early digital machines. Before beginning, we

  19. Conformal anomaly of some 2-d Z(n) models

    NASA Astrophysics Data System (ADS)

    William, Peter

    1991-01-01

    We describe a numerical calculation of the conformal anomaly in the case of some two-dimensional statistical models undergoing a second-order phase transition, utilizing a recently developed method to compute the partition function exactly. This computation is carried out on a massively parallel CM2 machine, using the finite size scaling behaviour of the free energy.

  20. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at the exact depth coordinate of one layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
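
    A sketch of the layer-wise calculation, assuming FFT-based angular-spectrum propagation per depth layer and arbitrary sampling and geometry values; the paper's exact diffraction kernel and gridding details may differ:

    ```python
    import numpy as np

    N, pitch, wavelen = 256, 8e-6, 532e-9
    fx = np.fft.fftfreq(N, pitch)
    FX, FY = np.meshgrid(fx, fx)

    def propagate(field, z):
        """Angular-spectrum propagation of a sampled field by distance z."""
        kz = 2 * np.pi * np.sqrt(np.maximum(0, 1 / wavelen**2 - FX**2 - FY**2))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    rng = np.random.default_rng(5)
    points = rng.uniform(0.2, 0.8, size=(200, 3))        # x, y in [0,1], depth code
    layers = {}                                          # bin points into depth grids
    for x, y, d in points:
        z = 0.05 + 0.01 * round(d * 10)                  # quantized layer depth (m)
        grid = layers.setdefault(z, np.zeros((N, N), complex))
        grid[int(y * N), int(x * N)] += 1.0              # point source on its layer

    # one FFT pair per layer instead of one diffraction sum per point
    hologram = sum(propagate(grid, z) for z, grid in layers.items())
    print("hologram computed from", len(layers), "layers; shape", hologram.shape)
    ```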

  1. Aerodynamic heating and surface temperatures on vehicles for computer-aided design studies

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.; Kania, L. A.; Chitty, A.

    1983-01-01

    A computer subprogram has been developed to calculate aerodynamic and radiative heating rates and to determine surface temperatures by integrating the heating rates along the trajectory of a vehicle. Convective heating rates are calculated by applying the axisymmetric analogue to inviscid surface streamlines and using relatively simple techniques to calculate laminar, transitional, or turbulent heating rates. Options are provided for the selection of gas model, transition criterion, turbulent heating method, Reynolds Analogy factor, and entropy-layer swallowing effects. Heating rates are compared to experimental data, and time histories of surface temperature are given for a high-speed trajectory. The computer subprogram is developed for preliminary design and mission analysis where parametric studies are needed at all speeds.

  2. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  3. User's guide to PHREEQC (Version 2) : a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, David L.; Appelo, C.A.J.

    1999-01-01

    PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits.New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.
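
    The speciation side of such calculations rests on mass-action algebra; a saturation index, for example, is SI = log10(IAP/K). A toy calcite example with assumed ion activities (a real run would let PHREEQC compute activities from the full ion-association model):

    ```python
    import numpy as np

    log_k_calcite = -8.48                 # CaCO3 <=> Ca++ + CO3--, ~25 degC
    a_ca, a_co3 = 10**-3.2, 10**-5.1      # assumed ion activities (placeholders)

    iap = a_ca * a_co3                    # ion activity product
    si = np.log10(iap) - log_k_calcite    # SI = log10(IAP) - log10(K)
    print(f"SI(calcite) = {si:+.2f} "
          f"({'supersaturated' if si > 0 else 'undersaturated'})")
    ```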

  4. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, representing models in description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computational resources, or calculation methods. Complex biological function simulation software comprises 1) model equations, 2) boundary conditions, and 3) calculation schemes. A model description file covers the first point and partly the second, but the third is difficult to handle, because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. With this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To demonstrate the efficiency of our system, an example coupling calculation scheme with three elementary models is shown.
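
    A minimal picture of what a generated "coupling calculation scheme" looks like: two hypothetical elementary ODE models advanced by a staggered explicit scheme. All equations and coefficients are invented for illustration; a generator like the one described would emit code of this shape from declarative scheme and model descriptions.

    ```python
    def model_a(y, x_b, dt):        # dy/dt = 1 - y + coupling from B
        return y + dt * (1.0 - y + 0.5 * x_b)

    def model_b(x, y_a, dt):        # dx/dt = -2x + coupling from A
        return x + dt * (-2.0 * x + 0.3 * y_a)

    y, x, dt = 1.0, 0.0, 0.01
    for _ in range(1000):           # staggered explicit coupling scheme
        y_new = model_a(y, x, dt)
        x = model_b(x, y_new, dt)   # B sees A's updated value this step
        y = y_new
    print(f"coupled steady state approx: y = {y:.4f}, x = {x:.4f}")
    ```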

  5. Calculating potential fields using microchannel spatial light modulators

    NASA Technical Reports Server (NTRS)

    Reid, Max B.

    1993-01-01

    We describe and present experimental results of the optical calculation of potential field maps suitable for mobile robot navigation. The optical computation employs two write modes of a microchannel spatial light modulator (MSLM). In one mode, written patterns expand spatially, and this characteristic is used to create an extended two dimensional function representing the influence of the goal in a robot's workspace. Distinct obstacle patterns are written in a second, non-expanding, mode. A model of the mechanisms determining MSLM write mode characteristics is developed and used to derive the optical calculation time for full potential field maps. Field calculations at a few hertz are possible with current technology, and calculation time vs. map size scales favorably in comparison to digital electronic computation.
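
    A digital analogue of the optically computed map, assuming a distance-based attractive field for the goal (the role played by the expanding MSLM write mode) plus short-range repulsive bumps for obstacles; grid size and weights are arbitrary:

    ```python
    import numpy as np

    n = 64
    yy, xx = np.mgrid[0:n, 0:n]
    goal, obstacles = (10, 50), [(30, 30), (35, 28)]

    U = np.hypot(xx - goal[1], yy - goal[0])    # attractive: distance to goal
    for oy, ox in obstacles:                    # repulsive: short-range bumps
        d2 = (xx - ox) ** 2 + (yy - oy) ** 2
        U += 400.0 / (1.0 + d2)

    # steepest-descent step a robot at cell (60, 5) would take
    r, c = 60, 5
    window = U[r-1:r+2, c-1:c+2]
    dr, dc = np.unravel_index(np.argmin(window), window.shape)
    print("move toward cell:", (r - 1 + dr, c - 1 + dc))
    ```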

  6. Raster-Based Approach to Solar Pressure Modeling

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W. II

    2013-01-01

    An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. The parts of the components being lit depend on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects of illuminated portions of spacecraft, taking self-shading from spacecraft attitude and movable components into account. The key idea in this innovation is to compute results dependent upon complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry, and then the results from the simpler problems are combined to give high-fidelity results for the full geometry. This process is performed by constructing a 3D model of a spacecraft using an appropriate graphics interface (OpenGL), and running that model on a modern computer's 3D accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, then only the portions of the craft visible in the view are illuminated. The view as shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
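
    The per-pixel flat-plate summation can be sketched directly, using the standard flat-plate model with specular and diffuse reflectivities. The pixel normals and material coefficients below are synthetic stand-ins for a rendered view:

    ```python
    import numpy as np

    P = 4.56e-6                      # N/m^2, solar pressure at 1 AU
    pix_area = 1e-4                  # m^2 of spacecraft surface per pixel
    sun = np.array([0.0, 0.0, 1.0])  # unit vector from surface toward the Sun

    rng = np.random.default_rng(6)
    normals = rng.normal(size=(10_000, 3))           # fake per-pixel surface normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    rho_s, rho_d = 0.3, 0.1          # specular/diffuse reflectivity (one material)

    cos_t = normals @ sun
    lit = cos_t > 0                  # only pixels facing the Sun are rendered
    n, c = normals[lit], cos_t[lit, None]
    # flat plate: F = -P*A*cos(t) * [(1 - rho_s)*s_hat + 2*(rho_s*cos(t) + rho_d/3)*n_hat]
    dF = -P * pix_area * c * ((1 - rho_s) * sun + 2 * (rho_s * c + rho_d / 3) * n)
    print("net solar pressure force [N]:", dF.sum(axis=0))
    ```

    Torques follow the same pattern, with each pixel's force crossed with its position relative to the center of mass before summing.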

  7. Intrinsic frame transport for a model of nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Cozzini, S.; Rull, L. F.; Ciccotti, G.; Paolini, G. V.

    1997-02-01

    We present a computer simulation study of the dynamical properties of a nematic liquid crystal model. The diffusional motion of the nematic director is taken into account in our calculations in order to give a proper estimate of the transport coefficients. Unlike other groups, we do not attempt to stabilize the director through rigid constraints or applied external fields. We instead define an intrinsic frame which moves along with the director at each step of the simulation. The transport coefficients computed in the intrinsic frame are then compared against the ones calculated in the fixed laboratory frame, to show the inadequacy of the latter for systems with fewer than 500 molecules. Using this general scheme on the Gay-Berne liquid crystal model, we demonstrate the natural motion of the director and attempt to quantify its intrinsic time scale and size dependence. Through extended simulations of systems of different sizes, we calculate the diffusion and viscosity coefficients of this model and compare our results with values previously obtained with a fixed director.

  8. Study of Wind Effects on Unique Buildings

    NASA Astrophysics Data System (ADS)

    Olenkov, V.; Puzyrev, P.

    2017-11-01

    The article deals with a numerical simulation of wind effects on the building of the Church of the Intercession of the Holy Virgin in the village of Bulzi in the Chelyabinsk region. We presented a calculation algorithm and obtained pressure fields, velocity fields, and fields of the kinetic energy of the wind stream, as well as streamlines. Computational fluid dynamics (CFD) evolved three decades ago at the interface of computational mathematics and theoretical hydromechanics and has become a separate branch of science whose subject is the numerical simulation of various fluid and gas flows, and the solution of the associated problems, by methods that rely on computer systems. This scientific field, which is of great practical value, is developing intensively. The growth of CFD calculations is driven by improvements in computer technology and the creation of multipurpose, easy-to-use CFD packages that are available to a wide community of researchers and cope with a variety of tasks. Such programs are not only competitive with physical experiments; sometimes they provide the only way to answer the research questions. The following advantages of computer simulation can be pointed out: a) reduced time for the design and development of a model in comparison with a real experiment (variation of boundary conditions); b) a numerical experiment allows the simulation of conditions that are not reproducible in environmental tests (use of an ideal gas as the medium); c) computational gas dynamics methods provide the researcher with the complete information needed to fully describe the processes in the experiment; d) computer calculations are more economical than experiments; e) the possibility of modifying the computational model, which ensures efficient timing (changing the size of the wall-layer cells in accordance with the chosen turbulence model).

  9. Computational techniques for solar wind flows past terrestrial planets: Theory and computer programs

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Chaussee, D. S.; Trudinger, B. C.; Spreiter, J. R.

    1977-01-01

    The interaction of the solar wind with terrestrial planets can be predicted using a computer program based on a single fluid, steady, dissipationless, magnetohydrodynamic model to calculate the axisymmetric, supersonic, super-Alfvenic solar wind flow past both magnetic and nonmagnetic planets. The actual calculations are implemented by an assemblage of computer codes organized into one program. These include finite difference codes which determine the gas-dynamic solution, together with a variety of special purpose output codes for determining and automatically plotting both flow field and magnetic field results. Comparisons are made with previous results, and results are presented for a number of solar wind flows. The computational programs developed are documented and are presented in a general user's manual which is included.

  10. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of the seismic wave uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and the BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
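
    The core idea (symplectic integration in time, Fourier differentiation in space) can be illustrated with a 1D constant-velocity toy version; the actual FFD method additionally handles strongly variable velocity and anisotropy. A minimal sketch, assuming a periodic domain:

    ```python
    import numpy as np

    def fourier_d2(u, dx):
        """Spectral second derivative d^2u/dx^2 via FFT (periodic domain)."""
        k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
        return np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

    def symplectic_step(p, q, c, dx, dt):
        """One Stormer-Verlet step of the Hamiltonian pair p_t = q, q_t = c^2 p_xx,
        equivalent to the acoustic wave equation p_tt = c^2 p_xx."""
        p = p + 0.5 * dt * q
        q = q + dt * c ** 2 * fourier_d2(p, dx)
        return p + 0.5 * dt * q, q

    # usage: a Gaussian pulse in a homogeneous medium
    n, dx, dt, c = 512, 5.0, 5e-4, 2000.0
    x = np.arange(n) * dx
    p, q = np.exp(-((x - x.mean()) / 50.0) ** 2), np.zeros(n)
    for _ in range(2000):
        p, q = symplectic_step(p, q, c, dx, dt)
    ```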

  11. Reverse logistics system planning for recycling computers hardware: A case study

    NASA Astrophysics Data System (ADS)

    Januri, Siti Sarah; Zulkipli, Faridah; Zahari, Siti Meriam; Shamsuri, Siti Hajar

    2014-09-01

    This paper describes the modeling and simulation of a reverse logistics network for the collection of used computers at a company in Selangor. The study focuses on the design of a reverse logistics network for a used-computer recycling operation. The simulation modeling presented in this work allows the user to analyze the future performance of the network and to understand the complex relationships between the parties involved. The findings from the simulation suggest that the model calculates processing time and resource utilization in a predictable manner. In this study, the simulation model was developed using the Arena simulation package.

  12. Efficient calculation of atomic rate coefficients in dense plasmas

    NASA Astrophysics Data System (ADS)

    Aslanyan, Valentin; Tallents, Greg J.

    2017-03-01

    Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculation of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions, as electrons must find unoccupied quantum states, and accounting for it adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
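
    The structure of such a degenerate-rate integral can be sketched as follows: the Fermi-Dirac occupation enters once for the incident electron and once, as a (1 - f) blocking factor, for the scattered electron. The cross section, the normalization constant, and the sqrt(E) density of states below are placeholders, not the paper's actual inputs:

    ```python
    import numpy as np

    def fermi_dirac(e, mu, t):
        """Occupation probability of a state at energy e (energies in eV)."""
        return 1.0 / (1.0 + np.exp((e - mu) / t))

    def excitation_rate(sigma, e0, mu, t, norm, n=4000, emax=2000.0):
        """Collisional excitation rate with Pauli blocking (schematic).

        sigma -- callable cross section sigma(E) [cm^2]
        e0    -- transition threshold [eV]
        mu, t -- chemical potential and temperature of the free electrons [eV]
        norm  -- normalization constant of the electron distribution (placeholder)
        """
        e = np.linspace(e0, emax, n)
        v = 5.93e7 * np.sqrt(e)                   # electron speed [cm/s] for E in eV
        dos = np.sqrt(e)                          # free-electron density of states
        occ = fermi_dirac(e, mu, t)               # incident electron present
        block = 1.0 - fermi_dirac(e - e0, mu, t)  # scattered electron needs a hole
        return norm * np.trapz(sigma(e) * v * dos * occ * block, e)
    ```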

  13. Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis

    PubMed Central

    Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.

    2016-01-01

    Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available. PMID:27487888

  14. Technical Note: SPEKTR 3.0—A computational tool for x-ray spectrum modeling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Punnoose, J.; Xu, J.; Sisniega, A.

    2016-08-15

    Purpose: A computational toolkit (SPEKTR 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a MATLAB (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The SPEKTR code generates x-ray spectra (photons/mm2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, SPEKTR, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the SPEKTR function library, UI, and optimization tool are available.

  15. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, with the complex refractive index computed from a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. A sensitivity analysis based on several Monte Carlo results was performed to estimate coefficients for a Multi-Linear Model (MLM) developed specifically for inverse analysis of experimental data. This model agrees with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  16. Locomotive crashworthiness report : volume 4 : additional freight locomotive calculations

    DOT National Transportation Integrated Search

    1995-07-01

    Previously developed computer models (see volume 1) are used to carry out additional calculations for evaluation of road freight locomotive crashworthiness. The effect of fewer locomotives (as would be expected after transition from DC motor to highe...

  17. Free molecular collision cross section calculation methods for nanoparticles and complex ions with energy accommodation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larriba, Carlos, E-mail: clarriba@umn.edu; Hogan, Christopher J.

    2013-10-15

    The structures of nanoparticles, macromolecules, and molecular clusters in gas phase environments are often studied via measurement of collision cross sections. To directly compare structure models to measurements, it is hence necessary to have computational techniques available to calculate the collision cross sections of structural models under conditions matching measurements. However, presently available collision cross section methods contain the underlying assumption that collisions between gas molecules and structures are completely elastic (gas molecule translational energy conserving) and specular, while experimental evidence suggests that in the most commonly used background gases for measurements, air and molecular nitrogen, gas molecule reemission is largely inelastic (with exchange of energy between vibrational, rotational, and translational modes) and should be treated as diffuse in computations with fixed structural models. In this work, we describe computational techniques to predict the free molecular collision cross sections for fixed structural models of gas phase entities where inelastic and non-specular gas molecule reemission rules can be invoked, and the long range ion-induced dipole (polarization) potential between gas molecules and a charged entity can be considered. Specifically, two calculation procedures are described in detail: a diffuse hard sphere scattering (DHSS) method, in which structures are modeled as hard spheres and collision cross sections are calculated for rectilinear trajectories of gas molecules, and a diffuse trajectory method (DTM), in which the assumption of rectilinear trajectories is relaxed and the ion-induced dipole potential is considered. Collision cross section calculations using the DHSS and DTM methods are performed on spheres, models of quasifractal aggregates of varying fractal dimension, and fullerene-like structures. Techniques to accelerate DTM calculations by assessing the contribution of grazing gas molecule collisions (gas molecules with trajectories altered by the potential interaction) without tracking grazing trajectories are further discussed. The presented calculation techniques should enable more accurate collision cross section predictions under experimentally relevant conditions than pre-existing approaches, and should enhance the ability of collision cross section measurement schemes to discern the structures of gas phase entities.

  18. EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.

    ERIC Educational Resources Information Center

    Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith

    2002-01-01

    Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, as well as fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)

  19. QEDMOD: Fortran program for calculating the model Lamb-shift operator

    NASA Astrophysics Data System (ADS)

    Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.

    2018-02-01

    We present the Fortran package QEDMOD for computing the model QED operator hQED, which can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of hQED with the many-electron wave function, or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.

  20. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedups ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.

  1. Selection of a model of Earth's albedo radiation, practical calculation of its effect on the trajectory of a satellite

    NASA Technical Reports Server (NTRS)

    Walch, J. J.

    1985-01-01

    Theoretical models of Earth's albedo radiation were proposed. By comparing disturbing accelerations computed from a model with those measured in flight by the CACTUS accelerometer, the model was modified according to the results. Computing the satellite orbit perturbations from a model is very time-consuming, because for each position of the satellite the fluxes coming from each elementary surface of the terrestrial region visible from the satellite must be summed. The speed of computation was increased tenfold, without significant loss of accuracy, by storing some intermediate results. It is now possible to compare the orbit perturbations computed from the selected model with measurements of these perturbations for satellites such as LAGEOS.

  2. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  3. Theoretical calculations of physico-chemical and spectroscopic properties of bioinorganic systems: current limits and perspectives.

    PubMed

    Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír

    2012-05-21

    In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.

  4. Numerical Modeling of Three-Dimensional Confined Flows

    NASA Technical Reports Server (NTRS)

    Greywall, M. S.

    1981-01-01

    A three dimensional confined flow model is presented. The flow field is computed by calculating velocity and enthalpy along a set of streamlines. The finite difference equations are obtained by applying conservation principles to streamtubes constructed around the chosen streamlines. With appropriate substitutions for the body force terms, the approach computes three dimensional magnetohydrodynamic channel flows. A listing of a computer code, based on this approach is presented in FORTRAN IV language. The code computes three dimensional compressible viscous flow through a rectangular duct, with the duct cross section specified along the axis.

  5. Atmospheric transmission computer program CP

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Barnett, T. L.; Korb, C. L.; Hanby, W.; Dillinger, A. E.

    1974-01-01

    A computer program is described which allows for calculation of the effects of carbon dioxide, water vapor, methane, ozone, carbon monoxide, and nitrous oxide on earth resources remote sensing techniques. A flow chart of the program and operating instructions are provided. Comparisons are made between the atmospheric transmission obtained from laboratory and spacecraft spectrometer data and that obtained from a computer prediction using a model atmosphere and radiosonde data. Limitations of the model atmosphere are discussed. The computer program listings, input card formats, and sample runs for both radiosonde data and laboratory data are included.

  6. Modeling and analysis of Soil Erosion processes by the River Basins model: The Case Study of the Krivacki Potok Watershed, Montenegro

    NASA Astrophysics Data System (ADS)

    Vujacic, Dusko; Barovic, Goran; Mijanovic, Dragica; Spalevic, Velibor; Curovic, Milic; Tanaskovic, Vjekoslav; Djurovic, Nevenka

    2016-04-01

    The objective of this research was to study soil erosion processes in one of the northern Montenegrin watersheds, the Krivacki Potok Watershed of the Polimlje River Basin, using the River Basins computer-graphic model, based on the analytical Erosion Potential Method (EPM) of Gavrilovic, for the calculation of runoff and soil loss. Our findings indicate a low potential soil erosion risk, with an annual sediment yield of 554 m³ yr-1 and an area-specific sediment yield of 180 m³ km-2 yr-1. The calculation outcomes were validated for all 57 river basins of Polimlje through measurements of lake sediment deposition at the Potpec hydropower plant dam. According to our analysis, the Krivacki Potok drainage basin has a relatively low sediment discharge; by erosion type, it is mixed erosion. The Z coefficient was calculated at 0.297, which indicates that the river basin belongs to the 4th destruction category (of five). The calculated peak discharge from the river basin was 73 m3 s-1 for a 100-year return period, so large flood waves may appear in the studied river basin. Using adequate computer-graphic and analytical modeling tools, we improved the knowledge of the soil erosion processes in the river basins of this part of Montenegro. The computer-graphic River Basins model of Spalevic, which is based on the EPM analytical method of Gavrilovic, is highly recommended for soil erosion modelling in other river basins of Southeastern Europe, because of its reliable detection and appropriate classification of the areas affected by soil loss, while taking into consideration interactions between the various environmental elements such as physical-geographical features, climate, and geological and pedological characteristics, including the analysis of land use, all calculated at the catchment scale.

  7. Airborne antenna pattern calculations

    NASA Technical Reports Server (NTRS)

    Knerr, T. J.; Mielke, R. R.

    1981-01-01

    Progress on the development of modeling software, testing of the software against calculated data from program VPAP and measured patterns, and calculation of roll plane patterns for general aviation aircraft is reported. The major objectives are the continued development of computer software for aircraft modeling and the use of this software and program OSUVOL to calculate principal plane and volumetric radiation patterns. The determination of proper placement of antennas on aircraft to meet the requirements of the Microwave Landing System is discussed. An overview of the performed work and an example of a roll plane model for the Piper PA-31T Cheyenne aircraft, with the resulting calculated roll plane radiation pattern, are included.

  8. Thermomechanical conditions and stresses on the friction stir welding tool

    NASA Astrophysics Data System (ADS)

    Atthipalli, Gowtam

    Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in the joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin and thereby determine its load bearing ability. The load bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of a commercially pure tungsten tool during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the outputs of the ANN models. These output parameters are selected because they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine a tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along the streamlines during FSW. The strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity fields are computed by adapting an analytical method for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated based on a non-dimensional correlation with dimensionless heat input. The dimensionless heat input is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict these output parameters successfully.

  9. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  10. Rotorcraft Noise Model

    NASA Technical Reports Server (NTRS)

    Lucas, Michael J.; Marcolini, Michael A.

    1997-01-01

    The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program, being developed for NASA Langley Research Center, that calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.

  11. Some practical turbulence modeling options for Reynolds-averaged full Navier-Stokes calculations of three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1993-01-01

    New turbulence modeling options recently implemented for the 3-D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include well-documented and easy-to-use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, treatment of multiple solid boundaries, and a fully vectorized L-U solver for the one- and two-equation models. Validation test cases include incompressible and compressible flat plate turbulent boundary layers, turbulent developing S-duct flow, and glancing shock wave/turbulent boundary layer interaction. Good agreement is obtained between the computational results and experimental data. The sensitivity of the compressible turbulent solutions to the method of y+ computation, the turbulent length scale correction, and some compressibility corrections is examined in detail. The test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3-D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.

  12. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering", that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal, by comparing the estimate of our particle filter with that of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
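
    The propagate-weight-resample loop at the heart of particle filtering is compact; the toy sketch below estimates a velocity from noisy afferent samples with a random-walk process model. It is not the authors' model (which retains the observer structure and derives its gain from the particle variance), and all parameters are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(afferent, dt, n_part=1000, sig_proc=0.5, sig_meas=2.0):
        """Bootstrap particle filter for a scalar velocity (toy example)."""
        particles = np.zeros(n_part)
        estimate = np.empty(afferent.size)
        for i, z in enumerate(afferent):
            # propagate each particle through a random-walk process model
            particles += rng.normal(0.0, sig_proc * np.sqrt(dt), n_part)
            # weight particles by the Gaussian measurement likelihood
            w = np.exp(-0.5 * ((z - particles) / sig_meas) ** 2)
            w /= w.sum()
            # multinomial resampling keeps the sketch short
            particles = rng.choice(particles, size=n_part, p=w)
            estimate[i] = particles.mean()
        return estimate
    ```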

  13. Analytical and Experimental Study to Improve Computer Models for Mixing and Dilution of Soluble Hazardous Chemicals.

    DTIC Science & Technology

    1982-08-01

    [No abstract available; the record text consists of front-matter fragments: list-of-figures entries ("Trajectory and Concentration of Various Plumes", "Tank and Cargo Geometry Assumed for Discharge Rate Calculation Using HACS Venting Rate Model", "Original/Final Test Plan for Validation of the Continuous Spill Model") and a truncated symbol list defining turbulent diffusivities, water density, and chemical density for the continuous-spill river models.]

  14. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
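
    For orientation, a much cruder Gaussian estimate of the same quantity follows from the Highland parameterization of multiple scattering. The sketch below uses approximate constants and ignores energy loss along the path, which the full Molière-based model above does include, so it only indicates the scale of the effect:

    ```python
    import numpy as np

    M_P = 938.272     # proton rest mass [MeV]
    X0_WATER = 36.08  # radiation length of water [cm], approximate

    def highland_sigma(t_kin, depth_cm):
        """RMS lateral displacement [cm] of protons of kinetic energy t_kin [MeV]
        after depth_cm of water, via the Highland multiple-scattering formula."""
        e = t_kin + M_P
        pc = np.sqrt(e ** 2 - M_P ** 2)   # momentum times c [MeV]
        beta = pc / e
        xr = depth_cm / X0_WATER
        theta0 = 13.6 / (beta * pc) * np.sqrt(xr) * (1.0 + 0.038 * np.log(xr))
        return depth_cm * theta0 / np.sqrt(3.0)

    print(highland_sigma(150.0, 10.0))    # ~0.14 cm for 150 MeV protons at 10 cm
    ```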

  15. Basic Modeling of the Solar Atmosphere and Spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.; Wagner, William J. (Technical Monitor)

    2000-01-01

    During the last three years we have continued the development of extensive computer programs for constructing realistic models of the solar atmosphere and for calculating detailed spectra to use in the interpretation of solar observations. This research involves two major interrelated efforts: work by Avrett and Loeser on the Pandora computer program for optically thick non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed high-resolution synthesis of the solar spectrum using data for over 58 million atomic and molecular lines. Our objective is to construct atmospheric models from which the calculated spectra agree as well as possible with high-and low-resolution observations over a wide wavelength range. Such modeling leads to an improved understanding of the physical processes responsible for the structure and behavior of the atmosphere.

  16. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.

  17. Algorithms for the Computation of Debris Risk

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA’s Orbital Debris Program Office to handle these calculations; many of which have never been published before. These include algorithms that are used in NASA’s Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.

  18. Algorithms for the Computation of Debris Risks

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations; many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.

  19. An automated procedure for calculating system matrices from perturbation data generated by an EAI Pacer and 100 hybrid computer system

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Krosel, S. M.

    1977-01-01

    Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies this data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
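
    The finite-difference linearization itself is straightforward to express; a minimal sketch (central differences rather than whatever scheme the original hybrid procedure used, and with hypothetical function arguments) follows:

    ```python
    import numpy as np

    def linearize(f, g, x0, u0, eps=1e-5):
        """Estimate A, B, C, D about an operating point by central differences.

        f(x, u) -- state derivative function dx/dt;  g(x, u) -- output function
        """
        n, m, p = x0.size, u0.size, g(x0, u0).size
        A, B = np.zeros((n, n)), np.zeros((n, m))
        C, D = np.zeros((p, n)), np.zeros((p, m))
        for j in range(n):                            # perturb each state
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
            C[:, j] = (g(x0 + dx, u0) - g(x0 - dx, u0)) / (2 * eps)
        for j in range(m):                            # perturb each input
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
            D[:, j] = (g(x0, u0 + du) - g(x0, u0 - du)) / (2 * eps)
        return A, B, C, D
    ```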

  20. batman: BAsic Transit Model cAlculatioN in Python

    NASA Astrophysics Data System (ADS)

    Kreidberg, Laura

    2015-11-01

    I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
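
    Typical usage follows the package's documented quickstart pattern; the parameter values below are arbitrary examples:

    ```python
    import numpy as np
    import batman

    params = batman.TransitParams()
    params.t0 = 0.0                 # time of inferior conjunction
    params.per = 1.0                # orbital period [days]
    params.rp = 0.1                 # planet radius [stellar radii]
    params.a = 15.0                 # semi-major axis [stellar radii]
    params.inc = 87.0               # orbital inclination [deg]
    params.ecc = 0.0                # eccentricity
    params.w = 90.0                 # longitude of periastron [deg]
    params.limb_dark = "quadratic"  # limb darkening law
    params.u = [0.1, 0.3]           # limb darkening coefficients

    t = np.linspace(-0.05, 0.05, 100)   # 100 points in transit [days]
    m = batman.TransitModel(params, t)  # sets up the integration once
    flux = m.light_curve(params)        # relative flux, 1.0 out of transit
    ```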

  1. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for a post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separate loops, the pressurizer, horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, and loop seal clearance. The experimental and calculated results are very sensitive to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably predicted.

  2. Toward ab initio molecular dynamics modeling for sum-frequency generation spectra; an efficient algorithm based on surface-specific velocity-velocity correlation function.

    PubMed

    Ohto, Tatsuhiko; Usui, Kota; Hasegawa, Taisuke; Bonn, Mischa; Nagata, Yuki

    2015-09-28

    Interfacial water structures have been studied intensively by probing the O-H stretch mode of water molecules using sum-frequency generation (SFG) spectroscopy. This surface-specific technique is finding increasingly widespread use, and accordingly, computational approaches to calculate SFG spectra using molecular dynamics (MD) trajectories of interfacial water molecules have been developed and employed to correlate specific spectral signatures with distinct interfacial water structures. Such simulations typically require relatively long (several nanoseconds) MD trajectories to allow reliable calculation of the SFG response functions through the dipole moment-polarizability time correlation function. These long trajectories limit the use of computationally expensive MD techniques such as ab initio MD and centroid MD simulations. Here, we present an efficient algorithm determining the SFG response from the surface-specific velocity-velocity correlation function (ssVVCF). This ssVVCF formalism allows us to calculate SFG spectra using an MD trajectory of only ~100 ps, resulting in a substantial reduction of the computational cost, by almost an order of magnitude. We demonstrate that the O-H stretch SFG spectra at the water-air interface calculated using the ssVVCF formalism reproduce well those calculated using the dipole moment-polarizability time correlation function. Furthermore, we applied this ssVVCF technique to computing the SFG spectra from ab initio MD trajectories with various density functionals. We report that the SFG responses computed from both ab initio MD simulations and MD simulations with an ab initio based force field model do not show a positive feature in their imaginary component at 3100 cm-1.
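
    The generic machinery underneath such a calculation is an FFT-based time correlation function; the ssVVCF additionally applies surface-specific weights to the velocities, which are omitted in this minimal sketch:

    ```python
    import numpy as np

    def autocorr(v, nmax):
        """<v(0) v(t)> of a scalar series, averaged over time origins
        (Wiener-Khinchin, zero-padded to avoid circular wrap-around)."""
        n = v.size
        f = np.fft.rfft(v - v.mean(), 2 * n)
        acf = np.fft.irfft(f * np.conj(f))[:nmax]
        return acf / np.arange(n, n - nmax, -1)   # divide by overlap counts

    # lineshape from the windowed Fourier transform of the correlation function
    rng = np.random.default_rng(1)
    v = rng.normal(size=20000)                    # placeholder velocity trace
    c = autocorr(v, 2000)
    spectrum = np.abs(np.fft.rfft(c * np.hanning(c.size)))
    ```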

  3. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    PubMed

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  4. HSTRESS: A computer program to calculate the height of a hydraulic fracture in a multi-layered stress medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A computer code for calculating hydraulic fracture height and width in a stressed-layer medium has been modified for easy use on a personal computer. HSTRESS allows for up to 51 layers having different thicknesses, stresses and fracture toughnesses. The code can calculate fracture height versus pressure or pressure versus fracture height, depending on the design model in which the data will be used. At any pressure/height, a width profile is calculated and an equivalent width factor and flow resistance factor are determined. This program is written in FORTRAN. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 14 refs., 21 figs.

  5. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present the method and the results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters which provide relatively low computational cost without loss of accuracy. We use a direct-search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters for the Iguassu GC at different rotor speeds.

  6. ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.

  7. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.

  8. GPU-based Green’s function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models

    NASA Astrophysics Data System (ADS)

    Yang, Yiqun; Urban, Matthew W.; McGough, Robert J.

    2018-05-01

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green’s functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green’s function approach are ideally suited for high-performance GPUs.

  9. Modeling macro- and microstructures of gas-metal-arc welded HSLA-100 steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Z.; Debroy, T.

    1999-06-01

    Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, finger penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstaetten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of fundamental principles from transport phenomena and phase transformation theory.

  10. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced, thus simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in perl, a freeware language designed for text manipulation and Fortran90, which efficiently performs numerical calculations.
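
    The regression core that UCODE wraps around arbitrary application models can be sketched in a few lines; this is illustrative only (UCODE's modified Gauss-Newton adds a Marquardt parameter, damping, and convergence tests):

    ```python
    import numpy as np

    def gauss_newton(sim, p0, obs, weights, n_iter=10, eps=1e-6):
        """Weighted Gauss-Newton parameter estimation.

        sim(p)  -- runs the application model, returns simulated equivalents
        obs     -- observations to match;  weights -- typically 1/variance
        """
        p, w = p0.astype(float).copy(), np.sqrt(weights)
        for _ in range(n_iter):
            r = obs - sim(p)
            jac = np.empty((obs.size, p.size))
            for j in range(p.size):           # forward-difference sensitivities
                dp = np.zeros_like(p); dp[j] = eps * max(1.0, abs(p[j]))
                jac[:, j] = (sim(p + dp) - sim(p)) / dp[j]
            # solve the weighted normal equations via least squares
            step, *_ = np.linalg.lstsq(w[:, None] * jac, w * r, rcond=None)
            p += step
        return p
    ```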

  11. Computer simulations of austenite decomposition of microalloyed 700 MPa steel during cooling

    NASA Astrophysics Data System (ADS)

    Pohjonen, Aarne; Paananen, Joni; Mourujärvi, Juho; Manninen, Timo; Larkiola, Jari; Porter, David

    2018-05-01

    We present computer simulations of austenite decomposition to ferrite and bainite during cooling. The phase transformation model is based on Johnson-Mehl-Avrami-Kolmogorov type equations. The model is parameterized by numerical fitting to continuous cooling data obtained with a Gleeble thermo-mechanical simulator, and it can be used to calculate the transformation behavior along any cooling path. The phase transformation model has been coupled with heat conduction simulations. The model includes separate parameters to account for the incubation stage and for the kinetics after the transformation has started. The incubation time is calculated by inversion of the CCT transformation start times. For the heat conduction simulations we employed our own parallelized two-dimensional finite difference code. In addition, the transformation model was implemented as a subroutine in the commercial finite-element software Abaqus, which allows for the use of the model in various engineering applications.
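
    A minimal sketch of the two-stage scheme described above, assuming illustrative placeholder parameters rather than the paper's fitted values: the incubation stage is handled with Scheil's additivity rule over inverted start times, and the post-start kinetics follow a JMAK-type additivity integral.

        import numpy as np

        n = 2.0                                                   # illustrative Avrami exponent
        k = lambda T: 5e-3 * np.exp(-((T - 650.0) / 80.0) ** 2)   # C-curve-shaped rate, placeholder
        tau = lambda T: 1.0 / (k(T) ** (1.0 / n) + 1e-12)         # nominal isothermal start time, s

        def transform_along_path(times, temps):
            """Transformed fraction X(t) along a tabulated cooling path."""
            X = np.zeros_like(times)
            S = 0.0            # Scheil sum: transformation starts when S reaches 1
            I = 0.0            # additivity integral for the growth stage
            started = False
            for i in range(1, len(times)):
                dt = times[i] - times[i - 1]
                if not started:
                    S += dt / tau(temps[i])
                    started = S >= 1.0
                else:
                    I += k(temps[i]) ** (1.0 / n) * dt
                    X[i] = 1.0 - np.exp(-I ** n)      # JMAK form
            return X

        t = np.linspace(0.0, 120.0, 1201)
        T = 900.0 - 5.0 * t                           # linear cooling at 5 C/s
        print(f"final transformed fraction: {transform_along_path(t, T)[-1]:.2f}")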

  12. The effectiveness of element downsizing on a three-dimensional finite element model of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R

    1999-04-01

    Improving the validity of finite element analysis in implant biomechanics requires element downsizing; however, excessive downsizing consumes computer memory and calculation time. To investigate the effectiveness of element downsizing on the construction of a three-dimensional finite element bone trabeculae model, models with different element sizes (600, 300, 150 and 75 microm) were constructed and the stress induced by vertical 10 N loading was analysed. The difference in von Mises stress values between the models with 600 and 300 microm element sizes was larger than that between 300 and 150 microm. On the other hand, no clear difference in stress values was detected among the models with 300, 150 and 75 microm element sizes. Downsizing of elements from 600 to 300 microm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, with possible savings of computer memory and calculation time in the laboratory.

  13. Permeability of model porous medium formed by random discs

    NASA Astrophysics Data System (ADS)

    Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.

    2018-03-01

    A two-dimensional model of a porous medium with a skeleton of randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is carried out in the open package OpenFOAM. The calculated flow rate is used to determine the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
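
    The final step, recovering the permeability from the computed flow rate via Darcy's law, k = Q * mu * L / (A * dP), reduces to a one-line calculation; the numbers below are illustrative, not the paper's values.

        mu = 1.0e-3    # dynamic viscosity of the liquid, Pa*s
        L = 1.0e-3     # domain length in the flow direction, m
        A = 1.0e-6     # cross-sectional area, m^2
        dP = 10.0      # imposed pressure drop, Pa
        Q = 2.5e-9     # volumetric flow rate from the CFD solution, m^3/s

        k = Q * mu * L / (A * dP)
        print(f"permeability k = {k:.3e} m^2")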

  14. Modeling inelastic phonon scattering in atomic- and molecular-wire junctions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2005-11-01

    Computationally inexpensive approximations describing electron-phonon scattering in molecular-scale conductors are derived from the nonequilibrium Green’s function method. The accuracy is demonstrated with a first-principles calculation on an atomic gold wire. Quantitative agreement between the full nonequilibrium Green’s function calculation and the newly derived expressions is obtained while simplifying the computational burden by several orders of magnitude. In addition, analytical models provide intuitive understanding of the conductance including nonequilibrium heating and provide a convenient way of parameterizing the physics. This is exemplified by fitting the expressions to the experimentally observed conductances through both an atomic gold wire and a hydrogen molecule.

  15. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  16. Downstream Effects on Orbiter Leeside Flow Separation for Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.; Pulsonetti, Maria V.; Weilmuenster, K. James

    2005-01-01

    Discrepancies between experiment and computation for shuttle leeside flow separation, which came to light in the Columbia accident investigation, are resolved. Tests were run in the Langley Research Center 20-Inch Hypersonic CF4 Tunnel with a baseline orbiter model and two extended trailing edge models. The extended trailing edges altered the wing leeside separation lines, moving the lines toward the fuselage, proving that wing trailing edge modeling does affect the orbiter leeside flow. Computations were then made with a wake grid. These calculations more closely matched baseline experiments. Thus, the present findings demonstrate that it is imperative to include the wake flow domain in CFD calculations in order to accurately predict leeside flow separation for hypersonic vehicles at high angles of attack.

  17. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices

    2015-12-31

    An efficient computational approach, the charge and atom dipole response kernel (CDRK) model, is described that uses the charge response and atom dipole response kernels to account for polarization effects of the quantum mechanical (QM) region in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  18. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high-performance computing facilities makes possible the massive production of nuclear data in a fully microscopic framework. Taking advantage of individual potential calculations for more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY exploits the richness of the microscopic description within a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, at very limited computing cost.

  19. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re-usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites, including simulations that used the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10Hz ShakeOut 1.2 scenario simulation data were used by USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA hazard maps that contained more than 1.6 million hazard curves.

  20. On the origin of the electrostatic potential difference at a liquid-vacuum interface.

    PubMed

    Harder, Edward; Roux, Benoît

    2008-12-21

    The microscopic origin of the interface potential calculated from computer simulations is elucidated by considering a simple model of molecules near an interface. The model posits that molecules are isotropically oriented and their charge density is Gaussian distributed. Molecules whose charge density is more negative toward their interior tend to give rise to a negative interface potential relative to the gaseous phase, while charge densities more positive toward the interior give rise to a positive interface potential. The interface potential for the model is compared to the interface potential computed from molecular dynamics simulations of the nonpolar vacuum-methane system and the polar vacuum-water interface system. The computed vacuum-methane interface potential from a molecular dynamics simulation (-220 mV) is captured with quantitative precision by the model. For the vacuum-water interface system, the model predicts a potential of -400 mV compared to -510 mV calculated from a molecular dynamics simulation. The physical implications of this isotropic contribution to the interface potential are examined using the example of ion solvation in liquid methane.

  1. Different treatment modalities of fusiform basilar trunk aneurysm: study on computational hemodynamics.

    PubMed

    Wu, Chen; Xu, Bai-Nan; Sun, Zheng-Hui; Wang, Fu-Yu; Liu, Lei; Zhang, Xiao-Jun; Zhou, Ding-Biao

    2012-01-01

    Unclippable fusiform basilar trunk aneurysm is a formidable condition for surgical treatment. The aim of this study was to establish a computational model and to investigate the hemodynamic characteristics of a fusiform basilar trunk aneurysm. The three-dimensional digital model of a fusiform basilar trunk aneurysm was constructed using the MIMICS, ANSYS and CFX software. Different hemodynamic modalities and boundary conditions were assigned to the model. Thirty points were selected randomly on the wall and within the aneurysm. Wall total pressure (WTP), wall shear stress (WSS), and blood flow velocity at each point were calculated, and the hemodynamic status was compared between modalities. The quantitative average values of the 30 points on the wall and within the aneurysm were obtained by computational calculation point by point. The velocity and WSS in modalities A and B were different from those of the remaining 5 modalities, and the WTP in modalities A, E and F was higher than in the remaining 4 modalities. The digital model of a fusiform basilar artery aneurysm is feasible and reliable. This model could provide important information for clinical treatment options.

  2. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, big gaps still exist between the current state of research and predictive modeling due to the inherent complexity of the plasma process. As an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed that considers simultaneous polymer deposition and oxide etching. The realistic plasma-surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set-based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were improved drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.

  3. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).

  4. Equalization of energy density in boiling water reactors (as exemplified by WB-50). Development and testing of WB -50 computational model on the basis of MCU-RR code

    NASA Astrophysics Data System (ADS)

    Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.

    2017-01-01

    Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (as exemplified by WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented with the MCU-RR software. Analysis of prior work showed that critical state calculations have a deviation of calculated reactivity exceeding ±0.3 % (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2 % for maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor. Thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7 % (ΔKef/Kef) and a dispersion of design values of ±1 % (ΔK/K) shall be deemed acceptable. Further lowering of these parameters apparently requires root cause analysis of such large values and more attention to experimental measurement techniques.

  5. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  6. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
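
    A minimal sketch of the OLP computation described in points 3 and 4, assuming synthetic data: the OLP (geometric-mean) slope is sign(r) * s_y/s_x, the intercept follows from the means, and a percentile bootstrap stands in for the CI machinery that packages such as smatr provide.

        import numpy as np

        def olp(x, y):
            """Ordinary least products (geometric-mean) regression."""
            r = np.corrcoef(x, y)[0, 1]
            slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
            return slope, np.mean(y) - slope * np.mean(x)

        rng = np.random.default_rng(1)
        x = rng.normal(10.0, 2.0, 50)                 # x subject to error (Model II)
        y = 1.5 * x + rng.normal(0.0, 1.5, 50)
        slope, intercept = olp(x, y)

        # Percentile bootstrap of paired resamples for the 95% CI of the slope
        boot = []
        for _ in range(2000):
            idx = rng.integers(0, x.size, x.size)
            boot.append(olp(x[idx], y[idx])[0])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"OLP slope {slope:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")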

  7. Optical properties of light absorbing carbon aggregates mixed with sulfate: assessment of different model geometries for climate forcing calculations.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin

    2012-04-23

    Light scattering by light absorbing carbon (LAC) aggregates encapsulated in sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, a visible, and an IR wavelength, for different particle sizes, and for different volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of these models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America

  8. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka

    2009-06-01

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m^-2 was sufficient for ensuring that the temperature rise in the eyes and brain was less than 1 °C in the whole frequency range.
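
    The paper solves the bioheat equation with a multigrid solver on voxel models; as a much simpler hedged sketch of the physics involved, the following 1D explicit finite-difference update shows the three terms of the Pennes equation (conduction, blood perfusion, and the SAR source). All tissue parameters here are illustrative.

        import numpy as np

        nx, dx, dt = 101, 0.5e-3, 0.01            # grid points, spacing (m), time step (s)
        k, rho, c = 0.5, 1050.0, 3600.0           # conductivity, density, heat capacity (tissue)
        wb, rho_b, cb = 8.0e-3, 1060.0, 3600.0    # perfusion rate (1/s) and blood properties
        Ta = 37.0                                 # arterial blood temperature, C
        SAR = np.zeros(nx)
        SAR[40:60] = 10.0                         # localized absorption, W/kg (illustrative)

        T = np.full(nx, 37.0)
        for _ in range(6000):                     # simulate 60 s of heating
            lap = (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx**2
            dTdt = (k * lap + rho_b * cb * wb * (Ta - T) + rho * SAR) / (rho * c)
            T += dt * dTdt
            T[0] = T[-1] = 37.0                   # fixed-temperature boundaries
        print(f"peak temperature rise: {T.max() - 37.0:.3f} C")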

  9. A Parallel Numerical Micromagnetic Code Using FEniCS

    NASA Astrophysics Data System (ADS)

    Nagy, L.; Williams, W.; Mitchell, L.

    2013-12-01

    Many problems in the geosciences depend on understanding the ability of magnetic minerals to provide stable paleomagnetic recordings. Numerical micromagnetic modelling allows us to calculate the domain structures found in naturally occurring magnetic materials. However, the computational cost rises exceedingly quickly with respect to the size and complexity of the geometries that we wish to model. This problem is compounded by the fact that modern processor design no longer focuses on the speed at which calculations are performed, but rather on the number of computational units amongst which we may distribute our calculations. Consequently, to better exploit modern computational resources, our micromagnetic simulations must "go parallel". We present a parallel and scalable micromagnetics code written using FEniCS. FEniCS is a multinational collaboration involving several institutions (University of Cambridge, University of Chicago, the Simula Research Laboratory, etc.) that aims to provide a set of tools for writing scientific software; in particular, software that employs the finite element method. The advantages of this approach are the leveraging of pre-existing projects from the world of scientific computing (PETSc, Trilinos, Metis/ParMetis, etc.) and exposing these so that researchers may pose problems in a manner closer to the mathematical language of their domain. Our code provides a scriptable interface (in Python) that allows users not only to run micromagnetic models in parallel, but also to perform pre/post-processing of data.
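
    As a hedged illustration of the scriptable FEniCS style the abstract refers to (not the authors' actual code), the legacy-dolfin snippet below assembles and solves a Poisson problem of the kind that arises in micromagnetics for the magnetostatic scalar potential, div(grad u) = div(M); the same script runs unchanged under MPI.

        from dolfin import (UnitCubeMesh, FunctionSpace, VectorFunctionSpace,
                            TrialFunction, TestFunction, Function, Expression,
                            Constant, DirichletBC, interpolate, inner, grad,
                            div, dx, solve)

        mesh = UnitCubeMesh(16, 16, 16)
        V = FunctionSpace(mesh, "Lagrange", 1)
        W = VectorFunctionSpace(mesh, "Lagrange", 1)

        # A 180-degree domain-wall-like magnetization across the plane x = 0.5
        M = interpolate(Expression(("tanh((x[0]-0.5)/0.1)", "0.0", "0.0"),
                                   degree=2), W)

        u, v = TrialFunction(V), TestFunction(V)
        a = inner(grad(u), grad(v)) * dx          # weak form of the Laplacian
        rhs = -div(M) * v * dx                    # magnetic "charge" source term
        bc = DirichletBC(V, Constant(0.0), "on_boundary")

        phi = Function(V)
        solve(a == rhs, phi, bc)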

  10. A data driven approach using Takagi-Sugeno models for computationally efficient lumped floodplain modelling

    NASA Astrophysics Data System (ADS)

    Wolfs, Vincent; Willems, Patrick

    2013-10-01

    Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
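
    The embankment overtopping mentioned above is modelled as weir flow; a minimal sketch of the standard rectangular-weir relation Q = Cd * (2/3) * sqrt(2g) * B * H^(3/2), with an illustrative coefficient and geometry rather than calibrated Dender values, reads:

        import math

        def weir_overflow(h_river, z_crest, B=25.0, Cd=0.6, g=9.81):
            """Discharge (m^3/s) from river to floodplain over an embankment."""
            H = max(h_river - z_crest, 0.0)       # head above the crest, m
            return Cd * (2.0 / 3.0) * math.sqrt(2.0 * g) * B * H ** 1.5

        print(weir_overflow(h_river=8.35, z_crest=8.0))   # about 9 m^3/s for 0.35 m of head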

  11. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to the values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  13. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  14. Research in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The numerical integration of quasi-one-dimensional unsteady flow problems that involve finite-rate chemistry is discussed; the problems are expressed in terms of conservative-form Euler and species conservation equations. Hypersonic viscous calculations for delta wing geometries are also examined. The conical Navier-Stokes equations model was selected in order to investigate the effects of viscous-inviscid interactions. The more complete three-dimensional model is beyond the available computing resources. The flux vector splitting method with van Leer's MUSCL differencing is being used. Preliminary results were computed for several conditions.

  15. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  16. On a fast calculation of structure factors at a subatomic resolution.

    PubMed

    Afonine, P V; Urzhumtsev, A

    2004-01-01

    In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time consuming when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
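
    A toy version of the direct formula discussed above, F(h) = sum_j f_j exp(2*pi*i h.x_j), assuming constant scattering factors and no displacement parameters (real calculations include resolution-dependent form factors and ADPs; the grid-based alternative replaces this O(atoms x reflections) sum with an FFT of a sampled electron density):

        import numpy as np

        rng = np.random.default_rng(0)
        xyz = rng.random((1000, 3))           # fractional atomic coordinates
        f = np.full(1000, 6.0)                # constant, carbon-like scattering factor

        def structure_factor(h):
            """Direct-summation structure factor for Miller indices h."""
            phase = 2.0 * np.pi * (xyz @ np.asarray(h, dtype=float))
            return np.sum(f * np.exp(1j * phase))

        F = structure_factor((2, 1, 3))
        print(f"|F| = {abs(F):.2f}, phase = {np.degrees(np.angle(F)):.1f} deg")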

  17. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets take the form of a stainless steel tube containing nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to produce fission products: fission products such as Mo-99 are widely used in the form of kits in the medical world. Mo-99 has a relatively long half-life of about 3 days (66 hours), so delivery of the radioisotope to consumer centers and storage is possible, though still limited, and production of this isotope potentially gives significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation, with a large, sparse matrix system, and several parallel algorithms have been developed for the solution of large, sparse matrices. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous work performed the reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed that exploits parallel processing to perform the reactivity calculations used in safety analysis; parallel processing on a multicore computer system allows the calculation to be performed more quickly. The code was applied to the safety limits calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, the delta reactivities are still within safety limits; however, for 7.9542 g and 8.838 g (× 10^6 cm^-1) the limits were exceeded.
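
    A minimal sketch of the SOR iteration itself, applied to a generic diagonally dominant system Ax = b rather than the multigroup diffusion matrices of the paper; omega = 1 recovers the Gauss-Seidel scheme used in the earlier serial work.

        import numpy as np

        def sor(A, b, omega=1.5, tol=1e-10, max_iter=10000):
            """Successive over-relaxation for Ax = b."""
            x = np.zeros(len(b))
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    break
            return x

        A = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])
        b = np.array([15.0, 10.0, 10.0])
        print(sor(A, b))          # agrees with np.linalg.solve(A, b)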

  18. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is written to provide an end-user interface to the model.

  19. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.

  20. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which is necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions, which are necessary for efficient parallel execution of particle-based simulations, as "templates" that are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and the communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
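
    FDPS itself is a C++ framework, but the "simple, sequential and unoptimized program of O(N^2) calculation cost" that a user must be able to write is essentially a direct-summation kernel like the following sketch (Python, with illustrative softening, in G = 1 units):

        import numpy as np

        def accelerations(pos, mass, eps=1e-3):
            """Direct-sum gravitational accelerations with Plummer softening."""
            dx = pos[None, :, :] - pos[:, None, :]        # pairwise separations (N, N, 3)
            r2 = np.sum(dx * dx, axis=-1) + eps**2
            inv_r3 = r2 ** -1.5
            np.fill_diagonal(inv_r3, 0.0)                 # exclude self-interaction
            return np.sum(mass[None, :, None] * dx * inv_r3[:, :, None], axis=1)

        rng = np.random.default_rng(0)
        pos = rng.standard_normal((256, 3))
        mass = np.full(256, 1.0 / 256)
        print(accelerations(pos, mass).shape)             # (256, 3)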

  1. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for a power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
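
    A hedged sketch of numerical STM propagation for the simplest case, point-mass two-body dynamics: the 6x6 matrix Phi is integrated alongside the state through the variational equation dPhi/dt = A(t) Phi with A = df/dx, here using SciPy's DOP853 (an eighth-order Dormand-Prince pair). A high-fidelity tool would extend f and A with thrust, third-body, and radiation-pressure terms as described above.

        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 398600.4418                       # km^3/s^2, Earth

        def eom_with_stm(t, y):
            r, v = y[:3], y[3:6]
            Phi = y[6:].reshape(6, 6)
            rn = np.linalg.norm(r)
            a = -mu * r / rn**3                # two-body gravity
            G = mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
            A = np.zeros((6, 6))               # Jacobian A = df/dx
            A[:3, 3:] = np.eye(3)
            A[3:, :3] = G
            return np.concatenate([v, a, (A @ Phi).ravel()])

        y0 = np.concatenate([[7000.0, 0.0, 0.0],      # position, km
                             [0.0, 7.546, 0.0],       # velocity, km/s (near-circular)
                             np.eye(6).ravel()])      # Phi(t0, t0) = I
        sol = solve_ivp(eom_with_stm, (0.0, 3600.0), y0, method="DOP853",
                        rtol=1e-10, atol=1e-10)
        Phi_tf = sol.y[6:, -1].reshape(6, 6)          # sensitivities of x(tf) to x(t0)
        print(Phi_tf[0])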

  2. Preliminary topical report on comparison reactor disassembly calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaughlin, T.P.

    1975-11-01

    Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, the temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident. (auth)

  3. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

  4. Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2011-01-01

    Numerical simulation with a numerical human model using the finite-difference time domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 as GPGPU boards. The performance of multi-GPU is evaluated in comparison with that of a single GPU and vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. Calculation speed of the three-dimensional FDTD method using GPUs can significantly improve with an expanding number of GPUs.

  5. Hollow cathodes as electron emitting plasma contactors: Theory and computer modeling

    NASA Technical Reports Server (NTRS)

    Davis, V. A.; Katz, I.; Mandell, M. J.; Parks, D. E.

    1987-01-01

    Several researchers have suggested using hollow cathodes as plasma contactors for electrodynamic tethers, particularly to prevent the Shuttle Orbiter from charging to large negative potentials. Previous studies have shown that fluid models with anomalous scattering can describe the electron transport in hollow cathode generated plasmas. An improved theory of the hollow cathode plasmas is developed and computational results using the theory are compared with laboratory experiments. Numerical predictions for a hollow cathode plasma source of the type considered for use on the Shuttle are presented, as are three-dimensional NASCAP/LEO calculations of the emitted ion trajectories and the resulting potentials in the vicinity of the Orbiter. The computer calculations show that the hollow cathode plasma source makes vastly superior contact with the ionospheric plasma compared with either an electron gun or passive ion collection by the Orbiter.

  6. Digital model analysis of the principal artesian aquifer, Savannah, Georgia area

    USGS Publications Warehouse

    Counts, H.B.; Krause, R.E.

    1977-01-01

    A digital model of the principal artesian aquifer has been developed for the Savannah, Georgia, area. The model simulates the response of the aquifer system to various hydrologic stresses. Model results for water levels and water-level changes are shown on maps. Computations may be extended in time: anticipated changes in pumpage were applied to the system and the probable results calculated. Drawdowns or water-level differences were computed, allowing comparison of different water management alternatives. (Woodard-USGS)

  7. An analytical model for highly separated flow on airfoils at low speeds

    NASA Technical Reports Server (NTRS)

    Zunnalt, G. W.; Naik, S. N.

    1977-01-01

    A computer program was developed to solve the low speed flow around airfoils with highly separated flow. A new flow model included all of the major physical features in the separated region. Flow visualization tests also were made which gave substantiation to the validity of the model. The computation involves the matching of the potential flow, boundary layer and flows in the separated regions. Head's entrainment theory was used for boundary layer calculations and Korst's jet mixing analysis was used in the separated regions. A free stagnation point aft of the airfoil and a standing vortex in the separated region were modelled and computed.

  8. Computer simulation of the metastatic progression.

    PubMed

    Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo

    2014-01-01

    A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.

  9. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    PubMed

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and are performed in a pipelined manner. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation was linearly decreased. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.

  10. Feasibility study of a procedure to detect and warn of low level wind shear

    NASA Technical Reports Server (NTRS)

    Turkel, B. S.; Kessel, P. A.; Frost, W.

    1981-01-01

    A Doppler radar system which provides an aircraft with advance warning of longitudinal wind shear is described. This system uses a Doppler radar beamed along the glide slope, linked with an on-line microprocessor containing a two-dimensional, three-degree-of-freedom model of the motion of an aircraft, including pilot/autopilot control. The Doppler-measured longitudinal glide slope winds are entered into the aircraft motion model, and a simulated controlled aircraft trajectory is calculated. Several flight path deterioration parameters are calculated from the computed aircraft trajectory information. The aircraft trajectory program, pilot control models, and the flight path deterioration parameters are discussed. The performance of the computer model is compared with that of a test pilot flying a flight simulator through longitudinal and vertical wind fields characteristic of a thunderstorm.

  11. Direct numerical simulations and modeling of a spatially-evolving turbulent wake

    NASA Technical Reports Server (NTRS)

    Cimbala, John M.

    1994-01-01

    Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test-case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations, while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (K-omega and K-epsilon) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Re_theta = 1000 and at a Mach number of 0.20. (Theta is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/theta = 10, and extends to x/theta = 610. Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with appropriate noise. After some adjustment period, the flow downstream of the inlet develops into a fully three-dimensional turbulent wake. Of particular interest in the present study is the far wake spreading rate and the self-similar mean and turbulence profiles. At the time of this writing, grid resolution studies are underway, and a code is being written to calculate turbulence statistics from these wake calculations; the statistics will be compared to those from the ongoing PIV wake measurements, those of previous experiments, and those predicted by the various turbulence models. These calculations will lead to significant long-term benefits for the turbulence modeling effort. In particular, quantities such as the pressure-strain correlation and the dissipation rate tensor can be easily calculated from the DNS results, whereas these quantities are nearly impossible to measure experimentally. Improvements to existing turbulence models (and development of new models) require knowledge about flow quantities such as these. Present turbulence models do a very good job at prediction of the shape of the mean velocity and Reynolds stress profiles in a turbulent wake, but significantly underpredict the magnitude of the stresses and the spreading rate of the wake. Thus, the turbulent wake is an ideal flow for turbulence modeling research.
By careful comparison and analysis of each term in the modeled Reynolds stress equations, the DNS data can show where deficiencies in the models exist; improvements to the models can then be attempted.

  12. Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques are used, such as the telescoping technique, the method of false discrete ordinate, the correlated k-distribution method and principal component analysis (PCA). We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.
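
    The PCA acceleration mentioned above amounts to projecting the spectrally resolved optical properties onto a few principal components, running the expensive model only for those components, and correcting with a cheap model. A hedged sketch of just the projection/reconstruction step, with synthetic spectra standing in for the real optical-property data:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic set of 500 spectra (200 channels) with roughly 3 latent dimensions
        spectra = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 200))
        spectra += 0.01 * rng.standard_normal((500, 200))

        mean = spectra.mean(axis=0)
        U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
        n_pc = 4
        scores = U[:, :n_pc] * s[:n_pc]           # low-dimensional representation

        # Reconstruct every spectrum from only n_pc principal components
        approx = mean + scores @ Vt[:n_pc]
        print("max reconstruction error:", np.abs(approx - spectra).max())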

  13. Model Package Report: Hanford Soil Inventory Model SIM v.2 Build 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Will E.; Zaher, U.; Mehta, S.

    The Hanford Soil Inventory Model (SIM) is a tool for estimating the inventory of contaminants that were released to soil from liquid discharges during the U.S. Department of Energy’s Hanford Site operations. This model package report documents the construction and development of a second version of SIM (SIM-v2) to support the needs of the Hanford Site Composite Analysis. SIM-v2 is implemented using GoldSim Pro® software with a new model architecture that preserves the uncertainty in inventory estimates while reducing the computational burden (compared to the previous version) and allowing more traceability and transparency in the calculation methodology. The calculation architecture is designed in such a manner that future updates to the waste stream composition, along with the addition or deletion of waste sites, can be performed with relative ease. In addition, the new computational platform allows for continued hardware upgrades.

  14. The temperature dependence of inelastic light scattering from small particles for use in combustion diagnostic instrumentation

    NASA Technical Reports Server (NTRS)

    Cloud, Stanley D.

    1987-01-01

    A computer calculation of the expected angular distribution of coherent anti-Stokes Raman scattering (CARS) from micrometer-size polystyrene spheres, based on a Mie-type model, and a pilot experiment to test the feasibility of measuring CARS angular distributions from such spheres by simply suspending them in water are discussed. The computer calculations predict a very interesting structure in the angular distributions that depends strongly on the size and relative refractive index of the spheres.

  15. Computer-Aided Construction of Chemical Kinetic Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, William H.

    2014-12-31

    The combustion chemistry of even simple fuels can be extremely complex, involving hundreds or thousands of kinetically significant species. The most reasonable way to deal with this complexity is to use a computer not only to numerically solve the kinetic model, but also to construct the kinetic model in the first place. Because these large models contain so many numerical parameters (e.g. rate coefficients, thermochemistry) one never has sufficient data to uniquely determine them all experimentally. Instead one must work in “predictive” mode, using theoretical rather than experimental values for many of the numbers in the model, and as appropriate refining the most sensitive numbers through experiments. Predictive chemical kinetics is exactly what is needed for computer-aided design of combustion systems based on proposed alternative fuels, particularly for early assessment of the value and viability of proposed new fuels before those fuels are commercially available. This project was aimed at making accurate predictive chemical kinetics practical; this is a challenging goal which requires a range of science advances. The project spanned a wide range from quantum chemical calculations on individual molecules and elementary-step reactions, through the development of improved rate/thermo calculation procedures, the creation of algorithms and software for constructing and solving kinetic simulations, the invention of methods for model-reduction while maintaining error control, and finally comparisons with experiment. Many of the parameters in the models were derived from quantum chemistry calculations, and the models were compared with experimental data measured in our lab or in collaboration with others.

  16. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out by using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters by using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and the spirometry (discrepancy of 2.23%, R² = 0.94) compared to the agreement between volumes computed by the conventional method and the spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
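
    The prism-based method can be pictured as follows: each triangle of chest markers, projected onto a fixed back plane, bounds a prism whose volume is the projected triangle area times the mean marker height. The geometric kernel is sketched below; the published method's exact marker layout and reference plane are not reproduced here.

```python
import numpy as np

def prism_volume(tri_xyz, z0=0.0):
    """Volume between a chest-surface triangle and the back plane z = z0:
    projected (x, y) triangle area times mean marker height above the plane."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri_xyz
    area_xy = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    mean_height = (z1 + z2 + z3) / 3.0 - z0
    return area_xy * mean_height

# Chest volume at one frame = sum over the triangulation (82 prisms in the
# paper); evaluating this per motion-capture frame gives volume vs. time.
tri = [(0.00, 0.00, 0.11), (0.04, 0.00, 0.12), (0.00, 0.05, 0.10)]  # toy markers, m
print(prism_volume(tri))
```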

  17. GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT

    NASA Astrophysics Data System (ADS)

    Strubbe, David A.

    GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as the starting point, but real-space DFT is also attractive: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.

  18. Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel

    NASA Technical Reports Server (NTRS)

    Boney, Andy D.

    2014-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to process the acquired data. These functions calculate engineering units, tunnel parameters, flowmeters, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunction(s) associated with each major function are discussed.

  19. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this...products. Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  20. Development of Computational Simulation Tools to Model Weapon Propulsors

    DTIC Science & Technology

    2004-01-01

    Only reference-list fragments survive in the indexed excerpt: "...Calculation in Permanent Magnet Motors with Rotor Eccentricity: With Slotting Effect Considered," IEEE Transactions on Magnetics, Volume 34, No. 4, 2253-2266 (1998); and [3] Lieu, Dennis K.; Kim, Ungtae, "Magnetic Field Calculation in Permanent Magnet Motors with Rotor Eccentricity: Without Slotting Effect..."

  1. Double-multiple streamtube model for Darrieus wind turbines

    NASA Technical Reports Server (NTRS)

    Paraschivoiu, I.

    1981-01-01

    An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the marked improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires much less computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
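
    The two-actuator-disks-in-tandem idea can be sketched with a toy fixed-point iteration: momentum balance across one streamtube gives CT = 4a(1 - a), which is equated to a blade-element thrust estimate, and the downstream half-cycle then sees the equilibrium wake velocity (1 - 2a)V_inf of the upstream disk. The thrust law below is a made-up placeholder, not the CARDAA blade-element model.

```python
def induction(ct_blade_element, a0=0.0, relax=0.3, n=200):
    """Balance momentum thrust 4a(1-a) against a blade-element thrust
    coefficient by damped fixed-point iteration (one streamtube)."""
    a = a0
    for _ in range(n):
        a_new = ct_blade_element(a) / (4.0 * (1.0 - a))
        a = (1.0 - relax) * a + relax * a_new
    return a

a_up = induction(lambda a: 0.8 * (1.0 - a) ** 2)  # toy thrust law (assumption)
V_e = 1.0 - 2.0 * a_up   # equilibrium velocity (normalized by V_inf) feeding
print(a_up, V_e)         # the downstream actuator disk
```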

  2. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
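
    The first (overall finite difference) technique is conceptually simple: rerun the reduced-basis transient analysis at perturbed designs and difference the responses. A minimal sketch with the analysis supplied as a black-box function; the key caveat from the study is that the perturbed analyses should reuse the approximation vectors of the original design.

```python
import numpy as np

def overall_fd_sensitivity(analyze, x, h=1e-6):
    """Forward-difference design sensitivities of a scalar response.
    `analyze` stands in for the reduced-basis transient solve and must
    hold the original design's approximation vectors fixed."""
    x = np.asarray(x, dtype=float)
    f0 = analyze(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        grad[i] = (analyze(xp) - f0) / h
    return f0, grad

# toy stand-in for a response quantity (e.g., peak stress) vs. two design variables
f0, g = overall_fd_sensitivity(lambda d: d[0] ** 2 + 3.0 * d[1], np.array([1.0, 2.0]))
print(f0, g)   # expect a gradient close to [2, 3]
```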

  3. Modelling Equilibrium and Fractional Crystallization in the System MgO-FeO-CaO-Al2O3-SiO2

    NASA Technical Reports Server (NTRS)

    Herbert, F.

    1985-01-01

    A mathematical modelling technique for use in petrogenesis calculations in the system MgO-FeO-CaO-Al2O3-SiO2 is reported. Semiempirical phase boundary and elemental distribution information was combined with mass balance to compute approximate equilibrium crystallization paths for arbitrary system compositions. The calculation is applicable to a range of system compositions and fractionation calculations are possible. The goal of the calculation is the computation of the composition and quantity of each phase present as a function of the degree of solidification. The degree of solidification is parameterized by the heat released by the solidifying phases. The mathematical requirement for the solution of this problem is: (1) An equation constraining the composition of the magma for each solid phase in equilibrium with the liquidus phase, and (2) an equation for each solid phase and each component giving the distribution of that element between that phase and the magma.

  4. Exposure calculation code module for reactor core analysis: BURNER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Cunningham, G.W.

    1979-02-01

    The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady-state, continuous-fueling model is treated in addition to the usual fixed-fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine-scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
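
    The core of such an exposure step is integrating the nuclide balance dN/dt = A N over the period between neutronics solutions, where A collects decay constants and flux-dependent transmutation rates. A minimal two-nuclide sketch with a constant flux over the period (all rate values invented; BURNER's actual chains and options are far richer):

```python
import numpy as np
from scipy.linalg import expm

lam_decay = 1e-5   # 1/s, decay of nuclide 1 into nuclide 2 (illustrative)
sigma_phi = 3e-6   # 1/s, capture rate = cross section x flux (illustrative)

# dN/dt = A N for the two-nuclide chain; the daughter is taken as stable here.
A = np.array([[-(lam_decay + sigma_phi), 0.0],
              [lam_decay,                0.0]])
N0 = np.array([1.0e24, 0.0])          # atoms at the start of the period
N_end = expm(A * 30 * 86400) @ N0     # end-of-period concentrations (30 days)
print(N_end)
```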

  5. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  6. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    NASA Astrophysics Data System (ADS)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model, taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using the traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in good agreement with experimental data available in the literature on low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. The quantitative discrepancy between numerical and experimental data is discussed.
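
    For orientation, the classic Landau-Teller relaxation that the generalized model extends drives the vibrational energy toward its local equilibrium value with a single relaxation time. A toy sketch (all numbers illustrative):

```python
# Classic Landau-Teller form: dE_v/dt = (E_v_eq(T) - E_v) / tau_VT
def landau_teller_step(E_v, E_v_eq, tau_vt, dt):
    """One explicit Euler step of the relaxation equation (sketch only)."""
    return E_v + dt * (E_v_eq - E_v) / tau_vt

E_v = 0.2                            # arbitrary units, far from equilibrium
for _ in range(1000):
    E_v = landau_teller_step(E_v, E_v_eq=1.0, tau_vt=1e-4, dt=1e-6)
print(E_v)                           # relaxes toward E_v_eq
```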

  7. Challenging Density Functional Theory Calculations with Hemes and Porphyrins

    PubMed Central

    de Visser, Sam P.; Stillman, Martin J.

    2016-01-01

    In this paper we review recent advances in computational chemistry, focusing specifically on the chemical description of heme proteins and synthetic porphyrins that act as mimics of natural processes and serve technological uses. These are challenging biochemical systems involved in electron transfer as well as biocatalysis processes. In recent years computational tools have improved considerably and can now reproduce experimental spectroscopic and reactivity studies within a reasonable error margin (several kcal·mol−1). This paper gives recent examples from our groups, where we investigated heme and synthetic metal-porphyrin systems. The four case studies highlight how computational modelling can correctly reproduce experimental product distributions, predict reactivity trends, and guide the interpretation of electronic structures of complex systems. The case studies focus on the calculations of a variety of spectroscopic features of porphyrins and show how computational modelling gives important insight that explains the experimental spectra and can lead to the design of porphyrins with tuned properties. PMID:27070578

  8. Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim

    2007-04-15

    X-ray buildup factors of lead in broad-beam geometry for energies from 15 to 150 keV are determined using the general-purpose Monte Carlo N-Particle radiation transport computer code (MCNP4C). The obtained buildup factor data are fitted to a modified three-parameter Archer et al. model for ease in calculating the broad-beam transmission by computer at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad-beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad-beam transmission is compared to data derived from the literature, showing good agreement. Therefore, the combination of the buildup factor data as determined and a mathematical model to generate x-ray spectra provides a computationally based solution to broad-beam transmission for lead barriers in the shielding of x-ray facilities.
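
    The three-parameter Archer et al. fit referred to above is commonly written as B(x) = [(1 + beta/alpha) exp(alpha*gamma*x) - beta/alpha]^(-1/gamma) for barrier thickness x. A sketch of evaluating broad-beam transmission with it; the parameter values below are placeholders, not the fitted values from the paper.

```python
import numpy as np

def archer_transmission(x_mm, alpha, beta, gamma):
    """Archer et al. broad-beam transmission through thickness x (mm);
    alpha, beta (1/mm) and gamma come from fits to transmission data."""
    r = beta / alpha
    return ((1.0 + r) * np.exp(alpha * gamma * x_mm) - r) ** (-1.0 / gamma)

# transmission of a 2 mm lead barrier with made-up fit parameters
print(archer_transmission(2.0, alpha=2.0, beta=15.0, gamma=0.5))
```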

  9. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.; Halicioglu, M. T.

    1984-01-01

    All of the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov-chain ensemble-averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.

  10. A computer program to calculate the longitudinal aerodynamic characteristics of upper-surface-blown wing-flap configurations

    NASA Technical Reports Server (NTRS)

    Mendenhall, M. R.

    1978-01-01

    A user's manual is presented for a computer program in which a vortex-lattice lifting-surface method is used to model the wing and multiple flaps. The engine wake model consists of a series of closely spaced vortex rings with rectangular cross sections. The jet wake is positioned such that the lower boundary of the jet is tangent to the wing and flap upper surfaces. The two potential flow models are used to calculate the wing-flap loading distribution including the influence of the wakes from up to two engines on the semispan. The method is limited to the condition where the flow and geometry of the configurations are symmetric about the vertical plane containing the wing root chord. The results include total configuration forces and moments, individual lifting-surface load distributions, pressure distributions, flap hinge moments, and flow field calculation at arbitrary field points. The use of the program, preparation of input, the output, program listing, and sample cases are described.

  11. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with SMD and SM8 solvation models perform well for alkanolamines with mean unsigned errors below 0.20 pKa units, in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Calculating Henry’s Constants of Charged Molecules Using SPARC

    EPA Science Inventory

    SPARC (SPARC Performs Automated Reasoning in Chemistry) is a computer program designed to model physical and chemical properties of molecules solely based on their chemical structure. SPARC uses a toolbox of mechanistic perturbation models to model intermolecular interactions. SPARC has ...

  13. Current status of computational methods for transonic unsteady aerodynamics and aeroelastic applications

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Malone, John B.

    1992-01-01

    The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle high speed flows and high-angle vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods for unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.

  14. An integrated computational tool for precipitation simulation

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.

  15. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with two-level backbone computer networks with arbitrary topology. A specialized method, offered by the author, for calculating the stationary availability factor of two-level backbone computer networks is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. A specialized algorithm, offered by the author, for analyzing network connectivity, taking into account different kinds of network equipment failures, is also described. Finally, the paper presents an example of calculating the stationary availability factor for a backbone computer network with a given topology.
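
    The building blocks described above combine naturally: a two-state Markov model gives each repairable element a stationary availability mu/(lambda + mu), and the network-level factor is the probability that the random up/down states leave the relevant nodes connected. A brute-force sketch for a tiny backbone (exhaustive enumeration, so only for small networks; topology and rates invented):

```python
from itertools import product

def element_availability(lam, mu):
    """Stationary availability of one repairable element (Markov model)."""
    return mu / (lam + mu)

def network_availability(links, n_nodes, avail, src, dst):
    """P(src and dst connected), enumerating all link up/down states."""
    total = 0.0
    for state in product((0, 1), repeat=len(links)):
        p = 1.0
        for up, a in zip(state, avail):
            p *= a if up else 1.0 - a
        parent = list(range(n_nodes))          # union-find over links that are up
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for up, (u, v) in zip(state, links):
            if up:
                parent[find(u)] = find(v)
        if find(src) == find(dst):
            total += p
    return total

links = [(0, 1), (1, 2), (0, 2)]               # toy triangle backbone
avail = [element_availability(1e-4, 1e-2)] * len(links)
print(network_availability(links, 3, avail, 0, 2))
```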

  16. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  17. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.

  18. Kinetic barriers in the isomerization of substituted ureas: implications for computer-aided drug design.

    PubMed

    Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R

    2016-05-01

    Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energetic barrier for isomerization of the cis and trans state poses additional challenges on computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches as well as physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.

  19. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  20. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
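
    The FOSM propagation itself reduces to one matrix product once the sensitivities are in hand: if J holds d(head)/d(parameter) and Cov(p) is the parameter covariance from the conditional geostatistical step, then Cov(h) = J Cov(p) J^T. A two-node, two-parameter sketch with invented numbers:

```python
import numpy as np

J = np.array([[0.8, 0.1],        # head sensitivities (illustrative), of the
              [0.3, 0.5]])       # kind MODFLOW 2000's sensitivity process supplies
cov_p = np.array([[0.04, 0.01],  # covariance of the uncertain inputs, e.g.
                  [0.01, 0.09]]) # log-transmissivity (illustrative)

cov_h = J @ cov_p @ J.T          # first-order second moment propagation
std_h = np.sqrt(np.diag(cov_h))  # standard deviation of head at each node
print(std_h)
```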

  1. A network-analysis-based comparative study of the throughput behavior of polymer melts in barrier screw geometries

    NASA Astrophysics Data System (ADS)

    Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.

    2014-05-01

    Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic network-based approach that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power and large quantities of memory, and consume considerable time to create a geometric model by computer-aided design (CAD) and to complete a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.

  2. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century, in a problem dealing with reflection and transmission by glass plates. Since then, the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to its tremendous demand on computational resources. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent-linear and adjoint codes for radiance gradient calculations. The simplicity of the forward and Jacobian computation codes is very useful for operational applications and for consistency between the forward and adjoint calculations in satellite data assimilation.
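
    The doubling step at the heart of the method combines two identical layers by summing the geometric series of interreflections between them. A single-stream sketch without the thermal source terms, which are exactly the part ADA replaces with an analytical expression:

```python
import numpy as np

def double(R, T):
    """Combine two identical layers with reflection/transmission operators
    R, T; the matrix inverse sums all interreflections between them."""
    M = np.linalg.inv(np.eye(R.shape[0]) - R @ R)
    return R + T @ R @ M @ T, T @ M @ T

R = np.array([[0.001]])      # start from an optically thin layer where
T = np.array([[0.980]])      # single scattering is accurate (toy numbers)
for _ in range(20):          # optical thickness grows by a factor of 2**20
    R, T = double(R, T)
print(R[0, 0], T[0, 0])
```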

  3. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    NASA Technical Reports Server (NTRS)

    Kamp, L. W.

    1976-01-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of our range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0-B5, luminosity classes III, IV, and V.

  4. A local-circulation model for Darrieus vertical-axis wind turbines

    NASA Astrophysics Data System (ADS)

    Masse, B.

    1986-04-01

    A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.

  5. Stereolithographic models of the solvent-accessible surface of biopolymers. Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradford, J.; Noel, P.; Emery, J.D.

    1996-11-01

    The solvent-accessible surfaces of several biopolymers were calculated. As part of the DOE education outreach activity, two high school students participated in this project. Computer files containing sets of triangles were produced. These files are called stl files and are the ISO 9001 standard. They have been written onto CD-ROMs for distribution to American companies. Stereolithographic models were made of some of them to ensure that the computer calculations were done correctly. Stereolithographic models were made of interleukin 1β (IL-1β), three antibodies (an anti-p-azobenzene arsonate, an anti-Brucella A cell wall polysaccharide, and an HIV neutralizing antibody), a triple-stranded coiled coil, and an engrailed homeodomain. Also, the biopolymers and their files are described.

  6. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of contemporary computers during the last decades, numerical simulations have become a very powerful tool, applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, the much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches mentioned above, particularly to their so-called iterative version. The study is focused on the mutual relations between fluid and particle models, which are demonstrated on calculations of the sheath structure of low-temperature argon plasma near a cylindrical Langmuir probe for medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  7. A user's manual for DELSOL3: A computer code for calculating the optical performance and optimal system design for solar thermal central receiver plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, B.L.

    DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user-specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.

  8. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  9. Comparisons of Calculations with PARTRAC and NOREC: Transport of Electrons in Liquid Water

    PubMed Central

    Dingfelder, M.; Ritchie, R. H.; Turner, J. E.; Friedland, W.; Paretzke, H. G.; Hamm, R. N.

    2013-01-01

    Monte Carlo computer models that simulate the detailed, event-by-event transport of electrons in liquid water are valuable for the interpretation and understanding of findings in radiation chemistry and radiation biology. Because of the paucity of experimental data, such efforts must rely on theoretical principles and considerable judgment in their development. Experimental verification of numerical input is possible to only a limited extent. Indirect support for model validity can be gained from a comparison of details between two independently developed computer codes as well as the observable results calculated with them. In this study, we compare the transport properties of electrons in liquid water using two such models, PARTRAC and NOREC. Both use interaction cross sections based on plane-wave Born approximations and a numerical parameterization of the complex dielectric response function for the liquid. The models are described and compared, and their similarities and differences are highlighted. Recent developments in the field are discussed and taken into account. The calculated stopping powers, W values, and slab penetration characteristics are in good agreement with one another and with other independent sources. PMID:18439039

  10. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, prior to stationing it in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis for the various digital computer and numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.

  11. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces, such as the terrain surface of the Earth, and applied to forward calculation of the gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit the undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations on the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. The features and errors of the method are then analyzed. Subsequently, the method is applied to an area for the ellipsoidal Bouguer shell correction as an example, and the result is compared to existing methods, showing that our method provides high accuracy and great computational efficiency. Suggestions for further development and conclusions are drawn at the end.

  12. Economic evaluation of alternative crossbreeding systems involving four breeds of swine. I. The simulation model.

    PubMed

    McLaren, D G; Buchanan, D S; Williams, J E

    1987-10-01

    A static, deterministic computer model, programmed in Microsoft Basic for IBM PC and Apple Macintosh computers, was developed to calculate production efficiency (cost per kg of product) for nine alternative types of crossbreeding system involving four breeds of swine. The model simulates efficiencies for four purebred and 60 alternative two-, three- and four-breed rotation, rotaterminal, backcross and static cross systems. Crossbreeding systems were defined as including all purebred, crossbred and commercial matings necessary to maintain a total of 10,000 farrowings. Driving variables for the model are mean conception rate at first service and for an 8-wk breeding season, litter size born, preweaning survival rate, postweaning average daily gain, feed-to-gain ratio and carcass backfat. Predictions are computed using breed direct genetic and maternal effects for the four breeds, plus individual, maternal and paternal specific heterosis values, input by the user. Inputs required to calculate the number of females farrowing in each sub-system include the proportion of males and females replaced each breeding cycle in purebred and crossbred populations, the proportion of male and female offspring in seedstock herds that become breeding animals, and the number of females per boar. Inputs required to calculate the efficiency of terminal production (cost-to-product ratio) for each sub-system include breeding herd feed intake, gilt development costs, feed costs and labor and overhead costs. Crossbreeding system efficiency is calculated as the weighted average of sub-system cost-to-product ratio values, weighting by the number of females farrowing in each sub-system.
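
    The final aggregation step described above is a farrowings-weighted mean of the sub-system cost-to-product ratios; a minimal sketch (sub-systems and numbers invented for illustration):

```python
# Weighted-average crossbreeding-system efficiency (cost per kg of product).
subsystems = [
    {"farrowings": 800,  "cost_per_kg": 1.32},   # e.g., purebred nucleus herd
    {"farrowings": 9200, "cost_per_kg": 1.05},   # e.g., commercial cross matings
]
total_farrowings = sum(s["farrowings"] for s in subsystems)
system_efficiency = sum(s["farrowings"] * s["cost_per_kg"]
                        for s in subsystems) / total_farrowings
print(system_efficiency)
```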

  13. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPUs are also discussed. General-Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model to be able to well optimize existing applications. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
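
    The one-thread-per-particle layout described above maps naturally onto a Python CUDA kernel. A toy sketch using Numba (requires a CUDA-capable GPU); the linear kick-drift map is a stand-in, not the Tracy++ lattice maps:

```python
import numpy as np
from numba import cuda

@cuda.jit
def track(x, xp, n_turns, k):
    """Each GPU thread tracks one particle through n_turns of a toy
    kick-drift map (illustrative, not the physics of Tracy++)."""
    i = cuda.grid(1)                 # global thread index = particle index
    if i < x.shape[0]:
        xi, pi = x[i], xp[i]
        for _ in range(n_turns):
            pi -= k * xi             # thin-lens kick
            xi += pi                 # drift
        x[i], xp[i] = xi, pi

n = 100_000
d_x = cuda.to_device(np.random.rand(n))
d_xp = cuda.to_device(np.zeros(n))
threads = 256                        # threads per block
blocks = (n + threads - 1) // threads
track[blocks, threads](d_x, d_xp, 10_000, 0.01)
```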

  14. Trace metal speciation in natural waters: Computational vs. analytical

    USGS Publications Warehouse

    Nordstrom, D. Kirk

    1996-01-01

    Improvements in the field sampling, preservation, and determination of trace metals in natural waters have made many analyses more reliable and less affected by contamination. The speciation of trace metals, however, remains controversial. Chemical model speciation calculations do not necessarily agree with voltammetric, ion exchange, potentiometric, or other analytical speciation techniques. When metal-organic complexes are important, model calculations are not usually helpful and on-site analytical separations are essential. Many analytical speciation techniques have serious interferences and only work well for a limited subset of water types and compositions. A combined approach to the evaluation of speciation could greatly reduce these uncertainties. The approach proposed would be to (1) compare and contrast different analytical techniques with each other and with computed speciation, (2) compare computed trace metal speciation with reliable measurements of solubility, potentiometry, and mean activity coefficients, and (3) compare different model calculations with each other for the same set of water analyses, especially where supplementary data on speciation already exist. A comparison and critique of analytical with chemical model speciation for a range of water samples would delineate the useful range and limitations of these different approaches to speciation. Both model calculations and analytical determinations have useful and different constraints on the range of possible speciation such that they can provide much better insight into speciation when used together. Major discrepancies in the thermodynamic databases of speciation models can be evaluated with the aid of analytical speciation, and when the thermodynamic models are highly consistent and reliable, the sources of error in the analytical speciation can be evaluated. Major thermodynamic discrepancies also can be evaluated by simulating solubility and activity coefficient data and testing various chemical models for their range of applicability. Until a comparative approach such as this is taken, trace metal speciation will remain highly uncertain and controversial.

  15. Relativistic Zeroth-Order Regular Approximation Combined with Nonhybrid and Hybrid Density Functional Theory: Performance for NMR Indirect Nuclear Spin-Spin Coupling in Heavy Metal Compounds.

    PubMed

    Moncho, Salvador; Autschbach, Jochen

    2010-01-12

    A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.

  16. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) ultra-high-temperature ceramics for hypersonic aircraft, where we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) planetary entry heat shields for space vehicles, where we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) advanced batteries for electric aircraft, where we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy-capacity batteries to enable long-distance electric aircraft service; and (d) shape-memory alloys for high-efficiency aircraft, where we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  17. Quantification of root water uptake in soil using X-ray computed tomography and image-based modelling.

    PubMed

    Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina

    2018-01-01

    Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.

  18. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.

  19. Dispersion model studies for Space Shuttle environmental effects activities

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The NASA/MSFC REED computer code was developed for predicting concentrations, dosage, and deposition downwind from rocket vehicle launches. The calculation procedures and results of nine studies using the code are presented. Topics include plume expansion, hydrazine concentrations, and hazard calculations for postulated fuel spills.
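
    The record does not reproduce the REED code's actual procedures; as a generic illustration of the kind of downwind concentration calculation involved, here is a textbook Gaussian plume sketch with invented dispersion-coefficient parameterizations.

    ```python
    import numpy as np

    def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
        """Generic Gaussian plume (not the REED code itself).
        Q: source strength (g/s), u: wind speed (m/s), H: release height (m).
        The sigma power laws are crude stand-ins for stability-class curves."""
        sigma_y = a * x ** 0.9
        sigma_z = b * x ** 0.85
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        # Reflected vertical term: the ground acts as an impermeable barrier.
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level centerline concentration 2 km downwind of a 100 m release.
    print(plume_concentration(Q=500.0, u=5.0, x=2000.0, y=0.0, z=0.0, H=100.0))
    ```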

  20. Response surface method in geotechnical/structural analysis, phase 1

    NASA Astrophysics Data System (ADS)

    Wong, F. S.

    1981-02-01

    In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
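
    A minimal sketch of the response surface idea under stated assumptions: four runs of a (here trivial) stand-in for the expensive code determine a four-coefficient surface, which then replaces the code inside a repetitive statistical computation.

    ```python
    import numpy as np

    def expensive_code(x1, x2):
        # Placeholder for the long-running code; invented response.
        return 1.5 + 0.8 * x1 - 0.3 * x2 + 0.1 * x1 * x2

    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 4 runs
    y = np.array([expensive_code(*p) for p in X])

    # Basis 1, x1, x2, x1*x2: four coefficients from four code calculations.
    A = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surface(x1, x2):
        return coef @ np.array([1.0, x1, x2, x1 * x2])

    # The cheap surrogate now replaces the code in a Monte Carlo loop.
    samples = np.random.default_rng(0).uniform(0, 1, size=(10_000, 2))
    print(np.mean([surface(*s) for s in samples]))
    ```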

  1. Theoretical investigation of rotationally inelastic collisions of CH(X2Π) with hydrogen atoms

    NASA Astrophysics Data System (ADS)

    Dagdigian, Paul J.

    2017-06-01

    We report calculations of state-to-state cross sections for collision-induced rotational transitions of CH(X2Π) with atomic hydrogen. These calculations employed the four adiabatic potential energy surfaces correlating CH(X2Π) + H(2S), computed in this work through the multi-reference configuration interaction method [MRCISD + Q(Davidson)]. Because of the presence of deep wells on three of the potential energy surfaces, the scattering calculations were carried out using the quantum statistical method of Manolopoulos and co-workers [Chem. Phys. Lett. 343, 356 (2001)]. The computed cross sections included contributions from only direct scattering since the CH2 collision complex is expected to decay predominantly to C + H2. Rotational energy transfer rate constants were computed for this system since these are required for astrophysical modeling.

  2. Adaptation to life in aeolian sand: how the sandfish lizard, Scincus scincus, prevents sand particles from entering its lungs.

    PubMed

    Stadler, Anna T; Vihar, Boštjan; Günther, Mathias; Huemer, Michaela; Riedl, Martin; Shamiyeh, Stephanie; Mayrhofer, Bernhard; Böhme, Wolfgang; Baumgartner, Werner

    2016-11-15

    The sandfish lizard, Scincus scincus (Squamata: Scincidae), spends nearly its whole life in aeolian sand and only comes to the surface for foraging, defecating and mating. It is not yet understood how the animal can respire without sand particles entering its respiratory organs when buried under thick layers of sand. In this work, we integrated biological studies, computational calculations and physical experiments to understand this phenomenon. We present a 3D model of the upper respiratory system based on a detailed histological analysis. A 3D-printed version of this model was used in combination with characteristic ventilation patterns for computational calculations and fluid mechanics experiments. By calculating the velocity field, we identified a sharp decrease in velocity in the anterior part of the nasal cavity where mucus and cilia are present. The experiments with the 3D-printed model validate the calculations: particles, if present, were found only in the same area as suggested by the calculations. We postulate that the sandfish has an aerodynamic filtering system; more specifically, that the characteristic morphology of the respiratory channel coupled with specific ventilation patterns prevent particles from entering the lungs. © 2016. Published by The Company of Biologists Ltd.

  3. Oblique Longwave Infrared Atmospheric Compensation

    DTIC Science & Technology

    2017-09-14

    Fragmentary abstract (garbled in the source record): the report covers the theory and computational modeling of oblique longwave infrared atmospheric compensation. Emissivity was computed for river scenes (with radiance calculated assuming the emissivity of water) and compared against the true emissivity of water, and RMS errors are reported. Related applications cited include disturbed earth in various soil types [15], tripwires [12], clouds [29, 37], aircraft coating degradation [44], and targets obscured by clutter [35].

  4. Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wu, W.; Yang, Q.

    2017-12-01

    Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to explore their information content. However, performing full radiative transfer calculations using multiple-stream methods such as discrete ordinates (DISORT), adding-doubling (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) to reduce the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from far IR to UV-Vis. The model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single-field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
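
    The core PCRTM idea, exploiting spectral correlation so that a handful of principal component scores stand in for thousands of channels, can be illustrated with synthetic data; the sketch below is not the PCRTM algorithm itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic training set: 500 smooth "spectra" over 2378 channels standing
    # in for full line-by-line radiative transfer output (invented data).
    nu = np.linspace(0, 1, 2378)
    train = np.array([np.sin(2 * np.pi * (f + 1) * nu) * rng.uniform(0.5, 1.5)
                      for f in rng.uniform(0, 3, 500)])

    mean = train.mean(axis=0)
    # Principal components of channel space via SVD of the centered data.
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    pcs = Vt[:10]                     # keep ~10 components instead of 2378 channels

    spectrum = train[0]
    scores = pcs @ (spectrum - mean)  # 10 numbers summarize the whole spectrum
    reconstructed = mean + scores @ pcs
    print(np.max(np.abs(reconstructed - spectrum)))  # truncation error
    ```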

  5. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  6. APOLLO: a general code for transport, slowing-down and thermalization calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavenoky, A.

    1973-01-01

    From the national topical meeting on mathematical models and computational techniques for analysis of nuclear systems; Ann Arbor, Michigan, USA (8 Apr 1973). APOLLO calculates the space- and energy-dependent flux for a one-dimensional medium, in the multigroup approximation of the transport equation. For a one-dimensional medium, refined collision probabilities have been developed for the resolution of the integral form of the transport equation; these collision probabilities increase accuracy and save computing time. The interaction between a few cells can also be treated by the multicell option of APOLLO. The diffusion coefficient and the material buckling can be computed in the various B and P approximations with a linearly anisotropic scattering law, even in the thermal range of the spectrum. Eventually this coefficient is corrected for streaming by use of Benoist's theory. The self-shielding of the heavy isotopes is treated by a new and accurate technique which preserves the reaction rates of the fundamental fine structure flux. APOLLO can perform a depletion calculation for one cell, a group of cells, or a complete reactor. The results of an APOLLO calculation are the space- and energy-dependent flux, the material buckling, or any reaction rate; these results can also be macroscopic cross sections used as input data for a 2D or 3D depletion and diffusion code in reactor geometry. 10 references. (auth)

  7. Pareto Joint Inversion of Love and Quasi Rayleigh's waves - synthetic study

    NASA Astrophysics Data System (ADS)

    Bogacz, Adrian; Dalton, David; Danek, Tomasz; Miernik, Katarzyna; Slawinski, Michael A.

    2017-04-01

    In this contribution, a specific application of Pareto joint inversion to a geophysical problem is presented. The Pareto criterion, combined with particle swarm optimization, was used to solve geophysical inverse problems for Love and quasi-Rayleigh waves. The basic theory of the forward-problem calculation for the chosen surface waves is described. To avoid computational problems, some simplifications were made; this allowed faster and more straightforward calculation without loss of generality of the solution. According to the restrictions of the solution scheme, the considered model must have exactly two layers: an elastic isotropic surface layer and an elastic isotropic half space of infinite thickness. The aim of the inversion is to obtain the elastic parameters and model geometry using dispersion data. Different cases were considered in the calculations, such as different numbers of modes for different wave types and different frequencies. The solutions use the OpenMP standard for parallel computing, which helps reduce computation times. The results of the experimental computations are presented and commented on. This research was performed in the context of The Geomechanics Project supported by Husky Energy. Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013, and by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
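
    As an illustration of the optimizer named above, here is a minimal single-objective particle swarm sketch with a toy misfit; the paper's Pareto variant would instead track a set of nondominated solutions over the Love and quasi-Rayleigh misfits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def misfit(m):
        # Toy stand-in for a dispersion-curve misfit (invented target model).
        return np.sum((m - np.array([2.5, 1.2])) ** 2)

    n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
    x = rng.uniform(0, 5, (n, dim))          # particle positions (model params)
    v = np.zeros((n, dim))                   # particle velocities
    pbest = x.copy()
    pval = np.array([misfit(p) for p in x])
    gbest = pbest[pval.argmin()]

    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Inertia plus cognitive and social pulls: the canonical PSO update.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([misfit(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()]

    print(gbest)  # recovered model parameters
    ```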

  8. A Model to Predict Final Cost Growth in a Weapon System Development Program

    DTIC Science & Technology

    1975-08-01

    Fragmentary abstract (the source record mixes table-of-contents and excerpt text): the report verifies the model against manual calculations. For a 2 x 2 example (two aspects, two outcomes), the assumed probabilities were 0.5/0.5 for aspect 1 and 0.3/0.7 for aspect 2 (poor/acceptable), giving (2)^2 possible events in total. Data and results of the 2 x 2 and 3 x 2 manual calculations and quarterly F-5E data are tabulated, and the computer program for these calculations is listed together with its results.

  9. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, a routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been performed on organic molecules. Nevertheless, benchmarks of organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest of these systems owing to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications on systems of direct technological or biological interest.

  10. Composite space charge density functions for the calculation of gamma sensitivity of self-powered neutron detectors, using Warren's model

    NASA Astrophysics Data System (ADS)

    Mahant, A. K.; Rao, P. S.; Misra, S. C.

    1994-07-01

    In the calculational model developed by Warren and Shah for the computation of the gamma sensitivity (Sγ), it has been observed that the computed Sγ value is quite sensitive to the space charge distribution function assumed for the insulator region and to the energy of the gamma photons. The Sγ of SPNDs with Pt, Co and V emitters (manufactured by Thermocoax, France) has been measured at the 60Co photon energy, and a good correlation between the measured and computed values has been obtained using a composite space charge density function (CSCD), the details of which are presented in this paper. The arguments are extended to evaluating the Sγ values of several SPNDs for which Warren and Shah reported measured values for a prompt fission gamma spectrum obtained in a swimming pool reactor. These results are also discussed.

  11. A method for estimating the probability of lightning causing a methane ignition in an underground mine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, H.K.; Novak, T.

    2008-03-15

    During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.

  12. Validation of a Solid Rocket Motor Internal Environment Model

    NASA Technical Reports Server (NTRS)

    Martin, Heath T.

    2017-01-01

    In a prior effort, a thermal/fluid model of the interior of Penn State University's laboratory-scale Insulation Test Motor (ITM) was constructed to predict both the convective and radiative heat transfer to the interior walls of the ITM with a minimum of empiricism. These predictions were then compared to values of total and radiative heat flux measured in a previous series of ITM test firings to assess the capabilities and shortcomings of the chosen modeling approach. Though the calculated fluxes reasonably agreed with those measured during testing, this exercise revealed means of improving the fidelity of the model to, in the case of the thermal radiation, enable direct comparison of the measured and calculated fluxes and, for the total heat flux, compute a value indicative of the average measured condition. By replacing the P1-Approximation with the discrete ordinates (DO) model for the solution of the gray radiative transfer equation, the radiation intensity field in the optically thin region near the radiometer is accurately estimated, allowing the thermal radiation flux to be calculated on the heat-flux sensor itself, which was then compared directly to the measured values. Though fully coupling the wall thermal response with the flow model was not attempted due to the excessive computational time required, a separate wall thermal response model was used to better estimate the average temperature of the graphite surfaces upstream of the heat flux gauges and improve the accuracy of both the total and radiative heat flux computations. The success of this modeling approach increases confidence in the ability of state-of-the-art thermal and fluid modeling to accurately predict SRM internal environments, offers corrections to older methods, and supplies a tool for further studies of the dynamics of SRM interiors.

  13. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is built and a fitness function is derived from it; the improved differential evolution algorithm is then used to optimize this fitness function, with a generation-dependent dynamic selection strategy and a dynamic mutation strategy ensuring both global and local search ability. A performance test was carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving a good implementation of optimal scheduling of cloud computing tasks.
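
    For orientation, here is a canonical DE/rand/1/bin differential evolution sketch applied to a toy task-to-VM makespan fitness; the paper's improved variant adds dynamic selection and mutation strategies, which are not reproduced here, and all workload numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def makespan(assignment, task_len, vm_speed):
        # Fitness: finish time of the most loaded VM for a task-to-VM assignment.
        loads = np.zeros(len(vm_speed))
        for t, vm in enumerate(assignment):
            loads[vm] += task_len[t] / vm_speed[vm]
        return loads.max()

    n_tasks, n_vms, pop_n, F, CR = 20, 4, 30, 0.5, 0.9
    task_len = rng.uniform(100, 1000, n_tasks)   # task lengths (invented)
    vm_speed = rng.uniform(1, 4, n_vms)          # relative VM speeds (invented)

    # Continuous encoding in [0, n_vms); flooring a gene gives the VM index.
    pop = rng.uniform(0, n_vms, (pop_n, n_tasks))
    fit = np.array([makespan(ind.astype(int), task_len, vm_speed) for ind in pop])

    for _ in range(300):
        for i in range(pop_n):
            a, b, c = (pop[j] for j in rng.permutation(pop_n)[:3])
            mutant = np.clip(a + F * (b - c), 0, n_vms - 1e-9)  # DE/rand/1 mutation
            cross = rng.random(n_tasks) < CR                    # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f = makespan(trial.astype(int), task_len, vm_speed)
            if f < fit[i]:                                      # greedy selection
                pop[i], fit[i] = trial, f

    print(fit.min())  # best makespan found (arbitrary time units)
    ```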

  14. Calculation of solar wind flows about terrestrial planets

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1982-01-01

    A computational model was developed for the determination of the plasma and magnetic field properties of the global interaction of the solar wind with terrestrial planetary magneto/ionospheres. The theoretical method is based on an established single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of supersonic, super Alfvenic solar wind flow past terrestrial planets. A summary is provided of the important research results.

  15. Electromagnetic scattering calculations on the Intel Touchstone Delta

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Patterson, Jean; Scott, David

    1992-01-01

    During the first year's operation of the Intel Touchstone Delta system, software which solves the electric field integral equations for fields scattered from arbitrarily shaped objects has been transferred to the Delta. To fully realize the Delta's resources, an out-of-core dense matrix solution algorithm that utilizes some or all of the 90 Gbyte of concurrent file system (CFS) has been used. The largest calculation completed to date computes the fields scattered from a perfectly conducting sphere modeled by 48,672 unknown functions, resulting in a complex valued dense matrix needing 37.9 Gbyte of storage. The out-of-core LU matrix factorization algorithm was executed in 8.25 h at a rate of 10.35 Gflops. Total time to complete the calculation was 19.7 h; the additional time was used to compute the 48,672 x 48,672 matrix entries, solve the system for a given excitation, and compute observable quantities. The calculation was performed in 64-bit precision.

  16. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge

    PubMed Central

    Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.

    2014-01-01

    As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large scale binding energy distribution analysis method binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents to our knowledge the first example of the application of an all-atom physics-based binding free energy model to large scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, calculations were distributed on multiple computing resources, and the work was completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704

  17. How much crosstalk can be allowed in a stereoscopic system at various grey levels?

    NASA Astrophysics Data System (ADS)

    Shestak, Sergey; Kim, Daesik; Kim, Yongie

    2012-03-01

    We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human vision sensitivity. Instead of the linear model of just noticeable difference (JND) known as Weber's law, we applied the nonlinear Barten model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase of the displayable image contrast with a reduction of the maximum displayable luminance.

  18. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake-modelling calculation, where a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.

  19. Evaluation of the ERP dispersion model using Darlington tracer-study data. Report No. 90-200-K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, S.C.

    1990-01-01

    In this study, site-boundary atmospheric dilution factors calculated by the atmospheric dispersion model used in the ERP (Emergency Response Planning) computer code were compared to data collected during the Darlington tracer study. The purpose of this comparison was to obtain estimates of model uncertainty under a variety of conditions. This report provides background on ERP, the ERP dispersion model and the Darlington tracer study. Model evaluation techniques are discussed briefly, and the results of the comparison of model calculations with the field data are presented and reviewed.

  20. Analysis of Compression Pad Cavities for the Orion Heatshield

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lessard, Victor R.; Jentink, Thomas N.; Zoby, Ernest V.

    2009-01-01

    Current results of a program for analysis of the compression pad cavities on the Orion heatshield are reviewed. The program was supported by experimental tests, engineering modeling, and applied computations with an emphasis on the latter presented in this paper. The computational tools and approach are described along with calculated results for wind tunnel and flight conditions. Correlations of the computed results are shown which can produce a credible prediction of heating augmentation due to cavity disturbances. The models developed for use in preliminary design of the Orion heatshield are presented.

  1. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
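
    The minimization-of-free-energy approach can be illustrated on a toy H/O system: minimize the dimensionless Gibbs energy subject to element balance. The standard-state chemical potentials below are invented, and the sketch is not the NASA program's algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy system: six species with invented mu0/RT values (not real data).
    species = ["H2", "H", "O2", "O", "OH", "H2O"]
    mu0_RT = np.array([-15.0, 5.0, -18.0, 3.0, -10.0, -40.0])
    # Element matrix (rows: H, O) and available moles of each element.
    A = np.array([[2, 1, 0, 0, 1, 2],
                  [0, 0, 2, 1, 1, 1]], dtype=float)
    b = np.array([2.0, 1.0])

    def gibbs(n):
        # Dimensionless Gibbs energy G/RT of an ideal-gas mixture.
        n = np.clip(n, 1e-12, None)
        return float(n @ (mu0_RT + np.log(n / n.sum())))

    res = minimize(gibbs, x0=np.full(6, 0.1), method="SLSQP",
                   bounds=[(1e-12, None)] * 6,
                   constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
    for s, n in zip(species, res.x):
        print(f"{s:4s} {n:.4f} mol")
    ```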

  2. Modelling of DNA-protein recognition

    NASA Technical Reports Server (NTRS)

    Rein, R.; Garduno, R.; Colombano, S.; Nir, S.; Haydock, K.; Macelroy, R. D.

    1980-01-01

    Computer model-building procedures using stereochemical principles together with theoretical energy calculations appear to be, at this stage, the most promising route toward the elucidation of DNA-protein binding schemes and recognition principles. A review of models and bonding principles is conducted and approaches to modeling are considered, taking into account possible di-hydrogen-bonding schemes between a peptide and a base (or a base pair) of a double-stranded nucleic acid in the major groove, aspects of computer graphic modeling, and a search for isogeometric helices. The energetics of recognition complexes is discussed and several models for peptide DNA recognition are presented.

  3. Calculating the nutrient composition of recipes with computers.

    PubMed

    Powers, P M; Hoover, L W

    1989-02-01

    The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
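
    A hedged arithmetic sketch of the two factor-based ideas named above, with invented factor and nutrient values (not USDA data and not the study's exact algorithms): a nutrient retention factor adjusts the nutrient for cooking losses, while a yield factor adjusts the weight basis.

    ```python
    # Raw ingredient: 100 g raw pork with 0.9 mg thiamin (invented values).
    raw_weight_g, raw_thiamin_mg = 100.0, 0.9

    # Yield factor: cooked weight / raw weight (assume 30% loss on roasting).
    yield_factor = 0.70
    cooked_weight_g = raw_weight_g * yield_factor

    # Retention factor: fraction of the nutrient surviving the cooking method
    # (assumed value for thiamin on roasting).
    thiamin_retention = 0.60
    cooked_thiamin_mg = raw_thiamin_mg * thiamin_retention

    # Combining both gives the nutrient density on a cooked-weight basis.
    per_100g = cooked_thiamin_mg / cooked_weight_g * 100.0
    print(f"{cooked_thiamin_mg:.2f} mg thiamin in the portion, "
          f"{per_100g:.2f} mg per 100 g cooked")
    ```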

  4. Predicting Flutter and Forced Response in Turbomachinery

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Adamczyk, John J.; Srivastava, Rakesh; Bakhle, Milind A.; Shabbir, Aamir; Chen, Jen-Ping; Janus, J. Mark; To, Wai-Ming; Barter, John

    2005-01-01

    TURBO-AE is a computer code that enables detailed, high-fidelity modeling of aeroelastic and unsteady aerodynamic characteristics for prediction of flutter, forced response, and blade-row interaction effects in turbomachinery. Flow regimes that can be modeled include subsonic, transonic, and supersonic, with attached and/or separated flow fields. The three-dimensional Reynolds-averaged Navier-Stokes equations are solved numerically to obtain extremely accurate descriptions of unsteady flow fields in multistage turbomachinery configurations. Blade vibration is simulated by use of a dynamic-grid-deformation technique to calculate the energy exchange for determining the aerodynamic damping of vibrations of blades. The aerodynamic damping can be used to assess the stability of a blade row. TURBO-AE also calculates the unsteady blade loading attributable to such external sources of excitation as incoming gusts and blade-row interactions. These blade loadings, along with aerodynamic damping, are used to calculate the forced responses of blades to predict their fatigue lives. Phase-lagged boundary conditions based on the direct-store method are used to calculate nonzero interblade phase-angle oscillations; this practice eliminates the need to model multiple blade passages, and, hence, enables large savings in computational resources.
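
    The energy-exchange idea behind the aerodynamic damping calculation can be shown with toy signals: integrate the unsteady aerodynamic force against the blade velocity over one vibration cycle, and the sign of the work classifies the mode. All amplitudes and phases below are assumptions; this is an illustration, not TURBO-AE's implementation.

    ```python
    import numpy as np

    omega = 2 * np.pi * 200.0             # 200 Hz blade vibration mode (assumed)
    T = 2 * np.pi / omega
    t = np.linspace(0.0, T, 2000, endpoint=False)
    dt = t[1] - t[0]

    x = 1e-3 * np.sin(omega * t)          # blade displacement, m (assumed)
    v = 1e-3 * omega * np.cos(omega * t)  # blade velocity, m/s

    phase = np.deg2rad(200.0)             # assumed force phase vs displacement
    F = 50.0 * np.sin(omega * t + phase)  # unsteady aerodynamic force, N (assumed)

    # Work done by the air on the blade over one cycle; negative work means the
    # air extracts energy from the vibration (positive aerodynamic damping).
    W_cycle = np.sum(F * v) * dt
    verdict = "damped (stable)" if W_cycle < 0 else "flutter-prone"
    print(f"work per cycle = {W_cycle:+.4e} J -> {verdict}")
    ```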

  5. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including Gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non-Gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
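
    As a lightweight stand-in for the longitudinal physics, the classic Bragg-Kleeman range-energy rule for protons in water gives a feel for the quantities involved; the constants are commonly quoted approximations, and this is not the MONET model.

    ```python
    # Bragg-Kleeman range-energy rule for protons in water: R = alpha * E**p.
    # alpha and p are commonly quoted approximate values, used illustratively.
    alpha, p = 0.0022, 1.77   # R in cm when E is in MeV

    def range_cm(E_MeV):
        return alpha * E_MeV ** p

    def residual_energy(E0_MeV, depth_cm):
        # Energy remaining after `depth_cm` of water, by inverting R(E).
        rem = range_cm(E0_MeV) - depth_cm
        return (rem / alpha) ** (1 / p) if rem > 0 else 0.0

    E0 = 150.0  # MeV incident proton energy
    print(f"range ~ {range_cm(E0):.1f} cm")
    for d in (5.0, 10.0, 15.0):
        print(f"E after {d:4.1f} cm: {residual_energy(E0, d):6.1f} MeV")
    ```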

  6. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    Full wave simulation tool, modeling RF fields in hot inhomogeneous magnetized plasma, is being developed. The wave equations with linearized hot plasma dielectric response are solved in configuration space on adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on adaptive cloud of computational points; model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot plasma dielectric response and RF fields in 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in tokamak using the calculated hot plasma conductivity kernel; development of iterative solver for wave equations. Work is supported by the U.S. DOE SBIR program.

  7. AdapChem

    NASA Technical Reports Server (NTRS)

    Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William

    2012-01-01

    AdapChem software enables high efficiency, low computational cost, and enhanced accuracy on computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation while ensuring the same accuracy to be obtained for steady-state CFD reacting flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to capture accurately the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.

  8. Original analytic solution of a half-bridge modelled as a statically indeterminate system

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela

    2016-12-01

    The paper presents an original computer based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated using either the discretization of the bending moment diagram in trapezoids, or using the relations specific to the polygons. A second algorithm based on the method of initial parameters is also presented. Analyzing the new solution we came to the conclusion that most of the computer code developed for other model may be reused. The results are useful to evaluate the behavior of the structure and to compare with the results of the finite element models.

  9. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    PubMed

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  10. Extensible Computational Chemistry Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high-performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with a sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high-performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

  11. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  12. BurnMan: Towards a multidisciplinary toolkit for reproducible deep Earth science

    NASA Astrophysics Data System (ADS)

    Myhill, R.; Cottaar, S.; Heister, T.; Rose, I.; Unterborn, C. T.; Dannberg, J.; Martin-Short, R.

    2016-12-01

    BurnMan (www.burnman.org) is an open-source toolbox to compute thermodynamic and thermoelastic properties as a function of pressure and temperature using published mineral physical parameters and equations of state. The framework is user-friendly, written in Python, and modular, allowing the user to implement their own equations of state, endmember and solution model libraries, geotherms, and averaging schemes. Here we introduce various new modules, which can be used to: fit thermodynamic variables to data from high-pressure static and shock-wave experiments; calculate equilibrium assemblages given a bulk composition, pressure, and temperature; calculate chemical potentials and oxygen fugacities for given assemblages; compute 3D synthetic seismic models using output from geodynamic models and compare these results with global seismic tomographic models; and create input files for synthetic seismogram codes. Users can contribute scripts that reproduce the results from peer-reviewed articles and practical demonstrations (e.g. Cottaar et al., 2014).
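
    Rather than reproduce BurnMan's own API here, the sketch below evaluates the third-order Birch-Murnaghan equation of state, one of the standard EOS forms such toolboxes implement, with roughly MgO-like parameters used as assumptions.

    ```python
    import numpy as np

    def birch_murnaghan_3rd(V, V0, K0, K0p):
        """Pressure from the third-order Birch-Murnaghan isothermal EOS."""
        f = (V0 / V) ** (2.0 / 3.0)
        return 1.5 * K0 * (f ** 3.5 - f ** 2.5) * (1 + 0.75 * (K0p - 4) * (f - 1))

    # Roughly MgO-like parameters, taken here as assumptions:
    # V0 in cm^3/mol, K0 in GPa, K0' dimensionless.
    V0, K0, K0p = 11.25, 160.0, 4.0

    for V in np.linspace(0.75 * V0, V0, 6):
        P = birch_murnaghan_3rd(V, V0, K0, K0p)
        print(f"V/V0 = {V / V0:.3f}  P = {P:7.2f} GPa")
    ```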

  13. Computational Cosmology: From the Early Universe to the Large Scale Structure.

    PubMed

    Anninos, Peter

    2001-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  14. Computational Cosmology: from the Early Universe to the Large Scale Structure.

    PubMed

    Anninos, Peter

    1998-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  15. Ab initio results for intermediate-mass, open-shell nuclei

    NASA Astrophysics Data System (ADS)

    Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.

    2017-01-01

    A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principle calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI -1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.

  16. Progress in high-lift aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1993-01-01

    The current work presents progress in the effort to numerically simulate the flow over high-lift aerodynamic components, namely, multi-element airfoils and wings in either a take-off or a landing configuration. The computational approach utilizes an incompressible flow solver and an overlaid chimera grid approach. A detailed grid resolution study is presented for flow over a three-element airfoil. Two turbulence models, a one-equation Baldwin-Barth model and a two-equation k-omega model, are compared. Excellent agreement with experiment is obtained for the lift coefficient at all angles of attack, including the prediction of maximum lift when using the two-equation model. Results for two other flap riggings are shown. Three-dimensional results are presented for a wing with a square wing-tip as a validation case. Grid generation and topology is discussed for computing the flow over a T-39 Sabreliner wing with flap deployed, and the initial calculations for this geometry are presented.

  17. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.

  18. Optical model calculations of 14.6A GeV silicon fragmentation cross sections

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Khan, Ferdous; Tripathi, Ram K.

    1993-01-01

    An optical potential abrasion-ablation collision model is used to calculate hadronic dissociation cross sections for a 14.6A GeV ²⁸Si beam fragmenting in aluminum, tin, and lead targets. The frictional-spectator-interaction (FSI) contributions are computed with two different formalisms for the energy-dependent mean free path. These estimates are compared with experimental data and with estimates obtained from semi-empirical fragmentation models commonly used in galactic cosmic ray transport studies.

  19. Mathematical modeling of high and low temperature heat pipes

    NASA Technical Reports Server (NTRS)

    Chi, S. W.

    1971-01-01

    Mathematical models are developed for calculating heat-transfer limitations of high-temperature heat pipes and heat-transfer limitations and temperature gradient of low temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs for high- and low-temperature heat pipes respectively are appended. These programs enable the performance of heat pipes with wrapped-screen, rectangular-groove or screen-covered rectangular-groove wick to be predicted.

  20. Applications of SMART: A DRDC Atmospheric Radiative Transfer Library Optimized for Wide Band Computations

    DTIC Science & Technology

    2011-06-28

    Fragmentary abstract (briefing-slide excerpts; report-form boilerplate removed): the briefing describes applications of SMART, a DRDC atmospheric radiative transfer library optimized for wide-band computations. Features noted include accurate refracted path calculation; 2-stream (flux) and DISORT (N-stream) multiple scattering calculations; Lambert and sea-surface BRDF (DRDC analytical model); MODTRAN molecular extinctions (CK); seamless integration of MOD4v3r1; MODTRAN and DRDC aerosol models; and a DRDC falling snow model.

  1. Evaluation of the effect of postural and gravitational variations on the distribution of pulmonary blood flow via an image-based computational model.

    PubMed

    Burrowes, K S; Hunter, P J; Tawhai, M H

    2005-01-01

    We have developed an image-based computational model of blood flow within the human pulmonary circulation in order to investigate the distribution of flow under various conditions of posture and gravity. Geometric models of the lobar surfaces and largest arterial and venous vessels were derived from multi-detector row X-ray computed tomography. The remaining blood vessels were generated using a volume-filling branching algorithm. Equations representing conservation of mass and momentum are solved within the vascular geometry to calculate pressure, radius, and velocity distributions. Flow solutions are obtained within the model in the upright, inverted, prone, and supine postures and in the upright posture with and without gravity. Additional equations representing large deformation mechanics are used to calculate the change in lung geometry and pressure distributions within the lung in the various postures - creating a coupled, co-dependent model of mechanics and flow. The embedded vascular meshes deform in accordance with the lung geometry. Results illustrate a persistent flow gradient from the top to the bottom of the lung even in the absence of gravity and in all postures, indicating that vascular branching structure is largely responsible for the distribution of flow.
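
    The paper solves full conservation equations on an image-based tree; the reduced sketch below uses Poiseuille resistances on a two-branch toy network (invented radii and pressures) to show how branching asymmetry alone skews the flow split, echoing the finding that vascular structure shapes the distribution.

    ```python
    import numpy as np

    def resistance(radius_m, length_m, mu=3.5e-3):
        # Poiseuille resistance of one vessel; mu ~ blood viscosity, Pa s.
        return 8 * mu * length_m / (np.pi * radius_m ** 4)

    # Tiny asymmetric tree: a parent vessel splits into two daughters.
    R_parent = resistance(5e-3, 0.04)
    R_d1 = resistance(3.5e-3, 0.03)
    R_d2 = resistance(2.5e-3, 0.03)

    # Daughters in parallel, then in series with the parent.
    R_daughters = 1.0 / (1.0 / R_d1 + 1.0 / R_d2)
    R_total = R_parent + R_daughters

    dP = 1.0e3  # driving pressure drop, Pa (illustrative)
    Q_total = dP / R_total
    # Parallel branches share flow inversely to their resistance.
    Q1 = Q_total * R_d2 / (R_d1 + R_d2)
    Q2 = Q_total * R_d1 / (R_d1 + R_d2)
    print(f"Q_total = {Q_total * 1e6:.1f} mL/s, "
          f"split {Q1 / Q_total:.0%} / {Q2 / Q_total:.0%}")
    ```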

  2. Computer modeling of high-voltage solar array experiment using the NASCAP/LEO (NASA Charging Analyzer Program/Low Earth Orbit) computer code

    NASA Astrophysics Data System (ADS)

    Reichl, Karl O., Jr.

    1987-06-01

    The relationship between the Interactions Measurement Payload for Shuttle (IMPS) flight experiment and the low Earth orbit plasma environment is discussed. Two interactions (parasitic current loss and electrostatic discharge on the array) may be detrimental to mission effectiveness. They result from the spacecraft's electrical potentials floating relative to plasma ground to achieve a charge flow equilibrium into the spacecraft. The floating potentials were driven by external biases applied to a solar array module of the Photovoltaic Array Space Power (PASP) experiment aboard the IMPS test pallet. The modeling was performed using the NASA Charging Analyzer Program/Low Earth Orbit (NASCAP/LEO) computer code which calculates the potentials and current collection of high-voltage objects in low Earth orbit. Models are developed by specifying the spacecraft, environment, and orbital parameters. Eight IMPS models were developed by varying the array's bias voltage and altering its orientation relative to its motion. The code modeled a typical low Earth equatorial orbit. NASCAP/LEO calculated a wide variety of possible floating potential and current collection scenarios. These varied directly with both the array bias voltage and with the vehicle's orbital orientation.

  3. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
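
    A drastically simplified 1D illustration of the residual computation that drives both applications: predict a travel time through a layered velocity model and subtract it from an observed pick. RSTT's actual 2.5D models and ray paths are far richer than this vertical-ray sketch, and all numbers are invented.

    ```python
    import numpy as np

    # 1-D layered crust stand-in (thickness km, velocity km/s); invented values.
    thickness = np.array([10.0, 15.0, 10.0])
    velocity = np.array([5.8, 6.5, 7.2])

    def predicted_time(depth_km):
        """Travel time for a vertically upgoing ray from an event at depth_km."""
        t, remaining = 0.0, depth_km
        for h, v in zip(thickness, velocity):
            seg = min(h, remaining)
            t += seg / v
            remaining -= seg
            if remaining <= 0:
                break
        return t

    observed = 4.9  # s, hypothetical arrival-time pick
    residual = observed - predicted_time(30.0)
    # Tomography adjusts velocities, and location codes adjust the hypocenter,
    # to shrink residuals like this one.
    print(f"residual = {residual:+.3f} s")
    ```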

  4. Extension of the TDCR model to compute counting efficiencies for radionuclides with complex decay schemes.

    PubMed

    Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch

    2014-05-01

    The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
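
    For a pure beta emitter and three symmetric photomultipliers, the free parameter model reduces to integrating single-tube detection probabilities over the beta spectrum. The sketch below computes triple and double coincidence efficiencies and their ratio for a schematic allowed spectrum; the figure of merit, endpoint, and spectrum shape are placeholders, and real codes such as MICELLE2 add ionization quenching and the coincident-transition bookkeeping described above.

```python
import numpy as np

E = np.linspace(0.1, 156.0, 2000)        # electron energy grid [keV]
dE = E[1] - E[0]
S = np.sqrt(E) * (156.0 - E) ** 2        # schematic allowed beta spectrum
S /= S.sum() * dE                        # normalize to unit area

def efficiencies(fom):
    """fom: free parameter (photoelectrons per keV); each of the three
    symmetric PMTs sees one third of the light, Poisson statistics."""
    p = 1.0 - np.exp(-fom * E / 3.0)     # single-PMT detection probability
    eff_T = (S * p**3).sum() * dE                        # all three tubes fire
    eff_D = (S * (3.0 * p**2 - 2.0 * p**3)).sum() * dE   # at least two fire
    return eff_T, eff_D

# in practice the free parameter is tuned until the computed ratio matches
# the measured TDCR; here we simply tabulate it
for fom in (0.5, 1.0, 2.0):
    eff_T, eff_D = efficiencies(fom)
    print(f"fom={fom:3.1f}: TDCR={eff_T/eff_D:.4f}, double eff={eff_D:.4f}")
```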

  5. Computing Programs for Determining Traffic Flows from Roundabouts

    NASA Astrophysics Data System (ADS)

    Boroiu, A. A.; Tabacu, I.; Ene, A.; Neagu, E.; Boroiu, A.

    2017-10-01

    For modelling road traffic at the level of a road network it is necessary to specify the flows of all traffic currents at each intersection. These data can be obtained by direct measurements at traffic light intersections, but in the case of a roundabout this is not possible directly, and neither the literature nor traffic modelling software offers ways to solve this issue. Two sets of formulas are proposed by which all traffic flows in roundabouts with 3 or 4 arms are calculated from the streams that can be measured. The objective of this paper is to develop computational programs that operate with these formulas. For each of the two sets of analytical relations, a computational program was developed in the Java programming language. The obtained results fully confirm the applicability of the calculation programs. The final stage in capitalizing on these programs will be to make them available as web pages in HTML format, so that they can be accessed and used on the Internet. The achievements presented in this paper are an important step toward providing a necessary tool for traffic modelling, because these computational programs can be easily integrated into specialized software.
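
    The paper's formula sets (implemented there in Java) are not reproduced in the record, but the underlying idea can be illustrated. Assuming a 3-arm roundabout with circulation order 1→2→3→1 and no U-turns, the stream circulating past each entry is exactly the one movement that neither enters nor exits there, and entry conservation then yields the remaining flows; the Python sketch below implements these illustrative relations, not the exact formulas of Boroiu et al.

```python
def roundabout_flows_3arm(E, C):
    """E[i]: entry flow at arm i+1; C[k]: circulating flow measured in
    front of arm k+1. Circulation 1->2->3->1, no U-turns; q_ij is the
    flow entering at arm i and exiting at arm j (illustrative relations,
    not the paper's exact formula sets)."""
    q32, q13, q21 = C[0], C[1], C[2]   # the one stream passing each entry
    return {"q12": E[0] - q13, "q13": q13,
            "q21": q21, "q23": E[1] - q21,
            "q31": E[2] - q32, "q32": q32}

flows = roundabout_flows_3arm(E=[500.0, 400.0, 300.0], C=[80.0, 120.0, 90.0])
print(flows)
# consistency check: exit flows implied by the solution
print("X1 =", flows["q21"] + flows["q31"],
      "X2 =", flows["q12"] + flows["q32"],
      "X3 =", flows["q13"] + flows["q23"])
```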

  6. Cell-model prediction of the melting of a Lennard-Jones solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.
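
    The core of a single-particle cell model is the free-volume integral v_f = ∫ exp(-ΔU(r)/kT) d³r, taken over displacements of one particle while its neighbors stay on their lattice sites. The sketch below estimates that integral by Monte Carlo for the Lennard-Jones 6-12 potential with 12 fcc nearest neighbors in reduced units; it omits Holian's quasiharmonic entropy correction and the fluid branch, and the lattice spacing, temperature, and sampling radius are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj(r):
    # Lennard-Jones 6-12 pair potential, reduced units (eps = sigma = 1)
    return 4.0 * (r**-12 - r**-6)

a = 1.09                                  # nearest-neighbour distance
base = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0]], float)
neigh = a / np.sqrt(2.0) * np.vstack([np.roll(base, s, axis=1) for s in range(3)])

def delta_U(r):
    # energy change when the central particle moves to r, neighbours fixed
    return lj(np.linalg.norm(neigh - r, axis=1)).sum() \
         - lj(np.linalg.norm(neigh, axis=1)).sum()

T = 0.7                                   # reduced temperature
R = 0.30 * a                              # sampling sphere inside the cell
u = rng.normal(size=(20000, 3))
pts = R * u / np.linalg.norm(u, axis=1, keepdims=True) \
        * rng.random(20000)[:, None] ** (1.0 / 3.0)   # uniform in sphere
boltz = np.exp(-np.array([delta_U(p) for p in pts]) / T)
v_free = (4.0 / 3.0) * np.pi * R**3 * boltz.mean()
print(f"free volume = {v_free:.4e},  -T ln v_f = {-T * np.log(v_free):.4f}")
```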

  7. User's guide for a computer program to analyze the LRC 16 ft transonic dynamics tunnel cable mount system

    NASA Technical Reports Server (NTRS)

    Barbero, P.; Chin, J.

    1973-01-01

    The theoretical derivation of the set of equations applicable to modeling the dynamic characteristics of aeroelastically scaled models flown on the two-cable mount system in a 16 ft transonic dynamics tunnel is discussed. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3 DOF longitudinal and lateral/directional dynamic conditions for various flying cable and snubber cable configurations. Sample input and output are included.

  8. Mathematical model for steady state, simple ampholyte isoelectric focusing: Development, computer simulation and implementation

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.; Allgyer, T. T.

    1979-01-01

    The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and the computer simulation of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.

  9. Computational Modeling of Age-Differences In a Visually Demanding Driving Task: Vehicle Detection

    DTIC Science & Technology

    1997-10-07

    An overall estimate of d' for each scene was calculated from the two levels using the method described in Macmillan and Creelman, Detection Theory: A User's Guide (1991).

  10. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.
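
    In the low-temperature regime the record describes, the ionization balance follows from the Saha equation. The sketch below solves the single-stage balance Al ⇌ Al⁺ + e⁻ for the ionization fraction and the resulting pressure; the density is a placeholder, only one ionization stage is kept, and partition-function ratios are set to unity, unlike a production code such as TERMAG.

```python
import numpy as np

kB, h, me, eV = 1.381e-23, 6.626e-34, 9.109e-31, 1.602e-19
chi = 5.986 * eV              # first ionization energy of aluminum [J]
n = 1.0e26                    # heavy-particle number density [m^-3], placeholder

def ionization_fraction(T, g_ratio=1.0):
    """Solve x^2/(1-x) = S/n for the single-stage Saha balance
    Al <-> Al+ + e- (partition-function ratio taken as unity)."""
    S = 2.0 * g_ratio * (2.0 * np.pi * me * kB * T / h**2) ** 1.5 \
        * np.exp(-chi / (kB * T))
    r = S / n
    return 0.5 * (-r + np.sqrt(r * r + 4.0 * r))  # positive quadratic root

for T in (5.0e3, 1.0e4, 2.0e4, 5.0e4):
    x = ionization_fraction(T)
    p = (1.0 + x) * n * kB * T        # ideal-gas pressure of atoms+ions+electrons
    print(f"T = {T:8.0f} K:  x = {x:.4f},  p = {p / 1.0e5:10.3e} bar")
```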

  11. Application of Mathematical and Three-Dimensional Computer Modeling Tools in the Planning of Processes of Fuel and Energy Complexes

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Nikolaeva, Evgenia; Cehlár, Michal

    2017-11-01

    This work investigates the effectiveness of mathematical and three-dimensional computer modeling tools in the planning of fuel and energy complex processes at the planning and design phase of a thermal power plant (TPP). A solution for the purification of gas emissions at the design phase of the waste treatment system is proposed that employs mathematical and three-dimensional computer modeling: the E-nets apparatus and the development of a 3D model of the future gas emission purification system. This approach makes it possible to visualize the designed result, to select and scientifically justify an economically feasible technology, and to ensure a high environmental and social effect of the developed waste treatment system. The authors present the results of describing the planned technological processes and the gas emission purification system in terms of E-nets, using mathematical modeling in the Simulink application, which allowed a model of the device to be created from the library of standard blocks and the calculations to be performed. A three-dimensional model of the gas emission purification system has been constructed; it allows the technological processes to be visualized and compared with theoretical calculations at the design phase of a TPP and, if necessary, adjustments to be made.

  12. Advanced Technology for Portable Personal Visualization

    DTIC Science & Technology

    1991-03-01

    walking. The building model is created using AutoCAD. Realism is enhanced by calculating a radiosity solution for the lighting model. This has an added...lighting, color combinations and decor. Due to the computationally intensive nature of the radiosity solution, modeling changes cannot be made on-line

  13. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than for the IAC model.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn D.

    Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogenous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)

  15. Calculated hydrographs for unsteady research flows at selected sites along the Colorado River downstream from Glen Canyon Dam, Arizona, 1990 and 1991

    USGS Publications Warehouse

    Griffin, Eleanor R.; Wiele, Stephen M.

    1996-01-01

    A one-dimensional model of unsteady discharge waves was applied to research flows that were released from Glen Canyon Dam in support of the Glen Canyon Environmental Studies. These research flows extended over periods of 11 days during which the discharge followed specific, regular patterns repeated on a daily cycle that were similar to the daily releases for power generation. The model was used to produce discharge hydrographs at 38 selected sites in Marble and Grand Canyons for each of nine unsteady flows released from the dam in 1990 and 1991. In each case, the discharge computed from stage measurements and the associated stage-discharge relation at the streamflow-gaging station just below the dam (09379910 Colorado River below Glen Canyon Dam) was routed to Diamond Creek, which is 386 kilometers downstream. Steady and unsteady tributary inflows downstream from the dam were included in the model calculations. Steady inflow to the river from tributaries downstream from the dam was determined for each case by comparing the steady base flow preceding and following the unsteady flow measured at six streamflow-gaging stations between Glen Canyon Dam and Diamond Creek. During three flow periods, significant unsteady inflow was received from the Paria River, or the Little Colorado River, or both. The amount and timing of unsteady inflow was determined using the discharge computed from records of streamflow-gaging stations on the tributaries. Unsteady flow then was added to the flow calculated by the model at the appropriate location. Hydrographs were calculated using the model at 5 streamflow-gaging stations downstream from the dam and at 33 beach study sites. Accuracy of model results was evaluated by comparing the results to discharge hydrographs computed from the records of the five streamflow-gaging stations between Lees Ferry and Lake Mead. Results show that model predictions of wave speed and shape agree well with data from the five streamflow-gaging stations.

  16. Numerical analysis of hypersonic turbulent film cooling flows

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting the heat flux and turbulent flowfields of hypersonic vehicles require extensive model validation. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. These tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of hypersonic film cooling problems.

  17. Wide-angle x-ray scattering and solid-state nuclear magnetic resonance data combined to test models for cellulose microfibrils in mung bean cell walls.

    PubMed

    Newman, Roger H; Hill, Stefan J; Harris, Philip J

    2013-12-01

    A synchrotron wide-angle x-ray scattering study of mung bean (Vigna radiata) primary cell walls was combined with published solid-state nuclear magnetic resonance data to test models for packing of (1→4)-β-glucan chains in cellulose microfibrils. Computer-simulated peak shapes, calculated for 36-chain microfibrils with perfect order or uncorrelated disorder, were sharper than those in the experimental diffractogram. Introducing correlated disorder into the models broadened the simulated peaks, but only when the disorder was increased to unrealistic magnitudes. Computer-simulated diffractograms, calculated for 24- and 18-chain models, showed good fits to experimental data. Particularly good fits to both x-ray and nuclear magnetic resonance data were obtained for collections of 18-chain models with mixed cross-sectional shapes and occasional twinning. Synthesis of 18-chain microfibrils is consistent with a model for cellulose-synthesizing complexes in which three cellulose synthase polypeptides form a particle and six particles form a rosette.

  18. Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.

    2003-01-01

    The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain methods neglected, for simplicity, the important physics of steady loading on the analyses. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.

  19. The Finer Details: Climate Modeling

    NASA Technical Reports Server (NTRS)

    2000-01-01

    If you want to know whether you will need sunscreen or an umbrella for tomorrow's picnic, you can simply read the local weather report. However, if you are calculating the impact of gas combustion on global temperatures, or anticipating next year's rainfall levels to set water conservation policy, you must conduct a more comprehensive investigation. Such complex matters require long-range modeling techniques that predict broad trends in climate development rather than day-to-day details. Climate models are built from equations that calculate the progression of weather-related conditions over time. Based on the laws of physics, climate model equations have been developed to predict a number of environmental factors, for example: 1. Amount of solar radiation that hits the Earth. 2. Varying proportions of gases that make up the air. 3. Temperature at the Earth's surface. 4. Circulation of ocean and wind currents. 5. Development of cloud cover. Numerical modeling of the climate can improve our understanding of both the past and the future. A model can confirm the accuracy of environmental measurements taken in the past and can even fill in gaps in those records. In addition, by quantifying the relationship between different aspects of climate, scientists can estimate how a future change in one aspect may alter the rest of the world. For example, could an increase in the temperature of the Pacific Ocean somehow set off a drought on the other side of the world? A computer simulation could lead to an answer for this and other questions. Quantifying the chaotic, nonlinear activities that shape our climate is no easy matter. You cannot run these simulations on your desktop computer and expect results by the time you have finished checking your morning e-mail. Efficient and accurate climate modeling requires powerful computers that can process billions of mathematical calculations in a single second. The NCCS exists to provide this degree of vast computing capability.

  20. Pinevol: a user's guide to a volume calculator for southern pines

    Treesearch

    Daniel J. Leduc

    2006-01-01

    Taper functions describe a model of the actual geometric shape of a tree. When this shape is assumed to be known, volume by any log rule and to any merchantability standard can be calculated. PINEVOL is a computer program for calculating the volume of the major southern pines using species-specific bole taper functions. It can use the Doyle, Scribner, or International...

  1. Study of Gamow-Teller strength and associated weak-rates on odd-A nuclei in stellar matter

    NASA Astrophysics Data System (ADS)

    Majid, Muhammad; Nabi, Jameel-Un; Riaz, Muhammad

    In a recent study by Cole et al. [A. L. Cole et al., Phys. Rev. C 86 (2012) 015809], it was concluded that quasi-particle random phase approximation (QRPA) calculations show larger deviations and overestimate the total experimental Gamow-Teller (GT) strength. It was also concluded that QRPA calculated electron capture rates exhibit larger deviation than those derived from the measured GT strength distributions. The main purpose of this study is to probe the findings of the Cole et al. paper. This study gives useful information on the performance of QRPA-based nuclear models. Simulation results show that electron capture on medium-heavy isotopes plays a significant role in decreasing the electron-to-baryon ratio of the stellar interior during the late stages of core evolution. We report the calculation of allowed charge-changing transition strengths for odd-A fp-shell nuclei (45Sc and 55Mn) by employing the deformed pn-QRPA approach. The computed GT transition strength is compared with previous theoretical calculations and measured data. For stellar applications, the corresponding electron capture rates are computed and compared with rates using previously calculated and measured GT values. Our calculated results are in reasonable accordance with the measured data. At higher stellar temperatures, our calculated electron capture rates are larger than those calculated by the independent particle model (IPM) and the shell model. It was further concluded that in low-temperature, high-density regions, the positron emission weak-rates from 45Sc and 55Mn may be neglected in simulation codes.

  2. Design of Rail Instrumentation for Wind Tunnel Sonic Boom Measurements and Computational-Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Elmiligui, A.; Aftosmis, M.; Morgenstern, J.; Durston, D.; Thomas, S.

    2012-01-01

    An innovative pressure rail concept for wind tunnel sonic boom testing of modern aircraft configurations with very low overpressures was designed with an adjoint-based solution-adapted Cartesian grid method. The computational method requires accurate free-air calculations of a test article as well as solutions modeling the influence of rail and tunnel walls. Specialized grids for accurate Euler and Navier-Stokes sonic boom computations were used on several test articles including complete aircraft models with flow-through nacelles. The computed pressure signatures are compared with recent results from the NASA 9- x 7-foot Supersonic Wind Tunnel using the advanced rail design.

  3. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  4. Constitutive Modeling of the Thermomechanical Behavior of Rock Salt

    NASA Astrophysics Data System (ADS)

    Hampel, A.

    2016-12-01

    For the safe disposal of heat-generating high-level radioactive waste in rock salt formations, highly reliable numerical simulations of the thermomechanical and hydraulic behavior of the host rock have to be performed. Today, the huge progress in computer technology has enabled experts to calculate large and detailed computer models of underground repositories. However, the big advances in computer technology are only beneficial when the applied material models and modeling procedures also meet very high demands. They result from the fact that the evaluation of the long-term integrity of the geological barrier requires an extrapolation of a highly nonlinear deformation behavior to up to 1 million years, while the underlying experimental investigations in the laboratory or in situ have a duration of only days, weeks or at most some years. Several advanced constitutive models were developed and continuously improved to describe the dependences of various deformation phenomena in rock salt on in-situ relevant boundary conditions: transient and steady-state creep, evolution of damage and dilatancy in the DRZ, failure, post-failure behavior, residual strength, damage and dilatancy reduction, and healing. In a joint project series between 2004 and 2016, fundamental features of the advanced models were investigated and compared in detail with benchmark calculations. The study included procedures for the determination of characteristic salt-type-specific model parameter values and for the performance of numerical calculations of underground structures. Based on the results of this work and on specific laboratory investigations, the rock mechanical modeling is currently developed further in a common research project of experts from Germany and the United States. In this presentation, an overview of the work and results of the project series is given and the current joint research project WEIMOS is introduced.

  5. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved even further the calculation time, which reached 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of MRT treatment in a dog's head is presented, showing the performance of the calculation engine. Conclusions: The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist in its integration into a user-friendly graphical front-end.

  6. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    PubMed

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two different media. Optimization of the simulation parameters and the use of VR techniques saved a significant amount of computation time. Finally, parallelization of the simulations improved even further the calculation time, which reached 1 day for a typical irradiation case envisaged in the forthcoming clinical trials in MRT. An example of MRT treatment in a dog's head is presented, showing the performance of the calculation engine. The development of the first MC-based calculation engine for the future TPS devoted to MRT has been accomplished. This will constitute an essential tool for the future clinical trials on pets at the ESRF. The MC engine is able to calculate dose distributions in micrometer-sized bins in complex voxelized CT structures in a reasonable amount of time. Minimization of the computation time by using several approaches has led to timings that are adequate for pet radiotherapy at synchrotron facilities. The next step will consist in its integration into a user-friendly graphical front-end.

  7. Monte Carlo modeling of a 6 and 18 MV Varian Clinac medical accelerator for in-field and out-of-field dose calculations: development and validation

    PubMed Central

    Bednarz, Bryan; Xu, X George

    2012-01-01

    There is a serious and growing concern about the increased risk of radiation-induced second cancers and late tissue injuries associated with radiation treatment. To better understand and to more accurately quantify non-target organ doses due to scatter and leakage radiation from medical accelerators, a detailed Monte Carlo model of the medical linear accelerator is needed. This paper describes the development and validation of a detailed accelerator model of the Varian Clinac operating at 6 and 18 MV beam energies. Over 100 accelerator components have been defined and integrated using the Monte Carlo code MCNPX. A series of in-field and out-of-field dose validation studies were performed. In-field dose distributions calculated using the accelerator models were tuned to match measurement data that are considered the de facto ‘gold standard’ for the Varian Clinac accelerator provided by the manufacturer. Field sizes of 4 cm × 4 cm, 10 cm × 10 cm, 20 cm × 20 cm and 40 cm × 40 cm were considered. The local difference between calculated and measured dose on the percent depth dose curve was less than 2% for all locations. The local difference between calculated and measured dose on the dose profile curve was less than 2% in the plateau region and less than 2 mm in the penumbra region for all locations. Out-of-field dose profiles were calculated and compared to measurement data for both beam energies for field sizes of 4 cm × 4 cm, 10 cm × 10 cm and 20 cm × 20 cm. For all field sizes considered in this study, the average local difference between calculated and measured dose for the 6 and 18 MV beams was 14 and 16%, respectively. In addition, a method for determining neutron contamination in the 18 MV operating model was validated by comparing calculated in-air neutron fluence with reported calculations and measurements. The average difference between calculated and measured neutron fluence was 20%. As one of the most detailed accelerator models for both in-field and out-of-field dose calculations, the model will be combined with anatomically realistic computational patient phantoms into a computational framework to calculate non-target organ doses to patients from various radiation treatment plans. PMID:19141879

  8. Radiative gas dynamics of the Fire-II superorbital space vehicle

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2016-03-01

    The rates of convective and radiative heating of the Fire-II reentry vehicle are calculated, and the results are compared with experimental flight data. The computational model is based on solving a complete set of equations for (i) the radiative gas dynamics of a physically and chemically nonequilibrium viscous heatconducting gas and (ii) radiative transfer in 2D axisymmetric statement. The spectral optical parameters of high-temperature gases are calculated using ab initio quasi-classical and quantum-mechanical methods. The transfer of selective thermal radiation in terms of atomic lines is calculated using the line-by-line method on a specially generated computational grid that is nonuniform in radiation wavelength.

  9. Development and Validation of a New Fallout Transport Method Using Variable Spectral Winds

    NASA Astrophysics Data System (ADS)

    Hopkins, Arthur Thomas

    A new method has been developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds, to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud. Further validation was performed by comparing computed and actual trajectories of a high explosive dust cloud (DIRECT COURSE). Using an error propagation formula, it was determined that uncertainties in spectral wind components produce less than four percent of the total dose rate variance. In summary, this research demonstrated the feasibility of using spectral coefficients for fallout transport calculations, developed a two-step smearing model to treat variable winds, and showed that uncertainties in spectral winds do not contribute significantly to the error in computed dose rate.

  10. Thermodynamic equilibrium-air correlations for flowfield applications

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.

    1981-01-01

    Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.

  11. A computer program for calculating laminar and turbulent boundary layers for two-dimensional time-dependent flows

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Carr, L. W.

    1978-01-01

    A computer program is described which provides solutions of two dimensional equations appropriate to laminar and turbulent boundary layers for boundary conditions with an external flow which fluctuates in magnitude. The program is based on the numerical solution of the governing boundary layer equations by an efficient two point finite difference method. An eddy viscosity formulation was used to model the Reynolds shear stress term. The main features of the method are briefly described and instructions for the computer program with a listing are provided. Sample calculations to demonstrate its usage and capabilities for laminar and turbulent unsteady boundary layers with an external flow which fluctuated in magnitude are presented.

  12. Establishing a relationship between maximum torque production of isolated joints to simulate EVA ratchet push-pull maneuver: A case study

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash; Maida, James; Hasson, Scott; Greenisen, Michael; Woolford, Barbara

    1993-01-01

    As manned exploration of space continues, analytical evaluation of human strength characteristics is critical. These extraterrestrial environments will spawn issues of human performance which will impact the designs of tools, work spaces, and space vehicles. Computer modeling is an effective method of correlating human biomechanical and anthropometric data with models of space structures and human work spaces. The aim of this study is to provide biomechanical data from isolated joints to be utilized in a computer modeling system for calculating torque resulting from any upper extremity motions: in this study, the ratchet wrench push-pull operation (a typical extravehicular activity task). Established here are mathematical relationships used to calculate maximum torque production of isolated upper extremity joints. These relationships are a function of joint angle and joint velocity.

  13. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
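
    The coupling described, where cell temperature fixes efficiency while efficiency fixes the heat that must be rejected, can be closed with a fixed-point iteration around the single-diode equation I = Iph − I0(exp(qV/nkT) − 1). The sketch below does this for one cell with placeholder diode and thermal parameters and a crude lumped heat path; it illustrates the iteration only, not the paper's ray-trace-driven array model.

```python
import numpy as np

q, kB = 1.602e-19, 1.381e-23
A = 4e-4                 # cell area [m^2]
SUN = 1353.0             # AM0 solar constant [W/m^2]

def max_power(conc, T):
    """Peak output of one cell from the single-diode model
    I = Iph - I0*(exp(qV/(n kB T)) - 1); all parameters are placeholders."""
    Iph = 0.04 * conc                     # photocurrent, scales with suns [A]
    I0, nf = 1e-9 * (T / 300.0)**3, 1.3   # crude diode saturation current
    V = np.linspace(0.0, 0.9, 500)
    I = Iph - I0 * (np.exp(q * V / (nf * kB * T)) - 1.0)
    return float(np.max(np.clip(V * I, 0.0, None)))

def operating_point(conc, T_sink=300.0, h=30.0):
    """Fixed-point iteration: temperature fixes efficiency, efficiency
    fixes the absorbed waste heat, waste heat fixes temperature."""
    P_in = SUN * conc * A
    T = T_sink
    for _ in range(100):
        P_out = max_power(conc, T)
        T_new = T_sink + (P_in - P_out) / (h * A)   # lumped thermal path
        if abs(T_new - T) < 1e-4:
            break
        T = 0.5 * (T + T_new)                       # damped update
    return T, P_out, P_out / P_in

for conc in (1.0, 2.0, 5.0):
    T, P, eta = operating_point(conc)
    print(f"{conc:.0f} suns: T = {T:6.1f} K, P = {P*1e3:6.1f} mW, eta = {eta:.3f}")
```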

14. Investigations of mechanical, electronic, and magnetic properties of non-magnetic MgTe and ferro-magnetic Mg0.75TM0.25Te (TM = Fe, Co, Ni): An ab-initio calculation

    NASA Astrophysics Data System (ADS)

    Mahmood, Q.; Alay-e-Abbas, S. M.; Mahmood, I.; Mahmood, Asif; Noor, N. A.

    2016-04-01

    The mechanical, electronic and magnetic properties of non-magnetic MgTe and ferro-magnetic (FM) Mg0.75TM0.25Te (TM = Fe, Co, Ni) in the zinc-blende phase are studied by ab-initio calculations for the first time. We use the generalized gradient approximation functional for computing the structural stability and mechanical properties, while the modified Becke and Johnson local (spin) density approximation (mBJLDA) is utilized for determining the electronic and magnetic properties. By comparing the energies of non-magnetic and FM calculations, we find that the compounds are stable in the FM phase, which is confirmed by their structural stabilities in terms of enthalpy of formation. Detailed descriptions of elastic properties of Mg0.75TM0.25Te alloys in the FM phase are also presented. For electronic properties, the spin-polarized electronic band structures and density of states are computed, showing that these compounds are direct bandgap materials with strong hybridizations of TM 3d states and Te p states. Further, the ferromagnetism is discussed in terms of the Zener free electron model, RKKY model and double exchange model. The charge density contours in the (110) plane are calculated to study bonding properties. The spin exchange splitting and crystal field splitting energies are also calculated. The distribution of electron spin density is employed in computing the magnetic moments appearing at the magnetic sites (Fe, Co, Ni), as well as at the non-magnetic sites (Mg, Te). It is found that the p-d hybridization not only causes magnetic moments on the magnetic sites but also induces negligibly small magnetic moments at the non-magnetic sites.

  15. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets are stainless steel tubes on which layers of high-enriched uranium are superimposed; the irradiated tubes are intended to produce fission products, material widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with reactor performance, one such disturbance being changes in flux or reactivity. A method is therefore needed for calculating safety margins under the ongoing configuration changes that occur during the life of the reactor, and making the code faster becomes essential. An advantage of the perturbation method is that the neutron safety margin for the research reactor can be reassessed without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally demanding, and several parallel iterative algorithms have been developed for solving the resulting large, sparse matrix systems. The red-black Gauss-Seidel iteration and a parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficient. In this research, a code for reactivity calculation with parallel processing was developed as part of the safety analysis; the calculation can be performed more quickly and efficiently by exploiting parallel processing on a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
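
    The red-black ordering mentioned above colors the grid like a checkerboard so that every point of one color depends only on points of the other color; each half-sweep can then be updated in parallel. Below is a minimal sketch for a one-group, fixed-source diffusion problem on a square grid with the flux held at zero on the boundary; the grid size, cross sections, and source are placeholders, and the production code's multigroup criticality (power) iteration is not included.

```python
import numpy as np

def red_black_gauss_seidel(phi, S, h, D=1.0, sigma_a=0.05, sweeps=500):
    """Red-black Gauss-Seidel for -D*lap(phi) + sigma_a*phi = S on a
    square grid. Each colored half-sweep touches independent points,
    so it is trivially parallel (vectorized here)."""
    ny, nx = phi.shape
    diag = 4.0 * D / h**2 + sigma_a
    for _ in range(sweeps):
        for color in (0, 1):
            for j in range(1, ny - 1):
                i0 = 1 + (j + color) % 2          # first interior point of this color
                nb = (phi[j - 1, i0:nx - 1:2] + phi[j + 1, i0:nx - 1:2] +
                      phi[j, i0 - 1:nx - 2:2] + phi[j, i0 + 1:nx:2])
                phi[j, i0:nx - 1:2] = (S[j, i0:nx - 1:2] + D / h**2 * nb) / diag
    return phi

n, h = 65, 1.0 / 64
phi = np.zeros((n, n))
S = np.zeros((n, n))
S[n // 2 - 4:n // 2 + 4, n // 2 - 4:n // 2 + 4] = 1.0   # central source block
phi = red_black_gauss_seidel(phi, S, h)
print("peak flux:", phi.max())
```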

  16. A computational workflow for designing silicon donor qubits

    DOE PAGES

    Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...

    2016-09-19

    Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.

  17. Development of a model to compute the extension of life supporting zones for Earth-like exoplanets.

    PubMed

    Neubauer, David; Vrtala, Aron; Leitner, Johannes J; Firneis, Maria G; Hitzenberger, Regina

    2011-12-01

    A radiative convective model to calculate the width and the location of the life supporting zone (LSZ) for different, alternative solvents (i.e. other than water) is presented. This model can be applied to the atmospheres of the terrestrial planets in the solar system as well as (hypothetical, Earth-like) terrestrial exoplanets. Cloud droplet formation and growth are investigated using a cloud parcel model. Clouds can be incorporated into the radiative transfer calculations. Test runs for Earth, Mars and Titan show a good agreement of model results with observations.

  18. Biological production models as elements of coupled, atmosphere-ocean models for climate research

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Sathyendranath, Shubha

    1991-01-01

    Process models of phytoplankton production are discussed with respect to their suitability for incorporation into global-scale numerical ocean circulation models. Exact solutions are given for mixed-layer and daily integrals of analytic, wavelength-independent models of primary production. Within this class of model, the bias incurred by using a triangular approximation (rather than a sinusoidal one) to the variation of surface irradiance through the day is computed. Efficient computation algorithms are given for the nonspectral models. More exact calculations require a spectrally sensitive treatment. Such models exist but must be integrated numerically over depth and time. For these integrations, resolution in wavelength, depth, and time is considered and recommendations are made for efficient computation. The extrapolation of the one-(spatial)-dimension treatment to large horizontal scale is discussed.
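
    The triangular-versus-sinusoidal bias is easy to make concrete: with the same noon irradiance and day length, the two shapes deliver different daily light doses (1/2 versus 2/pi of E_noon x D), so the triangle biases integrated production low. The sketch below quantifies this for a generic saturating, wavelength-independent light-response curve; the curve form and parameter values are placeholders, not the paper's spectral models.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule, avoiding version-dependent numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def daily_production(shape, E_noon=800.0, D=12.0, Pm=1.0, Ek=200.0):
    """Daily integral of the saturating curve P(E) = Pm*(1 - exp(-E/Ek))
    for a given shape of surface irradiance over a day of length D hours."""
    t = np.linspace(0.0, D, 2001)
    if shape == "sine":
        E = E_noon * np.sin(np.pi * t / D)
    else:                         # triangle: linear rise to noon, then fall
        E = E_noon * (1.0 - np.abs(2.0 * t / D - 1.0))
    return trapz(Pm * (1.0 - np.exp(-E / Ek)), t)

P_sin = daily_production("sine")
P_tri = daily_production("triangle")
print(f"triangular/sinusoidal daily production = {P_tri / P_sin:.4f}")
```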

  19. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
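
    The core numerical step of such a technique is differencing the nonlinear simulation to obtain the Jacobians of x' = f(x, u) at a trim point, yielding the perturbation model Δx' = A Δx + B Δu. Below is a generic central-difference sketch with a toy two-state pitch model standing in for the vehicle simulation; the NASA program's equation generation and solution machinery is not reproduced.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians of x' = f(x, u) about (x0, u0),
    giving the linear perturbation model dx' = A dx + B du."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
    return A, B

# toy nonlinear "vehicle": states x = [alpha, q], control u = [elevator]
def f(x, u):
    alpha, q = x
    return np.array([q - 0.5 * np.sin(alpha) + 0.1 * u[0],
                     -2.0 * alpha - 0.3 * q + 1.5 * u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
print("A =\n", A, "\nB =\n", B)
```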

  20. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly more difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation -- many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.
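
    Whatever the meshing strategy, the particle advance at the heart of such codes is typically the Boris scheme: a half electric kick, a magnetic rotation, and a second half kick. The sketch below implements that push for a single electron in uniform fields; it is a generic textbook kernel, not code from CONDOR, ARGUS, TALUS, or MadMax, and the field gather/scatter on conformal or unstructured meshes is precisely the hard part the article discusses.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    Conserves kinetic energy exactly in a pure magnetic field."""
    v_minus = v + 0.5 * q_m * E * dt
    t = 0.5 * q_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * E * dt
    return x + v_new * dt, v_new

# electron gyrating in a uniform magnetic field
x, v = np.zeros(3), np.array([1e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
q_m = -1.759e11                      # electron charge-to-mass ratio [C/kg]
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m, dt=1e-11)
print("relative speed drift:", abs(np.linalg.norm(v) / 1e5 - 1.0))
```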

  1. Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct, and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.

  2. BEHAVE: fire behavior prediction and fuel modeling system-BURN Subsystem, part 1

    Treesearch

    Patricia L. Andrews

    1986-01-01

    Describes BURN Subsystem, Part 1, the operational fire behavior prediction subsystem of the BEHAVE fire behavior prediction and fuel modeling system. The manual covers operation of the computer program, assumptions of the mathematical models used in the calculations, and application of the predictions.

  3. Variational calculation of macrostate transition rates

    NASA Astrophysics Data System (ADS)

    Ulitsky, Alex; Shalloway, David

    1998-08-01

    We develop the macrostate variational method (MVM) for computing reaction rates of diffusive conformational transitions in multidimensional systems by a variational coarse-grained "macrostate" decomposition of the Smoluchowski equation. MVM uses multidimensional Gaussian packets to identify and focus computational effort on the "transition region," a localized, self-consistently determined region in conformational space positioned roughly between the macrostates. It also determines the "transition direction," which optimally specifies the projected potential of mean force for mean first-passage time calculations. MVM is complementary to variational transition state theory in that it can efficiently solve multidimensional problems but does not accommodate memory-friction effects. It has been tested on model 1- and 2-dimensional potentials and on the 12-dimensional conformational transition between the isoforms of a six-atom microcluster having only van der Waals interactions. Comparison with Brownian dynamics calculations shows that MVM obtains equivalent results at a fraction of the computational cost.

  4. Data supporting the prediction of the properties of eutectic organic phase change materials.

    PubMed

    Kahwaji, Samer; White, Mary Anne

    2018-04-01

    The data presented in this article include the molar masses, melting temperatures, latent heats of fusion and temperature-dependent heat capacities of fifteen fatty acid phase change materials (PCMs). The data are used in conjunction with the thermodynamic models discussed in Kahwaji and White (2018) [1] to develop a computational tool that calculates the eutectic compositions and thermal properties of eutectic mixtures of PCMs. The computational tool is part of this article and consists of a Microsoft Excel® file available in Mendeley Data repository [2]. A description of the computational tool along with the properties of nearly 100 binary mixtures of fatty acid PCMs calculated using this tool are also included in the present article. The Excel® file is designed such that it can be easily modified or expanded by users to calculate the properties of eutectic mixtures of other classes of PCMs.
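
    A computational tool of this kind typically intersects two ideal Schroeder-van Laar liquidus branches, ln x_i = -(ΔH_i/R)(1/T - 1/T_m,i), to find the eutectic composition and temperature. The sketch below does this by a simple scan for a binary mixture; the enthalpies and melting points are made-up fatty-acid-like values, not entries from the article's dataset, and the article's Excel tool may use a different solution scheme.

```python
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

def liquidus_T(x, dH, Tm):
    """Ideal Schroeder-van Laar liquidus: ln x = -(dH/R)(1/T - 1/Tm),
    solved for T at mole fraction x of the freezing component."""
    return 1.0 / (1.0 / Tm - R * np.log(x) / dH)

# placeholder inputs: (fusion enthalpy [J/mol], melting temperature [K])
A = (45_000.0, 330.0)   # a stearic-acid-like PCM (illustrative values)
B = (38_000.0, 317.0)   # a palmitic-acid-like PCM (illustrative values)

# eutectic point: intersection of the two liquidus branches; scan x_A
xA = np.linspace(0.01, 0.99, 9801)
TA = liquidus_T(xA, *A)           # branch along which A freezes out
TB = liquidus_T(1.0 - xA, *B)     # branch along which B freezes out
i = np.argmin(np.abs(TA - TB))
print(f"eutectic: x_A = {xA[i]:.3f}, T = {TA[i]:.1f} K")
```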

  5. The radiation environment of OSO missions from 1974 to 1978

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1973-01-01

    Trapped particle radiation levels on several OSO missions were calculated for nominal trajectories using improved computational methods and new electron environment models. Temporal variations of the electron fluxes were considered and partially accounted for. Magnetic field calculations were performed with a current field model and extrapolated to a later epoch with linear time terms. Orbital flux integration results, which are presented in graphical and tabular form, are analyzed, explained, and discussed.

  6. Web-phreeq: a WWW instructional tool for modeling the distribution of chemical species in water

    NASA Astrophysics Data System (ADS)

    Saini-Eidukat, Bernhardt; Yahin, Andrew

    1999-05-01

    A WWW-based tool, WEB-PHREEQ, was developed for classroom teaching and for routine calculation of low-temperature aqueous speciation. Accessible from any computer with an internet-connected, forms-capable WWW browser, WEB-PHREEQ provides a user interface and other support for modeling, creates a properly formatted input file, passes it to the public domain program PHREEQC, and returns the output to the WWW browser. Users can calculate the equilibrium speciation of a solution over a range of temperatures or can react solid minerals or gases with a particular water and examine the resulting chemistry. WEB-PHREEQ is one of a number of interactive distributed-computing programs available on the WWW that are of interest to geoscientists.

  7. Assessment of cell death mechanisms triggered by 177Lu-anti-CD20 in lymphoma cells.

    PubMed

    Azorín-Vega, E; Rojas-Calderón, E; Martínez-Ventura, B; Ramos-Bernal, J; Serrano-Espinoza, L; Jiménez-Mancilla, N; Ordaz-Rosado, D; Ferro-Flores, G

    2018-08-01

    The aim of this research was to evaluate the cell cycle redistribution and activation of early and late apoptotic pathways in lymphoma cells after treatment with 177Lu-anti-CD20. Experimental and computer models were used to calculate the radiation absorbed dose to cancer cell nuclei. The computer model (Monte Carlo, PENELOPE) consisted of twenty spheres representing cells, each with an inner sphere (the cell nucleus), embedded in culture media. Radiation emissions of the radiopharmaceutical located in cell membranes and in culture media were considered for the nuclei dose calculations. Flow cytometric analyses demonstrated that doses as low as 4.8 Gy are enough to induce cell cycle arrest and activate late apoptotic pathways.

  8. 3D nozzle flow simulations including state-to-state kinetics calculation

    NASA Astrophysics Data System (ADS)

    Cutrone, L.; Tuttafesta, M.; Capitelli, M.; Schettino, A.; Pascazio, G.; Colonna, G.

    2014-12-01

    In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma. In this paper, we present an optimized methodology to approach plasma numerical simulation by state-to-state kinetics calculations in a fully 3D Navier-Stokes CFD solver. Numerical simulations of an expanding flow are presented aimed at comparing the behavior of state-to-state chemical kinetics models with respect to the macroscopic thermochemical non-equilibrium models that are usually used in the numerical computation of high temperature hypersonic flows. The comparison is focused both on the differences in the numerical results and on the computational effort associated with each approach.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This journal contains 7 articles pertaining to astrophysics. The first article is an overview of the other 6 articles and also a tribute to Jim Wilson and his work in the fields of general relativity and numerical astrophysics. The six articles are on the following subjects: (1) computer simulations of black hole accretion; (2) calculations on the collapse of the iron core of a massive star; (3) stellar-collapse models which reveal a possible site for nucleosynthesis of elements heavier than iron; (4) modeling sources for gravitational radiation; (5) the development of a computer program for finite-difference mesh calculations and its applications to astrophysics; (6) the use of neutrinos with nonzero rest mass to explain the universe. Abstracts of each of the articles were prepared separately. (SC)

  10. Progress in Earth System Modeling since the ENIAC Calculation

    NASA Astrophysics Data System (ADS)

    Fung, I.

    2009-05-01

    The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.

  11. Phased models for evaluating the performability of computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.; Meyer, J. F.

    1979-01-01

    A phase-by-phase modelling technique is introduced to evaluate a fault-tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
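
    The phase-by-phase evaluation can be pictured as alternating matrix products: an intraphase transition matrix propagates state probabilities within each phase, and an interphase transition matrix maps them across a phase boundary. A minimal sketch with made-up two-state (operational/failed) matrices follows; the numbers are illustrative, not from the paper.

    ```python
    import numpy as np

    # States: 0 = operational, 1 = failed. Row-stochastic transition matrices.
    intraphase = [
        np.array([[0.95, 0.05], [0.0, 1.0]]),   # phase 1 dynamics (illustrative)
        np.array([[0.90, 0.10], [0.0, 1.0]]),   # phase 2 dynamics
    ]
    interphase = [
        np.array([[0.99, 0.01], [0.0, 1.0]]),   # reconfiguration at the boundary
    ]

    p = np.array([1.0, 0.0])                     # start operational
    for k, P in enumerate(intraphase):
        p = p @ P                                # evolve within phase k
        if k < len(interphase):
            p = p @ interphase[k]                # cross the phase boundary
    print(f"P(mission success) = {p[0]:.4f}")
    ```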

  12. Computer program for the reservoir model of metabolic crossroads.

    PubMed

    Ribeiro, J M; Juzgado, D; Crespo, E; Sillero, A

    1990-01-01

    A program containing 344 sentences, written in BASIC and adapted to run on personal computers (PCs), has been developed to simulate the reservoir model of metabolic crossroads. The program draws the holes of the reservoir with shapes reflecting the Vmax, Km (S0.5) and cooperativity coefficients (n) of the enzymes, and calculates both the actual velocities and the percentage contribution of every enzyme to the overall removal of their common substrate.
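
    The velocities and percentage contributions the program reports follow from the Hill equation evaluated for each enzyme at the shared substrate concentration. A brief re-expression of that calculation in Python (rather than the original BASIC), with illustrative kinetic constants:

    ```python
    # Velocity of each enzyme at a common substrate concentration S (Hill equation):
    #   v_i = Vmax_i * S**n_i / (S05_i**n_i + S**n_i)
    enzymes = {  # name: (Vmax, S0.5, Hill coefficient n) -- illustrative values
        "E1": (10.0, 0.5, 1.0),
        "E2": ( 4.0, 2.0, 2.0),
        "E3": ( 6.0, 1.0, 4.0),
    }
    S = 1.5  # common substrate concentration

    v = {name: Vmax * S**n / (K**n + S**n) for name, (Vmax, K, n) in enzymes.items()}
    total = sum(v.values())
    for name, vi in v.items():
        print(f"{name}: v = {vi:5.2f}, contribution = {100 * vi / total:5.1f}%")
    ```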

  13. First-principles calculations of mobility

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Karthik

    First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides of technological importance (SrTiO3, BaSnO3, Ga2O3, and WO3), we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.
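
    The Gaussian-smearing device mentioned in the abstract replaces the energy-conserving delta function in a Brillouin-zone sum with a normalized Gaussian of width sigma, which must then be converged jointly with the sampling grid. A minimal sketch with stand-in energies (not a real band structure):

    ```python
    import numpy as np

    def gaussian_delta(E, sigma):
        """Normalized Gaussian used to approximate delta(E)."""
        return np.exp(-(E / sigma)**2 / 2.0) / (sigma * np.sqrt(2.0 * np.pi))

    # Toy example: density of final states at energy E0 from a sampled "band".
    rng = np.random.default_rng(0)
    band_energies = rng.uniform(0.0, 1.0, size=100_000)  # stand-in for eps_nk on a grid
    E0 = 0.5

    for sigma in (0.1, 0.03, 0.01, 0.003):
        dos = gaussian_delta(band_energies - E0, sigma).mean()
        print(f"sigma = {sigma:6.3f} -> smeared DOS at E0: {dos:.4f}")  # converges to ~1.0
    ```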

  14. Methodology for extracting local constants from petroleum cracking flows

    DOEpatents

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

    A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code are used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
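
    Step (4) of the methodology, adjusting the local kinetic constants until calculated yields match experiment, is in essence a least-squares fit wrapped around the coupled CFD/kinetics solver. The toy sketch below substitutes a simple first-order conversion model for that solver; it is not the patented FCC reaction set.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Stand-in for the coupled CFD/kinetics solver: first-order conversion
    # y = 1 - exp(-k * tau) at residence time tau (illustrative only).
    def predicted_yield(k, tau):
        return 1.0 - np.exp(-k * tau)

    tau_exp = np.array([1.0, 2.0, 4.0, 8.0])       # test-matrix conditions [s]
    y_exp = np.array([0.18, 0.33, 0.55, 0.80])     # measured product yields (toy data)

    residuals = lambda k: predicted_yield(k[0], tau_exp) - y_exp
    fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
    print(f"extracted local rate constant k = {fit.x[0]:.3f} 1/s")
    ```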

  15. Earth's external magnetic fields at low orbital altitudes

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.

    1990-01-01

    Under our Jun. 1987 proposal, Magnetic Signatures of Near-Earth Distributed Currents, we proposed to render operational a modeling procedure that had been previously developed to compute the magnetic effects of distributed currents flowing in the magnetosphere-ionosphere system. After adaptation of the software to our computing environment we would apply the model to low altitude satellite orbits and would utilize the MAGSAT data suite to guide the analysis. During the first year, basic computer codes to run model systems of Birkeland and ionospheric currents and several graphical output routines were made operational on a VAX 780 in our research facility. Software performance was evaluated using an input matchstick ionospheric current array, field aligned currents were calculated and magnetic perturbations along hypothetical satellite orbits were calculated. The basic operation of the model was verified. Software routines to analyze and display MAGSAT satellite data in terms of deviations with respect to the earth's internal field were also made operational during the first year effort. The complete set of MAGSAT data to be used for evaluation of the models was received at the end of the first year. A detailed annual report in May 1989 described these first year activities completely. That first annual report is included by reference in this final report. This document summarizes our additional activities during the second year of effort and describes the modeling software, its operation, and includes as an attachment the deliverable computer software specified under the contract.

  16. New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters

    PubMed Central

    Zhang, Fan; Song, Kaijun; Fan, Yong

    2017-01-01

    A two-dimensional (2D) diffraction model for the calculation of the diffraction field in 2D space, and its applications to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression to calculate the diffraction field in 2D space. The diffraction field is regarded as a superposition integral in 2D space. The calculated results obtained from the proposed diffraction model agree well with those from the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array. The reflector is applied as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters agree well with those obtained from HFSS, verifying the novel design method for power splitters and showing good application prospects for the proposed 2D diffraction model. PMID:28181514
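
    In 2D the free-space Green's function is (i/4)H0^(1)(kr), so a Huygens-type superposition over an aperture becomes a one-dimensional sum of Hankel functions instead of the 3D spherical-wave integral. A minimal sketch of slit diffraction under that assumption (all parameters illustrative):

    ```python
    import numpy as np
    from scipy.special import hankel1

    # 2D Huygens-type superposition: the field at P is a sum over aperture samples
    # of the incident field times the 2D free-space Green's function (i/4) H0^(1)(k r).
    wavelength = 1.0                     # arbitrary units (illustrative)
    k = 2.0 * np.pi / wavelength
    aperture = np.linspace(-2.0, 2.0, 400)   # slit along x at z = 0
    dx = aperture[1] - aperture[0]

    def field_at(x, z):
        r = np.hypot(x - aperture, z)        # distances from all aperture samples
        return np.sum(0.25j * hankel1(0, k * r)) * dx  # unit-amplitude incident field

    x_obs = np.linspace(-6.0, 6.0, 121)
    I = np.abs([field_at(x, 10.0) for x in x_obs])**2
    print(f"on-axis intensity: {I[60]:.4f}, edge intensity: {I[0]:.4f}")
    ```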

  17. Two-way FSI modelling of blood flow through CCA accounting on-line medical diagnostics in hypertension

    NASA Astrophysics Data System (ADS)

    Czechowicz, K.; Badur, J.; Narkiewicz, K.

    2014-08-01

    Flow parameters can induce pathological changes in the arteries. We propose a method to assess those parameters using a 3D computer model of the flow in the Common Carotid Artery. Input data were acquired using an automatic 2D ultrasound wall tracking system. These data were used to generate a 3D geometry of the artery. The diameter and wall thickness were assessed individually for every patient, but the artery was taken as a 75 mm straight tube. The Young's modulus for the arterial walls was calculated using the pulse pressure, diastolic (minimal) diameter, and wall thickness (IMT). Blood flow was derived from the pressure waveform using a 2-parameter Windkessel model. The blood is assumed to be non-Newtonian. The computational models were generated and calculated using commercial code. The coupling method required the use of the Arbitrary Lagrangian-Euler formulation to solve the Navier-Stokes and Navier-Lame equations in a moving domain. The calculations showed that the distention of the walls in the model is not significantly different from the measurements. Results from the model have been used to locate additional risk factors, such as wall shear stress or circumferential stress, that may predict adverse hypertension complications.
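
    The 2-parameter Windkessel step, deriving flow from the measured pressure waveform, amounts to Q(t) = P(t)/R + C dP/dt. A minimal sketch with a synthetic pressure waveform (parameter values illustrative, not patient data):

    ```python
    import numpy as np

    # 2-element Windkessel: Q(t) = P(t)/R + C * dP/dt
    R = 1.0e9    # peripheral resistance [Pa s / m^3]  (illustrative)
    C = 1.0e-9   # arterial compliance   [m^3 / Pa]    (illustrative)

    t = np.linspace(0.0, 1.0, 1000)                  # one cardiac cycle [s]
    # Synthetic pressure waveform around 100 mmHg, expressed in Pa:
    mmHg = 133.322
    P = (100 + 20 * np.sin(2 * np.pi * t) + 8 * np.sin(4 * np.pi * t)) * mmHg

    Q = P / R + C * np.gradient(P, t)                # flow waveform [m^3/s]
    print(f"mean flow: {Q.mean() * 1e6:.2f} mL/s, peak flow: {Q.max() * 1e6:.2f} mL/s")
    ```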

  18. Integrated model of the shallow and deep hydrothermal systems in the East Mesa area, Imperial Valley, California

    USGS Publications Warehouse

    Riney, T. David; Pritchett, J.W.; Rice, L.F.

    1982-01-01

    Geological, geophysical, thermal, petrophysical and hydrological data available for the East Mesa hydrothermal system that are pertinent to the construction of a computer model of the natural flow of heat and fluid mass within the system are assembled and correlated. A conceptual model of the full system is developed and a subregion selected for quantitative modeling. By invoking the Boussinesq approximation, valid for describing the natural flow of heat and mass in a liquid hydrothermal system, it is found practical to carry computer simulations far enough in time to ensure that steady-state conditions are obtained. Initial calculations for an axisymmetric model approximating the system demonstrate that the vertical formation permeability of the deep East Mesa system must be very low (kv ~ 0.25 to 0.5 md). Since subsurface temperature and surface heat flow data exhibit major deviations from the axisymmetric approximation, exploratory three-dimensional calculations are performed to assess the effects of various mechanisms which might operate to produce such observed asymmetries. A three-dimensional model evolves from this iterative data synthesis and computer analysis which includes a hot fluid convective source distributed along a leaky fault radiating northward from the center of the hot spot and realistic variations in the reservoir formation properties.

  19. Hadronic light-by-light scattering contribution to the muon anomalous magnetic moment from lattice QCD.

    PubMed

    Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; Izubuchi, Taku

    2015-01-09

    The most compelling possibility for a new law of nature beyond the four fundamental forces comprising the standard model of high-energy physics is the discrepancy between measurements and calculations of the muon anomalous magnetic moment. Until now a key part of the calculation, the hadronic light-by-light contribution, has only been accessible from models of QCD, the quantum description of the strong force, whose accuracy at the required level may be questioned. A first principles calculation with systematically improvable errors is needed, along with the upcoming experiments, to decisively settle the matter. For the first time, the form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in such a framework, lattice QCD+QED. A nonperturbative treatment of QED is used and checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed for which statistically significant signals are obtained. Initial results are promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.

  20. Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don

    2015-03-01

    We combine constrained first-principles density functional theory with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that adding the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For the ab initio calculations, we have extended our Locally Selfconsistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction, and we apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently in order to investigate Ni, where the fluctuation in the magnitude of the local magnetic moments is of importance equal to their directional fluctuations. We present results for Ni comparing calculations that consider only the moment directions with those that also include fluctuations of the magnetic moment magnitude in the Curie temperature. This research was sponsored by the Department of Energy, Offices of Basic Energy Science and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.
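
    The Wang-Landau step can be illustrated on a much simpler system: the algorithm random-walks in energy, accepting moves with probability min(1, g(E_old)/g(E_new)) and updating a running density-of-states estimate until the energy histogram is flat. A self-contained sketch for a 4x4 Ising model follows (a toy stand-in for the constrained-LSMS Hamiltonian of the abstract):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 4
    spins = rng.choice([-1, 1], size=(L, L))

    def energy(s):
        # Nearest-neighbor Ising energy with periodic boundaries, each bond once
        return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

    log_g, hist = {}, {}
    E = energy(spins)
    log_f = 1.0
    while log_f > 1e-3:
        for _ in range(20000):
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                    + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            E_new = E + dE
            # Accept with probability min(1, g(E)/g(E_new)), in log form:
            if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                spins[i, j] *= -1
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + log_f       # update DOS estimate
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > 0.8 * np.mean(list(hist.values())):  # flatness check
            log_f /= 2.0                               # refine modification factor
            hist = {}

    print("visited energies:", sorted(log_g))
    ```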

  1. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for the numerical modeling of the hydrodynamics of tsunami waves were developed without taking advantage of modern computing facilities, and considerable acceleration is possible with parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for shared-memory multiprocessor systems). Nowadays, multiprocessor systems are easily accessible to everyone, their cost is much lower than that of clusters, and this allows all programmers to apply multithreading algorithms on researchers' desktop computers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing threads, which yields almost linear scalability of the program. In the new version of NAMI DANCE, the OpenMP multi-threaded algorithm provides an 80% gain in speed over the single-threaded version on a dual-processor unit, and a 320% gain was attained on a four-core processor unit. It was thus possible to considerably reduce calculation times on scientific workstations (desktops) without a complete rewrite of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with the increased computational speed can be used not only for research purposes but also in real-time tsunami warning systems.

  2. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
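
    Of the three inversion approaches listed, the Tikhonov-regularized inverse has a compact closed form in k-space, because the susceptibility-to-field step is a convolution with the unit dipole kernel D(k) = 1/3 - kz^2/|k|^2. A minimal 3D sketch with a synthetic source and no noise model follows; the split Bregman TV solver favored in the article is considerably more involved.

    ```python
    import numpy as np

    def dipole_kernel(shape):
        """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 on an FFT grid."""
        kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = np.inf                 # avoid 0/0 at the k-space origin
        return 1.0 / 3.0 - kz**2 / k2

    shape = (64, 64, 64)
    chi = np.zeros(shape)
    chi[24:40, 24:40, 24:40] = 1e-6          # synthetic susceptibility source

    D = dipole_kernel(shape)
    field = np.fft.ifftn(D * np.fft.fftn(chi)).real   # forward step: chi -> fieldmap

    lam = 1e-3                                # Tikhonov regularization parameter
    chi_rec = np.fft.ifftn(D / (D**2 + lam) * np.fft.fftn(field)).real

    corr = np.corrcoef(chi.ravel(), chi_rec.ravel())[0, 1]
    print(f"spatial correlation between true and reconstructed maps: {corr:.3f}")
    ```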

  3. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.E.; Roussin, R.W.; Gilpin, H.

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: ''Calculations of Reactor Accident Consequences,'' Version 2, NUREG/CR-2326 (SAND81-1994); ''CRAC2 Model Descriptions,'' NUREG/CR-2552 (SAND82-0342); ''CRAC Calculations for Accident Sections of Environmental Statements,'' NUREG/CR-2901 (SAND82-1693); and ''Sensitivity and Uncertainty Studies of the CRAC2 Computer Code,'' NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.

  5. Modification of the MML turbulence model for adverse pressure gradient flows. M.S. Thesis - Akron Univ., 1993

    NASA Technical Reports Server (NTRS)

    Conley, Julianne M.

    1994-01-01

    Computational fluid dynamics is being used increasingly to predict flows for aerospace propulsion applications, yet there is still a need for an easy to use, computationally inexpensive turbulence model capable of accurately predicting a wide range of turbulent flows. The Baldwin-Lomax model is the most widely used algebraic model, even though it has known difficulties calculating flows with strong adverse pressure gradients and large regions of separation. The modified mixing length model (MML) was developed specifically to handle the separation which occurs on airfoils and has given significantly better results than the Baldwin-Lomax model. The success of these calculations warrants further evaluation and development of MML. The objective of this work was to evaluate the performance of MML for zero and adverse pressure gradient flows, and modify it as needed. The Proteus Navier-Stokes code was used for this study and all results were compared with experimental data and with calculations made using the Baldwin-Lomax algebraic model, which is currently available in Proteus. The MML model was first evaluated for zero pressure gradient flow over a flat plate, then modified to produce the proper boundary layer growth. Additional modifications, based on experimental data for three adverse pressure gradient flows, were also implemented. The adapted model, called MMLPG (modified mixing length model for pressure gradient flows), was then evaluated for a typical propulsion flow problem, flow through a transonic diffuser. Three cases were examined: flow with no shock, a weak shock and a strong shock. The results of these calculations indicate that the objectives of this study have been met. Overall, MMLPG is capable of accurately predicting the adverse pressure gradient flows examined in this study, giving generally better agreement with experimental data than the Baldwin-Lomax model.

  6. Dose computation for therapeutic electron beams

    NASA Astrophysics Data System (ADS)

    Glegg, Martin Mackenzie

    The accuracy of electron dose calculations performed by two commercially available treatment planning computers, Varian Cadplan and Helax TMS, has been assessed. Measured values of absorbed dose delivered by a Varian 2100C linear accelerator, under a wide variety of irradiation conditions, were compared with doses calculated by the treatment planning computers. Much of the motivation for this work was provided by a requirement to verify the accuracy of calculated electron dose distributions in situations encountered clinically at Glasgow's Beatson Oncology Centre. Calculated dose distributions are required in a significant minority of electron treatments, usually in cases involving treatment to the head and neck. Here, therapeutic electron beams are subject to factors which may cause non-uniformity in the distribution of dose, and which may complicate the calculation of dose. The beam shape is often irregular, the beam may enter the patient at an oblique angle or at an extended source to skin distance (SSD), tissue inhomogeneities can alter the dose distribution, and tissue equivalent material (such as wax) may be added to reduce dose to critical organs. Technological advances have allowed the current generation of treatment planning computers to implement dose calculation algorithms with the ability to model electron beams in these complex situations. These calculations have, however, yet to be verified by measurement. This work has assessed the accuracy of calculations in a number of specific instances. Chapter two contains a comparison of measured and calculated planar electron isodose distributions. Three situations were considered: oblique incidence, incidence on an irregular surface (such as that which would be arise from the use of wax to reduce dose to spinal cord), and incidence on a phantom containing a small air cavity. Calculations were compared with measurements made by thermoluminescent dosimetry (TLD) in a WTe electron solid water phantom. Chapter three assesses the planning computers' ability to model electron beam penumbra at extended SSD. Calculations were compared with diode measurements in a water phantom. Further measurements assessed doses in the junction region produced by abutting an extended SSD electron field with opposed photon fields. Chapter four describes an investigation of the size and shape of the region enclosed by the 90% isodose line when produced by limiting the electron beam with square and elliptical apertures. The 90% isodose line was chosen because clinical treatments are often prescribed such that a given volume receives at least 90% dose. Calculated and measured dose distributions were compared in a plane normal to the beam central axis. Measurements were made by film dosimetry. While chapters two to four examine relative doses, chapter five assesses the accuracy of absolute dose (or output) calculations performed by the planning computers. Output variation with SSD and field size was examined. Two further situations already assessed for the distribution of relative dose were also considered: an obliquely incident field, and a field incident on an irregular surface. The accuracy of calculations was assessed against criteria stipulated by the International Commission on Radiation Units and Measurement (ICRU). The Varian Cadplan and Helax TMS treatment planning systems produce acceptable accuracy in the calculation of relative dose from therapeutic electron beams in most commonly encountered situations. 
    When interpreting clinical dose distributions, however, knowledge of the limitations of the calculation algorithm employed by each system is required in order to identify the minority of situations where results are not accurate. The calculation of absolute dose is too inaccurate to implement in a clinical environment. (Abstract shortened by ProQuest.).

  7. Development of a reactive burn model based on an explicit viscoplastic pore collapse model

    NASA Astrophysics Data System (ADS)

    Bouton, E.; Lefrançois, A.; Belmas, R.

    2017-01-01

    The aim of this study is to develop a reactive burn model based upon a microscopic hot spot model to compute the shock initiation of pressed TATB high explosives. Such a model has been implemented in a lagrangian hydrodynamic code. In our calculations, 8 pore radii, ranging from 40 nm to 0.63 μm, have been taken into account, and the porosity fraction associated with each void radius has been deduced from ultra-small-angle X-ray scattering (USAXS) measurements for PBX-9502. The last parameter of our model is a burn rate that depends on three variables: the first two are the reaction progress variable and the lead shock pressure; the last one is the number of chemical reaction sites produced in the flow, calculated by the microscopic model. This burn rate has been calibrated by fitting pressure profiles, velocity profiles and run distances to detonation. As the computed results are in close agreement with the measured ones, this model is able to perform a wide variety of numerical simulations, including single and double shock waves and the desensitization phenomenon.

  8. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    NASA Astrophysics Data System (ADS)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust, and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near-real-time service, including load balancing, real-time monitoring, and instance cloning. We also briefly discuss the progress achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop an API interface to our Enhanced Magnetic Model (EMM).
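
    A cloud calculator of this kind is typically consumed as a simple HTTP API. The sketch below shows the general pattern using the Python requests library; the endpoint URL and parameter names are hypothetical placeholders, not NOAA's actual interface.

    ```python
    import requests

    # Hypothetical endpoint and parameter names -- placeholders only, not the
    # actual NOAA API; consult ngdc.noaa.gov/geomag/ for the real interface.
    API_URL = "https://example.com/geomag-calculator"

    params = {
        "lat": 40.0,           # geodetic latitude [deg]
        "lon": -105.0,         # longitude [deg]
        "alt_km": 0.0,         # altitude above the WGS84 ellipsoid [km]
        "date": "2017-12-01",  # some services use decimal years instead
    }

    resp = requests.get(API_URL, params=params, timeout=10)
    resp.raise_for_status()
    field = resp.json()        # e.g. declination, inclination, total intensity
    print(field)
    ```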

  9. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  10. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  11. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
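
    The simple Monte Carlo step described, sampling parameters within their extreme ranges (respecting any ordering within a group), optionally adding random errors, and taking quantiles of the model output, can be sketched as follows for a toy model; the linear model and all ranges are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000

    # Toy stand-in for a calibrated ground-water model output (illustrative):
    def model(k1, k2):
        return 3.0 * k1 + 1.5 * k2

    # Sample parameters within extreme ranges, enforcing the ordering k1 <= k2
    # within their group (ordering information is part of the method's input):
    k1 = rng.uniform(0.5, 2.0, n)
    k2 = rng.uniform(0.5, 2.0, n)
    k1, k2 = np.minimum(k1, k2), np.maximum(k1, k2)

    pred = model(k1, k2)
    eps = rng.normal(0.0, 0.3, n)        # random error in the dependent variable

    conf = np.percentile(pred, [2.5, 97.5])            # confidence interval (parameters only)
    predint = np.percentile(pred + eps, [2.5, 97.5])   # prediction interval (+ random errors)
    print(f"95% confidence interval: {conf.round(2)}")
    print(f"95% prediction interval: {predint.round(2)}  (wider, as expected)")
    ```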

  12. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    PubMed

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

    Computation of muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. This problem is underdetermined, so a proper optimization is required to calculate muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of a multibody system representing the skeletal system of the human body are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model, and the forward dynamics of an arbitrary musculoskeletal system is obtained. For optimization purposes, the obtained model is used in the computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sports exercise, the biceps curl, is simulated using this algorithm, and the validity of the obtained results is evaluated via EMG signals.

  13. The effectiveness of a new algorithm on a three-dimensional finite element model construction of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Teixeira, E R; Tsuga, K; Shindoi, N

    1999-08-01

    Improving the validity of finite element analysis (FEA) in implant biomechanics requires smaller elements; however, excessive downsizing demands more computer memory and calculation time. To evaluate the effectiveness of a new algorithm established for more valid FEA model construction without downsizing, three-dimensional FEA bone trabeculae models with different element sizes (300, 150 and 75 micron) were constructed. Four algorithms with stepwise (1 to 4 ranks) assignment of Young's modulus according to the bone volume in each cubic element were used, and the stress distribution against vertical loading was then analysed. The model with 300 micron element size and 4 ranks of Young's moduli assigned according to bone volume in each element presented a stress distribution similar to that of the model with 75 micron element size. These results show that the new algorithm was effective, and the use of the 300 micron element for bone trabeculae representation was proposed, without critical changes in stress values and with possible savings in computer memory and calculation time in the laboratory.
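
    The stepwise assignment can be read as: compute the bone volume fraction of each cubic element, quantize it into a fixed number of ranks, and give every element in a rank a single Young's modulus. A minimal sketch (moduli and rank boundaries are illustrative, not the paper's calibration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    bvf = rng.random((10, 10, 10))    # bone volume fraction per 300-micron element

    # Quantize into 4 ranks and assign one Young's modulus per rank
    # (illustrative values; the paper's calibration is not reproduced here):
    edges = np.array([0.25, 0.5, 0.75])                # rank boundaries on volume fraction
    E_rank = np.array([0.5e3, 4.0e3, 9.0e3, 14.0e3])   # MPa, one modulus per rank

    ranks = np.digitize(bvf, edges)                    # rank index 0..3 per element
    E = E_rank[ranks]                                  # element-wise Young's modulus
    print("elements per rank:", np.bincount(ranks.ravel(), minlength=4))
    ```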

  14. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  15. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes including Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  16. Thermodynamics of Anharmonic Systems: Uncoupled Mode Approximations for Molecules

    DOE PAGES

    Li, Yi-Pei; Bell, Alexis T.; Head-Gordon, Martin

    2016-05-26

    The partition functions, heat capacities, entropies, and enthalpies of selected molecules were calculated using uncoupled mode (UM) approximations, where the full-dimensional potential energy surface for internal motions was modeled as a sum of independent one-dimensional potentials for each mode. The computational cost of such approaches scales the same with molecular size as standard harmonic oscillator vibrational analysis using harmonic frequencies (HO hf). To compute thermodynamic properties, a computational protocol for obtaining the energy levels of each mode was established. The accuracy of the UM approximation depends strongly on how the one-dimensional potentials of each mode are defined. If the potentials are determined by the energy as a function of displacement along each normal mode (UM-N), the accuracies of the calculated thermodynamic properties are not significantly improved over the HO hf model. Significant improvements can be achieved by constructing potentials for internal rotations and vibrations using the energy surfaces along the torsional coordinates and the remaining vibrational normal modes, respectively (UM-VT). For hydrogen peroxide and its isotopologs at 300 K, UM-VT captures more than 70% of the partition functions on average. By contrast, the HO hf model and UM-N can capture no more than 50%. For a selected test set of C2 to C8 linear and branched alkanes and species with different moieties, the enthalpies calculated using the HO hf model, UM-N, and UM-VT are all quite accurate compared with reference values, though the RMS errors of the HO model and UM-N are slightly higher than those of UM-VT. However, the accuracies in entropy calculations differ significantly between these three models. For the same test set, the RMS error of the standard entropies calculated by UM-VT is 2.18 cal mol^-1 K^-1 at 1000 K. By contrast, the RMS errors obtained using the HO model and UM-N are 6.42 and 5.73 cal mol^-1 K^-1, respectively. For a test set composed of nine alkanes ranging from C5 to C8, the heat capacities calculated with the UM-VT model agree with the experimental values to within an RMS error of 0.78 cal mol^-1 K^-1, which is less than one-third of the RMS error of the HO hf (2.69 cal mol^-1 K^-1) and UM-N (2.41 cal mol^-1 K^-1) models.

  17. Scoping Calculations of Power Sources for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Difilippo, F. C.

    1994-01-01

    This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes will be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis.

  18. MARC calculations for the second WIPP structural benchmark problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.

    1981-05-01

    This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.

  19. Simulation Computation of 430 Ferritic Stainless Steel Solidification

    NASA Astrophysics Data System (ADS)

    Pang, Ruipeng; Li, Changrong; Wang, Fuming; Hu, Lifu

    The solidification structure of 430 ferritic stainless steel during the solidification process has been calculated using the 3D CAFE model under water-cooling conditions. The calculated results are consistent with those obtained from experiment. Under the water-cooling condition, the solidification structure consists of a chilled layer, a columnar grain zone, a transition zone and an equiaxed grain zone.

  20. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations.
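
    For a single locus with an unambiguous paternal allele, the likelihood ratio reduces to the probability that the alleged relative transmits that allele divided by its population frequency; full pedigree software multiplies many such factors and adds mutation and dropout corrections. A minimal sketch of the simple case (allele frequencies illustrative):

    ```python
    # Single-locus LR for identification via an alleged parent, ignoring mutation
    # and dropout (illustrative frequencies; real DVI casework needs full models).
    def transmission_prob(genotype, allele):
        """P(parent passes `allele`) for an unordered genotype tuple."""
        return genotype.count(allele) / 2.0

    allele_freq = {"a": 0.12, "b": 0.05}   # illustrative population frequencies

    mother = ("a", "a")
    child = ("a", "b")                     # the paternal allele must be "b"
    alleged_father = ("a", "b")

    lr = transmission_prob(alleged_father, "b") / allele_freq["b"]
    print(f"single-locus likelihood ratio: {lr:.1f}")   # 0.5 / 0.05 = 10
    ```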

  1. Simulation of neutron leakage using a heterogeneous B1 model for fast-neutron and light-water reactors

    NASA Astrophysics Data System (ADS)

    Faure, Bastien

    The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using "fundamental mode theory," the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong assumption, and it cannot, at first sight, take into account neutron leakage between two different zones or out of the core. Leakage models correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer features (processor speeds and memory sizes), the leakage term is, in most cases, modeled by a homogeneous and isotropic probability within a "homogeneous leakage model." Driven by technological innovation in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport calculation codes. This work focuses on a study of some of those models, including the TIBERE model from the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model from the APOLLO-3 code developed at the Commissariat a l'Energie Atomique et aux energies alternatives. The research, based on sodium-cooled fast reactors and light-water reactors, has demonstrated the interest of those models compared to a homogeneous leakage model. In particular, it has been shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimate of the transport-equation eigenvalue Keff. Neutron streaming between two zones of different composition was also shown to be better calculated.

  2. Computational thermochemistry: Automated generation of scale factors for vibrational frequencies calculated by electronic structure model chemistries

    NASA Astrophysics Data System (ADS)

    Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.

    2017-01-01

    We present a Python program, FREQ, for calculating the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a certain method and basis set. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use this program is included in the code package.
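
    For harmonic frequencies, the least-squares optimal scale factor has a closed form: minimizing sum_i (lambda*omega_i - omega_i_ref)^2 over lambda gives lambda = sum(omega*omega_ref)/sum(omega^2). A minimal sketch of that optimization (the frequencies below are placeholders, not the program's reference set):

    ```python
    import numpy as np

    # Closed-form least-squares scale factor: minimize sum (lam*w - w_ref)^2
    # => lam = sum(w * w_ref) / sum(w * w)
    def optimal_scale_factor(w_calc, w_ref):
        w_calc, w_ref = np.asarray(w_calc), np.asarray(w_ref)
        return np.dot(w_calc, w_ref) / np.dot(w_calc, w_calc)

    # Illustrative harmonic frequencies [cm^-1] vs reference values (placeholders):
    w_calc = [3050.0, 1650.0, 1200.0, 980.0, 520.0]
    w_ref  = [2920.0, 1600.0, 1170.0, 950.0, 505.0]

    lam = optimal_scale_factor(w_calc, w_ref)
    rms = np.sqrt(np.mean((lam * np.asarray(w_calc) - np.asarray(w_ref))**2))
    print(f"optimal scale factor: {lam:.4f}, post-scaling RMS error: {rms:.1f} cm^-1")
    ```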

  3. The NATA code: Theory and analysis, volume 1 [user manuals (computer programming) - gas dynamics, wind tunnels]

    NASA Technical Reports Server (NTRS)

    Bade, W. L.; Yos, J. M.

    1975-01-01

    A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.

  4. Spectral Analysis Tool 6.2 for Windows

    NASA Technical Reports Server (NTRS)

    Morgan, Feiming; Sue, Miles; Peng, Ted; Tan, Harry; Liang, Robert; Kinman, Peter

    2006-01-01

    Spectral Analysis Tool 6.2 is the latest version of a computer program that assists in analysis of interference between radio signals of the types most commonly used in Earth/spacecraft radio communications. [An earlier version was reported in Software for Analyzing Earth/Spacecraft Radio Interference (NPO-20422), NASA Tech Briefs, Vol. 25, No. 4 (April 2001), page 52.] SAT 6.2 calculates signal spectra, bandwidths, and interference effects for several families of modulation schemes. Several types of filters can be modeled, and the program calculates and displays signal spectra after filtering by any of the modeled filters. The program accommodates two simultaneous signals: a desired signal and an interferer. The interference-to-signal power ratio can be calculated for the filtered desired and interfering signals. Bandwidth-occupancy and link-budget calculators are included for the user's convenience. SAT 6.2 has a new software structure and provides a new user interface that is both intuitive and convenient. SAT 6.2 incorporates multi-tasking, multi-threaded execution, virtual memory management, and a dynamic link library. SAT 6.2 is designed for use on 32-bit computers employing Microsoft Windows operating systems.
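
    As a rough illustration of one quantity such a tool reports, the sketch below estimates an interference-to-signal power ratio inside a receiver band from sampled waveforms via FFT periodograms. This is not SAT 6.2 itself; the carriers, amplitudes, and band edges are invented for the example.

    ```python
    # Toy I/S estimate: integrate each signal's periodogram over the band.
    import numpy as np

    def band_power(x, fs, f_lo, f_hi):
        """Power of signal x (sampled at fs) within [f_lo, f_hi] Hz."""
        psd = np.abs(np.fft.rfft(x)) ** 2 / (len(x) * fs)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        return np.sum(psd[mask]) * (freqs[1] - freqs[0])

    fs = 1e6                                          # 1 MHz sampling
    t = np.arange(100_000) / fs
    desired = np.cos(2 * np.pi * 100e3 * t)           # desired carrier, 100 kHz
    interferer = 0.1 * np.cos(2 * np.pi * 120e3 * t)  # interferer, 120 kHz
    ratio = band_power(interferer, fs, 90e3, 130e3) / \
            band_power(desired, fs, 90e3, 130e3)
    print(f"I/S = {10 * np.log10(ratio):.1f} dB")     # ~ -20 dB
    ```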

  5. Revalidation studies of Mark 16 experiments: J70

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.Y.

    1993-10-25

    The MGBS-TGAL combination of the J70 criticality modules was validated for Mark 16 lattices by H. K. Clark as reported in DPST-83-1025. Unfortunately, the records of the calculations reported cannot be retrieved, and the descriptions of the modeling used are not fully provided in DPST-83-1025. The report does not describe in detail how to model the experiments and how to set up the input. The computer output for the cases reported in the memorandum cannot be located in files. The MGBS-TGAL calculations reported in DPST-83-1025 have been independently reperformed to provide retrievable record copies of the calculations, to provide a detailed description and discussion of the methodology used, and to serve as a training exercise for a novice criticality safety engineer. The current results reproduce Clark's reported results to within about 0.01% or better. A procedure to perform these and similar calculations is given in this report, with explanation of the methodology choices provided. Copies of the computer output have been made via microfiche and will be maintained in APG files.

  6. Computer-Aided Drug Design in Epigenetics

    NASA Astrophysics Data System (ADS)

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-03-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computing resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationships, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field.

  7. Computer-Aided Drug Design in Epigenetics

    PubMed Central

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-01-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computing resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationships, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field. PMID:29594101

  8. Quantum lattice model solver HΦ

    NASA Astrophysics Data System (ADS)

    Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki

    2017-08-01

    HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model, and the Kondo-lattice model. While it works well on PCs and PC clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of system sizes. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
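
    For readers unfamiliar with the Lanczos approach that HΦ builds on, the sketch below shows the bare algorithm: project a Hermitian Hamiltonian onto a small Krylov-space tridiagonal matrix and diagonalize that. It is an illustration only, not HΦ's implementation, and the random dense matrix stands in for a real sparse lattice Hamiltonian.

    ```python
    # Bare Lanczos iteration for the lowest eigenvalue of a Hermitian H.
    import numpy as np

    def lanczos_ground_energy(H, v0, m=50):
        """Approximate the lowest eigenvalue of H with m Lanczos steps."""
        v = v0 / np.linalg.norm(v0)
        v_prev = np.zeros_like(v)
        alphas, betas, beta = [], [], 0.0
        for j in range(m):
            w = H @ v - beta * v_prev
            alpha = np.vdot(v, w).real
            alphas.append(alpha)
            w = w - alpha * v
            beta = np.linalg.norm(w)
            if beta < 1e-12 or j == m - 1:
                break
            betas.append(beta)
            v_prev, v = v, w / beta
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        return np.linalg.eigvalsh(T)[0]

    # Toy check against dense diagonalization:
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    H = (A + A.T) / 2
    print(lanczos_ground_energy(H, rng.standard_normal(200)))
    print(np.linalg.eigvalsh(H)[0])  # should agree closely
    ```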

  9. User's guide to the SEPHIS computer code for calculating the Thorex solvent extraction system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, S.B.; Rainey, R.H.

    1979-05-01

    The SEPHIS computer program was developed to simulate the countercurrent solvent extraction process. The code has now been adapted to model the Acid Thorex flow sheet. This report is a practical user's guide to SEPHIS-Thorex, containing a program description, user information, a program listing, and sample input and output.
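
    The essential physics of a countercurrent extraction model is a stagewise solute balance. As a hedged illustration (not SEPHIS, which models transient, multicomponent chemistry), the sketch below uses the classical Kremser relation for N ideal stages with a constant distribution ratio.

    ```python
    # Kremser equation: fraction of solute left in the aqueous stream
    # after N countercurrent equilibrium stages, for constant
    # distribution ratio D and organic/aqueous flow ratio O/A.

    def unextracted_fraction(D, o_to_a, n_stages):
        E = D * o_to_a                    # extraction factor
        if abs(E - 1.0) < 1e-12:
            return 1.0 / (n_stages + 1)
        return (E - 1.0) / (E ** (n_stages + 1) - 1.0)

    # Example: D = 2, O/A = 1.5, 6 stages -> ~0.09% of the solute remains
    print(f"{unextracted_fraction(2.0, 1.5, 6):.4%}")
    ```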

  10. Difference-Equation/Flow-Graph Circuit Analysis

    NASA Technical Reports Server (NTRS)

    Mcvey, I. M.

    1988-01-01

    Numerical technique enables rapid, approximate analysis of electronic circuits containing linear and nonlinear elements. It has been implemented in a variety of computer languages on large and small computers; for sufficiently simple circuits, programmable hand calculators can be used. Although some combinations of circuit elements make numerical solutions diverge, the technique enables quick identification of divergence and correction of circuit models to make solutions converge.
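
    A minimal example of the technique (hypothetical, not the original program): a first-order RC low-pass written as a difference equation and stepped explicitly. It also illustrates the divergence noted above, since an explicit update like this one diverges whenever the time step exceeds 2*R*C.

    ```python
    # RC low-pass as a difference equation: dv/dt = (v_in - v)/(R*C),
    # stepped with forward Euler. Stable only for dt < 2*R*C.

    def rc_step_response(R, C, dt, n_steps, v_in=1.0):
        v, out = 0.0, []
        for _ in range(n_steps):
            v += dt / (R * C) * (v_in - v)
            out.append(round(v, 5))
        return out

    # R = 1 kOhm, C = 1 uF (tau = 1 ms), dt = 0.1 ms: converges toward 1 V
    print(rc_step_response(1e3, 1e-6, 1e-4, 5))  # [0.1, 0.19, 0.271, ...]
    ```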

  11. THEORETICAL p-MODE OSCILLATION FREQUENCIES FOR THE RAPIDLY ROTATING δ SCUTI STAR α OPHIUCHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deupree, Robert G., E-mail: bdeupree@ap.smu.ca

    2011-11-20

    A rotating, two-dimensional stellar model is evolved to match the approximate conditions of α Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of α Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two assumed numbers of Legendre polynomials agrees less well, but all calculations show significant departures from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| ≤ 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.
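
    The last statement has a simple structure worth making explicit: structural perturbations enter the splitting at even powers of m, so they cancel in the difference nu(+m) - nu(-m). The toy numbers below are invented for illustration and are not the α Oph data.

    ```python
    # Toy rotational splitting: a second-order (m^2) distortion makes the
    # spacing non-uniform, yet nu(+m) - nu(-m) = 2*m*Omega still holds.

    Omega = 0.8        # rotation rate (made-up units)
    nu0 = 35.0         # unperturbed mode frequency (made up)
    freqs = {m: nu0 + m * Omega + 0.05 * m ** 2 for m in range(-2, 3)}
    for m, nu in freqs.items():
        print(m, round(nu, 3))
    print(round(freqs[2] - freqs[-2], 6), "== 2*2*Omega =", 4 * Omega)
    ```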

  12. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
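
    To make the time-marching idea concrete, the sketch below steps a single control volume's sodium inventory using Faraday's law. It is a minimal illustration under invented operating conditions, not the author's model, which also tracks energy, potential, and multiple species.

    ```python
    # Faraday's law as a time-marching update: dn/dt = -I/(z*F) for the
    # consumed species (z = 1 for Na), applied once per time step.

    F = 96485.0                       # Faraday constant, C/mol

    def march_sodium(n_na_mol, current_a, dt_s, n_steps):
        """Deplete sodium at a constant discharge current."""
        history = []
        for _ in range(n_steps):
            n_na_mol -= current_a * dt_s / (1 * F)
            history.append(n_na_mol)
        return history

    # 10 A discharge for 1 hour, 60 steps, starting from 2 mol of Na:
    print(round(march_sodium(2.0, 10.0, 60.0, 60)[-1], 3))  # ~1.627 mol
    ```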

  13. Electronic Structure at Electrode/Electrolyte Interfaces in Magnesium based Batteries

    NASA Astrophysics Data System (ADS)

    Balachandran, Janakiraman; Siegel, Donald

    2015-03-01

    Magnesium is a promising multivalent element for use in next-generation electrochemical energy storage systems. However, a wide range of challenges, such as low coulombic efficiency, low or varying capacity, and cyclability, need to be resolved in order to realize Mg-based batteries. Many of these issues can be related to interfacial phenomena between the Mg anode and common electrolytes. Ab initio computational models of these interfaces can provide insights into interfacial interactions that are difficult to probe experimentally. In this work we present ab initio computations of common electrolyte solvents (THF, DME) in contact with two model electrode surfaces: (i) an "SEI-free" electrode based on Mg metal and (ii) a "passivated" electrode consisting of MgO. We perform GW calculations to predict the reorganization of the molecular orbitals (HOMO/LUMO) upon contact with these surfaces and their alignment with respect to the Fermi energy of the electrodes. These computations are in turn compared with more efficient GGA (PBE) and hybrid (HSE) functional calculations. The results obtained from these computations enable us to qualitatively describe the stability of these solvent molecules at electrode-electrolyte interfaces.

  14. Computer Software Management and Information Center

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Computer programs for passive anti-roll tank, earth resources laboratory applications, the NIMBUS-7 coastal zone color scanner derived products, transportable applications executive, plastic and failure analysis of composites, velocity gradient method for calculating velocities in an axisymmetric annular duct, an integrated procurement management system, data I/O PROM for the Motorola EXORciser, aerodynamic shock-layer shape, kinematic modeling, hardware library for a graphics computer, and a file archival system are documented.

  15. Computation of hypersonic flows with finite rate condensation and evaporation of water

    NASA Technical Reports Server (NTRS)

    Perrell, Eric R.; Candler, Graham V.; Erickson, Wayne D.; Wieting, Alan R.

    1993-01-01

    A computer program for modelling 2D hypersonic flows of gases containing water vapor and liquid water droplets is presented. The effects of interphase mass, momentum and energy transfer are studied. Computations are compared with existing quasi-1D calculations on the nozzle of the NASA Langley Eight Foot High Temperature Tunnel, a hypersonic wind tunnel driven by combustion of natural gas in oxygen enriched air.

  16. SAM 2.1—A computer program for plotting and formatting surveying data for estimating peak discharges by the slope-area method

    USGS Publications Warehouse

    Hortness, J.E.

    2004-01-01

    The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing often associated with flood events. Thus, many peak discharge values are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time consuming; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
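
    The downstream computation that SAC and HEC-RAS perform rests on Manning's equation. As a much-simplified sketch (the real programs handle multi-section reaches, conveyance weighting, and velocity-head corrections; the channel numbers below are invented), a single-section estimate looks like this:

    ```python
    # Manning's equation in US customary units:
    #   Q = (1.486/n) * A * R^(2/3) * S^(1/2)

    def manning_discharge(n, area_ft2, hyd_radius_ft, slope):
        """Single-section discharge estimate, ft^3/s."""
        return (1.486 / n) * area_ft2 * hyd_radius_ft ** (2.0 / 3.0) * slope ** 0.5

    # n = 0.035, A = 850 ft^2, R = 6.2 ft, water-surface slope = 0.002
    print(f"{manning_discharge(0.035, 850.0, 6.2, 0.002):,.0f} ft^3/s")  # ~5,400
    ```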

  17. A model of the atmospheric metal deposition by cosmic dust particles

    NASA Astrophysics Data System (ADS)

    McNeil, W. J.

    1993-11-01

    We have developed a model of the deposition of meteoric metals in Earth's atmosphere. The model takes as input the total mass influx of material to the Earth and calculates the deposition rate at all altitudes through solution of the drag and sublimation equations in a Monte Carlo-type computation. The diffusion equation is then solved to give the steady-state concentration of the complexes of a specific metal species, and kinetics are added to calculate the concentrations of the individual complexes. Concentrating on sodium, we calculate the Na(D) nightglow predicted by the model, and by introducing seasonal variations in lower thermospheric ozone based on experimental results, we are able to duplicate the seasonal variation of mid-latitude nightglow data.
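
    As a toy version of the drag half of that computation (illustrative only, not the paper's model: assumed spherical grains, an exponential atmosphere, vertical entry, and no ablation or Monte Carlo sampling), one can integrate the deceleration of a single particle down the atmospheric column:

    ```python
    # Drag-only entry of a small spherical grain: dv/dt = -rho*v^2*A/m.
    import math

    def decelerate(v0, radius_m, alt0_m, dt=1e-3):
        rho0, H = 1.225, 7000.0      # sea-level density (kg/m^3), scale height (m)
        rho_grain = 3000.0           # assumed grain density, kg/m^3
        area = math.pi * radius_m ** 2
        mass = rho_grain * (4.0 / 3.0) * math.pi * radius_m ** 3
        v, alt = v0, alt0_m
        while alt > 70e3 and v > 0.05 * v0:
            rho = rho0 * math.exp(-alt / H)
            v -= rho * v * v * area / mass * dt
            alt -= v * dt
        return alt / 1e3, v

    # A 5-um grain at 20 km/s decelerates near ~90 km, the meteoric region:
    print(decelerate(20e3, 5e-6, 120e3))
    ```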

  18. Computational modeling of high-entropy alloys: Structures, thermodynamics and elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Michael C.; Gao, Pan; Hawk, Jeffrey A.

    This study provides a short review on computational modeling of the formation, thermodynamics, and elasticity of single-phase high-entropy alloys (HEAs). Hundreds of predicted single-phase HEAs were re-examined using various empirical thermo-physical parameters. Potential BCC HEAs (CrMoNbTaTiVW, CrMoNbReTaTiVW, and CrFeMoNbReRuTaVW) were suggested based on CALPHAD modeling. The calculated vibrational entropies of mixing are positive for FCC CoCrFeNi, negative for BCC MoNbTaW, and near-zero for HCP CoOsReRu. The total entropies of mixing were observed to trend in descending order: CoCrFeNi > CoOsReRu > MoNbTaW. Calculated lattice parameters agree extremely well with averaged values estimated from the rule of mixtures (ROM) if the same crystal structure is used for the elements and the alloy. The deviation of the calculated elastic properties from ROM for select alloys is small but is susceptible to the choice used for the structures of the pure components.
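
    The rule-of-mixtures comparison mentioned above is just a composition-weighted average. A small sketch (illustrative; the elemental BCC lattice parameters below are commonly tabulated values, and an equimolar composition is assumed):

    ```python
    # Rule of mixtures: a_ROM = sum_i c_i * a_i over the constituent elements.

    def rom_lattice_parameter(concentrations, lattice_params):
        return sum(c * a for c, a in zip(concentrations, lattice_params))

    # Equimolar MoNbTaW from elemental BCC lattice parameters (Angstrom):
    a_elem = {"Mo": 3.147, "Nb": 3.301, "Ta": 3.303, "W": 3.165}
    c = [0.25] * len(a_elem)
    print(round(rom_lattice_parameter(c, a_elem.values()), 3))  # ~3.229
    ```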

  19. Computational modeling of high-entropy alloys: Structures, thermodynamics and elasticity

    DOE PAGES

    Gao, Michael C.; Gao, Pan; Hawk, Jeffrey A.; ...

    2017-10-12

    This study provides a short review on computational modeling of the formation, thermodynamics, and elasticity of single-phase high-entropy alloys (HEAs). Hundreds of predicted single-phase HEAs were re-examined using various empirical thermo-physical parameters. Potential BCC HEAs (CrMoNbTaTiVW, CrMoNbReTaTiVW, and CrFeMoNbReRuTaVW) were suggested based on CALPHAD modeling. The calculated vibrational entropies of mixing are positive for FCC CoCrFeNi, negative for BCC MoNbTaW, and near-zero for HCP CoOsReRu. The total entropies of mixing were observed to trend in descending order: CoCrFeNi > CoOsReRu > MoNbTaW. Calculated lattice parameters agree extremely well with averaged values estimated from the rule of mixtures (ROM) if the same crystal structure is used for the elements and the alloy. The deviation of the calculated elastic properties from ROM for select alloys is small but is susceptible to the choice used for the structures of the pure components.

  20. A Mass-balance nitrate model for predicting the effects of land use on ground-water quality in municipal wellhead-protection areas

    USGS Publications Warehouse

    Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.

    1990-01-01

    A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
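
    The accounting described above amounts to a single division once the loads and volumes are tallied. As a hedged sketch (the loads and volumes below are invented placeholders, not the Appendix A values):

    ```python
    # Steady-state mixed concentration at the well:
    #   C (mg/L) = total N load entering the zone / total water volume.

    def nitrate_concentration_mg_l(loads_kg_yr, volumes_m3_yr):
        total_n_mg = sum(loads_kg_yr) * 1e6      # kg -> mg
        total_v_l = sum(volumes_m3_yr) * 1e3     # m^3 -> L
        return total_n_mg / total_v_l

    # Toy zone of contribution: 40 septic systems plus lawn fertilizer
    loads = [40 * 4.0, 120.0]            # kg N/yr (made-up figures)
    volumes = [40 * 300.0, 60_000.0]     # m^3/yr recharge (made up)
    print(f"{nitrate_concentration_mg_l(loads, volumes):.1f} mg/L")  # ~3.9
    ```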
