ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Modeling Functions with the Calculator Based Ranger.
ERIC Educational Resources Information Center
Sherrill, Donna; Tibbs, Peggy
This paper presents two mathematics activities that model functions studied using the Calculator Based Ranger (CBR) software for TI-82 and TI-83 graphing calculators. The activities concern a bouncing ball experiment and modeling a decaying exponential function. (ASK)
An Instructional Model for Integrating the Calculator.
ERIC Educational Resources Information Center
Berlin, Donna F.; White, Arthur L.
1987-01-01
The design, selection, and organization of instructional materials that integrate calculators are described in relation to a model based on movement and representational level. Instructional resources and advantages of the model are described. (MNS)
Carbon cycle modeling calculations for the IPCC
Wuebbles, D.J.; Jain, A.K.
1993-08-12
We carried out essentially all the carbon cycle modeling calculations that were required by the IPCC Working Group 1. Specifically, IPCC required two types of calculations, namely, ``inverse calculations`` (input was CO{sub 2} concentrations and the output was CO{sub 2} emissions), and the ``forward calculations`` (input was CO{sub 2} emissions and output was CO{sub 2} concentrations). In particular, we have derived carbon dioxide concentrations and/or emissions for several scenarios using our coupled climate-carbon cycle modelling system.
Precipitates/Salts Model Sensitivity Calculation
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO{sub 2}) on the chemical evolution of water in the drift.
Numerical Calculation of Model Rocket Trajectories.
ERIC Educational Resources Information Center
Keeports, David
1990-01-01
Discussed is the use of model rocketry to teach the principles of Newtonian mechanics. Included are forces involved; calculations for vertical launches; two-dimensional trajectories; and variations in mass, drag, and launch angle. (CW)
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the usemore » of the reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system.« less
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Configuration mixing calculations in soluble models
NASA Astrophysics Data System (ADS)
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
Actinic Flux Calculations: A Model Sensitivity Study
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)
2000-01-01
calculate direct and diffuse surface irradiance and actinic flux (downwelling (2p) and total (4p)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative transfer flux calculations for a unit ETS (i.e. atmospheric transmittance) together with a numerical interpolation technique for the constituents' vertical profiles is better than 1% for SZA less than 70(sub o) and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretetization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to calculation of selected photolysis rates coefficients.
Actinic Flux Calculations: A Model Sensitivity Study
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)
2000-01-01
calculate direct and diffuse surface irradiance and actinic flux (downwelling (2p) and total (4p)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative transfer flux calculations for a unit ETS (i.e. atmospheric transmittance) together with a numerical interpolation technique for the constituents' vertical profiles is better than 1% for SZA less than 70(sub o) and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretetization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to calculation of selected photolysis rates coefficients.
Model calculations of superlubricity of graphite
NASA Astrophysics Data System (ADS)
Verhoeven, Gertjan S.; Dienwiebel, Martin; Frenken, Joost W.
2004-10-01
In this paper, friction between a finite, nanometer-sized, rigid graphite flake and a rigid graphite surface is analyzed theoretically in the framework of a modified Tomlinson model. Lateral forces are studied as a function of orientational misfit between flake and surface lattices, pulling direction of the flake, flake size and flake shape. The calculations show that the orientation dependence of the friction provides information on the contact size and shape. We find good agreement between the calculations and the experimental results, discussed in a recent publication by Dienwiebel et al. [
Extremity model for neutron dose calculations
Sattelberger, J. A.; Shores, E. F.
2001-01-01
In personnel dosimetry for external radiation exposures, health physicists tend to focus on measurement of whole body dose, where 'whole body' is generally regarded as the torso on which the dosimeter is placed.' Although a variety of scenarios exist in which workers must handle radioactive materials, whole body dose estimates may not be appropriate when assessing dose, particularly to the extremities. For example, consider sources used for instrument calibration. If such sources are in a contact geometry (e.g. held by fingers), an extremity dose estimate may be more relevant than a whole body dose. However, because questions arise regarding how that dose should be calculated, a detailed extremity model was constructed with the MCNP-4Ca Monte Carlo code. Although initially intended for use with gamma sources, recent work by Shores2 provided the impetus to test the model with neutrons.
Global Microscopic Models for Nuclear Reaction Calculations
Goriely, S.
2005-05-24
Important effort has been devoted in the last decades to measuring reaction cross sections. Despite such effort, many nuclear applications still require the use of theoretical predictions to estimate experimentally unknown cross sections. Most of the nuclear ingredients in the calculations of reaction cross sections need to be extrapolated in an energy and/or mass domain out of reach of laboratory simulations. In addition, some applications often involve a large number of unstable nuclei, so that only global approaches can be used. For these reasons, when the nuclear ingredients to the reaction models cannot be determined from experimental data, it is highly recommended to consider preferentially microscopic or semi-microscopic global predictions based on sound and reliable nuclear models which, in turn, can compete with more phenomenological highly-parameterized models in the reproduction of experimental data. The latest developments made in deriving such microscopic models for practical applications are reviewed. It mainly concerns nuclear structure properties (masses, deformations, radii, etc.), level densities at the equilibrium deformation, {gamma}-ray strength, as well as fission barriers and level densities at the fission saddle points.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-10-01
batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
Reduced Shell Model Calculations of 106Sb and 108Sb
Dikmen, Erdal
2007-04-23
The reduced shell model calculations have been done for the odd-odd 106Sb and 108Sb isotopes. The model space has been chosen as 1d5/2, 0g7/2, 1d3/2, 2s1/2 for the reduced calculations and included 0h11/2 for the full calculations. The reduced shell model calculations of 108Sb isotope are presented for the first time. The calculated energy spectra are compared to the experimental results to understand which model space is the best for the shell model calculations around N = Z = 50 region of the periodic table. This is the extention of the study that whether the reduced shell model calculations are capable of reproducing the experimental results for the nuclei whose shell model calculations can be carried out in the full model space.
Model potential calculations of lithium transitions.
NASA Technical Reports Server (NTRS)
Caves, T. C.; Dalgarno, A.
1972-01-01
Semi-empirical potentials are constructed that have eigenvalues close in magnitude to the binding energies of the valence electron in lithium. The potentials include the long range polarization force between the electron and the core. The corresponding eigenfunctions are used to calculate dynamic polarizabilities, discrete oscillator strengths, photoionization cross sections and radiative recombination coefficients. A consistent application of the theory imposes a modification on the transition operator, but its effects are small for lithium. The method presented can be regarded as a numerical generalization of the widely used Coulomb approximation.
CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING
Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...
CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING
Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...
Models for Automated Tube Performance Calculations
C. Brunkhorst
2002-12-12
High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
Quantum biological channel modeling and capacity calculation.
Djordjevic, Ivan B
2012-12-10
Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There were many attempts in an effort to explain the structure of genetic code and transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards future study of quantum DNA error correction, developing quantum mechanical model of aging, developing the quantum mechanical models for tumors/cancer, and study of intracellular dynamics in general.
Quantum Biological Channel Modeling and Capacity Calculation
Djordjevic, Ivan B.
2012-01-01
Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There were many attempts in an effort to explain the structure of genetic code and transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards future study of quantum DNA error correction, developing quantum mechanical model of aging, developing the quantum mechanical models for tumors/cancer, and study of intracellular dynamics in general. PMID:25371271
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; ...
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Detailed opacity calculations for stellar models
NASA Astrophysics Data System (ADS)
Pain, Jean-Christophe; Gilleron, Franck
2016-10-01
We present a state of the art of precise spectral opacity calculations illustrated by stellar applications. The essential role of laboratory experiments to check the quality of the computed data is underlined. We review some X-ray and XUV laser and Z-pinch photo-absorption measurements as well as X-ray emission spectroscopy experiments of hot dense plasmas produced by ultra-high-intensity laser interaction. The measured spectra are systematically compared with the fine-structure opacity code SCO-RCG. Focus is put on iron, due to its crucial role in the understanding of asteroseismic observations of Beta Cephei-type and Slowly Pulsating B stars, as well as in the Sun. For instance, in Beta Cephei-type stars (which should not be confused with Cepheid variables), the iron-group opacity peak excites acoustic modes through the kappa-mechanism. A particular attention is paid to the higher-than-predicted iron opacity measured on Sandia's Z facility at solar interior conditions (boundary of the convective zone). We discuss some theoretical aspects such as orbital relaxation, electron collisional broadening, ionic Stark effect, oscillator-strength sum rules, photo-ionization, or the ``filling-the-gap'' effect of highly excited states.
Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Cucinotta, F. A.; zeitlin, C. J.; Cleghorn, T. F.
2004-01-01
The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft that is currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model calculated predictions. Model calculations are compared with the MARIE measurements of dose, dose-equivalent values, along with the available particle flux distribution. Model calculated particle flux includes GCR elemental composition of atomic number, Z = 1-28 and mass number, A = 1-58. Particle flux calculations specific for the current MARIE mapping period are reviewed and presented.
Shell model calculations of 109Sb in the sdgh shell
NASA Astrophysics Data System (ADS)
Dikmen, E.; Novoselsky, A.; Vallieres, M.
2001-12-01
The energy spectra of the antimony isotope 109Sb in the sdgh shell are calculated in the nuclear shell model approach by using the CD-Bonn nucleon-nucleon interaction. The modified Drexel University parallel shell model code (DUPSM) was used for the calculations with maximum Hamiltonian dimension of 762 253 of 5.14% sparsity. The energy levels are compared to the recent experimental results. The calculations were done on the Cyborg Parallel Cluster System at Drexel University.
[Calculation of parameters in forest evapotranspiration model].
Wang, Anzhi; Pei, Tiefan
2003-12-01
Forest evapotranspiration is an important component not only in water balance, but also in energy balance. It is a great demand for the development of forest hydrology and forest meteorology to simulate the forest evapotranspiration accurately, which is also a theoretical basis for the management and utilization of water resources and forest ecosystem. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanism model for estimating forest evapotranspiration, based on the aerodynamic principle and energy balance equation. Using the data measured by the Routine Meteorological Measurement System and Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability functions for momentum phi m, and stability functions for heat phi h were ascertained. The displacement height of the study site was equal to 17.8 m, near to the mean canopy height, and the functions of phi m and phi h changing with gradient Richarson number R i were constructed.
Noise calculation on the basis of vortex flow models
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1978-01-01
Flow-modeling technique yields relatively simple method for calculating sound radiation involving planar, cylindrical, or spherical surfaces. Model employs potential flow theory with action of viscosity on flowfield described in terms of point vortices. Surface presence in flow is analyzed, using classical image method; sound is calculated through sound generation theory reformulation.
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
P. Marnier
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b).
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1988-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1990-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listing, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over 2D prism with = 9 vertices--Talwani method; (2) magnetic anomaly (?T, ?V, or ?H) over 2D prism with = 8 vertices?Talwani method; (3) total-field magnetic anomaly profile over thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) = 4 dipping seismic refractors--interpretation; (6) = 4 dipping seismic refractors?design; (7) vertical electrical sounding over = 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electric sounding: Dar Zarrouk calculations; (9) magnetotelluric planewave apparent conductivity and phase angle over = 9 horizontal layers--forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with = 10-1ength filter.
Modeling and Calculator Tools for State and Local Transportation Resources
Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation to the biosphere and to air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space in space time. The comparability of action spectra and their uncertainties are also addressed.
Katz, B F
2001-11-01
Human spatial perception of sound is a complex phenomenon. The Head-Related Transfer Function (HRTF) is a vital component to spatial sound perception. In order improve the understanding of the correlation between the HRTF and specific geometry of the head and pinna, a Boundary Element Method (BEM) has been used to calculate a portion of the HRTF of an individual based on precise geometrical data. Advantages of this approach include the ability to alter the geometry of the individual through the model in ways which are not possible with real subjects. Several models are used in the study, including a head with no pinna and several sized spheres. Calculations are performed for various source locations around the head. Results are presented for rigid model cases. Effects of variations on impedance and comparisons to measured data will be presented in the subsequent paper.
Model calculations of spectral transmission for the CLAES etalons
NASA Technical Reports Server (NTRS)
James, T. C.; Roche, A. E.; Kumer, J. B.
1989-01-01
This paper describes models for calculating spectral transmission for the Cryogenic-Limb-Array-Etalon-Spectrometer (CLAES) etalons. These models involve a convolution of the Airy function for a given thickness with the distribution of surface thicknesses, the effect of absorption in the substrate, and the field of view broadening as a function of etalon tilt angle. A comparison of model calculations with experimental transmission data for CLAES etalons centered at 3.52, 5.72, 8.0, and 11.86 microns showed that these models are able to provide a good description of the CLAES etalons.
The calculation model of the satellite solar panels infrared feature
NASA Astrophysics Data System (ADS)
Yang, Li; Lv, Xiang-yin; Jin, Wei; Yang, Hua; Zhao, Ji-jin
2013-09-01
The infrared radiation change of the solar panels is an obvious feature to tell whether they are working normally or not. In this paper, the calculation model of the satellite's solar panels infrared feature is established. First, combined with the parameter descriptions of the satellite's six orbital elements and with the use of the coordinate transformation method, the calculation formulas of the radiation flux that the solar panels get from the sun and the earth are derived, no matter which location of the satellite in orbit. Second, the calculation model of the solar panels' temperature field is established, and the equations are solved numerically with the boundary condition of radiation heat flux. Finally, the calculation models of the solar panels' infrared radiation in 3~5μm band and 8~14μm band are established, and the equations are solved numerically, thinking differently about their radiation and reflected radiation.
HOM study and parameter calculation of the TESLA cavity model
NASA Astrophysics Data System (ADS)
Zeng, Ri-Hua; Schuh, Marcel; Gerigk, Frank; Wegner, Rolf; Pan, Wei-Min; Wang, Guang-Wei; Liu, Rong
2010-01-01
The Superconducting Proton Linac (SPL) is the project for a superconducting, high current H-accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis for the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. In calculation, scripting language in HFSS was used to create scripts to automatically calculate the parameters of modes in these cavity models (these scripts are also available in other cavities with different cell numbers and geometric structures). The results calculated automatically are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.
NASA Astrophysics Data System (ADS)
Seifert, Merlin; Theisen, Werner
2016-12-01
In this work, martensite start temperatures of several martensitic stainless steels containing different amounts and types of carbides were calculated by means of thermodynamic equilibrium calculations. Two different equations were introduced into the Thermo-Calc® software. The calculations were performed for the respective compositions at austenitization temperature and compared to martensite start temperatures measured using a quenching dilatometer. The purpose was to estimate hardenability and hardness of newly developed steels. Even though the equations used were determined empirically for specific alloying systems, general trends for the investigated steels were found to be reproduced very well. Thus, the comparison of martensite start temperatures of different steels in comparable alloying systems is highly effective for modeling new steels and for predicting their hardenability.
Microbial Communities Model Parameter Calculation for TSPA/SR
D. Jolley
2001-07-16
This calculation has several purposes. First the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to useable parameters required as input to MING V1.O (CRWMS M&O 1998, CSCI 30018 V1.O) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supercede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow {Delta}G (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
[Construction of model for calculating and recording neonatal weight percentiles].
González González, N L; González Dávila, E; García Hernández, J A; Cabrera Morales, F; Padrón, E; Domenech, E
2014-02-01
To construct a model for calculating optimal foetal and neonatal weight curves with a method that allows automatic calculation of the percentile and sequential recording of results. A model was constructed for calculating optimal weight and the corresponding percentiles for gestational age and sex from a sample of 23,578 newborns, after excluding cases with diseases. Birth weight was modelled using stepwise multiple regression analysis. Newborns were classified as small or large for gestational age (SGA or LGA) using the proposed model. The resulting classification was compared with those derived from other models designed for Spanish children. Optimal weight model: 3,311.062+68.074 *sex+143.267 *GE40 -13.481 *GE40(2) - 0.797 *GE40(3)+sex* (5.528 *GE40 - 0.674 *GE40(2) - 0.064 *GE40(3)). (GE, gestational age). Weight percentiles were obtained from standardized data using the coefficient of variation of the optimal weight. The degree of agreement between our model classification and those of the Carrascosa model and Ramos model, with empirical and smooth percentiles, was "almost perfect" (κ=0.866, κ=0.872, and κ=0.876 (P<.001), respectively), and between our model and that proposed by Figueras it was "substantial" (κ=0.720, P<.001). The new model is comparable to those used for Spanish children and allows accurate, updated automatic percentile calculation for gestational age and sex. The results can be digitally stored to track longitudinal foetal growth. Free access to the model is offered, together with the possibility of automatic calculation of foetal and neonatal weight percentiles. Copyright © 2012 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
Statistical Model Calculations for (n,γ) Reactions
NASA Astrophysics Data System (ADS)
Beard, Mary; Uberseder, Ethan; Wiescher, Michael
2015-05-01
Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies, to medical applications, and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known however are the effects on calculations originating from model-specific implementation details (such as level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects or these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
Neutron Capture Cross Section Calculations with the Statistical Model
NASA Astrophysics Data System (ADS)
Beard, Mary; Uberseder, Ethan; Wiescher, Michael
2014-09-01
Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies, to medical applications, and nuclear astrophysics. It is a well observed result that different nuclear input models sensitively affect HF cross section calculations. Less well-known however are the effects on calculations originating from model-specific implementation details (such as level density parameter, matching energy, backshift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects or these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
Explicit solvent models in protein pKa calculations.
Gibas, C J; Subramaniam, S
1996-07-01
Continuum methods for calculation of protein electrostatics treat buried and ordered water molecules by one of two approximations; either the dielectric constant of regions containing ordered water molecules is equal to the bulk solvent dielectric constant, or it is equal to the protein dielectric constant though no fixed atoms are used to represent water molecules. A method for calculating the titration behavior of individual residues in proteins has been tested on models of hen egg white lysozyme containing various numbers of explicit water molecules. Water molecules were included based on hydrogen bonding, solvent accessibility, and/or proximity to titrating groups in the protein. Inclusion of water molecules significantly alters the calculated titration behavior of individual titrating sites, shifting calculated pKa values by up to 0.5 pH unit. Our results suggest that approximately one water molecule within hydrogen-bonding distance of each charged group should be included in protein electrostatics calculations.
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single,l/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single,l/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
Calculating osmotic pressure according to nonelectrolyte Wilson nonrandom factor model.
Li, Hui; Zhan, Tingting; Zhan, Xiancheng; Wang, Xiaolan; Tan, Xiaoying; Guo, Yiping; Li, Chengrong
2014-08-01
Abstract The osmotic pressure of NaCl solutions was determined by the air humidity in equilibrium (AHE) method. The relationship between the osmotic pressure and the concentration was explored theoretically, and the osmotic pressure was calculated according to the nonelectrolyte Wilson nonrandom factor (N-Wilson-NRF) model from the concentration. The results indicate that the calculated osmotic pressure is comparable to the measured one.
Low energy dipole strength from large scale shell model calculations
NASA Astrophysics Data System (ADS)
Sieja, Kamila
2017-09-01
Low energy enhancement of radiative strength functions has been deduced from experiments in several mass regions of nuclei. Such an enhancement is believed to impact the calculated neutron capture rates which are crucial input for reaction rates of astrophysical interest. Recently, shell model calculations have been performed to explain the upbend of the γ-strength as due to the M1 transitions between close-lying states in the quasi-continuum in Fe and Mo nuclei. Beyond mean-↓eld calculations in Mo suggested, however, a non-negligible role of electric dipole in the low energy enhancement. So far, no calculations of both dipole components within the same theoretical framework have been presented in this context. In this work we present newly developed large scale shell model appraoch that allows to treat on the same footing natural and non-natural parity states. The calculations are performed in a large sd - pf - gds model space, allowing for 1p{1h excitations on the top of the full pf-shell con↓guration mixing. We restrict the discussion to the magnetic part of the dipole strength, however, we calculate for the ↓rst time the magnetic dipole strength between states built of excitations going beyond the classical shell model spaces. Our results corroborate previous ↓ndings for the M1 enhancement for the natural parity states while we observe no enhancement for the 1p{1h contributions. We also discuss in more detail the e↑ects of con↓guration mixing limitations on the enhancement coming out from shell model calculations.
Separated transonic airfoil flow calculations with a nonequilibrium turbulence model
NASA Technical Reports Server (NTRS)
King, L. S.; Johnson, D. A.
1985-01-01
Navier-Stokes transonic airfoil calculations based on a recently developed nonequilibrium, turbulence closure model are presented for a supercritical airfoil section at transonic cruise conditions and for a conventional airfoil section at shock-induced stall conditions. Comparisons with experimental data are presented which show that this nonequilibrium closure model performs significantly better than the popular Baldwin-Lomax and Cebeci-Smith equilibrium algebraic models when there is boundary-layer separation that results from the inviscid-viscous interactions.
Proton Diffusion Model for High-Throughput Calculations
NASA Astrophysics Data System (ADS)
Wisesa, Pandu; Mueller, Tim
2013-03-01
Solid oxide fuel cells (SOFCs) have many advantages over other fuel cells with high efficiency, myriad fuel choices, and low cost. The main issue however is the high operating temperature of SOFCs, which can be lowered by using an electrolyte material with high ionic conductivity, such as proton conducting oxides. Our goal is to identify promising proton-conducting materials in a manner that is time and cost efficient through the utilization of high-throughput calculations. We present a model for proton diffusion developed using machine learning techniques with training data that consists of density functional theory (DFT) calculations on various metal oxides. The built model is tested against other DFT results to see how it performs. The results of the DFT calculations and how the model fares are discussed, with focus on hydrogen diffusion pathways inside the bulk material.
Strange hadronic loops of the proton: A quark model calculation
Paul Geiger; Nathan Isgur
1996-10-01
Nontrivial q{anti q} sea effects have their origin in the low-Q{sup 2} dynamics of strong QCD. The authors present here a quark model calculation of the contribution of s{anti s} pairs arising from a complete set of OZI-allowed strong Y{sup *}K{sup *} hadronic loops to the net spin of the proton, to its charge radius, and to its magnetic moment. The calculation is performed in an ``unquenched quark model'' which has been shown to preserve the spectroscopic successes of the naive quark model and to respect the OZI rule. They speculate that an extension of the calculation to the nonstrange sea will show that most of the ``missing spin'' of the proton is in orbital angular momenta.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focussing on the question of absorption of solar radiation by gases and aerosols.
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
Urban, Miroslav; Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína; Hobza, Pavel
2015-01-22
We present results using the OVOS approach (Optimized Virtual Orbitals Space) aimed at enhancing the effectiveness of the Coupled Cluster calculations. This approach allows to reduce the total computer time required for large-scale CCSD(T) calculations about ten times when the original full virtual space is reduced to about 50% of its original size without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization it allows calculations which were formerly prohibitively too demanding. We focused ourselves to accurate calculations of the hydrogen bonded and the stacking interactions of the model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine – guanine tetramer are presented. Other applications, as the electron affinity of the uracil affected by solvation are also shortly mentioned.
NASA Astrophysics Data System (ADS)
McKemmish, Laura K.; Yurchenko, Sergei N.; Tennyson, Jonathan
2016-11-01
Accurate knowledge of the rovibronic near-infrared and visible spectra of vanadium monoxide (VO) is very important for studies of cool stellar and hot planetary atmospheres. Here, the required ab initio dipole moment and spin-orbit coupling curves for VO are produced. This data forms the basis of a new VO line list considering 13 different electronic states and containing over 277 million transitions. Open shell transition, metal diatomics are challenging species to model through ab initio quantum mechanics due to the large number of low-lying electronic states, significant spin-orbit coupling and strong static and dynamic electron correlation. Multi-reference configuration interaction methodologies using orbitals from a complete active space self-consistent-field (CASSCF) calculation are the standard technique for these systems. We use different state-specific or minimal-state CASSCF orbitals for each electronic state to maximise the calculation accuracy. The off-diagonal dipole moment controls the intensity of electronic transitions. We test finite-field off-diagonal dipole moments, but found that (1) the accuracy of the excitation energies were not sufficient to allow accurate dipole moments to be evaluated and (2) computer time requirements for perpendicular transitions were prohibitive. The best off-diagonal dipole moments are calculated using wavefunctions with different CASSCF orbitals.
Preliminary Modulus and Breakage Calculations on Cellulose Models
USDA-ARS?s Scientific Manuscript database
The Young’s modulus of polymers can be calculated by stretching molecular models with the computer. The molecule is stretched and the derivative of the changes in stored potential energy for several displacements, divided by the molecular cross-section area, is the stress. The modulus is the slope o...
Errors in infiltration volume calculations in volume balance models
USDA-ARS?s Scientific Manuscript database
Volume balance models of surface irrigation calculate the infiltrated volume at a given time as a product of the stream length, upstream infiltration, and shape factors. The best known expression of this type was derived by combining the Lewis-Milne equation with empirical power-law expressions for ...
Teaching Modelling Concepts: Enter the Pocket-Size Programmable Calculator.
ERIC Educational Resources Information Center
Gaar, Kermit A., Jr.
1980-01-01
Addresses the problem of the failure of students to see a physiological system in an integrated way. Programmable calculators armed with a printer are suggested as useful teaching devices that avoid the expense and the unavailability of computers for modelling in teaching physiology. (Author/SA)
The role of hand calculations in ground water flow modeling.
Haitjema, Henk
2006-01-01
Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time to learn about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoids unnecessary complexity, improves reliability, and is likely to save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
Microscopic Shell Model Calculations for the Fluorine Isotopes
NASA Astrophysics Data System (ADS)
Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Vary, James P.; Shirokov, Andrey M.
2015-10-01
Using a formalism based on the No Core Shell Model (NCSM), we have determined miscroscopically the core and single-particle energies and the effective two-body interactions that are the input to standard shell model (SSM) calculations. The basic idea is to perform a succession of a Okubo-Lee-Suzuki (OLS) transformation, a NCSM calculation, and a second OLS transformation to a further reduced space, such as the sd-shell, which allows the separation of the many-body matrix elements into an ``inert'' core part plus a few valence-nucleons calculation. In the present investigation we use this technique to calculate the properties of the nuclides in the Fluorine isotopic chain, using the JISP16 nucleon-nucleon interaction. The obtained SSM input, along with the results of the SSM calculations for the Fluorine isotopes, will be presented. This work supported in part by TUBITAK-BIDEB, the US DOE, the US NSF, NERSC, and the Russian Ministry of Education and Science.
Evaluating bifactor models: Calculating and interpreting statistical indices.
Rodriguez, Anthony; Reise, Steven P; Haviland, Mark G
2016-06-01
Bifactor measurement models are increasingly being applied to personality and psychopathology measures (Reise, 2012). In this work, authors generally have emphasized model fit, and their typical conclusion is that a bifactor model provides a superior fit relative to alternative subordinate models. Often unexplored, however, are important statistical indices that can substantially improve the psychometric analysis of a measure. We provide a review of the particularly valuable statistical indices one can derive from bifactor models. They include omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and percentage of uncontaminated correlations. We describe how these indices can be calculated and used to inform: (a) the quality of unit-weighted total and subscale score composites, as well as factor score estimates, and (b) the specification and quality of a measurement model in structural equation modeling. (PsycINFO Database Record
RCS calculations of 3-dimensional objects, modeled by CAD
NASA Astrophysics Data System (ADS)
Deleeneer, I.; Schweicher, E.; Barel, A.
1991-09-01
All the steps are detailed that one has to perform to enable efficient Radar Cross Section calculation for objects with a complex and general shape. Only cavities are supposed to be nonexistent at this state of the work. Before the actual RCS calculations, preliminary treatments like systematic modeling, Hidden Faces removal, and automatic recognition of reflection and diffraction centers are realized. After the creation of the object's geometry and its adaptation to the direction of the observator, Physical Optics (PO) was used to determine the backscattered field, and Geometrical Theory of Diffraction (GTD) was used to evaluate the diffracted fields. Only monostatic scattering (i.e., backscattering) is considered.
Noise calculation on the basis of vortex flow models
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1977-01-01
A technique for noise calculation on the basis of vortex flow models is described. The 'reflection principle' is first extended to the whole class of potential flows which may be solved by the method of images. This allows the sound radiation to be computed solely through a volume integral over both the exterior and interior of any surfaces which may be present. The source distribution is then rewritten in terms of the vorticity within the flow which yields a highly computationally efficient formulation of the aeroacoustic theory. Several examples of such noise calculations are included.
Shell-model calculations of nuclei around mass 130
NASA Astrophysics Data System (ADS)
Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Odahara, A.
2015-09-01
Shell-model calculations are performed for even-even, odd-mass, and doubly-odd nuclei of Sn, Sb, Te, I, Xe, Cs, and Ba isotopes around mass 130 using the single-particle space made up of valence nucleons occupying the 0 g7 /2 ,1 d5 /2 ,2 s1 /2 ,0 h11 /2 , and 1 d3 /2 orbitals. The calculated energies and electromagnetic transitions are compared with the experimental data. In addition, several typical isomers in this region are investigated.
OV and hard X-rays, observations and model calculations
NASA Technical Reports Server (NTRS)
Poland, A. I.; Mariska, J. T.
1986-01-01
An amalgamation of two published works that discuss the observation and theoretical calculations of OV (T approx. 250,000K) and Hard X-rays (30 to 100keV) emitted during flares are presented. The papers are by Poland et al (1984) and Mariska and Poland (1985). The observations of Hard X-rays and OV show that the excitation processes for each type of emission are closely coupled. Except for small differences the two types of emission rise and fall together during a flare. Model calculations are able to reproduce this behavior to a large extent, only when conductive processes do not dominate the energy transport processes.
The viscosity of magmatic silicate liquids: A model for calculation
NASA Technical Reports Server (NTRS)
Bottinga, Y.; Weill, D. F.
1971-01-01
A simple model has been designed to allow reasonably accurate calculations of viscosity as a function of temperature and composition. The problem of predicting viscosities of anhydrous silicate liquids has been investigated since such viscosity numbers are applicable to many extrusive melts and to nearly dry magmatic liquids in general. The fluidizing action of water dissolved in silicate melts is well recognized and it is now possible to predict the effect of water content on viscosity in a semiquantitative way. Water was not incorporated directly into the model. Viscosities of anhydrous compositions were calculated, and, where necessary, the effect of added water and estimated. The model can be easily modified to incorporate the effect of water whenever sufficient additional data are accumulated.
Calculations of multiquark functions in effective models of strong interaction
Jafarov, R. G.; Rochev, V. E.
2013-09-15
In this paper we present our results of the investigation of multiquark equations in the Nambu-Jona-Lasinio model with chiral symmetry of SU(2) group in the mean-field expansion. To formulate the mean-field expansion we have used an iteration scheme of solution of the Schwinger-Dyson equations with the fermion bilocal source. We have considered the equations for Green functions of the Nambu-Jona-Lasinio model up to third step for this iteration scheme. To calculate the high-order corrections to the mean-field approximation, we propose the method of the Legendre transformation with respect to the bilocal source, which allows effectively to take into account the symmetry constraints related with the chiral Ward identity. We discuss also the problem of calculating the multiquark functions in the mean-field expansion for Nambu-Jona-Lasinio-type models with other types of the multifermion sources.
Feasibility of supersonic diode pumped alkali lasers: Model calculations
Barmashenko, B. D.; Rosenwaks, S.
2013-04-08
The feasibility of supersonic operation of diode pumped alkali lasers (DPALs) is studied for Cs and K atoms applying model calculations, based on a semi-analytical model previously used for studying static and subsonic flow DPALs. The operation of supersonic lasers is compared with that measured and modeled in subsonic lasers. The maximum power of supersonic Cs and K lasers is found to be higher than that of subsonic lasers with the same resonator and alkali density at the laser inlet by 25% and 70%, respectively. These results indicate that for scaling-up the power of DPALs, supersonic expansion should be considered.
Investigation of Transformer Model for TRV Calculation by EMTP
NASA Astrophysics Data System (ADS)
Thein, Myo Min; Ikeda, Hisatoshi; Harada, Katsuhiko; Ohtsuka, Shinya; Hikita, Masayuki; Haginomori, Eiichi; Koshiduka, Tadashi
Analysis of the EMTP transformer model was performed with the 4kVA two windings low voltage transformer with the current injection (CIJ) measurement method to study a transient recovery voltage (TRV) at the transformer limited fault (TLF) current interrupting condition. Tested transformer's impedance was measured by the frequency response analyzer (FRA). From FRA measurement graphs leakage inductance, stray capacitance and resistance were calculated. The EMTP transformer model was constructed with those values. The EMTP simulation was done for a current injection circuit by using transformer model. The experiment and simulation results show a reasonable agreement.
Revised method for calculating cloud densities in equilibrium models
NASA Astrophysics Data System (ADS)
Wong, M. H.; Atreya, S. K.; Kuhn, W. R.
2013-12-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are simple but still useful for several reasons. They calculate the wet adiabatic lapse rate, they determine saturation-limited mixing ratios of condensing species, and they calculate the stabilizing effect of latent heat release and molecular weight stratification. Equilibrium cloud condensation models (ECCMs) also calculate a type of condensate density---a condensate "unit density"---that only equates to cloud density under specific circumstances, because microphysics and dynamics are not considered in ECCMs. Unit densities are calculated for every model altitude by requiring that condensed material remains at the level where it condenses. Many ECCMs in use trace their heritage to Weidenschilling and Lewis (1973; Icarus 20, 465--476; hereafter WL73), which contains an error that affects only the calculation of condensate unit density. The error led to densities too high by a factor of the atmospheric scale height divided by unit length, which is about 3x10^6 at Jupiter's ammonia cloud level. We will describe the condensate unit density calculation error in WL73, and provide a new algorithm based on the local change in vapor mixing ratio, rather than the difference between integrated column masses as in WL73. The new algorithm satisfies conservation of mass. Using a simple scaling law to parameterize dynamics in terms of updraft speed and duration, condensate unit densities from ECCMs can be converted to cloud densities. We validate the technique for the terrestrial case, by comparing model predictions with representative densities of cirrus and cumulus clouds. For cirrus and cumulus updraft parameters, respectively, we find cloud densities of 0.01--0.2 g m-3 and 0.8--7 g m-3, in excellent agreement with observations and models of terrestrial clouds of these types. Implications for models of planetary and exoplanetary atmospheres will be discussed. [This material is based upon
[Calculating the intrinsic growth rate: comparison of definition and model].
Voronov, D A
2005-01-01
It was shown that well known equation r = ln[N(t2)/N(t1)]/(t2 - t1) is the definition of the average value of intrinsic growth rate of population r within any given interval of time t2-t1 and changing arbitrarity its numbers N(t). The common opinion considering the equation as suitable only for exponentially growing population was found to be incorrect. The fundamentally different approach is based on the calculation of r within the framework of demographic model, realized as Euler - Lotka equation or population projection matrices. However this model requires simultaneous realization of several assumptions improbable for natural populations: exponential change in population size, stable age structure and maintaining constant age-dependent birth and death rates. The calculation of r by definition requires the data on the dynamics of population numbers, whereas calculation on the basis of the model requires the demographic tables of birth and death rate, but not the population numbers. With the example of American ginseng it was shown that evalution of r by definition and model approaches could produce opposite results.
Optical model calculations of heavy-ion target fragmentation
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.
1986-01-01
The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.
Microscopic Shell Model Calculations for sd-Shell Nuclei
NASA Astrophysics Data System (ADS)
Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Shirokov, Andrey M.; Smirnova, Nadya A.; Vary, James P.
Several techniques now exist for performing detailed and accurate calculations of the structure of light nuclei, i.e., A ≤ 16. Going to heavier nuclei requires new techniques or extensions of old ones. One of these is the so-called No Core Shell Model (NCSM) with a Core approach, which involves an Okubo-Lee-Suzuki (OLS) transformation of a converged NCSM result into a single major shell, such as the sd-shell. The obtained effective two-body matrix elements can be separated into core and single-particle (s.p.) energies plus residual two-body interactions, which can be used for performing standard shell-model (SSM) calculations. As an example, an application of this procedure will be given for nuclei at the beginning ofthe sd-shell.
ILNCSIM: improved lncRNA functional similarity calculation model
You, Zhu-Hong; Huang, De-Shuang; Chan, Keith C.C.
2016-01-01
Increasing observations have indicated that lncRNAs play a significant role in various critical biological processes and the development and progression of various human diseases. Constructing lncRNA functional similarity networks could benefit the development of computational models for inferring lncRNA functions and identifying lncRNA-disease associations. However, little effort has been devoted to quantifying lncRNA functional similarity. In this study, we developed an Improved LNCRNA functional SIMilarity calculation model (ILNCSIM) based on the assumption that lncRNAs with similar biological functions tend to be involved in similar diseases. The main improvement comes from the combination of the concept of information content and the hierarchical structure of disease directed acyclic graphs for disease similarity calculation. ILNCSIM was combined with the previously proposed model of Laplacian Regularized Least Squares for lncRNA-Disease Association to further evaluate its performance. As a result, new model obtained reliable performance in the leave-one-out cross validation (AUCs of 0.9316 and 0.9074 based on MNDR and Lnc2cancer databases, respectively), and 5-fold cross validation (AUCs of 0.9221 and 0.9033 for MNDR and Lnc2cancer databases), which significantly improved the prediction performance of previous models. It is anticipated that ILNCSIM could serve as an effective lncRNA function prediction model for future biomedical researches. PMID:27028993
ILNCSIM: improved lncRNA functional similarity calculation model.
Huang, Yu-An; Chen, Xing; You, Zhu-Hong; Huang, De-Shuang; Chan, Keith C C
2016-05-03
Increasing observations have indicated that lncRNAs play a significant role in various critical biological processes and the development and progression of various human diseases. Constructing lncRNA functional similarity networks could benefit the development of computational models for inferring lncRNA functions and identifying lncRNA-disease associations. However, little effort has been devoted to quantifying lncRNA functional similarity. In this study, we developed an Improved LNCRNA functional SIMilarity calculation model (ILNCSIM) based on the assumption that lncRNAs with similar biological functions tend to be involved in similar diseases. The main improvement comes from the combination of the concept of information content and the hierarchical structure of disease directed acyclic graphs for disease similarity calculation. ILNCSIM was combined with the previously proposed model of Laplacian Regularized Least Squares for lncRNA-Disease Association to further evaluate its performance. As a result, new model obtained reliable performance in the leave-one-out cross validation (AUCs of 0.9316 and 0.9074 based on MNDR and Lnc2cancer databases, respectively), and 5-fold cross validation (AUCs of 0.9221 and 0.9033 for MNDR and Lnc2cancer databases), which significantly improved the prediction performance of previous models. It is anticipated that ILNCSIM could serve as an effective lncRNA function prediction model for future biomedical researches.
Influence of saturation properties on shell-model calculations
NASA Astrophysics Data System (ADS)
Abzouzi, A.; Caurier, E.; Zuker, A. P.
1991-03-01
It is shown that the nuclear Hamiltonian scrH separates rigorously into a monopole field scrHm and a multipole part scrHm. scrHm is entirely responsible for saturation properties and can be treated phenomenologically with few parameters. When realistic interactions are used for scrHM in regions from the p shell to the N=82 isotones, shell-model calculations yield excellent spectroscopy and demand nuclear radii very close to the observed ones.
A model of the cell nucleus for DNA damage calculations.
Nikjoo, Hooshang; Girard, Peter
2012-01-01
Development of a computer model of genomic deoxyribonucleic acid (DNA) in the human cell nucleus for DNA damage and repair calculations. The model comprises the human genomic DNA, chromosomal domains, and loops attached to factories. A model of canonical B-DNA was used to build the nucleosomes and the 30-nanometer solenoidal chromatin. In turn the chromatin was used to form the loops of factories in chromosome domains. The entire human genome was placed in a spherical nucleus of 10 micrometers diameter. To test the new target model, tracks of protons and alpha-particles were generated using Monte Carlo track structure codes PITS99 (Positive Ion Track Structure) and KURBUC. Damage sites induced in the genome were located and classified according to type and complexity. The three-dimensional structure of the genome starting with a canonical B-DNA model, nucleosomes, and chromatin loops in chromosomal domains are presented. The model was used to obtain frequencies of DNA damage induced by protons and alpha-particles by direct energy deposition, including single- and double-strand breaks, base damage, and clustered lesions. This three-dimensional model of the genome is the first such model using the full human genome for the next generation of more comprehensive modelling of DNA damage and repair. The model combines simple geometrical structures at the level of domains and factories with potentially full detail at the level of atoms in particular genes, allowing damage patterns in the latter to be simulated.
Effect of molecular models on viscosity and thermal conductivity calculations
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.; Alexeenko, Alina A.
2014-12-01
The effect of molecular models on viscosity and thermal conductivity calculations is investigated. The Direct Simulation Monte Carlo (DSMC) method for rarefied gas flows is used to simulate Couette and Fourier flows as a means of obtaining the transport coefficients. Experimental measurements for argon (Ar) provide a baseline for comparison over a wide temperature range of 100-1,500 K. The variable hard sphere (VHS), variable soft sphere (VSS), and Lennard-Jones (L-J) molecular models have been implemented into a parallel version of Bird's one-dimensional DSMC code, DSMC1, and the model parameters have been recalibrated to the current experimental data set. While the VHS and VSS models only consider the short-range, repulsive forces, the L-J model also includes constributions from the long-range, dispersion forces. Theoretical results for viscosity and thermal conductivity indicate the L-J model is more accurate than the VSS model; with maximum errors of 1.4% and 3.0% in the range 300-1,500 K for L-J and VSS models, respectively. The range of validity of the VSS model is extended to 1,650 K through appropriate choices for the model parameters.
Calculating Free Energy Changes in Continuum Solvation Models
Ho, Junming; Ertem, Mehmed Z.
2016-02-27
We recently showed for a large dataset of pKas and reduction potentials that free energies calculated directly within the SMD continuum model compares very well with corresponding thermodynamic cycle calculations in both aqueous and organic solvents (Phys. Chem. Chem. Phys. 2015, 17, 2859). In this paper, we significantly expand the scope of our study to examine the suitability of this approach for the calculation of general solution phase kinetics and thermodynamics, in conjunction with several commonly used solvation models (SMDM062X, SMD-HF, CPCM-UAKS, and CPCM-UAHF) for a broad range of systems and reaction types. This includes cluster-continuum schemes for pKa calculations,more » as well as various neutral, radical and ionic reactions such as enolization, cycloaddition, hydrogen and chlorine atom transfer, and bimolecular SN2 and E2 reactions. On the basis of this benchmarking study, we conclude that the accuracies of both approaches are generally very similar – the mean errors for Gibbs free energy changes of neutral and ionic reactions are approximately 5 kJ mol-1 and 25 kJ mol-1 respectively. In systems where there are significant structural changes due to solvation, as is the case for certain ionic transition states and amino acids, the direct approach generally afford free energy changes that are in better agreement with experiment. The results indicate that when appropriate combinations of electronic structure methods are employed, the direct approach provides a reliable alternative to the thermodynamic cycle calculations of solution phase kinetics and thermodynamics across a broad range of organic reactions.« less
Calculating Free Energy Changes in Continuum Solvation Models
Ho, Junming; Ertem, Mehmed Z.
2016-02-27
We recently showed for a large dataset of pK_{a}s and reduction potentials that free energies calculated directly within the SMD continuum model compares very well with corresponding thermodynamic cycle calculations in both aqueous and organic solvents (Phys. Chem. Chem. Phys. 2015, 17, 2859). In this paper, we significantly expand the scope of our study to examine the suitability of this approach for the calculation of general solution phase kinetics and thermodynamics, in conjunction with several commonly used solvation models (SMDM062X, SMD-HF, CPCM-UAKS, and CPCM-UAHF) for a broad range of systems and reaction types. This includes cluster-continuum schemes for pK_{a} calculations, as well as various neutral, radical and ionic reactions such as enolization, cycloaddition, hydrogen and chlorine atom transfer, and bimolecular SN2 and E2 reactions. On the basis of this benchmarking study, we conclude that the accuracies of both approaches are generally very similar – the mean errors for Gibbs free energy changes of neutral and ionic reactions are approximately 5 kJ mol^{-1} and 25 kJ mol^{-1} respectively. In systems where there are significant structural changes due to solvation, as is the case for certain ionic transition states and amino acids, the direct approach generally afford free energy changes that are in better agreement with experiment. The results indicate that when appropriate combinations of electronic structure methods are employed, the direct approach provides a reliable alternative to the thermodynamic cycle calculations of solution phase kinetics and thermodynamics across a broad range of organic reactions.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of and the learning process in a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate manybody problem because the firing probability of a neuron at a time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation is recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Comparison of SAMO and ab initio model calculations for pyrazine
NASA Astrophysics Data System (ADS)
Duke, B. J.; Collins, M. P. S.
1981-04-01
The simulated ab initio molecular orbital (SAMO) technique and the ab initio model calculation of Butkus and Fink are compared for the pyrazine molecule. Both methods construct the wave function of pyrazine from wave functions of smaller pattern molecules. The methods are complimentary in that the strengths of one are often the weakness of the other. The SAMO method gives good orbital energies which are not given by the ab initio model method, while the latter is more readily extended to the ionic protonated molecules.
Freeway Travel Speed Calculation Model Based on ETC Transaction Data
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
Real-time traffic flow operation condition of freeway gradually becomes the critical information for the freeway users and managers. In fact, electronic toll collection (ETC) transaction data effectively records operational information of vehicles on freeway, which provides a new method to estimate the travel speed of freeway. First, the paper analyzed the structure of ETC transaction data and presented the data preprocess procedure. Then, a dual-level travel speed calculation model was established under different levels of sample sizes. In order to ensure a sufficient sample size, ETC data of different enter-leave toll plazas pairs which contain more than one road segment were used to calculate the travel speed of every road segment. The reduction coefficient α and reliable weight θ for sample vehicle speed were introduced in the model. Finally, the model was verified by the special designed field experiments which were conducted on several freeways in Beijing at different time periods. The experiments results demonstrated that the average relative error was about 6.5% which means that the freeway travel speed could be estimated by the proposed model accurately. The proposed model is helpful to promote the level of the freeway operation monitoring and the freeway management, as well as to provide useful information for the freeway travelers. PMID:25580107
[Establishment of a mathematical model for calculating cochlear length].
Zhong, L L; Hao, Q Q; Ren, L L; Guo, W W; Yang, S M
2016-06-07
To compare the cochlear length of the miniature pig calculated by 3-dimensional reconstruction technique and an Archimedean spiral model, and to evaluate the feasibility of determining the length of the cochlea using a mathematical model. The temporal bones of three miniature pigs with normal hearing were selected and scanned by micro-CT. The pictures were input into Mimics software, the 3D structure of the inner ear was reconstructed, and the following parameters were determined through Mimics: cochlear length, diameter of each turn, cochlear height, and apical turn angle. The cochlear length was calculated using the Archimedean spiral model. The length of the cochlea was (35.30±0.88)mm based on the three-dimensional reconstruction technique compared to (34.85±0.64)mm based on the Archimedean spiral model. The differences between the two values were not statistically significant. The height of the cochlea is (2.64±0.24)mm. The capsule of the cochlea had 3.67 turns. The three-dimensional reconstruction technique provides accurate and reliable results, but the reconstruction process is time-consuming and is unsuitable for clinical application. The Archimedean spiral model method is simple, feasible, reliable, and therefore suitable for clinical applications, in particular to provide references for cochlear implantation surgeries.
A PROPOSED BENCHMARK PROBLEM FOR SCATTER CALCULATIONS IN RADIOGRAPHIC MODELLING
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-03
Code Validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid that compound errors compensate themselves, yields a quantitative result and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling, the scattered radiation prediction. Many NDT applications can not neglect scattered radiation, and the scatter calculation thus is important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data is compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first order scattering code (VXI), and confront them to results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte-Carlo calculations.
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation causing large errors in the system matrices for nearly cylindrical cones.
Hydrothermal hydration of Martian crust: illustration via geochemical model calculations
NASA Technical Reports Server (NTRS)
Griffith, L. L.; Shock, E. L.
1997-01-01
If hydrothermal Systems existed on Mars, hydration of crustal rocks may have had the potential to affect the water budget of the planet. We have conducted geochemical model calculations to investigate the relative roles of host rock composition, temperature, water-to-rock ratio, and initial fluid oxygen fugacity on the mineralogy of hydrothermal alteration assemblages, as well as the effectiveness of alteration to store water in the crust as hydrous minerals. In order to place calculations for Mars in perspective, models of hydrothermal alteration of three genetically related Icelandic volcanics (a basalt, andesite, and rhyolite) are presented, together with results for compositions based on SNC meteorite samples (Shergotty and Chassigny). Temperatures from 150 degrees C to 250 degrees C, water-to-rock ratios from 0.1 to 1000, and two initial fluid oxygen fugacities are considered in the models. Model results for water-to-rock ratios less than 10 are emphasized because they are likely to be more applicable to Mars. In accord with studies of low-grade alteration of terrestrial rocks, we find that the major controls on hydrous mineral production are host rock composition and temperature. Over the range of conditions considered, the alteration of Shergotty shows the greatest potential for storing water as hydrous minerals, and the alteration of Icelandic rhyolite has the lowest potential.
Hydrothermal hydration of Martian crust: illustration via geochemical model calculations.
Griffith, L L; Shock, E L
1997-04-25
If hydrothermal Systems existed on Mars, hydration of crustal rocks may have had the potential to affect the water budget of the planet. We have conducted geochemical model calculations to investigate the relative roles of host rock composition, temperature, water-to-rock ratio, and initial fluid oxygen fugacity on the mineralogy of hydrothermal alteration assemblages, as well as the effectiveness of alteration to store water in the crust as hydrous minerals. In order to place calculations for Mars in perspective, models of hydrothermal alteration of three genetically related Icelandic volcanics (a basalt, andesite, and rhyolite) are presented, together with results for compositions based on SNC meteorite samples (Shergotty and Chassigny). Temperatures from 150 degrees C to 250 degrees C, water-to-rock ratios from 0.1 to 1000, and two initial fluid oxygen fugacities are considered in the models. Model results for water-to-rock ratios less than 10 are emphasized because they are likely to be more applicable to Mars. In accord with studies of low-grade alteration of terrestrial rocks, we find that the major controls on hydrous mineral production are host rock composition and temperature. Over the range of conditions considered, the alteration of Shergotty shows the greatest potential for storing water as hydrous minerals, and the alteration of Icelandic rhyolite has the lowest potential.
A review of Higgs mass calculations in supersymmetric models
NASA Astrophysics Data System (ADS)
Draper, Patrick; Rzehak, Heidi
2016-03-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass in the Minimal Supersymmetric Standard Model, in particular the large radiative corrections required to lift mh to 125 GeV and their calculation via Feynman-diagrammatic and effective field theory techniques. This review is intended as an entry point for readers new to the field, and as a summary of the current status, including the existing analytic calculations and publicly-available computer codes.
Study on Calculation Model of Culvert Soil Pressure
NASA Astrophysics Data System (ADS)
Liu, Jing; Tian, Xiao-yan; Gao, Xiao-mei
2017-09-01
Culvert diseases are prevalent in highway engineering. There are many factors involved in the occurrence of the disease, and the problem is complex. However, the design cannot accurately determine the role of the soil pressure on the culvert is the main reason to the disease. Based on the theoretical analysis and field test, this paper studies the characteristics of the stress and deformation of the culvert-soil structure. According to the theory of soil mechanics, the calculation model of vertical soil pressure at the top of culvert is determined, and the formula of vertical soil pressure at the top of culvert is deduced. Through the field test of the vertical soil pressure at the top of culvert of several engineering examples, the calculation formula of this paper is verified, which can provide reference for future practical engineering.
A simple model of throughput calculation for single screw
NASA Astrophysics Data System (ADS)
Béreaux, Yves; Charmeau, Jean-Yves; Moguedet, Maël
2007-04-01
To be able to predict the throughput of a single-screw extruder or the metering time of an injection moulding machine for a given screw geometry, set of processing conditions and polymeric material is important both for practical and designing purposes. Our simple model show that the screw geometry is the most important parameter, followed by polymer rheology and processing conditions. Melting properties and length seem to intervene to a lesser extent. The calculations hinges on the idea of viewing the entire screw as a pump, conveying a solid and a molten fraction. The evolution of the solid fraction is the essence of the plastication process, but under particular circumstances, its influence on the throughput is nil. This allows us to get a very good estimate on the throughput and pressure development along the screw. Our calculations are compared to different sets of experiments available from the literature. We have consistent agreement both in throughput and pressure with published data.
2HDMC — two-Higgs-doublet model calculator
NASA Astrophysics Data System (ADS)
Eriksson, David; Rathsman, Johan; Stål, Oscar
2010-04-01
We describe version 1.0.6 of the public C++ code 2HDMC, which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z-symmetries or more general couplings, a decay library including all two-body — and some three-body — decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. New version program summaryProgram title: 2HDMC Catalogue identifier: AEFI_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL No. of lines in distributed program, including test data, etc.: 12 110 No. of bytes in distributed program, including test data, etc.: 92 731 Distribution format: tar.gz Programming language: C++ Computer: Any computer running Linux Operating system: Linux RAM: 5 Mb Catalogue identifier of previous version: AEFI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2010) 189 Classification: 11.1 External routines: GNU Scientific Library ( http://www.gnu.org/software/gsl/) Does the new version supersede the previous version?: Yes Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as trials for the non-Hodgkins lymphoma disease. The popularly used sample size formula derived under the proportional hazards (PH) model may not be proper to design a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test the differences in the short-term survival and/or cure fraction. Furthermore, we also investigate as numerical examples the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.
Model test and CFD calculation of a cavitating bulb turbine
NASA Astrophysics Data System (ADS)
Necker, J.; Aschenbrenner, T.
2010-08-01
The flow in a horizontal shaft bulb turbine is calculated as a two-phase flow with a commercial Computational Fluid Dynamics (CFD-)-code including cavitation model. The results are compared with experimental results achieved at a closed loop test rig for model turbines. On the model test rig, for a certain operating point (i.e. volume flow, net head, blade angle, guide vane opening) the pressure behind the turbine is lowered (i.e. the Thoma-coefficient σ is lowered) and the efficiency of the turbine is recorded. The measured values can be depicted in a so-called σ-break curve or η- σ-diagram. Usually, the efficiency is independent of the Thoma-coefficient up to a certain value. When lowering the Thoma-coefficient below this value the efficiency will drop rapidly. Visual observations of the different cavitation conditions complete the experiment. In analogy, several calculations are done for different Thoma-coefficients σand the corresponding hydraulic losses of the runner are evaluated quantitatively. For a low σ-value showing in the experiment significant efficiency loss, the the change of volume flow in the experiment was simulated. Besides, the fraction of water vapour as an indication of the size of the cavitation cavity is analyzed qualitatively. The experimentally and the numerically obtained results are compared and show a good agreement. Especially the drop in efficiency can be calculated with satisfying accuracy. This drop in efficiency is of high practical importance since it is one criterion to determine the admissible cavitation in a bulb-turbine. The visual impression of the cavitation in the CFD-analysis is well in accordance with the observed cavitation bubbles recorded on sketches and/or photographs.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has been one of the most important content in photogrammetry. It aims at the position and attitude information of camera at the shooting point. However in some cases, the observed values for calculating are with gross errors. This paper presents a robust algorithm that using RANSAC method with DLT model can effectually avoiding the difficulties to determine initial values when using co-linear equation. The results also show that our strategies can exclude crude handicap and lead to an accurate and efficient way to gain elements of exterior orientation.
Aerosol activation: parameterised versus explicit calculation for global models
NASA Astrophysics Data System (ADS)
Tost, H.; Pringle, K.; Metzger, S.; Lelieveld, J.
2009-04-01
A key process in studies of the aerosol indirect effects on clouds is the activation of particles into droplets at 100% relative humidity. To model this process in cloud, meteorological and climate models is a difficult undertaking because of the wide range of scales involved. The chemical composition of the atmospheric aerosol, originating from both air pollution and natural sources, substantially impacts the aerosol water uptake and growth due to its hygroscopicity. In this study a comparison of aerosol activation, using state-of-the-art aerosol activation parameterisations, and explicit activation due to hygroscopic growth is performed.For that purpose we apply the GMXe aerosol model - treating both dynamic and thermodynamic aerosol properties - within the EMAC (ECHAM5/MESSy Atmospheric chemistry, an atmospheric chemistry general circulation) model. This new aerosol model can explicitely calculate the water uptake of aerosols due to hygroscopicity, allowing the growth of aerosol particles into the regimes of cloud droplets in case of sufficient water vapour availability. Global model simulations using both activation schemes will be presented and compared, elucidating the advantages of each approach.
Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose
NASA Technical Reports Server (NTRS)
Welton, Andrew; Lee, Kerry
2010-01-01
While in orbit, Astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model how shielding designs on spacecraft reduce radiation effective dose pre-flight, and determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account an absorbed dose of radiation and the biological effectiveness of ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte carlo simulations run with the particle transport code FLUKA, give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design, consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab. This allows them to be directly applicable to modern spacecraft shielding geometries.
Calculations of hot gas ingestion for a STOVL aircraft model
NASA Technical Reports Server (NTRS)
Fricker, David M.; Holdeman, James D.; Vanka, Surya P.
1992-01-01
Hot gas ingestion problems for Short Take-Off, Vertical Landing (STOVL) aircraft are typically approached with empirical methods and experience. In this study, the hot gas environment around a STOVL aircraft was modeled as multiple jets in crossflow with inlet suction. The flow field was calculated with a Navier-Stokes, Reynolds-averaged, turbulent, 3D computational fluid dynamics code using a multigrid technique. A simple model of a STOVL aircraft with four choked jets at 1000 K was studied at various heights, headwind speeds, and thrust splay angles in a modest parametric study. Scientific visualization of the computed flow field shows a pair of vortices in front of the inlet. This and other qualitative aspects of the flow field agree well with experimental data.
Model calculations of the dayside ionosphere of Venus - Energetics
NASA Technical Reports Server (NTRS)
Cravens, T. E.; Gombosi, T. I.; Kozyra, J.; Nagy, A. F.; Brace, L. H.; Knudsen, W. C.
1980-01-01
A model of the energy balance of the dayside ionosphere of Venus is presented. Calculations of the dayside electron and ion temperature profiles are carried out and compared with data from experiments on the Pioneer Venus orbiter. The coupled heat conduction equations for electrons and ions are solved for several values of the solar zenith angle. It is shown that thermal conductivities are inhibited by the presence of a horizontal magnetic field. A realistic model of the magnetic field that includes fluctuations is employed in deriving an appropriate expression for the thermal conductivity. The contributions of photoelectrons, ion chemistry, Joule heating, and solar wind heating to the energy balance of the ionosphere are considered.
Folding model calculations for 6He+12C elastic scattering
NASA Astrophysics Data System (ADS)
Awad, A. Ibraheem
2016-03-01
In the framework of the double folding model, we used the α+2n and di-triton configurations for the nuclear matter density of the 6He nucleus to generate the real part of the optical potential for the system 6He+12C. As an alternative, we also use the high energy approximation to generate the optical potential for the same system. The derived potentials are employed to analyze the elastic scattering differential cross section at energies of 38.3, 41.6 and 82.3 MeV/u. For the imaginary part of the potential we adopt the squared Woods-Saxon form. The obtained results are compared with the corresponding measured data as well as with available results in the literature. The calculated total reaction cross sections are investigated and compared with the optical limit Glauber model description.
Recent Developments in No-Core Shell-Model Calculations
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this aproach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states is given in the concluding part of the review.
Plasmon-pole models affect band gaps in GW calculations
NASA Astrophysics Data System (ADS)
Larson, Paul; Wu, Zhigang
2013-03-01
Density functional theory calculations have long been known to underestimate the band gaps in semiconductors. Significant improvements have been made by using GW calculations that uses the self energy, defined as the product of the Green function (G) and screened Coulomb exchange (W). However, many approximations are made in the GW method, specifically the plasmon-pole approximation. This approximation replaces the integration necessary to produce W with a simple approximation to the inverse dielectric function. Four different plasmon-pole approximations have been tested using the tight-binding program ABINIT: Godby-Needs, Hybertsen-Louie, von der Linden-Horsch, and Engel-Farid. For many materials, the differences in the GW band gaps for the different plasmon-pole models are negligible, but for systems with localized electrons, the difference can be larger than 1 eV. The plasmon-pole approximation is generally chosen to best agree with experimental data, but this is misleading in that this ignores all of the other approximations used in the GW method. Improvements in plasmon-pole models in GW can only come about by trying to reproduce the results of the numerical integration rather than trying to reproduce experimental results.
Nonlinear damping calculation in cylindrical gear dynamic modeling
NASA Astrophysics Data System (ADS)
Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc
2012-04-01
The nonlinear dynamic problem posed by cylindrical gear systems has been extensively covered in the literature. Nonetheless, a significant proportion of the mechanisms involved in damping generation remains to be investigated and described. The main objective of this study is to contribute to this task. Overall, damping is assumed to consist of three sources: surrounding element contribution, hysteresis of the teeth, and oil squeeze damping. The first two contributions are considered to be commensurate with the supported load; for its part however, squeeze damping is formulated using expressions developed from the Reynolds equation. A lubricated impact analysis between the teeth is introduced in this study for the minimum film thickness calculation during contact losses. The dynamic transmission error (DTE) obtained from the final model showed close agreement with experimental measurements available in the literature. The nonlinear damping ratio calculated at different mesh frequencies and torque amplitudes presented average values between 5.3 percent and 8 percent, which is comparable to the constant 8 percent ratio used in published numerical simulations of an equivalent gear pair. A close analysis of the oil squeeze damping evidenced the inverse relationship between this damping effect and the applied load.
Assessment of Some Atomization Models Used in Spray Calculations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Bulzin, Dan
2011-01-01
The paper presents the results from a validation study undertaken as a part of the NASA s fundamental aeronautics initiative on high altitude emissions in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As a part of this investigation we have undertaken the validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in some aerospace applications. The validation is based on some 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities but mostly underestimate the droplets sizes in the inner radial region of a cylindrical jet.
Quantum plasmonics: from jellium models to ab initio calculations
NASA Astrophysics Data System (ADS)
Varas, Alejandro; García-González, Pablo; Feist, Johannes; García-Vidal, F. J.; Rubio, Angel
2016-08-01
Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond the classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is a nanoparticle dimer, extensively studied using both classical and quantum prescriptions. However, only very recently, fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers have been carried out. Here, we review the recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool to simulate the time evolution of plasmonic modes, providing fundamental understanding into the underlying microscopical mechanisms.
The EOSTA model for opacities and EOS calculations
NASA Astrophysics Data System (ADS)
Barshalom, Avraham; Oreg, Joseph
2007-11-01
The EOSTA model developed recently combines the STA and INFERNO models to calculate opacities and EOS on the same footing. The quantum treatment of the plasma continuum and the inclusion of the resulted shape resonances yield a smooth behavior of the EOS and opacity global quantities vs density and temperature. We will describe the combined model and focus on its latest improvements. In particular we have extended the use of the special representation of the relativistic virial theorem to obtain an exact differential equation for the free energy. This equation, combined with a boundary condition at the zero pressure point, serves to advance the LDA EOS results significantly. The method focuses on applicability to high temperature and high density plasmas, warm dens matter etc. but applies at low temperatures as well treating fluids and even solids. Excellent agreement is obtained with experiments covering a wide range of density and temperature. The code is now used to create EOS and opacity databases for the use of hydro-dynamical simulations.
Ultrasonic energy in liposome production: process modelling and size calculation.
Barba, A A; Bochicchio, S; Lamberti, G; Dalmoro, A
2014-04-21
The use of liposomes in several fields of biotechnology, as well as in pharmaceutical and food sciences is continuously increasing. Liposomes can be used as carriers for drugs and other active molecules. Among other characteristics, one of the main features relevant to their target applications is the liposome size. The size of liposomes, which is determined during the production process, decreases due to the addition of energy. The energy is used to break the lipid bilayer into smaller pieces, then these pieces close themselves in spherical structures. In this work, the mechanisms of rupture of the lipid bilayer and the formation of spheres were modelled, accounting for how the energy, supplied by ultrasonic radiation, is stored within the layers, as the elastic energy due to the curvature and as the tension energy due to the edge, and to account for the kinetics of the bending phenomenon. An algorithm to solve the model equations was designed and the relative calculation code was written. A dedicated preparation protocol, which involves active periods during which the energy is supplied and passive periods during which the energy supply is set to zero, was defined and applied. The model predictions compare well with the experimental results, by using the energy supply rate and the time constant as fitting parameters. Working with liposomes of different sizes as the starting point of the experiments, the key parameter is the ratio between the energy supply rate and the initial surface area.
Relative Binding Free Energy Calculations Applied to Protein Homology Models.
Cappel, Daniel; Hall, Michelle Lynn; Lenselink, Eelke B; Beuming, Thijs; Qi, Jun; Bradner, James; Sherman, Woody
2016-12-27
A significant challenge and potential high-value application of computer-aided drug design is the accurate prediction of protein-ligand binding affinities. Free energy perturbation (FEP) using molecular dynamics (MD) sampling is among the most suitable approaches to achieve accurate binding free energy predictions, due to the rigorous statistical framework of the methodology, correct representation of the energetics, and thorough treatment of the important degrees of freedom in the system (including explicit waters). Recent advances in sampling methods and force fields coupled with vast increases in computational resources have made FEP a viable technology to drive hit-to-lead and lead optimization, allowing for more efficient cycles of medicinal chemistry and the possibility to explore much larger chemical spaces. However, previous FEP applications have focused on systems with high-resolution crystal structures of the target as starting points-something that is not always available in drug discovery projects. As such, the ability to apply FEP on homology models would greatly expand the domain of applicability of FEP in drug discovery. In this work we apply a particular implementation of FEP, called FEP+, on congeneric ligand series binding to four diverse targets: a kinase (Tyk2), an epigenetic bromodomain (BRD4), a transmembrane GPCR (A2A), and a protein-protein interaction interface (BCL-2 family protein MCL-1). We apply FEP+ using both crystal structures and homology models as starting points and find that the performance using homology models is generally on a par with the results when using crystal structures. The robustness of the calculations to structural variations in the input models can likely be attributed to the conformational sampling in the molecular dynamics simulations, which allows the modeled receptor to adapt to the "real" conformation for each ligand in the series. This work exemplifies the advantages of using all-atom simulation methods with
Full waveform modelling and misfit calculation using the VERCE platform
NASA Astrophysics Data System (ADS)
Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas
2016-04-01
simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.
A conceptual model for ray tracing calculations with mosaic crystals
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Bernstorff, S.; Savoia, A.; Cerrina, F.
1992-01-01
Mosaic crystals provide an interesting choice for synchrotron radiation monochromators under certain conditions. They show a wider and lower reflectivity curve than perfect crystals, but a higher integrated reflectivity. Some mosaic crystals such as graphite or beryllium could be considered as monochromators or premonochromators for third generation synchrotron radiation machines (Sincrotrone Trieste, European Synchrotron Radiation facility, etc.). In order to assess these possibilities, we have implemented a new mosaic crystal part in the ray tracing code SHADOW. The effect of the random distribution of the crystallites in a mosaic crystal can be analyzed efficiently with a Monte Carlo method. Taking into account the random distribution of the crystal planes, modeled as a Gaussian of standard deviation κ, it is possible to reproduce the well known focusing and defocusing properties of these crystals. For reflectivity calculations we have implemented in the computer code the Mosaic Crystals Theory of Zachariasen. As secondary extinction is not negligible in mosaic crystals, we have also included the penetration effect of the x-ray beam inside the crystal.
MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model
NASA Technical Reports Server (NTRS)
James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry Robert
2008-01-01
The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approx.84.5%) and alpha-particles (approx.14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.
Coarse grained model for calculating the ion mobility of hydrocarbons
NASA Astrophysics Data System (ADS)
Kuroboshi, Y.; Takemura, K.
2016-12-01
Hydrocarbons are widely used as insulating compounds. However, their fundamental characteristics in conduction phenomena are not completely understood. A great deal of effort is required to determine reasonable ionic behavior from experiments because of their complicated procedures and tight controls of the temperature and the purity of the liquids. In order to understand the conduction phenomena, we have theoretically calculated the ion mobilities of hydrocarbons and investigated their characteristics using the coarse grained model in molecular dynamics simulations. We assumed a molecule of hydrocarbons to be a bead and simulated its dependence on the viscosity, electric field, and temperature. Furthermore, we verified the suitability of the conformation, scale size, and long-range interactions for the ion mobility. The results of the simulations show that the ion mobility values agree reasonably well with the values from Walden's rule and depend on the viscosity but not on the electric field. The ion mobility and self-diffusion coefficient exponentially increase with increasing temperature, while the activation energy decreases with increasing molecular size. These values and characteristics of the ion mobility are in reasonable agreement with experimental results. In the future, we can understand not only the ion mobilies of hydrocarbons in conduction, but also we can predict general phenomena in electrochemistry with molecular dynamics simulations.
Effective Inflow Conditions for Turbulence Models in Aerodynamic Calculations
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Rumsey, Christopher L.
2007-01-01
The selection of inflow values at boundaries far upstream of an aircraft is considered, for one- and two-equation turbulence models. Inflow values are distinguished from the ambient values near the aircraft, which may be much smaller. Ambient values should be selected first, and inflow values that will lead to them after the decay second; this is not always possible, especially for the time scale. The two-equation decay during the approach to the aircraft is shown; often, the time scale has been set too short for this decay to be calculated accurately on typical grids. A simple remedy for both issues is to impose floor values for the turbulence variables, outside the viscous sublayer, and it is argued that overriding the equations in this manner is physically justified. Selecting laminar ambient values is easy, if the boundary layers are to be tripped, but a more common practice is to seek ambient values that will cause immediate transition in boundary layers. This opens up a wide range of values, and selection criteria are discussed. The turbulent Reynolds number, or ratio of eddy viscosity to laminar viscosity has a huge dynamic range that makes it unwieldy; it has been widely mis-used, particularly by codes that set upper limits on it. The value of turbulent kinetic energy in a wind tunnel or the atmosphere is also of dubious value as an input to the model. Concretely, the ambient eddy viscosity must be small enough to preserve potential cores in small geometry features, such as flap gaps. The ambient frequency scale should also be small enough, compared with shear rates in the boundary layer. Specific values are recommended and demonstrated for airfoil flows
Finite jellium models. I. Restricted Hartree-Fock calculations.
Ghosh, Sankha; Gill, Peter M W
2005-04-15
Restricted Hartree-Fock calculations have been performed on the Fermi configurations of n electrons confined within a cube. The self-consistent-field orbitals have been expanded in a basis of N particle-in-a-box wave functions. The difficult one- and two-electron integrals have been reduced to a small set of canonical integrals that are calculated accurately using quadrature. The total energy and exchange energy per particle converge smoothly toward their limiting values as n increases; the highest occupied molecular orbital-lowest unoccupied molecular orbital gap and Dirac coefficient converge erratically. However, the convergence in all cases is slow.
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.
National Stormwater Calculator - Version 1.1 (Model)
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...
Reliability calculation of rubber-metal damper using computer modeling
NASA Astrophysics Data System (ADS)
Tsyss, V. G.; Strokov, I. M.; Sergaeva, M. Y.
2017-08-01
The purpose of the work is to assess the reliability indexes of the rubber-metal shock damper of the seismic isolation system of the ball tank at the design stage using ANSYS software. An algorithm is given and the reliability calculation of the damper considering the probabilistic distribution of its strength and design parameters has been completed.
National Stormwater Calculator - Version 1.1 (Model)
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...
Perturbation theory calculations of model pair potential systems
Gong, Jianwu
2016-01-01
Helmholtz free energy is one of the most important thermodynamic properties for condensed matter systems. It is closely related to other thermodynamic properties such as chemical potential and compressibility. It is also the starting point for studies of interfacial properties and phase coexistence if free energies of different phases can be obtained. In this thesis, we will use an approach based on the Weeks-Chandler-Anderson (WCA) perturbation theory to calculate the free energy of both solid and liquid phases of Lennard-Jones pair potential systems and the free energy of liquid states of Yukawa pair potentials. Our results indicate that the perturbation theory provides an accurate approach to the free energy calculations of liquid and solid phases based upon comparisons with results from molecular dynamics (MD) and Monte Carlo (MC) simulations.
Calculation of single chain cellulose elasticity using fully atomistic modeling
Xiawa Wu; Robert J. Moon; Ashlie Martini
2011-01-01
Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...
Thermochemical data for CVD modeling from ab initio calculations
Ho, P.; Melius, C.F.
1993-12-31
Ab initio electronic-structure calculations are combined with empirical bond-additivity corrections to yield thermochemical properties of gas-phase molecules. A self-consistent set of heats of formation for molecules in the Si-H, Si-H-Cl, Si-H-F, Si-N-H and Si-N-H-F systems is presented, along with preliminary values for some Si-O-C-H species.
A New Charge Model in The Valence Force Field Model for Phonon Calculations
NASA Astrophysics Data System (ADS)
Barrett, Christopher; Wang, Lin-Wang
2013-03-01
The classical ball and spring Valence Force Field model is useful to determine the elastic relaxation of thousand-atom nanosystems. We have also used it to calculate the phonon spectra of nanosystems. However, we found that the conventional point charge model in the Valence Force Field model can cause artificial instability in nanostructures. In this talk, we will present a new charge model which represents the electron cloud feature of the Born charge in a real crystal. More specifically, we have two opposite-signed point charges assigned to each atom, one at its real position, another at a position determined by its neighbor atoms. This innovation allows both electrostatic charges and Born charges to be accurately represented while retaining extreme efficiency. This customized VFF method is developed to be fittable to the results of density functional theory (DFT) calculation. We will present the results of CdSe bulk, surface, and nanowire calculations and compare them with the equivalent ab-initio calculations, for both in their accuracies and their costs. This work is supported by U.S. Department of Energy BES, Office of Science, under Contract No. DE-AC02-05CH11231.
Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuatlon tlmes for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
A stirling engine computer model for performance calculations
NASA Technical Reports Server (NTRS)
Tew, R.; Jefferies, K.; Miao, D.
1978-01-01
To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
An assessment of artificial damping models for aeroacoustic calculations
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham
1995-01-01
We present a study of the effect of artificial dissipation models on nonlinear wave computations using a few high order schemes. Our motivation is to assess the effectiveness of artificial dissipation models for their suitability for aeroacoustic computations. We solve three model problems in one dimension using the Euler equations. Initial conditions are chosen to generate nonlinear waves in the computational domain. We examine various dissipation models in central difference schemes such as the Dispersion Relation Preserving (DRP) scheme and the standard fourth and sixth order schemes. We also make a similar study with the fourth order MacCormack scheme due to Gottieb and Turkel.
Calculation of the Aerodynamic Behavior of the Tilt Rotor Aeroacoustic Model (TRAM) in the DNW
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V- 22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance and airloads for helicopter mode operation, as well as calculated induced and profile power. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
The uncertainty in ozone calculations by a stratospheric photochemistry model
NASA Technical Reports Server (NTRS)
Butler, D. M.
1978-01-01
At present, there is an apparent conflict between one-dimensional stratospheric photochemistry models used in predicting ozone depletion and average data for stratospheric ozone. This conflict is in three particulars - column density of O3 and ozone density in the regions around 30 km and 50 km altitude. A study of the sensitivity of one such model to the values of reaction rates, boundary conditions, solar intensities, photolysis cross sections, and O(1D) yield parameters reveals that even at the 2-sigma uncertainty limit due to these input parameters, the model does not overlap the data for O3 density at 30 km and 50 km. The data is outside the 1-sigma model uncertainty limit for O3 column density. The study also shows the relative contribution of the various parameters studied to the imprecision in these model results.
Mathematical model partitioning and packing for parallel computer calculation
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
Mathematical model partitioning and packing for parallel computer calculation
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. Dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons at four European cities. The sensitivity of SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. fractional bias are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and that of a widely used operational dispersion model (OSPM), both using the same input information.
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The operation of solidification model 1 is described. Model 1 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of horizontal axisymmetric bidirectional solidification. The calculation is restricted to steady-state solidification; there is no variation in final local average composition in the direction of isotherm movement. The physics of the model are given.
Improved Dielectric Solvation Model for Electronic Structure Calculations
Chipman, Daniel M.
2015-12-16
This project was originally funded for the three year period from 09/01/2009 to 08/31/2012. Subsequently a No-Cost Extension was approved for a revised end date of 11/30/2013. The primary goals of the project were to develop continuum solvation models for nondielectric short-range interactions between solvent and solute that arise from dispersion, exchange, and hydrogen bonding. These goals were accomplished and are reported in the five peer-reviewed journal publications listed in the bibliography below. The secondary goals of the project included derivation of analytic gradients for the models, improvement of the cavity integration scheme, application of the models to the core-level spectroscopy of water, and several other miscellaneous items. These goals were not accomplished because they depended on completion of the primary goals, after which there was a lack of time for any additional effort.
Implementation of Minimal Representations in 2d Ising Model Calculations
1992-05-01
Re r’ u. 60:252-262.263-276. 1941. [Ons44] Lars Onsager . Crystal statistics I. A two-dimensional model with an order-disorder transition. Physical Re...ID lattices but the subject really came to life in 1944 when Onsager [Ons44] derived an exact closed form expression for the partition function (see
Fluorescein as a model molecular calculator with reset capability.
Margulies, David; Melman, Galina; Shanzer, Abraham
2005-10-01
The evolution of molecules capable of performing boolean operations has gone a long way since the inception of the first molecular AND logic gate, followed by other logic functions, such as XOR and INHIBIT, and has reached the stage where these tiny processors execute arithmetic calculations. Molecular logic gates that process a variety of chemical inputs can now be loaded with arrays of logic functions, enabling even a single molecular species to execute distinct algebraic operations: addition and subtraction. However, unlike electronic or optical signals, the accumulation of chemical inputs prevents chemical arithmetic systems from resetting. Consequently, a set of solutions is required to complete even the simplest arithmetic cycle. It has been suggested that these limitations can be overcome by washing off the input signals from solid supports. An alternative approach, which does not require solvent exchange or incorporation of bulk surfaces, is to reset the arithmetic system chemically. Ultimately, this is how some biological systems regenerate. Here we report a highly efficient and exceptionally simple molecular arithmetic system based on a plain fluorescein dye, capable of performing a full scale of elementary addition and subtraction algebraic operations. This system can be reset following each separate arithmetic step. The ability to selectively eradicate chemical inputs brings us closer to the realization of chemical computation.
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Satellite Skin-Force Modelling for Atmospheric Drag Calculations
NASA Astrophysics Data System (ADS)
Maat, Matthias
2010-03-01
Satellites in low Earth orbits are influenced by the Earth’s atmosphere. The interactions between the molecules and the spacecraft cause the highest non-gravitational force, which in magnitude is comparable to planetary disturbances. Therefore the modelling of atmospheric drag effects is important for many missions with a scientific background like STEP (Satellite Test of Equivalence Principle). With the STEP mission variations between gravitational and inertial mass shall be measured with an accuracy of 10-18. The results are of great interest for cosmological and gravitational theories. To achieve the aimed accuracy, a precise model of external disturbances is necessary. In this article the method of Ray-Tracing is used to quantify the atmospheric drag forces and torques for spacecrafts of arbitrary shape.
Monte Carlo calculations of the finite density Thirring model
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.
2017-01-01
We present results of the numerical simulation of the two-dimensional Thirring model at finite density and temperature. The severe sign problem is dealt with by deforming the domain of integration into complex field space. This is the first example where a fermionic sign problem is solved in a quantum field theory by using the holomorphic gradient flow approach, a generalization of the Lefschetz thimble method.
An Effective Stress Model for Ground Motion Calculations
1979-09-01
which simulate, respectively, the drained and undrained stress-strain behavior of earth media, and whose coefficients can all be determined... BEHAVIOR OF SOIL 13 2.1 EFFECTIVE STRESS CONCEPT 13 2.2 MECHANICAL BEHAVIOR OF SOIL 15 CHAPTER 3 ELASTIC-PLASTIC CONSTITUTIVE MODEL 20 3.1...ELASTIC BEHAVIOR 20 3.2 PLASTIC BEHAVIOR 22 3.3 SUMMARY 2k CHAPTER U THE TREATMENT OF A MULTIPHASE SYSTEM 31 CHAPTER 5 BEHAVIOR OF THE MULTIPHASE
Suomi NPP VIIRS Striping Analysis using Radiative Transfer Model Calculations
NASA Astrophysics Data System (ADS)
Wang, Z.; Cao, C.
2015-12-01
Modern satellite radiometers such as VIIRS have many detectors with slightly different relative spectral response (RSR). These differences can introduce artifacts such as striping in the imagery. In recent studies we have analyzed the striping pattern related to the detector level RSR difference in VIIRS Thermal Emissive Bands (TEB) M15 and M16, which includes line-by-line radiative transfer model (LBLRTM) detector level response study and onboard detector stability evaluation using the solar diffuser. Now we extend these analysis to the Reflective Solar Bands (RSB) using MODTRAN atmospheric radiative transfer model (RTM) for detector level radiance simulation. Previous studies analyzed the striping pattern in the images of VIIRS ocean color and reflectance in RSB, further studies about the root cause for striping are still needed. In this study, we will use the MODTRAN model at spectral resolution of 1 cm^-1 under different atmospheric conditions for VIIRS RSB, for example band M1 centered at 410nm which is used for Ocean Color product retrieval. The impact of detector level RSR difference, atmospheric dependency, and solar geometry on the striping in VIIRS SDR imagery will be investigated. The cumulative histogram method used successfully for the TEB striping analysis will be used to quantify the striping. These analysis help S-NPP and J1 to better understand the root cause for VIIRS image artifacts and reduce the uncertainties in geophysical retrievals to meet the user needs.
Some atmospheric scattering considerations relevant to BATSE: A model calculation
NASA Technical Reports Server (NTRS)
Young, John H.
1986-01-01
The orbiting Burst and Transient Source Experiement (BATSE) will locate gamma ray burst sources by analysis of the relative numbers of photons coming directly from a source and entering its prescribed array of detectors. In order to accurately locate burst sources it is thus necessary to identify and correct for any counts contributed by events other than direct entry by a mainstream photon. An effort is described which estimates the photon numbers which might be scattered into the BATSE detectors from interactions with the Earth atmosphere. A model was developed which yielded analytical expressions for single-scatter photon contributions in terms of source and satellite locations.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation requires accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. c2002 COSPAR. Published by Elsevier Science Ltd. All right reserved.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation requires accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. c2002 COSPAR. Published by Elsevier Science Ltd. All right reserved.
Starting Models in FLASH for Calculations of Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Lamb, D. Q.; Caceres, A.; Calder, A. C.; Dursi, L. J.; Fryxell, B.; MacNeice, P.; Olson, K.; Plewa, T.; Ricker, P.; Riley, K.; Rosner, R.; Siegel, A.; Timmes, F. X.; Truran, J. W.; Vladimirova, N.; Wiers, G.; Zingale, M.
2003-05-01
Type Ia supernovae are thought to be the result of a thermonuclear explosion in a white dwarf that is approaching the Chandrasekhar mass limit. The properties of the supernova explosion, including its energy, depends significantly on the way in which the thermonuclear runaway begins. Where in the white dwarf ignition takes place, and how many ignition points there are, are important unsolved questions. We discuss the challenges of modeling Type Ia supernova during the several hours before thermonuclear runaway using the FLASH code. In three-dimensional hydrodynamic codes, the pre-supernova white dwarf can exhibit ``ringing'' at the fundamental frequency of the star that is driven by numerical noise. These solutions manifest themselves as undamped velocity waves (the white dwarf "breathes in and out") that reach peak amplitudes of about 200 km s-1. We show the results of several methods aimed at reducing the amplitude of these undamped waves in FLASH. We also discuss some of our experiments in mapping spherically symmetric models, which suggest large scale convective motions of 50 km s-1 a few hours prior to ignition, onto a three-dimensional mesh. This work was supported in part by the DOE under the ASCI/Alliance program.
RADIOGRAPHIC BENCHMARK PROBLEM 2009 - SCATTER CALCULATIONS IN MODELLING
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2010-02-22
Code Validation is a permanent concern in computer simulation, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid that compound errors compensate themselves, yields a quantitative result and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radio-graphic modelling, the scattered radiation prediction. An update of the results of the 2008 benchmark is presented. Additionally we discuss the extension of this benchmark on the lower energy part for 60 and 80 keV as well as for higher energies up to 10 MeV to study the contribution of pair production. Of special interest will be the primary radiation (attenuation law as reference), the total scattered radiation, the relative contribution of scattered radiation separated by order of scatter events (1st, 2nd, ..., 20th), and the spectrum of scattered radiation. We present the results of three Monte Carlo codes (MC-Ray, Sindbad and Moderato) as well as an analytical first order scattering code (VXI) and compare to MCNP as reference.
Modelling of power lines in lightning incidence calculations
Mousa, A.M. ); Srivastava, K.D. )
1990-01-01
When applying the electrogeometric model to power lines to determine the frequency and characteristics of the collected lightning strokes, the power line has traditionally been represented by a set of horizontal wires, i.e. both the sag of the wires and the existence of the towers have been ignored. This approach has serious shortcomings including inability to determine the percentage of the strokes terminating on the towers, failure to correctly predict the effect of height on median current, and giving an approximate value for the number of collected strokes without telling the corresponding degree of error. This paper eliminates the above problems by presenting a computerized solution which takes into consideration the sag of the wires, the existence of the towers, and the inequality of the striking distances to towers and to wires. The features of the program are discussed in the paper, and some of its results are given.
Evaluating Models for Calculating Sub-Debris Ice Ablation
NASA Astrophysics Data System (ADS)
Nicholson, L. I.
2014-12-01
Debris-covered glaciers are a common feature of the mountain cryosphere, and the proportion of glacierized area that is debris-covered is increasing in many regions. Thus, in order to generate decadal or centennial projections of runoff and mass change of these glaciers it is important to be able to quantify the impact of surface debris on ice ablation. While exposed ice cliffs, ponded water and fluvial action are all important contributors to ablation within debris-covered areas, the focus here is on assessing the performance of a number of models of sub-debris melt of varying complexity. The goal is to determine the impact simplifying assumptions can be expected to have on glacier ablation over seasonal, annual and decadal timescales.
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
Smith, F.
2016-10-31
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.
Multiscale modeling approach for calculating grain-boundary energies from first principles
Shenderova, O.A.; Brenner, D.W.; Nazarov, A.A.; Romanov, A.E.; Yang, L.H.
1998-02-01
A multiscale modeling approach is proposed for calculating energies of tilt-grain boundaries in covalent materials from first principles over an entire misorientation range for given tilt axes. The method uses energies from density-functional calculations for a few key structures as input into a disclination structural-units model. This approach is demonstrated by calculating energies of {l_angle}001{r_angle}-symmetrical tilt-grain boundaries in diamond. {copyright} {ital 1998} {ital The American Physical Society}
A model for calculating expected performance of the Apollo unified S-band (USB) communication system
NASA Technical Reports Server (NTRS)
Schroeder, N. W.
1971-01-01
A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.
Efficient distance calculation using the spherically-extended polytope (s-tope) model
NASA Technical Reports Server (NTRS)
Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep
1991-01-01
An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.
Sensitivity of Material Response Calculations to the Equation of State Model
equation of state model. Three equation of state models, all...sources. The sensitivity of the calculated material response to the choice of equation of state model is characterized in terms of the generated impulse...and the peak propagating stress at the time the radiation source is cut off. For the calculations presented in this report, the three equation of state models are in fairly good
NASA Astrophysics Data System (ADS)
Preobrazhenskii, M. P.; Rudakov, O. B.
2016-01-01
A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the model proposed were calculated for a series of solutions. The correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter value of the proposed model is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.
Model Calculations of Ocean Acidification at the End Cretaceous
NASA Astrophysics Data System (ADS)
Tyrrell, T.; Merico, A.; Armstrong McKay, D. I.
2014-12-01
Most episodes of ocean acidification (OA) in Earth's past were either too slow or too minor to provide useful lessons for understanding the present. The end-Cretaceous event (66 Mya) is special in this sense, both because of its rapid onset and also because many calcifying species (including 100% of ammonites and >95% of calcareous nannoplankton and planktonic foraminifera) went extinct at this time. We used box models of the ocean carbon cycle to evaluate whether impact-generated OA could feasibly have been responsible for the calcifier mass extinctions. We simulated several proposed consequences of the asteroid impact: (1) vaporisation of gypsum (CaSO4) and carbonate (CaCO3) rocks at the point of impact, producing sulphuric acid and CO2 respectively; (2) generation of NOx by the impact pressure wave and other sources, producing nitric acid; (3) release of CO2 from wildfires, biomass decay and disinterring of fossil organic carbon and hydrocarbons; and (4) ocean stirring leading to introduction into the surface layer of deep water with elevated CO2. We simulated additions over: (A) a few years (e-folding time of 6 months), and also (B) a few days (e-folding time of 10 hours) for SO4 and NOx, as recently proposed by Ohno et al (2014. Nature Geoscience, 7:279-282). Sulphuric acid as a consequence of gypsum vaporisation was found to be the most important acidifying process. Results will also be presented of the amounts of SO4 required to make the surface ocean become extremely undersaturated (Ωcalcite<0.5) for different e-folding times and combinations of processes. These will be compared to estimates in the literature of how much SO4 was actually released.
Comparison of Hugoniots calculated for aluminum in the framework of three quantum-statistical models
NASA Astrophysics Data System (ADS)
Kadatskiy, M. A.; Khishchenko, K. V.
2015-11-01
The results of calculations of thermodynamic properties of aluminum under shock compression in the framework of the Thomas-Fermi model, the Thomas-Fermi model with quantum and exchange corrections and the Hartree-Fock-Slater model are presented. The influences of the thermal motion and the interaction of ions are taken into account in the framework of three models: the ideal gas, the one-component plasma and the charged hard spheres. Calculations are performed in the pressure range from 1 to 107 GPa. Calculated Hugoniots are compared with available experimental data.
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on time domain are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivatives of the inverse calculation have been respectively established on the basis of different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x) and different data sets of E(x) and K(x) hardly affect the precision of the models.
A simple model to calculate when drought causes plants to die
NASA Astrophysics Data System (ADS)
Schultz, Colin
2014-10-01
To build a realistic ecosystem model requires, among other things, designing a way to calculate when plants will persevere through hostile conditions and when they will wither and die. Using a simplified model of trees' internal hydrological systems, Manzoni et al. have built a model meant to calculate when changes in environmental conditions will cause a tree stress and when they will push it toward critical failure.
Online calculators for geomagnetic models at the National Geophysical Data Center
NASA Astrophysics Data System (ADS)
Ford, J. P.; Nair, M.; Maus, S.; McLean, S. J.
2009-12-01
NOAA’s National Geophysical Data Center at Boulder provides online calculators for geomagnetic field models. These models provide current and past values of the geomagnetic field on regional and global spatial scales. These calculators are popular among scientists, engineers and the general public across the world as a resource to compute geomagnetic field elements. We regularly update both the web interfaces and the underlying geomagnetic models. We have four different calculators to compute geomagnetic fields for different user applications. The declination calculators optionally use our World Magnetic Model (WMM) or the International Geomagnetic Reference Field (IGRF) to provide geomagnetic declination as well as its annual rate of change for the chosen location. All seven magnetic field components for a single day or for a range of years from 1900-present can obtained using our Magnetic Field Calculator IGRFWMM. Users can also compute magnetic field values (current and past) over an area using the IGRFGrid calculator. The USHistoric calculator uses a US declination model to compute the declination for the conterminous US from 1750 - present (data permitting). All calculators allow the user to enter the location either as a Zip Code or by specifying the geographic latitude and longitude.
Sensitivity of p-nuclei abundance calculations to statistical model parameters
NASA Astrophysics Data System (ADS)
Roach, Brandon; Simon, Anna
2017-01-01
Many reactions relevant to astrophysics involve nuclei far from stability, and their cross sections must therefore be calculated numerically for input into large-scale stellar nucleosynthesis calculations. Recent work, especially regarding p-process nucleosynthesis, has shown that the observed astrophysical abundances of certain nuclides differ by almost a factor of 10 from those predicted by network calculations using accepted reaction rates. Additionally, significant differences between calculated abundances when using different versions of these rates have been obtained. We therefore present the abundances of p-nuclei calculated using the open-source NucNet Tools code for a 25 solar mass type II supernova model, incorporating reaction cross sections calculated using the statistical-model code TALYS using several α optical potentials and γ-strength functions. This work is supported by the NSF under Grant Numbers PHY-1614442 and PHY-1430152 (JINA-CEE).
S-values calculated from a tomographic head/brain model for brain imaging
NASA Astrophysics Data System (ADS)
Chao, Tsi-chian; Xu, X. George
2004-11-01
A tomographic head/brain model was developed from the Visible Human images and used to calculate S-values for brain imaging procedures. This model contains 15 segmented sub-regions including caudate nucleus, cerebellum, cerebral cortex, cerebral white matter, corpus callosum, eyes, lateral ventricles, lenses, lentiform nucleus, optic chiasma, optic nerve, pons and middle cerebellar peduncle, skull CSF, thalamus and thyroid. S-values for C-11, O-15, F-18, Tc-99m and I-123 have been calculated using this model and a Monte Carlo code, EGS4. Comparison of the calculated S-values with those calculated from the MIRD (1999) stylized head/brain model shows significant differences. In many cases, the stylized head/brain model resulted in smaller S-values (as much as 88%), suggesting that the doses to a specific patient similar to the Visible Man could have been underestimated using the existing clinical dosimetry.
Rapid calculation of terrain parameters for radiation modeling from digital elevation data
NASA Technical Reports Server (NTRS)
Dozier, Jeff; Frew, James
1990-01-01
Digital elevation models are now widely used to calculate terrain parameters to determine incoming solar and longwave radiation for use in surface climate models, interpretation of remote-sensing data, and parameters in hydrologic models. Because of the large number of points in an elevation grid, fast algorithms are useful to save computation time. A description is given of rapid methods for calculating slope and azimuth, solar illumination angle, horizons, and view factors for radiation from sky and terrain. Calculation time is reduced by fast algorithms and lookup tables.
Model-Independent Calculation of Radiative Neutron Capture on Lithium-7
Rupak, Gautam; Higa, Renato
2011-06-03
The radiative neutron capture on lithium-7 is calculated model independently using a low-energy halo effective field theory. The cross section is expressed in terms of scattering parameters directly related to the S-matrix elements. It depends on the poorly known p-wave effective range parameter r{sub 1}. This constitutes the largest uncertainty in traditional model calculations. It is explicitly demonstrated by comparing with potential model calculations. A single parameter fit describes the low-energy data extremely well and yields r{sub 1{approx_equal}}-1.47 fm{sup -1}.
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel...
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy...
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 0.25-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance for hover and helicopter mode operation, and airloads for helicopter mode. Calculated induced power, profile power, and wake geometry provide additional information about the aerodynamic behavior. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The software developed for the solidification model is presented. A link between the calculations and the FORTRAN code is provided, primarily in the form of global flow diagrams and data structures. A complete listing of the solidification code is given.
Fast and accurate calculation of dilute quantum gas using Uehling-Uhlenbeck model equation
NASA Astrophysics Data System (ADS)
Yano, Ryosuke
2017-02-01
The Uehling-Uhlenbeck (U-U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U-U model equation. DSMC analysis based on the U-U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U-U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green-Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later Model...
Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes
NASA Astrophysics Data System (ADS)
Qi, Chong; Jia, L. Y.; Fu, G. J.
2016-07-01
Large-scale shell-model calculations are carried out in the model space including neutron-hole orbitals 2 p1 /2 ,1 f5 /2 ,2 p3 /2 ,0 i13 /2 ,1 f7 /2 , and 0 h9 /2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in isotopes Pb-206194. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configuration in this region.
Reliability and structural integrity. [analytical model for calculating crack detection probability
NASA Technical Reports Server (NTRS)
Davidson, J. R.
1973-01-01
An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
Difficult Budgetary Decisions: A Desk-Top Calculator Model to Facilitate Executive Decisions.
ERIC Educational Resources Information Center
Tweddale, R. Bruce
Presented is a budgetary decision model developed to aid the executive officers in arriving at tentative decisions on enrollment, tuition rates, increased compensation, and level of staffing as they affect the total institutional budget. The model utilizes a desk-top programmable calculator (in this case, a Burroughs Model C 3660). The model…
NASA Astrophysics Data System (ADS)
Romashets, E.; Huang, T.
2006-12-01
Using the experimental magnetic field and the newly defined Eular Potentials, we upgraded Prairie View Magnetosphere-ionosphere Coupling Model that was originally created in the frame of IGRF. The electric fields in the ionosphere and the field-aligned currents in the magnetosphere are calculated with the upgraded magnetosphere-ionosphere coupling model, and a preliminary comparison of the calculations with the measurements from ST5 will be presented.
Analytical approach to calculation of response spectra from seismological models of ground motion
Safak, Erdal
1988-01-01
An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.
Optimization of Couch Modeling in the Change of Dose Calculation Methods and Their Versions.
Kuwahara, Junichi; Nakata, Manabu; Fujimoto, Takahiro; Nakamura, Mitsuhiro; Sasaki, Makoto; Tsuruta, Yusuke; Yano, Shinsuke; Higashimura, Kyoji; Hiraoka, Masahiro
2017-01-01
In external radiotherapy, the X-ray beam passes through the treatment couch, leading to the dose reduction by the attenuation of the couch. As a method to compensate for the reduction, radiation treatment planning systems (RTPS) support virtual couch function, namely "couch modeling method". In the couch modeling method, the computed tomography (CT) numbers assigned to each structure should be optimized by comparing calculations to measurements for accurate dose calculation. Thus, re-optimization of CT numbers will be required when the dose calculation algorithm or their version changes. The purpose of this study is to evaluate the calculation accuracy of the couch modeling method in different calculation algorithms and their versions. The optimal CT numbers were determined by minimizing the difference between measured transmission factors and calculated ones. When CT numbers optimized by Anisotropic Analytical Algorithm (AAA) Ver. 8.6 were used, the maximum and the mean difference of transmission factor were 5.8% and 1.5%, respectively, for Acuros XB (AXB) Ver. 11.0. However, when CT numbers optimized by AXB Ver. 11.0 were used, they were 2.6% and 0.6%, respectively. The CT numbers for couch structures should be optimized when changing dose calculation algorithms and their versions. From the comparison of the measured transmission to calculation, it was found that the CT numbers had high accuracy.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on the Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by subdomain technique based on the Maxwell's equation. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, less computation source occupying and shorter time consuming, and meanwhile achieves the approximate accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.
Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model
Kawano, Toshihiko; Moller, Peter; Wilson, William B
2008-01-01
Theoretical {beta}-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after {beta} decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Numerical calculations of gaseous reacting flows in a model of gas turbine combustors
NASA Astrophysics Data System (ADS)
Yan, Chuanjun; Tang, Ming; Zhu, Huiling; Sun, Huixian
1991-02-01
The numerical calculations of gaseous reaction flows in a model of gas-turbine combustors are described. The profiles of hydrodynamic and thermodynamic patterns in a 3D combustor model are obtained by solving the governing differential transport equations. The well-established numerical prediction algorithm SIMPLE; a modified turbulence model, and a turbulent diffusion flame model are adopted in the computations. The beta-function is selected as the probability density function. The effect of the combustion process on flow patterns is investigated. The calculated results are verified by experiments, and are in good agreement.
Verification of smoke plume opacity model on a TI-59 calculator
NASA Astrophysics Data System (ADS)
Cowen, Stanton J.; Ensor, David S.; Sparks, Leslie E.
A smoke plume opacity model previously developed by the authors for a programmable calculator was tested with laboratory and field data. This model is based on Mie theory calculation of light scattering of small spherical particles. This model has been tested with two data sets; the first data set includes emissions from a laboratory-scale electrostatic precipitator (ESP) and the second data set originates from a study on a high efficiency ESP downstream of a pulverized coal-fired 520 MW boiler. The predicted opacity agreed well with measured opacity. The model predicts opacity within the accuracy of the input measurements, approximately 25 per cent.
Yoon, Jihyung; Jung, Jae Won; Kim, Jong Oh; Yeo, Inhwan
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of electronic portal imaging device (EPID) based on its effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering effective atomic number (Z{sub eff}) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurement in the plane of EPID. Results: In the EPID plane, calculated dose to water by the code showed agreement with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to penumbra and MC noise. The EPID model showed agreement with measured EPID images within 1.3%. The maximum point difference was 1.9%. The difference dropped from the higher value of the code by employing the calibration that is dependent on field sizes and thicknesses for the conversion of calculated images to measured images. Thanks to the Z{sub eff} correction, the EPID model showed a linear trend of the calibration factors unlike those of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and their performance has been rigorously
Simoncini, David; Nakata, Hiroya; Ogata, Koji; Nakamura, Shinichiro; Zhang, Kam Yj
2015-02-01
Protein structure prediction directly from sequences is a very challenging problem in computational biology. One of the most successful approaches employs stochastic conformational sampling to search an empirically derived energy function landscape for the global energy minimum state. Due to the errors in the empirically derived energy function, the lowest energy conformation may not be the best model. We have evaluated the use of energy calculated by the fragment molecular orbital method (FMO energy) to assess the quality of predicted models and its ability to identify the best model among an ensemble of predicted models. The fragment molecular orbital method implemented in GAMESS was used to calculate the FMO energy of predicted models. When tested on eight protein targets, we found that the model ranking based on FMO energies is better than that based on empirically derived energies when there is sufficient diversity among these models. This model diversity can be estimated prior to the FMO energy calculations. Our result demonstrates that the FMO energy calculated by the fragment molecular orbital method is a practical and promising measure for the assessment of protein model quality and the selection of the best protein model among many generated.
Thomas-Fermi Quark Model and Techniques to Improve Lattice QCD Calculation
NASA Astrophysics Data System (ADS)
Liu, Quan
Two topics are discussed separately in this thesis. In the first part a semiclassical quark model, called the Thomas-Fermi quark model, is reviewed. After a modified approach to spin in the model is introduced, I present the calculation of the spectra of octet and decuplet baryons. The six-quark doubly strange H-dibaryon state is also investigated. In the second part, two numerical techniques which improve latice QCD calculations are covered. The first one, which we call Polynomial-Preconditioned GMRES-DR(PP-GMRESDR), is used to speed up the calculation of large systems of linear equations in LQCD. The second one, called the Polynomial-Subtraction method, is used to help reduce the noise variance of the calculations for disconnected loops in LQCD.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
Paper presents the results of experimental and numerical research of a model segment of a labyrinth seal for a different wear level. The analysis covers the extent of leakage and distribution of static pressure in the seal chambers and the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh defined by parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and distribution of static pressure in the seal chambers obtained during the measurement and calculated numerically in a model segment of the seal of different level of wear.
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1993-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.
The calculation of theoretical chromospheric models and the interpretation of the solar spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1994-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.
The Calculation of Theoretical Chromospheric Models and the Interpretation of the Solar Spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1998-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Rurucz on the detailed synthesis of the solar spectrum based on opacity data or over 58 million atomic and molecular lines. our goals are: to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.
Triaxial superdeformed and normal-deformed high-spin band structures in {sup 170}Hf
Neusser-Neffgen, A.; Huebel, H.; Bringel, P.; Domscheit, J.; Mergel, E.; Nenoff, N.; Singh, A.K.; Hagemann, G.B.; Jensen, D.R.; Bhattacharya, S.; Curien, D.; Dorvaux, O.; Hannachi, F.; Lopez-Martens, A.
2006-03-15
The high-spin structure of {sup 170}Hf was investigated using the EUROBALL spectrometer. The previously known level scheme was extended in the low-spin region as well as to higher spins, and several new bands were discovered. In particular, two bands were identified which show the characteristics of triaxial superdeformation. One of these bands is strongly populated, and its excitation energy and spins are established. Configuration assignments are made to the normal-deformed bands based on comparisons of their properties with cranked shell model calculations. The results for the very high spin states provide important input for such calculations.
Multi-Scale Thermohydrologic Model Sensitivity-Study Calculations in Support of the SSPA
Glascoe, L G; Buscheck, T A; Loosmore, G A; Sun, Y
2001-12-20
The purpose of this calculation report is to document the thermohydrologic (TH) model calculations performed for the Supplemental Science and Performance Analysis (SSPA), Volume 1, Section 5 and Volume 2 (BSC 2001d [DIRS 155950], BSC 2001e [DIRS 154659]). The calculations are documented here in accordance with AP-3.12Q REV0 ICN4 [DIRS 154418]. The Technical Working Plan (Twp) for this document is TWP-NGRM-MD-000015 Real. These TH calculations were primarily conducted using three model types: (1) the Multiscale Thermohydrologic (MSTH) model, (2) the line-averaged-heat-source, drift-scale thermohydrologic (LDTH) model, and (3) the discrete-heat-source, drift-scale thermal (DDT) model. These TH-model calculations were conducted to improve the implementation of the scientific conceptual model, quantify previously unquantified uncertainties, and evaluate how a lower-temperature operating mode (LTOM) would affect the in-drift TH environment. Simulations for the higher-temperature operating mode (HTOM), which is similar to the base case analyzed for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) (CRWMS M&O 2000j [DIRS 153246]), were also conducted for comparison with the LTOM. This Calculation Report describes (1) the improvements to the MSTH model that were implemented to reduce model uncertainty and to facilitate model validation, and (2) the sensitivity analyses conducted to better understand the influence of parameter and process uncertainty. The METHOD Section (Section 2) describes the improvements to the MSTH-model methodology and submodels. The ASSUMPTIONS Section (Section 3) lists the assumptions made (e.g., boundaries, material properties) for this methodology. The USE OF SOFTWARE Section (Section 4) lists the software, routines and macros used for the MSTH model and submodels supporting the SSPA. The CALCULATION Section (Section 5) lists the data used in the model and the manner in which the MSTH model is prepared and executed. And
Hot DA white dwarf model atmosphere calculations: including improved Ni PI cross-sections
NASA Astrophysics Data System (ADS)
Preval, S. P.; Barstow, M. A.; Badnell, N. R.; Hubeny, I.; Holberg, J. B.
2017-02-01
To calculate realistic models of objects with Ni in their atmospheres, accurate atomic data for the relevant ionization stages need to be included in model atmosphere calculations. In the context of white dwarf stars, we investigate the effect of changing the Ni IV-VI bound-bound and bound-free atomic data on model atmosphere calculations. Models including photoionization cross-section (PICS) calculated with AUTOSTRUCTURE show significant flux attenuation of up to ˜80 per cent shortward of 180 Å in the extreme ultraviolet (EUV) region compared to a model using hydrogenic PICS. Comparatively, models including a larger set of Ni transitions left the EUV, UV, and optical continua unaffected. We use models calculated with permutations of these atomic data to test for potential changes to measured metal abundances of the hot DA white dwarf G191-B2B. Models including AUTOSTRUCTURE PICS were found to change the abundances of N and O by as much as ˜22 per cent compared to models using hydrogenic PICS, but heavier species were relatively unaffected. Models including AUTOSTRUCTURE PICS caused the abundances of N/O IV and V to diverge. This is because the increased opacity in the AUTOSTRUCTURE PICS model causes these charge states to form higher in the atmosphere, more so for N/O V. Models using an extended line list caused significant changes to the Ni IV-V abundances. While both PICS and an extended line list cause changes in both synthetic spectra and measured abundances, the biggest changes are caused by using AUTOSTRUCTURE PICS for Ni.
Li, Feifei; Park, Ji-Yeon; Barraclough, Brendan; Lu, Bo; Li, Jonathan; Liu, Chihray; Yan, Guanghua
2017-03-01
The aim of this study is to perform a direct comparison of the source model for photon beams with and without flattening filter (FF) and to develop an efficient independent algorithm for planar dose calculation for FF-free (FFF) intensity-modulated radiotherapy (IMRT) quality assurance (QA). The source model consisted of a point source modeling the primary photons and extrafocal bivariate Gaussian functions modeling the head scatter, monitor chamber backscatter, and collimator exchange effect. The model parameters were obtained by minimizing the difference between the calculated and measured in-air output factors (Sc ). The fluence of IMRT beams was calculated from the source model using a backprojection and integration method. The off-axis ratio in FFF beams were modeled with a fourth degree polynomial. An analytical kernel consisting of the sum of three Gaussian functions was used to describe the dose deposition process. A convolution-based method was used to account for the ionization chamber volume averaging effect when commissioning the algorithm. The algorithm was validated by comparing the calculated planar dose distributions of FFF head-and-neck IMRT plans with measurements performed with a 2D diode array. Good agreement between the measured and calculated Sc was achieved for both FF beams (<0.25%) and FFF beams (<0.10%). The relative contribution of the head-scattered photons reduced by 34.7% for 6 MV and 49.3% for 10 MV due to the removal of the FF. Superior agreement between the calculated and measured dose distribution was also achieved for FFF IMRT. In the gamma comparison with a 2%/2 mm criterion, the average passing rate was 96.2 ± 1.9% for 6 MV FFF and 95.5 ± 2.6% for 10 MV FFF. The efficient independent planar dose calculation algorithm is easy to implement and can be valuable in FFF IMRT QA.
Calculation of individual isotope equilibrium constants for implementation in geochemical models
Thorstenson, Donald C.; Parkhurst, David L.
2002-01-01
Theory is derived from the work of Urey to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by , where is n the number of atoms exchanged. This relation is extended to include species containing multiple isotopes, for example and , and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for all species that can be formed from and selected species containing , in the molecules and the ion pairs with where the subscripts g, aq, l, and s refer to gas, aqueous, liquid, and solid, respectively. These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculation of the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases.
Quantitative calculation model of dilution ratio based on reaching standard of water function zone
NASA Astrophysics Data System (ADS)
Du, Zhong; Dong, Zengchuan; Wu, Huixiu; Yang, Lin
2017-03-01
Dilution ratio is an important indicator of water quality assessment, and it’s difficult to calculate quantitatively. This paper proposed quantitative calculation model of dilution ratio based on the permissible pollution bearing capacity model of water function zone. The model contains three parameters of concentration. Particularly, the 1-D model has three river characteristics parameters in addition. Applications of the model are based on the national standard of wastewater discharge concentration and reaching standard concentration. The results show the inverse correlation between the dilution ratio and the C P and C 0, and the positive correlation with C s . The quantitative maximum control standard of dilution ratio is 12.50% by 0-D model and 22.96% by 1-D model. Moreover, we propose to choose the minimum parameter and find the invalid pollution bearing capacity.
Deremigio, Hilary; Kemper, Peter; Lamar, M Drew; Smith, Gregory D
2008-01-01
Mathematical models of calcium release sites derived from Markov chain models of intracellular calcium channels exhibit collective gating reminiscent of the experimentally observed phenomenon of stochastic calcium excitability (i.e., calcium puffs and sparks). We present a Kronecker structured representation for calcium release site models and perform benchmark stationary distribution calculations using numerical iterative solution techniques that leverage this structure. In this context we find multi-level methods and certain preconditioned projection methods superior to simple Gauss-Seidel type iterations. Response measures such as the number of channels in a particular state converge more quickly using these numerical iterative methods than occupation measures calculated via Monte Carlo simulation.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS slice parametric model for single-shell tank SX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for the single shell tank (SST) SX, and provide a sample analysis of the SST-SX tank based on analysis of record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS FULL PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS parametric 360-degree model for single-shell tank SX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric full model for the single shell tank (SST) SX to deal with asymmetry loading conditions and provide a sample analysis of the SST-SX tank based on analysis of record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-BX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS slice parametric model for single-shell tank BX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for the single shell tank (SST) BX, and provide a sample analysis of the SST-BX tank based on analysis of record (AOR) loads. The SST-BX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-S
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS axisymmetric parametric model for single-shell tank S and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single shell tank (SST) S, and provide a sample analysis of SST-S tank based on analysis of record (AOR) loads. The SST-S model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS axisymmetric parametric model for single-shell tank SX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single shell tank (SST) SX, and provide a sample analysis of the SST-SX tank based on analysis of record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-S
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS slice parametric model for single-shell tank S and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for the single shell tank (SST) S, and provide a sample analysis of the SST-S tank based on analysis of record (AOR) loads. The SST-S model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-AX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS slice parametric model for single-shell tank AX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for the single shell tank (SST) AX, and provide a sample analysis of the SST-AX tank based on analysis of record (AOR) loads. The SST-AX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-AX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS axisymmetric parametric model for single-shell tank AX and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single shell tank (SST) AX, and provide a sample analysis of SST-AX tank based on analysis of record (AOR) loads. The SST-AX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-A
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS axisymmetric parametric model for single-shell tank A and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for single shell tank (SST) A, and provide a sample analysis of SST-A tank based on analysis of record (AOR) loads. The SST-A model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-A
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS slice parametric model for single-shell tank A and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model for the single shell tank (S) A, and provide a sample analysis of the SST-S tank based on analysis of record (AOR) loads. The SST-A model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
Benchmark calculation of no-core Monte Carlo shell model in light nuclei
Abe, T.; Shimizu, N.; Maris, P.; Vary, J. P.; Otsuka, T.; Utsuno, Y.
2011-05-06
The Monte Carlo shell model is firstly applied to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreements between them are within a few % at most.
ERIC Educational Resources Information Center
Polly, Drew
2008-01-01
American students continue to struggle on measures of student achievement. This study employed Hierarchical Linear Modeling to estimate a two-level model and examine the influences of calculator use and teachers' instructional practices on student achievement in mathematics amongst first-grade students. The outcome variable was mathematics test…
Dobos, A. P.
2012-05-01
This paper describes an improved algorithm for calculating the six parameters required by the California Energy Commission (CEC) photovoltaic (PV) Calculator module model. Rebate applications in California require results from the CEC PV model, and thus depend on an up-to-date database of module characteristics. Currently, adding new modules to the database requires calculating operational coefficients using a general purpose equation solver - a cumbersome process for the 300+ modules added on average every month. The combination of empirical regressions and heuristic methods presented herein achieve automated convergence for 99.87% of the 5487 modules in the CEC database and greatly enhance the accuracy and efficiency by which new modules can be characterized and approved for use. The added robustness also permits general purpose use of the CEC/6 parameter module model by modelers and system analysts when standard module specifications are known, even if the module does not exist in a preprocessed database.
NASA Astrophysics Data System (ADS)
Paranin, Y.; Burmistrov, A.; Salikeev, S.; Fomina, M.
2015-08-01
Basic propositions of calculation procedures for oil free scroll compressors characteristics are presented. It is shown that mathematical modelling of working process in a scroll compressor makes it possible to take into account such factors influencing the working process as heat and mass exchange, mechanical interaction in working chambers, leakage through slots, etc. The basic mathematical model may be supplemented by taking into account external heat exchange, elastic deformation of scrolls, inlet and outlet losses, etc. To evaluate the influence of procedure on scroll compressor characteristics calculations accuracy different calculations were carried out. Internal adiabatic efficiency was chosen as a comparative parameter which evaluates the perfection of internal thermodynamic and gas-dynamic compressor processes. Calculated characteristics are compared with experimental values obtained for the compressor pilot sample.
Schecher, W.D.; Driscoll, C.T.
1988-04-01
There is much concern over the effects of acidic deposition on soils and draining waters. To better understand the processes regulating the acidification of surface waters, computer models have been developed which utilize equilibrium calculations to predict the concentration of chemical parameters such as pH, acid neutralizing capacity, dissolved inorganic carbon, Al and SO/sub 4//sup 2 -/. A simple chemical equilibrium model (ALCHEMI) was used to evaluate the effect of uncertainty in the measurement of chemical constituents on thermodynamic calculations. For calculations where pH was not allowed to vary, uncertainty in Al speciation was small and largely due to imprecision in the measurement of total F and pH. When calculations were made from electroneutrality based on measured constituents, most of the uncertainty associated with the values of output parameters was due to imprecision in the determination of SO/sub 4//sup 2 -/.
Kaminski, George A.; Ponomarev, Sergei Y.; Liu, Aibing B.
2009-01-01
We are presenting POSSIM (POlarizable Simulations with Second order Interaction Model) – a software package and a set of parameters designed for molecular simulations. The key feature of POSSIM is that the electrostatic polarization is taken into account using a previously introduced fast formalism. This permits cutting computational cost of using the explicit polarization by about an order of magnitude. In this article, parameters for water, methane, ethane, propane, butane, methanol and NMA are introduced. These molecules are viewed as model systems for protein simulations. We have achieved our goal of ca. 0.5 kcal/mol accuracy for gas-phase dimerization energies and no more than 2% deviations in liquid state heats of vaporization and densities. Moreover, free energies of hydration of the polarizable methane, ethane and methanol have been calculated using the statistical perturbation theory. These calculations are a model for calculating protein pKa shifts and ligand binding affinities. The free energies of hydration were found to be 2.12 kcal/mol, 1.80 kcal/mol and −4.95 kcal/mol for methane, ethane and methanol, respectively. The experimentally determined literature values are 1.91 kcal/mol, 1.83 kcal/mol and −5.11 kcal/mol. The POSSIM average error in these absolute free energies of hydration is only about 0.13 kcal/mol. Using the statistical perturbation theory with polarizable force fields is not widespread, and we believe that this work opens road to further development of the POSSIM force field and its applications for obtaining accurate energies in protein-related computer modeling. PMID:20209038
NASA Astrophysics Data System (ADS)
de Santana, O. L.; da Gama, A. A. S.
1999-12-01
The Green's function formalism is applied for the calculation of the effective through-bond donor-acceptor coupling in model molecular systems. The calculation is performed at a Hartree-Fock (self-consistent) level, by using semiempirical AM1 and CNDO/S, and ab initio STO-3G methods. The results are compared with that obtained from the splitting of the appropriate levels, by using the Koopmans' theorem, within each one of the selected quantum chemical methods.
NASA Astrophysics Data System (ADS)
Li, Zhijie; Kenmotsu, Takahiro; Kawamura, Takaichi; Ono, Tadayoshi; Yamamura, Yasunori
1999-06-01
In order to test the availabilities of the theoretical screening lengths with the shell effect and the new local electronic-energy-loss model proposed by Yamamura et al., the sputtering yields due to various ion impacts on monatomic materials were calculated with the ACAT code. It is found that the sputtering yields calculated by the Molière potential with the present theoretical screening lengths are in reasonable good agreement with experimental data and Yamamura's empirical sputtering formula without free parameters.
Absorbing-sphere model for calculating ion-ion recombination total cross sections.
NASA Technical Reports Server (NTRS)
Olson, R. E.
1972-01-01
An 'absorbing-sphere' model based on the Landau-Zener method is set up for calculating the upper limit thermal energy (300 K) reaction rate and the energy dependence of the total cross sections. The crucial parameter needed for the calculation is the electron detachment energy for the outer electron on the anion. It is found that the cross sections increase with decreasing electron detachment energy.
2009-10-01
Beattie - Bridgeman Virial expansion The above equations are suitable for moderate pressures and are usually based on either empirical constants...CR 2010-013 October 2009 A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation...Defence R&D Canada. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation
NASA Astrophysics Data System (ADS)
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach with very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes as well as the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, due to its model physics, generally calculates larger separation distances. For worst-case calculations necessary with environmental impact assessment studies, the use of a Lagrangian model is therefore to be preferred over that of a Gaussian model. The study and findings relate to the Austrian odour impact criteria.
Evidence for a low-frequency. pi. h/sub 9/2/ band crossing in /sup 185/Pt and /sup 183/Ir
Janzen, V.P.; Carpenter, M.P.; Riedinger, L.L.; Schmitz, W.; Pilotte, S.; Monaro, S.; Rajnauth, D.D.; Johansson, J.K.; Popescu, D.G.; Waddington, J.C.; and others
1988-10-31
Evidence is presented for a ..pi..h/sub 9/2/ band crossing at surprisingly low rotational frequency in the odd-neutron /sup 185/Pt and odd-proton /sup 183/Ir nuclei. The proton nature of the crossing is indicated by measured B(M1;I..-->..I-1)/B(E2;I..-->..I-2) ratios, strongly supported by theoretical values from both the semiclassical Doenau and Frauendorf approach and a variable-core particle-rotor calculation employing cranked-shell-model matrix elements. However, the actual crossing frequency is greatly overestimated by the cranking method using generally accepted model parameters.
A pencil beam dose calculation model for CyberKnife system.
Liang, Bin; Li, Yongbao; Liu, Bo; Zhou, Fugen; Xu, Shouping; Wu, Qiuwen
2016-10-01
CyberKnife system is initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of system, capable of arbitrarily shaped treatment field. The purpose of this study is to develop a model based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system. A pencil beam dose calculation algorithm widely used in linac based treatment planning system was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom-ratios (TPRs) and off-center-ratio (OCR) was compared. Three cone size correction techniques were developed to better fit the OCRs at the penumbra region, which are further evaluated by the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. The RMS between the measured and calculated TPRs is 0.7% averaged for all cones, with the descending region at 0.5%. The RMSs of OCR at infield and outfield regions are both at 0.5%. The distance to agreement (DTA) at the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred, due to their ability to predict more accurately the OF variations with the source to axis distance (SAD). In noncircular field validation, the pencil beam calculated
Calculations of Diffuser Flows with an Anisotropic K-Epsilon Model
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T.-H.
1995-01-01
A newly developed anisotropic K-epsilon model is applied to calculate three axisymmetric diffuser flows with or without separation. The new model uses a quadratic stress-strain relation and satisfies the realizability conditions, i.e., it ensures both the positivity of the turbulent normal stresses and the Schwarz' inequality between any fluctuating velocities. Calculations are carried out with a finite-element method. A second-order accurate, bounded convection scheme and sufficiently fine grids are used to ensure numerical credibility of the solutions. The standard K-epsilon model is also used in order to highlight the performance of the new model. Comparison with the experimental data shows that the anisotropic K-epsilon model performs consistently better than does the standard K-epsilon model in all of the three test cases.
NASA Technical Reports Server (NTRS)
Kiehl, J. T.; Lacis, A. A.; Schwarzkopf, M. D.; Fels, S. B.
1991-01-01
The performance of several parameterized models is described with respect to numerical prediction and climate research at GFDL, NCAR, and GISS. The radiation codes of the models were compared to benchmark calculations and other codes for the intercomparison of radiation codes in climate models (ICRCCM). Cooling rates and fluxes calculated from the models are examined in terms of their application to established general circulation models (GCMs) from the three research institutions. The newest radiation parameterization techniques show the most significant agreement with the benchmark line-by-line (LBL) results. The LBL cooling rates correspond to cooling rate profiles from the models, but the parameterization of the water vapor continuum demonstrates uncertain results. These uncertainties affect the understanding of some lower tropospheric cooling, and therefore more accurate parameterization of the water vapor continuum, as well as the weaker absorption bands of CO2 and O3 is recommended.
Model calculations of the Sivers function satisfying the Burkardt sum rule
Courtoy, A.; Vento, V.; Scopetta, S.
2009-04-01
It is shown that, at variance with previous analyses, the MIT bag model can explain the available data of the Sivers function and satisfies the Burkardt sum rule to a few percent accuracy. The agreement is similar to the one recently found in the constituent quark model. Therefore, these two model calculations of the Sivers function are in agreement with the present experimental and theoretical wisdom.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1984-01-01
Models and spectra of sunspots were studied, because they are important to energy balance and variability discussions. Sunspot observations in the ultraviolet region 140 to 168 nn was obtained by the NRL High Resolution Telescope and Spectrograph. Extensive photometric observations of sunspot umbrae and prenumbrae in 10 chanels covering the wavelength region 387 to 3800 nm were made. Cool star opacities and model atmospheres were computed. The Sun is the first testcase, both to check the opacity calculations against the observed solar spectrum, and to check the purely theoretical model calculation against the observed solar energy distribution. Line lists were finally completed for all the molecules that are important in computing statistical opacities for energy balance and for radiative rate calculations in the Sun (except perhaps for sunspots). Because many of these bands are incompletely analyzed in the laboratory, the energy levels are not well enough known to predict wavelengths accurately for spectrum synthesis and for detailed comparison with the observations.
QSPR modeling of thermal stability of nitroaromatic compounds: DFT vs. AM1 calculated descriptors.
Fayet, Guillaume; Rotureau, Patricia; Joubert, Laurent; Adamo, Carlo
2010-04-01
The quantitative structure-property relationship (QSPR) methodology was applied to predict the decomposition enthalpies of 22 nitroaromatic compounds, used as indicators of thermal stability. An extended series of descriptors (constitutional, topological, geometrical charge related and quantum chemical) was calculated at two different levels of theory: density functional theory (DFT) and semi-empirical AM1 approaches. Reliable models have been developed for each level, leading to similar correlations between calculated and experimental data (R(2) > 0.98). Hence, both of them can be employed as screening tools for the prediction of thermal stability of nitroaromatic compounds. If using the AM1 model presents the advantage to be less time consuming, DFT allows the calculation of more accurate molecular quantum properties, e.g., conceptual DFT descriptors. In this study, our best QSPR model is based on such descriptors, providing more chemical comprehensive relationships with decomposition reactivity, a particularly complex property for the specific class of nitroaromatic compounds.
Development of a patient-specific model for calculation of pulmonary function
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Ding, Mingyue; Movsas, Benjamin; Chetty, Indrin J.
2011-06-01
The purpose of this paper is to develop a patient-specific finite element model (FEM) to calculate the pulmonary function of lung cancer patients for evaluation of radiation treatment. The lung model was created with an in-house developed FEM software with region-specific parameters derived from a four-dimensional CT (4DCT) image. The model was used first to calculate changes in air volume and elastic stress in the lung, and then to calculate regional compliance defined as the change in air volume corrected by its associated stress. The results have shown that the resultant compliance images can reveal the regional elastic property of lung tissue, and could be useful for radiation treatment planning and assessment.
Schick, W.C. Jr.; Milani, S.; Duncombe, E.
1980-03-01
A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model.
Tabulation of Mie scattering calculation results for microwave radiative transfer modeling
NASA Technical Reports Server (NTRS)
Yeh, Hwa-Young M.; Prasad, N.
1988-01-01
In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
Modification method of numerical calculation of heat flux over dome based on turbulence models
NASA Astrophysics Data System (ADS)
Zhang, Daijun; Luo, Haibo; Zhang, Junchao; Zhang, Xiangyue
2016-10-01
For the optical guidance system flying at low altitude and high speed, the calculation of turbulent convection heat transfer over its dome is the key to designing this kind of aircraft. RANS equations-based turbulence models are of high computation efficiency and their calculation accuracy can satisfy the engineering requirement. But for the calculation of the flow in the shock layer of strong entropy and pressure disturbances existence, especially of aerodynamic heat, some parameters in the RANS energy equation are necessary to be modified. In this paper, we applied turbulence models on the calculation of the heat flux over the dome of sphere-cone body at zero attack. Based on Billig's results, the shape and position of detached shock were extracted in flow field using multi-block structured grid. The thermal conductivity of the inflow was set to kinetic theory model with respect to temperature. When compared with Klein's engineering formula at the stagnation point, we found that the results of turbulent models were larger. By analysis, we found that the main reason of larger values was the interference from entropy layer to boundary layer. Then thermal conductivity of inflow was assigned a fixed value as equivalent thermal conductivity in order to compensate the overestimate of the turbulent kinetic energy. Based on the SST model, numerical experiments showed that the value of equivalent thermal conductivity was only related with the Mach number. The proposed modification approach of equivalent thermal conductivity for inflow in this paper could also be applied to other turbulence models.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Modification of the Simons model for calculation of nonradial expansion plumes
NASA Technical Reports Server (NTRS)
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The Simons model is a simple model for calculating the expansion plumes of rockets and thrusters and is a widely used engineering tool for the determination of spacecraft impingement effects. The model assumes that the density of the plume decreases radially from the nozzle exit. Although a high degree of success has been achieved in modeling plumes with moderate Mach numbers, the accuracy obtained under certain conditions is unsatisfactory. A modification made to the model that allows effective description of nonradial behavior in plumes is presented, and the conditions under which its use is preferred are prescribed.
Half-life calculation of one-proton emitters with a shell model potential
Rodrigues, M. M.; Duarte, S. B.
2013-03-25
The accumulated amount of data for half-lives of proton emitters still remains a challenge to the ability of nuclear models to reproduce them consistently. These nuclei are far from beta stability line in a region where the validity of current nuclear models is not guaranteed. A nuclear shell model is introduced to the calculation of the nuclear barrier of less deformed proton emitters. The predictions using the proposed model are in good agreement with the data, with the advantage of have used only a single parameter in the model.
NASA Astrophysics Data System (ADS)
Feng, Y.; Xia, H.; Shrestha, S.; Conibeer, G.
2015-11-01
Intrigued by the high demand of fast design and development of nanoscale electronic devices, electronic transport across atomic dimensions becomes an important theoretical and computational problem. In this paper we present a tight-binding based model specially tailed for calculating realistic tunneling structures with scattering region dimensions of several nanometers. The proposed model allows for a proper treatment of the electron-phonon coupling effects in a tractable manner. By greatly reducing the complexity of the phonon-involved problem down to a quadratic level, transmission calculation for large-scale systems, including both planar structures and quantum wire structures, becomes practically feasible.
Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow
NASA Astrophysics Data System (ADS)
Kemerink, G. J.; Pleiter, F.
1986-08-01
The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
Turowski, Marcus; Amotchkina, Tatiana; Ehlers, Henrik; Jupé, Marco; Ristau, Detlev
2014-02-01
The electronic and optical properties of TiO2 atomic structures representing simulated thin films have been investigated using density functional theory. Suitable model parameters and system sizes have been identified in advance by validation of the results with experimental data. Dependencies of the electronic band gap and the refractive index have been calculated as a function of film density. The results of the performed calculations have been compared to characterized optical properties of titania single layers deposited using different coating techniques. The modeled dependencies are consistent with experimental observations, and absolute magnitudes of simulated values are in agreement with measured optical data.
A Computer Code for the Calculation of NLTE Model Atmospheres Using ALI
NASA Astrophysics Data System (ADS)
Kubát, J.
2003-01-01
A code for calculation of NLTE model atmospheres in hydrostatic and radiative equilibrium in either spherically symmetric or plane parallel geometry is described. The method of accelerated lambda iteration is used for the treatment of radiative transfer. Other equations (hydrostatic equilibrium, radiative equilibrium, statistical equilibrium, optical depth) are solved using the Newton-Raphson method (linearization). In addition to the standard output of the model atmosphere (dependence of temperature, density, radius, and population numbers on column mass depth) the code enables optional additional outputs for better understanding of processes in the atmosphere. The code is able to calculate model atmospheres of plane-parallel and spherically symmetric semi-infinite atmospheres as well as models of plane parallel and spherical shells. There is also an option for solution of a restricted problem of a NLTE line formation (solution of radiative transfer and statistical equilibrium for a given model atmosphere). The overall scheme of the code is presented.
NASA Astrophysics Data System (ADS)
Müller, Arne; Jovanov, Vladislav; Wagner, Veit
2017-07-01
This work shows an analytical semiconductor diode model suitable to describe photovoltaic cells for a large variety of physical parameters, such as mobility of charge carriers and illumination intensity. The model is based on a simplified drift-diffusion calculation assuming a constant electric field and a linear increasing current inside the semiconductor layer. The model also accounts for recombination processes in the active and contact layers. Organic and inorganic solar cells can be accurately modeled, which is confirmed by comparison of experimental data and full drift-diffusion calculations with the same physical parameters. In addition, this model shows how physical properties can be directly extracted from the crossing point often found in J-V characteristics.
Photon and electron absorbed fractions calculated from a new tomographic rat model
NASA Astrophysics Data System (ADS)
Peixoto, P. H. R.; Vieira, J. W.; Yoriyaz, H.; Lima, F. R. A.
2008-10-01
This paper describes the development of a tomographic model of a rat developed using CT images of an adult male Wistar rat for radiation transport studies. It also presents calculations of absorbed fractions (AFs) under internal photon and electron sources using this rat model and the Monte Carlo code MCNP. All data related to the developed phantom were made available for the scientific community as well as the MCNP inputs prepared for AF calculations in that phantom and also all estimated AF values, which could be used to obtain absorbed dose estimates—following the MIRD methodology—in rats similar in size to the presently developed model. Comparison between the rat model developed in this study and that published by Stabin et al (2006 J. Nucl. Med. 47 655) for a 248 g Sprague-Dawley rat, as well as between the estimated AF values for both models, has been presented.
A model of the circulating blood for use in radiation dose calculations
Hui, T.E.; Poston, J.W. Sr.
1987-12-31
Over the last few years there has been a significant increase in the use of radionuclides in leukocyte, platelet, and erythrocyte imaging procedures. Radiopharmaceutical used in these procedures are confined primarily to the blood, have short half-lives, and irradiate the body as they move through the circulatory system. There is a need for a model, to describe the circulatory system in an adult human, which can be used to provide radiation absorbed dose estimates for these procedures. A simplified model has been designed assuming a static circulatory system and including major organs of the body. The model has been incorporated into the MIRD phantom and calculations have been completed for a number of exposure situations and radionuclides of clinical importance. The model will be discussed in detail and results of calculations using this model will be presented.
A model of the circulating blood for use in radiation dose calculations
Hui, T.E.; Poston, J.W. Sr.
1987-01-01
Over the last few years there has been a significant increase in the use of radionuclides in leukocyte, platelet, and erythrocyte imaging procedures. Radiopharmaceutical used in these procedures are confined primarily to the blood, have short half-lives, and irradiate the body as they move through the circulatory system. There is a need for a model, to describe the circulatory system in an adult human, which can be used to provide radiation absorbed dose estimates for these procedures. A simplified model has been designed assuming a static circulatory system and including major organs of the body. The model has been incorporated into the MIRD phantom and calculations have been completed for a number of exposure situations and radionuclides of clinical importance. The model will be discussed in detail and results of calculations using this model will be presented.
Waegeneers, Nadia; Ruttens, Ann; De Temmerman, Ludwig
2011-06-15
A chain model was developed to calculate the flow of cadmium from soil, drinking water and feed towards bovine tissues. The data used for model development were tissue Cd concentrations of 57 bovines and Cd concentrations in soil, feed and drinking water, sampled at the farms were the bovines were reared. Validation of the model occurred with a second set of measured tissue Cd concentrations of 93 bovines of which age and farm location were known. The exposure part of the chain model consists of two parts: (1) a soil-plant transfer model, deriving cadmium concentrations in feed from basic soil characteristics (pH and organic matter content) and soil Cd concentrations, and (2) bovine intake calculations, based on typical feed and water consumption patterns for cattle and Cd concentrations in feed and drinking water. The output of the exposure model is an animal-specific average daily Cd intake, which is then taken forward to a kinetic uptake model in which time-dependent Cd concentrations in bovine tissues are calculated. The chain model was able to account for 65%, 42% and 32% of the variation in observed kidney, liver and meat Cd concentrations in the validation study.
NASA Astrophysics Data System (ADS)
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best estimate calculation that has been replacing the conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. It was showed the capacity of the sampling-based method for burnup when the calculations sample size is optimized and many parameter uncertainties are investigated together, in the same input.
A model to calculate the induced dose rate around an 18 MV ELEKTA linear accelerator.
Perrin, Bruce; Walker, Anne; Mackay, Ranald
2003-03-07
The dose rate due to activity induced by (gamma, n) reactions around an ELEKTA Precise accelerator running at 18 MV is reported. A model to calculate the induced dose rate for a variety of working practices has been derived and compared to the measured values. From this model, the dose received by the staff using the machine can be estimated. From measured dose rates at the face of the linear accelerator for a 10 x 10 cm2 jaw setting at 18 MV an activation coefficient per MU was derived for each of the major activation products. The relative dose rates at points around the linac head, for different energy and jaw settings, were measured. Dose rates adjacent to the patient support system and portal imager were also measured. A model to calculate the dose rate at these points was derived, and compared to those measured over a typical working week. The model was then used to estimate the maximum dose to therapists for the current working schedule on this machine. Calculated dose rates at the linac face agreed to within +/- 12% of those measured over a week, with a typical dose rate of 4.5 microSv h(-1) 2 min after the beam has stopped. The estimated maximum annual whole body dose for a treatment therapist, with the machine treating at only 18 MV, for 60000 MUs per week was 2.5 mSv. This compares well with value of 2.9 mSv published for a Clinac 21EX. A model has been derived to calculate the dose from the four dominant activation products of an ELEKTA Precise 18 MV linear accelerator. This model is a useful tool to calculate the induced dose rate around the treatment head. The model can be used to estimate the dose to the staff for typical working patterns.
Bichara, C.; Bergman, C.; Mathieu, J.-C.
1985-01-01
Monte Carlo calculations are carried out to give exact values of some thermodynamic properties of alloys. The calculations are performed within the framework of the surrounded atom model the main assumptions of which are: quasilattice structure of the alloy, nearest neighbour interactions, description of the configuration in terms of ''surrounded atoms''. The results are then compared wit those obtained using current approximations: the Bragg-Williams treatment and the quasichemical treatment. This work enables the authors to generalize the previous conclusions drawn in the study of the regular solution model. In every case, whatever the sign of the interactions (ordering or clustering tendency) Monte Carlo calculations yield a local order that both approximations fail to reproduce. In order to compare the calculations with experimental data, Cowley's short range order parameter is calculated by Monte Carlo and by the approximate methods (the parameters of the surrounded atom model are derived from thermodynamic data). The Monte Carlo values compare better than the quasichemical ones with the data obtained by X-ray or neutron diffraction in three actual systems.
Space Radiation Dose Calculations for the Space Experiment Matroshka-R Modelling Conditions
NASA Astrophysics Data System (ADS)
Shurshakov, Vyacheslav; Kartashov, Dmitrij; Tolochek, Raisa
Space radiation dose calculations for the space experiment Matroshka-R modelling conditions are presented in the report. The experiment has been carried out onboard the ISS from 2004 to 2014. Dose measurements were realized both outside the ISS on the outer surface of the Service Module with the MTR-facility and in the ISS compartments with anthropomorphic and spherical phantoms, and the protective curtain facility. Newly applied approach to calculate the shielding probability functions for complex shape objects is used when the object surface is composed from a set of the disjoint adjacent triangles that fully cover the surface. Using the simplified Matroshka-R shielding geometry models of the space station compartments the space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms, and for an additional shielding installed in the compartment are calculated. There is good agreement between the data obtained in the experiment and calculated ones within an experiment accuracy of about 10%. Thus the calculation method used has been successfully verified with the Matroshka-R experiment data. The suggested method can be recommended for modelling of radiation loads on the crewmembers, and estimation of the additional shielding efficiency in space station compartments, and also for pre-flight estimations of radiation shielding in future space missions.
Continuum solvent model calculations of alamethicin-membrane interactions: thermodynamic aspects.
Kessel, A; Cafiso, D S; Ben-Tal, N
2000-01-01
Alamethicin is a 20-amino acid antibiotic peptide that forms voltage-gated ion channels in lipid bilayers. Here we report calculations of its association free energy with membranes. The calculations take into account the various free-energy terms that contribute to the transfer of the peptide from the aqueous phase into bilayers of different widths. The electrostatic and nonpolar contributions to the solvation free energy are calculated using continuum solvent models. The contributions from the lipid perturbation and membrane deformation effects and the entropy loss associated with peptide immobilization in the bilayer are estimated from a statistical thermodynamic model. The calculations were carried out using two classes of experimentally observed conformations, both of which are helical: the NMR and the x-ray crystal structures. Our calculations show that alamethicin is unlikely to partition into bilayers in any of the NMR conformations because they have uncompensated backbone hydrogen bonds and their association with the membrane involves a large electrostatic solvation free energy penalty. In contrast, the x-ray conformations provide enough backbone hydrogen bonds for the peptide to associate with bilayers. We tested numerous transmembrane and surface orientations of the peptide in bilayers, and our calculations indicate that the most favorable orientation is transmembrane, where the peptide protrudes approximately 4 A into the water-membrane interface, in very good agreement with electron paramagnetic resonance and oriented circular dichroism measurements. The calculations were carried out using two alamethicin isoforms: one with glutamine and the other with glutamate in the 18th position. The calculations indicate that the two isoforms have similar membrane orientations and that their insertion into the membrane is likely to involve a 2-A deformation of the bilayer, again, in good agreement with experimental data. The implications of the results for the
Nonadditivity in moments ofinertia of high-K multiquasiparticle bands
NASA Astrophysics Data System (ADS)
Zhang, Zhen-Hua; Wu, Xi; Lei, Yi-An; Zeng, Jin-Yan
2008-09-01
The experimental high-K 2- and 3-quasiparticle bands of well deformed rare-earth nuclei are analyzed. It is found that there exists significant nonadditivity in moments of inertia (MOIs) for these bands. The microscopic mechanism of the rotational bands is investigated by the particle number conserving (PNC) method in the frame of cranked shell model with pairing, in which the blocking effects are taken care of exactly. The experimental rotational frequency dependence of these bands is well reproduced in PNC calculations. The nonadditivity in MOIs originates from the destructive interference between Pauli blocking effects. Supported by National Natural Science Foundation of China (10675006, 10675007, 10435010)
Wimmer, Peter; Benedikt, Martin; Huber, Philipp; Ferenczi, Izabella
2015-01-01
In previous research, a tool chain to simulate vehicle-pedestrian accidents from ordinary driving state to in-crash has been developed. This tool chain allows for injury criteria-based, vehicle-specific (geometry, stiffness, active safety systems, etc.) assessments. Due to the complex nature of the included finite element analysis (FEA) models, calculation times are very high. This is a major drawback for using FEA models in large-scale effectiveness assessment studies. Therefore, fast calculating surrogate models to approximate the relevant injury criteria as a function of pedestrian vehicle collision constellations have to be developed. The development of surrogate models for head and leg injury criteria to overcome the problem of long calculation times while preserving high detail level of results for effectiveness analysis is shown in this article. These surrogate models are then used in the tool chain as time-efficient replacements for the FEA model to approximate the injury criteria values. The method consists of the following steps: Selection of suitable training data sets out of a large number of given collision constellations, detailed FEA calculations with the training data sets as input, and training of the surrogate models with the FEA model's input and output values. A separate surrogate model was created for each injury criterion, consisting of a response surface that maps the input parameters (i.e., leg impactor position and velocity) to the output value. In addition, a performance test comparing surrogate model predictions of additional collision constellations to the results of respective FEA calculations was carried out. The developed method allows for prediction of injury criteria based on impact constellation for a given vehicle. Because the surrogate models are specific to a certain vehicle, training has to be redone for a new vehicle. Still, there is a large benefit regarding calculation time when doing large-scale studies. The method can be
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation
Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm. PMID:28886048
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm.
A finite element method for shear stresses calculation in composite blade models
NASA Astrophysics Data System (ADS)
Paluch, B.
1991-09-01
A finite-element method is developed for accurately calculating shear stresses in helicopter blade models, induced by torsion and shearing forces. The method can also be used to compute the equivalent torsional stiffness of the section, their transverse shear coefficient, and the position of their center of torsion. A grid generator method which is a part of the calculation program is also described and used to discretize the sections quickly and to condition the grid data reliably. The finite-element method was validated on a few sections composed of isotropic materials and was then applied to a blade model sections made of composite materials. Good agreement was obtained between the calculated and experimental data.
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.
SAMPLE AOR CALCULATION USING ANSYS PARAMETRIC MODEL FOR TANK SST-AY
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document documents the ANSYS parametric model for double-shell tank AY and provides sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to provide a sample analysis of the DST-AY tanks based on AOR loads, plus loads identified in the Statement of Work (SOW) for CHG contract 92879. This is not an analysis. Instead, the present calculation utilizes the parametric model generated for the double shell tank DST-AY, which is based on Buyer-supplied as-built drawings and information for the analyses of record (AOR) for Double-Shell Tanks (DSTs), encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2011-06-15
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiation Measurements 41, 1080 (2006).]. The JAM interaction model agrees with the HARP experiment [H. Collaboration, Astropart. Phys. 30, 124 (2008).] a little better than DPMJET-III[S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252.]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007).][M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007).]. Some improvements in the calculation of atmospheric neutrino flux are also reported.
Modelling lateral beam quality variations in pencil kernel based photon dose calculations
NASA Astrophysics Data System (ADS)
Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.
2006-08-01
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong
2016-09-01
The osmotic pressure of glucose solution at a wide concentration range was calculated using ASOG model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the well-established freezing point osmometry and ASOG model calculations at low concentrations and with only ASOG model calculations at high concentrations where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations at a wide concentration range, while at low concentrations freezing point osmometry measurements provide better comparability with ASOG model calculations.
Calculations of inflaton decays and reheating: with applications to no-scale inflation models
Ellis, John; Garcia, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V. E-mail: garciagarcia@physics.umn.edu E-mail: olive@physics.umn.edu
2015-07-01
We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, w, during the epoch of inflaton decay, the reheating temperature, T{sub reh}, and the number of inflationary e-folds, N{sub *}, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index n{sub s} and the tensor-to-scalar perturbation ratio r, converting them into constraints on N{sub *}, the inflaton decay rate and other parameters of specific no-scale inflationary models.
Two-dimensional model calculations of stratospheric HCl and ClO
NASA Astrophysics Data System (ADS)
Miller, C.; Steed, J. M.; Filkin, D. L.; Jesson, J. P.
1980-12-01
A two-dimensional atmospheric model has been developed to take into account latitudinal and seasonal effects in the calculation of atmospheric constituent profiles. The model includes 30 active chemical species and all chemical reactions and reactions rates applicable to them for the domain from pole to pole and 0 to 55 km height; mean meridional circulation is parameterized using the advective circulation field of Murgatroyd and Singleton (1961), while eddy diffusion parameterization is basically that of Luther (1974) and spatial derivatives for transport are approximated by a second-order finite difference representation. Time-dependent integration of the model results in latitudinal variations of the N2O volume mixing ratio, CFCl3, CF2Cl2 and CH4, column ozone and HNO3 in agreement with available measurements, whereas the agreement between calculated and measured HCl and ClO profiles is found to be no better than that obtained with one-dimensional models.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. The corrections due to aerosols are not accounted for. The transmissions functions for water vapor, carbon dioxide, and water are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant. The vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
Monte Carlo calculation model for heat radiation of inclined cylindrical flames and its application
NASA Astrophysics Data System (ADS)
Chang, Zhangyu; Ji, Jingwei; Huang, Yuankai; Wang, Zhiyi; Li, Qingjie
2017-07-01
Based on Monte Carlo method, a calculation model and its C++ calculating program for radiant heat transfer from an inclined cylindrical flame are proposed. In this model, the total radiation energy of the inclined cylindrical flame is distributed equally among a certain number of energy beams, which are emitted randomly from the flame surface. The incident heat flux on a surface is calculated by counting the number of energy beams which could reach the surface. The paper mainly studies the geometrical evaluation criterion for validity of energy beams emitted by inclined cylindrical flames and received by other surfaces. Compared to Mudan's formula results for a straight cylinder or a cylinder with 30° tilt angle, the calculated view factors range from 0.0043 to 0.2742 and the predicted view factors agree well with Mudan's results. The changing trend and values of incident heat fluxes computed by the model is consistent with experimental data measured by Rangwala et al. As a case study, incident heat fluxes on a gasoline tank, both the side and the top surface are calculated by the model. The heat radiation is from an inclined cylindrical flame generated by another 1000 m3 gasoline tank 4.6 m away from it. The cone angle of the flame to the adjacent oil tank is 45° and the polar angle is 0°. The top surface and the side surface of the tank are divided into 960 and 5760 grids during the calculation, respectively. The maximum incident heat flux on the side surface is 39.64 and 51.31 kW/m2 on the top surface. Distributions of the incident heat flux on the surface of the oil tank and on the ground around the fire tank are obtained, too.
TRAC-PF1 LOCA calculations using fine-node and coarse-node input models
Dobranich, D.; Buxton, L.D.; Wong, C.N.C.
1985-05-01
TRAC-PF1 calculations of a 200% cold-leg break LOCA have been completed for a UHI (upper head injection accumulator) plant using both fine-node (with 776 mesh cells) and coarse-node (with 320 mesh cells) input models. This study was performed to determine the effect of noding on predicted results and on computer running time. It was found that the overall sequence of events and the important trends of the transient were predicted to be nearly the same with both the fine-node and coarse-node models. There were differences in the time-dependent behavior of the cold-leg accumulator injection, and the predicted PCT for the coarse-node calculation was about 75 K less than that for the fine-node calculation. The higher PCT of the fine-node calculation is attributed primarily to three-dimensional flow effects in the core. The complete (steady state plus transient) coarse-node calculation required 13.5 hours of CYBER 76 computer time compared to 68.3 hours for the fine node calculation, yielding an overall factor of five decrease in running time. Thus, we conclude that for any large break LOCA analyses in which only the overall trends are of concern, the loss of accuracy resulting from use of such a coarse-node model will normally be inconsequential compared to the savings in resources that are realized. However, if the objective of the analyses is the investigation of the effects of multi-dimensional flows on clad temperatures, then a detailed model is required.
Monte Carlo calculation model for heat radiation of inclined cylindrical flames and its application
NASA Astrophysics Data System (ADS)
Chang, Zhangyu; Ji, Jingwei; Huang, Yuankai; Wang, Zhiyi; Li, Qingjie
2017-02-01
Based on Monte Carlo method, a calculation model and its C++ calculating program for radiant heat transfer from an inclined cylindrical flame are proposed. In this model, the total radiation energy of the inclined cylindrical flame is distributed equally among a certain number of energy beams, which are emitted randomly from the flame surface. The incident heat flux on a surface is calculated by counting the number of energy beams which could reach the surface. The paper mainly studies the geometrical evaluation criterion for validity of energy beams emitted by inclined cylindrical flames and received by other surfaces. Compared to Mudan's formula results for a straight cylinder or a cylinder with 30° tilt angle, the calculated view factors range from 0.0043 to 0.2742 and the predicted view factors agree well with Mudan's results. The changing trend and values of incident heat fluxes computed by the model is consistent with experimental data measured by Rangwala et al. As a case study, incident heat fluxes on a gasoline tank, both the side and the top surface are calculated by the model. The heat radiation is from an inclined cylindrical flame generated by another 1000 m3 gasoline tank 4.6 m away from it. The cone angle of the flame to the adjacent oil tank is 45° and the polar angle is 0°. The top surface and the side surface of the tank are divided into 960 and 5760 grids during the calculation, respectively. The maximum incident heat flux on the side surface is 39.64 and 51.31 kW/m2 on the top surface. Distributions of the incident heat flux on the surface of the oil tank and on the ground around the fire tank are obtained, too.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1985-01-01
Solar chromospheric models are described. The models included are based on the observed spectrum, and on the assumption of hydrostatic equilibrium. The calculations depend on realistic solutions of the radiative transfer and statistical equilibrium equations for optically thick lines and continua, and on including the effects of large numbers of lines throughout the spectrum. Although spectroheliograms show that the structure of the chromosphere is highly complex, one-dimensional models of particular features are reasonably successful in matching observed spectra. Such models were applied to the interpretation of chromospheric observations.
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials, and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be a useful research and forecast tools.
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials, and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be a useful research and forecast tools.
Power and Sample Size Calculations for Multivariate Linear Models with Random Explanatory Variables
ERIC Educational Resources Information Center
Shieh, Gwowen
2005-01-01
This article considers the problem of power and sample size calculations for normal outcomes within the framework of multivariate linear models. The emphasis is placed on the practical situation that not only the values of response variables for each subject are just available after the observations are made, but also the levels of explanatory…
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Maiti, Amitesh
2003-01-01
This paper presents viewgraphs on the modeling of the electromechanical response of carbon nanotubes, utilizing molecular dynamics and transport calculations. The topics include: 1) Simulations of the experiment; 2) Effect of diameter, length and temperature; and 3) Study of sp3 coordination-"The Table experiment".
A new timing model for calculating the intrinsic timing resolution of a scintillator detector.
Shao, Yiping
2007-02-21
The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions.
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Maiti, Amitesh
2003-01-01
This paper presents viewgraphs on the modeling of the electromechanical response of carbon nanotubes, utilizing molecular dynamics and transport calculations. The topics include: 1) Simulations of the experiment; 2) Effect of diameter, length and temperature; and 3) Study of sp3 coordination-"The Table experiment".
The 'Little Ice Age' - Northern Hemisphere average observations and model calculations
NASA Technical Reports Server (NTRS)
Robock, A.
1979-01-01
Numerical energy balance climate model calculations of the average surface temperature of the Northern Hemisphere for the past 400 years are compared with a new reconstruction of the past climate. Forcing with volcanic dust produces the best simulation, whereas expressing the solar constant as a function of the envelope of the sunspot number gives very poor results.
Modeling of an anoxic/methanogenic biofilm: effect of pH calculation within the biofilm.
César, Huiliñir; Silvio, Montalvo
2013-11-01
The models of anoxic/methanogenic processes in biofilm reactors published until now have supposed that pH does not change between the bulk liquid and biofilm. These assumptions are not necessarily valid for processes in reactors with biofilms. The present work studied an anoxic/methanogenic biofilm reactor incorporating the pH variation in both bulk and biofilm. Two dynamic models, one including the calculation of pH throughout the biofilm, were solved numerically and compared with each other. The results showed that the inclusion of a pH algorithm calculation produces different profiles and efficiencies on an anoxic/methanogenic biofilm system. Values of C/N ratio higher than 20 mg TOC/mg NO3-N and values of HRT lower than 4.5 h produce differences of up to 46 % with a traditional model that does not include pH calculation inside the biofilm. Thus, the assumption of a constant pH within the biofilm when using the traditional model does not accurately describe the performance of the system under these conditions, and pH calculation inside the biofilm should be included.
The 'Little Ice Age' - Northern Hemisphere average observations and model calculations
NASA Technical Reports Server (NTRS)
Robock, A.
1979-01-01
Numerical energy balance climate model calculations of the average surface temperature of the Northern Hemisphere for the past 400 years are compared with a new reconstruction of the past climate. Forcing with volcanic dust produces the best simulation, whereas expressing the solar constant as a function of the envelope of the sunspot number gives very poor results.
Radiation environment at MARS: Assessment with measurements and model calculated predictions
NASA Astrophysics Data System (ADS)
Saganti, P.; Cucinotta, F.; Zeitlin, C.; Cleghorn, T.; Wilson, J.
Understanding the distribution of the particle flux such as protons alpha particles and heavy ions in deep-space and on the surface of Mars for a known GCR Galactic Cosmic Ray environment and contributions from Solar Particle Events SPE are essential for future human exploration missions The GCR spectra in Mars orbit were generated with the recently expanded HZETRN High Z and Energy Transport and QMSFRG Quantum Multiple-Scattering theory of nuclear Fragmentation model calculations These model calculations are compared with the first two years of measured data from the MARIE Martian Radiation Environment Experiment instrument onboard the 2001 Mars Odyssey spacecraft The dose rates observed by the MARIE instrument are within 10 of the model calculated predictions and the heavy ion particle flux predictions are found to be within 5 of the measurements from the Cosmic Ray Isotope Spectrometer CRIS instrument on board the Advanced Composition Explorer ACE Model calculated particle flux predictions and comparisons with other observed measurement trends for the years 2004 and 2005 will be presented
Herzog, Bernd
2002-01-01
Measurements of in vitro sun protection factors (SPFs) are a common way of assessing sunscreen formulations at the stage of screening. The aim of the present investigation is to provide an alternative tool for the estimation of SPF values using a calculation based on the UV spectroscopic properties of the individual UV absorbers. As with in vitro measurements, the crucial step is to work out realistic values of transmissions of UV light through a film of the sunscreen formulation in the important spectral range between 290 and 400 nm. Once these transmissions are given, the SPF can be calculated. Since the human skin is an inhomogeneous substrate, a step film model for the calculation of such transmissions had been proposed by J.J. O'Neill. The step film geometry in this model is a function of two parameters that characterize the fraction of the thin and thick parts of the film and their difference in thickness. The transmissions and therefore the SPF are sensitive functions of the step film parameters. In order to use the model for the prediction of realistic SPF values, the step film parameters are calibrated using three sunscreen standard formulations with well-known in vivo SPF. A satisfactory correlation of in vivo SPF values and SPF values calculated with the calibrated step film model using an additional 36 different sunscreen formulations (in vivo SPF values between 3 and 36) is demonstrated.
NASA Technical Reports Server (NTRS)
Popinceanu, N. G.; Kremmer, I.
1974-01-01
A mechano-acoustic model is reported for calculating acoustic energy radiated by a working gear. According to this model, a gear is an acoustic coublet formed of the two wheels. The wheel teeth generate cylindrical acoustic waves while the front surfaces of the teeth behave like vibrating pistons. Theoretical results are checked experimentally and good agreement is obtained with open gears. The experiments show that the air noise effect is negligible as compared with the structural noise transmitted to the gear box.
Optical model calculations of 14.6A GeV silicon fragmentation cross sections
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Khan, Ferdous; Tripathi, Ram K.
1993-01-01
An optical potential abrasion-ablation collision model is used to calculate hadronic dissociation cross sections for a 14.6 A GeV(exp 28) Si beam fragmenting in aluminum, tin, and lead targets. The frictional-spectator-interaction (FSI) contributions are computed with two different formalisms for the energy-dependent mean free path. These estimates are compared with experimental data and with estimates obtained from semi-empirical fragmentation models commonly used in galactic cosmic ray transport studies.
A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions
Watson, E.E.; Stabin, M.G.
1986-01-01
Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs.
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources i.e. observed or calculated. These differences can be easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes, however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
Zehnder, Ashley M; Hawkins, Michelle G; Trestrail, Earl A; Holt, Randall W; Kent, Michael S
2012-12-01
To optimize the use of CT-guided modeling for the calculation of body surface area (BSA) in domestic rabbits (Oryctolagus cuniculus). Animals-12 domestic rabbits. Adult rabbits (body weight, 1 to > 4 kg) that were client-owned animals undergoing CT for disease diagnosis or deceased laboratory animals donated from other research projects were scanned with a CT scanner. Images were transferred to a radiation therapy planning software program. Image slices were captured as contiguous slices at 100 kVp and 100 mA and processed to 0.1-cm-thick sections. The length of each contoured slice was summed to calculate a final BSA measurement. Nonlinear regression analysis was then used to derive an equation for the calculation of BSA in rabbits. The constant calculated by use of this method was 9.9 (range, 9.59 to 10). The R(2) for the goodness of fit was 0.9332. The equation that best described BSA as a function of body weight for domestic rabbits with this method was as follows: BSA = (9.9 × [body weight {in grams}](2/3))/10,000. The BSA calculated via the CT-guided method yielded results similar to those obtained with equations for other similarly sized mammals and verified the use of such equations for rabbits. Additionally, this technique can be used for species that lack equations for the accurate calculation of BSA.
Model-based calculating tool for pollen-mediated gene flow frequencies in plants.
Lei, Wang; Bao-Rong, Lu
2016-12-30
The potential social-economic and environmental impacts caused by transgene flow from genetically engineered (GE) crops have stimulated worldwide biosafety concerns. To determine transgene flow frequencies resulted from pollination is the first critical step for assessing such impacts, in addition to the determination of transgene expression and fitness in crop-wild hybrid descendants. Two methods are commonly used to estimate pollen-mediated gene flow (PMGF) frequencies: field experimenting and mathematical modeling. Field experiments can provide relatively accurate results but are time/resource consuming. Modeling offers an effective complement for PMGF experimental assessment. However, many published models describe PMGF by mathematical equations and are practically not easy to use. To increase the application of PMGF modeling for the estimation of transgene flow, we established a tool to calculate PMGF frequencies based on a quasi-mechanistic PMGF model for wind-pollination species. This tool includes a calculating program displayed by an easy-operating interface. PMGF frequencies of different plant species can be quickly calculated under different environmental conditions by including a number of biological and wind speed parameters that can be measured in the fields/laboratories or obtained from published data. The tool is freely available in the public domain (http://ecology.fudan.edu.cn/userfiles/cn/files/Tool_Manual.zip). Case studies including rice, wheat, and maize demonstrated similar results between the calculated frequencies based on this tool and those from published PMGF data. This PMGF calculating tool will provide useful information for assessing and monitoring social-economic and environmental impacts caused by transgene flow from GE crops. This tool can also be applied to determine the isolation distances between GE and non-GE crops in a coexistence agro-ecosystem, and to ensure the purity of certified seeds by setting proper isolation distances
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding. PMID:28409079
A global average model of atmospheric aerosols for radiative transfer calculations
NASA Technical Reports Server (NTRS)
Toon, O. B.; Pollack, J. B.
1976-01-01
A global average model is proposed for the size distribution, chemical composition, and optical thickness of stratospheric and tropospheric aerosols. This aerosol model is designed to specify the input parameters to global average radiative transfer calculations which assume the atmosphere is horizontally homogeneous. The model subdivides the atmosphere at multiples of 3 km, where the surface layer extends from the ground to 3 km, the upper troposphere from 3 to 12 km, and the stratosphere from 12 to 45 km. A list of assumptions made in construction of the model is presented and discussed along with major model uncertainties. The stratospheric aerosol is modeled as a liquid mixture of 75% H2SO4 and 25% H2O, while the tropospheric aerosol consists of 60% sulfate and 40% soil particles above 3 km and of 50% sulfate, 35% soil particles, and 15% sea salt below 3 km. Implications and consistency of the model are discussed.
Turner, D.R.; Pabalan, R.T.
1999-11-01
Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide corruption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.
Bonn potential and shell-model calculations for N=126 isotones
Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.; Kuo, T. T. S.
1999-12-01
We have performed shell-model calculations for the N=126 isotones {sup 210}Po, {sup 211}At, and {sup 212}Rn using a realistic effective interaction derived from the Bonn-A nucleon-nucleon potential by means of a G-matrix folded-diagram method. The calculated binding energies, energy spectra, and electromagnetic properties show remarkably good agreement with the experimental data. The results of this paper complement those of our previous study on neutron hole Pb isotopes, confirming that realistic effective interactions are now able to reproduce with quantitative accuracy the spectroscopic properties of complex nuclei. (c) 1999 The American Physical Society.
Radiation calculations on the base of atmospheric models from lidar sounding
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Samulenkov, Dmitry; Sapunov, Maxim; Vasilyev, Alexander; Kuznetsov, Anatoly; Frolkis, Victor
2017-02-01
The results of lidar sounding in the Resource Center "Observatory of Environmental Safety" of the St. Petersburg University, Research Park, have been obtained in the center of St. Petersburg. Observations are accomplished during 12 hours on 5 March 2015, from 11 am till 11 pm. Four time periods are considered. Results of AERONET observations and retrieval at 4 stations around St. Petersburg region are considered in addition. Optical models of the atmosphere in day and night time are constructed from the lidar and AERONET observations and used for radiation calculation. The radiative divergence, transmitted and reflected irradiance and heating rate are calculated.
Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn
Yakhelef, A.; Bouldjedri, A.
2012-06-27
New Shell Model calculations for even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn have been performed. The calculations have been carried out using the windows version of NuShell-MSU. The two body matrix elements TBMEs of the effective interaction between valence nucleons are obtained from the renormalized two body effective interaction based on G-matrix derived from the CD-bonn nucleon-nucleon potential. The single particle energies of the proton and neutron valence spaces orbitals are defined from the available spectra of lightest odd isotopes of Sb and Sn respectively.
Development of a New Shielding Model for JB-Line Dose Rate Calculations
Buckner, M.R.
2001-08-09
This report describes the shielding model development for the JB-Line Upgrade project. The product of this effort is a simple-to-use but accurate method of estimating the personnel dose expected for various operating conditions on the line. The current techniques for shielding calculations use transport codes such as ANISN which, while accurate for geometries which can be accurately approximated as one dimensional slabs, cylinders or spheres, fall short in calculating configurations in which two-or three-dimensional effects (e.g., streaming) play a role in the dose received by workers.
Algorithms and physical parameters involved in the calculation of model stellar atmospheres
NASA Astrophysics Data System (ADS)
Merlo, D. C.
This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We make efforts on the implementation of methods that modify the originally adopted temperature distribution in the atmosphere, in order to obtain constant energy flux throughout. We find limitations and we correct numerical instabilities. We integrate the transfer equation solving directly the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions of the light elements. Also, we discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.
A brief look at model-based dose calculation principles, practicalities, and promise.
Sloboda, Ron S; Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V
2017-02-01
Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by a MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources.
A brief look at model-based dose calculation principles, practicalities, and promise
Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V.
2017-01-01
Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by a MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources. PMID:28344608
Poston, J.W.
1989-01-01
This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations . The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966 about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn, one-, five-, ten-, and fifteen-year old, were derived from the adult phantom. The pediatric'' models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the other models nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing individual'' pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, the phantoms had limited use and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.
Poston, J.W.
1989-12-31
This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations . The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966 about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn, one-, five-, ten-, and fifteen-year old, were derived from the adult phantom. The ``pediatric`` models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the other models nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing ``individual`` pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, the phantoms had limited use and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.
Wang, Junmei; Hou, Tingjun
2012-05-25
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, no matter they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which their conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests. T was always set to 298.15 K through the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for postentropy calculations): the mean correlation coefficient squares (R²) was 0.56. As to the 20 complexes, the TS
Experience at Los Alamos with use of the optical model for applied nuclear data calculations
Young, P.G.
1994-10-01
While many nuclear models are important in calculations of nuclear data, the optical model usually provides the basic underpinning of analyses directed at data for applications. An overview is given here of experience in the Nuclear Theory and Applications Group at Los Alamos National Laboratory in the use of the optical model for calculations of nuclear cross section data for applied purposes. We consider the direct utilization of total, elastic, and reaction cross sections for neutrons, protons, deuterons, tritons, {sup 3}He and alpha particles in files of evaluated nuclear data covering the energy range of 0 to 200 MeV, as well as transmission coefficients for reaction theory calculations and neutron and proton wave functions direct-reaction and Feshbach-Kerman-Koonin analyses. Optical model codes such as SCAT and ECIS and the reaction theory codes COMNUC, GNASH FKK-GNASH, and DWUCK have primarily been used in our analyses. A summary of optical model parameterizations from past analyses at Los Alamos will be given, including detailed tabulations of the parameters for a selection of nuclei.
Accelerated complete-linearization method for calculating NLTE model stellar atmospheres
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1992-01-01
Two approaches to accelerating the method of complete linearization for calculating NLTE model stellar atmospheres are suggested. The first one, the so-called Kantorovich variant of the Newton-Raphson method, consists of keeping the Jacobi matrix of the system fixed, which allows us to calculate the costly matrix inversions only a few times and then keep them fixed during the subsequent computations. The second method is an application of the Ng acceleration. Both methods are extremely easy to implement with any model atmosphere code based on complete linearization. It is demonstrated that both methods, and especially their combination, yield a rapidly and globally convergent algorithm, which takes 2 to 5 times less computer time, depending on the model at hand and the required accuracy, than the ordinary complete linearization. Generally, the time gain is more significant for more complicated models. The methods were tested for a broad range of atmospheric parameters, and in all cases they exhibited similar behavior. Ng acceleration applied on the Kantorovich variant thus offers a significant improvement of the standard complete-linearization method, and may now be used for calculating relatively involved NLTE model stellar atmospheres.
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A row of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.
Tight-binding model for carbon nanotubes from ab initio calculations.
Correa, J D; da Silva, Antônio J R; Pacheco, M
2010-07-14
Here we present a parametrized tight-binding (TB) model to calculate the band structure of single-wall carbon nanotubes (SWNTs). On the basis of ab initio calculations we fit the band structure of nanotubes of different radii with results obtained with an orthogonal TB model to third neighbors, which includes the effects of orbital hybridization by means of a reduced set of parameters. The functional form for the dependence of these parameters on the radius of the tubes can be used to interpolate appropriate TB parameters for different SWNTs and to study the effects of curvature on their electronic properties. Additionally, we have shown that the model gives an appropriate description of the optical spectra of SWNTs, which can be useful for a proper assignation of SWNTs' specific chirality from optical absorption experiments.
[A 3D FEM model for calculation of electromagnetic fields in transmagnetic stimulation].
Seilwinder, J; Kammer, T; Andrä, W; Bellemann, M E
2002-01-01
We developed a realistic finite elements method (FEM) model of the brain for the calculation of electromagnetic fields in transcranial magnetic stimulation (TMS). A focal butterfly stimulation coil was X-rayed, parameterized, and modeled. The magnetic field components of the TMS coil were calculated and compared for validation to pointwise measurements of the magnetic fields with a Hall sensor. We found a mean deviation of 7.4% at an axial distance of 20 mm to the coil. A 3D brain model with the biological tissues of white and gray matter, bone, and cerebrospinal fluid was developed. At a current sweep of 1000 A in 120 microseconds, the maximum induced current density in gray matter was 177 mA/m2 and the strongest electric field gradient covered an area of 40 mm x 53 mm.
Model for Calculating Photosynthetic Photon Flux Densities in Forest Openings on Slopes.
NASA Astrophysics Data System (ADS)
Chen, Jing M.; Black, T. Andrew; Price, David T.; Carter, Reid E.
1993-10-01
A model has been developed to calculate the spatial distribution of the photosynthetic photon flux density (PPFD) in elliptical forest openings of given slopes and orientations. The PPFD is separated into direct and diffuse components. The direct component is calculated according to the opening and radiation geometries, and pathlength of the solar beam through the forest canopy. The diffuse component is obtained from the sky, tree, and landscape view factors. In this model, the distribution of foliage area with height and the effect of foliage clumping on both direct and diffuse radiation transmission are considered.The model has been verified using measurements for six quantum sensors (LI-COR Inc.) located at different positions in a small clear-cut (0.37 ha) in a 90-year-old western hemlock-Douglas fir forest.
Molecular Modeling for Calculation of Mechanical Properties of Epoxies with Moisture Ingress
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Frankland, Sarah J.; Hinkley, J. A.; Gates, T. S.
2009-01-01
Atomistic models of epoxy structures were built in order to assess the effect of crosslink degree, moisture content and temperature on the calculated properties of a typical representative generic epoxy. Each atomistic model had approximately 7000 atoms and was contained within a periodic boundary condition cell with edge lengths of about 4 nm. Four atomistic models were built with a range of crosslink degree and moisture content. Each of these structures was simulated at three temperatures: 300 K, 350 K, and 400 K. Elastic constants were calculated for these structures by monitoring the stress tensor as a function of applied strain deformations to the periodic boundary conditions. The mechanical properties showed reasonably consistent behavior with respect to these parameters. The moduli decreased with decreasing crosslink degree with increasing temperature. The moduli generally decreased with increasing moisture content, although this effect was not as consistent as that seen for temperature and crosslink degree.
[Variation characteristics and calculation model of evapotranspiration in latored soils on hills].
Guo, Qingrong; Zhang, Binggang; Zhong, Jihong; Tan, Jun; Luo, Bosheng; Huang, Xianglan
2003-04-01
The dynamic characteristics of evapotranspiration in latored soil on hills of subtropics regions in south China was analyzed. The results showed that evapotranspiration presented annual and seasonal fluctuations. The maximum monthly evapotranspiration was 10.80-15.41 times of the monthly minimum. The evapotranspiration in wet season (March to September) accounted for about 77% of annual total evapotranspiration, and that in dry season (October to February of next year) accounted for about 23%. Although the amount of annual rainfalls could balance annual total evapotranspiration, rainfalls were insufficient for evapotranspiration in the dry season, and soil water could be depleted by evapotranspiration. Based on the modified Penman equation, the calculation model of evapotranspiration in latored soil on hills of subtropics regions in south China was set up. By comparing modeling results with experimental data, it was proved that the calculation model was very reliable.
Calculation of velocity structure functions for vortex models of isotropic turbulence
NASA Astrophysics Data System (ADS)
Saffman, P. G.; Pullin, D. I.
1996-11-01
Velocity structure functions (up'-up)m are calculated for vortex models of isotropic turbulence. An integral operator is introduced which defines an isotropic two-point field from a volume-orientation average for a specific solution of the Navier-Stokes equations. Applying this to positive integer powers of the longitudinal velocity difference then gives explicit formulas for (up'-up)m as a function of order m and of the scalar separation r. Special forms of the operator are then obtained for rectilinear stretched vortex models of the Townsend-Lundgren type. Numerical results are given for the Burgers vortex and also for a realization of the Lundgren-strained spiral vortex, and comparison with experimental measurement is made. In an Appendix, we calculate values of the velocity-derivative moments for the Townsend-Burgers model.
Lengers, Bernd; Schiefler, Inga; Büscher, Wolfgang
2013-12-01
The overall measurement of farm level greenhouse gas (GHG) emissions in dairy production is not feasible, from either an engineering or administrative point of view. Instead, computational model systems are used to generate emission inventories, demanding a validation by measurement data. This paper tests the GHG calculation of the dairy farm-level optimization model DAIRYDYN, including methane (CH₄) from enteric fermentation and managed manure. The model involves four emission calculation procedures (indicators), differing in the aggregation level of relevant input variables. The corresponding emission factors used by the indicators range from default per cow (activity level) emissions up to emission factors based on feed intake, manure amount, and milk production intensity. For validation of the CH₄ accounting of the model, 1-year CH₄ measurements of an experimental free-stall dairy farm in Germany are compared to model simulation results. An advantage of this interdisciplinary study is given by the correspondence of the model parameterization and simulation horizon with the experimental farm's characteristics and measurement period. The results clarify that modeled emission inventories (2,898, 4,637, 4,247, and 3,600 kg CO₂-eq. cow(-1) year(-1)) lead to more or less good approximations of online measurements (average 3,845 kg CO₂-eq. cow(-1) year(-1) (±275 owing to manure management)) depending on the indicator utilized. The more farm-specific characteristics are used by the GHG indicator; the lower is the bias of the modeled emissions. Results underline that an accurate emission calculation procedure should capture differences in energy intake, owing to milk production intensity as well as manure storage time. Despite the differences between indicator estimates, the deviation of modeled GHGs using detailed indicators in DAIRYDYN from on-farm measurements is relatively low (between -6.4% and 10.5%), compared with findings from the literature.
Construction of new skin models and calculation of skin dose coefficients for electron exposures
NASA Astrophysics Data System (ADS)
Yeom, Yeon Soo; Kim, Chan Hyeong; Nguyen, Thang Tat; Choi, Chansoo; Han, Min Cheol; Jeong, Jong Hwi
2016-08-01
The voxel-type reference phantoms of the International Commission on Radiological Protection (ICRP), due to their limited voxel resolutions, cannot represent the 50- μm-thick radiosensitive target layer of the skin necessary for skin dose calculations. Alternatively, in ICRP Publication 116, the dose coefficients (DCs) for the skin were calculated approximately, averaging absorbed dose over the entire skin depth of the ICRP phantoms. This approximation is valid for highly-penetrating radiations such as photons and neutrons, but not for weakly penetrating radiations like electrons due to the high gradient in the dose distribution in the skin. To address the limitation, the present study introduces skin polygon-mesh (PM) models, which have been produced by converting the skin models of the ICRP voxel phantoms to a high-quality PM format and adding a 50- μm-thick radiosensitive target layer into the skin models. Then, the constructed skin PM models were implemented in the Geant4 Monte Carlo code to calculate the skin DCs for external exposures of electrons. The calculated values were then compared with the skin DCs of the ICRP Publication 116. The results of the present study show that for high-energy electrons (≥ 1 MeV), the ICRP-116 skin DCs are, indeed, in good agreement with the skin DCs calculated in the present study. For low-energy electrons (< 1 MeV), however, significant discrepancies were observed, and the ICRP-116 skin DCs underestimated the skin dose as much as 15 times for some energies. Besides, regardless of the small tissue weighting factor of the skin ( w T = 0.01), the discrepancies in the skin dose were found to result in significant discrepancies in the effective dose, demonstarting that the effective DCs in ICRP-116 are not reliable for external exposure to electrons.
Direct comparison between two {gamma}-alumina structural models by DFT calculations
Ferreira, Ary R.; Martins, Mateus J.F.; Konstantinova, Elena; Capaz, Rodrigo B.; Souza, Wladmir F.; Chiaro, Sandra Shirley X.; Leitao, Alexandre A.
2011-05-15
We selected two important {gamma}-alumina models proposed in literature, a spinel-like one and a nonspinel one, to perform a theoretical comparison. Using ab initio calculations, the models were compared regarding their thermodynamic stability, lattice vibrational modes, and bulk electronic properties. The spinel-like model is thermodynamically more stable by 4.55 kcal/mol per formula unit on average from 0 to 1000 K. The main difference between the models is in their simulated infrared spectra, with the spinel-like model showing the best agreement with experimental data. Analysis of the electronic density of states and charge transfer between atoms reveal the similarity on the electronic structure of the two models, despite some minor differences. -- Graphical abstract: Two {gamma}-Alumina bulk models selected in this work for a comparison focusing in the electronic structure and thermodynamics of the systems. (a) The nonspinel model and (b) the spinel-like model. Display Omitted Highlights: {yields} There is still a debate about the {gamma}-Alumina structure in the literature. {yields} Models of surfaces are constructed from different bulk structural models. {yields} Two models commonly used in the literate were selected and compared. {yields} One model reproduce better the experimental data. {yields} Both presented a similar electronic structure.
The impact of nuclear mass models on r-process nucleosynthesis network calculations
NASA Astrophysics Data System (ADS)
Vaughan, Kelly
2002-10-01
An insight into understanding various nucleosynthesis processes is via modelling of the process with network calculations. My project focus is r-process network calculations where the r-process is nucleosynthesis via rapid neutron capture thought to take place in high entropy supernova bubbles. One of the main uncertainties of the simulations is the Nuclear Physics input. My project investigates the role that nuclear masses play in the resulting abundances. The code tecode, involves rapid (n,γ) capture reactions in competition with photodisintegration and β decay onto seed nuclei. In order to fully analyze the effects of nuclear mass models on the relative isotopic abundances, calculations were done from the network code, keeping the initial environmental parameters constant throughout. The supernova model investigated by Qian et al (1996) in which two r-processes, of high and low frequency with seed nucleus ^90Se and of fixed luminosity (fracL_ν_e(0)r_7(0)^2 ˜= 8.77), contribute to the nucleosynthesis of the heavier elements. These two r-processes, however, do not contribute equally to the total abundance observed. The total isotopic abundance produced from both events was therefore calculated using equation refabund. Y(H+L) = fracY(H)+fY(L)f+1 <~belabund where Y(H) denotes the relative isotopic abundance produced in the high frequency event, Y(L) corresponds to the low freqeuncy event and f is the ratio of high event matter to low event matter produced. Having established reliable, fixed parameters, the network code was run using data files containing parameters such as the mass excess, neutron separation energy, β decay rates and neutron capture rates based around three different nuclear mass models. The mass models tested are the HFBCS model (Hartree-Fock BCS) derived from first principles, the ETFSI-Q model (Extended Thomas-Fermi with Strutinsky Integral including shell Quenching) known for its particular successes in the replication of Solar System
Lattice model for the calculation of the angle of repose from microscopic grain properties
NASA Astrophysics Data System (ADS)
Alonso, J. J.; Hovi, J.-P.; Herrmann, H. J.
1998-07-01
We study a simple lattice model for granular heap, which aims at calculating the macroscopic angle of repose from the microscopic grain properties. The model includes the effects of dissipation of the energy in the particle-particle collisions, and sticking of the particles to the pile. We obtain that, due to the discretization of the space, the angle of repose of the pile behaves as a complete devil's staircase as a function of the model parameters. We present numerical and analytical considerations which characterize the properties of this staircase.
Surface water management: a user's guide to calculate a water balance using the CREAMS model
Lane, L.J.
1984-11-01
The hydrologic component of the CREAMS model is described and discussed in terms of calculating a surface water balance for shallow land burial systems used for waste disposal. Parameter estimates and estimation procedures are presented in detail in the form of a user's guide. Use of the model is illustrated with three examples based on analysis of data from Los Alamos, New Mexico and Rock Valley, Nevada. Use of the model in design of trench caps for shallow land burial systems is illustrated with the example applications at Los Alamos.
Li, Xiaofeng; Hylton, Nicholas P; Giannini, Vincenzo; Lee, Kan-Hua; Ekins-Daukes, Ned J; Maier, Stefan A
2011-07-04
We report three-dimensional modelling of plasmonic solar cells in which electromagnetic simulation is directly linked to carrier transport calculations. To date, descriptions of plasmonic solar cells have only involved electromagnetic modelling without realistic assumptions about carrier transport, and we found that this leads to considerable discrepancies in behaviour particularly for devices based on materials with low carrier mobility. Enhanced light absorption and improved electronic response arising from plasmonic nanoparticle arrays on the solar cell surface are observed, in good agreement with previous experiments. The complete three-dimensional modelling provides a means to design plasmonic solar cells accurately with a thorough understanding of the plasmonic interaction with a photovoltaic device.
Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.
Zhao, Fuman; Rogers, William J; Mannan, M Sam
2010-02-15
Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the hydrocarbon mixture lower flammability limit is quantitatively correlated to its final flame temperature at non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicated that CFT modeling can represent experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's Law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for the Le Chatelier's Law.
Thermal conductance of one-dimensional materials calculated with typical lattice models
NASA Astrophysics Data System (ADS)
Zhang, Chunyi; Kang, Wei; Wang, Jianxiang
2016-11-01
We show through calculations on typical lattice models that thermal conductance σ can well describe the near-equilibrium thermal transport property of one-dimensional materials of finite length, which presents a situation often met in the application of nanoscale devices. The σ generally contains contributions from the material itself and those from the thermal reservoirs. The intrinsic σ of the material, i.e., the one with the fewest external influences, can be efficiently calculated with the help of the "blackbody"-like nonreflective thermal reservoir, either through the nonequilibrium method or through the Green-Kubo-type formula. σ thus calculated would be helpful to guide the design of thermal management and heat control in nanoscale devices.
Calculations of isothermal elastic constants in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Pisutha-Arnond, N.; Chan, V. W. L.; Elder, K. R.; Thornton, K.
2013-01-01
The phase-field crystal (PFC) method is an emerging coarse-grained atomistic model that can be used to predict material properties. In this work, we describe procedures for calculating isothermal elastic constants using the PFC method. We find that the conventional procedures used in the PFC method for calculating the elastic constants are inconsistent with those defined from a theory of thermoelasticity of stressed materials. Therefore we present an alternative procedure for calculating the elastic constants that are consistent with the definitions from the thermoelasticity theory, and show that the two procedures result in different predictions. Furthermore, we employ a thermodynamic formulation of stressed solids to quantify the differences between the elastic constants obtained from the two procedures in terms of thermodynamic quantities such as the pressure evaluated at the undeformed state.
The Lake Nyos disaster: model calculations for the flow of carbon dioxide
NASA Astrophysics Data System (ADS)
Faivre Pierret, R. X.; Berne, P.; Roussel, C.; Le Guern, F.
1992-06-01
In order to explain the origin of the Lake Nyos Disaster, a three-dimensional model for carbon dioxide flow has been calculated. The boundary conditions were based on the field observations. Calculations show that CO 2 tends to flow down slope by gravity and velocity tends to increase on steep slopes. The results of calculation are in good agreements with the effects observed in the field. The initial velocity of the gas jet should have been 28 m/s, the volume of pure CO 2 0.6 km 3 and the gas release should occur in a very short time (a few minutes). The gas was released as a jet half way from the center on the south-south west side of the lake.
A new method for modeling rough membrane surface and calculation of interfacial interactions.
Zhao, Leihong; Zhang, Meijia; He, Yiming; Chen, Jianrong; Hong, Huachang; Liao, Bao-Qiang; Lin, Hongjun
2016-01-01
Membrane fouling control necessitates the establishment of an effective method to assess interfacial interactions between foulants and rough surface membrane. This study proposed a new method which includes a rigorous mathematical equation for modeling membrane surface morphology, and combination of surface element integration (SEI) method and the composite Simpson's approach for assessment of interfacial interactions. The new method provides a complete solution to quantitatively calculate interfacial interactions between foulants and rough surface membrane. Application of this method in a membrane bioreactor (MBR) showed that, high calculation accuracy could be achieved by setting high segment number, and moreover, the strength of three energy components and energy barrier was remarkably impaired by the existence of roughness on the membrane surface, indicating that membrane surface morphology exerted profound effects on membrane fouling in the MBR. Good agreement between calculation prediction and fouling phenomena was found, suggesting the feasibility of this method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Calculation of generalized Lorenz-Mie theory based on the localized beam models
NASA Astrophysics Data System (ADS)
Jia, Xiaowei; Shen, Jianqi; Yu, Haitao
2017-07-01
It has been proved that localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of relevant physical quantities is a challenge for its practical applications due to the limit of computer resources. The study presents an improved algorithm of the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them with pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust.
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
NASA Astrophysics Data System (ADS)
Roelofs, Geert-Jan; Lelieveld, Jos
1995-10-01
We present results of global tropospheric chemistry simulations with the coupled chemistry/atmospheric general circulation model ECHAM. Ultimately, the model will be used to study climate changes induced by anthropogenic influences on the chemistry of the atmosphere; meteorological parameters that are important for the chemistry, such as temperature, humidity, air motions, cloud and rain characteristics, and mixing processes are calculated on-line. The chemical part of the model describes background tropospheric CH4-CO-NOx-HOx photochemistry. Emissions of NO and CO, surface concentrations of CH4, and stratospheric concentrations of O3 and NOy are prescribed as boundary conditions. Calculations of the tropospheric O3 budget indicate that seasonal variabilities of the photochemical production and of injection from the stratosphere are represented realistically, although some aspects of the model still need improvement. Comparisons of calculated O3 surface concentrations and O3 profiles with available measurements show that the model reproduces O3 distributions in remote tropical and midlatitudinal sites. Also, the model matches typical profiles connected with deep convection in the Intertropical Convergence Zone (ITCZ). However, the model tends to underestimate O3 concentrations at the poles and in relatively polluted regions. These underestimates are caused by the poor representation of tropopause foldings in midlatitudes, which form a significant source of tropospheric O3 from the stratosphere, too weak transport to the poles, and the neglect of higher hydrocarbon chemistry. Also, mixing of polluted continental boundary layer air into the free troposphere may be underestimated. We discuss how these model deficiencies will be improved in the future.
A general model for stray dose calculation of static and intensity-modulated photon radiation
Hauri, Pascal Schneider, Uwe; Hälg, Roger A.; Besserer, Jürgen
2016-04-15
Purpose: There is an increasing number of cancer survivors who are at risk of developing late effects caused by ionizing radiation such as induction of second tumors. Hence, the determination of out-of-field dose for a particular treatment plan in the patient’s anatomy is of great importance. The purpose of this study was to analytically model the stray dose according to its three major components. Methods: For patient scatter, a mechanistic model was developed. For collimator scatter and head leakage, an empirical approach was used. The models utilize a nominal beam energy of 6 MeV to describe two linear accelerator types of a single vendor. The parameters of the models were adjusted using ionization chamber measurements registering total absorbed dose in simple geometries. Whole-body dose measurements using thermoluminescent dosimeters in an anthropomorphic phantom for static and intensity-modulated treatment plans were compared to the 3D out-of-field dose distributions calculated by a combined model. Results: The absolute mean difference between the whole-body predicted and the measured out-of-field dose of four different plans was 11% with a maximum difference below 44%. Computation time of 36 000 dose points for one field was around 30 s. By combining the model-calculated stray dose with the treatment planning system dose, the whole-body dose distribution can be viewed in the treatment planning system. Conclusions: The results suggest that the model is accurate, fast and can be used for a wide range of treatment modalities to calculate the whole-body dose distribution for clinical analysis. For similar energy spectra, the mechanistic patient scatter model can be used independently of treatment machine or beam orientation.
Study on the calculation models of bus delay at bays using queueing theory and Markov chain.
Sun, Feng; Sun, Li; Sun, Shao-Wei; Wang, Dian-Hai
2015-01-01
Traffic congestion at bus bays has decreased the service efficiency of public transit seriously in China, so it is crucial to systematically study its theory and methods. However, the existing studies lack theoretical model on computing efficiency. Therefore, the calculation models of bus delay at bays are studied. Firstly, the process that buses are delayed at bays is analyzed, and it was found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formed, and the equilibrium distribution functions are proposed by applying the embedded Markov chain to the traditional model of queuing theory in the steady state; then the calculation models of entering delay are derived at bays. Thirdly, the exiting delay is studied by using the queueing theory and the gap acceptance theory. Finally, the proposed models are validated using field-measured data, and then the influencing factors are discussed. With these models the delay is easily assessed knowing the characteristics of the dwell time distribution and traffic volume at the curb lane in different locations and different periods. It can provide basis for the efficiency evaluation of bus bays.
Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain
Sun, Li; Sun, Shao-wei; Wang, Dian-hai
2015-01-01
Traffic congestion at bus bays has decreased the service efficiency of public transit seriously in China, so it is crucial to systematically study its theory and methods. However, the existing studies lack theoretical model on computing efficiency. Therefore, the calculation models of bus delay at bays are studied. Firstly, the process that buses are delayed at bays is analyzed, and it was found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formed, and the equilibrium distribution functions are proposed by applying the embedded Markov chain to the traditional model of queuing theory in the steady state; then the calculation models of entering delay are derived at bays. Thirdly, the exiting delay is studied by using the queueing theory and the gap acceptance theory. Finally, the proposed models are validated using field-measured data, and then the influencing factors are discussed. With these models the delay is easily assessed knowing the characteristics of the dwell time distribution and traffic volume at the curb lane in different locations and different periods. It can provide basis for the efficiency evaluation of bus bays. PMID:25759720
The best model for the calculation of profile losses in the axial turbine
NASA Astrophysics Data System (ADS)
Baturin, O. V.; Popov, G. M.; Kolmakova, D. A.; Novikova, Yu D.
2017-01-01
The paper proposes a method for evaluating the reliability of models for estimation of the energy losses in the blade rows of axial turbines, based on the statistical analysis of the deviation of the experimental data from the calculated. It was shown that these deviations are subjected to the normal distribution law and can be described by mathematical expectations μΔξ and standard deviation σΔξ. The values of profile losses were calculated by five well-known models for 170 different axial turbines cascades, representing the diversity of turbines used in aircraft GTE. The findings were compared with experimental data. Compared results were subjected to statistical analysis. It was found that the best model to describe the profile losses in axial turbines is a model that has been developed in Central Institute of Aviation Motors (Russia). With a probability of 95%, it allows the calculation of profile losses deviating from the actual values of losses by -8±84%.
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The tracking and building are infinite and parallel to each other and forces applied are statistically stationary in space so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
Seth, Ajay; Delp, Scott L.
2015-01-01
Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is “muscle moment arm,” a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms. PMID:25905111
Sherman, Michael A; Seth, Ajay; Delp, Scott L
2013-08-01
Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is "muscle moment arm," a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms.
Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A
2007-01-01
Space activities in earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types of radiations and the quantity of radiation that the astronauts are exposed to is of critical importance in planning space missions. In this paper, calculations of the response of TEPC to protons and carbon ions were reported. The calculations have been carried out using Monte Carlo track structure simulation codes for the walled and the wall-less TEPC counters. The model simulates nonhomogenous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies 0.3 MeV-1 GeV protons are presented and compared with the experimental data. The calculation of quality factors (QF) were made using individual track histories. Additionally, calculations of absolute frequencies of energy depositions in cylindrical targets, 100 nm height by 100 nm diameter, when randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, is presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder.
Many-pole model of inelastic losses applied to calculations of XANES
NASA Astrophysics Data System (ADS)
Kas, J. J.; Vinson, J.; Trcera, N.; Cabaret, D.; Shirley, E. L.; Rehr, J. J.
2009-11-01
Conventional Kohn-Sham band-structure methods for calculating deep-core x-ray spectra typically neglect photoelectron self-energy effects, which give rise to an energy-dependent shift and broadening of the spectra. Here an a posteriori procedure is introduced to correct for these effects. The method is based on ab initio calculations of the GW self-energy using a many-pole model and a calculation of the dielectric function in the long wavelength limit using either the FEFF8 real-space Green's function code, or the AI2NBSE interface between the National Institute of Standards and Technology (NIST) Bethe-Salpeter equation solver (NBSE) and the ABINIT pseudopotential code. As an example the method is applied to core level x-ray spectra of LiF and MgAl2O4 calculated using (respectively) OCEAN, an extension of the AI2NBSE code for core level excitations, and the PARATEC pseudopotential code with the core-hole treated using a super-cell. The method satisfactorily explains the discrepancy between experiment and calculations.
NASA Astrophysics Data System (ADS)
Brasseur, G. P.; Hauglustaine, D. A.; Walters, S.
1996-06-01
A global three-dimensional chemical transport model, called MOZART (Model of OZone And Related species in the Troposphere), is used to compare calculated abundances of chemical species and their seasonal evolution in the remote Pacific troposphere near Hawaii with values observed during the Mauna Loa Observatory Photochemistry Experiments (MLOPEX 1 and 2). MOZART is a fully diurnal model which calculates the time evolution of about 30 chemical species from the surface to the upper stratosphere. It accounts for surface emissions of source gases, wet and dry depositions, photochemical transformations and transport processes. The dynamical variables are provided by the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM2) at T42 resolution (2.8° × 2.8°) and 18 levels in the vertical. Simulated abundances of 222Rn reveal an underestimate of the transport of continental emissions to the remote Pacific troposphere, more particularly during winter and summer. Calculated concentrations of chemical species are generally in fair agreement with observations. However, the abundances of soluble species are overestimated, leading, for example, to concentrations of nitric acid (HNO3) and hydrogen peroxide (H2O2) which are overpredicted by a factor of 3-8, depending on the season. This feature is attributed to insufficient washout by clouds and precipitation in the model. MOZART succesfully reproduces the development of high-NOx episodes at Mauna Loa Observatory (MLO) associated with anticyclonic conditions to the north of Hawaii and breakdown of the polar jet which tends to deflect to the central Pacific the flow of NOx transported from eastern Asia (China, Japan). During high NOx episodes, the calculated NOx mixing ratio in the vicinity of the MLO increases by about a factor of 3 over its background level (reaching 90-100 pptv) within 3-5 days.
NASA Astrophysics Data System (ADS)
Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael P.
2010-11-01
We present a computationally efficient adaptive method for calculating the time evolution of the concentrations of chemical species in global 3-D models of atmospheric chemistry. Our strategy consists of partitioning the computational domain into fast and slow regions for each chemical species at every time step. In each grid box, we group the fast species and solve for their concentration in a coupled fashion. Concentrations of the slow species are calculated using a simple semi-implicit formula. Separation of species between fast and slow is done on the fly based on their local production and loss rates. This allows for example to exclude short-lived volatile organic compounds (VOCs) and their oxidation products from chemical calculations in the remote troposphere where their concentrations are negligible, letting the simulation determine the exclusion domain and allowing species to drop out individually from the coupled chemical calculation as their production/loss rates decline. We applied our method to a 1-year simulation of global tropospheric ozone-NO x-VOC-aerosol chemistry using the GEOS-Chem model. Results show a 50% improvement in computational performance for the chemical solver, with no significant added error.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
Yao, W; Farr, J
2015-06-15
Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: Random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of Gaussian angular distribution requires an extremely large computation and memory. Thus, our RW model adopts spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides those variables such as MCS, stopping power, energy spectrum after energy absorption etc., which have been extensively discussed in literature, the following variables were found to be critical in our RW model: (1) inverse squared law that can significantly reduce the computation burden and memory, (2) non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded by a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations
NASA Astrophysics Data System (ADS)
Elmaghraby, Elsayed K.
2009-09-01
The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool to provide predictions of energy-differential cross sections. The probability of emission was calculated statistically using bases of hybrid model and exciton model. However, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one nucleon emission. Program summaryProgram title: PHASE-OTI Catalogue identifier: AEDN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5858 No. of bytes in distributed program, including test data, etc.: 149 405 Distribution format: tar.gz Programming language: Fortran 77 Computer: Pentium 4 and Centrino Duo Operating system: MS Windows RAM: 128 MB Classification: 17.12 Nature of problem: Calculation of the differential cross section for nucleon induced nuclear reaction in the framework of pre-equilibrium emission model. Solution method: Single neutron emission was treated by assuming occurrence of the reaction in successive steps. Each step is called phase because of the phase transition nature of the theory. The probability of emission was calculated statistically using bases of hybrid model [1] and exciton model [2]. However, more precise depletion factor was used in the calculations. Exciton configuration used in the code is that described in earlier work [3]. Restrictions: The program is restricted to single nucleon emission and nucleon
MPS solidification model. Analysis and calculation of macrosegregation in a casting ingot
NASA Technical Reports Server (NTRS)
Poirier, D. R.; Maples, A. L.
1985-01-01
Work performed on several existing solidification models for which computer codes and documentation were developed is presented. The models describe the solidification of alloys in which there is a time varying zone of coexisting solid and liquid phases; i.e., the S/L zone. The primary purpose of the models is to calculate macrosegregation in a casting or ingot which results from flow of interdendritic liquid in this S/L zone during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, is modeled as flow through a porous medium. In Model 1, the steady state model, the heat flow characteristics are those of steady state solidification; i.e., the S/L zone is of constant width and it moves at a constant velocity relative to the mold. In Model 2, the unsteady state model, the width and rate of movement of the S/L zone are allowed to vary with time as it moves through the ingot. Each of these models exists in two versions. Models 1 and 2 are applicable to binary alloys; models 1M and 2M are applicable to multicomponent alloys.
Necessity of using heterogeneous ellipsoidal Earth model with terrain to calculate co-seismic effect
NASA Astrophysics Data System (ADS)
Cheng, Huihong; Zhang, Bei; Zhang, Huai; Huang, Luyuan; Qu, Wulin; Shi, Yaolin
2016-04-01
Co-seismic deformation and stress changes, which reflect the elasticity of the earth, are very important in the earthquake dynamics, and also to other issues, such as the evaluation of the seismic risk, fracture process and triggering of earthquake. Lots of scholars have researched the dislocation theory and co-seismic deformation and obtained the half-space homogeneous model, half-space stratified model, spherical stratified model, and so on. Especially, models of Okada (1992) and Wang (2003, 2006) are widely applied in the research of calculating co-seismic and post-seismic effects. However, since both semi-infinite space model and layered model do not take the role of the earth curvature or heterogeneity or topography into consideration, there are large errors in calculating the co-seismic displacement of a great earthquake in its impacted area. Meanwhile, the computational methods of calculating the co-seismic strain and stress are different between spherical model and plane model. Here, we adopted the finite element method which could well deal with the complex characteristics (such as anisotropy, discontinuities) of rock and different conditions. We use the mash adaptive technique to automatically encrypt the mesh at the fault and adopt the equivalent volume force replace the dislocation source, which can avoid the difficulty in handling discontinuity surface with conventional (Zhang et al., 2015). We constructed an earth model that included earth's layered structure and curvature, the upper boundary was set as a free surface and the core-mantle boundary was set under buoyancy forces. Firstly, based on the precision requirement, we take a testing model - - a strike-slip fault (the length of fault is 500km and the width is 50km, and the slippage is 10m) for example. Because of the curvature of the Earth, some errors certainly occur in plane coordinates just as previous studies (Dong et al., 2014; Sun et al., 2012). However, we also found that: 1) the co
Calculation of Forming Limits for Sheet Metal using an Enhanced Continuous Damage Fracture Model
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc-Trung; Kim, Dae-Young; Kim, Heon Young
2011-08-01
An enhanced continuous damage fracture model was introduced in this paper to calculate forming limits of sheet metal. The fracture model is a combination of a fracture criterion and a continuum damage constitutive law. A modified McClintock void growth fracture criterion was incorporated with a coupled damage-plasticity Gurson-type constitutive law. Also, by introducing a Lode angle dependent parameter to define the loading asymmetry condition, the shear effect was phenomenologically taken into account. The proposed fracture model was implemented using user-subroutines in commercial finite element software. The model was calibrated and correlated by the uniaxial tension, shear and notched specimens tests. Application of the fracture model for the LDH tests was discussed and the simulation results were compared with the experimental data.
Improved analytical flux surface representation and calculation models for poloidal asymmetries
NASA Astrophysics Data System (ADS)
Collart, T. G.; Stacey, W. M.
2016-05-01
An orthogonalized flux-surface aligned curvilinear coordinate system has been developed from an up-down asymmetric variation of the "Miller" flux-surface equilibrium model. It is found that the new orthogonalized "asymmetric Miller" model representation of equilibrium flux surfaces provides a more accurate match than various other representations of DIII-D [J. L. Luxon, Nucl. Fusion 42, 614-633 (2002)] discharges to flux surfaces calculated using the DIII-D Equilibrium Fitting tokamak equilibrium reconstruction code. The continuity and momentum balance equations were used to develop a system of equations relating asymmetries in plasma velocities, densities, and electrostatic potential in this curvilinear system, and detailed calculations of poloidal asymmetries were performed for a DIII-D discharge.
NASA Astrophysics Data System (ADS)
Jenkins, B.; Bailey, G. J.; Abdu, M. A.; Batista, I. S.; Balan, N.
1997-06-01
Calculations using the Sheffield University plasmasphere ionosphere model have shown that under certain conditions an additional layer can form in the low latitude topside ionosphere. This layer (the F3 layer) has subsequently been observed in ionograms recorded at Fortaleza in Brazil. It has not been observed in ionograms recorded at the neighbouring station São Luis. Model calculations have shown that the F3 layer is most likely to form in summer at Fortaleza due to a combination of the neutral wind and the E×B drift acting to raise the plasma. At the location of São Luis, almost on the geomagnetic equator, the neutral wind has a smaller vertical component so the F3 layer does not form.
NASA Astrophysics Data System (ADS)
Pommé, S.
2009-06-01
An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β -- and β +-particles.
REPLY: Reply to comment on 'Model calculation of the scanned field enhancement factor of CNTs'
NASA Astrophysics Data System (ADS)
Ahmad, Amir; Tripathi, V. K.
2010-09-01
In the paper (Ahmad and Tripathi 2006 Nanotechnology 17 3798), we derived an expression to compute the field enhancement factor of CNTs under any positional distribution of CNTs by using the model of a floating sphere between parallel anode and cathode plates. Using this expression we can compute the field enhancement factor of a CNT in a cluster (non-uniformly distributed CNTs). This expression was used to compute the field enhancement factor of a CNT in an array (uniformly distributed CNTs). We used an approximation to calculate the field enhancement factor. Hence, our expressions are correct in that assumption only. Zhbanov et al (2010 Nanotechnology 21 358001) suggest a correction that can calculate the field enhancement factor without using the approximation. Hence, this correction can improve the applicability of this model.
A theoretical model for calculation of molecular stopping power. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Xu, Y. J.
1984-01-01
A modified local plasma model is established. The Gordon-Kim's molecular charged density model is employed to obtain a formula to evaluate the stopping power of many useful molecular systems. The stopping power of H2 and He gas was calculated for incident proton energy ranging from 100 keV to 2.5 MeV. The stopping power of O2, N2, and water vapor was also calculated for incident proton energy ranging from 40 keV. to 2.5 MeV. Good agreement with experimental data was obtained. A discussion of molecular effects leading to department from Bragg's rule is presented. The equipartition rule and the effect of nuclear momentum recoiling in stopping power are also discussed.
Comparison of calculated and measured pressures on straight and swept-tip model rotor blades
NASA Technical Reports Server (NTRS)
Tauber, M. E.; Chang, I. C.; Caughey, D. A.; Phillipe, J. J.
1983-01-01
Using the quasi-steady, full potential code, ROT22, pressures were calculated on straight and swept tip model helicopter rotor blades at advance ratios of 0.40 and 0.45, and into the transonic tip speed range. The calculated pressures were compared with values measured in the tip regions of the model blades. Good agreement was found over a wide range of azimuth angles when the shocks on the blade were not too strong. However, strong shocks persisted longer than predicted by ROT22 when the blade was in the second quadrant. Since the unsteady flow effects present at high advance ratios primarily affect shock waves, the underprediction of shock strengths is attributed to the simplifying, quasi-steady, assumption made in ROT22.
Large-scale shell-model calculations of nuclei around mass 210
NASA Astrophysics Data System (ADS)
Teruya, E.; Higashiyama, K.; Yoshinaga, N.
2016-06-01
Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron deficit region (Z ≥82 ,N ≤126 ) assuming 208Pb as a doubly magic core. All the six single-particle orbitals between the magic numbers 82 and 126, namely, 0 h9 /2,1 f7 /2,0 i13 /2,2 p3 /2,1 f5 /2 , and 2 p1 /2 , are considered. For a phenomenological effective two-body interaction, one set of the monopole pairing and quadrupole-quadrupole interactions including the multipole-pairing interactions is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.
GEOM: A new tool for molecular modelling based on distance geometry calculations with NMR data
NASA Astrophysics Data System (ADS)
Sanner, Michel; Widmer, Armin; Senn, Hans; Braun, Werner
1989-09-01
GEOM is a new graphics tool which allows the use of distance geometry to compute linear and cyclic structures typically arising in drug design situations. Modified amino acids or monomeric organic entities can be easily constructed in an interactive way and deposited in the library of the distance geometry program together with geometric information required for structure calculation in dihedral angle space. In addition, GEOM is able to produce all files needed to calculate a structure based on NMR data (NOE and J-coupling constraints) and it permits the graphic analysis and comparison of computed structures. The application of GEOM is demonstrated in three examples: modelling of cyclosporin A structures with and without a limited set of H-bond constraints and modelling of a cyclic hexapeptide with a full NMR data set.
Improved analytical flux surface representation and calculation models for poloidal asymmetries
Collart, T. G. Stacey, W. M.
2016-05-15
An orthogonalized flux-surface aligned curvilinear coordinate system has been developed from an up-down asymmetric variation of the “Miller” flux-surface equilibrium model. It is found that the new orthogonalized “asymmetric Miller” model representation of equilibrium flux surfaces provides a more accurate match than various other representations of DIII-D [J. L. Luxon, Nucl. Fusion 42, 614–633 (2002)] discharges to flux surfaces calculated using the DIII-D Equilibrium Fitting tokamak equilibrium reconstruction code. The continuity and momentum balance equations were used to develop a system of equations relating asymmetries in plasma velocities, densities, and electrostatic potential in this curvilinear system, and detailed calculations of poloidal asymmetries were performed for a DIII-D discharge.
NASA Astrophysics Data System (ADS)
Avrett, E. H.
1986-02-01
Calculated results based on two chromospheric flare models F1 and F2 of Machado, et al., (1980) are presented. Two additional models are included: F1*, which has enhanced temperatures relative to the weak-flare model F1 in the upper photosphere and low chromosphere, and F3 which has enhanced temperatures relative to the strong flare model F2 in the upper chromosphere. Each model is specified by means of a given variation of the temperature as a function of column mass. The corresponding variation of particle density and the geometrical height scale are determined by assuming hydrostatic equilibrium. The coupled equations of statistical equilibrium is solved as is radiative transfer for H, H-, He I-II, C I-IV, Si I-II, Mg I-II, Fe, Al, O I-II, Na, and Ca II. The overall absorption and emission of radiation by lines throughout the spectrum is determined by means of a reduced set of opacities sampled from a compilation of over 10 to the 7th power individual lines. That the white flight flare continuum may arise by extreme chromospheric overheating as well as by an enhancement of the minimum temperature region is also shown. The radiative cooling rate calculations for our brightest flare model suggest that chromospheric overheating provides enhanced radiation that could cause significant heating deep in the flare atmosphere.
A numerical model for calculating vibration from a railway tunnel embedded in a full-space
NASA Astrophysics Data System (ADS)
Hussein, M. F. M.; Hunt, H. E. M.
2007-08-01
Vibration generated by underground railways transmits to nearby buildings causing annoyance to inhabitants and malfunctioning to sensitive equipment. Vibration can be isolated through countermeasures by reducing the stiffness of railpads, using floating-slab tracks and/or supporting buildings on springs. Modelling of vibration from underground railways has recently gained more importance on account of the need to evaluate accurately the performance of vibration countermeasures before these are implemented. This paper develops an existing model, reported by Forrest and Hunt, for calculating vibration from underground railways. The model, known as the Pipe-in-Pipe model, has been developed in this paper to account for anti-symmetrical inputs and therefore to model tangential forces at the tunnel wall. Moreover, three different arrangements of supports are considered for floating-slab tracks, one which can be used to model directly-fixed slabs. The paper also investigates the wave-guided solution of the track, the tunnel, the surrounding soil and the coupled system. It is shown that the dynamics of the track have significant effect on the results calculated in the wavenumber-frequency domain and therefore an important role on controlling vibration from underground railways.
Supersonic flow calculation using a Reynolds-stress and an eddy thermal diffusivity turbulence model
NASA Technical Reports Server (NTRS)
Sommer, T. P.; So, R. M. C.; Zhang, H. S.
1993-01-01
A second-order model for the velocity field and a two-equation model for the temperature field are used to calculate supersonic boundary layers assuming negligible real gas effects. The modeled equations are formulated on the basis of an incompressible assumption and then extended to supersonic flows by invoking Morkovin's hypothesis, which proposes that compressibility effects are completely accounted for by mean density variations alone. In order to calculate the near-wall flow accurately, correction functions are proposed to render the modeled equations asymptotically consistent with the behavior of the exact equations near a wall and, at the same time, display the proper dependence on the molecular Prandtl number. Thus formulated, the near-wall second order turbulence model for heat transfer is applicable to supersonic flows with different Prandtl numbers. The model is validated against flows with different Prandtl numbers and supersonic flows with free-stream Mach numbers as high as 10 and wall temperature ratios as low as 0.3. Among the flow cases considered, the momentum thickness Reynolds number varies from approximately 4,000 to approximately 21,000. Good correlation with measurements of mean velocity, temperature, and its variance is obtained. Discernible improvements in the law-of-the-wall are observed, especially in the range where the big-law applies.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1986-01-01
Calculated results based on two chromospheric flare models F1 and F2 of Machado, et al., (1980) are presented. Two additional models are included: F1*, which has enhanced temperatures relative to the weak-flare model F1 in the upper photosphere and low chromosphere, and F3 which has enhanced temperatures relative to the strong flare model F2 in the upper chromosphere. Each model is specified by means of a given variation of the temperature as a function of column mass. The corresponding variation of particle density and the geometrical height scale are determined by assuming hydrostatic equilibrium. The coupled equations of statistical equilibrium is solved as is radiative transfer for H, H-, He I-II, C I-IV, Si I-II, Mg I-II, Fe, Al, O I-II, Na, and Ca II. The overall absorption and emission of radiation by lines throughout the spectrum is determined by means of a reduced set of opacities sampled from a compilation of over 10 to the 7th power individual lines. That the white flight flare continuum may arise by extreme chromospheric overheating as well as by an enhancement of the minimum temperature region is also shown. The radiative cooling rate calculations for our brightest flare model suggest that chromospheric overheating provides enhanced radiation that could cause significant heating deep in the flare atmosphere.
Influence of polarization and a source model for dose calculation in MRT
Bartzsch, Stefan Oelfke, Uwe; Lerch, Michael; Petasecca, Marco; Bräuer-Krisch, Elke
2014-04-15
Purpose: Microbeam Radiation Therapy (MRT), an alternative preclinical treatment strategy using spatially modulated synchrotron radiation on a micrometer scale, has the great potential to cure malignant tumors (e.g., brain tumors) while having low side effects on normal tissue. Dose measurement and calculation in MRT is challenging because of the spatial accuracy required and the arising high dose differences. Dose calculation with Monte Carlo simulations is time consuming and their accuracy is still a matter of debate. In particular, the influence of photon polarization has been discussed in the literature. Moreover, it is controversial whether a complete knowledge of phase space trajectories, i.e., the simulation of the machine from the wiggler to the collimator, is necessary in order to accurately calculate the dose. Methods: With Monte Carlo simulations in the Geant4 toolkit, the authors investigate the influence of polarization on the dose distribution and the therapeutically important peak to valley dose ratios (PVDRs). Furthermore, the authors analyze in detail phase space information provided byMartínez-Rovira et al. [“Development and commissioning of a Monte Carlo photon model for the forthcoming clinical trials in microbeam radiation therapy,” Med. Phys. 39(1), 119–131 (2012)] and examine its influence on peak and valley doses. A simple source model is developed using parallel beams and its applicability is shown in a semiadjoint Monte Carlo simulation. Results are compared to measurements and previously published data. Results: Polarization has a significant influence on the scattered dose outside the microbeam field. In the radiation field, however, dose and PVDRs deduced from calculations without polarization and with polarization differ by less than 3%. The authors show that the key consequences from the phase space information for dose calculations are inhomogeneous primary photon flux, partial absorption due to inclined beam incidence outside
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test and model based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSD's, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test.A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
Schädlich, P K; Brecht, J G
1997-01-01
The purpose of this study is to estimate the potential of savings which can be achieved by prophylaxis of myocardial reinfarction with low-dose acetylsalicylic acid (ASA) at 75 mg per day over a treatment period of two years. After secondary analysis of published data, the effectiveness of low-dose ASA is compared to placebo by a model calculation. The difference in the effectiveness between the prophylaxis with ASA and placebo is taken from an international meta-analysis. The economic valuation of this difference is carried out by a cost-effectiveness analysis applying disease costs per case. According to the model calculation, 5535 DM can be saved per patient with a history of myocardial infarction with 75 mg ASA a day over a treatment period of two years. In 1991 there were around 740,000 patients with a history of myocardial infarction in the age group of 25-64 in the Old Bundesländer of the Federal Republic of Germany. The application of the results of the model calculation would lead to considerable savings. Even in the sensitivity analysis with different assumptions regarding costs incurred by hospital treatment and costs incurred by premature retirement, the cost advantage of the ASA-prophylaxis remains. Due to the cautious and conservative assumptions in the model calculation the potential of savings is likely underestimated. Nevertheless, there is a distinct advantage for the prophylaxis with low-dose ASA which already occurs in direct costs thus leading to advantages also for cost carriers.
An Analytic Model of the Strategic Bomber Penetration Mission with Variance Calculations.
1981-12-01
calculates the damage done by the weapon; and g) Penetrator (Ref 7:IV-20). All the results of the air battle simulations are stored as out- put, allowing a...detailed damage assessment information (Ref 8:13). b) NYLAND (RAND) - a small expected value model that includes bombers, decoys, BDMs, interceptors and SAMs...reliability data, perhaps adjusted to allow for damage that may occur to tankers during base escape. Together these two probabilities define a single
1991-01-01
gradient force [Babcock and Evans , 1979; Rishbeth, 1972]. Figure 8 shows meridional neutral winds calculated for two magnetically conjugate locations...be included in the USU servo model for a physically better daytime output. 97 REFERENCES Babcock, R. R. Jr., and J. V. Evans , Seasonal and Solar...non-exclusive world English rights. Foreign language rights must be applied for separately. Sincerely, Andrew Adler Journals Permissions MATERIAL TO BE
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
A deterministic partial differential equation model for dose calculation in electron radiotherapy
NASA Astrophysics Data System (ADS)
Duclous, R.; Dubroca, B.; Frank, M.
2010-07-01
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
A deterministic partial differential equation model for dose calculation in electron radiotherapy.
Duclous, R; Dubroca, B; Frank, M
2010-07-07
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g.Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
Solar particle events observed at Mars: dosimetry measurements and model calculations
NASA Astrophysics Data System (ADS)
Cleghorn, T.; Saganti, P.; Zeitlin, C.; Cucinotta, F.
The first solar particle events from a Martian orbit are observed with the MARIE (Martian Radiation Environment Experiment) on the 2001 Mars Odyssey space -craft that is currently in orbit and collecting the mapping data of the red planet. These solar particle events observed at Mars during March and April 2002, are correlated with the GOES-8 and ACE satellite data from the same time period at Earth orbits. Dosimetry measurements for the Mars orbit from the period of March 13t h through April 29t h . Particle count rate and the corresponding dose rate enhancements were observed on March 16t h through 20t h and on April 22n d corresponding to solar particle events that were observed at Earth orbit on March 16t h through 21s t and beginning on April 21s t respectively. The model calculations with the HZETRN (High Z=atomic number and high Energy Transport) code estimated the background GCR (Galactic Cosmic Rays) dose rates. The dose rates observed by the MARIE instrument are within 10% of the model calculations. Dosimetry measurements and model calculation will be presented.
First-Principles Calculations, Experimental Study, and Thermodynamic Modeling of the Al-Co-Cr System
Liu, Xuan L.; Gheno, Thomas; Lindahl, Bonnie B.; Lindwall, Greta; Gleeson, Brian; Liu, Zi-Kui
2015-01-01
The phase relations and thermodynamic properties of the condensed Al-Co-Cr ternary alloy system are investigated using first-principles calculations based on density functional theory (DFT) and phase-equilibria experiments that led to X-ray diffraction (XRD) and electron probe micro-analysis (EPMA) measurements. A thermodynamic description is developed by means of the calculations of phase diagrams (CALPHAD) method using experimental and computational data from the present work and the literature. Emphasis is placed on modeling the bcc-A2, B2, fcc-γ, and tetragonal-σ phases in the temperature range of 1173 to 1623 K. Liquid, bcc-A2 and fcc-γ phases are modeled using substitutional solution descriptions. First-principles special quasirandom structures (SQS) calculations predict a large bcc-A2 (disordered)/B2 (ordered) miscibility gap, in agreement with experiments. A partitioning model is then used for the A2/B2 phase to effectively describe the order-disorder transitions. The critically assessed thermodynamic description describes all phase equilibria data well. A2/B2 transitions are also shown to agree well with previous experimental findings. PMID:25875037
Monte Carlo calculation of phase equilibria for a bead-spring polymeric model
Sheng, Y.J.; Panagiotopoulos, A.Z. . School of Chemical Engineering); Kumar, S.K. . Dept. of Materials Science and Engineering); Szleifer, I. )
1994-01-17
Vapor-liquid phase diagrams for a bead-spring polymeric model have been calculated for chain lengths of 20, 50, and 100 from Monte Carlo simulations using the recently proposed chain increment method to determine the chain chemical potentials. Densities of both phases at coexistence and vapor pressures were obtained directly for a range of temperatures from highly subcritical to the vicinity of the critical point, and the critical temperature and density for each chain length were obtained by extrapolation. They also calculated the second virial coefficients for chain-chain interactions of the model and found that the temperature at which the second virial coefficients for chain-chain interactions of the model and found that the temperature at which the second virial coefficient vanishes for long chains coincides, within computational uncertainty, with the infinite chain length critical point from the phase equilibrium results. At the critical points of the finite length chains the second virial coefficient assume negative values, indicating attractive interchain interactions. The radius of gyration of chains of varying length was also determined and the [theta] temperature obtained from the radii of gyration found to coincide, within computational uncertainty, with the critical point for an infinite chain length polymer. The computational methodology they utilize can be extended to the calculation of phase equilibria in multicomponent polymer/solvent systems.
NASA Astrophysics Data System (ADS)
Sokolova, Elena A.; Reyes Cortes, Santiago D.
1997-02-01
The latest advances in the field of holographic gratings and spectral devices is in calculation, manufacture and use of these gratings for spectral devices. The general theory of diffraction grating was developed in 1974. Although this theory is in wide use, not all the problems associated with the theory have been resolved. Theoretical calculations show that this is possible using a more complicated mounting of recording the grating. For recording of the grating with the compensation of the four aberrations it is necessary to use beams from opposite sides of the blanks. To examine this method special mathematical model was found. It is based on the ray tracing calculation, but includes two steps recording and the refraction in the glass blank. In this work we represent a system of nonhomocentric recording, which doesn't include aspheric or refractive optics, mathematical model of this system, spectral devices, which can be produced with the gratings, recorded in our system and the results of the mathematical model experiments with concrete examples of those devices.
Observation and model calculations of sunspot ring structure at 8.46GHz
NASA Astrophysics Data System (ADS)
Gopalswarmy, N.; Raulin, J. P.; Kundu, M. R.; Hildebrandt, J.; Krueger, A.; Hofmann, A.
1996-12-01
We present Very Large Array (VLA) observations of AR 7542 which demonstrate the existence of definite ring and horse-shoe structures of a sunspot in intensity (I) and polarization (V) at 8.46GHz (3.5cm wavelength) and compare them with model calculations of gyroresonance radiation. The VLA measurements have been made on three different days in July 1993 when AR 7542 was at three different longitudes which allows us to study the effect of viewing angle on sunspot-associated microwave emission. Model calculations of gyroresonance radiation have been carried out using a modified dipole model corresponding to the observed photospheric magnetic field strength and average temperature/electron density distributions consistent with soft X-ray and EUV observations (for the lower atmosphere) as well as theoretical assumptions (for the corona). The calculated I and V maps were found to be generally consistent with the radio observations. We obtain information on the magnetic scale length in vertical and horizontal directions above the sunspot and about the distribution of other plasma parameters (temperature, density) inside the radio source region.
Observations and model calculations of the F3 layer in the Southeast Asian equatorial ionosphere
NASA Astrophysics Data System (ADS)
Uemoto, Jyunpei; Maruyama, Takashi; Ono, Takayuki; Saito, Susumu; Iizima, Masahide; Kumamoto, Atsushi
2011-03-01
To clarify the characteristics of the F3 layer with a focus on magnetic latitude dependence and the relationship to the equatorial anomaly, we performed statistical analysis of F3 layer occurrences using the ionosonde chain data in a magnetic meridional plane in Southeast Asia and performed model calculations. From comparison of the observational and model calculation results, it was found that the field-aligned diffusion of plasma acts to make the F3 layer prominent in the magnetic low-latitude region while acting to decrease the peak density of the F3 layer near the magnetic equator. The magnetic latitude dependence of the F3 layer formation comes not only from the meridional neutral wind effect but also from the field-aligned diffusion effect. The model calculations revealed that the F3 peak corresponds to the electron density-enhanced region associated with the equatorial anomaly. This relationship is consistent with the suggestion that the field-aligned diffusion acts to make the F3 layer prominent in the magnetic low-latitude region since the fundamental factors for generation of the equatorial anomaly are also E × B drift and field-aligned downward diffusion. It is suggested that the local time and magnetic latitudinal variations of the F3 layer result from those of the electron density-enhanced region associated with the equatorial anomaly.
Reacidification modeling and dose calculation procedures for calcium-carbonate-treated lakes
Scheffe, R.D.
1987-01-01
Two dose calculation models and a reacidification model were developed and applied to two Adirondack acid lakes (Woods Lake and Cranberry Pond) that were treated with calcite during May 30-31, 1985 as part of the EPRI-funded Lake Acidification Mitigation Project. The first dose model extended Sverdrup's (1983) Lake Liming model by incorporating chemical equilibrium routines to eliminate empirical components. The model simulates laboratory column water chemistry profiles (spatially and temporally) and dissolution efficiencies fairly well; however, the model predicted conservative dissolution efficiencies for the study lakes. Time-series water chemistry profiles of the lakes suggest that atmospheric carbon dioxide intrusion rate was far greater than expected and enhanced dissolution efficiency. Accordingly, a second dose model was developed that incorporated ongoing CO/sub 2/ intrusion and added flexibility in the handling of solid and dissolved species transport. This revised model simulated whole-lake water chemistry throughout the three week dissolution period. The Acid Lake Reacidification Model (ALaRM) is a general mass-balance model developed for the temporal prediction of the principal chemical species in both the water column and sediment pore water of small lakes and ponds.
Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models
Stiglic, Gregor; Pajnkihar, Majda
2015-01-01
Classical paper-and-pencil based risk assessment questionnaires are often accompanied by the online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to online versions of risk estimation calculators by ignoring the possibilities of more complex and accurate calculations that can be performed using the online calculators. We empirically compare the risk estimation performance between four major diabetes risk calculators and two, more advanced, predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999–2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. American Diabetes Association risk test achieved the best predictive performance in category of classical paper-and-pencil based tests with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) persons selected for screening. Our results demonstrate a significant difference in performance with additional benefits for a lower number of persons selected for screening when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression with AUC of 0.775 (0.734) and an average 34% (48%) persons selected for screening. However, generalized boosted regression models might be a better option from the economical point of view as the number of selected persons for screening of 30% (47%) lies significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were
NASA Technical Reports Server (NTRS)
Newman, P. A.; Schoeberl, M. R.; Plumb, R. A.
1986-01-01
Calculations of the two-dimensional, species-independent mixing coefficients for two-dimensional chemical models for the troposphere and stratosphere are performed using quasi-geostrophic potential vorticity fluxes and gradients from 4 years of National Meteorological Center data for the four seasons in both hemispheres. Results show that the horizontal mixing coefficient values for the winter lower stratosphere are broadly consistent with those currently employed in two-dimensional models, but the horizontal mixing coefficient values in the northern winter upper stratosphere are much larger than those usually used.
From Kuo-Brown to today's realistic shell-model calculations
NASA Astrophysics Data System (ADS)
Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.
2014-08-01
This paper is an homage to the seminal work of Gerry Brown and Tom Kuo, where shell model calculations were performed for 18O and 18F using an effective interaction derived from the Hamada-Johnston nucleon-nucleon potential. That work has been the first successful attempt to provide a description of nuclear structure properties starting from the free nucleon-nucleon potential. We shall compare the approach employed in the 1966 paper with the derivation of a modern realistic shell-model interaction for sd-shell nuclei, evidencing the progress that has been achieved during the last decades.
Chambers, Alex; Rajantie, Arttu
2008-02-01
If light scalar fields are present at the end of inflation, their nonequilibrium dynamics such as parametric resonance or a phase transition can produce non-Gaussian density perturbations. We show how these perturbations can be calculated using nonlinear lattice field theory simulations and the separate universe approximation. In the massless preheating model, we find that some parameter values are excluded while others lead to acceptable but observable levels of non-Gaussianity. This shows that preheating can be an important factor in assessing the viability of inflationary models.
Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...
Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...
The truth is out there: measured, calculated and modelled benthic fluxes.
NASA Astrophysics Data System (ADS)
Pakhomova, Svetlana; Protsenko, Elizaveta
2016-04-01
In a modern Earth science there is a great importance of understanding the processes, forming the benthic fluxes as one of element sources or sinks to or from the water body, which affects the elements balance in the water system. There are several ways to assess benthic fluxes and here we try to compare the results obtained by chamber experiments, calculated from porewater distributions and simulated with model. Benthic fluxes of dissolved elements (oxygen, nitrogen species, phosphate, silicate, alkalinity, iron and manganese species) were studied in the Baltic and Black Seas from 2000 to 2005. Fluxes were measured in situ using chamber incubations (Jch) and at the same time sediment cores were collected to assess the porewater distribution at different depths to calculate diffusive fluxes (Jpw). Model study was carried out with benthic-pelagic biogeochemical model BROM (O-N-P-Si-C-S-Mn-Fe redox model). It was applied to simulate biogeochemical structure of the water column and upper sediment and to assess the vertical fluxes (Jmd). By the behaviour at the water-sediment interface all studied elements can be divided into three groups: (1) elements which benthic fluxes are determined by the concentrations gradient only (Si, Mn), (2) elements which fluxes depend on redox conditions in the bottom water (Fe, PO4, NH4), and (3) elements which fluxes are strongly connected with organic matter fate (O2, Alk, NH4). For the first group it was found that measured fluxes are always higher than calculated diffusive fluxes (1.5
Two-dimensional model calculation of fluorine-containing reservoir species in the stratosphere
NASA Technical Reports Server (NTRS)
Kaye, Jack A.; Douglass, Anne R.; Jackman, Charles H.; Stolarski, Richard S.; Zander, R.
1991-01-01
Two-dimensional model calculations have been carried out of the distributions of the fluorine-containing reservoir species HF, CF2O, and CFClO. HF constitutes the largest fluorine reservoir in the stratosphere, but CF2O also makes an important contribution to the inorganic fluorine budget. CFClO amounts are most important in the tropical lower stratosphere. HF amounts increase with altitude throughout the stratosphere, while those of CF2O and CFClO fall off above their mixing ratio peaks due to photolysis. The model is in good qualitative agreement with observed vertical profiles of HF and CF2O but tends to underestimate the total column of HF. The calculated CFClO distribution is in good agreement with the very limited data. The disagreement in the HF columns is likely due to small inaccuracies in the model's treatment of lower stratospheric photolysis of chlorofluorocarbons. The model results support the suggestion that CF2O may be heterogeneously converted to HF on the surface of polar stratospheric cloud particles. The model results also suggest that the quantum yield for photolysis of CF2O is near unity.
Calculation of electrical potentials on the surface of a realistic head model by finite differences.
Lemieux, L; McBride, A; Hand, J W
1996-07-01
We present a method for the calculation of electrical potentials at the surface of realistic head models from a point dipole generator based on a 3D finite-difference algorithm. The model was validated by comparing calculated values with those obtained algebraically for a three-shell spherical model. For a 1.25 mm cubic grid size, the mean error was 4.9% for a superficial dipole (3.75 mm from the inner surface of the skull) pointing in the radial direction. The effect of generator discretization and node spacing on the accuracy of the model was studied. Three values of the node spacing were considered: 1, 1.25 and 1.5 mm. The mean relative errors were 4.2, 6.3 and 9.3% respectively. The quality of the approximation of a point dipole by an array of nodes in a spherical neighbourhood did not depend significantly on the number of nodes used. The application of the method to a conduction model derived from MRI data is demonstrated.
Activity-based costing: a practical model for cost calculation in radiotherapy.
Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien
2003-10-01
The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
NASA Astrophysics Data System (ADS)
Yuan, Weijia; Campbell, A. M.; Coombs, T. A.
2009-07-01
A model is presented for calculating the AC losses of a stack of second-generation high temperature superconductor tapes. This model takes as a starting point the model of Clem and co-workers for a stack in which each tape carries the same current. It is based on the assumption that the magnetic flux lines lie parallel to the tapes within the part of the stack where the flux has not penetrated. In this paper we allow for the depth of penetration of field to vary across the stack, and use the Kim model to allow for the variation of Jc with B. The model is applied to the cases of a transport current and an applied field. For a transport current the calculated result differs from the Norris expression for a single tape carrying a uniform current and it does not seem possible to define a suitable average Jc which could be used. Our method also gives a more accurate value for the critical current of the stack than other methods. For an applied field the stack behaves as a solid superconductor with the Jc averaged locally over several tapes, but still allowed to vary throughout the stack on a larger scale. For up to about ten tapes the losses rise rapidly with the number of tapes, but in thicker stacks the tapes shield each other and the losses become that of a slab with a field parallel to the faces.
Two-dimensional model calculation of fluorine-containing reservoir species in the stratosphere
NASA Technical Reports Server (NTRS)
Kaye, Jack A.; Douglass, Anne R.; Jackman, Charles H.; Stolarski, Richard S.; Zander, R.
1991-01-01
Two-dimensional model calculations have been carried out of the distributions of the fluorine-containing reservoir species HF, CF2O, and CFClO. HF constitutes the largest fluorine reservoir in the stratosphere, but CF2O also makes an important contribution to the inorganic fluorine budget. CFClO amounts are most important in the tropical lower stratosphere. HF amounts increase with altitude throughout the stratosphere, while those of CF2O and CFClO fall off above their mixing ratio peaks due to photolysis. The model is in good qualitative agreement with observed vertical profiles of HF and CF2O but tends to underestimate the total column of HF. The calculated CFClO distribution is in good agreement with the very limited data. The disagreement in the HF columns is likely due to small inaccuracies in the model's treatment of lower stratospheric photolysis of chlorofluorocarbons. The model results support the suggestion that CF2O may be heterogeneously converted to HF on the surface of polar stratospheric cloud particles. The model results also suggest that the quantum yield for photolysis of CF2O is near unity.
An empirical model for calculation of the collimator contamination dose in therapeutic proton beams
NASA Astrophysics Data System (ADS)
Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.
2016-02-01
Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose prediction and measurements were decreased from 10% to 2% using the proposed model. Gamma-index (2%/1 mm) was respected for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.
How the choice of model dielectric function affects the calculated observables
NASA Astrophysics Data System (ADS)
Vos, Maarten; Grande, Pedro L.
2017-09-01
It is investigated how the model used to describe a dielectric function (i.e. a Mermin, Drude, Drude-Lindhard, Levine-Louie with relaxation time dielectric function) affects the interpretation of a REELS experiment, the calculation of the electron inelastic mean free path as well proton stopping and straggling. Three dielectric functions are constructed that are based on different models describing a metal, but have identical loss functions in the optical limit. A loss function with the same shape, but half the amplitude, is used to derive four different model dielectric functions for an insulator. From these dielectric functions we calculate the differential inverse mean free path, the mean free path itself, as well as the stopping force and straggling for protons. The similarity of the underlying physics between proton stopping, straggling and the electron inelastic mean free path is stressed by describing all three in terms of the differential inverse inelastic mean free path. To further highlight the reason why observed quantities depend on the model dielectric function used we study partial differential inverse inelastic mean free paths, i.e. those obtained by integrating over only a limited range of momentum transfers. In this way it becomes quite transparent why the observable quantities depend on the choice of model dielectric function.
NASA Astrophysics Data System (ADS)
Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.
2015-01-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculation of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft, and converts the cloud density rate from the ECCM into cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities in the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. The cloud
Chow, James C L; Markel, Daniel; Jiang, Runqing
2010-09-01
The Gaussian error function was first used and verified in normal tissue complication probability (NTCP) calculation to reduce the dose-volume histogram (DVH) database by replacing the dose-volume bin set with the error function parameters for the differential DVH (dDVH). Seven-beam intensity modulated radiation therapy (IMRT) treatment planning was performed in three patients with small (40 cm3), medium (53 cm3), and large (87 cm3) prostate volume, selected from a group of 20 patients. Rectal dDVH varying with the interfraction prostate motion along the anterior-posterior direction was determined by the treatment planning system (TPS) and modeled by the Gaussian error function model for the three patients. Rectal NTCP was then calculated based on the routine dose-volume bin set of the rectum by the TPS and the error function model. The variations in the rectal NTCP with the prostate motion and volume were studied. For the ranges of prostate motion of 8-2, 4-8, and 4-3 mm along the anterior-posterior direction for the small, medium, and large prostate patient, the rectal NTCP was determined varying in the ranges of 4.6%-4.8%, 4.5%-4.7%, and 4.6%-4.7%, respectively. The deviation of the rectal NTCP calculated by the TPS and the Gaussian error function model was within +/- 0.1%. The Gaussian error function was successfully applied in the NTCP calculation by replacing the dose-volume bin set with the model parameters. This provides an option in the NTCP calculation using a reduced size of dose-volume database. Moreover, the rectal NTCP was found varying in about +/- 0.2% with the interfraction prostate motion along the anterior-posterior direction in the radiation treatment. The dependence of the variation in the rectal NTCP with the interfraction prostate motion on the prostate volume was found to be more significant in the patient with larger prostate.
Chow, James C L; Markel, Daniel; Jiang, Runqing
2010-09-01
The Gaussian error function was first used and verified in normal tissue complication probability (NTCP) calculation to reduce the dose-volume histogram (DVH) database by replacing the dose-volume bin set with the error function parameters for the differential DVH (dDVH). Seven-beam intensity modulated radiation therapy (IMRT) treatment planning was performed in three patients with small(40cm3), medium (53cm3), and large (87cm3) prostate volume, selected from a group of 20 patients. Rectal dDVH varying with the interfraction prostate motion along the anterior-posterior direction was determined by the treatment planning system (TPS) and modeled by the Gaussian error function model for the three patients. Rectal NTCP was then calculated based on the routine dose-volume bin set of the rectum by the TPS and the error function model. The variations in the rectal NTCP with the prostate motion and volume were studied. For the ranges of prostate motion of 8-2, 4-8, and 4-3 mm along the anterior-posterior direction for the small, medium, and large prostate patient, the rectal NTCP was determined varying in the ranges of 4.6%-4.8%, 4.5%-4.7%, and 4.6%-4.7%, respectively. The deviation of the rectal NTCP calculated by the TPS and the Gaussian error function model was within ±0.1%. The Gaussian error function was successfully applied in the NTCP calculation by replacing the dose-volume bin set with the model parameters. This provides an option in the NTCP calculation using a reduced size of dose-volume database. Moreover, the rectal NTCP was found varying in about ±0.2% with the interfraction prostate motion along the anterior-posterior direction in the radiation treatment. The dependence of the variation in the rectal NTCP with the interfraction prostate motion on the prostate volume was found to be more significant in the patient with larger prostate. © 2010 American Association of Physicists in Medicine.
Sensor-based clear and cloud radiance calculations in the community radiative transfer model.
Liu, Quanhua; Xue, Y; Li, C
2013-07-10
The community radiative transfer model (CRTM) has been implemented for clear and cloudy satellite radiance simulations in the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Prediction (NCEP) Gridpoint Statistical Interpolation data assimilation system for global and regional forecasting as well as reanalysis for climate studies. Clear-sky satellite radiances are successfully assimilated, while cloudy radiances need to be assimilated for improving precipitation and severe weather forecasting. However, cloud radiance calculations are much slower than the calculations for clear-sky radiance, and exceed our computational capacity for weather forecasting. In order to make cloud radiance assimilation affordable, cloud optical parameters at the band central wavelength are used in the CRTM (OPTRAN-CRTM) where the optical transmittance (OPTRAN) band model is applied. The approximation implies that only one radiative transfer solution for each band (i.e., channel) is needed, instead of typically more than 10,000 solutions that are required for a detailed line-by-line radiative transfer model (LBLRTM). This paper investigated the accuracy of the approximation and helps us to understand the error source. Two NOAA operational sensors, High Resolution Infrared Radiation Sounder/3 (HIRS/3) and Advanced Microwave Sounding Unit (AMSU), have been chosen for this investigation with both clear and cloudy cases. By comparing the CRTM cloud radiance calculations with the LBLRTM simulations, we found that the CRTM cloud radiance model can achieve accuracy better than 0.4 K for the IR sensor and 0.1 K for the microwave sensor. The results suggest that the CRTM cloud radiance calculations may be adequate to the operational satellite radiance assimilation for numerical forecast model. The accuracy using OPTRAN is much better than using the scaling method (SCALING-CRTM). In clear-sky applications, the scaling of the optical depth derived at nadir
Binary cluster model calculations for 20Ne and 44Ti nuclei
NASA Astrophysics Data System (ADS)
Koyuncu, F.; Soylu, A.; Bayrak, O.
2017-03-01
The elastic scattering data of α+16O and α+40Ca systems at Elab = 32.2-146 MeV and Elab = 24.1-49.5 MeV energies have been analyzed with double-folding (DF) potential in optical model formalism in order to investigate the cluster structures of 20Ne and 44Ti nuclei. The deduced DF potentials between α and 16O as well as α and 40Ca have been used for obtaining the excitation energies and α-decay widths of 20Ne and 44Ti in Gamow code, but the reasonable results could not be obtained. Thus, the real parts of DF potentials which are in the best agreement with experimental data have been fitted with the squared-Woods-Saxon (WS2) potential parameters to calculate the α-decay widths of 20Ne and 44Ti with Wentzel-Kramers-Brillouin (WKB) approach. The nuclear potential sets obtained in WKB calculations are also used for Gamow code calculations. We take into account the deformation and orientation of 40Ca nucleus to examine their influence on both the excitation energies and decay widths of 44Ti. Besides, by using the binary cluster model the rotational band energies and electromagnetic transition probabilities (BE2)s according to angles are also reproduced for both nuclei. The obtained results showed that the binary cluster model is very useful to understand the observables of 20Ne and 44Ti nuclei. Although only spherical calculations are made for 20Ne (α + 16O), the deformation in 40Ca would be important for the understanding of 44Ti (α + 40Ca) cluster structure. The mechanism presented here would also be applied to understand the cluster structures in heavy nuclei.
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock recruitment model is widely used due to its flexibility and ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S MSY) to a fishery, and the harvest corresponding to it (U MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S MSY and U MSY in terms of the productivity and density-dependent parameters of Ricker's model.
NASA Astrophysics Data System (ADS)
Homma, H.; Murayama, T.
We investigate the chemical evolution model explaining the chemical composition and the star formation histories (SFHs) simultaneously for the dwarf spheroidal galaxies (dSphs). Recently, wide imaging photometry and multi-object spectroscopy give us a large number of data. Therefore, we start to develop the chemical evolution model based on an SFH given by photometric observations and estimates a metallicity distribution function (MDF) comparing with spectroscopic observations. With this new model we calculate the chemical evolution for 4 dSphs (Fornax, Sculptor, Leo II, Sextans), and then we found that the model of 0.1 Gyr for the delay time of type Ia SNe is too short to explain the observed [alpha /Fe] vs. [Fe/H] diagrams.
NASA Technical Reports Server (NTRS)
Kim, Frederick D.; Celi, Roberto; Tischler, Mark B.
1991-01-01
This paper describes a new trim procedure, that includes the calculation of the steady-state response of the rotor blades, and that is applicable to straight flight and steady coordinated turns. This paper also describes the results of a validation study for a high order linearized model of helicopter flight dynamics, that includes rotor, inflow, and actuator dynamics. The model is obtained by numerical perturbations of a nonlinear, blade element type mathematical model. Predicted responses are compared with flight test data for two values of flight speed. The comparison is carried out in the frequency domain. Numerical simulations show that the trim algorithm is very accurate, and preserves the periodicity of the aircraft states. The results also indicate that the predictions of the linearized model are in good agreement with flight test data, especially at medium and high frequencies.
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography such as diffuse optical tomography. For the development, this paper constructs a space-time coupling model of the radiative transport equation with the photon diffusion equation. In the coupling model, a space-time regime of the photon migration is divided into the ballistic and diffusive regimes with the interaction between the both regimes to improve the accuracy of the results and the efficiency of computation. The coupling model provides an accurate description of the photon migration in various turbid media in a wide range of the optical properties, and reduces computational loads when compared with those of full calculation of the RTE.
Development of an algebraic stress/two-layer model for calculating thrust chamber flow fields
NASA Astrophysics Data System (ADS)
Chen, C. P.; Shang, H. M.; Huang, J.
1993-07-01
Following the consensus of a workshop in Turbulence Modeling for Liquid Rocket Thrust Chambers, the current effort was undertaken to study the effects of second-order closure on the predictions of thermochemical flow fields. To reduce the instability and computational intensity of the full second-order Reynolds Stress Model, an Algebraic Stress Model (ASM) coupled with a two-layer near wall treatment was developed. Various test problems, including the compressible boundary layer with adiabatic and cooled walls, recirculating flows, swirling flows and the entire SSME nozzle flow were studied to assess the performance of the current model. Detailed calculations for the SSME exit wall flow around the nozzle manifold were executed. As to the overall flow predictions, the ASM removes another assumption for appropriate comparison with experimental data, to account for the non-isotropic turbulence effects.
Development of an algebraic stress/two-layer model for calculating thrust chamber flow fields
NASA Technical Reports Server (NTRS)
Chen, C. P.; Shang, H. M.; Huang, J.
1993-01-01
Following the consensus of a workshop in Turbulence Modeling for Liquid Rocket Thrust Chambers, the current effort was undertaken to study the effects of second-order closure on the predictions of thermochemical flow fields. To reduce the instability and computational intensity of the full second-order Reynolds Stress Model, an Algebraic Stress Model (ASM) coupled with a two-layer near wall treatment was developed. Various test problems, including the compressible boundary layer with adiabatic and cooled walls, recirculating flows, swirling flows and the entire SSME nozzle flow were studied to assess the performance of the current model. Detailed calculations for the SSME exit wall flow around the nozzle manifold were executed. As to the overall flow predictions, the ASM removes another assumption for appropriate comparison with experimental data, to account for the non-isotropic turbulence effects.
A revised model of the kidney for medical internal radiation dose calculations
Patel, J.S.
1988-12-01
Presently, there is a need for a revised model for the kidneys which clearly distinguishes major regions and structures in the kidneys. This model is needed since radionuclides used currently in nuclear medicine have marked preferences for various regions of the kidneys, and the radiation dose to one or more of these regions is of primary importance. At this time the kidneys are modeled as solid organs of uniform density by the ALGAM computer code, which uses Monte Carlo techniques to calculate absorbed fractions. This presentation will introduce a model in which the source regions will be the cortex, medulla and the papillae, while the target regions will be these regions as well as the other organs of the body. This research presents for the first time estimates of the specific absorbed fractions in various organs of the body from a source in the specific region of the kidneys. 17 refs., 8 figs., 10 tabs.
Obtaining model parameters for real materials from ab-initio calculations: Heisenberg exchange
NASA Astrophysics Data System (ADS)
Korotin, Dmitry; Mazurenko, Vladimir; Anisimov, Vladimir; Streltsov, Sergey
An approach to compute exchange parameters of the Heisenberg model in plane-wave based methods is presented. This calculation scheme is based on the Green's function method and Wannier function projection technique. It was implemented in the framework of the pseudopotential method and tested on such materials as NiO, FeO, Li2MnO3, and KCuF3. The obtained exchange constants are in a good agreement with both the total energy calculations and experimental estimations for NiO and KCuF3. In the case of FeO our calculations explain the pressure dependence of the Néel temperature. Li2MnO3 turns out to be a Slater insulator with antiferromagnetic nearest neighbor exchange defined by the spin splitting. The proposed approach provides a unique way to analyze magnetic interactions, since it allows one to calculate orbital contributions to the total exchange coupling and study the mechanism of the exchange coupling. The work was supported by a grant from the Russian Scientific Foundation (Project No. 14-22-00004).
A model for calculating polymer injectivity including the effects of shear degradation
Sorbie, K.S.; Roberts, L.J.
1984-04-01
Polymers are frequently injected into oil reservoirs in order to improve recovery. As they reduce the in-situ mobility of the aqueous phase (either by viscosity increase or permeability reduction), the fluid injectivity generally drops. It is very useful to be able to estimate in advance from a few laboratory measured quantities the injectivity of the polymer and whether the polymer is likely to be seriously degraded by the high shear experienced in the near-wellbore region. It is difficult to calculate the injectivity of the polymer solutions due to their complex rheological behaviour within porous media, especially when the polymer mechanically degrades. In this paper, the authors investigate one approach to calculating the injectivity of polymers in the general case where mechanical degradation occurs. A kinetic model for polymer degradation is proposed which is used to obtain the radial viscosity profile of the degrading polymer. This may in turn be used to calculate the steady-state pressure drops associated with the degrading polymer. The model is based on a discrete multicomponent representation of the polymer molecular weight distribution (MWD). During mechanical degradation, the MWD changes as higher components degrade into lower molecular weight fragments. The degradation rate of a given component of the MWD is related to the local shear/elongational stress within the porous medium and the concentration of the component (C /SUB i/ ). The model is used to match the results of experiments studying the shear degradation of polyacrylamide (PAM) in radial sandstone cores. The quantitative predictions of the model are very satisfactory. In addition, the model gives insight into the mechanism of shear degradation of polymers in porous media.
Black, Dennis M; Palermo, Lisa; Grima, Daniel T
2006-01-01
Simulation models are often used to assess cost-effectiveness of osteoporosis therapies. Many cost-effectiveness analyses are interested in a subset of the general population, such as high-risk patients. As the analyses are very sensitive to the assumed risk of fracture, it is imperative that the rates accurately reflect the fracture risk in the specified target population. The objective of this study was to describe the methodological difficulties and present some possible solutions for calculating the risk of fracture in target populations of interest. For binary risk factors, a method for converting from a relative risk (RR) for people with a risk factor relative to those without, to an RR in the target population compared with the general population, is described. For continuous risk factors (i.e., bone mineral density [BMD]), data are often provided as an RR of fracture per SD decrease. A method for converting from an RR per SD decrease to an RR in those below a certain BMD threshold, compared with the general population, is presented. These results should allow future economic models to more accurately incorporate existing knowledge of risk factors by introducing methods to calculate fracture risk estimates in a target population. It illustrates the importance of considering the prevalence of risk factors in the general population when calculating RR in a target population.
Calculation and modeling of the energy released in result of water freezing process (WFP)
NASA Astrophysics Data System (ADS)
Ghodsi Hassanabad, M.; Mehrbadi, A. Dehghani
Process of water freezing in different pressures has been studied with appropriate accuracy and freezing phenomenon has been tested in variety conditions. The effects of pressure on volume change in constant volume and constant pressure have also been reviewed. Calculation of these changes has been done by using the finite difference. Therefore, experimental model has been designed and built to validate these calculations and this experimental model has been studied the power of freezing water during the freezing process in different conditions. Finally, the results were used to design a machine that has an ability to control the power of freezing and turn it into a new clean energy. In this machine, some water is frozen due to temperature difference that is exerting between day and night and energy which is produced by this reaction consumes for creating electrical energy. The amount of extractable power from the temperature difference between day and night were calculated in different temperatures. As an overall result, the most energy extracted from freezing in one cubic meters water with a temperature below -22 °C during the night is 12.8 MJ, the equivalent of using 356 W for 10 h.
Kan, An-Kang; Cao, Dan; Zhang, Xue-Lai
2015-04-01
Accurately predicting the effective thermal conductivity of the fibrous materials is highly desirable but remains to be a challenging work. In this paper, the microstructure of the porous fiber materials is analyzed, approximated and modeled on basis of the statistical self-similarity of fractal theory. A fractal model is presented to accurately calculate the effective thermal conductivity of fibrous porous materials. Taking the two-phase heat transfer effect into account, the existing statistical microscopic geometrical characteristics are analyzed and the Hertzian Contact solution is introduced to calculate the thermal resistance of contact points. Using the fractal method, the impacts of various factors, including the porosity, fiber orientation, fractal diameter and dimension, rarified air pressure, bulk thermal conductivity coefficient, thickness and environment condition, on the effective thermal conductivity, are analyzed. The calculation results show that the fiber orientation angle caused the material effective thermal conductivity to be anisotropic, and normal distribution is introduced into the mathematic function. The effective thermal conductivity of fibrous material increases with the fiber fractal diameter, fractal dimension and rarefied air pressure within the materials, but decreases with the increase of vacancy porosity.
Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm.
Raevskiĭ, O A; Grigor'ev, V Iu; Liplavskaia, E A; Vorts, A P
2012-01-01
Modeling of quantitative structure--activity relationships between physicochemical descriptors of organic chemicals and their acute intravenous toxicity in mice have been presented. This approach includes three steps: structure-similarity chemicals selection for every chemical-of-interest (clusterization); construction of quantitative structure--toxicity models for every cluster (without including of chemical-of-interest); application of obtained QSAR equations for chemical-of-interest toxicity estimation. This approach has been applied for acute intravenous toxicity calculations of 10241 organic chemicals. For 7759 chemicals which has enough quantity of structural neighbours with the Tanimoto index (Tc) on the level 0.30 and over, a standard deviation of calculation vs. experimental log(1/LD50) values is equal to 0.51 at the estimation of experimental determination on the level 0.50. The results of calculations isn't so good for remain chemicals (approximately 24%). It is connect with absence of sufficient number of structure similarity neighbours. It's assumed this QSAR approach can be useful for activity and toxicity prediction of chemicals large sets.
Application of ab-initio calculations to modeling of nanoscale diffusion and activation in silicon
NASA Astrophysics Data System (ADS)
Diebel, Milan
As ULSI devices enter the nanoscale, ultra-shallow and highly electrically active junctions become necessary. New materials and 3D device structures as well as new process technologies are under exploration to meet the requirements of future devices. A detailed understanding of the atomistic mechanisms of point-defect/dopant interactions which govern diffusion and activation behavior is required to overcome the challenges in building these devices. This dissertation describes how ab-initio calculations can be used to develop physical models of diffusion and activation in silicon. A hierarchy of approaches (ab-initio, kinetic lattice Monte Carlo, continuum) is used to bridge the gaps in time scale and system size between atomistic calculations and nanoscale devices. This modeling approach is demonstrated by investigating two very different challenges in process technology: F co-implantation and stress effects on dopant diffusion/activation. In the first application, ab-initio calculations are used to understand anomalous F diffusion behavior. A set of strongly bound fluorine vacancy complexes (FnVm ) were found. The decoration of vacancies/dangling silicon bonds by fluorine leads to fluorine accumulating in vacancy rich regions, which explains the fluorine redistribution behavior reported experimentally. The revealed interactions of F with point-defects explain the benefits of F co-implantation for B and P activation and diffusion. Based on the insight gained, a simplified F diffusion model at the continuum level (50--100 nm scale) is extracted that accounts for co-implantation effects on B and P for various implant energies and doses. The second application addresses the effect of stress on point-defect/dopant equilibrium concentration, diffusion, and activation. A methodology is developed to extract detailed stress effects from ab-initio calculations. The approach is used to extract induced strains and elasticity tensors for various defects and impurities in order
Model-based dose calculations for {sup 125}I lung brachytherapy
Sutherland, J. G. H.; Furutani, K. M.; Garces, Y. I.; Thomson, R. M.
2012-07-15
Purpose: Model-baseddose calculations (MBDCs) are performed using patient computed tomography (CT) data for patients treated with intraoperative {sup 125}I lung brachytherapy at the Mayo Clinic Rochester. Various metallic artifact correction and tissue assignment schemes are considered and their effects on dose distributions are studied. Dose distributions are compared to those calculated under TG-43 assumptions. Methods: Dose distributions for six patients are calculated using phantoms derived from patient CT data and the EGSnrc user-code BrachyDose. {sup 125}I (GE Healthcare/Oncura model 6711) seeds are fully modeled. Four metallic artifact correction schemes are applied to the CT data phantoms: (1) no correction, (2) a filtered back-projection on a modified virtual sinogram, (3) the reassignment of CT numbers above a threshold in the vicinity of the seeds, and (4) a combination of (2) and (3). Tissue assignment is based on voxel CT number and mass density is assigned using a CT number to mass density calibration. Three tissue assignment schemes with varying levels of detail (20, 11, and 5 tissues) are applied to metallic artifact corrected phantoms. Simulations are also performed under TG-43 assumptions, i.e., seeds in homogeneous water with no interseed attenuation. Results: Significant dose differences (up to 40% for D{sub 90}) are observed between uncorrected and metallic artifact corrected phantoms. For phantoms created with metallic artifact correction schemes (3) and (4), dose volume metrics are generally in good agreement (less than 2% differences for all patients) although there are significant local dose differences. The application of the three tissue assignment schemes results in differences of up to 8% for D{sub 90}; these differences vary between patients. Significant dose differences are seen between fully modeled and TG-43 calculations with TG-43 underestimating the dose (up to 36% in D{sub 90}) for larger volumes containing higher proportions of
A comparison of three radiation models for the calculation of nozzle arcs
NASA Astrophysics Data System (ADS)
Dixon, C. M.; Yan, J. D.; Fang, M. T. C.
2004-12-01
Three radiation models, the semi-empirical model based on net emission coefficients (Zhang et al 1987 J. Phys. D: Appl. Phys. 20 386-79), the five-band P1 model (Eby et al 1998 J. Phys. D: Appl. Phys. 31 1578-88), and the method of partial characteristics (Aubrecht and Lowke 1994 J. Phys. D: Appl.Phys. 27 2066-73, Sevast'yanenko 1979 J. Eng. Phys. 36 138-48), are used to calculate the radiation transfer in an SF6 nozzle arc. The temperature distributions computed by the three models are compared with the measurements of Leseberg and Pietsch (1981 Proc. 4th Int. Symp. on Switching Arc Phenomena (Lodz, Poland) pp 236-40) and Leseberg (1982 PhD Thesis RWTH Aachen, Germany). It has been found that all three models give similar distributions of radiation loss per unit time and volume. For arcs burning in axially dominated flow, such as arcs in nozzle flow, the semi-empirical model and the P1 model give accurate predictions when compared with experimental results. The prediction by the method of partial characteristics is poorest. The computational cost is the lowest for the semi-empirical model.
Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
2013-01-01
The Menter Shear Stress Transport (SST) k . turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter s baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows including those with small separations. In shock - boundary layer interaction regions, the SST model produces separations that are too large while the BSL model is on the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally and experimental data is also considered to justify raising the value of a1 used for shock separated flows.
Sample size calculation to externally validate scoring systems based on logistic regression models.
Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Cortés-Castell, Ernesto; López-Cascales, María Teresa; Gil-Guillén, Vicente Francisco
2017-01-01
A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and that certain factors can influence calibration of the predictive model (discrimination, parameterization and incidence). Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
Direct calculation of ice homogeneous nucleation rate for a molecular model of water
Haji-Akbari, Amir; Debenedetti, Pablo G.
2015-01-01
Ice formation is ubiquitous in nature, with important consequences in a variety of environments, including biological cells, soil, aircraft, transportation infrastructure, and atmospheric clouds. However, its intrinsic kinetics and microscopic mechanism are difficult to discern with current experiments. Molecular simulations of ice nucleation are also challenging, and direct rate calculations have only been performed for coarse-grained models of water. For molecular models, only indirect estimates have been obtained, e.g., by assuming the validity of classical nucleation theory. We use a path sampling approach to perform, to our knowledge, the first direct rate calculation of homogeneous nucleation of ice in a molecular model of water. We use TIP4P/Ice, the most accurate among existing molecular models for studying ice polymorphs. By using a novel topological approach to distinguish different polymorphs, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice in the early stages of nucleation. In this competition, the cubic polymorph takes over because the addition of new topological structural motifs consistent with cubic ice leads to the formation of more compact crystallites. This is not true for topological hexagonal motifs, which give rise to elongated crystallites that are not able to grow. This leads to transition states that are rich in cubic ice, and not the thermodynamically stable hexagonal polymorph. This mechanism provides a molecular explanation for the earlier experimental and computational observations of the preference for cubic ice in the literature. PMID:26240318
Direct calculation of ice homogeneous nucleation rate for a molecular model of water.
Haji-Akbari, Amir; Debenedetti, Pablo G
2015-08-25
Ice formation is ubiquitous in nature, with important consequences in a variety of environments, including biological cells, soil, aircraft, transportation infrastructure, and atmospheric clouds. However, its intrinsic kinetics and microscopic mechanism are difficult to discern with current experiments. Molecular simulations of ice nucleation are also challenging, and direct rate calculations have only been performed for coarse-grained models of water. For molecular models, only indirect estimates have been obtained, e.g., by assuming the validity of classical nucleation theory. We use a path sampling approach to perform, to our knowledge, the first direct rate calculation of homogeneous nucleation of ice in a molecular model of water. We use TIP4P/Ice, the most accurate among existing molecular models for studying ice polymorphs. By using a novel topological approach to distinguish different polymorphs, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice in the early stages of nucleation. In this competition, the cubic polymorph takes over because the addition of new topological structural motifs consistent with cubic ice leads to the formation of more compact crystallites. This is not true for topological hexagonal motifs, which give rise to elongated crystallites that are not able to grow. This leads to transition states that are rich in cubic ice, and not the thermodynamically stable hexagonal polymorph. This mechanism provides a molecular explanation for the earlier experimental and computational observations of the preference for cubic ice in the literature.
Linden, D.S.
1993-05-01
The traditional two-fluid model of superconducting conductivity was modified to make it accurate, while remaining fast, for designing and simulating microwave devices. The modification reflects the BCS coherence effects in the conductivity of a superconductor, and is incorporated through the ratio of normal to superconducting electrons. This modified ratio is a simple analytical expression which depends on frequency, temperature and material parameters. This modified two-fluid model allows accurate and rapid calculation of the microwave surface impedance of a superconductor in the clean and dirty limits and in the weak- and strong-coupled regimes. The model compares well with surface resistance data for Nb and provides insight into Nb3Sn and Y1Ba2Cu3O(7-delta). Numerical calculations with the modified two-fluid model are an order of magnitude faster than the quasi-classical program by Zimmermann (1), and two to five orders of magnitude faster than Halbritter's BCS program (2) for surface resistance.
A Comparison of Model Calculation and Measurement of Absorbed Dose for Proton Irradiation. Chapter 5
NASA Technical Reports Server (NTRS)
Zapp, N.; Semones, E.; Saganti, P.; Cucinotta, F.
2003-01-01
With the increase in the amount of time spent EVA that is necessary to complete the construction and subsequent maintenance of ISS, it will become increasingly important for ground support personnel to accurately characterize the radiation exposures incurred by EVA crewmembers. Since exposure measurements cannot be taken within the organs of interest, it is necessary to estimate these exposures by calculation. To validate the methods and tools used to develop these estimates, it is necessary to model experiments performed in a controlled environment. This work is such an effort. A human phantom was outfitted with detector equipment and then placed in American EMU and Orlan-M EVA space suits. The suited phantom was irradiated at the LLUPTF with proton beams of known energies. Absorbed dose measurements were made by the spaceflight operational dosimetrist from JSC at multiple sites in the skin, eye, brain, stomach, and small intestine locations in the phantom. These exposures are then modeled using the BRYNTRN radiation transport code developed at the NASA Langley Research Center, and the CAM (computerized anatomical male) human geometry model of Billings and Yucker. Comparisons of absorbed dose calculations with measurements show excellent agreement. This suggests that there is reason to be confident in the ability of both the transport code and the human body model to estimate proton exposure in ground-based laboratory experiments.
Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model
NASA Technical Reports Server (NTRS)
Kantelis, J. P.; Widnall, S. E.
1986-01-01
A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straight forward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactroy results where the vortex elements move in a chaotic manner. The difficulty is assumed to be due to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and due to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet with good results.
Calculations of inflaton decays and reheating: with applications to no-scale inflation models
Ellis, John; Garcia, Marcos A.G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2015-07-30
We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, w, during the epoch of inflaton decay, the reheating temperature, T{sub reh}, and the number of inflationary e-folds, N{sub ∗}, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index n{sub s} and the tensor-to-scalar perturbation ratio r, converting them into constraints on N{sub ∗}, the inflaton decay rate and other parameters of specific no-scale inflationary models.
Development of Aerosol Models for Radiative Flux Calculations at ARM Sites
Ogren, John A.; Dutton, Ellsworth G.; McComiskey, Allison C.
2006-09-30
The direct radiative forcing (DRF) of aerosols, the change in net radiative flux due to aerosols in non-cloudy conditions, is an essential quantity for understanding the human impact on climate change. Our work has addressed several key issues that determine the accuracy, and identify the uncertainty, with which aerosol DRF can be modeled. These issues include the accuracy of several radiative transfer models when compared to measurements and to each other in a highly controlled closure study using data from the ARM 2003 Aerosol IOP. The primary focus of our work has been to determine an accurate approach to assigning aerosol properties appropriate for modeling over averaged periods of time and space that represent the observed regional variability of these properties. We have also undertaken a comprehensive analysis of the aerosol properties that contribute most to uncertainty in modeling aerosol DRF, and under what conditions they contribute the most uncertainty. Quantification of these issues enables the community to better state accuracies of radiative forcing calculations and to concentrate efforts in areas that will decrease uncertainties in these calculations in the future.
Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.
1999-01-01
Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability.In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem.In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was in most cases, far smaller than that resulting from the use of different approaches to solving the problem.
Tosteson, Tor D; Buzas, Jeffrey S; Demidenko, Eugene; Karagas, Margaret
2003-04-15
Covariate measurement error is often a feature of scientific data used for regression modelling. The consequences of such errors include a loss of power of tests of significance for the regression parameters corresponding to the true covariates. Power and sample size calculations that ignore covariate measurement error tend to overestimate power and underestimate the actual sample size required to achieve a desired power. In this paper we derive a novel measurement error corrected power function for generalized linear models using a generalized score test based on quasi-likelihood methods. Our power function is flexible in that it is adaptable to designs with a discrete or continuous scalar covariate (exposure) that can be measured with or without error, allows for additional confounding variables and applies to a broad class of generalized regression and measurement error models. A program is described that provides sample size or power for a continuous exposure with a normal measurement error model and a single normal confounder variable in logistic regression. We demonstrate the improved properties of our power calculations with simulations and numerical studies. An example is given from an ongoing study of cancer and exposure to arsenic as measured by toenail concentrations and tap water samples.
Model-based calculations of fiber output fields for fiber-based spectroscopy
NASA Astrophysics Data System (ADS)
Hernandez, Eloy; Bodenmüller, Daniel; Roth, Martin M.; Kelz, Andreas
2016-08-01
The accurate characterization of the field at the output of the optical fibres is of relevance for precision spectroscopy in astronomy. The modal effects of the fibre translate to the illumination of the pupil in the spectrograph and impact on the resulting point spread function (PSF). A Model is presented that is based on the Eigenmode Expansion Method (EEM) that calculates the output field from a given fibre for different manipulations of the input field. The fibre design and modes calculation are done via the commercially available Rsoft-FemSIM software. We developed a Python script to apply the EEM. Results are shown for different configuration parameters, such as spatial and angular displacements of the input field, spot size and propagation length variations, different transverse fibre geometries and different wavelengths. This work is part of the phase A study of the fibre system for MOSAIC, a proposed multi-object spectrograph for the European Extremely Large Telescope (ELT-MOS).
Kállay, Mihály; Gauss, Jürgen
2004-11-15
Using string-based algorithms excitation energies and analytic first derivatives for excited states have been implemented for general coupled-cluster (CC) models within CC linear-response (LR) theory which is equivalent to the equation-of-motion (EOM) CC approach for these quantities. Transition moments between the ground and excited states are also considered in the framework of linear-response theory. The presented procedures are applicable to both single-reference-type and multireference-type CC wave functions independently of the excitation manifold constituting the cluster operator and the space in which the effective Hamiltonian is diagonalized. The performance of different LR-CC/EOM-CC and configuration-interaction approaches for excited states is compared. The effect of higher excitations on excited-state properties is demonstrated in benchmark calculations for NH(2) and NH(3). As a first application, the stationary points of the S(1) surface of acetylene are characterized by high-accuracy calculations.
An equivalent circuit model and power calculations for the APS SPX crab cavities.
Berenc, T. )
2012-03-21
An equivalent parallel resistor-inductor-capacitor (RLC) circuit with beam loading for a polarized TM110 dipole-mode cavity is developed and minimum radio-frequency (rf) generator requirements are calculated for the Advanced Photon Source (APS) short-pulse x-ray (SPX) superconducting rf (SRF) crab cavities. A beam-loaded circuit model for polarized TM110 mode crab cavities was derived. The single-cavity minimum steady-state required generator power has been determined for the APS SPX crab cavities for a storage ring current of 200mA DC current as a function of external Q for various vertical offsets including beam tilt and uncontrollable detuning. Calculations to aid machine protection considerations were given.
Calculating model of light transmission efficiency of diffusers attached to a lighting cavity.
Sun, Ching-Cherng; Chien, Wei-Ting; Moreno, Ivan; Hsieh, Chih-To; Lin, Mo-Cha; Hsiao, Shu-Li; Lee, Xuan-Hao
2010-03-15
A lighting cavity is a reflecting box with light sources inside. Its exit side is covered with a diffuser plate to mix and distribute light, which addresses a key issue of luminaires, display backlights, and other illumination systems. We derive a simple but precise formula for the optical efficiency of diffuser plates attached to a light cavity. We overcome the complexity of the scattering theory and the difficulty of the multiple calculations involved, by carrying out the calculation with a single ray of light that statistically represents all the scattered rays. We constructed and tested several optical cavities using light-emitting diodes, bulk-scattering diffusers, white scatter sheets, and silver coatings. All measurements are in good agreement with predictions from our optical model.
Calculation of the wetting parameter from a cluster model in the framework of nanothermodynamics.
García-Morales, V; Cervera, J; Pellicer, J
2003-06-01
The critical wetting parameter omega(c) determines the strength of interfacial fluctuations in critical wetting transitions. In this Brief Report, we calculate omega(c) from considerations on critical liquid clusters inside a vapor phase. The starting point is a cluster model developed by Hill and Chamberlin in the framework of nanothermodynamics [Proc. Natl. Acad. Sci. USA 95, 12779 (1998)]. Our calculations yield results for omega(c) between 0.52 and 1.00, depending on the degrees of freedom considered. The findings are in agreement with previous experimental results and give an idea of the universal dynamical behavior of the clusters when approaching criticality. We suggest that this behavior is a combination of translation and vortex rotational motion (omega(c)=0.84).
Solar particle events observed at Mars: dosimetry measurements and model calculations
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Saganti, Premkumar B.; Zeitlin, Cary J.; Cucinotta, Francis A.
2004-01-01
During the period from March 13, 2002 to mid-September, 2002, six solar particle events (SPE) were observed by the MARIE instrument onboard the Odyssey Spacecraft in Martian Orbit. These events were observed also by the GOES 8 satellite in Earth orbit, and thus represent the first time that the same SPE have been observed at these separate locations. The characteristics of these SPE are examined, given that the active regions of the solar disc from which the event originated can usually be identified. The dose rates at Martian orbit are calculated, both for the galactic and solar components of the ionizing particle radiation environment. The dose rates due to galactic cosmic rays (GCR) agree well with the HZETRN model calculations. Published by Elsevier Ltd on behalf of COSPAR.
Solar particle events observed at Mars: dosimetry measurements and model calculations
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Saganti, Premkumar B.; Zeitlin, Cary J.; Cucinotta, Francis A.
2004-01-01
During the period from March 13, 2002 to mid-September, 2002, six solar particle events (SPE) were observed by the MARIE instrument onboard the Odyssey Spacecraft in Martian Orbit. These events were observed also by the GOES 8 satellite in Earth orbit, and thus represent the first time that the same SPE have been observed at these separate locations. The characteristics of these SPE are examined, given that the active regions of the solar disc from which the event originated can usually be identified. The dose rates at Martian orbit are calculated, both for the galactic and solar components of the ionizing particle radiation environment. The dose rates due to galactic cosmic rays (GCR) agree well with the HZETRN model calculations. Published by Elsevier Ltd on behalf of COSPAR.
Sensitivity of model calculations to uncertain inputs, with an application to neutron star envelopes
NASA Technical Reports Server (NTRS)
Epstein, R. I.; Gudmundsson, E. H.; Pethick, C. J.
1983-01-01
A method is given for determining the sensitivity of certain types of calculations to the uncertainties in the input physics or model parameters; this method is applicable to problems that involve solutions to coupled, ordinary differential equations. In particular the sensitivity of calculations of the thermal structure of neutron star envelopes to uncertainties in the opacity and equation of state is examined. It is found that the uncertainties in the relationship between the surface and interior temperatures of a neutron star are due almost entirely to the imprecision in the values of the conductive opacity in the region where the ions form a liquid; here the conductive opacity is, for the most part, due to the scattering of electrons from ions.
Solar particle events observed at Mars: dosimetry measurements and model calculations.
Cleghorn, Timothy F; Saganti, Premkumar B; Zeitlin, Cary J; Cucinotta, Francis A
2004-01-01
During the period from March 13, 2002 to mid-September, 2002, six solar particle events (SPE) were observed by the MARIE instrument onboard the Odyssey Spacecraft in Martian Orbit. These events were observed also by the GOES 8 satellite in Earth orbit, and thus represent the first time that the same SPE have been observed at these separate locations. The characteristics of these SPE are examined, given that the active regions of the solar disc from which the event originated can usually be identified. The dose rates at Martian orbit are calculated, both for the galactic and solar components of the ionizing particle radiation environment. The dose rates due to galactic cosmic rays (GCR) agree well with the HZETRN model calculations.
Synthesis, spectroscopic characterization and DFT calculations of β-O-4 type lignin model compounds
NASA Astrophysics Data System (ADS)
Mostaghni, Fatemeh; Teimouri, Abbas; Mirshokraei, Seyed Ahmad
2013-06-01
β-O-4 type lignin model compounds with the title of Erythro-2-(2-methoxyphenoxy)-1-(3,4,5-trimethoxyphenyl)-1,3-propanediol and Erythro-2-(2-methoxyphenoxy)-1-(4-Hydroxy-3,5-dimethoxyphenyl)-1,3-propanediol were synthesised and some modifications and improvements on them were introduced. These compounds were characterized by IR, Mass and NMR spectroscopy. Density functional theory (DFT) calculations were performed for the title compounds using the standard 6-31G* basis set. IR, 13C and 1H NMR of the title compounds were calculated at the DFT-B3LYP level of theory using the 6-31G* basis set. In this work comparison between the experimental and the theoretical results indicates that the DFT-B3LYP method is able to provide satisfactory results for predicting the properties of the considered compounds.
Validation of the photon dose calculation model in the VARSKIN 4 skin dose computer code.
Sherbini, Sami; Decicco, Joseph; Struckmeyer, Richard; Saba, Mohammad; Bush-Goddard, Stephanie
2012-12-01
An updated version of the skin dose computer code VARSKIN, namely VARSKIN 4, was examined to determine the accuracy of the photon model in calculating dose rates with different combinations of source geometry and radionuclides. The reference data for this validation were obtained by means of Monte Carlo transport calculations using MCNP5. The geometries tested included the zero volume sources point and disc, as well as the volume sources sphere and cylinder. Three geometries were tested using source directly on the skin, source off the skin with an absorber material between source and skin, and source off the skin with only an air gap between source and skin. The results of these calculations showed that the non-volume sources produced dose rates that were in very good agreement with the Monte Carlo calculations, but the volume sources resulted in overestimates of the dose rates compared with the Monte Carlo results by factors that ranged up to about 2.5. The results for the air gap showed poor agreement with Monte Carlo for all source geometries, with the dose rates overestimated in all cases. The conclusion was that, for situations where the beta dose is dominant, these results are of little significance because the photon dose in such cases is generally a very small fraction of the total dose. For situations in which the photon dose is dominant, use of the point or disc geometries should be adequate in most cases except those in which the dose approaches or exceeds an applicable limit. Such situations will often require a more accurate dose assessment and may require the use of methods such as Monte Carlo transport calculations.
Modeling and calculation of impact friction caused by corner contact in gear transmission
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Chen, Siyu
2014-09-01
Corner contact in gear pair causes vibration and noise, which has attracted many attentions. However, teeth errors and deformation make it difficulty to determine the point situated at corner contact and study the mechanism of teeth impact friction in the current researches. Based on the mechanism of corner contact, the process of corner contact is divided into two stages of impact and scratch, and the calculation model including gear equivalent error—combined deformation is established along the line of action. According to the distributive law, gear equivalent error is synthesized by base pitch error, normal backlash and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inversed along the line of action, on basis of the theory of engagement and the curve of tooth synthetic compliance & load-history. Combined secondarily the equivalent error with the combined deflection, the position standard of the point situated at corner contact is probed. Then the impact positions and forces, from the beginning to the end during corner contact before the normal path, are calculated accurately. Due to the above results, the lash model during corner contact is founded, and the impact force and frictional coefficient are quantified. A numerical example is performed and the averaged impact friction coefficient based on the presented calculation method is validated. This research obtains the results which could be referenced to understand the complex mechanism of teeth impact friction and quantitative calculation of the friction force and coefficient, and to gear exact design for tribology.
Theoretical modeling of zircon's crystal morphology according to data of atomistic calculations
NASA Astrophysics Data System (ADS)
Gromalova, Natalia; Nikishaeva, Nadezhda; Eremin, Nikolay
2017-04-01
Zircon is an essential mineral that is used in the U-Pb dating. Moreover, zircon is highly resistant to radioactive exposure. It is of great interest in solving both fundamental and applied problems associated with the isolation of high-level radioactive waste. There is significant progress in forecasting of the most energetically favorable crystal structures at the present time. Unfortunately, the theoretical forecast of crystal morphology at high technological level is under-explored nowadays, though the estimation of crystal equilibrium habit is extremely important in studying the physical and chemical properties of new materials. For the first time, the thesis about relation of the equilibrium shape of a crystal with its crystal structure was put forward in the works by O.Brave. According to it, the idealized habit is determined in the simplest case by a correspondence with the reticular densities Rhkl of individual faces. This approach, along with all subsequent corrections, does not take into account the nature of atoms and the specific features of the chemical bond in crystals. The atomistic calculations of crystal surfaces are commonly performed using the energetic characteristics of faces, namely, the surface energy (Esurf), which is a measure of the thermodynamic stability of the crystal face. The stable crystal faces are characterized by small positive values of Esurf. As we know from our previous research (Gromalova et al.,2015) one of the constitutive factors affecting the value of the surface energy in calculations is a choice of potentials model. In this regard, we studied several sets of parameters of atomistic interatomic potentials optimized previously. As the first test model («Zircon 1») were used sets of interatomic potentials of interaction Zr-O, Si-O and O-O in the form of Buckingham potentials. To improve playback properties of zircon additionally used Morse potential for a couple of Zr-Si, as well as the three-particle angular harmonic
Calculating the renormalisation group equations of a SUSY model with Susyno
NASA Astrophysics Data System (ADS)
Fonseca, Renato M.
2012-10-01
Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features
NASA Technical Reports Server (NTRS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-01-01
A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were caculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
A model for calculating effects of liquid waste disposal in deep saline aquifers
Intercomp Resource Development and Engineering, Inc.
1976-01-01
A transient, three-dimensional subsurface waste-disposal model has been developed to provide methodology to design and test waste-disposal systems. The model is a finite-difference solution to the pressure, energy, and mass-transport equations. Equation parameters such as viscosity and density are allowed to be functions of the equations ' dependent variables. Multiple user options allow the choice of x, y, and z cartesian or r and z radial coordinates, various finite-difference methods, iterative and direct matrix solution techniques, restart options , and various provisions for output display. The addition of well-bore heat and pressure-loss calculations to the model makes available to the ground-water hydrologist the most recent advances from the oil and gas reservoir engineering field. (Woodard-USGS)
NASA Technical Reports Server (NTRS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-01-01
A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were caculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
Model averaging and Bayes factor calculation of relaxed molecular clocks in Bayesian phylogenetics.
Li, Wai Lok Sibon; Drummond, Alexei J
2012-02-01
We describe a procedure for model averaging of relaxed molecular clock models in Bayesian phylogenetics. Our approach allows us to model the distribution of rates of substitution across branches, averaged over a set of models, rather than conditioned on a single model. We implement this procedure and test it on simulated data to show that our method can accurately recover the true underlying distribution of rates. We applied the method to a set of alignments taken from a data set of 12 mammalian species and uncovered evidence that lognormally distributed rates better describe this data set than do exponentially distributed rates. Additionally, our implementation of model averaging permits accurate calculation of the Bayes factor(s) between two or more relaxed molecular clock models. Finally, we introduce a new computational approach for sampling rates of substitution across branches that improves the convergence of our Markov chain Monte Carlo algorithms in this context. Our methods are implemented under the BEAST 1.6 software package, available at http://beast-mcmc.googlecode.com.
Model Averaging and Bayes Factor Calculation of Relaxed Molecular Clocks in Bayesian Phylogenetics
Li, Wai Lok Sibon; Drummond, Alexei J.
2012-01-01
We describe a procedure for model averaging of relaxed molecular clock models in Bayesian phylogenetics. Our approach allows us to model the distribution of rates of substitution across branches, averaged over a set of models, rather than conditioned on a single model. We implement this procedure and test it on simulated data to show that our method can accurately recover the true underlying distribution of rates. We applied the method to a set of alignments taken from a data set of 12 mammalian species and uncovered evidence that lognormally distributed rates better describe this data set than do exponentially distributed rates. Additionally, our implementation of model averaging permits accurate calculation of the Bayes factor(s) between two or more relaxed molecular clock models. Finally, we introduce a new computational approach for sampling rates of substitution across branches that improves the convergence of our Markov chain Monte Carlo algorithms in this context. Our methods are implemented under the BEAST 1.6 software package, available at http://beast-mcmc.googlecode.com. PMID:21940644
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
A compressible near-wall turbulence model for boundary layer calculations
NASA Technical Reports Server (NTRS)
So, R. M. C.; Zhang, H. S.; Lai, Y. G.
1992-01-01
A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate with additional terms that are explicitly dependent on the compressibility effects is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilational dissipation rate. Thus formulated, all near-wall incompressible flow models could be expressed in terms of the solenoidal dissipation rate and straight-forwardly extended to compressible flows. Therefore, the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior; thus suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.
A simple model for calculating tsunami flow speed from tsunami deposits
Jaffe, B.E.; Gelfenbuam, G.
2007-01-01
This paper presents a simple model for tsunami sedimentation that can be applied to calculate tsunami flow speed from the thickness and grain size of a tsunami deposit (the inverse problem). For sandy tsunami deposits where grain size and thickness vary gradually in the direction of transport, tsunami sediment transport is modeled as a steady, spatially uniform process. The amount of sediment in suspension is assumed to be in equilibrium with the steady portion of the long period, slowing varying uprush portion of the tsunami. Spatial flow deceleration is assumed to be small and not to contribute significantly to the tsunami deposit. Tsunami deposits are formed from sediment settling from the water column when flow speeds on land go to zero everywhere at the time of maximum tsunami inundation. There is little erosion of the deposit by return flow because it is a slow flow and is concentrated in topographic lows. Variations in grain size of the deposit are found to have more effect on calculated tsunami flow speed than deposit thickness. The model is tested using field data collected at Arop, Papua New Guinea soon after the 1998 tsunami. Speed estimates of 14??m/s at 200??m inland from the shoreline compare favorably with those from a 1-D inundation model and from application of Bernoulli's principle to water levels on buildings left standing after the tsunami. As evidence that the model is applicable to some sandy tsunami deposits, the model reproduces the observed normal grading and vertical variation in sorting and skewness of a deposit formed by the 1998 tsunami.
Beyond Gaussians: a study of single-spot modeling for scanning proton dose calculation.
Li, Yupeng; Zhu, Ronald X; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2012-02-21
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field size effects on dose output. In this study, we developed a pencil beam algorithm for scanning proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy.
Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2013-01-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10(exp -4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a l/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10(exp -4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
Chen, M; Jiang, S; Lu, W
2015-06-15
Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based method for independent dose calculation. Modeled-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte-Carlo method, models dose deposition in the patient body accurately; however, due to lack of detail knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose disposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, which is commissioned by published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions were also calculated using the CCCS for the same setup. The difference between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement based calculator. Here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS. 2. calculate D-ΔDRT using ΔDRT. 3. combine Results: D=D-model+D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement of the hybrid and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.
Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru
2015-05-01
Recently isospin symmetry breaking for mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). Behind these investigations, we have encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this subtle problem by the Sakurai-Sugiura (SS) method, which has been recently proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
NASA Astrophysics Data System (ADS)
Zurita-Milla, Raul; Mehdipoor, Hamed; Batarseh, Sana; Ault, Toby; Schwartz, Mark D.
2014-05-01
Models that predict the timing of recurrent biological events play an important role in supporting the systematic study of phenological changes at a variety of spatial and temporal scales. One set of such models are the extended Spring indices (SI-x). These models predicts a suite of phenological metrics ("first leaf" and "first bloom," "last freeze" and the "damage index") from temperature data and geographic location (to model the duration of the day). The SI-x models were calibrated using historical phenological and weather observations from the continental US. In particular, the models relied on first leaf and first bloom observations for lilac and honeysuckle and on daily minimum and maximum temperature values from a number of weather stations located near to the sites where phenological observations were made. In this work, we study the use of DAYMET (http://daymet.ornl.gov/) to calculate the SI-x models over the continental USA. DAYMET offers daily gridded maximum and minimum temperature values for the period 1980 to 2012. Using an automatic downloader, we downloaded complete DAYMET temperature time series for the over 1100 geographic locations where historical lilac observations were made. The temperature values were parsed and, using the recently available MATLAB code, the SI-x indices were calculated. Subsequently, the predicted first leaf and first bloom dates were compared with historical lilac observations. The RMSE between predicted and observed lilac leaf/bloom dates was calculated after identifying data from the same geographic location and year. Results were satisfactory for the lilac observations in the Eastern US (e.g. the RMSE for the blooming date was of about 5 days). However, the correspondence between the observed and predicted lilac values in the West was rather week (e.g. RMSE for the blooming date of about 22 days). This might indicate that DAYMET temperature data in this region of the US might contain larger uncertainties due to a more
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm(2) field size and dose profiles for a 40 × 40 cm(2) field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm(2) to 30 × 30 cm(2) . The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
PFLOW: A 3-D Numerical Modeling Tool for Calculating Fluid-Pressure Diffusion from Coulomb Strain
NASA Astrophysics Data System (ADS)
Wolf, L. W.; Lee, M.; Meir, A.; Dyer, G.; Ma, K.; Chan, C.
2009-12-01
A new 3D time-dependent pore-pressure diffusion model PFLOW is developed to investigate the response of pore fluids to the crustal deformation generated by strong earthquakes in heterogeneous geologic media. Given crustal strain generated by changes in Coulomb stress, this MATLAB-based code uses Skempton's coefficient to calculate resulting changes fluid pressure. Pore-pressure diffusion can be tracked over time in a user-defined model space with user-prescribed Neumann or Dirchilet boundary conditions and with spatially variable values of permeability. PFLOW employs linear or quadratic finite elements for spatial discretization and first order or second order, explicit or implicit finite difference discretization in time. PFLOW is easily interfaced with output from deformation modeling programs such as Coulomb (Toda et al., 2007) or 3D-DEF (Gomberg and Ellis, 1994). The code is useful for investigating to first-order the evolution of pore pressure changes induced by changes in Coulomb stress and their possible relation to water-level changes in wells or changes in stream discharge. It can also be used for student research and classroom instruction. As an example application, we calculate the coseismic pore pressure changes and diffusion induced by volumetric strain associated with the 1999 Chi-Chi earthquake (Mw = 7.6) in Taiwan. The Chi-Chi earthquake provides an unique opportunity to investigate the spatial and time-dependent poroelastic response of near-field rocks and sediments because there exist extensive observational data of water-level changes and crustal deformation. The integrated model allows us to explore whether changes in Coulomb stress can adequately explain hydrologic anomalies observed in areas such as Taiwan’s western foothills and the Choshui River alluvial plain. To calculate coseismic strain, we use the carefully calibrated finite fault-rupture model of Ma et al. (2005) and the deformation modeling code Coulomb 3.1 (Toda et al., 2007
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
Direct Calculation of the Rate of Homogeneous Ice Nucleation for a Molecular Model of Water
NASA Astrophysics Data System (ADS)
Haji-Akbari, Amir; Debenedetti, Pablo
Ice formation is ubiquitous in nature, with important consequences in many systems and environments. However, its intrinsic kinetics and mechanism are difficult to discern with experiments. Molecular simulations of ice nucleation are also challenging due to sluggish structural relaxation and the large nucleation barriers, and direct calculations of homogeneous nucleation rates have only been achieved for mW, a monoatomic coarse-grained model of water. For the more realistic molecular models, only indirect estimates have been obtained by assuming the validity of classical nucleation theory. Here, we use a coarse-grained variant of a path sampling approach known as forward-flux sampling to perform the first direct calculation of the homogeneous nucleation rate for TIP4P/Ice, which is the most accurate water model for studying ice polymorphs. By using a novel topological order parameter, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice polymorphs. In this competition, cubic ice wins as its growth leads to more compact crystallites
Fixed-node Monte Carlo calculations for the 1d Kondo lattice model
NASA Astrophysics Data System (ADS)
Bemmel, H. J. M. van; Saarloos, W. van; Haaf, D. F. B. ten
The effectiveness of the recently developed Fixed-Node Quantum Monte Carlo method for lattice fermions, developed by van Leeuwen and co-workers, is tested by applying it to the 1d Kondo lattice, an example of a one-dimensional model with a sign problem. The principles of this method and its implementation for the Kondo lattice model are discussed in detail. We compare the fixed-node upper bound for the ground-state energy at half filling with exact-diagonalization results from the literature, and determine several spin correlation functions. Our ‘best estimates’ for the ground-state correlation functions do not depend sensitively on the input trial wave function of the fixed-node projection, and are reasonably close to the exact values. We also calculate the spin gap of the model with the Fixed-Node Monte Carlo method. For this it is necessary to use a many-Slater-determinant trial state. The lowest-energy spin excitation is a running spin soliton with wave number π, in agreement with earlier calculations.
Calculation of velocity structure functions for vortex models of isotropic turbulence
Saffman, P.G.; Pullin, D.I.
1996-11-01
Velocity structure functions ({ital u}{sub {ital p}}{sup {prime}}{minus}{ital u}{sub {ital p}}){sup {ital m}} are calculated for vortex models of isotropic turbulence. An integral operator is introduced which defines an isotropic two-point field from a volume-orientation average for a specific solution of the Navier{endash}Stokes equations. Applying this to positive integer powers of the longitudinal velocity difference then gives explicit formulas for ({ital u}{sub {ital p}}{sup {prime}}{minus}{ital u}{sub {ital p}}){sup {ital m}} as a function of order {ital m} and of the scalar separation {ital r}. Special forms of the operator are then obtained for rectilinear stretched vortex models of the Townsend{endash}Lundgren type. Numerical results are given for the Burgers vortex and also for a realization of the Lundgren-strained spiral vortex, and comparison with experimental measurement is made. In an Appendix, we calculate values of the velocity-derivative moments for the Townsend{endash}Burgers model. {copyright} {ital 1996 American Institute of Physics.}
Koch, M H; Sayers, Z; Michon, A M; Sicre, P; Marquet, R; Houssier, C
1989-01-01
Electric dichroism and X-ray scattering measurements on solutions of uncondensed and condensed chicken erythrocyte chromatin were interpreted on the basis of model calculations. Information about the state of uncondensed fibers in the conditions of electric dichroism measurements was obtained from scattering patterns recorded as a function of pH, in the presence of spermine and at very low monovalent cation concentrations. Electric dichroism measurements on a complex of uncondensed chromatin with methylene blue were made to determine the contribution of the linker and of the nucleosomes to the total dichroism. A new approach to calculate the dichroism from realistic structural models, which also yields other structural parameters (radius of gyration, radius of gyration of the cross-section, mass per unit length) was used. Only a restricted range of structures is simultaneously compatible with all experimental results. Further, it is shown that previous interpretations of dichroism measurements on chromatin were in contradiction with X-ray scattering data and failed to take into account the distribution of orientation of the nucleosomes in the fibers. When this is done, it is found that the linker DNA in chicken erythrocyte and sea urchin chromatin must run nearly perpendicularly to the fibre axis. Taken together with the dependence of the fibre diameter on the linker length, these results provide the strongest evidence hitherto available for a model in which the linker crosses the central part of the fibre.
Calculation and Analysis of Magnetic Gradient Tensor Components of Global Magnetic Models
NASA Astrophysics Data System (ADS)
Schiffler, M.; Queitsch, M.; Schneider, M.; Goepel, A.; Stolz, R.; Krech, W.; Meyer, H. G.; Kukowski, N.
2014-12-01
Global Earth's magnetic field models like the International Geomagnetic Reference Field (IGRF), the World Magnetic Model (WMM) or the High Definition Geomagnetic Model (HDGM) are harmonic analysis regressions to available magnetic observations stored as spherical harmonic coefficients. Input data combine recordings from magnetic observatories, airborne magnetic surveys and satellite data. The advance of recent magnetic satellite missions like SWARM and its predecessors like CHAMP offer high resolution measurements while providing a full global coverage. This deserves expansion of the theoretical framework of harmonic synthesis to magnetic gradient tensor components. Measurement setups for Full Tensor Magnetic Gradiometry equipped with high sensitive gradiometers like the JeSSY STAR system can directly measure the gradient tensor components, which requires precise knowledge about the background regional gradients which can be calculated with this extension. In this study we develop the theoretical framework for calculation of the magnetic gradient tensor components from the harmonic series expansion and apply our approach to the IGRF and HDGM. The gradient tensor component maps for entire Earth's surface produced for the IGRF show low gradients reflecting the variation from the dipolar character, whereas maps for the HDGM (up to degree N=729) reveal new information about crustal structure, especially across the oceans, and deeply situated ore bodies. From the gradient tensor components, the rotational invariants, the Eigenvalues, and the normalized source strength (NSS) are calculated. The NSS focuses on shallower and stronger anomalies. Euler deconvolution using either the tensor components or the NSS applied to the HDGM reveals an estimate of the average source depth for the entire magnetic crust as well as individual plutons and ore bodies. The NSS reveals the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern
Microscopic calculation of interacting boson model parameters by potential-energy surface mapping
Bentley, I.; Frauendorf, S.
2011-06-15
A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.
Onsager and Kaufman's Calculation of the Spontaneous Magnetization of the Ising Model
NASA Astrophysics Data System (ADS)
Baxter, R. J.
2011-11-01
Lars Onsager announced in 1949 that he and Bruria Kaufman had proved a simple formula for the spontaneous magnetization of the square-lattice Ising model, but did not publish their derivation. It was three years later when C.N. Yang published a derivation in Physical Review. In 1971 Onsager gave some clues to his and Kaufman's method, and there are copies of their correspondence in 1950 now available on the Web and elsewhere. Here we review how the calculation appears to have developed, and add a copy of a draft paper, almost certainly by Onsager and Kaufman, that obtains the result.
NASA Astrophysics Data System (ADS)
Paul, F.; Maisch, M.; Rothenbühler, C.; Hoelzle, M.; Haeberli, W.
2007-02-01
The observed rapid glacier wastage in the European Alps during the past 20 years already has strong impacts on the natural environment (rock fall, lake formation) as well as on human activities (tourism, hydro-power production, etc.) and poses several new challenges also for glacier monitoring. With a further increase of global mean temperature in the future, it is likely that Alpine glaciers and the high-mountain environment as an entire system will further develop into a state of imbalance. Hence, the assessment of future glacier geometries is a valuable prerequisite for various impact studies. In order to calculate and visualize in a consistent manner future glacier extent for a large number of individual glaciers (> 100) according to a given climate change scenario, we have developed an automated and simple but robust approach that is based on an empirical relationship between glacier size and the steady-state accumulation area ratio (AAR 0) in the Alps. The model requires digital glacier outlines and a digital elevation model (DEM) only and calculates new glacier geometries from a given shift of the steady-state equilibrium line altitude (ELA 0) by means of hypsographic modelling. We have calculated changes in number, area and volume for 3062 individual glacier units in Switzerland and applied six step changes in ELA 0 (from + 100 to + 600 m) combined with four different values of the AAR 0 (0.5, 0.6, 0.67, 0.75). For an AAR 0 of 0.6 and an ELA 0 rise of 200 m (400 m) we calculate a total area loss of - 54% (- 80%) and a corresponding volume loss of - 50% (- 78%) compared to the 1973 glacier extent. In combination with a geocoded satellite image, the future glacier outlines are also used for automated rendering of perspective visualisations. This is a very attractive tool for communicating research results to the general public. Our study is illustrated for a test site in the Upper Engadine (Switzerland), where landscape changes above timberline play an
Theoretical Model for the Calculation of Optical Properties of Gold Nanoparticles
NASA Astrophysics Data System (ADS)
Mendoza-García, A.; Romero-Depablos, A.; Ortega, M. A.; Paz, J. L.; Echevarría, L.
We have developed an analytical method to describe the optical properties of nanoparticles, whose results are in agreement with the observed experimental behavior according to the size of the nanoparticle under analysis. Our considerations to describe plasmonic absorption and dispersion are based on the combination of the two-level molecular system and the two-dimensional quantum box models. Employing the optical stochastic Bloch equations, we have determined the system's coherence, from which we have calculated expressions for the absorption coefficient and refractive index. The innovation of this methodology is that it allows us to take into account the solvent environment, which induce quantum effects not considered by classical treatments.
Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses
NASA Astrophysics Data System (ADS)
Takahashi, K.; Mathews, G. J.; Bloom, S. D.
1985-10-01
Examples of large-basis shell-model calculations of Gamow-Teller (BETA)-decay properties of specific interest in the astrophysical s- and r- processes are presented. Numerical results are given for: (1) the GT-matrix elements for the excited state decays of the unstable s-process nucleus Tc-99; and (2) the GT-strength function for the neutron-rich nucleus Cd-130, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems.
Zhang, Jianying; Chen, Gangling; Gong, Xuedong
2017-06-01
The quantitative structure-property relationship (QSPR) methodology was applied to describe and seek the relationship between the structures and energetic properties (and sensitivity) for some common energy compounds. An extended series of structural and energetic descriptors was obtained with density functional theory (DFT) B3LYP and semi-empirical PM3 approaches. Results indicate that QSPR model constructed using quantum descriptors can be applied to verify the confidence of calculation results compared with experimental data. It can be extended to predict the properties of similar compounds.
Empirical model functions to calculate hematocrit-dependent optical properties of human blood
NASA Astrophysics Data System (ADS)
Meinke, Martina; Müller, Gerhard; Helfmann, Jürgen; Friebel, Moritz
2007-04-01
The absorption coefficient, scattering coefficient, and effective scattering phase function of human red blood cells (RBCs) in saline solution were determined for eight different hematocrits (Hcts) between 0.84% and 42.1% in the wavelength range of 250-1100 nm using integrating sphere measurements and inverse Monte Carlo simulation. To allow for biological variability, averaged optical parameters were determined under flow conditions for ten different human blood samples. Based on this standard blood, empirical model functions are presented for the calculation of Hct-dependent optical properties for the RBCs. Changes in the optical properties when saline solution is replaced by blood plasma as the suspension medium were also investigated.
An approximate framework for quantum transport calculation with model order reduction
Chen, Quan; Li, Jun; Yam, Chiyung; Zhang, Yu; Wong, Ngai; Chen, Guanhua
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
Large-scale shell-model calculations for 32-39P isotopes
NASA Astrophysics Data System (ADS)
Srivastava, P. C.; Hirsch, J. G.; Ermamatov, M. J.; Kota, V. K. B.
2012-10-01
In this work, the structure of 32-39P isotopes is described in the framework of stateof-the-art large-scale shell-model calculations, employing the code ANTOINE with three modern effective interactions: SDPF-U, SDPF-NR and the extended pairing plus quadrupole-quadrupoletype forces with inclusion of monopole interaction (EPQQM). Protons are restricted to fill the sd shell, while neutrons are active in the sd - pf valence space. Results for positive and negative level energies and electromagnetic observables are compared with the available experimental data.
Model calculations for the retrieval of aerosols from satellite and aircraft radiances
NASA Astrophysics Data System (ADS)
Hickman, George D.; Souders, C.; Shettle, Eric P.; Duggin, Michael J.; Sweet, J. A.
1993-09-01
Model calculations of upwelling spectral radiances at aircraft and satellite altitudes have been made to assess the capability of different current and planned sensors to extract information on the atmospheric aerosols. The visible and near infrared channels on the AVHRR, CZCS, and SeaWiFS satellite sensors were used, as well as hypothetical multichannel instruments covering 400 - 1000 nm with bandwidths of 100, 20, or 10 nm. The sensitivity to the aerosol and environmental properties increased as the bandwidth of the channel decreased.
Overlap model and ab initio cluster calculations of polarisabilities of ions in solids
NASA Astrophysics Data System (ADS)
Domene, C.; Fowler, P. W.; Madden, P. A.; Wilson, M.; Wheatley, R. J.
1999-11-01
A recently developed overlap model for exchange-induction is used to simulate in-crystal anion polarisabilities for alkali halides and chalcogenides (LiF, NaF, KF, LiCl, NaCl, KCl, LiBr, NaBr, KBr, MgO, CaO, SrO, MgS, CaS and SrS) in overall qualitative agreement with results of ab initio cluster calculations and experiment. Extension to AgF supports the proposal that crystal-field splitting causes significant enhancement of cation polarisability for d 10 systems, in contrast to the demonstrated insensitivity of s 2 and p 6 spherical cations.
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2003-01-01
Th tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. For many years such correlation has been performed for helicopter rotors (rotors designed for edgewise flight), but correlation activities for tiltrotors have been limited, in part by the absence of appropriate measured data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, U4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) now provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will present calculations of airloads, wake geometry, and performance, including correlation with TRAM DNW measurements. The calculations were obtained using CAMRAD II, which is a modern rotorcraft comprehensive analysis, with advanced models intended for application to tiltrotor aircraft as well as helicopters. Comprehensive analyses have received extensive correlation with performance and loads measurements on helicopter rotors. The proposed paper is part of an initial effort to perform an equally extensive correlation with tiltrotor data. The correlation will establish the level of predictive capability achievable with current technology; identify the limitations of the current aerodynamic, wake, and structural models of tiltrotors; and lead to recommendations for research to extend tiltrotor aeromechanics analysis capability. The purpose of the Tilt Rotor Aeroacoustic Model (TRAM) experimental project is to provide data necessary to validate
Gouinaud, Laure; Katz, Ira; Martin, Andrew; Hazebroucq, Jean; Texereau, Joëlle; Caillibotte, Georges
2015-01-01
A numerical pressure loss model previously used for adult human airways has been modified to simulate the inhalation pressure distribution in a healthy 9-month-old infant lung morphology model. Pressure distributions are calculated for air as well as helium and xenon mixtures with oxygen to investigate the effects of gas density and viscosity variations for this age group. The results indicate that there are significant pressure losses in infant extrathoracic airways due to inertial effects leading to much higher pressures to drive nominal flows in the infant airway model than for an adult airway model. For example, the pressure drop through the nasopharynx model of the infant is much greater than that for the nasopharynx model of the adult; that is, for the adult-versus-child the pressure differences are 0.08 cm H2O versus 0.4 cm H2O, 0.16 cm H2O versus 1.9 cm H2O and 0.4 cm H2O versus 7.7 cm H2O, breathing helium-oxygen (78/22%), nitrogen-oxygen (78/22%) and xenon-oxygen (60/40%), respectively. Within the healthy lung, viscous losses are of the same order for the three gas mixtures, so the differences in pressure distribution are relatively small.
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
An analytical model for calculating internal dose conversion coefficients for non-human biota.
Amato, Ernesto; Italiano, Antonio
2014-05-01
To assess the radiation burden of non-human living organisms, dose coefficients are available in the literature, precalculated by assuming an ellipsoidal shape of each organism. A previously developed analytical method was applied for the determination of absorbed fractions inside ellipsoidal volumes from alpha, beta, and gamma radiations to the calculation of dose conversion coefficients (DCCs) for 15 reference organisms, animals and plants, either terrestrial, amphibian, or aquatic, and six radionuclides ((14)C, (90)Sr, (60)Co, (137)Cs, (238)U, and (241)Am). The results were compared with the reference values reported in Publication 108 of the International Commission on Radiological Protection, in which a different calculation approach for DCCs was employed. The results demonstrate that the present analytical method, originally intended for applications in internal dosimetry of nuclear medicine therapy, gives consistent results for all the beta-, beta-gamma-, and alpha-emitting radionuclides tested in a wide range of organism masses, between 8 mg and 1.3 kg. The applicability of the method proposed can take advantage from its ease of implementation in an ordinary electronic spreadsheet, allowing to calculate, for virtually all possible radionuclide emission spectra, the DCCs for ellipsoidal models of non-human living organisms in the environment.
Precipitation in Microalloyed Steel by Model Alloy Experiments and Thermodynamic Calculations
NASA Astrophysics Data System (ADS)
Frisk, Karin; Borggren, Ulrika
2016-10-01
Precipitation in microalloyed steel has been studied by applying thermodynamic calculations based on a description of the Gibbs energies of the individual phases over the full multicomponent composition range. To validate and improve the thermodynamic description, new experimental investigations of the phase separation in the cubic carbides/nitrides/carbonitrides in alloys containing Nb, V, Mo, and Cr, have been performed. Model alloys were designed to obtain equilibrium carbides/carbonitrides that are sufficiently large for measurements of compositions, making it possible to study the partitioning of the elements into different precipitates, showing distinctly different composition sets. The reliability of the calculations, when applied to multicomponent alloys, was tested by comparing with published experimental studies of precipitation in microalloyed steel. It is shown that thermodynamic calculations accurately describe the observed precipitation sequences. Further, they can reproduce several important features of precipitation processes in microalloyed steel such as the partitioning of Mo between matrix and precipitates and the variation of precipitate compositions depending on precipitation temperature.
Flint, Alexander C; Rao, Vivek A; Chan, Sheila L; Cullen, Sean P; Faigeles, Bonnie S; Smith, Wade S; Bath, Philip M; Wahlgren, Nils; Ahmed, Niaz; Donnan, Geoff A; Johnston, S Claiborne
2015-08-01
The Totaled Health Risks in Vascular Events (THRIVE) score is a previously validated ischemic stroke outcome prediction tool. Although simplified scoring systems like the THRIVE score facilitate ease-of-use, when computers or devices are available at the point of care, a more accurate and patient-specific estimation of outcome probability should be possible by computing the logistic equation with patient-specific continuous variables. We used data from 12 207 subjects from the Virtual International Stroke Trials Archive and the Safe Implementation of Thrombolysis in Stroke - Monitoring Study to develop and validate the performance of a model-derived estimation of outcome probability, the THRIVE-c calculation. Models were built with logistic regression using the underlying predictors from the THRIVE score: age, National Institutes of Health Stroke Scale score, and the Chronic Disease Scale (presence of hypertension, diabetes mellitus, or atrial fibrillation). Receiver operator characteristics analysis was used to assess model performance and compare the THRIVE-c model to the traditional THRIVE score, using a two-tailed Chi-squared test. The THRIVE-c model performed similarly in the randomly chosen development cohort (n = 6194, area under the curve = 0·786, 95% confidence interval 0·774-0·798) and validation cohort (n = 6013, area under the curve = 0·784, 95% confidence interval 0·772-0·796) (P = 0·79). Similar performance was also seen in two separate external validation cohorts. The THRIVE-c model (area under the curve = 0·785, 95% confidence interval 0·777-0·793) had superior performance when compared with the traditional THRIVE score (area under the curve = 0·746, 95% confidence interval 0·737-0·755) (P < 0·001). By computing the logistic equation with patient-specific continuous variables in the THRIVE-c calculation, outcomes at the individual patient level are more accurately estimated. Given the widespread
Large uncertainty in soil carbon modelling related to carbon input calculation method
NASA Astrophysics Data System (ADS)
Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Oleson, Jørgen E.
2016-04-01
A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. This DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems, that are applied to identical crop rotations. Average calculated soil C inputs vary largely between allometric equations and range from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is due to the higher yields in the latter treatment, i.e. the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References Taghizadeh-Toosi A et al. (2014) C
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to a some typical (exo-)planetary retrieval problems. Comparisons between the new model, called Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these are calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
Calculation of scalar structure functions from a vortex model of turbulent passive scalar transport
NASA Astrophysics Data System (ADS)
Higgins, Keith; Ooi, Andrew; Chong, M. S.
2008-02-01
A Saffman and Pullin [Phys. Fluids 8, 3072 (1996)] type vortex model for passive scalar structure functions is formulated. The intermittent turbulent fine-scale dynamics in the model is represented by numerical solutions of the advection-diffusion and Navier-Stokes equations in the form of axially strained vortex-scalar structures. The use of these structures is motivated by Pullin and Lundgren's [Phys. Fluids 13, 2553 (2001)] asymptotic strained spiral vortex model of turbulent passive scalar transport. Ensemble-averaged scalar structure functions, of even orders 2-10, are calculated from a range of vortex-scalar structures using Monte Carlo integration. For axisymmetric strained scalar fields, acceptable agreement of the second-order structure function with experimental data reported by Antonia and Van Atta [J. Fluid Mech. 84, 561 (1978)] is obtained. Structure functions are also calculated for a range of passive scalar spiral structures. These are generated by the winding of single and double scalar patches in single strained vortex patches and in merging strained vortices. Power-law scaling of the second- and higher-order structure functions is obtained from cases involving the winding of single scalar patches in an axisymmetric strained vortex patch. The second-order scaling exponents from these cases are in reasonable agreement with Kolmogorov-Oboukhov-Corrsin scaling and the experimental results of Antonia et al. [Phys. Rev. A 30, 2704 (1984)] and Gylfason and Warhaft [Phys. Fluids 16, 4012 (2004)]. However, the higher-order scaling exponents from these cases fall below theoretical predictions and experimental results. Higher-order moments are sensitive to the composition of the vortex-scalar structures, and various improvements are suggested that could enhance the performance of the model. The present approach is promising, and it is the first demonstration that a vortex model using simplified Navier-Stokes dynamics can produce some scalar structure
TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams
Gelover, E; Wang, D; Hill, P; Flynn, R; Hyer, D
2014-06-15
Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations using the DCS. Using these data, the lateral distribution for individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be analytically generated. The algorithm was validated by computing dose for a single energy layer 5×5 cm{sup 2} treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R{sup 2}≥0.95 at the depth of the Bragg peak). For the 5×5 cm{sup 2} treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
Franco, E L; Simons, A R
1986-05-01
Two programs are described for the emulation of the dynamics of Reed-Frost progressive epidemics in a handheld programmable calculator (HP-41C series). The programs provide a complete record of cases, susceptibles, and immunes at each epidemic period using either the deterministic formulation or the trough analogue of the mechanical model for the stochastic version. Both programs can compute epidemics that include a constant rate of influx or outflux of susceptibles and single or double infectivity time periods.
CHEMEOS: a new chemical-picture-based model for plasma equation-of-state calculations
Hakel, P.; Kilcrease, D. P.
2004-01-01
We present the results of a new plasma equation-of-state (EOS) model currently under development at the Atomic and Optical Theory Group (T-4) in Los Alamos. This model is based on the chemical picture of the plasma and uses the free-energy-minimization technique and the occupation-probability formalism. The model is constructed as a combination of ideal and non-ideal contributions to the total Helmholtz free energy of the plasma including the effects of plasma microfields, strong coupling, and the hard-sphere description of the finite sizes of atomic species with bound electrons. These types of models have been recognized as a convenient and computationally inexpensive tool for modeling of local-thermal-equilibrium (LIE) plasmas for a broad range of temperatures and densities. We calculate the thermodynamic characteristics of the plasma (such as pressure and internal energy), and populations and occupation probabilities of atomic bound states. In addition to a smooth truncation of partition functions necessary for extracting ion populations from the system of Saha-type equations, the occupation probabilities can also be used for the merging of Rydberg line series into their associated bound-free edges. In the low-density, high-temperature regimes the plasma effects are adequately described by the Debye-Huckel model and its corresponding contribution to the total Helmholtz free energy of the plasma. In strongly-coupled plasmas, however, the Debye-Huckel approximation is no longer appropriate. In order to extend the validity of our EOS model to strongly-coupled plasmas while maintaining the analytic nature of our model, we adopt fits to the plasma free energy based on hypernetted-chain and Monte Carlo simulations. Our results for hydrogen are compared to other theoretical models. Hydrogen has been selected as a test-case on which improvements in EOS physics are benchmarked before analogous upgrades are included for any element in the EOS part of the new Los Alamos
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10 to the 9th/sq cm sec, corresponding to the dayside net production of N atoms needed for transport.
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10 to the 9th/sq cm sec, corresponding to the dayside net production of N atoms needed for transport.
Applying dynamic wake models to induced power calculations for an optimum rotor
NASA Astrophysics Data System (ADS)
Garcia-Duffy, Cristina
Recent studies have pointed out that conventional lifting rotors in forward flight have efficiencies far lower than the optimum efficiencies predicted by theory. This dissertation explains how a closed-form optimization of induced power with finite-state models is expanded to successfully reproduce the results for the optimization of induced power given by classical theories for axial flow and for a rotor in forward flight. Results for induced power change in forward flight and for different conditions will help the determination of what produces the efficiency in real rotors to be inferior to the predicted values by theoretical calculations. Mainly three factors contribute to the decreased efficiency for real rotors: a finite number of blades, the effect of lift tilt and the lift distribution. The ultimate goals of the present research effort are to: (1) develop a complete and comprehensive inflow model, and (2) determine which of these contribute to the drastic increase in induced power.
A New Generation of Cool White Dwarf Atmosphere Models Using Ab Initio Calculations
NASA Astrophysics Data System (ADS)
Blouin, S.; Dufour, P.; Kowalski, P. M.
2017-03-01
Due to their high photospheric density, cool helium-rich white dwarfs (particularly DZ, DQpec and ultracool) are often poorly described by current atmosphere models. As part of our ongoing efforts to design atmosphere models suitable for all cool white dwarfs, we investigate how the ionization ratio of heavy elements and the H2-He collision-induced absorption (CIA) spectrum are altered under fluid-like densities. For the conditions encountered at the photosphere of cool helium-rich white dwarfs, our ab initio calculations show that the ionization of most metals is inhibited and that the H2-He CIA spectrum is significantly distorted for densities higher than 0.1 g/cm3.
Large boson number IBM calculations and their relationship to the Bohr model
NASA Astrophysics Data System (ADS)
Thiamova, G.; Rowe, D. J.
2009-08-01
Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)); and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v max = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B( E2) transition strengths ratios with increasing seniority.
FragBuilder: an efficient Python library to setup quantum chemistry calculations on peptides models.
Christensen, Anders S; Hamelryck, Thomas; Jensen, Jan H
2014-01-01
We present a powerful Python library to quickly and efficiently generate realistic peptide model structures. The library makes it possible to quickly set up quantum mechanical calculations on model peptide structures. It is possible to manually specify a specific conformation of the peptide. Additionally the library also offers sampling of backbone conformations and side chain rotamer conformations from continuous distributions. The generated peptides can then be geometry optimized by the MMFF94 molecular mechanics force field via convenient functions inside the library. Finally, it is possible to output the resulting structures directly to files in a variety of useful formats, such as XYZ or PDB formats, or directly as input files for a quantum chemistry program. FragBuilder is freely available at https://github.com/jensengroup/fragbuilder/ under the terms of the BSD open source license.
FragBuilder: an efficient Python library to setup quantum chemistry calculations on peptides models
Hamelryck, Thomas; Jensen, Jan H.
2014-01-01
We present a powerful Python library to quickly and efficiently generate realistic peptide model structures. The library makes it possible to quickly set up quantum mechanical calculations on model peptide structures. It is possible to manually specify a specific conformation of the peptide. Additionally the library also offers sampling of backbone conformations and side chain rotamer conformations from continuous distributions. The generated peptides can then be geometry optimized by the MMFF94 molecular mechanics force field via convenient functions inside the library. Finally, it is possible to output the resulting structures directly to files in a variety of useful formats, such as XYZ or PDB formats, or directly as input files for a quantum chemistry program. FragBuilder is freely available at https://github.com/jensengroup/fragbuilder/ under the terms of the BSD open source license. PMID:24688855
Trudell, Amanda S; Tuuli, Methodius G; Colditz, Graham A; Macones, George A; Odibo, Anthony O
2017-01-01
To generate a clinical prediction tool for stillbirth that combines maternal risk factors to provide an evidence based approach for the identification of women who will benefit most from antenatal testing for stillbirth prevention. Retrospective cohort study. Midwestern United States quaternary referral center. Singleton pregnancies undergoing second trimester anatomic survey from 1999-2009. Pregnancies with incomplete follow-up were excluded. Candidate predictors were identified from the literature and univariate analysis. Backward stepwise logistic regression with statistical comparison of model discrimination, calibration and clinical performance was used to generate final models for the prediction of stillbirth. Internal validation was performed using bootstrapping with 1,000 repetitions. A stillbirth risk calculator and stillbirth risk score were developed for the prediction of stillbirth at or beyond 32 weeks excluding fetal anomalies and aneuploidy. Statistical and clinical cut-points were identified and the tools compared using the Integrated Discrimination Improvement. Antepartum stillbirth. 64,173 women met inclusion criteria. The final stillbirth risk calculator and score included maternal age, black race, nulliparity, body mass index, smoking, chronic hypertension and pre-gestational diabetes. The stillbirth calculator and simple risk score demonstrated modest discrimination but clinically significant performance with no difference in overall performance between the tools [(AUC 0.66 95% CI 0.60-0.72) and (AUC 0.64 95% CI 0.58-0.70), (p = 0.25)]. A stillbirth risk score was developed incorporating maternal risk factors easily ascertained during prenatal care to determine an individual woman's risk for stillbirth and provide an evidenced based approach to the initiation of antenatal testing for the prediction and prevention of stillbirth.
Tuuli, Methodius G.; Colditz, Graham A.; Macones, George A.; Odibo, Anthony O.
2017-01-01
Objective To generate a clinical prediction tool for stillbirth that combines maternal risk factors to provide an evidence based approach for the identification of women who will benefit most from antenatal testing for stillbirth prevention. Design Retrospective cohort study Setting Midwestern United States quaternary referral center Population Singleton pregnancies undergoing second trimester anatomic survey from 1999–2009. Pregnancies with incomplete follow-up were excluded. Methods Candidate predictors were identified from the literature and univariate analysis. Backward stepwise logistic regression with statistical comparison of model discrimination, calibration and clinical performance was used to generate final models for the prediction of stillbirth. Internal validation was performed using bootstrapping with 1,000 repetitions. A stillbirth risk calculator and stillbirth risk score were developed for the prediction of stillbirth at or beyond 32 weeks excluding fetal anomalies and aneuploidy. Statistical and clinical cut-points were identified and the tools compared using the Integrated Discrimination Improvement. Main outcome measures Antepartum stillbirth Results 64,173 women met inclusion criteria. The final stillbirth risk calculator and score included maternal age, black race, nulliparity, body mass index, smoking, chronic hypertension and pre-gestational diabetes. The stillbirth calculator and simple risk score demonstrated modest discrimination but clinically significant performance with no difference in overall performance between the tools [(AUC 0.66 95% CI 0.60–0.72) and (AUC 0.64 95% CI 0.58–0.70), (p = 0.25)]. Conclusion A stillbirth risk score was developed incorporating maternal risk factors easily ascertained during prenatal care to determine an individual woman’s risk for stillbirth and provide an evidenced based approach to the initiation of antenatal testing for the prediction and prevention of stillbirth. PMID:28267756
Long-term changes in the mesosphere calculated by a two-dimensional model
NASA Astrophysics Data System (ADS)
Gruzdev, Aleksandr N.; Brasseur, Guy P.
2005-02-01
We have used the interactive two-dimensional model SOCRATES to investigate the thermal and the chemical response of the mesosphere to the changes in greenhouse gas concentrations observed in the past 50 years (CO2, CH4, water vapor, N2O, CFCs), and to specified changes in gravity wave drag and diffusion in the upper mesosphere. When considering the observed increase in the abundances of greenhouse gases for the past 50 years, a cooling of 3-7 K is calculated in the mesopause region together with a cooling of 4-6 K in the middle mesosphere. Changes in the meridional circulation of the mesosphere damp the pure radiative thermal effect of the greenhouse gases. The largest cooling in the winter upper mesosphere-mesopause region occurs when the observed increase in concentrations of greenhouse gases and the strengthening of the gravity wave drag and diffusion are considered simultaneously. Depending on the adopted strengthening of the gravity wave drag and diffusion, a cooling varying from typically 6-10 K to 10-20 K over the past 50 years is predicted in the extratropical upper mesosphere during wintertime. In summer, however, consistently with observations, the thermal response calculated by the model is insignificant in the vicinity of the mesopause. Although the calculated cooling of the winter mesopause is still less than suggested by some observations, these results lead to the conclusion that the increase in the abundances of greenhouse gases alone may not entirely explain the observed temperature trends in the mesosphere. Long-term changes in the dynamics of the middle atmosphere (and the troposphere), including changes in gravity wave activity may have contributed significantly to the observed long-term changes in thermal structure and chemical composition of the mesosphere.
Model calculating annual mean atmospheric dispersion factor for coastal site of nuclear power plant.
Hu, E B; Chen, J Y; Yao, R T; Zhang, M S; Gao, Z R; Wang, S X; Jia, P R; Liao, Q L
2001-07-01
This paper describes an atmospheric dispersion field experiment performed on the coastal site of nuclear power plant in the east part of China during 1995 to 1996. The three-dimension joint frequency are obtained by hourly observation of wind and temperature on a 100 m high tower; the frequency of the "event day of land and sea breezes" are given by observation of surface wind and land and sea breezes; the diffusion parameters are got from measurements of turbulent and wind tunnel simulation test. A new model calculating the annual mean atmospheric dispersion factor for coastal site of nuclear power plant is developed and established. This model considers not only the effect from mixing release and mixed layer but also the effect from the internal boundary layer and variation of diffusion parameters due to the distance from coast. The comparison between results obtained by the new model and current model shows that the ratio of annual mean atmospheric dispersion factor gained by the new model and the current one is about 2.0.
Schaffner, D W
1994-12-01
The inherent variability or 'variance' of growth rate measurements is critical to the development of accurate predictive models in food microbiology. A large number of measurements are typically needed to estimate variance. To make these measurements requires a significant investment of time and effort. If a single growth rate determination is based on a series of independent measurements, then a statistical bootstrapping technique can be used to simulate multiple growth rate measurements from a single set of experiments. Growth rate variances were calculated for three large datasets (Listeria monocytogenes, Listeria innocua, and Yersinia enterocolitica) from our laboratory using this technique. This analysis revealed that the population of growth rate measurements at any given condition are not normally distributed, but instead follow a distribution that is between normal and Poisson. The relationship between growth rate and temperature was modeled by response surface models using generalized linear regression. It was found that the assumed distribution (i.e. normal, Poisson, gamma or inverse normal) of the growth rates influenced the prediction of each of the models used. This research demonstrates the importance of variance and assumptions about the statistical distribution of growth rates on the results of predictive microbiological models.
Non-LTE model calculations for SN 1987A and the extragalactic distance scale
NASA Technical Reports Server (NTRS)
Schmutz, W.; Abbott, D. C.; Russell, R. S.; Hamann, W.-R.; Wessolowski, U.
1990-01-01
This paper presents model atmospheres for the first week of SN 1987A, based on the luminosity and density/velocity structure from hydrodynamic models of Woosley (1988). The models account for line blanketing, expansion, sphericity, and departures from LTE in hydrogen and helium and differ from previously published efforts because they represent ab initio calculations, i.e., they contain essentially no free parameters. The formation of the UV spectrum is dominated by the effects of line blanketing. In the absorption troughs, the Balmer line profiles were fit well by these models, but the observed emissions are significantly stronger than predicted, perhaps due to clumping. The generally good agreement between the present synthetic spectra and observations provides independent support for the overall accuracy of the hydrodynamic models of Woosley. The question of the accuracy of the Baade-Wesselink method is addressed in a detailed discussion of its approximations. While the application of the standard method produces a distance within an uncertainty of 20 percent in the case of SN 1987A, systematic errors up to a factor of 2 are possible, particularly if the precursor was a red supergiant.
Shen, Jiacheng; Wyman, Charles E
2012-01-01
A kinetic model was applied to improve determination of the sugar recovery standard (SRS) for biomass analysis. Three sets of xylose (0.10-1.00 g/L and 0.999-19.995 g/L) and glucose (0.206-1.602 g/L) concentrations were measured by HPLC following reaction of each for 1 h. Then, parameters in a kinetic model were fit to the resulting sugar concentration data, and the model was applied to predict the initial sugar concentrations and the best SRS value (SRS(p)). The initial sugar concentrations predicted by the model agreed with the actual initial sugar concentrations. Although the SRS(e) calculated directly from experimental data oscillated considerably with sugar concentration, the SRS(p) trend was smooth. Statistical analysis of errors and application of the F-test confirmed that application of the model reduced experimental errors in SRS(e). Reference SRS(e) values are reported for the three series of concentrations.
Thermal Model of Europa: Calculating the Effects of Surface Topography and Radiation from Jupiter
NASA Astrophysics Data System (ADS)
Bennett, Kristen; Paige, D.; Hayne, P.; Greenhagen, B.; Schenk, P.
2010-10-01
Europa's surface temperature distribution results from global effects such as insolation and heat flow, as well as local topography and possibly active tectonic processes. Accurate surface temperature models will greatly benefit future orbital investigations searching for global-scale variations in heat flow and local thermal anomalies resulting from frictional heating on faults or diapirs (Paige et al, this meeting). At the global scale, a major challenge for such models is the strong influence of Jupiter on the solar and infrared flux at Europa's surface. At the local scale, the thermal signature is dominated by complex topography. In order to address these two problems, we developed a model that modifies the Digital Moon program created by D. Paige and S. Meeker (2009) that uses a 3-dimensional geodesic gridding scheme to calculate the surface temperature of a body due to multiple scatterings of radiation and heat flow. We can account for Jupiter's influence on Europa by including data on Jupiter's solar and infrared radiation (which accounts for roughly 30% of the radiation at Europa), and on Europa's orbit (as Europa spends several minutes out of its 3.55 day orbital period in Jupiter's shadow). To address the issue of Europa's complicated terrain, we have simulated the effects of local heat flow as well as added topography and surface roughness to the thermal model by using digital elevation models produced by Schenk and Pappalardo (2004) that show altitude changes of several hundred meters and tectonic features that may produce regions of anomalously high heat flow.
A computer code for calculations in the algebraic collective model of the atomic nucleus
NASA Astrophysics Data System (ADS)
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1 , 1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments qˆM and are at most quadratic in the corresponding conjugate momenta πˆN (- 2 ≤ M , N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [ π ˆ ⊗ q ˆ ⊗ π ˆ ] 0 and [ π ˆ ⊗ π ˆ ] LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-02-15
Purpose : To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with{sup 125}I, {sup 103}Pd, or {sup 131}Cs seeds, and to investigate doses to ocular structures. Methods : An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results : Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model
Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M
2014-02-01
To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with(125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model
NASA Astrophysics Data System (ADS)
Giannoglou, V.; Stylianidis, E.
2016-06-01
Scoliosis is a 3D deformity of the human spinal column that is caused from the bending of the latter, causing pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The golden standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research, presenting the most important researches that have been done in the field of scoliosis, concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided in four fields, namely, the X-ray processing, the automatic Cobb angle(s) calculation, the 3D modelling of the spine that provides a more accurate representation of the trunk and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working on the field for the last decade at least, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both the cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors which can be found in the quantum transport toolbox ALACANT and are publicly available.
A Multilayered Box Model for Calculating Preliminary RemediationGoals in Soil Screening
Shan, Chao; Javandel, Iraj
2004-05-21
In the process of screening a soil against a certain contaminant, we define the health-risk based preliminary remediation goal (PRG) as the contaminant concentration above which some remedial action may be required. PRG is thus the first standard (or guidance) for judging a site. An over-estimated PRG (a too-large value) may cause us to miss some contaminated sites that can threaten human health and the environment. An under-estimated PRG (a too-small value), on the other hand, may lead to unnecessary cleanup and waste tremendous resources. The PRGs for soils are often calculated on the assumption that the contaminant concentration in soil does not change with time. However, that concentration usually decreases with time as a result of different chemical and transport mechanisms. The static assumption thus exaggerates the long-term exposure dose and results in a too-small PRG. We present a box model that considers all important transport processes and obeys the law of mass conservation. We can use the model as a tool to estimate the transient contaminant concentrations in air, soil and groundwater. Using these concentrations in conjunction with appropriate health risk parameters, we may estimate the PRGs for different contaminants. As an example, we calculated the tritium PRG for residential soils. The result is quite different from, but within the range of, the two versions of the corresponding PRG previously recommended by the U.S. EPA.
Elastic guided waves in plates with surface roughness. I. Model calculation
Lobkis, O.I.; Chimenti, D.E.
1997-07-01
This paper reports analytical research on the effect of surface roughness on ultrasonic guided waves in plates. The theoretical model is constructed by exploiting the phase-screen assumption that takes advantage of the Kirchhoff approximation, where, on a local scale, the roughness degrades only the signal phase. The effect of the rough surface on the guided wave is treated by decomposing the wave modes into their constituent partial waves and considering individually the effect of the roughness on the partial wave components as they reflect from the plate surfaces. An approximate dispersion relation is derived for the traction-free rough waveguide that is formally identical to the conventional Lamb wave equation, but incorporating the roughness parameter as a complex plate thickness. A more accurate version of the model calculation is generalized to fluid-immersed plates having only a single rough surface either on the same, or opposite, side of the plate as the incident ultrasonic field. Calculations of the reflection coefficients in the presence of roughness serve to illustrate the phenomena for the case of the guided waves. {copyright} {ital 1997 Acoustical Society of America.}
NASA Astrophysics Data System (ADS)
Margulis, Vl A.; Muryumin, E. E.; Gaiduk, E. A.
2016-05-01
An effective anisotropic tight-binding model is developed to analytically describe the low-energy electronic structure and optical response of phosphorene (a black phosphorus (BP) monolayer). Within the framework of the model, we derive explicit closed-form expressions, in terms of elementary functions, for the elements of the optical conductivity tensor of phosphorene. These relations provide a convenient parametrization of the highly anisotropic optical response of phosphorene, which allows the reflectance, transmittance, and absorbance of this material to be easily calculated as a function of the frequency of the incident radiation at arbitrary angles of incidence. The results of such a calculation are presented for both a free-standing phosphorene layer and the phosphorene layer deposited on a {{SiO}}2 substrate, and for the two principal cases of polarization of the incident radiation either parallel to or normal to the plane of incidence. Our findings (e.g., a ‘quasi-Brewster’ effect in the reflectance of the phosphorene/{{SiO}}2 overlayer system) pave the way for developing a new, purely optical method of distinguishing BP monolayers.
Calculation model scaling of CO laser with RF discharge in supersonic stream
NASA Astrophysics Data System (ADS)
Baranov, I. Y.; Koksharov, A. V.; Koptev, A. V.; Romodin, K. M.
2007-04-01
High-power gas lasers can be the effective tool in such applications as dismantlement of obsolete nuclear reactors, laser-hardening of the surfaces of railway rails etc. The carried out experiments have shown, that radio frequency (BY) discharge is an effective source of reception of the vibrational-excited CO molecules. Laser generation was shown on small-scale experimental installation. For transition to creation high-power CO laser the clear understanding of the processes occurring in a supersonic stream a CO mixture, excited by RF discharge is necessary, the calculation model of scaling CO laser with RF discharge in supersonic stream therefore was developed. The developed model, proceeding from the given power projected CO laser, allows to calculate parameters of laser installation and to optimize them with the purpose of reception of high value of efficiency and low cost of installation as a whole. The industrial CO laser for dismantlement of obsolete nuclear reactors and hardening the surfaces of railway rails is proposed. The estimated cost of laser is some hundred thousand dollars USA and small sizes of laser head give possibility to install it on manipulator without fiber-optic delivery.
A comparison of ozone trends from SME and SBUV satellite observations and model calculations
NASA Technical Reports Server (NTRS)
Rusch, D. W.; Clancy, R. T.
1988-01-01
Data on monthly ozone abundance trends near the stratopause, observed by the Ultraviolet Spectrometer (UVS) on the SME and by the Solar Backscatter Ultraviolet Instrument (SBUV) on NIMBUS-7 are presented for June, September, and January of the years 1982-1986. Globally averaged trends determined from the SME data (-0.5 + or - 1.3 percent/yr) were found to fall within model calculations by Rusch and Clancy (1988); the SBUV trends, on the other hand, were found to exceed maximum predicted ozone decreases by a factor of 3 or more. Detailed comparison of the two data sets indicated that an absolute offset of 3 percent/yr accounts for much of the difference between the two trends; the offset is considered to be due to incomplete characterization of the SBUV calibration drift. Both the UVS and SBUV data exhibited similar seasonal and latitudinal variations in ozone trends, which were reproduced by photochemical model calculations that included latitude-dependent NMC temperature trends over the 1982-1986 period.
A model to calculate consistent atmospheric emission projections and its application to Spain
NASA Astrophysics Data System (ADS)
Lumbreras, Julio; Borge, Rafael; de Andrés, Juan Manuel; Rodríguez, Encarnación
Global warming and air quality are headline environmental issues of our time and policy must preempt negative international effects with forward-looking strategies. As part of the revision of the European National Emission Ceilings Directive, atmospheric emission projections for European Union countries are being calculated. These projections are useful to drive European air quality analyses and to support wide-scale decision-making. However, when evaluating specific policies and measures at sectoral level, a more detailed approach is needed. This paper presents an original methodology to evaluate emission projections. Emission projections are calculated for each emitting activity that has emissions under three scenarios: without measures (business as usual), with measures (baseline) and with additional measures (target). The methodology developed allows the estimation of highly disaggregated multi-pollutant, consistent emissions for a whole country or region. In order to assure consistency with past emissions included in atmospheric emission inventories and coherence among the individual activities, the consistent emission projection (CEP) model incorporates harmonization and integration criteria as well as quality assurance/quality check (QA/QC) procedures. This study includes a sensitivity analysis as a first approach to uncertainty evaluation. The aim of the model presented in this contribution is to support decision-making process through the assessment of future emission scenarios taking into account the effect of different detailed technical and non-technical measures and it may also constitute the basis for air quality modelling. The system is designed to produce the information and formats related to international reporting requirements and it allows performing a comparison of national results with lower resolution models such as RAINS/GAINS. The methodology has been successfully applied and tested to evaluate Spanish emission projections up to 2020 for 26
Jinks, Rachel C; Royston, Patrick; Parmar, Mahesh K B
2015-10-12
Prognostic studies of time-to-event data, where researchers aim to develop or validate multivariable prognostic models in order to predict survival, are commonly seen in the medical literature; however, most are performed retrospectively and few consider sample size prior to analysis. Events per variable rules are sometimes cited, but these are based on bias and coverage of confidence intervals for model terms, which are not of primary interest when developing a model to predict outcome. In this paper we aim to develop sample size recommendations for multivariable models of time-to-event data, based on their prognostic ability. We derive formulae for determining the sample size required for multivariable prognostic models in time-to-event data, based on a measure of discrimination, D, developed by Royston and Sauerbrei. These formulae fall into two categories: either based on the significance of the value of D in a new study compared to a previous estimate, or based on the precision of the estimate of D in a new study in terms of confidence interval width. Using simulation we show that they give the desired power and type I error and are not affected by random censoring. Additionally, we conduct a literature review to collate published values of D in different disease areas. We illustrate our methods using parameters from a published prognostic study in liver cancer. The resulting sample sizes can be large, and we suggest controlling study size by expressing the desired accuracy in the new study as a relative value as well as an absolute value. To improve usability we use the values of D obtained from the literature review to develop an equation to approximately convert the commonly reported Harrell's c-index to D. A flow chart is provided to aid decision making when using these methods. We have developed a suite of sample size calculations based on the prognostic ability of a survival model, rather than the magnitude or significance of model coefficients. We have
Modeling a superficial radiotherapy X-ray source for relative dose calculations.
Johnstone, Christopher D; LaFontaine, Richard; Poirier, Yannick; Tambasco, Mauro
2015-05-08
The purpose of this study was to empirically characterize and validate a kilovoltage (kV) X-ray beam source model of a superficial X-ray unit for relative dose calculations in water and assess the accuracy of the British Journal of Radiology Supplement 25 (BJR 25) percentage depth dose (PDD) data. We measured central axis PDDs and dose profiles using an Xstrahl 150 X-ray system. We also compared the measured and calculated PDDs to those in the BJR 25. The Xstrahl source was modeled as an effective point source with varying spatial fluence and spectra. In-air ionization chamber measurements were made along the x- and y-axes of the X-ray beam to derive the spatial fluence and half-value layer (HVL) measurements were made to derive the spatially varying spectra. This beam characterization and resulting source model was used as input for our in-house dose calculation software (kVDoseCalc) to compute radiation dose at points of interest (POIs). The PDDs and dose profiles were measured using 2, 5, and 15 cm cone sizes at 80, 120, 140, and 150 kVp energies in a scanning water phantom using IBA Farmer-type ionization chambers of volumes 0.65 and 0.13 cc, respectively. The percent difference in the computed PDDs compared with our measurements range from -4.8% to 4.8%, with an overall mean percent difference and standard deviation of 1.5% and 0.7%, respectively. The percent difference between our PDD measurements and those from BJR 25 range from -14.0% to 15.7%, with an overall mean percent difference and standard deviation of 4.9% and 2.1%, respectively - showing that the measurements are in much better agreement with kVDoseCalc than BJR 25. The range in percent difference between kVDoseCalc and measurement for profiles was -5.9% to 5.9%, with an overall mean percent difference and standard deviation of 1.4% and 1.4%, respectively. The results demonstrate that our empirically based X-ray source modeling approach for superficial X-ray therapy can be used to accurately