NASA Astrophysics Data System (ADS)
Zhang, ZhenHua
2016-07-01
The high-spin rotational properties of two-quasiparticle bands in the doubly-odd nucleus 166Ta are analyzed using the cranked shell model with pairing correlations treated by a particle-number-conserving method, in which blocking effects are taken into account exactly. The experimental moments of inertia and alignments, and their variations with rotational frequency ħω, are reproduced very well by the particle-number-conserving calculations, which lends reliable support to the configuration assignments made for these bands in previous work. The backbendings in these two-quasiparticle bands are analyzed in terms of the calculated occupation probabilities and the contribution of each orbital to the total angular momentum alignment. The moments of inertia and alignments of the Gallagher-Moszkowski partners of the observed two-quasiparticle rotational bands are also predicted.
NASA Astrophysics Data System (ADS)
Li, Yu-Chun; He, Xiao-Tao
2016-07-01
The experimentally observed ground-state band built on the 1/2-[521] Nilsson state and the first excited band built on the 7/2-[514] Nilsson state of the odd-Z nucleus 255Lr are studied with the cranked shell model (CSM), with pairing correlations treated by the particle-number-conserving (PNC) method. This is the first detailed theoretical investigation of these rotational bands. Both the experimental kinematic and dynamic moments of inertia (J(1) and J(2)) versus rotational frequency are reproduced quite well by the PNC-CSM calculations. By comparing the theoretical kinematic moment of inertia J(1) with the experimental values extracted from different spin assignments, the spin assignment 17/2- → 13/2- is made for the lowest-lying 196.6(5) keV transition of the 1/2-[521] band, and 15/2- → 11/2- for the 189(1) keV transition of the 7/2-[514] band. The proton N = 7 major shell is included in the calculations. The intrusion of the high-j, low-Ω 1j15/2 (1/2-[770]) orbital at high spin leads to band crossings at ħω ≈ 0.20 MeV (ħω ≈ 0.25 MeV) for the 7/2-[514] α = -1/2 (α = +1/2) band, and at ħω ≈ 0.175 MeV for the 1/2-[521] α = -1/2 band. Further investigation shows that the band-crossing frequencies depend on the quadrupole deformation.
Calculations of signature for Dy, Er, Yb nuclei
Mueller, W.F.; Jensen, H.J.; Reviol, W.
1993-10-01
The energy signature splitting Δe′ of rotational bands depends sensitively on the deformation, pair correlations, and Fermi level of the particular nucleus. Calculating Δe′ is therefore very useful for understanding the experimentally observed properties of such bands. In principle, one can extract Δe′ from Total Routhian Surface (TRS) calculations as well as from the Cranked Shell Model (CSM). However, the available codes are not based on a fully self-consistent treatment of all critical parameters: deformation, pairing, and Fermi level. The TRS calculations, while modeling the deformation in a "realistic" manner as a function of rotational frequency and changes in the quasiparticle configuration, have deficiencies particularly in the treatment of pairing. The CSM codes, on the other hand, estimate pairing and the location of the Fermi level more precisely than the TRS codes, but work under the assumption of a constant deformation. We have developed a method for calculating Δe′ that utilizes the most advanced features of both types of codes. This ensures that the best parameter values are used as input for calculating the routhians. As a test, we have used a series of odd-A Dy, Er, and Yb nuclei around A = 160 and compared the results for the νi13/2 shell with experimental data on Δe′. Details of our method will be discussed and the comparison will be presented.
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Model calculations of lightning electric fields
NASA Technical Reports Server (NTRS)
Master, M. J.; Uman, M. A.; Krider, E. P.
1982-01-01
Calculated time-domain waveforms and frequency spectra are presented for three of the most important processes in a lightning discharge to ground: the return stroke, the stepped leader, and the preliminary breakdown. For each of these processes, model calculations are given at ranges of 200 m and 50 km. The calculations are compared with available time- and frequency-domain measurements.
Precipitates/Salts Model Sensitivity Calculation
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
Numerical Calculation of Model Rocket Trajectories.
ERIC Educational Resources Information Center
Keeports, David
1990-01-01
Discussed is the use of model rocketry to teach the principles of Newtonian mechanics. Included are forces involved; calculations for vertical launches; two-dimensional trajectories; and variations in mass, drag, and launch angle. (CW)
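The force balance the abstract describes (thrust, gravity, and quadratic drag acting on a rocket of decreasing mass) lends itself to a short numerical integration. The sketch below is illustrative only: the thrust, burn time, masses, and drag factor are invented values, not data from the article.

```python
# Vertical model-rocket launch integrated with a simple Euler scheme.
# All parameters are made-up examples for illustration.

G = 9.81          # m/s^2, gravitational acceleration
THRUST = 6.0      # N, assumed constant during the burn
BURN_TIME = 1.6   # s
M_DRY = 0.10      # kg, rocket without propellant
M_PROP = 0.02     # kg, propellant, assumed burned at a constant rate
K_DRAG = 2.0e-4   # kg/m, lumped drag factor (0.5 * rho * Cd * A)

def simulate(dt=1e-3):
    """Return (apogee, time_to_apogee) for a vertical launch."""
    t, v, h = 0.0, 0.0, 0.0
    while True:
        burning = t < BURN_TIME
        m = M_DRY + (M_PROP * (1 - t / BURN_TIME) if burning else 0.0)
        thrust = THRUST if burning else 0.0
        drag = K_DRAG * v * abs(v)        # always opposes the motion
        a = (thrust - drag) / m - G
        v += a * dt
        h += v * dt
        t += dt
        if v <= 0.0 and not burning:      # apogee reached after burnout
            return h, t

apogee, t_apogee = simulate()
print(f"apogee ~ {apogee:.0f} m after {t_apogee:.1f} s")
```

Varying `K_DRAG` or the launch parameters reproduces the kind of sensitivity study the article assigns as exercises.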
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Hybrid reduced order modeling for assembly calculations
Bang, Y.; Abdel-Khalik, H. S.; Jessee, M. A.; Mertyurek, U.
2013-07-01
While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
On a model of calculating bond strength
NASA Technical Reports Server (NTRS)
Yue, A. S.; Yang, T. T.; Lin, T. S.
1976-01-01
Diffusion bonding is a fabrication process that joins fibers and a matrix together to form a composite. The efficiency of the bonding process depends on temperature, time, and pressure. Based on a simplified pair-potential model, an expression for the bond energy at the fiber-matrix interface is formulated in terms of these three parameters. From this expression and the mean atomic distance, the bond strength between the fibers and the matrix can be calculated.
Modeling, calculating, and analyzing multidimensional vibrational spectroscopies.
Tanimura, Yoshitaka; Ishizaki, Akihito
2009-09-15
Spectral line shapes in a condensed phase contain information from various dynamic processes that modulate the transition energy, such as microscopic dynamics, inter- and intramolecular couplings, and solvent dynamics. Because nonlinear response functions are sensitive to the complex dynamics of chemical processes, multidimensional vibrational spectroscopies can separate these processes. In multidimensional vibrational spectroscopy, the nonlinear response functions of a molecular dipole or polarizability are measured using ultrashort pulses to monitor inter- and intramolecular vibrational motions. Because the complex profile of such signals depends on many dynamic and structural aspects of a molecular system, researchers would like to have a theoretical understanding of these phenomena. In this Account, we explore and describe the roles of different physical phenomena that arise from the peculiarities of the system-bath coupling in multidimensional spectra. We also present simple analytical expressions for a weakly coupled multimode Brownian system, which we use to analyze the results obtained from experiments and simulations. To calculate the nonlinear optical response, researchers commonly use a particular form of a system Hamiltonian fit to the experimental results. The optical responses of molecular vibrational motions have been studied with either an oscillator model or a vibrational energy state model. In principle, both models should give the same results as long as the energy states are chosen to be the eigenstates of the oscillator model. The energy state model can provide a simple description of nonlinear optical processes because the diagrammatic Liouville space theory developed for electronically resonant spectroscopies can easily handle the three or four energy states involved in high-frequency vibrations. However, the energy state model breaks down if we include thermal excitation and relaxation processes in the dynamics to put the system in a
Isomer ratio calculations using modeled discrete levels
Gardner, M.A.; Gardner, D.G.; Hoff, R.W.
1984-10-16
Isomer ratio calculations were made for the reactions 175Lu(n,γ)176m,gLu, 175Lu(n,2n)174m,gLu, 237Np(n,2n)236m,gNp, 241Am(n,γ)242m,gAm, and 243Am(n,γ)244m,gAm using modeled level structures in the deformed, odd-odd product nuclei. The hundreds of discrete levels and their gamma-ray branching ratios provided by the modeling are necessary to achieve agreement with experiment. Many rotational bands must be included in order to obtain a sufficiently representative selection of K quantum numbers. The levels of each band must be extended to appropriately high values of angular momentum.
Density functional calculations on model tyrosyl radicals.
Himo, F; Gräslund, A; Eriksson, L A
1997-01-01
A gradient-corrected density functional theory approach (PWP86) has been applied, together with large basis sets (IGLO-III), to investigate the structure and hyperfine properties of model tyrosyl free radicals. In nature, these radicals are observed in, e.g., the charge transfer pathways in photosystem II (PSII) and in ribonucleotide reductases (RNRs). By comparing spin density distributions and proton hyperfine couplings with experimental data, it is confirmed that the tyrosyl radicals present in the proteins are neutral. It is shown that hydrogen bonding to the phenoxyl oxygen atom, when present, causes a reduction in spin density on O and a corresponding increase on C4. Calculated proton hyperfine coupling constants for the beta-protons show that the alpha-carbon is rotated 75-80 degrees out of the plane of the ring in PSII and Salmonella typhimurium RNR, but only 20-30 degrees in, e.g., Escherichia coli, mouse, herpes simplex, and bacteriophage T4-induced RNRs. Furthermore, based on the present calculations, we have revised the empirical parameters used in the experimental determination of the oxygen spin density in the tyrosyl radical in E. coli RNR and of the ring carbon spin densities, from measured hyperfine coupling constants. PMID:9083661
SU(3) in shell-model calculations
Millener, D.J.
1991-10-01
The essential steps in the formalism for performing multi-shell calculations in an SU(3) basis are outlined, and examples of applications in which the SU(3) classification aids the physical interpretation of structure calculations are given.
A radiation model for geocentric trajectory calculations
NASA Technical Reports Server (NTRS)
Malchow, H. L.; Whitney, C. K.
1975-01-01
A solar cell degradation model developed for the SECKSPOT trajectory optimization code is presented. The model is based on two analytic expressions, one describing solar cell power degradation as a function of 1 MeV equivalent fluence and cell base resistivity and thickness, and one describing a spatial field of 1 MeV equivalent electron flux. The model extends the latitude range, provides a continuous and smooth representation of the flux field, and provides for changing the cell characteristics. Construction of a 1 MeV electron flux model and of a power loss model are described. It is shown that modeling the 1 MeV flux field as a separate entity allows simple consideration of both front and back shielding, and that the coefficients relating to specific cell damage data can be simply updated using the latest cell damage data once the general analytical characteristics of the model have been established.
Model potential calculations of lithium transitions.
NASA Technical Reports Server (NTRS)
Caves, T. C.; Dalgarno, A.
1972-01-01
Semi-empirical potentials are constructed that have eigenvalues close in magnitude to the binding energies of the valence electron in lithium. The potentials include the long range polarization force between the electron and the core. The corresponding eigenfunctions are used to calculate dynamic polarizabilities, discrete oscillator strengths, photoionization cross sections and radiative recombination coefficients. A consistent application of the theory imposes a modification on the transition operator, but its effects are small for lithium. The method presented can be regarded as a numerical generalization of the widely used Coulomb approximation.
Model calculates wax deposition for North Sea oils
Majeed, A.; Bringedal, B.; Overa, S.
1990-06-18
A model for calculation of wax formation and deposition in pipelines and process equipment has been developed along with a new method for wax-equilibrium calculations using input from TBP distillation cuts. Selected results from the wax formation and deposition model have been compared with laboratory data from wax equilibrium and deposition experiments, and there have been some field applications of the model.
THREE-DIMENSIONAL MODEL FOR HYPERTHERMIA CALCULATIONS
Realistic three-dimensional models that predict temperature distributions with a high degree of spatial resolution in bodies exposed to electromagnetic (EM) fields are required in the application of hyperthermia for cancer treatment. To ascertain the thermophysiologic response of...
CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING
Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...
Quantum Biological Channel Modeling and Capacity Calculation
Djordjevic, Ivan B.
2012-01-01
Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There have been many attempts to explain the structure of the genetic code and the transfer of information from DNA to protein using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of the biological channel based on codon base kets (basis vectors), and determine the quantum channel model suitable for studying the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself, as it represents an imperfect storage of genetic information; (ii) replication errors introduced during the DNA replication process; (iii) transcription errors introduced during DNA to mRNA transcription; and (iv) translation errors introduced during the translation process. Using this model, we determine the biological quantum channel capacity and compare it against the corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance for future study of quantum DNA error correction, developing a quantum mechanical model of aging, developing quantum mechanical models for tumors/cancer, and the study of intracellular dynamics in general. PMID:25371271
Droplet distribution models for visibility calculation
NASA Astrophysics Data System (ADS)
Bernardin, F.; Colomb, M.; Egal, F.; Morange, P.; Boreux, J.-J.
2010-07-01
More efficient prediction of fog occurrence and visibility is required in order to improve both safety and traffic management in critical adverse weather situations. Observation and simulation of fog characteristics contribute to a better understanding of the phenomena and to adapting technical solutions against visibility reduction. Simulating visibility reduction in fog with a light-scattering model depends on the size and concentration of the droplets, so it is necessary to include in the software functions for a droplet distribution model rather than data files of single measurements. The aim of the present work is to revisit some droplet distribution models of fog (Shettle and Fenn 1979) and update them using recent experimental measurements. The models mentioned above were established from experimental data obtained with sensors of the 1970s; current sensors are able to detect droplets with radii down to 0.2 μm, which was not the case with the older instruments. A surface observation campaign was carried out at Palaiseau and Toulouse, France, between 2006 and 2008. These experiments allowed the collection of microphysical fog data, in particular droplet size distributions, with a "Palas" optical granulometer. Based on these data, an analysis is carried out to provide a droplet distribution model. The first approach consists of testing the four Gamma laws proposed by Shettle and Fenn (1979); adjusting the coefficients changes the characteristics from advection to radiation fog. These functions did not fit the new data set collected with the Palas sensor. New algorithms based on Gamma and Lognormal laws are therefore proposed and discussed in comparison with the previous models. For road applications, the coefficients of the proposed models are evaluated for different visibility classes ranging from 50 to 200 meters.
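As a rough illustration of the kind of model the abstract describes, the sketch below evaluates a modified-Gamma droplet size distribution and converts it to a visibility estimate via the extinction coefficient and Koschmieder's relation. The shape, scale, and amplitude coefficients are invented for illustration; they are not the Shettle-Fenn fits or the Palas-derived values.

```python
import math

# Modified-Gamma droplet size distribution, in the spirit of
# Shettle and Fenn (1979):  n(r) = a * r**alpha * exp(-b * r),
# with r in microns and n in cm^-3 per micron.  Extinction is taken
# in the geometric-optics limit (Q_ext ~ 2), and visibility follows
# Koschmieder's relation V = 3.912 / beta.  Coefficients are illustrative.

ALPHA, B, A = 3.0, 0.6, 2.16   # shape, scale (1/micron), amplitude

def n(r_um):
    """Droplet number density per micron of radius, in cm^-3."""
    return A * r_um**ALPHA * math.exp(-B * r_um)

def extinction_m1(r_max_um=60.0, steps=6000):
    """beta = 2*pi * integral of r^2 n(r) dr, converted to 1/m."""
    dr = r_max_um / steps
    total = sum((i * dr) ** 2 * n(i * dr) * dr for i in range(1, steps + 1))
    # micron^2 * cm^-3  ->  1e-12 m^2 / 1e-6 m^3  =  1e-6 per m
    return 2.0 * math.pi * total * 1e-6

beta = extinction_m1()
visibility = 3.912 / beta
print(f"beta = {beta:.4f} 1/m, visibility ~ {visibility:.0f} m")
```

With these example coefficients the visibility lands in the 50-200 m fog range the abstract targets; refitting `ALPHA`, `B`, and `A` to granulometer data is exactly the adjustment step the paper describes.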
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Cucinotta, F. A.; zeitlin, C. J.; Cleghorn, T. F.
2004-01-01
The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft, currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model-calculated predictions. Model calculations are compared with the MARIE measurements of dose and dose-equivalent values, along with the available particle flux distribution. The calculated particle flux includes the GCR elemental composition of atomic number Z = 1-28 and mass number A = 1-58. Particle flux calculations specific to the current MARIE mapping period are reviewed and presented.
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
P. Marnier
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b).
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1990-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1988-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger, it permits the development of a comprehensive program having libraries of driving force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
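For a flavor of what such an instability calculation involves, the sketch below finds the load-controlled instability point where an elastic driving force becomes tangent to a power-law R-curve. The R-curve constants and crack geometry are invented, and the infinite-plate relation K = σ√(πa) stands in for the paper's library of driving-force equations.

```python
import math

# Load-controlled R-curve instability sketch.  Assume (not from the
# paper) a power-law R-curve K_R = K0 + C * da**M and the infinite-plate
# driving force K = sigma * sqrt(pi * a).  Along stable tearing,
# sigma(da) = K_R(da) / sqrt(pi * (a0 + da)); instability occurs at the
# maximum of that curve, i.e. where driving force and R-curve are tangent.

K0, C, M = 40.0, 30.0, 0.5     # MPa*sqrt(m); illustrative R-curve fit
A0 = 0.010                     # m, initial crack size

def k_r(da):
    """Crack growth resistance after extension da (m)."""
    return K0 + C * da**M

def critical_stress():
    """Scan crack extension for the peak of sigma(da)."""
    best_sigma, best_da = 0.0, 0.0
    da = 1e-5
    while da < 0.05:
        sigma = k_r(da) / math.sqrt(math.pi * (A0 + da))
        if sigma > best_sigma:
            best_sigma, best_da = sigma, da
        da += 1e-5
    return best_sigma, best_da

sigma_c, da_c = critical_stress()
print(f"instability at sigma ~ {sigma_c:.0f} MPa, da ~ {da_c*1000:.2f} mm")
```

Swapping in a different driving-force equation or R-curve model equation, as the paper's library design suggests, only changes `k_r` and the denominator.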
Model calculations of nuclear data for biologically-important elements
Chadwick, M.B.; Blann, M.; Reffo, G.; Young, P.G.
1994-05-01
We describe calculations of neutron-induced reactions on carbon and oxygen for incident energies up to 70 MeV, the relevant clinical energy range for neutron radiation therapy. Our calculations using the FKK-GNASH, GNASH, and ALICE codes are compared with experimental measurements, and their usefulness for modeling reactions on biologically important elements is assessed.
In-Drift Microbial Communities Model Validation Calculations
D. M. Jolley
2001-09-24
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculation
D. M. Jolley
2001-10-31
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
D.M. Jolley
2001-12-18
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listings, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over a 2D prism with ≤ 9 vertices (Talwani method); (2) magnetic anomaly (ΔT, ΔV, or ΔH) over a 2D prism with ≤ 8 vertices (Talwani method); (3) total-field magnetic anomaly profile over a thick sheet/thin dike; (4) single dipping seismic refractor: interpretation and design; (5) ≤ 4 dipping seismic refractors: interpretation; (6) ≤ 4 dipping seismic refractors: design; (7) vertical electrical sounding over ≤ 10 horizontal layers: Schlumberger or Wenner forward calculation; (8) vertical electrical sounding: Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers: forward calculation; (10) petrophysics: a.c. electrical parameters; (11) petrophysics: elastic constants; (12) digital convolution with a filter of length ≤ 10.
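Program (4), the single dipping refractor, rests on the textbook slant-raypath relations: the downdip and updip apparent velocities give the critical angle plus and minus the dip. A minimal Python sketch of that interpretation step (not the original HP-67/97 listing; the input values below are invented):

```python
import math

def dipping_refractor(v1, v_down, v_up):
    """Interpret a single dipping refractor from downdip/updip apparent
    velocities using the classical reversed-profile relations.

    v1      -- velocity of the upper layer
    v_down  -- apparent refractor velocity shooting downdip
    v_up    -- apparent refractor velocity shooting updip
    Returns (v2, critical_angle_deg, dip_deg).
    """
    a = math.asin(v1 / v_down)   # = theta_c + dip (Snell's law, downdip)
    b = math.asin(v1 / v_up)     # = theta_c - dip (updip)
    theta_c = 0.5 * (a + b)
    dip = 0.5 * (a - b)
    v2 = v1 / math.sin(theta_c)  # true refractor velocity
    return v2, math.degrees(theta_c), math.degrees(dip)

# Forward model a hypothetical case (v1 = 1500 m/s, v2 = 3000 m/s,
# dip = 5 degrees), then invert it as a round-trip check.
theta_c = math.asin(1500.0 / 3000.0)
dip = math.radians(5.0)
v_down = 1500.0 / math.sin(theta_c + dip)
v_up = 1500.0 / math.sin(theta_c - dip)
v2, tc_deg, dip_deg = dipping_refractor(1500.0, v_down, v_up)
```

The round trip recovers the refractor velocity and dip that generated the apparent velocities.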
Approximate flash calculations for equation-of-state compositional models
Nghiem, L.X.; Li, Y.K.
1985-02-01
An approximate method for flash calculations (AFC) with an equation of state is presented. The equations for AFC are obtained by linearizing the thermodynamic equilibrium equations at an equilibrium condition termed the reference condition. The AFC equations are much simpler than the actual equations for flash calculations and yet give almost the same results. A procedure for generating new reference conditions to keep the AFC results close to the true flash calculation (TFC) results is described. AFC is compared to TFC in the calculation of standard laboratory tests and in the simulation of gas injection processes with a compositional model. Excellent results are obtained with AFC in less than half the original execution time.
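A true flash calculation iterates between phase-equilibrium updates and a phase-split solve; the inner phase-split step is the Rachford-Rice equation, which the AFC linearization avoids re-solving at every step. A sketch of that inner step with constant K-values (an illustration only; a real EOS flash updates the K-values from fugacity coefficients):

```python
def rachford_rice(z, K, tol=1e-12):
    """Solve the Rachford-Rice equation for the vapor fraction V by
    bisection: sum_i z_i*(K_i - 1)/(1 + V*(K_i - 1)) = 0.
    g(V) is monotonically decreasing, so bisection is safe once
    g(0) > 0 > g(1) brackets a two-phase solution."""
    def g(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical binary feed: equimolar, K = [4.0, 0.5]
z, K = [0.5, 0.5], [4.0, 0.5]
V = rachford_rice(z, K)
x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid comp.
```

For this feed the analytic vapor fraction is 5/6, and the recovered liquid mole fractions sum to one.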
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer, as well as turbidity, were used as input to the model calculation. The performance of the model was tested against spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model, and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
HOM study and parameter calculation of the TESLA cavity model
NASA Astrophysics Data System (ADS)
Zeng, Ri-Hua; Schuh, Marcel; Gerigk, Frank; Wegner, Rolf; Pan, Wei-Min; Wang, Guang-Wei; Liu, Rong
2010-01-01
The Superconducting Proton Linac (SPL) is a project for a superconducting, high-current H⁻ accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. The HFSS scripting language was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts can also be applied to cavities with different cell numbers and geometric structures). The automatically calculated results are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.
Microbial Communities Model Parameter Calculation for TSPA/SR
D. Jolley
2001-07-16
This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section II-5.3. Second, this calculation provides the information necessary to supersede DTN MO9909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic (second-order) regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN, MO0012MAJIONIS.000, that is intended to replace the currently cited DTN GS980908312322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
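The quadratic regression relationships mentioned above are ordinary least-squares fits of a second-order polynomial to temperature-dependent data. A minimal pure-Python sketch of such a fit (the ΔG-versus-temperature numbers below are invented; the report's actual coefficients come from its qualified data sets):

```python
def quadratic_fit(ts, ys):
    """Least-squares fit y = a + b*t + c*t**2 via the 3x3 normal
    equations, solved with Gaussian elimination (partial pivoting)."""
    s = [sum(t ** k for t in ts) for k in range(5)]            # power sums
    r = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2], r[0]],
         [s[1], s[2], s[3], r[1]],
         [s[2], s[3], s[4], r[2]]]
    for i in range(3):                       # forward elimination
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 4):
                A[k][j] -= f * A[i][j]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Synthetic Delta-G-versus-temperature data lying on a known quadratic
ts = [25.0, 40.0, 60.0, 80.0, 95.0]
ys = [-120.0 + 0.8 * t - 0.002 * t ** 2 for t in ts]
a, b, c = quadratic_fit(ts, ys)
```

Since the synthetic data lie exactly on a quadratic, the fit recovers the generating coefficients.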
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit and eclipse light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 s with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman.
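batman's fast integrator handles the limb-darkened cases; the geometry underneath every transit model is the occulted fraction of the stellar disk, which for a uniform disk has a closed form. A pure-Python illustration of that uniform-source case (a toy sketch of the geometry, not batman's API):

```python
import math

def uniform_disk_flux(d, p):
    """Normalized flux when a planet of radius p (in stellar radii) sits
    at projected center-to-center separation d from a *uniform* stellar
    disk. Limb darkening, which batman adds on top, is omitted here."""
    if d >= 1.0 + p:                 # out of transit
        return 1.0
    if d <= 1.0 - p:                 # planet fully inside the disk
        return 1.0 - p * p
    # partial overlap: standard circle-circle lens-area formula
    k0 = math.acos((d * d + p * p - 1.0) / (2.0 * d * p))
    k1 = math.acos((d * d + 1.0 - p * p) / (2.0 * d))
    area = p * p * k0 + k1 - 0.5 * math.sqrt(
        4.0 * d * d - (1.0 + d * d - p * p) ** 2)
    return 1.0 - area / math.pi

flux_mid = uniform_disk_flux(0.0, 0.1)   # mid-transit depth = p**2
flux_out = uniform_disk_flux(1.5, 0.1)   # out-of-transit baseline
flux_ing = uniform_disk_flux(1.0, 0.1)   # mid-ingress, partial overlap
```

Mid-transit the flux drops by exactly p squared; during ingress it lies between that depth and the baseline.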
Statistical Model Calculations for (n,γ) Reactions
NASA Astrophysics Data System (ADS)
Beard, Mary; Uberseder, Ethan; Wiescher, Michael
2015-05-01
Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
Nonlinear triggered lightning models for use in finite difference calculations
NASA Technical Reports Server (NTRS)
Rudolph, Terence; Perala, Rodney A.; Ng, Poh H.
1989-01-01
Two nonlinear triggered lightning models have been developed for use in finite difference calculations. Both are based on three species of air chemistry physics and couple nonlinearly calculated air conductivity to Maxwell's equations. The first model is suitable for use in three-dimensional modeling and has been applied to the analysis of triggered lightning on the NASA F106B Thunderstorm Research Aircraft. The model calculates number densities of positive ions, negative ions, and electrons as a function of time and space through continuity equations, including convective derivative terms. The set of equations is closed by using experimentally determined mobilities, and the mobilities are also used to determine the air conductivity. Results from the model's application to the F106B are shown. The second model is two-dimensional and incorporates an enhanced air chemistry formulation. Momentum conservation equations replace the mobility assumption of the first model. Energy conservation equations for neutrals, heavy ions, and electrons are also used. Energy transfer into molecular vibrational modes is accounted for. The purpose for the enhanced model is to include the effects of temperature into the air breakdown, a necessary step if the model is to simulate more than the very earliest stages of breakdown. Therefore, the model also incorporates a temperature-dependent electron avalanche rate. Results from the model's application to breakdown around a conducting ellipsoid placed in an electric field are shown.
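The first model's closure, as described above, feeds the continuity-equation number densities through experimentally determined mobilities to get the air conductivity. Both pieces can be sketched in a few lines (the densities, mobilities, and recombination coefficient below are illustrative placeholders, not the paper's values):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def conductivity(densities, mobilities):
    """Air conductivity sigma = e * sum_s n_s * mu_s over the charged
    species (electrons, positive ions, negative ions).
    Densities in m^-3, mobilities in m^2/(V*s)."""
    return E_CHARGE * sum(n * mu for n, mu in zip(densities, mobilities))

def euler_recombination_step(n_e, n_pos, beta, dt):
    """One forward-Euler update of the recombination sink
    dn/dt = -beta * n_e * n_pos that appears in the continuity
    equations; both species lose the same amount of charge."""
    loss = beta * n_e * n_pos * dt
    return n_e - loss, n_pos - loss

# Hypothetical ionized-air sample: n_e, n_+, n_- and their mobilities.
sigma = conductivity([1e15, 1e15, 1e14],
                     [4.0e-2, 2.0e-4, 2.2e-4])
sigma_e_only = conductivity([1e15, 0.0, 0.0], [4.0e-2, 0.0, 0.0])
ne2, np2 = euler_recombination_step(1e15, 1e15, 2e-13, 1e-9)
```

Because the electron mobility dwarfs the ion mobilities, the electrons dominate the conductivity, which is why electron attachment and recombination matter so much in these simulations.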
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. The newest, 6th version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new vaned diffuser model includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths, yet the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This 6th version of the developed computer program is already being applied successfully in design practice.
Separated transonic airfoil flow calculations with a nonequilibrium turbulence model
NASA Technical Reports Server (NTRS)
King, L. S.; Johnson, D. A.
1985-01-01
Navier-Stokes transonic airfoil calculations based on a recently developed nonequilibrium, turbulence closure model are presented for a supercritical airfoil section at transonic cruise conditions and for a conventional airfoil section at shock-induced stall conditions. Comparisons with experimental data are presented which show that this nonequilibrium closure model performs significantly better than the popular Baldwin-Lomax and Cebeci-Smith equilibrium algebraic models when there is boundary-layer separation that results from the inviscid-viscous interactions.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
NASA Astrophysics Data System (ADS)
Urban, Miroslav; Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína; Hobza, Pavel
2015-01-01
We present results using the OVOS approach (Optimized Virtual Orbitals Space), aimed at enhancing the effectiveness of Coupled Cluster calculations. This approach reduces the total computer time required for large-scale CCSD(T) calculations by about a factor of ten when the original full virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it enables calculations that were formerly prohibitively demanding. We focus on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the effect of solvation on the electron affinity of uracil, are also briefly mentioned.
Approximate flash calculations for equation-of-state compositional models
Nghiem, L.X.; Li, Y.K.
1990-02-01
An approximate flash-calculation (AFC) method with an equation of state (EOS) is presented. The equations for AFC are obtained by linearizing the thermodynamic equilibrium equations at an equilibrium condition called the reference condition. The AFC equations are much simpler than the actual equations for flash calculations and yet give almost the same results. A procedure for generating new reference conditions to keep the AFC results close to the true flash-calculation (TFC) results is described. AFC is compared with TFC in the calculation of standard laboratory tests and in the simulation of gas-injection processes with a compositional model. Excellent results are obtained with AFC in less than half the original execution time.
Fully Relativistic Calculations for Non-LTE Modeling
NASA Astrophysics Data System (ADS)
Fontes, Christopher J.; Zhang, Hong Lin; Abdallah, Joseph, Jr.; Clark, Robert E. H.; Kilcrease, David P.
1999-11-01
A set of fully relativistic codes has been developed to calculate non-LTE, configuration-average atomic models for use in ICF simulations. The codes are based on the same architecture as that used by existing atomic data codes at Los Alamos such as CATS, ACE, GIPPER and FINE. Therefore the new codes are just as easily used in detailed configuration calculations, similar to work reported at previous meetings. In keeping with earlier work we provide sample calculations for some simple gold models. The effect of a fully relativistic treatment on quantities such as average charge state, ion fractions and emissivity will be reported. The possibility of applying the new codes to a very large number of configurations will also be discussed.
Comparison of statistical model calculations for stable isotope neutron capture
NASA Astrophysics Data System (ADS)
Beard, M.; Uberseder, E.; Crowter, R.; Wiescher, M.
2014-09-01
It is a well-observed result that different nuclear input models sensitively affect Hauser-Feshbach (HF) cross-section calculations. Less well-known, however, are the effects on calculations originating from nonmodel aspects, such as experimental data truncation and transmission function energy binning, as well as code-dependent aspects, such as the definition of level-density matching energy and the inclusion of shell correction terms in the level-density parameter. To investigate these aspects, Maxwellian-averaged neutron capture cross sections (MACS) at 30 keV have been calculated using the well-established statistical Hauser-Feshbach model codes talys and non-smoker for approximately 340 nuclei. For the same nuclei, MACS predictions have also been obtained using two new HF codes, cigar and sapphire. Details of these two codes, which have been developed to contain an overlapping set of identically implemented nuclear physics input models, are presented. It is generally accepted that HF calculations are valid to within a factor of 3. It was found that this factor is dependent on both model and nonmodel details, such as the coarseness of the transmission function energy binning and data truncation, as well as variances in details regarding the implementation of level-density parameter, backshift, matching energy, and giant dipole strength function parameters.
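The MACS at thermal energy kT is defined as (2/√π)(kT)⁻² ∫ σ(E) E exp(−E/kT) dE. A small numerical sketch of that average (trapezoid rule, with a toy 1/v cross section; the codes named above of course use actual HF cross sections). For pure 1/v capture the MACS equals σ evaluated at E = kT, which gives a clean consistency check:

```python
import math

def macs(sigma, kT, n_steps=20000, e_max_factor=40.0):
    """Maxwellian-averaged cross section at thermal energy kT:
    MACS = (2/sqrt(pi)) * (kT)**-2 * integral of sigma(E)*E*exp(-E/kT),
    integrated with the trapezoid rule. The truncation at 40*kT and the
    energy binning are exactly the kind of 'nonmodel' choices whose
    influence the paper examines."""
    e_max = e_max_factor * kT
    h = e_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        E = i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * sigma(E) * E * math.exp(-E / kT)
    total *= h
    return (2.0 / math.sqrt(math.pi)) * total / (kT * kT)

# Toy 1/v cross section sigma(E) = c / sqrt(E) (arbitrary units)
c = 3.0
kT = 0.030  # e.g. 30 keV expressed in MeV
m = macs(lambda E: c / math.sqrt(E) if E > 0.0 else 0.0, kT)
expected = c / math.sqrt(kT)  # analytic MACS for a 1/v cross section
```

The numerical average reproduces the analytic 1/v result to well below a percent, so deviations seen with real cross sections come from the physics, not the quadrature.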
Preliminary Modulus and Breakage Calculations on Cellulose Models
Technology Transfer Automated Retrieval System (TEKTRAN)
The Young's modulus of polymers can be calculated by stretching molecular models with the computer. The molecule is stretched, and the derivative of the stored potential energy with respect to displacement (evaluated from several displacements), divided by the molecular cross-sectional area, is the stress. The modulus is the slope o...
Teaching Modelling Concepts: Enter the Pocket-Size Programmable Calculator.
ERIC Educational Resources Information Center
Gaar, Kermit A., Jr.
1980-01-01
Addresses the problem of the failure of students to see a physiological system in an integrated way. Programmable calculators armed with a printer are suggested as useful teaching devices that avoid the expense and the unavailability of computers for modelling in teaching physiology. (Author/SA)
Model calculations for diffuse molecular clouds. [interstellar hydrogen cloud model
NASA Technical Reports Server (NTRS)
Glassgold, A. E.; Langer, W. D.
1974-01-01
A steady state isobaric cloud model is developed. The pressure, thermal, electrical, and chemical balance equations are solved simultaneously with a simple one-dimensional approximation to the equation of radiative transfer appropriate to diffuse clouds. Cooling is mainly by CII fine structure transitions, and a variety of heating mechanisms are considered. Particular attention is given to the abundance variation of H2. Inhomogeneous density distributions are obtained because of the attenuation of the interstellar UV field and the conversion from atomic to molecular hydrogen. The effects of changing the model parameters are described, and the applicability of the model to OAO-3 observations is discussed. Good qualitative agreement with the fractional H2 abundance determinations has been obtained. The observed kinetic temperatures near 80 K can also be achieved by grain photoelectron heating. The problem of the electron density is solved taking special account of the various hydrogen ions as well as heavier ones.
Microscopic Shell Model Calculations for the Fluorine Isotopes
NASA Astrophysics Data System (ADS)
Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Vary, James P.; Shirokov, Andrey M.
2015-10-01
Using a formalism based on the No Core Shell Model (NCSM), we have determined microscopically the core and single-particle energies and the effective two-body interactions that are the input to standard shell model (SSM) calculations. The basic idea is to perform a succession of an Okubo-Lee-Suzuki (OLS) transformation, a NCSM calculation, and a second OLS transformation to a further reduced space, such as the sd-shell, which allows the separation of the many-body matrix elements into an ``inert'' core part plus a few-valence-nucleons calculation. In the present investigation we use this technique to calculate the properties of the nuclides in the Fluorine isotopic chain, using the JISP16 nucleon-nucleon interaction. The obtained SSM input, along with the results of the SSM calculations for the Fluorine isotopes, will be presented. This work was supported in part by TUBITAK-BIDEB, the US DOE, the US NSF, NERSC, and the Russian Ministry of Education and Science.
On the pressure calculation for polarizable models in computer simulation.
Kiss, Péter T; Baranyai, András
2012-03-14
We present a short overview of pressure calculation in molecular dynamics or Monte Carlo simulations. The emphasis is given to polarizable models in order to resolve the controversy caused by the paper of M. J. Louwerse and E. J. Baerends [Chem. Phys. Lett. 421, 138 (2006)] about pressure calculation in systems with periodic boundaries. We systematically derive expressions for the pressure and show that despite the lack of explicit pairwise additivity, the pressure formula for polarizable models is identical with that of nonpolarizable ones. However, a strict condition for using this formula is that the induced dipole should be in perfect mechanical equilibrium prior to pressure calculation. The perfect convergence of induced dipoles ensures conservation of energy as well. We demonstrate using more cumbersome but exact methods that the derived expressions for the polarizable model of water provide correct numerical results. We also show that the inaccuracy caused by imperfect convergence of the induced dipoles correlates with the inaccuracy of the calculated pressure. PMID:22423830
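The paper's conclusion is that the familiar pairwise virial route to the pressure carries over to polarizable models once the induced dipoles are fully converged. The route itself is easy to show in the nonpolarizable limit; a minimal Lennard-Jones sketch in reduced units (the configuration is invented for illustration):

```python
def lj_force_over_r(r2, eps=1.0, sigma=1.0):
    """(f/r) for the Lennard-Jones pair force, so the vector force on
    particle i from j is (f/r) * (r_i - r_j)."""
    sr2 = sigma * sigma / r2
    sr6 = sr2 ** 3
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2

def virial_pressure(positions, box_v, kT, eps=1.0, sigma=1.0):
    """Classical virial pressure P = (N*kT + (1/3) sum_{i<j} r_ij.f_ij)/V,
    the expression the paper shows also holds for polarizable models when
    the induced dipoles are in mechanical equilibrium."""
    n = len(positions)
    virial = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = [a - b for a, b in zip(positions[i], positions[j])]
            r2 = sum(d * d for d in dx)
            virial += lj_force_over_r(r2, eps, sigma) * r2  # r . f
    return (n * kT + virial / 3.0) / box_v

# Two atoms at the LJ minimum separation 2**(1/6)*sigma: the pair force
# vanishes, so the pressure reduces to the ideal-gas value N*kT/V.
r_min = 2.0 ** (1.0 / 6.0)
p = virial_pressure([[0.0, 0.0, 0.0], [r_min, 0.0, 0.0]],
                    box_v=1000.0, kT=1.0)
```

With the virial term zero by construction, the result is just 2kT/V, which makes the decomposition into ideal and interaction contributions explicit.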
Feasibility of supersonic diode pumped alkali lasers: Model calculations
Barmashenko, B. D.; Rosenwaks, S.
2013-04-08
The feasibility of supersonic operation of diode pumped alkali lasers (DPALs) is studied for Cs and K atoms applying model calculations, based on a semi-analytical model previously used for studying static and subsonic flow DPALs. The operation of supersonic lasers is compared with that measured and modeled in subsonic lasers. The maximum power of supersonic Cs and K lasers is found to be higher than that of subsonic lasers with the same resonator and alkali density at the laser inlet by 25% and 70%, respectively. These results indicate that for scaling-up the power of DPALs, supersonic expansion should be considered.
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with the reduction errors bounded by a user-defined error tolerance; providing such a bound has been the main challenge for existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
Free energy calculations for a flexible water model.
Habershon, Scott; Manolopoulos, David E
2011-11-28
In this work, we consider the problem of calculating the classical free energies of liquids and solids for molecular models with intramolecular flexibility. We show that thermodynamic integration from the fully-interacting solid of interest to a Debye crystal reference state, with anisotropic harmonic interactions derived from the Hessian of the original crystal, provides a straightforward route to calculating the Gibbs free energy of the solid. To calculate the molecular liquid free energy, it is essential to correctly account for contributions from both intermolecular and intramolecular motion; we employ thermodynamic integration to a Lennard-Jones reference fluid, coupled with direct evaluation of the molecular ro-vibrational partition function. These approaches are used to study the low-pressure classical phase diagram of the flexible q-TIP4P/F water model. We find that, while the experimental ice-I/liquid and ice-III/liquid coexistence lines are described reasonably well by this model, the ice-II phase is predicted to be metastable. In light of this finding, we go on to examine how the coupling between intramolecular flexibility and intermolecular interactions influences the computed phase diagram by comparing our results with those of the underlying rigid-body water model. PMID:21887423
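The thermodynamic-integration route described above can be sketched on a system where every ingredient is analytic: switching a 1D harmonic spring from k0 to k1, for which the exact classical free energy change is (kT/2) ln(k1/k0). A toy stand-in for the sampled averages used in the paper, not its q-TIP4P/F workflow:

```python
import math

def ti_harmonic(k0, k1, kT=1.0, n=2000):
    """Thermodynamic integration for U(lambda) = 0.5*k(lambda)*x**2 with
    k(lambda) = (1-lambda)*k0 + lambda*k1.
    dU/dlambda = 0.5*(k1-k0)*x**2, and classically <x**2> = kT/k(lambda)
    exactly, so the lambda integrand is known in closed form and only
    the quadrature over lambda remains (midpoint rule here)."""
    def integrand(lam):
        k = (1.0 - lam) * k0 + lam * k1
        return 0.5 * (k1 - k0) * (kT / k)
    h = 1.0 / n
    return h * sum(integrand((i + 0.5) * h) for i in range(n))

dF = ti_harmonic(1.0, 4.0)                # numeric TI result
exact = 0.5 * 1.0 * math.log(4.0 / 1.0)   # (kT/2) * ln(k1/k0)
```

In a real simulation the closed-form average is replaced by an ensemble average of dU/dlambda at each lambda window; the quadrature structure is the same.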
Calculating Free Energy Changes in Continuum Solvation Models.
Ho, Junming; Ertem, Mehmed Z
2016-02-25
We recently showed for a large data set of pKas and reduction potentials that free energies calculated directly within the SMD continuum model compare very well with corresponding thermodynamic cycle calculations in both aqueous and organic solvents [ Phys. Chem. Chem. Phys. 2015 , 17 , 2859 ]. In this paper, we significantly expand the scope of our study to examine the suitability of this approach for calculating general solution phase kinetics and thermodynamics, in conjunction with several commonly used solvation models (SMD-M062X, SMD-HF, CPCM-UAKS, and CPCM-UAHF) for a broad range of systems. This includes cluster-continuum schemes for pKa calculations as well as various neutral, radical, and ionic reactions such as enolization, cycloaddition, hydrogen and chlorine atom transfer, and SN2 and E2 reactions. On the basis of this benchmarking study, we conclude that the accuracies of both approaches are generally very similar: the mean errors for Gibbs free energy changes of neutral and ionic reactions are approximately 5 and 25 kJ mol(-1), respectively. In systems where there are significant structural changes due to solvation, as is the case for certain ionic transition states and amino acids, the direct approach generally affords free energy changes that are in better agreement with experiment. PMID:26878566
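Whichever route supplies the solution-phase reaction free energy, the conversion to a pKa is the same arithmetic: pKa = ΔG*_soln / (RT ln 10), with the thermodynamic cycle assembling ΔG*_soln from the gas-phase value plus the difference in solvation free energies. A sketch with hypothetical numbers (the kJ/mol values below are invented for illustration, not results from the paper):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def pka_from_dg(dg_soln, temperature=298.15):
    """pKa = DeltaG*_soln / (R*T*ln 10) for AH(soln) -> A-(soln) + H+(soln).
    In the 'direct' approach the free energies come straight from the
    continuum-model calculation; in a cycle, DeltaG*_soln is assembled
    from gas-phase and solvation terms as below."""
    return dg_soln / (R * temperature * math.log(10.0))

def dg_from_cycle(dg_gas, dg_solv_products, dg_solv_reactant):
    """Thermodynamic-cycle assembly: solution-phase reaction free energy
    = gas-phase DeltaG + (solvation of products - solvation of reactant)."""
    return dg_gas + dg_solv_products - dg_solv_reactant

# Hypothetical acid, all values in kJ/mol:
dg = dg_from_cycle(dg_gas=1430.0,
                   dg_solv_products=-1410.0,
                   dg_solv_reactant=-7.0)   # = 27 kJ/mol
pka = pka_from_dg(dg)
```

A 27 kJ/mol solution-phase deprotonation free energy corresponds to a pKa near 4.7, which shows why a few kJ/mol of error in the solvation terms moves the predicted pKa by about one unit.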
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), as in trials for non-Hodgkin's lymphoma. The widely used sample size formula derived under the proportional hazards (PH) model may not be proper for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we also investigate, as numerical examples, the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805
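The classical no-cure baseline that the PH cure model formula generalizes is Schoenfeld's required number of events, d = (z_{1-α/2} + z_{power})² / (p(1-p) ln²(HR)). A sketch with hypothetical design values (the paper's point is precisely that this count can be badly off when a cure fraction is present):

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, p_alloc=0.5):
    """Required number of events under the ordinary PH model
    (Schoenfeld's formula), for a two-sided test at level alpha with
    the given power, allocation fraction p_alloc, and hazard ratio hr.
    This is the classical baseline only; it ignores any cure fraction."""
    z = NormalDist()
    za = z.inv_cdf(1.0 - alpha / 2.0)   # two-sided critical value
    zb = z.inv_cdf(power)               # power quantile
    return (za + zb) ** 2 / (p_alloc * (1.0 - p_alloc) * math.log(hr) ** 2)

# Hypothetical design: detect HR = 0.67 with 80% power at alpha = 0.05
events = math.ceil(schoenfeld_events(hr=0.67))
```

Converting required events into required patients then depends on the accrual pattern and follow-up duration, which is exactly the dependence the paper explores numerically.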
ILNCSIM: improved lncRNA functional similarity calculation model.
Huang, Yu-An; Chen, Xing; You, Zhu-Hong; Huang, De-Shuang; Chan, Keith C C
2016-05-01
Increasing observations have indicated that lncRNAs play a significant role in various critical biological processes and in the development and progression of various human diseases. Constructing lncRNA functional similarity networks could benefit the development of computational models for inferring lncRNA functions and identifying lncRNA-disease associations. However, little effort has been devoted to quantifying lncRNA functional similarity. In this study, we developed an Improved LNCRNA functional SIMilarity calculation model (ILNCSIM) based on the assumption that lncRNAs with similar biological functions tend to be involved in similar diseases. The main improvement comes from combining the concept of information content with the hierarchical structure of disease directed acyclic graphs for disease similarity calculation. ILNCSIM was combined with the previously proposed model of Laplacian Regularized Least Squares for lncRNA-Disease Association to further evaluate its performance. As a result, the new model obtained reliable performance in leave-one-out cross validation (AUCs of 0.9316 and 0.9074 based on the MNDR and Lnc2cancer databases, respectively) and 5-fold cross validation (AUCs of 0.9221 and 0.9033 for the MNDR and Lnc2cancer databases), which significantly improved the prediction performance over previous models. It is anticipated that ILNCSIM could serve as an effective lncRNA function prediction model for future biomedical research. PMID:27028993
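The information-content ingredient combined with the disease DAG above follows the standard pattern: IC(t) = -log p(t), and the similarity of two terms is driven by their most informative common ancestor. A toy Resnik-style sketch on an invented mini-DAG (the terms, edges, and probabilities are placeholders, not ILNCSIM's actual data or scoring function):

```python
import math

# Toy disease DAG: term -> list of parents (invented for illustration)
PARENTS = {
    "neoplasms": [],
    "digestive-neoplasms": ["neoplasms"],
    "liver-neoplasms": ["digestive-neoplasms"],
    "colorectal-neoplasms": ["digestive-neoplasms"],
}
# p(t): annotation probability per term; must be non-decreasing toward
# the root (made-up values).
P = {"neoplasms": 1.0, "digestive-neoplasms": 0.25,
     "liver-neoplasms": 0.05, "colorectal-neoplasms": 0.10}

def ancestors(term):
    """The term itself plus all of its ancestors in the DAG."""
    out, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(PARENTS[t])
    return out

def ic(term):
    """Information content IC(t) = -log2 p(t): rarer terms carry more."""
    return -math.log2(P[term])

def resnik_sim(t1, t2):
    """Similarity = IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(t) for t in common)

s = resnik_sim("liver-neoplasms", "colorectal-neoplasms")
```

Here the two leaf diseases share "digestive-neoplasms" as their most informative common ancestor, so their similarity is its IC; a term compared with itself scores its own IC.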
Optical model calculations of heavy-ion target fragmentation
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.
1986-01-01
The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmentation of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.
Effect of molecular models on viscosity and thermal conductivity calculations
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.; Alexeenko, Alina A.
2014-12-01
The effect of molecular models on viscosity and thermal conductivity calculations is investigated. The Direct Simulation Monte Carlo (DSMC) method for rarefied gas flows is used to simulate Couette and Fourier flows as a means of obtaining the transport coefficients. Experimental measurements for argon (Ar) provide a baseline for comparison over a wide temperature range of 100-1,500 K. The variable hard sphere (VHS), variable soft sphere (VSS), and Lennard-Jones (L-J) molecular models have been implemented into a parallel version of Bird's one-dimensional DSMC code, DSMC1, and the model parameters have been recalibrated to the current experimental data set. While the VHS and VSS models only consider the short-range, repulsive forces, the L-J model also includes contributions from the long-range, dispersion forces. Theoretical results for viscosity and thermal conductivity indicate the L-J model is more accurate than the VSS model, with maximum errors of 1.4% and 3.0% in the range 300-1,500 K for the L-J and VSS models, respectively. The range of validity of the VSS model is extended to 1,650 K through appropriate choices for the model parameters.
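The VHS/VSS models imply a simple power-law viscosity in temperature, which is the quantity being calibrated above. A minimal sketch with nominal Bird-style argon reference values (assumptions here, not the paper's recalibrated parameter set):

```python
def mu_vhs(T, mu_ref=2.117e-5, T_ref=273.0, omega=0.81):
    """VHS/VSS viscosity power law: mu(T) = mu_ref * (T/T_ref)**omega.
    Defaults are nominal argon values (reference viscosity in Pa*s at
    273 K and viscosity-temperature exponent omega), used illustratively."""
    return mu_ref * (T / T_ref) ** omega

mu_300 = mu_vhs(300.0)   # about 2.3e-5 Pa*s
```

Recalibrating the model to data amounts to refitting `mu_ref` and `omega` over the measured temperature range.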
Infrared lens thermal effect: equivalent focal shift and calculating model
NASA Astrophysics Data System (ADS)
Zhang, Cheng-shuo; Shi, Zelin; Feng, Bin; Xu, Bao-shu
2014-11-01
The focal shift of an infrared lens is well known to be the major factor degrading imaging quality when the temperature changes. To establish the connection between temperature change and focal shift, partial differential equations for the thermal effect on the light path are first derived by a ray-tracing method. An approximate solution of these PDEs shows that the focal shift is proportional to the temperature change, and a formula for computing the proportionality factor is given. To characterize the thermal effect of an infrared lens more directly, the thermal effect is represented equivalently as a defocus produced by an image-plane shift at constant temperature; on this basis the equivalent focal shift (EFS) is defined and a model for calculating it is proposed. To verify the EFS and its calculating model, a physical experimental platform was developed, comprising a motorized linear stage with built-in controller, a blackbody, a target, a collimator, an IR detector, a computer, and other devices. The experimental results indicate that the image-plane shift at constant temperature given by the EFS has the same influence on the infrared lens as the thermal effect, and that the calculating model is correct.
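The linear-in-temperature relation described above can be sketched numerically with the textbook thin-lens thermo-optic coefficient, used here as an illustrative stand-in for the paper's PDE-derived factor; the germanium material values are nominal assumptions:

```python
def focal_shift(f_mm, dT, n, dn_dT, alpha_housing):
    """Thermal focal shift of a simple thin IR lens:
    df = f * dT * (alpha_housing - dn_dT / (n - 1)).
    Linear in dT, matching the abstract's proportionality result; the
    bracket is the thin-lens thermo-optic coefficient (a textbook form,
    not the paper's exact factor)."""
    return f_mm * dT * (alpha_housing - dn_dT / (n - 1.0))

# Nominal germanium values (assumed): n = 4.003, dn/dT = 3.96e-4 /K,
# housing expansion 5.7e-6 /K; 100 mm lens, +20 K temperature rise
shift = focal_shift(100.0, 20.0, 4.003, 3.96e-4, 5.7e-6)  # about -0.25 mm
```

The negative sign reflects germanium's large dn/dT dominating the housing expansion, pulling the focus toward the lens as it warms.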
a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on a few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating each other, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the scattered radiation prediction. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data is compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
Freeway Travel Speed Calculation Model Based on ETC Transaction Data
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
Real-time traffic flow conditions on a freeway are becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, providing a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample sizes. To ensure a sufficient sample size, ETC data from entry-exit toll plaza pairs spanning more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speeds were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model helps to improve the monitoring of freeway operations and freeway management, and provides useful information for freeway travelers. PMID:25580107
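The basic quantity underneath the dual-level model is a segment speed computed from ETC entry/exit timestamps. A minimal sketch of that step only (the paper's reduction coefficient α and reliability weight θ are deliberately omitted; numbers are illustrative):

```python
def segment_speed(transactions, length_km):
    """Mean travel speed (km/h) of one road segment from ETC entry/exit
    timestamps in seconds. Sketch of the core idea only: the full model
    additionally applies a reduction coefficient and reliability weights
    to each sample speed."""
    speeds = [length_km / ((t_out - t_in) / 3600.0)
              for t_in, t_out in transactions if t_out > t_in]
    return sum(speeds) / len(speeds)

# three vehicles traversing a 10 km segment in 360 s, 400 s, and 450 s
obs = [(0, 360), (0, 400), (0, 450)]
v = segment_speed(obs, 10.0)   # (100 + 90 + 80) / 3 = 90 km/h
```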
Pseudo-Reaction Zone model calibration for Programmed Burn calculations
NASA Astrophysics Data System (ADS)
Chiquete, Carlos; Meyer, Chad D.; Quirk, James J.; Short, Mark
2015-06-01
The Programmed Burn (PB) engineering methodology for efficiently calculating detonation timing and energy delivery within high explosive (HE) engineering geometries separates the calculation of these two core components. Modern PB approaches utilize Detonation Shock Dynamics (DSD) to provide accurate time-of-arrival information throughout a given geometry, via an experimentally calibrated propagation law relating the surface normal velocity to its local curvature. The Pseudo-Reaction Zone (PRZ) methodology is then used to release the explosive energy in a finite span following the prescribed arrival of the DSD propagated front through a reactive, hydrodynamic calculation. The PRZ energy release rate must be coupled to the local burn velocity set by the DSD surface evolution. In order to synchronize the energy release to the attendant timing calculation, detonation velocity and front shapes resulting from reactive burn simulations utilizing the PRZ rate law and parameters will be fitted to analogues generated via the applied DSD propagation law, thus yielding the PRZ model calibration for the HE.
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation causing large errors in the system matrices for nearly cylindrical cones.
Hydrothermal hydration of Martian crust: illustration via geochemical model calculations.
Griffith, L L; Shock, E L
1997-04-25
If hydrothermal systems existed on Mars, hydration of crustal rocks may have had the potential to affect the water budget of the planet. We have conducted geochemical model calculations to investigate the relative roles of host rock composition, temperature, water-to-rock ratio, and initial fluid oxygen fugacity on the mineralogy of hydrothermal alteration assemblages, as well as the effectiveness of alteration to store water in the crust as hydrous minerals. In order to place calculations for Mars in perspective, models of hydrothermal alteration of three genetically related Icelandic volcanics (a basalt, andesite, and rhyolite) are presented, together with results for compositions based on SNC meteorite samples (Shergotty and Chassigny). Temperatures from 150 degrees C to 250 degrees C, water-to-rock ratios from 0.1 to 1000, and two initial fluid oxygen fugacities are considered in the models. Model results for water-to-rock ratios less than 10 are emphasized because they are likely to be more applicable to Mars. In accord with studies of low-grade alteration of terrestrial rocks, we find that the major controls on hydrous mineral production are host rock composition and temperature. Over the range of conditions considered, the alteration of Shergotty shows the greatest potential for storing water as hydrous minerals, and the alteration of Icelandic rhyolite has the lowest potential. PMID:11541456
A review of Higgs mass calculations in supersymmetric models
NASA Astrophysics Data System (ADS)
Draper, Patrick; Rzehak, Heidi
2016-03-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass in the Minimal Supersymmetric Standard Model, in particular the large radiative corrections required to lift mh to 125 GeV and their calculation via Feynman-diagrammatic and effective field theory techniques. This review is intended as an entry point for readers new to the field, and as a summary of the current status, including the existing analytic calculations and publicly-available computer codes.
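The size of the radiative corrections the review surveys can be illustrated with the dominant one-loop top/stop leading-logarithm term alone (no stop mixing, no higher orders); this toy estimate, with nominal Standard Model inputs, shows why 125 GeV is hard to reach in the MSSM without large corrections:

```python
import math

V_EW = 246.0            # Higgs vacuum expectation value (GeV)
M_Z, M_T = 91.19, 173.0  # Z boson and top quark masses (GeV), nominal

def mh_leading_log(m_stop, tan_beta):
    """MSSM lightest Higgs mass keeping only the tree-level bound plus
    the dominant one-loop top/stop leading-log correction:
    mh^2 = mZ^2 cos^2(2b) + 3 mt^4 / (4 pi^2 v^2) * ln(Ms^2/mt^2).
    A toy version of the full diagrammatic/EFT calculations."""
    cos2_2b = math.cos(2.0 * math.atan(tan_beta)) ** 2
    tree = M_Z ** 2 * cos2_2b
    loop = (3.0 * M_T ** 4 / (4.0 * math.pi ** 2 * V_EW ** 2)
            * math.log(m_stop ** 2 / M_T ** 2))
    return math.sqrt(tree + loop)

mh = mh_leading_log(2000.0, 50.0)  # still well below 125 GeV at this order
```

Even with 2 TeV stops, the leading-log estimate falls short of 125 GeV, which is why stop mixing and higher-order terms matter so much in the calculations the review covers.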
2HDMC — two-Higgs-doublet model calculator
NASA Astrophysics Data System (ADS)
Eriksson, David; Rathsman, Johan; Stål, Oscar
2010-04-01
We describe version 1.0.6 of the public C++ code 2HDMC, which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z-symmetries or more general couplings, a decay library including all two-body — and some three-body — decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. New version program summary. Program title: 2HDMC Catalogue identifier: AEFI_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL No. of lines in distributed program, including test data, etc.: 12 110 No. of bytes in distributed program, including test data, etc.: 92 731 Distribution format: tar.gz Programming language: C++ Computer: Any computer running Linux Operating system: Linux RAM: 5 Mb Catalogue identifier of previous version: AEFI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2010) 189 Classification: 11.1 External routines: GNU Scientific Library (http://www.gnu.org/software/gsl/) Does the new version supersede the previous version?: Yes Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off-shell vector bosons are obtained by numerical integration.
Radiation Environment Variations at Mars - Model Calculations and Measurements
NASA Astrophysics Data System (ADS)
Saganti, Premkumar; Cucinotta, Francis
Variations in the space radiation environment due to changes in the GCR (Galactic Cosmic Ray) flux from the past solar cycle (#23) to the current one (#24) have been intriguing in many ways, with an unprecedentedly long recent solar minimum and a very low peak activity in the current solar maximum. Model-calculated radiation data and an assessment of the variations in particle flux - protons, alpha particles, and heavy ions of the GCR environment - are essential for understanding radiation risk and for any intended future long-duration human exploration missions. During the past solar cycle, we had a most active, higher solar maximum (2001-2003). At the beginning of the current solar cycle (#24), we experienced a very long solar minimum (2009-2011) followed by a lower peak activity (2013-2014). At Mars, radiation measurements in orbit were obtained (onboard the 2001 Mars Odyssey spacecraft) during the past (#23) solar maximum. Radiation measurements on the surface of Mars are currently being made (onboard the Mars Science Laboratory, 2012 - Curiosity) during the current (#24) solar peak activity (August 2012 - present). We present our model-calculated radiation environment at Mars during the solar maxima of cycles #23 and #24. We compare our earlier model calculations (Cucinotta et al., J. Radiat. Res., 43, S35-S39, 2002; Saganti et al., J. Radiat. Res., 43, S119-S124, 2002; and Saganti et al., Space Science Reviews, 110, 143-156, 2004) with the most recent radiation measurements on the surface of Mars (2012 - present).
Shape evolution of yrast-band in 78Kr
NASA Astrophysics Data System (ADS)
Joshi, P. K.; Jain, H. C.; Palit, R.; Mukherjee, G.; Nagaraj, S.
2002-03-01
Lifetimes have been measured up to the I=22 + level in the yrast positive-parity band for 78Kr using the recoil distance and lineshape analysis methods. The B(E2) and Qt values obtained from these measurements show a significant drop with increasing spin. The band crossings and the observed variation in Qt are understood through cranked shell-model, TRS and configuration-dependent shell-correction calculations assuming an oblate deformation for 78Kr at low spins.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
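The abstract's point that the vertical integration is exact for a constant lapse rate can be illustrated with the simplest analogous case, a constant temperature lapse rate, for which the hydrostatic equation has a closed form (this is the temperature-lapse analogue, not the paper's potential-temperature formulation):

```python
def pressure_const_lapse(z, p0=101325.0, T0=288.15, gamma=0.0065,
                         g=9.80665, R=287.05):
    """Exact hydrostatic pressure with a constant lapse rate:
    integrating dp/dz = -g*p/(R*T) with T(z) = T0 - gamma*z gives
    p = p0 * (T/T0)**(g/(R*gamma)). Constants are standard-atmosphere
    sea-level values (SI units)."""
    T = T0 - gamma * z
    return p0 * (T / T0) ** (g / (R * gamma))

p5km = pressure_const_lapse(5000.0)  # close to the ISA tabulated value at 5 km
```

Because the profile assumption makes the integral exact between the limits, no vertical discretization error enters, which is the property the paper exploits for the pressure gradient term.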
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection is one of the core problems in photogrammetry: it recovers the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation are contaminated by gross errors. This paper presents a robust algorithm that combines the RANSAC method with the direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
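The gross-error rejection at the heart of the method is the standard RANSAC loop. A minimal sketch, using a 2D line through two points as the minimal model instead of the paper's six-point DLT solution (an illustrative substitution):

```python
import random

def ransac(points, n_iter=200, tol=0.1, seed=7):
    """Minimal RANSAC: repeatedly fit a model to a random minimal sample
    and keep the model with the most inliers. The model here is a line
    through two points; in space resection the minimal model would be a
    DLT solution from image-object point correspondences."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, skip
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# 8 observations on y = 2x + 1 plus 3 gross errors
pts = [(x, 2.0 * x + 1.0) for x in range(8)] + [(1.0, 9.0), (3.0, -5.0), (6.0, 0.0)]
(m, b), inl = ransac(pts)   # gross errors excluded: m = 2, b = 1
```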
Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose
NASA Technical Reports Server (NTRS)
Welton, Andrew; Lee, Kerry
2010-01-01
While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than on the ground. It is important to model pre-flight how spacecraft shielding designs reduce the radiation effective dose, and to determine whether a danger to humans is presented. Calculating the effective dose, however, requires dose equivalent calculations. Dose equivalent takes into account the absorbed dose of radiation and the biological effectiveness of the ionizing radiation, which is important for preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for the relevant shielding. The shielding geometry used in the dose calculations is a layered slab design consisting of aluminum, polyethylene, and water, where the water simulates the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs for many thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.
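The dose chain described above (absorbed dose, then dose equivalent, then effective dose) can be sketched arithmetically; the quality factors and tissue weights below are illustrative placeholders, not the ICRP values used in actual mission analysis:

```python
def dose_equivalent(absorbed_gy, quality_factor):
    """Dose equivalent (Sv) = absorbed dose (Gy) x radiation quality
    factor, which encodes biological effectiveness."""
    return absorbed_gy * quality_factor

def effective_dose(tissue_doses):
    """Effective dose (Sv): tissue-weighted sum of dose equivalents,
    given (weight, dose_equivalent) pairs."""
    return sum(w * h for w, h in tissue_doses)

# illustrative numbers only
h_skin = dose_equivalent(0.010, 2.5)   # 0.025 Sv
h_lung = dose_equivalent(0.008, 2.5)   # 0.020 Sv
e = effective_dose([(0.01, h_skin), (0.12, h_lung)])
```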
Calculations of hot gas ingestion for a STOVL aircraft model
NASA Technical Reports Server (NTRS)
Fricker, David M.; Holdeman, James D.; Vanka, Surya P.
1992-01-01
Hot gas ingestion problems for Short Take-Off, Vertical Landing (STOVL) aircraft are typically approached with empirical methods and experience. In this study, the hot gas environment around a STOVL aircraft was modeled as multiple jets in crossflow with inlet suction. The flow field was calculated with a Navier-Stokes, Reynolds-averaged, turbulent, 3D computational fluid dynamics code using a multigrid technique. A simple model of a STOVL aircraft with four choked jets at 1000 K was studied at various heights, headwind speeds, and thrust splay angles in a modest parametric study. Scientific visualization of the computed flow field shows a pair of vortices in front of the inlet. This and other qualitative aspects of the flow field agree well with experimental data.
Stealth Dark Matter: Model, lattice calculations, and constraints
NASA Astrophysics Data System (ADS)
Schaich, David; Lattice Strong Dynamics Collaboration
2016-03-01
A new strongly coupled dark sector can produce a well-motivated and phenomenologically interesting composite dark matter candidate. I will review a model recently proposed by the Lattice Strong Dynamics Collaboration in which the composite dark matter is naturally "stealthy": although its constituents are charged, the composite particle itself is electroweak neutral, with vanishing magnetic moment and charge radius. This results in an extraordinarily small direct detection cross section dominated by the dimension-7 electromagnetic polarizability interaction. I will present direct detection constraints on the model that rely on our non-perturbative lattice calculations of the polarizability, as well as complementary constraints from collider experiments. Collider bounds require the stealth dark matter mass to be m > 300 GeV, while its cross section for spin-independent scattering with xenon is smaller than the coherent neutrino scattering background for m > 700 GeV.
Folding model calculations for 6He+12C elastic scattering
NASA Astrophysics Data System (ADS)
Awad, A. Ibraheem
2016-03-01
In the framework of the double folding model, we used the α+2n and di-triton configurations for the nuclear matter density of the 6He nucleus to generate the real part of the optical potential for the system 6He+12C. As an alternative, we also use the high energy approximation to generate the optical potential for the same system. The derived potentials are employed to analyze the elastic scattering differential cross section at energies of 38.3, 41.6 and 82.3 MeV/u. For the imaginary part of the potential we adopt the squared Woods-Saxon form. The obtained results are compared with the corresponding measured data as well as with available results in the literature. The calculated total reaction cross sections are investigated and compared with the optical limit Glauber model description.
Model for analytical calculation of nuclear photoabsorption at intermediate energies
NASA Astrophysics Data System (ADS)
Hütt, M.-Th.; Milstein, A. I.; Schumacher, M.
1997-02-01
The universal curve σ/A of nuclear photoabsorption is investigated within a Fermi gas model of nuclear matter. An energy range from the pion threshold up to 400 MeV is considered. The interactions between nucleon, pion, Δ-isobar, and photon are treated in the non-relativistic approximation, with corrections of order 1/M in the proton mass taken into account. Analytical expressions are obtained in which the influence of nuclear correlations and two-nucleon contributions is studied explicitly. The contributions of real and virtual pions are found to be sufficient to obtain agreement with experimental data in this energy range. An extension of the model calculation to nucleon knock-out reactions is discussed.
Plasmon-pole models affect band gaps in GW calculations
NASA Astrophysics Data System (ADS)
Larson, Paul; Wu, Zhigang
2013-03-01
Density functional theory calculations have long been known to underestimate the band gaps of semiconductors. Significant improvements have been made by GW calculations, which use the self-energy, defined as the product of the Green function (G) and the screened Coulomb interaction (W). However, many approximations are made in the GW method, notably the plasmon-pole approximation, which replaces the integration necessary to produce W with a simple approximation to the inverse dielectric function. Four different plasmon-pole approximations have been tested using the program ABINIT: Godby-Needs, Hybertsen-Louie, von der Linden-Horsch, and Engel-Farid. For many materials the differences in the GW band gaps between the plasmon-pole models are negligible, but for systems with localized electrons the difference can be larger than 1 eV. The plasmon-pole approximation is generally chosen to best agree with experimental data, but this is misleading in that it ignores all of the other approximations used in the GW method. Improvements in plasmon-pole models for GW can only come about by trying to reproduce the results of the full numerical integration rather than trying to reproduce experimental results.
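The idea common to these schemes is to replace the frequency dependence of the inverse dielectric function by a single pole fixed from a few computed values. A sketch in the spirit of the Godby-Needs fit (a toy scalar version, not ABINIT's matrix implementation):

```python
def eps_inv(w2, omega2, wt2):
    """Single-pole model of the inverse dielectric function,
    eps^-1(w) = 1 + Omega^2 / (w^2 - wt^2), as a function of w^2."""
    return 1.0 + omega2 / (w2 - wt2)

def fit_plasmon_pole(eps_inv_0, eps_inv_iE, E0):
    """Fix the pole parameters (Omega^2, wt^2) from eps^-1 evaluated at
    w = 0 and at the imaginary frequency w = i*E0 (w^2 = -E0^2),
    Godby-Needs style."""
    a = 1.0 - eps_inv_0    # = Omega^2 / wt^2
    b = 1.0 - eps_inv_iE   # = Omega^2 / (E0^2 + wt^2)
    wt2 = b * E0 ** 2 / (a - b)
    return a * wt2, wt2    # Omega^2, wt^2

# round trip: generate the two fit points from known parameters, then refit
omega2, wt2 = 120.0, 230.0
e0 = eps_inv(0.0, omega2, wt2)            # value at w = 0
eiE = eps_inv(-(15.0 ** 2), omega2, wt2)  # value at w = 15i
om2_fit, wt2_fit = fit_plasmon_pole(e0, eiE, 15.0)
```

The round trip recovers the input pole exactly; the approximation error in real GW calculations comes from forcing the true, multi-pole frequency dependence into this single-pole form.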
Recent Developments in No-Core Shell-Model Calculations
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review we highlight, in particular, results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.
2HDMC - two-Higgs-doublet model calculator
NASA Astrophysics Data System (ADS)
Eriksson, David; Rathsman, Johan; Stål, Oscar
2010-01-01
We describe the public C++ code 2HDMC which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z-symmetries or more general couplings, a decay library including all two-body - and some three-body - decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. Program summary. Program title: 2HDMC Catalogue identifier: AEFI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL No. of lines in distributed program, including test data, etc.: 12 032 No. of bytes in distributed program, including test data, etc.: 90 699 Distribution format: tar.gz Programming language: C++ Computer: Any computer running Linux Operating system: Linux RAM: 5 Mb Classification: 11.1 External routines: GNU Scientific Library (http://www.gnu.org/software/gsl/) Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off-shell vector bosons are obtained by numerical integration. Observables are computed (analytically or numerically) as function of the input parameters. Restrictions: CP-violation is not treated. Running time: Less than 0
Theoretical model for calculation of helicity in solar active regions
NASA Astrophysics Data System (ADS)
Chatterjee, P.
We (Choudhuri, Chatterjee and Nandy, 2005) calculate the helicities of solar active regions based on the idea of Choudhuri (2003) that poloidal flux lines get wrapped around a toroidal flux tube rising through the convection zone, thereby giving rise to the helicity. Rough estimates based on this idea compare favourably with the observed magnitude of helicity. We use our solar dynamo model based on the Babcock-Leighton α-effect to study how helicity varies with latitude and time. At the time of solar maximum, our theoretical model gives negative helicity in the northern hemisphere and positive helicity in the south, in accordance with the observed hemispheric trends. However, we find that, during a short interval at the beginning of a cycle, helicities tend to be opposite to the preferred hemispheric trends. Next we (Chatterjee, Choudhuri and Petrovay, 2006) use the above idea, along with the sunspot decay model of Petrovay and Moreno-Insertis (1997), to estimate the distribution of helicity inside a flux tube as it keeps collecting more azimuthal flux during its rise through the convection zone while turbulent diffusion keeps acting on it. By varying parameters over reasonable ranges in our simple 1-D model, we find that the azimuthal flux penetrates the flux tube to some extent instead of being confined to a narrow sheath outside.
Assessment of Some Atomization Models Used in Spray Calculations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Bulzin, Dan
2011-01-01
The paper presents the results from a validation study undertaken as part of NASA's fundamental aeronautics initiative on high-altitude emissions in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As part of this investigation we have undertaken validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic crossflow, (3) a Parker-Hannifin pressure-swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in some aerospace applications. The validation is based on several 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate the droplet sizes in the inner radial region of a cylindrical jet.
Nonlinear damping calculation in cylindrical gear dynamic modeling
NASA Astrophysics Data System (ADS)
Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc
2012-04-01
The nonlinear dynamic problem posed by cylindrical gear systems has been extensively covered in the literature. Nonetheless, a significant proportion of the mechanisms involved in damping generation remains to be investigated and described. The main objective of this study is to contribute to this task. Overall, damping is assumed to consist of three sources: the contribution of surrounding elements, hysteresis of the teeth, and oil squeeze damping. The first two contributions are considered to be commensurate with the supported load; squeeze damping, however, is formulated using expressions developed from the Reynolds equation. A lubricated impact analysis between the teeth is introduced in this study for the minimum film thickness calculation during contact losses. The dynamic transmission error (DTE) obtained from the final model showed close agreement with experimental measurements available in the literature. The nonlinear damping ratio calculated at different mesh frequencies and torque amplitudes presented average values between 5.3 percent and 8 percent, comparable to the constant 8 percent ratio used in published numerical simulations of an equivalent gear pair. A close analysis of the oil squeeze damping evidenced the inverse relationship between this damping effect and the applied load.
Selection of models to calculate the LLW source term
Sullivan, T.M. )
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.
Ultrasonic energy in liposome production: process modelling and size calculation.
Barba, A A; Bochicchio, S; Lamberti, G; Dalmoro, A
2014-04-21
The use of liposomes in several fields of biotechnology, as well as in pharmaceutical and food sciences, is continuously increasing. Liposomes can be used as carriers for drugs and other active molecules. Among their characteristics, one of the main features relevant to target applications is the liposome size. The size of liposomes, which is determined during the production process, decreases with the addition of energy. The energy is used to break the lipid bilayer into smaller pieces, which then close into spherical structures. In this work, the mechanisms of rupture of the lipid bilayer and formation of spheres were modelled, accounting for how the energy supplied by ultrasonic radiation is stored within the layers (as elastic energy due to the curvature and as tension energy due to the edge) and for the kinetics of the bending phenomenon. An algorithm to solve the model equations was designed and the corresponding calculation code was written. A dedicated preparation protocol, which alternates active periods during which the energy is supplied with passive periods during which the energy supply is set to zero, was defined and applied. The model predictions compare well with the experimental results, using the energy supply rate and the time constant as fitting parameters. Working with liposomes of different sizes as the starting point of the experiments, the key parameter is the ratio between the energy supply rate and the initial surface area. PMID:24647821
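The competition the abstract describes, tension energy at the open edge of a bilayer fragment versus elastic energy of the curved closed vesicle, can be illustrated with a minimal sketch in the Helfrich picture. The parameter values, function names, and the closure criterion below are illustrative assumptions, not quantities taken from the paper:

```python
import math

def disk_energies(radius_nm, kappa_kT=20.0, edge_tension_kT_per_nm=1.0):
    """Competing energies for a flat bilayer disk of given radius (in kT).

    The edge (tension) energy grows with the rim length of an open disk,
    while the bending (curvature) energy of the closed vesicle it would
    form is a constant 8*pi*kappa in the Helfrich picture.
    """
    edge = 2.0 * math.pi * radius_nm * edge_tension_kT_per_nm  # open-disk rim
    bend = 8.0 * math.pi * kappa_kT                            # closed vesicle
    return edge, bend

def critical_radius_nm(kappa_kT=20.0, edge_tension_kT_per_nm=1.0):
    """Disk radius above which closing into a sphere lowers the energy."""
    return 4.0 * kappa_kT / edge_tension_kT_per_nm

r_star = critical_radius_nm()
print(f"critical disk radius ~ {r_star:.0f} nm")
```

With these illustrative constants, fragments smaller than the critical radius pay more in curvature than they save in edge energy and stay open, which is one way to rationalize why smaller pieces need more supplied energy to close.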
Equilibrium Chemistry Calculations for Model Hot-Jupiter Atmospheres
NASA Astrophysics Data System (ADS)
Blumenthal, Sarah; Harrington, Joseph; Bowman, M. Oliver; Blecic, Jasmina
2014-11-01
Every planet in our solar system has different elemental abundances from our sun's. It is thus necessary to explore a variety of elemental abundances when investigating exoplanet atmospheres. Composition is key to unraveling a planet's formation history and determines the radiative behavior of an atmosphere, including its spectrum (Moses et al. 2013). We consider here two commonly discussed situations: [C]/[O] > 1 and 10x and 100x heavy-element enrichment. For planets above 1200 K, equilibrium chemistry is a valid starting point in atmospheric analysis. For HD 209458b, this assumption was verified by comparing the results of a robust kinetics code (non-ideal behavior) to the results of an equilibrium chemistry code (ideal behavior). Both codes output similar results for the dayside of the planet (Agundez et al. 2012). Using NASA's open-source Chemical Equilibrium Abundances code (McBride and Gordon 1996), we calculate the molecular abundances of species of interest across the dayside of model planets with a range of: elemental abundance profiles, degree of redistribution, relevant substellar temperatures, and pressures. We then explore the compositional gradient of each model planet atmosphere layer using synthetic abundance images of target spectroscopic species (water, methane, carbon monoxide). This work was supported by the NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program NNX13AF38G.
Full waveform modelling and misfit calculation using the VERCE platform
NASA Astrophysics Data System (ADS)
Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas
2016-04-01
simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.
Effective Inflow Conditions for Turbulence Models in Aerodynamic Calculations
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Rumsey, Christopher L.
2007-01-01
The selection of inflow values at boundaries far upstream of an aircraft is considered, for one- and two-equation turbulence models. Inflow values are distinguished from the ambient values near the aircraft, which may be much smaller. Ambient values should be selected first, and inflow values that will lead to them after the decay second; this is not always possible, especially for the time scale. The two-equation decay during the approach to the aircraft is shown; often, the time scale has been set too short for this decay to be calculated accurately on typical grids. A simple remedy for both issues is to impose floor values for the turbulence variables outside the viscous sublayer, and it is argued that overriding the equations in this manner is physically justified. Selecting laminar ambient values is easy if the boundary layers are to be tripped, but a more common practice is to seek ambient values that will cause immediate transition in boundary layers. This opens up a wide range of values, and selection criteria are discussed. The turbulent Reynolds number, or ratio of eddy viscosity to laminar viscosity, has a huge dynamic range that makes it unwieldy; it has been widely misused, particularly by codes that set upper limits on it. The value of turbulent kinetic energy in a wind tunnel or the atmosphere is also of dubious value as an input to the model. Concretely, the ambient eddy viscosity must be small enough to preserve potential cores in small geometry features, such as flap gaps. The ambient frequency scale should also be small enough, compared with shear rates in the boundary layer. Specific values are recommended and demonstrated for airfoil flows.
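The freestream decay between the inflow boundary and the body can be sketched with the standard two-equation (k-omega) decay relations, which have a closed-form solution in the absence of production. The closure constants and floor values below are generic illustrative choices, not the paper's specific recommendations:

```python
# Freestream decay of k and omega between the inflow boundary and the body
# (no production): dk/dt = -beta_star*k*omega, domega/dt = -beta*omega**2.
BETA_STAR, BETA = 0.09, 0.075  # standard k-omega closure constants (assumed)

def decayed_k_omega(k_in, omega_in, t):
    """Closed-form decay of inflow values after a convection time t."""
    f = 1.0 + BETA * omega_in * t
    return k_in * f ** (-BETA_STAR / BETA), omega_in / f

def with_floor(k, omega, k_floor, omega_floor):
    """Impose ambient floor values, in the spirit the paper advocates."""
    return max(k, k_floor), max(omega, omega_floor)

# Inflow values decay by orders of magnitude before reaching the aircraft.
k, w = decayed_k_omega(k_in=1e-3, omega_in=1e3, t=1.0)
k, w = with_floor(k, w, k_floor=1e-8, omega_floor=1.0)
```

The closed form makes the paper's point concrete: if the inflow time scale (1/omega) is set too short, nearly all of the decay happens in the first few grid cells, where it cannot be resolved, so floors on the ambient values are a more robust control.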
MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model
NASA Technical Reports Server (NTRS)
James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert
2008-01-01
The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approximately 84.5%) and alpha particles (approximately 14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport of over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.
Calculation of canopy resistance with a recursive evapotranspiration model
Technology Transfer Automated Retrieval System (TEKTRAN)
The calculation of hourly and daily crop evapotranspiration (ETc) from weather variables requires a corresponding hourly or daily value of canopy resistance (rc). An iterative method first proposed by MI Budyko to calculate ETc finds the surface canopy temperature (Ts) that satisfies the crop’s ener...
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.
National Stormwater Calculator - Version 1.1 (Model)
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...
A simplified model for unstable temperature field calculation of gas turbine rotor
NASA Astrophysics Data System (ADS)
He, Guangxin
1989-06-01
A simplified model is presented for calculating the unstable temperature field of a cooled turbine rotor by the finite element method. In the simplified model, the outer radius of the calculation domain is chosen smaller than the radius at the bottom of the fir-tree root groove, and an equivalent heat release coefficient is introduced. The calculation can thus be treated as an axisymmetric problem and carried out on a microcomputer. The simplified model has been used to calculate the unstable temperature field during the start-up of a rotor. A comparison with the three-dimensional calculated result shows that the simplified model is satisfactory.
[An empirical model for calculating electron dose distributions].
Leistner, H; Schüler, W
1990-01-01
Dose distributions in radiation fields are calculated for irradiation-planning purposes, predominantly from measured depth-dose and cross-distributions. The measurement effort for this is especially high in electron fields, because these distributions have to be measured for all occurring irradiation parameters and at many different tissue depths. At least for the 6-10 MeV electron radiation of the linear accelerator Neptun 10p, it can be shown that all required distributions can be calculated from a single separately measured depth-dose and cross-distribution. For this purpose the depth-dose distribution and the measured lateral fall-off of the cross-distribution are tabulated, and the abscissas are subjected to a linear transformation x' = k.x. For the depth-dose distribution the transformation factor k depends on the electron energy only; for the cross-distribution it additionally depends on tissue depth and source-surface distance. PMID:2356295
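The linear abscissa transformation x' = k*x applied to a single tabulated reference distribution might look like the following sketch; the table values, the factor k, and the function name are hypothetical, for illustration only:

```python
import numpy as np

def transform_profile(x_ref, d_ref, k):
    """Scale a tabulated reference distribution d_ref(x_ref) by x' = k*x.

    The value at position x in the new geometry is read off the reference
    curve at x/k, i.e. d_new(x) = d_ref(x/k), using linear interpolation.
    """
    x_new = np.asarray(x_ref, dtype=float)
    return np.interp(x_new / k, x_ref, d_ref)

# Hypothetical reference depth-dose table (depth in cm, relative dose)
x_ref = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
d_ref = np.array([0.85, 1.00, 0.90, 0.40, 0.05])
d_scaled = transform_profile(x_ref, d_ref, k=1.25)  # k from an energy fit
```

A single reference table plus an energy-dependent (or depth-dependent) factor k then stands in for a whole family of measured curves, which is the measurement-effort saving the abstract describes.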
Thermochemical data for CVD modeling from ab initio calculations
Ho, P.; Melius, C.F.
1993-12-31
Ab initio electronic-structure calculations are combined with empirical bond-additivity corrections to yield thermochemical properties of gas-phase molecules. A self-consistent set of heats of formation for molecules in the Si-H, Si-H-Cl, Si-H-F, Si-N-H and Si-N-H-F systems is presented, along with preliminary values for some Si-O-C-H species.
NASA Astrophysics Data System (ADS)
Panov, G. A.; Zakharov, M. A.
2015-11-01
The present work is devoted to the calculation of phase diagrams of AIIIBV systems within the framework of the generalized lattice model, taking account of volume effects. The theoretically calculated phase diagrams are compared with the corresponding experimental diagrams.
Calculation of the Aerodynamic Behavior of the Tilt Rotor Aeroacoustic Model (TRAM) in the DNW
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance and airloads for helicopter mode operation, as well as calculated induced and profile power. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Moeller, M. P.; Urbanik, T., II; Desrosiers, A. E.
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response), which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
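The density-dependent segment velocity at the heart of such simulations can be illustrated with a Greenshields-type speed-density relation. The constants and function names below are illustrative stand-ins, not the relations actually coded in CLEAR:

```python
def segment_speed(density, v_free=90.0, jam_density=120.0):
    """Greenshields-type speed-density relation (km/h, vehicles/km/lane).

    Speed falls linearly with vehicle density and reaches zero at jam
    density; the constants here are illustrative, not those of CLEAR.
    """
    return max(v_free * (1.0 - density / jam_density), 0.0)

def travel_time_h(length_km, density):
    """Traversal time of one road segment at the given vehicle density."""
    v = segment_speed(density)
    return float("inf") if v == 0.0 else length_km / v

# A lightly loaded versus a heavily loaded 2 km segment
light = travel_time_h(2.0, density=20.0)
heavy = travel_time_h(2.0, density=110.0)
assert heavy > light
```

Chaining such segment times along each evacuation route, and adding queue delays at intersections and the distribution of preparation times, yields the kind of network-wide evacuation time estimate the paper reports.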
NASA Astrophysics Data System (ADS)
Dekker, C. M.; Sliggers, C. J.
To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria, the developers of these models and the producers of software packages in this field can assure and account for the quality of their products. In this way, critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming (including user-friendliness), and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.
A stirling engine computer model for performance calculations
NASA Technical Reports Server (NTRS)
Tew, R.; Jefferies, K.; Miao, D.
1978-01-01
To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
BEN: A model to calculate the economic benefit of noncompliance. User's manual
Not Available
1992-10-01
The Agency developed the BEN computer model to calculate the economic benefit a violator derives from delaying or avoiding compliance with environmental statutes. In general, the Agency uses the BEN computer model to assist its own staff in developing settlement penalty figures. While the primary purpose of the BEN model is to calculate the economic benefit of noncompliance, the model may also be used to calculate the after tax net present value of a pollution prevention or mitigation project and to calculate 'cash outs' in Superfund cases. The document, the BEN User's Manual, contains all the formulas that make up the BEN computer model and is freely available to the public upon request.
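The core idea, economic benefit as the difference between the present values of on-time and delayed compliance spending, can be sketched as follows. The cash-flow structure, parameter names, and numbers are illustrative assumptions; the actual BEN model additionally handles taxes, inflation, and avoided (never-incurred) costs:

```python
def present_value(cost, years_from_now, discount_rate):
    """Discount a single future cash flow back to today."""
    return cost / (1.0 + discount_rate) ** years_from_now

def economic_benefit(capital_cost, annual_cost, delay_years,
                     horizon_years, discount_rate=0.10):
    """Benefit of noncompliance = PV(on-time spending) - PV(delayed spending).

    Illustrative only: a violator who delays compliance pushes the capital
    outlay and annual operating costs into the future, where they are
    worth less in present-value terms.
    """
    def pv_schedule(start):
        pv = present_value(capital_cost, start, discount_rate)
        pv += sum(present_value(annual_cost, t, discount_rate)
                  for t in range(start, horizon_years))
        return pv

    return pv_schedule(0) - pv_schedule(delay_years)

benefit = economic_benefit(capital_cost=1_000_000, annual_cost=50_000,
                           delay_years=3, horizon_years=10)
assert benefit > 0
```

Because money spent later is discounted more heavily, any positive delay yields a positive benefit, which is the quantity a settlement penalty is meant to recapture.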
Code System for Calculating Ion Track Condensed Collision Model.
Energy Science and Technology Software Center (ESTSC)
1997-05-21
Version 00. ICOM calculates the transport characteristics of ion radiation for application to radiation protection, dosimetry and microdosimetry, and the radiation physics of solids. Ions in the range Z = 1-92 are handled. The energy range for protons is 0.001-10,000 MeV; for other ions it is 0.001-100 MeV/nucleon. Computed quantities include stopping powers; ranges; spatial, angular and energy distributions of particle current and fluence; spatial distributions of the absorbed dose; and spatial distributions of thermalized ions.
Comparison of electron width models for fast line profile calculations
NASA Astrophysics Data System (ADS)
Iglesias, Carlos A.
2016-03-01
The first non-vanishing term in the perturbation expansion of the electron contribution to the line width, commonly used in spectral line broadening by plasmas, was previously expressed in terms of the thermally averaged bremsstrahlung Gaunt factor. The approximations in the derivation, however, suggest that the result is uncertain. The electron width formula is tested with the hydrogen Balmer series and found suspect. Calculations for the He II Lyman series also display similar difficulties. The limitation of this electron width formulation is traced to the absence of an explicit strong collision cutoff beyond which the second-order theory is invalid.
Calculation of astrophysical spallation reactions using the RENO model
NASA Technical Reports Server (NTRS)
Ayres, C. L.; Schmitt, W. F.; Merker, M.; Shen, B. S. P.
1974-01-01
The RENO model for the Monte Carlo treatment of astrophysical spallation reactions has been used to generate preliminary cross sections for the purpose of illustrating the discrete-nucleon approach to spallation modeling and to exhibit differences between two versions of RENO. Comparisons with experimental, theoretical, and semiempirical data demonstrate the practicability of the discrete-nucleon approach.
Carbon fiber dispersion models used for risk analysis calculations
NASA Technical Reports Server (NTRS)
1979-01-01
For evaluating the downwind, ground level exposure contours from carbon fiber dispersion, two fiber release scenarios were chosen. The first is the fire and explosion release in which all of the fibers are released instantaneously. This model applies to accident scenarios where an explosion follows a short-duration fire in the aftermath of the accident. The second is the plume release scenario in which the total mass of fibers is released into the fire plume. This model applies to aircraft accidents where only a fire results. These models are described in detail.
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a model type. 600.207-93 Section 600.207-93 Protection of Environment ENVIRONMENTAL PROTECTION... Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.207-93 Calculation of fuel economy values for a model type. (a) Fuel economy values for...
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The operation of solidification model 1 is described. Model 1 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of horizontal axisymmetric bidirectional solidification. The calculation is restricted to steady-state solidification; there is no variation in final local average composition in the direction of isotherm movement. The physics of the model are given.
Detailed Configuration Calculations for Non-LTE Modeling
NASA Astrophysics Data System (ADS)
Fontes, Christopher J.; Abdallah, Joseph, Jr.; Clark, Robert E. H.; Kilcrease, David P.
1998-11-01
We continue our work to explore the feasibility of creating detailed atomic models for radiation-hydrodynamics simulations of ICF applications. By further optimizing our atomic data codes we are able to create non-LTE models with a level of complexity approximately one order of magnitude greater (in size) than previously obtained. We present emissivities for gold which include on the order of 75,000 configurations per temperature-density point. The inclusion of additional configurations has yielded improved results for quantities such as the ion-fraction distributions, but the question of spectral convergence remains unanswered. The creation of still larger models will be discussed, as well as comparisons with experiment and other theories. The possibility of using these models for in-line simulations will also be discussed.
STREAM MODELS FOR CALCULATING POLLUTIONAL EFFECTS OF STORMWATER RUNOFF
Three related studies are described which provide the means to quantify the pollutional and hydraulic effects on flowing streams caused by stormwater runoff. Mathematical stream models were developed to simulate the biological, physical, chemical, and hydraulic reactions which oc...
Mathematical model partitioning and packing for parallel computer calculation
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
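A greedy packing of equation execution times into a minimum number of processors can be sketched with a first-fit-decreasing heuristic. This is a stand-in illustration under assumed inputs; the paper's own packing algorithm is not specified here and must also respect data-dependency ordering between equations:

```python
def pack_equations(exec_times, frame_time):
    """First-fit-decreasing packing of equation execution times.

    One 'bin' per processor, with capacity equal to the update-frame
    budget. Equations are placed longest-first into the first processor
    with enough remaining budget; a new processor is opened otherwise.
    """
    processors = []  # each entry: [remaining_budget, [equation indices]]
    order = sorted(range(len(exec_times)), key=lambda i: -exec_times[i])
    for i in order:
        t = exec_times[i]
        for proc in processors:
            if proc[0] >= t:
                proc[0] -= t
                proc[1].append(i)
                break
        else:  # no processor had room: open a new one
            processors.append([frame_time - t, [i]])
    return [p[1] for p in processors]

layout = pack_equations([0.4, 0.3, 0.3, 0.2, 0.5, 0.3], frame_time=1.0)
```

First-fit-decreasing is a classic bin-packing heuristic: not always optimal, but it never uses more than roughly 11/9 of the optimal number of bins, which makes it a reasonable baseline for minimizing processor count.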
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of the air pollution concentration inside street canyons with the emission rate, the width of the canyon, a dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters, whose functional forms have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of the results shows good agreement between estimated and observed hourly concentrations (e.g. the fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
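The concentration scaling described (emission rate, canyon width, dispersive velocity scale, background) can be sketched as follows. The quadrature rule for combining wind- and traffic-induced turbulence, the parameter values, and the function name are assumptions for illustration; in the paper the two dimensionless parameters have fitted functional forms:

```python
import math

def street_canyon_concentration(emission_rate, canyon_width,
                                wind_speed, traffic_speed,
                                background, alpha=0.1, beta=0.05):
    """Street-level concentration scaling in the spirit of SEUS.

    c = Q / (W * u_d) + c_b, with a dispersive velocity scale u_d that
    combines wind- and traffic-induced turbulence. alpha and beta stand
    in for the two dimensionless empirical parameters of the model.
    """
    u_d = math.hypot(alpha * wind_speed, beta * traffic_speed)
    return emission_rate / (canyon_width * u_d) + background

# Illustrative numbers only (consistent units assumed throughout)
c = street_canyon_concentration(emission_rate=0.5, canyon_width=20.0,
                                wind_speed=3.0, traffic_speed=12.0,
                                background=40.0)
```

The structure makes the reported behavior plausible: at low wind speeds the traffic term dominates u_d and keeps the concentration finite, while at higher wind speeds the wind term grows and the street contribution above background shrinks.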
The Martian Plasma Environment: Model Calculations and Observations
NASA Astrophysics Data System (ADS)
Lichtenegger, H. I. M.; Dubinin, E.; Schwingenschuh, K.; Riedler, W.
Based on a modified version of the model of an induced Martian magnetosphere developed by Luhmann (1990), the dynamics and spatial distribution of different planetary ion species are examined. Three main regions are identified: a cloud of ions travelling along cycloidal trajectories, a plasma mantle and a plasma sheet. The latter predominantly consists of oxygen ions of ionospheric origin with minor portions of light particles. Comparison of model results with Phobos-2 observations shows reasonable agreement.
Statistical-mechanical aids to calculating term-structure models
NASA Astrophysics Data System (ADS)
Ingber, Lester
1990-12-01
Recent work in statistical mechanics has developed new analytical and numerical techniques to solve coupled stochastic equations. This paper describes application of the very fast simulated reannealing and path-integral methodologies to the estimation of the Brennan and Schwartz two-factor term-structure (time-dependent) model of bond prices. It is shown that these methodologies can be utilized to estimate more complicated n-factor nonlinear models. Applications to other systems are stressed.
Remote field eddy current technique - Phantom exciter model calculations
NASA Astrophysics Data System (ADS)
Atherton, D. L.; Czura, W.
1993-03-01
High-resolution results of finite element calculations for remote field eddy current 'phantom exciter' simulations of slit defect interactions using single through-wall transit are presented. These show that fine circumferential slits cause almost no field perturbations in the case of nonferromagnetic tubes but large perturbations in ferromagnetic tubes, where high magnetic H fields occur in the slits. Defect-induced magnetic field perturbations must therefore be considered in addition to eddy current perturbations when ferromagnetic materials are inspected, particularly in the case of fine slits orthogonal to the magnetic field direction. Additional details seen are the funnelling of energy into slits in ferromagnetic pipes and precursor disturbances of fields approaching defects. It is suggested that these are due to the reflection of the electromagnetic waves dictated by boundary conditions at the near-side defect boundary.
Model Calculations of Continuous-Wave Laser Ionization of Krypton
Bret D. Cannon
1999-07-27
This report describes modeling of a scheme that uses continuous-wave (CW) lasers to ionize selected isotopes of krypton with high isotopic selectivity. The models predict that combining this ionization scheme with mass spectrometric measurement of the resulting ions can be the basis for ultra-sensitive methods to measure ^85Kr in the presence of a 10^11 excess of the stable krypton isotopes. Two experimental setups are considered in this model: the first is for krypton as a static gas, the second for krypton in an atomic beam. In the static-gas experiment, for a total krypton pressure of 10^-4 torr and 10 W of power in the cavity, the model predicts a total krypton ion current of 4.6 x 10^8 s^-1 and, for a ^85Kr/Kr ratio of 10^-11, a ^85Kr ion current of 3.5 s^-1, or about 10,000 per hour. The atomic beam setup allowed higher isotopic selectivity; the model predicts a ^85Kr ion current of 18 s^-1, or 65,000 per hour.
Improved Dielectric Solvation Model for Electronic Structure Calculations
Chipman, Daniel
2015-12-16
This project was originally funded for the three year period from 09/01/2009 to 08/31/2012. Subsequently a No-Cost Extension was approved for a revised end date of 11/30/2013. The primary goals of the project were to develop continuum solvation models for nondielectric short-range interactions between solvent and solute that arise from dispersion, exchange, and hydrogen bonding. These goals were accomplished and are reported in the five peer-reviewed journal publications listed in the bibliography below. The secondary goals of the project included derivation of analytic gradients for the models, improvement of the cavity integration scheme, application of the models to the core-level spectroscopy of water, and several other miscellaneous items. These goals were not accomplished because they depended on completion of the primary goals, after which there was a lack of time for any additional effort.
Glass viscosity calculation based on a global statistical modelling approach
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often overestimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
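A complete viscosity curve of this kind is commonly represented by a Vogel-Fulcher-Tammann (VFT) relation, whose three coefficients a composition-based statistical model predicts. A minimal sketch of the VFT form and its inversion to an isokom (constant-viscosity) temperature; the coefficient values in the usage example are illustrative placeholders, not fitted output of this model:

```python
def vft_log_viscosity(T, A, B, T0):
    """Vogel-Fulcher-Tammann relation: log10(viscosity in Pa.s) = A + B/(T - T0), T in K."""
    return A + B / (T - T0)

def isokom_temperature(log_eta, A, B, T0):
    """Invert VFT: temperature at which the melt reaches a given log10 viscosity."""
    return T0 + B / (log_eta - A)
```

For example, with the placeholder coefficients A = -2.5, B = 4000 K, T0 = 500 K, `isokom_temperature(2.0, ...)` returns the temperature at which log10(eta) = 2, and feeding that temperature back into `vft_log_viscosity` recovers 2.0.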
The use of model potentials in molecular calculations. II
NASA Astrophysics Data System (ADS)
Sakai, Y.; Huzinaga, S.
1982-03-01
The model potential method is applied to CO, HCl, P2, Cl2, SH2, Cu2, Br2, Ni(CO)4, and Pd(CO)4. The results are generally very satisfactory. Reduction of computing cost is substantial for molecules containing heavy atoms.
Modeling the aeroacoustics of axial fans from CFD calculations
NASA Astrophysics Data System (ADS)
Salesky, Alexandre; Hennemand, Vincent; Kouidri, Smaine; Berthelot, Yves
2002-11-01
The main source of aeroacoustic noise in axial fans is the distribution of the fluctuating, unsteady aerodynamic forces on the blades. Numerical simulations were carried out with the CFD code NUMECA, first under steady flow conditions to validate the aerodynamic performance (pressure drop as a function of flow rate) of the simulated six-bladed axial fans. Simulations were then made with unsteady flows to compute the fluctuating force distributions on the blades. The turbulence was modeled either with the Baldwin-Lomax model or with the k-epsilon model (extended wall function). The numerical results were satisfactory both in terms of numerical convergence and in terms of the physical characteristics of the forces acting on the blades. The numerical results were then coupled into an in-house aeroacoustics code that computes the far-field radiated noise spectrum and directivity, based on the Ffowcs Williams-Hawkings formulation or, alternatively, on the simpler Lowson model. Results compared favorably with data obtained under nonanechoic conditions, based upon the ISO 5801 and ISO 5136 standards.
On the figure of merit model for SEU rate calculations
Barak, J.; Reed, R.A.; LaBel, K.A.
1999-12-01
Petersen has introduced a one-parameter characterization of a device by the figure of merit (FOM). It was claimed that this parameter is sufficient to estimate the SEU rate in almost all orbits. The present paper presents an analytic study of the FOM concept and compares the FOM model with other empirical models. It is found that the FOM parameter indeed gives, in most cases, good agreement with the rates found using the full SEU cross-section plots of the devices. The agreement is poorer in cases where a large portion of the proton flux comes from low-energy protons, and for very SEU-hard devices. This is demonstrated for certain devices (FPGAs) where the FOM predicted from proton data may be smaller by an order of magnitude than the FOM from heavy ions.
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Progress in Earth System Modeling since the ENIAC Calculation
NASA Astrophysics Data System (ADS)
Fung, I.
2009-05-01
The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently to the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.
Suomi NPP VIIRS Striping Analysis using Radiative Transfer Model Calculations
NASA Astrophysics Data System (ADS)
Wang, Z.; Cao, C.
2015-12-01
Modern satellite radiometers such as VIIRS have many detectors with slightly different relative spectral responses (RSRs). These differences can introduce artifacts such as striping in the imagery. In recent studies we analyzed the striping pattern related to detector-level RSR differences in the VIIRS Thermal Emissive Bands (TEB) M15 and M16, including a line-by-line radiative transfer model (LBLRTM) study of detector-level response and an evaluation of onboard detector stability using the solar diffuser. We now extend this analysis to the Reflective Solar Bands (RSB), using the MODTRAN atmospheric radiative transfer model (RTM) for detector-level radiance simulation. Previous studies analyzed the striping pattern in images of VIIRS ocean color and RSB reflectance, but further study of the root cause of the striping is still needed. Here we use the MODTRAN model at a spectral resolution of 1 cm^-1 under different atmospheric conditions for the VIIRS RSB, for example band M1, centered at 410 nm, which is used for ocean color product retrieval. The impact of detector-level RSR differences, atmospheric dependence, and solar geometry on the striping in VIIRS SDR imagery will be investigated. The cumulative histogram method used successfully for the TEB striping analysis will be used to quantify the striping. These analyses help S-NPP and J1 to better understand the root cause of VIIRS image artifacts and to reduce the uncertainties in geophysical retrievals to meet user needs.
Chambers, R.; Laats, E.T.
1981-01-01
A preliminary set of nine evaluation models (EMs) was added to the FRAPCON-1 computer code, which is used to calculate fuel rod behavior in a nuclear reactor during steady-state operation. The intent was to provide an audit code to be used in the United States Nuclear Regulatory Commission (NRC) licensing activities when calculations of conservative fuel rod temperatures are required. The EMs place conservatisms on the calculation of rod temperature by modifying the calculation of rod power history, fuel and cladding behavior models, and materials properties correlations. Three of the nine EMs provide either input or model specifications, or set the reference temperature for stored energy calculations. The remaining six EMs were intended to add thermal conservatism through model changes. To determine the relative influence of these six EMs upon fuel behavior calculations for commercial power reactors, a sensitivity study was conducted. That study is the subject of this paper.
Some atmospheric scattering considerations relevant to BATSE: A model calculation
NASA Technical Reports Server (NTRS)
Young, John H.
1986-01-01
The orbiting Burst and Transient Source Experiment (BATSE) will locate gamma-ray burst sources by analysis of the relative numbers of photons coming directly from a source and entering its prescribed array of detectors. In order to locate burst sources accurately, it is thus necessary to identify and correct for any counts contributed by events other than direct entry by a mainstream photon. An effort is described which estimates the number of photons that might be scattered into the BATSE detectors by interactions with the Earth's atmosphere. A model was developed which yielded analytical expressions for single-scatter photon contributions in terms of source and satellite locations.
Nuclear model calculations and their role in space radiation research.
Townsend, L W; Cucinotta, F A; Heilbronn, L H
2002-01-01
Proper assessment of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality or impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutron and ion) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. PMID:12539757
RADIOGRAPHIC BENCHMARK PROBLEM 2009 - SCATTER CALCULATIONS IN MODELLING
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2010-02-22
Code validation is a permanent concern in computer simulation, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating each other, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the scattered radiation prediction. An update of the results of the 2008 benchmark is presented. Additionally we discuss the extension of this benchmark to the lower energy part, for 60 and 80 keV, as well as to higher energies up to 10 MeV to study the contribution of pair production. Of special interest are the primary radiation (attenuation law as reference), the total scattered radiation, the relative contribution of scattered radiation separated by order of scatter events (1st, 2nd, ..., 20th), and the spectrum of scattered radiation. We present the results of three Monte Carlo codes (MC-Ray, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and compare to MCNP as reference.
NASA Astrophysics Data System (ADS)
Preobrazhenskii, M. P.; Rudakov, O. B.
2016-01-01
A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the proposed model were calculated for a series of solutions. A correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The value of this parameter is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.
Efficient distance calculation using the spherically-extended polytope (s-tope) model
NASA Technical Reports Server (NTRS)
Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep
1991-01-01
An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.
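The distance computation can be posed as a small convex program: any point of an s-tope is a convex combination of the sphere centres, carrying the correspondingly weighted radius. The following is a sketch of that idea using a Frank-Wolfe iteration over the two weight simplices; it is an illustrative solver under that formulation, not the authors' published linear-time algorithm:

```python
import numpy as np

def stope_distance(centers_a, radii_a, centers_b, radii_b, iters=2000):
    """Euclidean distance between two s-topes (convex hulls of spheres).

    Minimizes ||A^T x - B^T y|| - r_a.x - r_b.y over the two probability
    simplices via Frank-Wolfe; the linear subproblem over a simplex is
    solved by picking the single best vertex (sphere).
    """
    A = np.asarray(centers_a, float)
    B = np.asarray(centers_b, float)
    ra = np.asarray(radii_a, float)
    rb = np.asarray(radii_b, float)
    x = np.full(len(A), 1.0 / len(A))   # weights over spheres of object A
    y = np.full(len(B), 1.0 / len(B))   # weights over spheres of object B
    for k in range(iters):
        d = A.T @ x - B.T @ y
        n = np.linalg.norm(d)
        if n < 1e-12:
            return 0.0                   # hulls of the centres intersect
        u = d / n                        # gradient direction of the norm term
        ga = A @ u - ra                  # gradient w.r.t. x
        gb = -(B @ u) - rb               # gradient w.r.t. y
        sa = np.zeros_like(x); sa[np.argmin(ga)] = 1.0
        sb = np.zeros_like(y); sb[np.argmin(gb)] = 1.0
        step = 2.0 / (k + 2)
        x = (1 - step) * x + step * sa
        y = (1 - step) * y + step * sb
    dist = np.linalg.norm(A.T @ x - B.T @ y) - ra @ x - rb @ y
    return max(dist, 0.0)
```

For two single-sphere s-topes this reduces exactly to center distance minus the radii; for a capsule (two spheres) against a sphere it converges to the segment-to-point distance minus the radii.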
The report describes a version of EPA's electrostatic precipitator (ESP) model suitable for use on a Texas Instruments Programmable 59 (TI-59) hand-held calculator. This version of the model allows the calculation of ESP collection efficiency, including corrections for non-ideal ...
Refilling of a Hydraulically Isolated Embolized Xylem Vessel: Model Calculations
VESALA, TIMO; HÖLTTÄ, TEEMU; PERÄMÄKI, MARTTI; NIKINMAA, EERO
2003-01-01
When they are hydraulically isolated, embolized xylem vessels can be refilled while adjacent vessels remain under tension. This implies that the pressure of water in the refilling vessel must be equal to the bubble gas pressure, which sets physical constraints for recovery. A model of water exudation into the cylindrical vessel and of bubble dissolution, based on the assumption of hydraulic isolation, is developed. Refilling is made possible by the turgor of the living cells adjacent to the refilling vessel, and by a reflection coefficient below 1 for the exchange of solutes across the interface between the vessel and the adjacent cells. No active transport of solutes is assumed. Living cells are also capable of importing water from the water-conducting vessels. The most limiting factors were found to be the osmotic potential of living cells and the ratio of the volume of the adjacent living cells to that of the embolized vessel. With values for these of 1.5 MPa and 1, respectively, refilling times were on the order of hours for a broad range of possible values of water conductivity coefficients and effective diffusion distances for dissolved air, when the xylem water tension was below 0.6 MPa and constant. Inclusion of the daily pattern of xylem tension improved the simulations. The simulated gas pressure within the refilling vessel was in accordance with recent experimental results. The study shows that the refilling process is physically possible under hydraulic isolation, while water in surrounding vessels is under negative pressure. However, the osmotic potentials in the refilling vessel tend to be large (on the order of 1 MPa). Only if the xylem water tension is, at most, twice atmospheric pressure, the reflection coefficient remains close to 1 (0.95) and the ratio of the volume of the adjacent living cells to that of the embolized vessel is about 2, does the osmotic potential stay below 0.4 MPa. PMID:12588721
Model Calculations of Ocean Acidification at the End Cretaceous
NASA Astrophysics Data System (ADS)
Tyrrell, T.; Merico, A.; Armstrong McKay, D. I.
2014-12-01
Most episodes of ocean acidification (OA) in Earth's past were either too slow or too minor to provide useful lessons for understanding the present. The end-Cretaceous event (66 Mya) is special in this sense, both because of its rapid onset and because many calcifying species (including 100% of ammonites and >95% of calcareous nannoplankton and planktonic foraminifera) went extinct at this time. We used box models of the ocean carbon cycle to evaluate whether impact-generated OA could feasibly have been responsible for the calcifier mass extinctions. We simulated several proposed consequences of the asteroid impact: (1) vaporisation of gypsum (CaSO4) and carbonate (CaCO3) rocks at the point of impact, producing sulphuric acid and CO2, respectively; (2) generation of NOx by the impact pressure wave and other sources, producing nitric acid; (3) release of CO2 from wildfires, biomass decay and disinterring of fossil organic carbon and hydrocarbons; and (4) ocean stirring leading to introduction into the surface layer of deep water with elevated CO2. We simulated additions over (A) a few years (e-folding time of 6 months) and (B) a few days (e-folding time of 10 hours) for SO4 and NOx, as recently proposed by Ohno et al. (2014, Nature Geoscience, 7:279-282). Sulphuric acid from gypsum vaporisation was found to be the most important acidifying process. Results will also be presented for the amounts of SO4 required to make the surface ocean extremely undersaturated (Ω_calcite < 0.5) for different e-folding times and combinations of processes. These will be compared with literature estimates of how much SO4 was actually released.
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
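The core relation behind such a calculation is that transmissivity is hydraulic conductivity times aquifer thickness, with conductivity depending on intrinsic permeability and on the viscosity of the ground water, which in turn varies with temperature. A minimal sketch of that chain; the Vogel-type water-viscosity correlation is a common empirical form assumed here for illustration, not the documented code of the program:

```python
def water_viscosity(T_kelvin):
    """Dynamic viscosity of pure water in Pa.s (assumed empirical correlation)."""
    return 2.414e-5 * 10 ** (247.8 / (T_kelvin - 140.0))

def transmissivity(k, b, T_kelvin=293.15, rho=998.0, g=9.81):
    """Transmissivity T = K * b [m^2/s], with hydraulic conductivity
    K = k * rho * g / mu, where k is intrinsic permeability [m^2]
    and b is aquifer thickness [m]."""
    return k * rho * g / water_viscosity(T_kelvin) * b
```

Warmer (or more saline, via an extended correlation) water is less viscous, so the same permeability and thickness yield a higher transmissivity, which is the physical basis the program exploits for its relative-transmissivity arrays.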
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations in the time domain are applied to obtain the longitudinal dispersion coefficient (E(x)) and the BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The formulations of the inverse calculation are established separately for the different flow directions in the tidal river. The results of this paper indicate that the BOD values calculated with the inverse method developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive in the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models. PMID:25026574
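In the steady-state, single-direction limit, the advection-dispersion-decay balance underlying such a BOD model, E c'' - u c' - K c = 0, has a closed-form downstream profile that shows how the two inversely calculated coefficients enter. A sketch of that limiting case (the time-domain, flow-direction-dependent inverse calculation of the paper is more involved):

```python
import math

def bod_profile(x, c0, u, E, K):
    """Steady-state BOD downstream of a source in a dispersive river.

    Solves E c'' - u c' - K c = 0 with c(0) = c0 and c -> 0 downstream:
    c(x) = c0 * exp(m x), with the decaying root
    m = (u - sqrt(u^2 + 4 E K)) / (2 E)  (negative).
    x in m, u in m/s, E in m^2/s, K in 1/s.
    """
    m = (u - math.sqrt(u * u + 4.0 * E * K)) / (2.0 * E)
    return c0 * math.exp(m * x)
```

As the dispersion coefficient E goes to zero, the profile approaches the plug-flow solution c0 * exp(-K x / u), which is a useful sanity check on the root chosen.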
A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2007-01-01
The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
S-values calculated from a tomographic head/brain model for brain imaging
NASA Astrophysics Data System (ADS)
Chao, Tsi-chian; Xu, X. George
2004-11-01
A tomographic head/brain model was developed from the Visible Human images and used to calculate S-values for brain imaging procedures. This model contains 15 segmented sub-regions including caudate nucleus, cerebellum, cerebral cortex, cerebral white matter, corpus callosum, eyes, lateral ventricles, lenses, lentiform nucleus, optic chiasma, optic nerve, pons and middle cerebellar peduncle, skull CSF, thalamus and thyroid. S-values for C-11, O-15, F-18, Tc-99m and I-123 have been calculated using this model and a Monte Carlo code, EGS4. Comparison of the calculated S-values with those calculated from the MIRD (1999) stylized head/brain model shows significant differences. In many cases, the stylized head/brain model resulted in smaller S-values (as much as 88%), suggesting that the doses to a specific patient similar to the Visible Man could have been underestimated using the existing clinical dosimetry.
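The S-value itself follows the MIRD schema: the mean absorbed dose to a target region per nuclear transition in a source region is the emitted energy per transition weighted by the absorbed fraction, divided by the target mass; the Monte Carlo transport (EGS4 here) supplies the absorbed fractions. A sketch with hypothetical input numbers, not values from this model:

```python
def s_value(delta_mev, absorbed_fraction, target_mass_kg):
    """MIRD schema: S = sum_i Delta_i * phi_i(target <- source) / m_target.

    delta_mev: mean energy emitted per transition for each radiation type (MeV)
    absorbed_fraction: fraction of each emission absorbed in the target
    Returns Gy per nuclear transition.
    """
    MEV_TO_J = 1.602176634e-13
    absorbed = sum(d * phi for d, phi in zip(delta_mev, absorbed_fraction))
    return absorbed * MEV_TO_J / target_mass_kg
```

The sensitivity to anatomy reported in the abstract enters through both the absorbed fractions and the target masses, which is why the tomographic and stylized models can differ by tens of percent.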
Rapid calculation of terrain parameters for radiation modeling from digital elevation data
NASA Technical Reports Server (NTRS)
Dozier, Jeff; Frew, James
1990-01-01
Digital elevation models are now widely used to calculate terrain parameters to determine incoming solar and longwave radiation for use in surface climate models, interpretation of remote-sensing data, and parameters in hydrologic models. Because of the large number of points in an elevation grid, fast algorithms are useful to save computation time. A description is given of rapid methods for calculating slope and azimuth, solar illumination angle, horizons, and view factors for radiation from sky and terrain. Calculation time is reduced by fast algorithms and lookup tables.
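The gradient-based part of such a computation is short with array operations; a sketch using centred finite differences (the aspect sign convention is an assumption here and differs between terrain packages):

```python
import numpy as np

def slope_aspect(z, dx):
    """Slope (radians) and aspect from a DEM grid via centred differences.

    z: 2-D elevation array (rows = y, columns = x), dx: grid spacing (same
    units as z). Aspect convention is illustrative and package-dependent.
    """
    dzdy, dzdx = np.gradient(z, dx)          # derivatives along rows, columns
    slope = np.arctan(np.hypot(dzdx, dzdy))  # angle of steepest descent plane
    aspect = np.arctan2(dzdy, -dzdx)         # assumed convention
    return slope, aspect
```

Horizons and sky/terrain view factors need scans along azimuthal directions rather than a single stencil, which is where the fast algorithms and lookup tables of the paper pay off.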
Application of Dynamic Grey-Linear Auto-regressive Model in Time Scale Calculation
NASA Astrophysics Data System (ADS)
Yuan, H. T.; Don, S. W.
2009-01-01
Because of the influence of different noise sources and other factors, the running of an atomic clock is very complex. In order to forecast the velocity of an atomic clock accurately, it is necessary to study and design a model to calculate its velocity in the near future. Using this velocity, the clock can be applied in the calculation of local atomic time and in the steering of local universal time. In this paper, a new forecast model called the dynamic grey-linear auto-regressive model is studied, and the precision of the new model is given. The new model is tested with real data from the National Time Service Center.
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 0.25-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance for hover and helicopter mode operation, and airloads for helicopter mode. Calculated induced power, profile power, and wake geometry provide additional information about the aerodynamic behavior. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Difficult Budgetary Decisions: A Desk-Top Calculator Model to Facilitate Executive Decisions.
ERIC Educational Resources Information Center
Tweddale, R. Bruce
Presented is a budgetary decision model developed to aid the executive officers in arriving at tentative decisions on enrollment, tuition rates, increased compensation, and level of staffing as they affect the total institutional budget. The model utilizes a desk-top programmable calculator (in this case, a Burroughs Model C 3660). The model…
Band Model Calculations for CFCl3 in the 8-12 micron Region
NASA Technical Reports Server (NTRS)
Silvaggio, Peter M.; Boese, Robert W.; Nanes, Roger
1980-01-01
A Goody random band model with a Voigt line profile is used to calculate the band absorption of CFCl3 at various pressures at room and stratospheric (216 K) temperatures. Absorption coefficients and line spacings are computed.
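In the Lorentz-broadening limit the Goody random model has a closed form for the mean band transmittance; the Voigt generalization used in the paper replaces this with a numerical line-shape integral. A sketch of the Lorentz-limit backbone, with the two band parameters passed as the standard dimensionless groupings:

```python
import math

def goody_transmittance(u, S_over_d, S_over_pi_alpha):
    """Goody random band model, Lorentz limit.

    T = exp( -(S/d) u / sqrt(1 + S u / (pi alpha_L)) )
    u: absorber amount; S/d: mean line strength over mean spacing;
    S/(pi alpha_L): strength over pi times the Lorentz half-width.
    """
    return math.exp(-S_over_d * u / math.sqrt(1.0 + S_over_pi_alpha * u))
```

For small absorber amounts this reduces to the linear (weak-line) limit exp(-(S/d) u), while for large u it follows the square-root (strong-line) growth of equivalent width.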
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The software developed for the solidification model is presented. A link between the calculations and the FORTRAN code is provided, primarily in the form of global flow diagrams and data structures. A complete listing of the solidification code is given.
Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes
NASA Astrophysics Data System (ADS)
Qi, Chong; Jia, L. Y.; Fu, G. J.
2016-07-01
Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 194-206Pb. The lighter isotopes are calculated with an importance-truncation approach constructed from the monopole Hamiltonian. The full shell-model results also agree well with our generalized-seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configurations in this region.
Analytical approach to calculation of response spectra from seismological models of ground motion
Safak, Erdal
1988-01-01
An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.
Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.
2008-04-01
A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for the clear-sky scenarios previously examined by the Radiative Transfer Model Intercomparison Project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m^-2 of the LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m^-2 of the LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating-profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths K d^-1 in the troposphere and within 0.15 K d^-1 in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.
Energy Science and Technology Software Center (ESTSC)
2007-07-09
Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double-differential cross sections of light-particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei. Their distribution in both energy and angle is calculated. Since it currently considers the emission of at most two particles in any given reaction, it is most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained.
Evaluation model calculations with the water reactor analysis package (WRAP-EM)
Gregory, M.V.; Beranek, F.
1982-01-01
The Water Reactor Analysis Package-Evaluation Model (WRAP-EM) is a modular system of computer codes designed to provide the safety analyst with the capability of performing complete loss-of-coolant calculations for both pressurized- and boiling-water reactor systems. The system provides a licensing-type calculation capability and thus contains most of the Nuclear Regulatory Commission-Approved EM options, as described in the Code of Federal Regulations, Title 10, Part 50, Appendix K. All phases of an accident (blowdown, refill, and reflood) are modeled. The WRAP consists of modified versions of five preexisting codes (RELAP4/MOD5, GAPCON, FRAP, MOXY, and NORCOOL), the necessary interfaces to permit automatic transition from one code to the next during the transient calculations, plus a host of user-convenience features to aid the analyst faced with a multitude of EM calculations. The WRAP has been verified against both calculated and experimental results.
New quark-model calculations of photo- and electroproduction of N* and Delta* resonances
Capstick, Simon
1992-06-01
An introduction is given to the calculation of resonance electromagnetic couplings in the nonrelativistic quark model. Recent improvements brought about by the inclusion of relativistic corrections to the transition operator are described. We show how such calculations may be further improved by the use of relativized-model wave functions, a modestly increased effective quark mass, and an ab initio calculation of the signs of the N-pi decay amplitudes of the resonances. A summary is given of the results for the photocouplings of all nonstrange baryons, as well as for certain amplitude ratios in electroproduction.
Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model
Kawano, Toshihiko; Moller, Peter; Wilson, William B
2008-01-01
Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after β decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as the Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with the evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different wear levels. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically for a model seal segment at different levels of wear.
Thomas-Fermi Quark Model and Techniques to Improve Lattice QCD Calculation
NASA Astrophysics Data System (ADS)
Liu, Quan
Two topics are discussed separately in this thesis. In the first part a semiclassical quark model, called the Thomas-Fermi quark model, is reviewed. After a modified approach to spin in the model is introduced, I present the calculation of the spectra of octet and decuplet baryons. The six-quark doubly strange H-dibaryon state is also investigated. In the second part, two numerical techniques which improve lattice QCD calculations are covered. The first one, which we call Polynomial-Preconditioned GMRES-DR (PP-GMRES-DR), is used to speed up the solution of large systems of linear equations in LQCD. The second one, called the Polynomial-Subtraction method, is used to help reduce the noise variance of the calculations for disconnected loops in LQCD.
The Calculation of Theoretical Chromospheric Models and the Interpretation of the Solar Spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1998-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are: to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating and for solar activity in general.
Multi-Scale Thermohydrologic Model Sensitivity-Study Calculations in Support of the SSPA
Glascoe, L G; Buscheck, T A; Loosmore, G A; Sun, Y
2001-12-20
The purpose of this calculation report is to document the thermohydrologic (TH) model calculations performed for the Supplemental Science and Performance Analysis (SSPA), Volume 1, Section 5 and Volume 2 (BSC 2001d [DIRS 155950], BSC 2001e [DIRS 154659]). The calculations are documented here in accordance with AP-3.12Q REV0 ICN4 [DIRS 154418]. The Technical Working Plan (TWP) for this document is TWP-NGRM-MD-000015. These TH calculations were primarily conducted using three model types: (1) the Multiscale Thermohydrologic (MSTH) model, (2) the line-averaged-heat-source, drift-scale thermohydrologic (LDTH) model, and (3) the discrete-heat-source, drift-scale thermal (DDT) model. These TH-model calculations were conducted to improve the implementation of the scientific conceptual model, quantify previously unquantified uncertainties, and evaluate how a lower-temperature operating mode (LTOM) would affect the in-drift TH environment. Simulations for the higher-temperature operating mode (HTOM), which is similar to the base case analyzed for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) (CRWMS M&O 2000j [DIRS 153246]), were also conducted for comparison with the LTOM. This Calculation Report describes (1) the improvements to the MSTH model that were implemented to reduce model uncertainty and to facilitate model validation, and (2) the sensitivity analyses conducted to better understand the influence of parameter and process uncertainty. The METHOD Section (Section 2) describes the improvements to the MSTH-model methodology and submodels. The ASSUMPTIONS Section (Section 3) lists the assumptions made (e.g., boundaries, material properties) for this methodology. The USE OF SOFTWARE Section (Section 4) lists the software, routines and macros used for the MSTH model and submodels supporting the SSPA. The CALCULATION Section (Section 5) lists the data used in the model and the manner in which the MSTH model is prepared and executed.
Calculation of individual isotope equilibrium constants for implementation in geochemical models
Thorstenson, Donald C.; Parkhurst, David L.
2002-01-01
Theory is derived from the work of Urey to calculate equilibrium constants, commonly used in geochemical equilibrium and reaction-transport models, for reactions of individual isotopic species. Urey showed that for molecules containing two or more atoms of the same element in equivalent positions, the equilibrium constant K of an isotope exchange reaction is related to the isotope fractionation factor α by K = αⁿ, where n is the number of atoms exchanged. This relation is extended to species containing multiple isotopes and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for all the relevant species in the gas, aqueous, liquid, and solid phases. These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculating the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases.
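Urey's relation quoted above can be illustrated with a toy calculation. The functional form ln α = A/T² and the coefficient used here are assumed placeholders for a generic fractionation factor, not values from the paper:

```python
import math

def exchange_constant(alpha, n):
    """Urey's relation: K of an isotope-exchange reaction with n equivalent atoms exchanged."""
    return alpha ** n

def alpha_of_T(A, T_kelvin):
    """Assumed temperature dependence ln(alpha) = A / T**2 (illustrative form only)."""
    return math.exp(A / T_kelvin ** 2)

# hypothetical coefficient; K follows from alpha at each temperature
for T in (273.15, 298.15, 323.15):
    a = alpha_of_T(2.9e3, T)
    print(f"T = {T:.2f} K: alpha = {a:.5f}, K(n=2) = {exchange_constant(a, 2):.5f}")
```

The temperature dependence of K follows directly from that of α, which is the mechanism the abstract describes.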
Hagedorn Model of Critical Behavior: Comparison of Lattice and SBM Calculations
NASA Astrophysics Data System (ADS)
Turko, Ludwik
The Statistical Bootstrap Model and the related concept of the limiting temperature began the discussion about phase transitions in hadronic matter. This was also the origin of the quark-gluon plasma concept. We discuss here to what extent lattice studies of QCD critical behavior at non-zero chemical potential are compatible with statistical bootstrap model calculations.
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) The manufacturer shall supply total model year sales projections for each car line/vehicle... a model type. 600.207-86 Section 600.207-86 Protection of Environment ENVIRONMENTAL PROTECTION... Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later...
Off-Center Spherical Model for Dosimetry Calculations in Chick Brain Tissue
The paper presents calculations of the electric field and absorbed power density distribution in chick brain tissue inside a test tube, using an off-center spherical model. It is shown that the off-center spherical model overcomes many of the limitations of the concentric spherical model.
The contrast model method for the thermodynamical calculation of air-air wet heat exchanger
NASA Astrophysics Data System (ADS)
Yuan, Xiugan; Mei, Fang
1989-02-01
A 'contrast model' method for the thermodynamic calculation of air-air crossflow wet heat exchangers with initial air condensation is presented. Contrast-model equations are derived from the actual heat exchanger equations as well as imaginary ones; it is then proved that the enthalpy efficiency of the contrast model equations is analogous to the temperature efficiency of the dry heat exchanger. Conditions are noted under which it becomes possible to unify the thermodynamic calculations for wet and dry heat exchangers.
Radiation damage in NaCl: Calculations with an extended Jain-Lidiard model
Soppe, W.J.; Prij, J.
1993-12-31
The colloid growth due to irradiation in a rock salt formation is calculated with an extended version of the Jain-Lidiard model. The extensions of the model comprise a description of the nucleation stage of the colloids and the role of impurities on the formation of defect centers. Results of model calculations are shown for a representative design for a high-level radioactive waste repository in a rock salt formation. It is concluded that it is unlikely that, near the waste containers, the fraction of NaCl that will be converted to metallic Na and molecular Cl centers will exceed a few mole percent.
Dobos, A. P.
2012-05-01
This paper describes an improved algorithm for calculating the six parameters required by the California Energy Commission (CEC) photovoltaic (PV) Calculator module model. Rebate applications in California require results from the CEC PV model, and thus depend on an up-to-date database of module characteristics. Currently, adding new modules to the database requires calculating operational coefficients using a general purpose equation solver - a cumbersome process for the 300+ modules added on average every month. The combination of empirical regressions and heuristic methods presented herein achieve automated convergence for 99.87% of the 5487 modules in the CEC database and greatly enhance the accuracy and efficiency by which new modules can be characterized and approved for use. The added robustness also permits general purpose use of the CEC/6 parameter module model by modelers and system analysts when standard module specifications are known, even if the module does not exist in a preprocessed database.
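The kind of implicit single-diode equation that underlies six-parameter module models such as the CEC's can be sketched as follows. The parameter values are illustrative stand-ins, not entries from the CEC database, and the paper's actual contribution (regressions plus heuristics for determining the six parameters themselves) is not reproduced here:

```python
import math

def diode_current(v, iph, io, rs, rsh, a, tol=1e-10, itmax=100):
    """Newton solve of the implicit single-diode equation
    I = Iph - Io*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh  for the current I."""
    i = iph                       # photocurrent is a good starting guess
    for _ in range(itmax):
        e = math.exp((v + i * rs) / a)
        f = iph - io * (e - 1.0) - (v + i * rs) / rsh - i
        df = -io * e * rs / a - rs / rsh - 1.0
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i

# illustrative parameters (photocurrent, saturation current, series and shunt
# resistance, modified ideality factor) — not a CEC database entry
isc = diode_current(0.0, iph=5.0, io=1.0e-9, rs=0.2, rsh=300.0, a=1.6)
print(f"short-circuit current ≈ {isc:.4f} A")
```

Each evaluation of the model at a given voltage requires solving this implicit equation, which is why robust parameter-finding matters for processing hundreds of modules per month.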
Accurate calculation of conductive conductances in complex geometries for spacecrafts thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the resulting non-linear system of equations is solved to calculate the temperature distribution and the heat flow between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of the two new methods.
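For context, the simplest conductance a TLP model uses is the prismatic-bar formula G = kA/L; the methods in the paper refine exactly this quantity for geometries where no such closed form exists. A minimal sketch with illustrative numbers:

```python
def bar_conductance(k, area, length):
    """Conductive conductance (W/K) of a prismatic bar: G = k * A / L."""
    return k * area / length

# one TLP link between two isothermal nodes; property values are illustrative
G = bar_conductance(k=167.0, area=4.0e-4, length=0.1)   # roughly aluminium, 2x2 cm bar
T1, T2 = 320.0, 300.0                                   # node temperatures, K
Q = G * (T1 - T2)                                       # heat flow along the link, W
print(f"G = {G:.3f} W/K, Q = {Q:.2f} W")
```

In a full TLP model, many such conductances couple the nodes, and the resulting nonlinear system (nonlinear once radiative links are included) is solved for the node temperatures.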
NASA Astrophysics Data System (ADS)
Paranin, Y.; Burmistrov, A.; Salikeev, S.; Fomina, M.
2015-08-01
Basic elements of the calculation procedure for oil-free scroll compressor characteristics are presented. It is shown that mathematical modelling of the working process in a scroll compressor makes it possible to take into account factors influencing the working process such as heat and mass exchange, mechanical interaction in the working chambers, leakage through slots, etc. The basic mathematical model may be supplemented by taking into account external heat exchange, elastic deformation of the scrolls, inlet and outlet losses, etc. To evaluate the influence of the procedure on the accuracy of calculated scroll compressor characteristics, different calculations were carried out. Internal adiabatic efficiency was chosen as the comparative parameter, since it evaluates the perfection of the internal thermodynamic and gas-dynamic compressor processes. Calculated characteristics are compared with experimental values obtained for a compressor pilot sample.
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose: To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. The model is based on measurements of an individual person, and one of its major applications is the calculation of intraocular lenses (IOLs) for cataract surgery. Methods: The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-science methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical-optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using numerical optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results: The more complex the calculated IOL, the lower the residual wavefront error. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a device. Conclusions: The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by calculating customized aspheric IOLs.
Absorbing-sphere model for calculating ion-ion recombination total cross sections.
NASA Technical Reports Server (NTRS)
Olson, R. E.
1972-01-01
An 'absorbing-sphere' model based on the Landau-Zener method is set up for calculating the upper limit thermal energy (300 K) reaction rate and the energy dependence of the total cross sections. The crucial parameter needed for the calculation is the electron detachment energy for the outer electron on the anion. It is found that the cross sections increase with decreasing electron detachment energy.
NASA Astrophysics Data System (ADS)
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes, are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach, which have very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes and the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, owing to its model physics, generally calculates larger separation distances. For the worst-case calculations required in environmental impact assessment studies, a Lagrangian model is therefore to be preferred over a Gaussian one. The study and findings relate to the Austrian odour impact criteria.
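The peak-to-mean idea can be sketched generically: a short-term peak concentration is the mean concentration times a factor that is largest near the source and decays toward 1 with distance. The exponential decay form and all numbers below are assumptions for illustration, not the stability-dependent algorithm used in the paper:

```python
import math

def peak_to_mean(distance_m, f0, decay_length_m):
    """Peak-to-mean factor: f0 near the source, decaying toward 1 far away.
    Exponential form chosen for illustration only."""
    return 1.0 + (f0 - 1.0) * math.exp(-distance_m / decay_length_m)

c_mean = 2.0  # illustrative mean odour concentration, ou/m^3
for x in (50.0, 200.0, 1000.0):
    c_peak = c_mean * peak_to_mean(x, f0=4.0, decay_length_m=400.0)
    print(f"{x:6.0f} m: peak ≈ {c_peak:.2f} ou/m^3")
```

Because the factor decays with distance, the same mean concentration field yields different peak concentrations, and hence different separation distances, at the two sites.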
Code System to Calculate Nuclear Reaction Cross Sections by Evaporation Model.
Energy Science and Technology Software Center (ESTSC)
2000-11-27
Version: 00 Both STAPRE and STAPREF are included in this package. STAPRE calculates energy-averaged cross sections for nuclear reactions with emission of particles and gamma rays and fission. The models employed are the evaporation model with inclusion of pre-equilibrium decay and a gamma-ray cascade model. Angular momentum and parity conservation are accounted for. The major improvement over the 1976 STAPRE program relates to the level-density approach, implemented in subroutine ZSTDE: a generalized superfluid model is incorporated, with Boltzmann-gas modeling of the intrinsic state density and semi-empirical modeling of few-quasiparticle effects in the total level density at equilibrium and saddle deformations of actinide nuclei. In addition to the activation cross sections, particle and gamma-ray production spectra are calculated. Isomeric state populations and production cross sections for gamma rays from low excited levels are obtained, too. For fission, a single- or a double-humped barrier may be chosen.
Calculations of diffuser flows with an anisotropic K-epsilon model
NASA Astrophysics Data System (ADS)
Zhu, J.; Shih, T.-H.
1995-10-01
A newly developed anisotropic K-epsilon model is applied to calculate three axisymmetric diffuser flows with or without separation. The new model uses a quadratic stress-strain relation and satisfies the realizability conditions, i.e., it ensures both the positivity of the turbulent normal stresses and the Schwarz' inequality between any fluctuating velocities. Calculations are carried out with a finite-element method. A second-order accurate, bounded convection scheme and sufficiently fine grids are used to ensure numerical credibility of the solutions. The standard K-epsilon model is also used in order to highlight the performance of the new model. Comparison with the experimental data shows that the anisotropic K-epsilon model performs consistently better than does the standard K-epsilon model in all of the three test cases.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1984-01-01
Models and spectra of sunspots were studied because they are important to energy-balance and variability discussions. Sunspot observations in the ultraviolet region 140 to 168 nm were obtained by the NRL High Resolution Telescope and Spectrograph. Extensive photometric observations of sunspot umbrae and penumbrae in 10 channels covering the wavelength region 387 to 3800 nm were made. Cool-star opacities and model atmospheres were computed. The Sun is the first test case, both to check the opacity calculations against the observed solar spectrum and to check the purely theoretical model calculation against the observed solar energy distribution. Line lists were finally completed for all the molecules that are important in computing statistical opacities for energy balance and for radiative rate calculations in the Sun (except perhaps for sunspots). Because many of these bands are incompletely analyzed in the laboratory, the energy levels are not well enough known to predict wavelengths accurately for spectrum synthesis and for detailed comparison with the observations.
Tabulation of Mie scattering calculation results for microwave radiative transfer modeling
NASA Technical Reports Server (NTRS)
Yeh, Hwa-Young M.; Prasad, N.
1988-01-01
In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
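The table-lookup strategy itself is simple: precompute the Mie quantities once on a grid and interpolate at run time instead of re-running the Mie code. A minimal sketch with fabricated table values (a real table would span the paper's temperature and frequency ranges):

```python
import numpy as np

# stand-in table: Mie extinction efficiency on a 1 K temperature grid
# (a real table would also be indexed by frequency and drop size)
temps = np.linspace(233.0, 303.0, 71)          # K
q_ext_table = 2.0 + 0.001 * (temps - 233.0)    # fabricated smooth values

def lookup_q_ext(t_kelvin):
    """Linear interpolation in the precomputed table, replacing a Mie-code call."""
    return float(np.interp(t_kelvin, temps, q_ext_table))

print(lookup_q_ext(250.0))
```

The speedup reported in the abstract (a factor of 7 to 16) comes from replacing the expensive Mie series evaluation, repeated for every profile level, with this kind of cheap lookup.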
Eged, Katalin; Kis, Zoltán; Voigt, Gabriele
2006-01-01
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. For evaluating this exposure pathway, three main model components are needed: (i) calculation of the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) a description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) a relevant urban model that combines these elements to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of relative surface contamination with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short model-to-model feature intercomparison.
Modification of the Simons model for calculation of nonradial expansion plumes
NASA Technical Reports Server (NTRS)
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The Simons model is a simple model for calculating the expansion plumes of rockets and thrusters and is a widely used engineering tool for the determination of spacecraft impingement effects. The model assumes that the density of the plume decreases radially from the nozzle exit. Although a high degree of success has been achieved in modeling plumes with moderate Mach numbers, the accuracy obtained under certain conditions is unsatisfactory. A modification made to the model that allows effective description of nonradial behavior in plumes is presented, and the conditions under which its use is preferred are prescribed.
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1989-01-01
Numerical calculations of turbulent reattaching shear layers in a divergent channel are presented. The turbulence is described by a multiple-time-scale turbulence model. The turbulent flow equations are solved by a control-volume-based finite difference method. The computational results are compared with those obtained using k-epsilon turbulence models and algebraic Reynolds stress turbulence models. It is shown that the multiple-time-scale turbulence model yields significantly better computational results than the other turbulence models in the region where the turbulence is in a strong nonequilibrium state.
A New Model to Calculate Friction Coefficients and Shear Stresses in Thermal Drilling
Qu, Jun; Blau, Peter Julian
2008-01-01
A new analytical model for thermal drilling (also known as friction drilling) has been developed. The model distinguishes itself from recent work of other investigators by improving on two aspects: (1) it defines material plastic flow in terms of the yield in shear rather than the yield in compression, and (2) it uses a single, variable friction coefficient instead of assuming two unrelated friction coefficients with fixed values. The time dependence of the shear stress and friction coefficient at the hole walls, which cannot be measured directly in thermal drilling, can be calculated with this model from experimentally measured values of the instantaneous thrust force and torque. Good agreement between the calculated shear strengths and handbook values for thermal drilling of low-carbon steel confirms the model's validity.
Photon and electron absorbed fractions calculated from a new tomographic rat model
NASA Astrophysics Data System (ADS)
Peixoto, P. H. R.; Vieira, J. W.; Yoriyaz, H.; Lima, F. R. A.
2008-10-01
This paper describes the development of a tomographic model of a rat developed using CT images of an adult male Wistar rat for radiation transport studies. It also presents calculations of absorbed fractions (AFs) under internal photon and electron sources using this rat model and the Monte Carlo code MCNP. All data related to the developed phantom were made available for the scientific community as well as the MCNP inputs prepared for AF calculations in that phantom and also all estimated AF values, which could be used to obtain absorbed dose estimates—following the MIRD methodology—in rats similar in size to the presently developed model. Comparison between the rat model developed in this study and that published by Stabin et al (2006 J. Nucl. Med. 47 655) for a 248 g Sprague-Dawley rat, as well as between the estimated AF values for both models, has been presented.
NASA Astrophysics Data System (ADS)
Kazantzidis, A.; Balis, D. S.; Bais, A. F.; Kazadzis, S.; Galani, E.; Kosmidis, E.; Blumthaler, M.
2001-06-01
Spectral measurements of global solar irradiance, obtained under cloud-free conditions during the SUSPEN campaign (July 1997) in Thessaloniki, Greece, are compared with radiative transfer model calculations, showing agreement to within ±5% for wavelengths higher than 305 nm. The uncertainties in the modeled spectra were analyzed with respect to the aerosol-related model input parameters (single-scattering albedo and asymmetry factor), which were not derivable from measurements. A range of single-scattering albedo values was used to investigate its impact on surface UV irradiance through comparison of measurements with model calculations. It was found that a difference in the single-scattering albedo of 0.1 changes the model-measurement ratio by 7%-14%, depending on solar zenith angle. Finally, an attempt was made to relate the estimated values of single-scattering albedo to wind direction and relative humidity, which control the origin and type of the aerosols in the area.
Photon and electron absorbed fractions calculated from a new tomographic rat model.
Peixoto, P H R; Vieira, J W; Yoriyaz, H; Lima, F R A
2008-10-01
This paper describes the development of a tomographic model of a rat developed using CT images of an adult male Wistar rat for radiation transport studies. It also presents calculations of absorbed fractions (AFs) under internal photon and electron sources using this rat model and the Monte Carlo code MCNP. All data related to the developed phantom were made available to the scientific community, as well as the MCNP inputs prepared for AF calculations in that phantom and all estimated AF values, which could be used to obtain absorbed dose estimates, following the MIRD methodology, in rats similar in size to the presently developed model. A comparison between the rat model developed in this study and that published by Stabin et al (2006 J. Nucl. Med. 47 655) for a 248 g Sprague-Dawley rat, as well as between the estimated AF values for both models, is presented. PMID:18758003
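Once absorbed fractions are tabulated, the MIRD methodology mentioned above reduces to simple products: an S value per source-target pair, multiplied by the cumulated activity. A minimal sketch, with all numbers illustrative rather than taken from the rat phantom data:

```python
MEV_TO_J = 1.602e-13  # conversion from MeV to joules

def s_value(mean_energy_mev, absorbed_fraction, target_mass_kg):
    """MIRD S value [Gy per decay]: mean emitted energy per decay times
    the absorbed fraction, divided by the target region mass."""
    return mean_energy_mev * MEV_TO_J * absorbed_fraction / target_mass_kg

def absorbed_dose(cumulated_activity_bq_s, s):
    """Absorbed dose D = A_tilde * S for a single source region."""
    return cumulated_activity_bq_s * s

# Illustrative numbers: 0.5 MeV per decay, AF = 0.3, 10 g target organ,
# 1e9 decays in the source region.
s = s_value(0.5, 0.3, 0.01)
dose = absorbed_dose(1e9, s)  # ≈ 2.4e-3 Gy with these values
```

For multiple source regions the doses from each source are summed, each with its own cumulated activity and S value.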
A model of the circulating blood for use in radiation dose calculations
Hui, T.E.; Poston, J.W. Sr.
1987-01-01
Over the last few years there has been a significant increase in the use of radionuclides in leukocyte, platelet, and erythrocyte imaging procedures. Radiopharmaceuticals used in these procedures are confined primarily to the blood, have short half-lives, and irradiate the body as they move through the circulatory system. There is a need for a model of the circulatory system in an adult human that can be used to provide radiation absorbed dose estimates for these procedures. A simplified model has been designed assuming a static circulatory system and including major organs of the body. The model has been incorporated into the MIRD phantom, and calculations have been completed for a number of exposure situations and radionuclides of clinical importance. The model will be discussed in detail and results of calculations using this model will be presented.
Model calculations of radiative capture of nucleons in MeV region
Betak, E.
2006-03-13
We address calculations of neutron and proton radiative capture at incident energies up to 20 MeV on medium and heavy nuclei. The main formalism used is the pre-equilibrium (exciton) model of γ emission. A link to the Consistent Direct-Semidirect model is noted as well. The resulting pre-equilibrium (plus equilibrium) calculations of the radiative capture excitation functions are compared to experimental data, and some cross-section trends important for possible production of therapeutic radioisotopes are extracted.
Two-loop Higgs mass calculations in supersymmetric models beyond the MSSM with SARAH and SPheno
NASA Astrophysics Data System (ADS)
Goodsell, Mark D.; Nickel, Kilian; Staub, Florian
2015-01-01
We present an extension to the Mathematica package SARAH which allows for Higgs mass calculations at the two-loop level in a wide range of supersymmetric (SUSY) models beyond the MSSM. These calculations are based on the effective potential approach and include all two-loop corrections which are independent of electroweak gauge couplings. For the numerical evaluation Fortran code for SPheno is generated by SARAH. This allows the prediction of the Higgs mass in more complicated SUSY models with the same precision that most state-of-the-art spectrum generators provide for the MSSM.
Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow
NASA Astrophysics Data System (ADS)
Kemerink, G. J.; Pleiter, F.
1986-08-01
The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
Calculation of the Entropy of Lattice Polymer Models from Monte Carlo Trajectories.
White, Ronald P; Funt, Jason; Meirovitch, Hagai
2005-07-20
While lattice models are used extensively for macromolecules (synthetic polymers, proteins, etc.), calculation of the absolute entropy, S, and the free energy, F, from a given Monte Carlo (MC) trajectory is not straightforward. Recently we have developed the hypothetical scanning MC (HSMC) method for calculating S and F of fluids. Here we extend HSMC to self-avoiding walks on a square lattice and discuss its wide applicability to complex polymer lattice models. HSMC is independent of existing techniques and thus constitutes an independent research tool; it provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. PMID:16912812
A model to calculate the induced dose rate around an 18 MV ELEKTA linear accelerator.
Perrin, Bruce; Walker, Anne; Mackay, Ranald
2003-03-01
The dose rate due to activity induced by (gamma, n) reactions around an ELEKTA Precise accelerator running at 18 MV is reported. A model to calculate the induced dose rate for a variety of working practices has been derived and compared to the measured values. From this model, the dose received by the staff using the machine can be estimated. From measured dose rates at the face of the linear accelerator for a 10 x 10 cm2 jaw setting at 18 MV, an activation coefficient per MU was derived for each of the major activation products. The relative dose rates at points around the linac head, for different energy and jaw settings, were measured. Dose rates adjacent to the patient support system and portal imager were also measured. A model to calculate the dose rate at these points was derived and compared to those measured over a typical working week. The model was then used to estimate the maximum dose to therapists for the current working schedule on this machine. Calculated dose rates at the linac face agreed to within ±12% of those measured over a week, with a typical dose rate of 4.5 microSv h(-1) 2 min after the beam has stopped. The estimated maximum annual whole-body dose for a treatment therapist, with the machine treating at only 18 MV, for 60,000 MUs per week was 2.5 mSv. This compares well with the value of 2.9 mSv published for a Clinac 21EX. A model has been derived to calculate the dose from the four dominant activation products of an ELEKTA Precise 18 MV linear accelerator. This model is a useful tool for calculating the induced dose rate around the treatment head and can be used to estimate the dose to staff for typical working patterns. PMID:12696804
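Structurally, such an activation model is a sum of decaying exponentials, one term per activation product, each scaled by an activation coefficient per MU. A hedged sketch with made-up coefficients and half-lives (the paper's fitted values for the four products are not reproduced here):

```python
import math

# Hypothetical activation products: (dose-rate coefficient at beam-off
# [uSv/h per delivered MU], half-life [s]). Values are illustrative only.
PRODUCTS = [
    (1.0e-3, 600.0),    # short-lived component
    (5.0e-4, 7200.0),   # longer-lived component
]

def dose_rate(mu_delivered, t_seconds):
    """Induced dose rate [uSv/h] at time t after the beam stops:
    R(t) = MU * sum_i k_i * exp(-ln2 * t / T_half_i)."""
    return sum(
        mu_delivered * k * math.exp(-math.log(2.0) * t_seconds / t_half)
        for k, t_half in PRODUCTS
    )

rate_at_beam_off = dose_rate(1000, 0.0)  # 1.5 uSv/h for 1000 MU with these values
```

Integrating R(t) over the time a therapist spends near the head then gives the per-session dose used for the annual estimate.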
A comparison of Monte Carlo and model-based dose calculations in radiotherapy using MCNPTV
NASA Astrophysics Data System (ADS)
Wyatt, Mark S.; Miller, Laurence F.
2006-06-01
Monte Carlo calculations for megavoltage radiotherapy beams represent the next generation of dose calculation in the clinical environment. In this paper, calculations obtained by the MCNP code based on CT data from a human pelvis are compared against those obtained by a commercial radiotherapy treatment system (CMS XiO). The MCNP calculations are automated by the use of MCNPTV (MCNP Treatment Verification), an integrated application developed in Visual Basic that runs on a Windows-based PC. The linear accelerator beam is modeled as a finite point source, and validated by comparing depth dose curves and lateral profiles in a water phantom to measured data. Calculated water phantom PDDs are within 1% of measured data, but the lateral profiles exhibit differences of 2.4, 5.5, and 5.7 mm at the 60%, 40%, and 20% isodose lines, respectively. A MCNP calculation is performed using the CT data and 15 points are selected for comparison with XiO. Results are generally within the uncertainty of the MCNP calculation, although differences up to 13.2% are seen in the presence of large heterogeneities.
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
2008-08-01
An assessment of thermodynamic models for HFC refrigerant mixtures based on Helmholtz energy equations of state was made through critical-point calculations for ternary and quaternary mixtures. The calculations were performed using critical-point criteria expressed in terms of the Helmholtz free energy. For three ternary mixtures: difluoromethane (R-32) + pentafluoroethane (R-125) + 1,1,1,2-tetrafluoroethane (R-134a), R-125 + R-134a + 1,1,1-trifluoroethane (R-143a), and carbon dioxide (CO2) + R-32 + R-134a, and one quaternary mixture, R-32 + R-125 + R-134a + R-143a, calculated critical points were compared with experimental values, and the capability of the mixture models for representing the critical behavior was discussed.
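The critical-point criteria expressed in terms of the Helmholtz energy take, in the usual Heidemann-Khalil-type formulation, the following form (the notation here is assumed, not taken from the paper):

```latex
\begin{align}
  \det M &= 0, \qquad
  M_{ij} = \left(\frac{\partial^2 A}{\partial n_i\,\partial n_j}\right)_{T,V},\\
  \sum_{i,j,k}
    \left(\frac{\partial^3 A}{\partial n_i\,\partial n_j\,\partial n_k}\right)_{T,V}
    \Delta n_i\,\Delta n_j\,\Delta n_k &= 0,
\end{align}
```

where $A$ is the Helmholtz energy, $n_i$ the mole numbers, and $\Delta n$ the eigenvector of $M$ associated with its zero eigenvalue; the two conditions are solved simultaneously for the critical temperature and volume.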
A finite element method for shear stresses calculation in composite blade models
NASA Astrophysics Data System (ADS)
Paluch, B.
1991-09-01
A finite-element method is developed for accurately calculating shear stresses in helicopter blade models induced by torsion and shearing forces. The method can also be used to compute the equivalent torsional stiffness of the sections, their transverse shear coefficients, and the positions of their centers of torsion. A grid generator method, which is part of the calculation program, is also described and used to discretize the sections quickly and to condition the grid data reliably. The finite-element method was validated on a few sections composed of isotropic materials and was then applied to blade model sections made of composite materials. Good agreement was obtained between the calculated and experimental data.
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2011-06-15
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiation Measurements 41, 1080 (2006).]. The JAM interaction model agrees with the HARP experiment [HARP Collaboration, Astropart. Phys. 30, 124 (2008).] a little better than DPMJET-III [S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252.]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of the atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007).] [M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007).]. Some improvements in the calculation of the atmospheric neutrino flux are also reported.
Missing final states and the spectral endpoint in exciton model calculations
Kalbach, C.
2006-02-15
Recent studies of (n, xp) spectra at incident energies of 28 to 63 MeV have emphasized a previously noted trend that exciton model calculations do not extend to high enough emission energies in some (p, xn) and (n, xp) reactions. Improved agreement between experiment and calculation is achieved by including in the residual nucleus state density those configurations that can be populated but were not being counted because the Fermi level moves down during particle emission. This necessitates minor adjustments in other model parameters. The situation is generalized to reactions with complex particle channels, and significant effects are seen in the calculations for a few reactions on light targets, though the average level of agreement with experiment is unchanged from earlier work.
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials. PMID:23014503
Modelling lateral beam quality variations in pencil kernel based photon dose calculations
NASA Astrophysics Data System (ADS)
Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.
2006-08-01
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. Corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant. The vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
Electron-N2 scattering calculations with a parameter-free model polarization potential
Morrison, M.A.; Saha, B.C.; Gibson, T.L.
1987-10-15
We have extended our variationally determined nonadiabatic polarization potential (Gibson and Morrison, Phys. Rev. A 29, 2497 (1984)) to the e-N2 system and calculated elastic, total momentum transfer, and rotational excitation cross sections. This model potential, which requires no scaling and contains no adjustable parameters, is presented in tabular and analytic (fitted) form for possible use in future studies. We evaluated the static potential at the near-Hartree-Fock level of accuracy and included exchange effects exactly via the linear algebraic method of Collins and Schneider (Phys. Rev. A 24, 2387 (1981)). Diverse cross sections based on this model are in excellent agreement with existing experiment. We also compare various scattering quantities calculated with our model to prior theoretical results and to newly determined numbers using two other model potentials: a cutoff phenomenological form and the correlation-polarization potential of O'Connell and Lane (Phys. Rev. A 27, 1893 (1983)).
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field-aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecast tools.
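At its core, the Poynting-flux step combines the modeled electric field with the magnetic perturbation via a cross product, S = (E x dB)/mu0. A minimal sketch with illustrative high-latitude field values (not taken from the models described above):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # permeability of free space [H/m]

def dc_poynting_flux(E, dB):
    """DC electromagnetic energy flux S = (E x dB) / mu0 [W/m^2], where E is
    the modeled electric field [V/m] and dB the magnetic perturbation [T]
    associated with the field-aligned currents."""
    return np.cross(E, dB) / MU0

# Illustrative values: 20 mV/m eastward E, 200 nT northward dB
# (x = east, y = north, z = down in this toy coordinate choice).
S = dc_poynting_flux(np.array([0.02, 0.0, 0.0]),
                     np.array([0.0, 200e-9, 0.0]))
# S[2] is the downward component, a few mW/m^2 here.
```

Averaged over time, the downward component of S approximates the Joule heating deposited in the ionosphere, which is why no separate conductivity model is needed.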
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1985-01-01
Solar chromospheric models are described. The models included are based on the observed spectrum, and on the assumption of hydrostatic equilibrium. The calculations depend on realistic solutions of the radiative transfer and statistical equilibrium equations for optically thick lines and continua, and on including the effects of large numbers of lines throughout the spectrum. Although spectroheliograms show that the structure of the chromosphere is highly complex, one-dimensional models of particular features are reasonably successful in matching observed spectra. Such models were applied to the interpretation of chromospheric observations.
Martelli, Saulo; Kersh, Mariana E; Pandy, Marcus G
2015-10-15
The determination of femoral strain in post-menopausal women is important for studying bone fragility. Femoral strain can be calculated using a reference musculoskeletal model scaled to participant anatomies (referred to as scaled-generic) combined with finite-element models. However, anthropometric errors committed while scaling affect the calculation of femoral strains. We assessed the sensitivity of femoral strain calculations to scaled-generic anthropometric errors. We obtained CT images of the pelves and femora of 10 healthy post-menopausal women and collected gait data from each participant during six weight-bearing tasks. Scaled-generic musculoskeletal models were generated using skin-mounted marker distances. Image-based models were created by modifying the scaled-generic models using muscle and joint parameters obtained from the CT data. Scaled-generic and image-based muscle and hip joint forces were determined by optimisation. A finite-element model of each femur was generated from the CT images, and both image-based and scaled-generic principal strains were computed in 32 regions throughout the femur. The intra-participant regional RMS error increased from 380 με (R2=0.92, p<0.001) to 4064 με (R2=0.48, p<0.001), representing 5.2% and 55.6% of the tensile yield strain in bone, respectively. The peak strain difference increased from 2821 με in the proximal region to 34,166 με at the distal end of the femur. The inter-participant RMS error throughout the 32 femoral regions was 430 με (R2=0.95, p<0.001), representing 5.9% of bone tensile yield strain. We conclude that scaled-generic models can be used for determining cohort-based averages of femoral strain whereas image-based models are better suited for calculating participant-specific strains throughout the femur. PMID:26315919
NASA Technical Reports Server (NTRS)
Popinceanu, N. G.; Kremmer, I.
1974-01-01
A mechano-acoustic model is reported for calculating the acoustic energy radiated by a working gear. According to this model, a gear is an acoustic doublet formed of the two wheels. The wheel teeth generate cylindrical acoustic waves while the front surfaces of the teeth behave like vibrating pistons. Theoretical results are checked experimentally and good agreement is obtained with open gears. The experiments show that the air noise effect is negligible as compared with the structural noise transmitted to the gear box.
Stochastic estimation of level density in nuclear shell-model calculations
NASA Astrophysics Data System (ADS)
Shimizu, Noritaka; Utsuno, Yutaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu
2016-06-01
A method for stochastically estimating the nuclear level density based on nuclear shell-model calculations is introduced. In order to count the number of eigenvalues of the shell-model Hamiltonian matrix, we perform a contour integral of the matrix element of a resolvent. The shifted block Krylov subspace method enables its efficient computation. Utilizing this method, the contamination of center-of-mass motion is clearly removed.
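The counting step can be illustrated directly: the number of eigenvalues of H inside a contour Γ equals (1/2πi) ∮_Γ tr[(zI − H)⁻¹] dz. A dense toy version follows; the paper uses a shifted block Krylov solver for the resolvent rather than the explicit inversion shown here, and the matrix is a stand-in, not a shell-model Hamiltonian:

```python
import numpy as np

def count_eigenvalues(H, center, radius, n_points=128):
    """Count eigenvalues of H inside a circular contour by approximating
    the contour integral of the resolvent trace with the trapezoidal rule
    (exponentially accurate for this periodic, analytic integrand)."""
    n = H.shape[0]
    eye = np.eye(n)
    total = 0.0 + 0.0j
    for k in range(n_points):
        theta = 2.0 * np.pi * k / n_points
        z = center + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n_points)
        total += np.trace(np.linalg.inv(z * eye - H)) * dz
    return int(round((total / (2.0j * np.pi)).real))

H = np.diag([0.1, 0.5, 0.9, 1.5, 2.0])   # toy "Hamiltonian" with known spectrum
n_inside = count_eigenvalues(H, center=0.75, radius=0.5)  # 0.5 and 0.9 lie inside -> 2
```

Binning such counts over many energy windows yields the level density without ever diagonalizing the full matrix.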
Optical model calculations of 14.6A GeV silicon fragmentation cross sections
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Khan, Ferdous; Tripathi, Ram K.
1993-01-01
An optical potential abrasion-ablation collision model is used to calculate hadronic dissociation cross sections for a 14.6A GeV 28Si beam fragmenting in aluminum, tin, and lead targets. The frictional-spectator-interaction (FSI) contributions are computed with two different formalisms for the energy-dependent mean free path. These estimates are compared with experimental data and with estimates obtained from semi-empirical fragmentation models commonly used in galactic cosmic ray transport studies.
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Maiti, Amitesh
2003-01-01
This paper presents viewgraphs on the modeling of the electromechanical response of carbon nanotubes, utilizing molecular dynamics and transport calculations. The topics include: 1) Simulations of the experiment; 2) Effect of diameter, length and temperature; and 3) Study of sp3 coordination-"The Table experiment".
The 'Little Ice Age' - Northern Hemisphere average observations and model calculations
NASA Technical Reports Server (NTRS)
Robock, A.
1979-01-01
Numerical energy balance climate model calculations of the average surface temperature of the Northern Hemisphere for the past 400 years are compared with a new reconstruction of the past climate. Forcing with volcanic dust produces the best simulation, whereas expressing the solar constant as a function of the envelope of the sunspot number gives very poor results.
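An energy-balance model of this kind boils down to a single ODE for the hemispheric mean surface temperature. A zero-dimensional sketch with volcanic dust represented as an albedo perturbation; all parameter values are illustrative, not those of the study:

```python
S0 = 1361.0      # solar constant [W/m^2]
SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
ALPHA = 0.30     # planetary albedo
EPS = 0.61       # effective emissivity (crude greenhouse factor)
C = 4.0e8        # column heat capacity [J m^-2 K^-1]

def equilibrium_temperature(albedo=ALPHA):
    """Solve the balance S0*(1-alpha)/4 = eps*sigma*T^4 for T."""
    return (S0 * (1.0 - albedo) / (4.0 * EPS * SIGMA)) ** 0.25

def step(T, dt, albedo=ALPHA):
    """One forward-Euler step of C dT/dt = S0*(1-alpha)/4 - eps*sigma*T^4."""
    return T + dt * (S0 * (1.0 - albedo) / 4.0 - EPS * SIGMA * T**4) / C

T = equilibrium_temperature()   # about 288 K with these parameters
T_volcanic = T
for _ in range(120):            # ten model "years" of monthly steps
    T_volcanic = step(T_volcanic, 30 * 86400, albedo=ALPHA + 0.01)
# T_volcanic < T: sustained dust loading cools the hemisphere.
```

Forcing the albedo with a reconstructed volcanic dust record instead of a constant offset gives the kind of hindcast compared against the 400-year reconstruction above.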
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
Calculation of fuel economy values for a model type. 600.207-86 Section 600.207-86 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later...
Inductance Calculation and New Modeling of a Synchronous Reluctance Motor Using Flux Linkages
NASA Astrophysics Data System (ADS)
Nashiki, Masayuki; Inoue, Yoshimitu; Kawai, Youichi; Okuma, Shigeru
A new model of a synchronous reluctance motor (SynRM) with non-linear magnetic characteristics is proposed, and a control method for the SynRM using the new model is developed. The new model is based on an inductance data table or a flux linkage data table calculated from the flux linkages of the SynRM at each current point (id, iq). A detailed calculation method for the inductances is described. The torque TA calculated with the inductance data table is compared with the torque Tfem calculated by FEM, and the difference is less than 5% at the rated torque, confirming the accuracy of the new model. The same method is applicable to an interior permanent magnet synchronous motor (IPMSM). High-performance motor control is realized: the exact current commands (id, iq), the exact voltage feed-forward commands (FFd, FFq) and the adaptive current loop gains (Gd, Gq) are obtained using the FEM data of the motor.
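The table-based model computes torque from flux linkages interpolated at the operating point. A minimal sketch with hypothetical, unsaturated one-dimensional tables; a real SynRM table is two-dimensional over (id, iq) and comes from FEM data, as in the paper:

```python
import numpy as np

# Hypothetical flux-linkage tables. Linear (unsaturated) curves are assumed
# here, so each flux linkage depends on one current only; values illustrative.
ID_GRID = np.linspace(0.0, 10.0, 11)   # d-axis current samples [A]
IQ_GRID = np.linspace(0.0, 10.0, 11)   # q-axis current samples [A]
LD, LQ, POLE_PAIRS = 0.012, 0.004, 2   # illustrative inductances [H], pole pairs
PSI_D_TAB = LD * ID_GRID               # psi_d(id) [Wb]
PSI_Q_TAB = LQ * IQ_GRID               # psi_q(iq) [Wb]

def reluctance_torque(id_a, iq_a):
    """T = (3/2) * p * (psi_d * iq - psi_q * id), with the flux linkages
    interpolated from the data table at the operating point (id, iq)."""
    psi_d = np.interp(id_a, ID_GRID, PSI_D_TAB)
    psi_q = np.interp(iq_a, IQ_GRID, PSI_Q_TAB)
    return 1.5 * POLE_PAIRS * (psi_d * iq_a - psi_q * id_a)

torque = reluctance_torque(5.0, 5.0)  # ≈ 0.6 N·m with these illustrative values
```

With a saturating two-dimensional table the same lookup also yields the incremental inductances needed for the voltage feed-forward terms and adaptive current loop gains.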
Nuclear shell model calculations of the effective interaction and other effective operators
NASA Astrophysics Data System (ADS)
Thoresen, Michael Joseph
1997-12-01
Recent breakthroughs in effective interaction and effective operator techniques allow us to take a new look at this field that has seen limited progress in the past twenty years. A comparison of the old and new techniques will shed some new light on the use of effective interactions and effective operators in shell model calculations of light nuclei. Three different methods of calculating the effective interaction and effective operators are described and compared. A large model-space no-core shell-model calculation for 6Li is used as the basis for comparison. In the no-core calculation all nucleons are active in a model space involving all configurations with energies up to 8ℏΩ. The second method is a perturbation expansion for the effective interaction and effective operators, using an inert 4He core and two valence particles. In particular, the electric quadrupole and magnetic dipole operators are studied to determine the effective charges to be used in connection with one-body operators in this shell-model space. The third method is a model-space truncation scheme, which maps operators in a large model space into operators in smaller, truncated model spaces. The effect of going to larger excitation spaces will be examined as well as the convergence trends regarding increases in the excitation space. The results from these three approaches are compared in order to gain new insight into the nature of effective interactions and operators in truncated model spaces. We find that by going to energies of 8ℏΩ we can accurately reproduce the experimental values for the binding energy, excitation spectrum, electric quadrupole moment and magnetic dipole moment of 6Li and that there is a definite model-space dependence for these operators. To obtain results similar to the 8ℏΩ ones in a truncated 2ℏΩ model space we use effective operators and effective charges. Effective charges of approximately 1.1e for the effective proton charge and 0
A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions
Watson, E.E.; Stabin, M.G.
1986-01-01
Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs.
Goorley, J T; Kiger, W S; Zamenhof, R G
2002-02-01
As clinical trials of Neutron Capture Therapy (NCT) are initiated in the U.S. and other countries, new treatment planning codes are being developed to calculate detailed dose distributions in patient-specific models. The thorough evaluation and comparison of treatment planning codes is a critical step toward the eventual standardization of dosimetry, which, in turn, is an essential element for the rational comparison of clinical results from different institutions. In this paper we report development of a reference suite of computational test problems for NCT dosimetry and discuss common issues encountered in these calculations to facilitate quantitative evaluations and comparisons of NCT treatment planning codes. Specifically, detailed depth-kerma rate curves were calculated using the Monte Carlo radiation transport code MCNP4B for four different representations of the modified Snyder head phantom, an analytic, multishell, ellipsoidal model, and voxel representations of this model with cubic voxel sizes of 16, 8, and 4 mm. Monoenergetic and monodirectional beams of 0.0253 eV, 1, 2, 10, 100, and 1000 keV neutrons, and 0.2, 0.5, 1, 2, 5, and 10 MeV photons were individually simulated to calculate kerma rates to a statistical uncertainty of <1% (1 std. dev.) in the center of the head model. In addition, a "generic" epithermal neutron beam with a broad neutron spectrum, similar to epithermal beams currently used or proposed for NCT clinical trials, was computed for all models. The thermal neutron, fast neutron, and photon kerma rates calculated with the 4 and 8 mm voxel models were within 2% and 4%, respectively, of those calculated for the analytical model. The 16 mm voxel model produced unacceptably large discrepancies for all dose components. The effects from different kerma data sets and tissue compositions were evaluated. Updating the kerma data from ICRU 46 to ICRU 63 data produced less than 2% difference in kerma rate profiles. The depth-dose profile data
The impact of MM5 and WRF meteorology over complex terrain on CHIMERE model calculations
NASA Astrophysics Data System (ADS)
de Meij, A.; Gzella, A.; Thunis, P.; Cuvelier, C.; Bessagnet, B.; Vinuesa, J. F.; Menut, L.
2009-01-01
The objective of this study is to evaluate the impact of meteorological input data on calculated gas and aerosol concentrations. We use two different meteorological models (MM5 and WRF) together with the chemistry transport model CHIMERE. We focus on the Po valley area (Italy) for January and June 2005. Firstly, we evaluate the meteorological parameters against observations. The analysis shows that the performance of the two models is similar; however, some small differences are still noticeable. Secondly, we analyze the impact of using MM5 and WRF on calculated PM10 and O3 concentrations. In general, CHIMERE/MM5 and CHIMERE/WRF underestimate the PM10 concentrations for January. The difference in PM10 concentrations for January between CHIMERE/MM5 and CHIMERE/WRF is around a factor of 1.6 (PM10 higher for CHIMERE/MM5). This difference, and the larger underestimation of PM10 concentrations by CHIMERE/WRF, is related to the differences in heat fluxes and the resulting PBL heights calculated by the two models. In general, the PBL height from WRF meteorology is a factor of 2.8 higher at noon in January than that calculated by MM5. This study also shows that the difference in microphysics scheme has an impact on the profile of cloud liquid water (CLW) calculated by the meteorological driver, and therefore on the production of SO4 aerosol. A sensitivity analysis shows that when the Noah Land Surface Model (LSM) is replaced by the 5-layer soil temperature model, the calculated monthly mean PM10 concentrations increase by 30%, due to the change in the heat fluxes and the resulting PBL heights. For June, PM10 concentrations calculated by CHIMERE/MM5 and CHIMERE/WRF are similar and agree with the observations. Calculated O3 values for June are in general overestimated by a factor of 1.3 by both CHIMERE/MM5 and CHIMERE/WRF. The reason for this is that daytime NO2 concentrations are higher than the observations and nighttime NO concentrations (titration effect) are underestimated.
The role of convective model choice in calculating the climate impact of doubling CO2
NASA Technical Reports Server (NTRS)
Lindzen, R. S.; Hou, A. Y.; Farrell, B. F.
1982-01-01
The role of the parameterization of vertical convection in calculating the climate impact of doubling CO2 is assessed using both one-dimensional radiative-convective vertical models and the latitude-dependent Hadley-baroclinic model of Lindzen and Farrell (1980). Both the conventional 6.5 K/km and the moist-adiabat adjustments are compared with a physically based, cumulus-type parameterization. The model with parameterized cumulus convection has much less sensitivity than the 6.5 K/km adjustment model at low latitudes, a result that can to some extent be imitated by the moist-adiabat adjustment model. However, when averaged over the globe, the use of the cumulus-type parameterization in a climate model reduces sensitivity by only approximately 34% relative to models using 6.5 K/km convective adjustment. Interestingly, the use of the cumulus-type parameterization appears to eliminate the possibility of a runaway greenhouse.
A global average model of atmospheric aerosols for radiative transfer calculations
NASA Technical Reports Server (NTRS)
Toon, O. B.; Pollack, J. B.
1976-01-01
A global average model is proposed for the size distribution, chemical composition, and optical thickness of stratospheric and tropospheric aerosols. This aerosol model is designed to specify the input parameters to global average radiative transfer calculations which assume the atmosphere is horizontally homogeneous. The model subdivides the atmosphere at multiples of 3 km, where the surface layer extends from the ground to 3 km, the upper troposphere from 3 to 12 km, and the stratosphere from 12 to 45 km. A list of assumptions made in construction of the model is presented and discussed along with major model uncertainties. The stratospheric aerosol is modeled as a liquid mixture of 75% H2SO4 and 25% H2O, while the tropospheric aerosol consists of 60% sulfate and 40% soil particles above 3 km and of 50% sulfate, 35% soil particles, and 15% sea salt below 3 km. Implications and consistency of the model are discussed.
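The layer boundaries and compositions summarized above can be written down as a small data structure. The sketch below is our own illustrative encoding of the numbers quoted in the abstract; the dictionary layout and the helper function are assumptions, not code from the paper.

```python
# Illustrative encoding of the global average aerosol model's vertical layers
# and mass fractions, taken from the abstract; structure is our own choice.
AEROSOL_LAYERS = [
    {"name": "surface layer", "z_km": (0, 3),
     "composition": {"sulfate": 0.50, "soil": 0.35, "sea_salt": 0.15}},
    {"name": "upper troposphere", "z_km": (3, 12),
     "composition": {"sulfate": 0.60, "soil": 0.40}},
    {"name": "stratosphere", "z_km": (12, 45),
     "composition": {"H2SO4": 0.75, "H2O": 0.25}},
]

def layer_for_altitude(z_km):
    """Return the layer containing altitude z_km (horizontally homogeneous model)."""
    for layer in AEROSOL_LAYERS:
        lo, hi = layer["z_km"]
        if lo <= z_km < hi:
            return layer
    raise ValueError("altitude outside modeled range")
```

A radiative transfer driver could loop over these layers and look up the composition it needs per altitude bin.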
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Blunt, Martin J.; LaForce, Tara C.
2014-04-01
Chemical equilibrium calculations are essential for many environmental problems. They are also a fundamental tool for chemical kinetics and reactive transport modelling, since these applications may require hundreds to billions of equilibrium calculations in a single simulation. Therefore, an equilibrium method for such critical applications must be very efficient, robust and accurate. In this work we demonstrate the potential effectiveness of a novel Gibbs energy minimisation algorithm for reactive transport simulations. The algorithm includes strategies to converge from poor initial guesses; capabilities to specify non-linear equilibrium constraints such as the pH of an aqueous solution and the activity or fugacity of a species; a rigorous phase stability test to determine the unstable phases; and a strategy to boost the convergence speed of the calculations to quadratic rates, requiring only a few iterations to converge. We use this equilibrium method to solve geochemical problems relevant to carbon storage in saline aquifers, where aqueous, gaseous and mineral phases are present. The problems are formulated to mimic those found in kinetics and transport simulations, where a sequence of equilibrium calculations is performed, each one using the previous solution as the initial guess. The efficiency and convergence rates of the calculations are presented; on average only 1-2 iterations are required. These results indicate that critical applications such as chemical kinetics and reactive transport modelling can potentially benefit from using this multiphase equilibrium algorithm.
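The warm-starting idea, re-solving a slowly changing equilibrium problem from the previous solution so that a quadratically convergent method needs only a few iterations, can be illustrated with a toy Newton solve. The equation below is invented for illustration and is not the paper's Gibbs energy minimiser.

```python
import math

def newton(f, fprime, x0, tol=1e-8, max_iter=50):
    """Bare-bones Newton iteration; returns the root and the iteration count."""
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x -= f(x) / fprime(x)
        n += 1
    return x, n

# Toy "equilibrium condition" exp(x) = T, re-solved for a drifting parameter T
# (as along a transport time step), warm-starting from the previous solution.
x, iters = 0.0, []
for T in [2.0, 2.1, 2.2, 2.3]:
    x, n = newton(lambda v, T=T: math.exp(v) - T,
                  lambda v, T=T: math.exp(v), x)
    iters.append(n)
```

Only the first (cold-started) solve needs several iterations; each warm-started solve converges in 2-3, mirroring the 1-2 iteration averages reported in the abstract.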
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting usage of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique based on principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further
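The core PCA step, exploiting redundancy in binned optical properties so that only a few principal components need expensive treatment, can be sketched with an SVD on synthetic data. All numbers below are made up; this is not UPCART code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "binned optical property" matrix: 200 spectral points described
# by 3 inherent optical properties that are nearly rank-2 (redundant), standing
# in for the redundancy the PCA method exploits.
basis = rng.normal(size=(2, 3))
states = rng.normal(size=(200, 2)) @ basis + 1e-3 * rng.normal(size=(200, 3))

# Empirical orthogonal functions via SVD of the mean-centred states.
mean = states.mean(axis=0)
u, sv, vt = np.linalg.svd(states - mean, full_matrices=False)
n_pc = 2  # costly multiple-scattering RT would be run only for these few PCs
approx = mean + (u[:, :n_pc] * sv[:n_pc]) @ vt[:n_pc]
rel_err = np.abs(approx - states).max() / np.abs(states).max()
```

Because the optical states are nearly low-rank, two components reconstruct the whole bin to well under 1% error, which is why running full RT only for the leading components loses so little accuracy.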
Turner, D.R.; Pabalan, R.T.
1999-01-01
Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log-normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.
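Sampling log-normal sorption parameters with an imposed Np(V)-U(VI) correlation, as a PA routine would, can be sketched as follows. The log10 means, spreads, and correlation are illustrative stand-ins for the site-specific values, chosen only so that each range spans several decades as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical log-normal PDFs for Np(V) and U(VI) sorption parameters (Kd).
mu = np.array([1.0, 2.0])        # mean of log10(Kd), illustrative
sigma = np.array([1.2, 2.2])     # std of log10(Kd), illustrative
rho = 0.8                        # imposed Np(V)-U(VI) correlation
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])
log_kd = rng.multivariate_normal(mu, cov, size=10000)
kd = 10.0 ** log_kd              # correlated sorption parameters for PA sampling
r = np.corrcoef(log_kd.T)[0, 1]  # realized correlation between the nuclides
```

Sampling in log space guarantees positive Kd values, and the off-diagonal covariance makes the value drawn for one radionuclide condition the other, as the abstract proposes.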
Calculation of signal-to-noise ratio (SNR) of infrared detection system based on MODTRAN model
NASA Astrophysics Data System (ADS)
Lu, Xue; Li, Chuang; Fan, Xuewu
2013-09-01
Signal-to-noise ratio (SNR) is an important parameter of an infrared detection system. The SNR of an infrared detection system is determined by the target infrared radiation, atmospheric transmittance, background infrared radiation and the detector noise. The infrared radiation flux in the atmosphere depends on the selective absorption by gas molecules, the atmospheric environment, and the transmission distance of the radiation, so the atmospheric transmittance and infrared radiance flux are intricate parameters. A radiometric model for the calculation of the SNR of an infrared detection system is developed and used to evaluate the effects of various parameters on the SNR. An atmospheric modeling tool, MODTRAN, is used to model wavelength-dependent atmospheric transmission and sky background radiance. A new expression for the SNR is then deduced. Instead of using constants such as the average atmospheric transmission and average wavelength, as in the traditional method, it uses discrete values for atmospheric transmission and sky background radiance, and the integrals in the general expression for the SNR are converted to summations. The accuracy of the SNR obtained from the new method is thereby improved. Adopting the atmospheric conditions of the 1976 US standard atmosphere, no clouds, urban aerosols, and fall-winter aerosol profiles, the typical spectral characteristics of sky background radiance and transmittance are computed with MODTRAN. The operating ranges corresponding to the threshold SNR are then calculated with the new method. The calculated operating ranges are closer to the measured operating range than those calculated with the traditional method.
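The difference between the traditional average-transmittance approach and the discrete summation described above can be shown with a toy spectral integral. The transmittance and radiance curves below are made-up shapes, not MODTRAN output, and the units are arbitrary.

```python
import numpy as np

# Illustrative spectral grid over a 3-5 micron band.
wl = np.linspace(3.0, 5.0, 201)        # wavelength, micron
dwl = wl[1] - wl[0]
s = np.sin(2.0 * np.pi * wl) ** 2      # shared spectral shape (invented)
tau = 0.6 + 0.3 * s                    # atmospheric transmittance
radiance = 1.0 + 0.5 * s               # target spectral radiance (a.u.)

# Traditional method: one band-averaged transmittance times integrated radiance.
signal_avg = tau.mean() * radiance.sum() * dwl
# Discrete method of the abstract: weight every spectral sample, then sum.
signal_sum = (tau * radiance).sum() * dwl
```

Because transmittance and radiance vary together across the band, the per-sample summation captures their spectral correlation, which the single-average method discards; that correlation term is exactly what the averaged signal misses.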
Bimodality emerges from transport model calculations of heavy ion collisions at intermediate energy
NASA Astrophysics Data System (ADS)
Mallik, S.; Das Gupta, S.; Chaudhuri, G.
2016-04-01
This work is a continuation of our effort [S. Mallik, S. Das Gupta, and G. Chaudhuri, Phys. Rev. C 91, 034616 (2015)], 10.1103/PhysRevC.91.034616 to examine if signatures of a phase transition can be extracted from transport model calculations of heavy ion collisions at intermediate energy. A signature of a first-order phase transition is the appearance of a bimodal distribution in Pm(k) in finite systems. Here Pm(k) is the probability that the maximum of the multiplicity distribution occurs at mass number k. Using a well-known model for event generation [Boltzmann-Uehling-Uhlenbeck (BUU) plus fluctuation], we study two cases of central collision: mass 40 on mass 40 and mass 120 on mass 120. Bimodality is seen in both cases. The results are quite similar to those obtained in statistical model calculations. An intriguing feature is seen: at the energy where bimodality occurs, other phase-transition-like signatures appear, namely breaks in certain first-order derivatives. We then examine if such breaks appear in standard BUU calculations without fluctuations. They do. The implication is interesting: if a first-order phase transition occurs, it may be possible to recognize it from ordinary BUU calculations. Probably the reason this has not been seen already is that this aspect was not investigated before.
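Extracting a distribution like Pm(k) from simulated events and checking for two separated humps can be sketched with a toy event generator. The generator below is purely illustrative, mimicking liquid-like events (heavy residue survives) and gas-like events (full fragmentation); it is not the BUU model.

```python
import random
from collections import Counter

random.seed(1)

def toy_event_heaviest_mass():
    """Mass number at which a toy event's distribution peaks (invented model)."""
    if random.random() < 0.5:
        return random.randint(30, 40)   # liquid-like: heavy residue survives
    return random.randint(5, 15)        # gas-like: only light fragments

counts = Counter(toy_event_heaviest_mass() for _ in range(5000))
total = sum(counts.values())
pm = {k: n / total for k, n in counts.items()}   # stand-in for Pm(k)
```

With two event classes present, pm has probability in two disjoint mass ranges and none in between, which is the bimodal signature the abstract looks for.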
Poston, J.W.
1989-01-01
This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations. The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966, about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn, one-, five-, ten-, and fifteen-year old, were derived from the adult phantom. The "pediatric" models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the other models, nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing "individual" pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, the phantoms had limited use and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.
A simple model for calculating the kinetics of protein folding from three-dimensional structures.
Muñoz, V; Eaton, W A
1999-09-28
An elementary statistical mechanical model was used to calculate the folding rates for 22 proteins from their known three-dimensional structures. In this model, residues come into contact only after all of the intervening chain is in the native conformation. An additional simplifying assumption is that native structure grows from localized regions that then fuse to form the complete native molecule. The free energy function for this model contains just two contributions-conformational entropy of the backbone and the energy of the inter-residue contacts. The matrix of inter-residue interactions is obtained from the atomic coordinates of the three-dimensional structure. For the 18 proteins that exhibit two-state equilibrium and kinetic behavior, profiles of the free energy versus the number of native peptide bonds show two deep minima, corresponding to the native and denatured states. For four proteins known to exhibit intermediates in folding, the free energy profiles show additional deep minima. The calculated rates of folding the two-state proteins, obtained by solving a diffusion equation for motion on the free energy profiles, reproduce the experimentally determined values surprisingly well. The success of these calculations suggests that folding speed is largely determined by the distribution and strength of contacts in the native structure. We also calculated the effect of mutations on the folding kinetics of chymotrypsin inhibitor 2, the most intensively studied two-state protein, with some success. PMID:10500173
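The central objects in the abstract above, a free-energy profile over the number of native peptide bonds with two deep minima, a barrier between them, and a rate obtained from motion on that profile, can be illustrated with a toy double-well profile. The functional form and units below are assumptions for illustration, not the paper's model.

```python
import math

# Illustrative double-well free-energy profile F(n) versus the number of
# native peptide bonds n: minima at the denatured (n=0) and native (n=N)
# ends, one barrier between them (values in units of kT, shape invented).
N = 100
def F(n):
    x = n / N
    return 32.0 * x**2 * (1.0 - x)**2   # minima at n=0 and n=N, barrier at n=N/2

profile = [F(n) for n in range(N + 1)]
barrier = max(profile) - profile[0]      # activation free energy, units of kT
rate_rel = math.exp(-barrier)            # Kramers-like relative folding rate
```

A full treatment would solve a diffusion equation on the profile as the paper does; the exponential of the barrier height already captures why proteins with shallower barriers fold faster.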
NASA Astrophysics Data System (ADS)
Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal Management Technology Team
A facile property calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are one of the promising systems that can use heat energy efficiently, because they can generate cooling energy using relatively low temperature heat energy. The properties of adsorption chillers are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer rate and the adsorption/desorption rate. In our model, the dependence of adsorption chiller properties on heat source temperatures was represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of heat source temperatures, which represent the differences between equilibrium cycles and real cycles that stem from kinetic adsorption processes. We found that the present approximated equilibrium model could calculate the properties of adsorption chillers (driving energies, cooling energies, COP, etc.) under various driving conditions quickly and accurately, within average errors of 6% compared to experimental data.
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program, and various voxel model file formats are supported. Applications include calculation of the counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
Model potential calculation of the thermal donor energy spectrum in silicon
NASA Astrophysics Data System (ADS)
Chen, C. S.; Schroder, D. K.
1988-06-01
The two-parameter model potential originally proposed by Ning and Sah [Phys. Rev. B 4, 3468 (1971)] for calculating the ground-state energies of group V and group VI impurities in silicon is extended to the variational calculation of the thermal donor ionization energies. In the multivalley effective mass approximation, the theoretical results are in excellent agreement with the reported experimental data. This provides additional evidence for the assumption that thermal donors consist of five to thirteen oxygen atoms, as first proposed by Ourmazd, Schröter, and Bourret [J. Appl. Phys. 56, 1670 (1984)].
Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn
Yakhelef, A.; Bouldjedri, A.
2012-06-27
New shell model calculations for the even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn, have been performed. The calculations were carried out using the Windows version of NuShell-MSU. The two-body matrix elements (TBMEs) of the effective interaction between valence nucleons are obtained from the renormalized two-body effective interaction based on the G-matrix derived from the CD-Bonn nucleon-nucleon potential. The single-particle energies of the proton and neutron valence-space orbitals are determined from the available spectra of the lightest odd isotopes of Sb and Sn, respectively.
Development of a New Shielding Model for JB-Line Dose Rate Calculations
Buckner, M.R.
2001-08-09
This report describes the shielding model development for the JB-Line Upgrade project. The product of this effort is a simple-to-use but accurate method of estimating the personnel dose expected for various operating conditions on the line. The current techniques for shielding calculations use transport codes such as ANISN which, while accurate for geometries that can be closely approximated as one-dimensional slabs, cylinders or spheres, fall short in calculating configurations in which two- or three-dimensional effects (e.g., streaming) play a role in the dose received by workers.
Bonn potential and shell-model calculations for N=126 isotones
Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.; Kuo, T. T. S.
1999-12-01
We have performed shell-model calculations for the N=126 isotones {sup 210}Po, {sup 211}At, and {sup 212}Rn using a realistic effective interaction derived from the Bonn-A nucleon-nucleon potential by means of a G-matrix folded-diagram method. The calculated binding energies, energy spectra, and electromagnetic properties show remarkably good agreement with the experimental data. The results of this paper complement those of our previous study on neutron hole Pb isotopes, confirming that realistic effective interactions are now able to reproduce with quantitative accuracy the spectroscopic properties of complex nuclei. (c) 1999 The American Physical Society.
Molecular Modeling for Calculation of Mechanical Properties of Epoxies with Moisture Ingress
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Frankland, Sarah J.; Hinkley, J. A.; Gates, T. S.
2009-01-01
Atomistic models of epoxy structures were built in order to assess the effect of crosslink degree, moisture content and temperature on the calculated properties of a typical representative generic epoxy. Each atomistic model had approximately 7000 atoms and was contained within a periodic boundary condition cell with edge lengths of about 4 nm. Four atomistic models were built spanning a range of crosslink degree and moisture content. Each of these structures was simulated at three temperatures: 300 K, 350 K, and 400 K. Elastic constants were calculated for these structures by monitoring the stress tensor as a function of strain deformations applied to the periodic boundary conditions. The mechanical properties showed reasonably consistent behavior with respect to these parameters. The moduli decreased with decreasing crosslink degree and with increasing temperature. The moduli also generally decreased with increasing moisture content, although this effect was not as consistent as that seen for temperature and crosslink degree.
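Extracting an elastic constant from stress monitored as a function of applied strain, as described above, amounts to fitting the slope of a stress-strain curve. The data below are a toy stand-in for the MD stress-tensor output; the modulus and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stress-strain data mimicking small-strain MD deformations of a cell.
strain = np.linspace(0.0, 0.01, 11)          # applied strain deformations
E_true = 3.0e9                               # Pa, epoxy-like Young's modulus (assumed)
stress = E_true * strain + 5.0e5 * rng.normal(size=strain.size)  # Pa, noisy

# An elastic constant is the slope of stress versus strain.
E_fit = np.polyfit(strain, stress, 1)[0]
```

Keeping the strains small (here up to 1%) keeps the response in the linear regime, so a first-order polynomial fit recovers the modulus despite the thermal noise in the stress tensor.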
A hybrid model to calculate the forward delay time of heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Kumar, T.; Cahay, M.; Shi, S.; Roenker, K.; Stanchina, W. E.
1995-07-01
The forward delay time (τF) of heterojunction bipolar transistors (HBTs) is calculated using a hybrid model of carrier transport. A rigorous quantum-mechanical treatment of electron tunneling and thermionic emission across the spike at the emitter-base junction is used to determine the energy of the electron flux injected into the base region. This flux is used as an initial distribution in a regional Monte Carlo simulator to model electron transport from the base to the sub-collector. In this paper, we estimate the base transit time using the impulse response technique and the collector delay time using the expression of Laux and Lai (IEEE Electron Device Letters, 11, 174, 1990). Improvements to the hybrid model proposed here to reduce some of the discrepancies between measured and calculated values of fτ for some InAlAs/InGaAs and InP/InGaAs structures reported in the literature are discussed.
Calculating kaon fragmentation functions from the Nambu-Jona-Lasinio jet model
Matevosyan, Hrayr H.; Thomas, Anthony W.; Bentz, Wolfgang
2011-04-01
The Nambu-Jona-Lasinio (NJL)-jet model provides a sound framework for calculating fragmentation functions in an effective chiral quark theory, where the momentum and isospin sum rules are satisfied without the introduction of ad hoc parameters. Earlier studies of the pion fragmentation functions using the NJL model within this framework showed qualitative agreement with the empirical parametrizations. Here we extend the NJL-jet model by including the strange quark. The corrections to the pion fragmentation functions and the corresponding kaon fragmentation functions are calculated using the elementary quark to quark-meson fragmentation functions from the NJL model. The results for the kaon fragmentation functions exhibit qualitative agreement with the empirical parametrizations, while the unfavored strange quark fragmentation to pions is shown to be of the same order of magnitude as that of the unfavored light quarks. The results of these studies are expected to provide important guidance for the analysis of a large variety of semi-inclusive data.
A novel model for calculating the inter-electrode capacitance of wedge-strip anode.
Zhao, Airong; Ni, Qiliang
2016-04-01
The wedge strip anode (WSA) detector has been widely used in particle detection. In this work, a novel model for calculating the inter-electrode capacitance of a WSA was proposed on the basis of conformal transformations and the partial capacitance method. Based on the model, the inter-electrode capacitance within a period was calculated in addition to the total inter-electrode capacitance, and the effects of the WSA design parameters on the inter-electrode capacitance were systematically analyzed. It is found that the inter-electrode capacitance monotonically increases with the insulated gap and the substrate permittivity, but not with the period. To validate the model, two round WSAs were manufactured using picosecond laser micro-machining technology. The differences between the theoretical and experimental results are 9%-15%, better than the agreement obtained with ANSYS software. PMID:27131648
PNS calculations for 3-D hypersonic corner flow with two turbulence models
NASA Technical Reports Server (NTRS)
Smith, Gregory E.; Liou, May-Fun; Benson, Thomas J.
1988-01-01
A three-dimensional parabolized Navier-Stokes code has been used as a testbed to investigate two turbulence models, the McDonald-Camarata and Bushnell-Beckwith models, in the hypersonic regime. The Bushnell-Beckwith form factor correction to the McDonald-Camarata mixing length model has been extended to three-dimensional flow by use of an inverse averaging of the resultant length scale contributions from each wall. Two-dimensional calculations are compared with experiment for Mach 18 helium flow over a 4-deg wedge. Corner flow calculations have been performed at Mach 11.8 for a Reynolds number of 0.67 x 10(exp 6), based on the duct half-width, and a freestream stagnation temperature of 1750 deg Rankine.
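The "inverse averaging" of wall contributions mentioned above can be read as a harmonic combination of the per-wall length scales, which makes the nearer wall dominate in a corner. The abstract does not give the exact formula, so the one-liner below is an assumption for illustration only.

```python
# Assumed harmonic ("inverse") averaging of mixing-length contributions from
# two corner walls; not the paper's verified formula, just one plausible reading.
def combined_length_scale(l1, l2):
    """Combine mixing-length contributions l1, l2 from two corner walls."""
    return 1.0 / (1.0 / l1 + 1.0 / l2)
```

With equal contributions the result is half of either; with one wall far away its contribution drops out, leaving the near-wall scale, which is the behavior a corner-flow mixing length needs.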
Direct comparison between two {gamma}-alumina structural models by DFT calculations
Ferreira, Ary R.; Martins, Mateus J.F.; Konstantinova, Elena; Capaz, Rodrigo B.; Souza, Wladmir F.; Chiaro, Sandra Shirley X.; Leitao, Alexandre A.
2011-05-15
We selected two important γ-alumina models proposed in the literature, a spinel-like one and a nonspinel one, to perform a theoretical comparison. Using ab initio calculations, the models were compared regarding their thermodynamic stability, lattice vibrational modes, and bulk electronic properties. The spinel-like model is thermodynamically more stable by 4.55 kcal/mol per formula unit on average from 0 to 1000 K. The main difference between the models is in their simulated infrared spectra, with the spinel-like model showing the best agreement with experimental data. Analysis of the electronic density of states and charge transfer between atoms reveals the similarity of the electronic structures of the two models, despite some minor differences. Graphical abstract: the two γ-alumina bulk models selected in this work for a comparison focusing on the electronic structure and thermodynamics of the systems: (a) the nonspinel model and (b) the spinel-like model. Highlights: There is still a debate about the γ-alumina structure in the literature. Models of surfaces are constructed from different bulk structural models. Two models commonly used in the literature were selected and compared. One model reproduces the experimental data better. Both present a similar electronic structure.
Wang, Junmei; Hou, Tingjun
2012-05-25
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was always set to 298.15 K throughout. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
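The WSAS idea of summing weighted per-atom surface areas can be sketched in a few lines. This is a minimal illustration of the functional form only: the atom types, weights, areas, and the balance parameter k below are invented placeholders, not the fitted values from the paper.

```python
# Sketch of the WSAS entropy estimate: conformational entropy as a weighted
# sum of each atom's solvent-accessible (SAS) and buried (BSAS) surface areas.
# Weights and k are illustrative placeholders, NOT fitted parameters.

def wsas_entropy(atoms, weights, k):
    """TS estimate from per-atom surface areas.

    atoms   : list of (atom_type, sas, bsas) tuples, areas in A^2
    weights : dict mapping atom_type -> weight
    k       : global parameter balancing SAS vs BSAS contributions
    """
    ts = 0.0
    for atom_type, sas, bsas in atoms:
        w = weights[atom_type]
        ts += w * (sas + k * bsas)   # buried area contributes, scaled by k
    return ts

# Toy molecule: two carbons and an oxygen (hypothetical areas and weights)
atoms = [("C", 20.0, 5.0), ("C", 0.0, 25.0), ("O", 15.0, 10.0)]
weights = {"C": 0.010, "O": 0.015}
print(wsas_entropy(atoms, weights, k=0.5))
```

Because the model is a simple sum over atoms, it scales linearly with system size, which is what makes it so much cheaper than normal-mode analysis.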
Rock Physics Model and Brittleness Index Calculation for Shale Gas Study in Jambi Basin, Indonesia
NASA Astrophysics Data System (ADS)
Fatkhan, Fatkhan; Fauzi, Inusa P.; Sule, Rachmat; Usman, Alfian
2014-05-01
Research on shale gas is often conducted in the oil and gas industry, since the demand for energy has increased recently. Indonesia has newly become interested in researching, exploring and even producing shale gas. To assess the prospects of a shale gas play in an area, one needs to examine several characteristics. This paper describes a rock physics model that is used to investigate a prospective zone of a shale gas play by examining the percentage of TOC and the brittleness index. The rock physics modeling proceeds as follows: first, the Hashin-Shtrikman bounds are employed to estimate the mineral percentages; then inclusions are modeled by the Kuster-Toksoz method; and finally kerogen is treated with Ciz and Shapiro's model. In addition, we compared inclusions saturated by kerogen and water with inclusions filled by kerogen only. Young's modulus is used to estimate the brittleness index. Then, in order to map and delineate brittle areas, a simultaneous seismic inversion using pre-stack data is employed to generate volumes of P-wave velocity, S-wave velocity and density, from which Young's modulus is calculated. Since the study area has very thick shale, it is divided into four zones based on shear and bulk modulus values. The rock physics model shows that two zones have quartz-rich mineralogy with inclusions saturated by water and kerogen. Moreover, the Young's modulus calculations show that two zones have high brittleness index values of more than 50%. The rock physics model can be used to predict mineralogy, leading to zones of prospective brittle shale. These zones are then correlated with the brittleness index calculations. The results show that the study area has shale gas prospects for further exploration.
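A brittleness index derived from elastic properties is commonly computed by normalizing Young's modulus and Poisson's ratio between chosen end-members and averaging. The sketch below uses that common Rickman-style normalization as an assumption; the paper itself derives its index from Young's modulus obtained by seismic inversion, and the numbers here are illustrative.

```python
# Hedged sketch of a brittleness index from elastic properties, using the
# common normalization of Young's modulus E and Poisson's ratio nu.
# End-member values (E_min, E_max, nu_min, nu_max) are assumptions.
import numpy as np

def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """Average of normalized E and nu: high E and low nu map to a
    brittle rock (index near 1), low E and high nu to a ductile one."""
    bi_E = (E - E_min) / (E_max - E_min)
    bi_nu = (nu_max - nu) / (nu_max - nu_min)
    return 0.5 * (bi_E + bi_nu)

E = np.array([20.0, 45.0])      # Young's modulus, GPa (illustrative)
nu = np.array([0.35, 0.20])     # Poisson's ratio (illustrative)
print(brittleness_index(E, nu, E_min=10.0, E_max=50.0, nu_min=0.15, nu_max=0.40))
```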
Surface water management: a user's guide to calculate a water balance using the CREAMS model
Lane, L.J.
1984-11-01
The hydrologic component of the CREAMS model is described and discussed in terms of calculating a surface water balance for shallow land burial systems used for waste disposal. Parameter estimates and estimation procedures are presented in detail in the form of a user's guide. Use of the model is illustrated with three examples based on analysis of data from Los Alamos, New Mexico and Rock Valley, Nevada. Use of the model in design of trench caps for shallow land burial systems is illustrated with the example applications at Los Alamos.
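The surface water balance bookkeeping described above can be illustrated with a simple daily bucket model: storage changes by precipitation minus runoff, evapotranspiration and percolation. This is a generic sketch, not the CREAMS parameterization; the storage capacity, percolation rate and input series are invented.

```python
# Minimal daily surface water balance in the spirit of a hydrologic bucket
# model: storage change = precipitation - runoff - ET - percolation.
# All parameters below are hypothetical, not CREAMS parameter values.

def water_balance(precip, et, storage0, capacity, perc_rate):
    """March a soil-water bucket through daily steps.
    Runoff occurs only when the bucket overflows; percolation drains a
    fixed fraction of stored water each day. All depths in mm."""
    storage, runoff_total = storage0, 0.0
    for p, e in zip(precip, et):
        storage += p
        runoff = max(0.0, storage - capacity)  # overflow becomes runoff
        storage -= runoff
        storage = max(0.0, storage - e)        # evapotranspiration loss
        perc = perc_rate * storage             # drainage below the root zone
        storage -= perc
        runoff_total += runoff
    return storage, runoff_total

print(water_balance(precip=[10, 0, 30], et=[2, 2, 2], storage0=50,
                    capacity=60, perc_rate=0.1))
```

A design like this makes the role of each parameter explicit, which is the same reason a user's guide walks through parameter estimation before running the full model.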
On calculating the transfer of carbon-13 in reservoir models of the carbon cycle
TANS, PIETER P.
1980-10-01
An approach to calculating the transfer of isotopic tracers in reservoir models is outlined that takes into account the effects of isotopic fractionation at phase boundaries without any significant approximations. Simultaneous variations in both the rare isotopic tracer and the total elemental (the sum of its isotopes) concentration are considered. The proposed procedure is applicable to most models of the carbon cycle and a four-box model example is discussed. Although the exact differential equations are non-linear, a simple linear approximation exists that gives insight into the nature of the solution. The treatment will be in terms of isotopic ratios which are the directly measured quantities.
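The simultaneous integration of total carbon and the rare isotope can be sketched with a two-box system in which the ¹³C flux across the boundary is the total flux scaled by the isotopic ratio and a fractionation factor. The rate constants, fractionation factors and reservoir sizes below are made-up demonstration values, and the paper treats a four-box model rather than this two-box toy.

```python
# Illustrative two-box isotope-transfer problem: total carbon and 13C are
# integrated together, with fractionation factors applied to the 13C fluxes
# at the phase boundary. All numbers are hypothetical.
from scipy.integrate import solve_ivp

k_ao, k_oa = 0.1, 0.05          # exchange rate constants (1/yr), assumed
a_ao, a_oa = 0.998, 1.000       # fractionation factors at the boundary

def rhs(t, y):
    Na, No, Ca, Co = y           # total C and 13C amounts in each box
    F_ao, F_oa = k_ao * Na, k_oa * No
    f13_ao = a_ao * (Ca / Na) * F_ao   # 13C leaves with fractionation
    f13_oa = a_oa * (Co / No) * F_oa
    return [-F_ao + F_oa, F_ao - F_oa, -f13_ao + f13_oa, f13_ao - f13_oa]

y0 = [600.0, 1200.0, 600.0 * 0.0112, 1200.0 * 0.0112]  # GtC; ratio ~1.12%
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-9, atol=1e-12)
Ra = sol.y[2, -1] / sol.y[0, -1]       # final 13C/C ratio in box "a"
print(Ra)
```

Note that total carbon and total ¹³C are each conserved by construction, mirroring the exact (nonlinear) treatment advocated in the abstract; the commonly used linearization would instead evolve the ratio directly.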
The EXPURT model for calculating external gamma doses from deposited material in inhabited areas.
Jones, J A; Singer, L N; Brown, J
2006-01-01
EXPURT, NRPB's model for calculating external gamma doses in inhabited areas, was originally developed in the mid-1980s. Deposition on surfaces in the area, the subsequent transfer of material between different surfaces or its removal from the system, and dose rates in various locations from material on the different surfaces are modelled. The model has been updated to take account of more recent experimental data on the transfer rates between surfaces and to make it more flexible for use in assessing dose rates following an accidental release. EXPURT is a compartmental model and models the transfer of material between the surfaces using a set of first order differential equations. It enables the impact of the decontamination of surfaces on doses and dose rates to be explored. The paper describes the EXPURT model and presents some preliminary results obtained using it. PMID:16242820
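A compartment model with first-order transfer between surfaces, as described above, has the form dA/dt = KA and can be advanced with a matrix exponential. The three surfaces and transfer rates below are hypothetical stand-ins, not EXPURT's actual compartments or rate constants; radioactive decay is omitted for clarity.

```python
# Minimal first-order compartment sketch: material deposited on three
# hypothetical surfaces (roof, walls, soil), with run-off from roof and
# walls into soil. Rates are invented; decay is left out.
import numpy as np
from scipy.linalg import expm

roof_to_soil, wall_to_soil = 0.01, 0.002      # assumed run-off rates (1/day)
# K[i, j] = rate of transfer from surface j into surface i
K = np.array([
    [-roof_to_soil, 0.0,           0.0],
    [0.0,           -wall_to_soil, 0.0],
    [roof_to_soil,  wall_to_soil,  0.0],
])

A0 = np.array([40.0, 20.0, 100.0])            # initial deposit per surface
A_30 = expm(30.0 * K) @ A0                    # inventory on each surface, day 30
print(A_30)
```

Because each column of K sums to zero, material is conserved as it redistributes, which is the defining property of a pure transfer (no-removal) compartment model.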
Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.
Zhao, Fuman; Rogers, William J; Mannan, M Sam
2010-02-15
Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when the apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the lower flammability limit of a hydrocarbon mixture is quantitatively correlated to its final flame temperature under non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicate that CFT modeling represents the experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's Law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for Le Chatelier's Law. PMID:19819067
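Le Chatelier's mixing rule, which the CFT model is shown to justify, states that the mixture lower flammability limit is the harmonic mean of the component limits weighted by mole fraction: LFL_mix = 1 / Σ(y_i / LFL_i). The component LFL values below are common textbook figures.

```python
# Le Chatelier's rule for the lower flammability limit of a fuel mixture:
# LFL_mix = 1 / sum(y_i / LFL_i), y_i being fuel mole fractions (sum to 1).

def lfl_mixture(fractions, lfls):
    """Mixture LFL (vol%) from component mole fractions and pure LFLs."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mole fractions must sum to 1")
    return 1.0 / sum(y / l for y, l in zip(fractions, lfls))

# 50/50 methane-propane, with textbook LFLs of 5.0 and 2.1 vol%
print(lfl_mixture([0.5, 0.5], [5.0, 2.1]))  # about 2.96 vol%
```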
The impact of nuclear mass models on r-process nucleosynthesis network calculations
NASA Astrophysics Data System (ADS)
Vaughan, Kelly
2002-10-01
An insight into various nucleosynthesis processes can be gained by modelling the process with network calculations. My project focuses on r-process network calculations, where the r-process is nucleosynthesis via rapid neutron capture, thought to take place in high-entropy supernova bubbles. One of the main uncertainties of the simulations is the nuclear physics input. My project investigates the role that nuclear masses play in the resulting abundances. The network code involves rapid (n,γ) capture reactions in competition with photodisintegration and β decay onto seed nuclei. In order to fully analyze the effects of nuclear mass models on the relative isotopic abundances, the network calculations were performed keeping the initial environmental parameters constant throughout. The supernova model investigated by Qian et al. (1996) assumes two r-processes, of high and low frequency, with seed nucleus ⁹⁰Se and fixed luminosity (L_νe(0)/r₇(0)² ≈ 8.77), which together contribute to the nucleosynthesis of the heavier elements. These two r-processes, however, do not contribute equally to the total abundance observed. The total isotopic abundance produced from both events was therefore calculated as Y(H+L) = [Y(H) + f Y(L)]/(f + 1), where Y(H) denotes the relative isotopic abundance produced in the high-frequency event, Y(L) corresponds to the low-frequency event, and f is the ratio of high-event matter to low-event matter produced. Having established reliable, fixed parameters, the network code was run using data files containing parameters such as the mass excess, neutron separation energy, β-decay rates and neutron-capture rates based on three different nuclear mass models. The mass models tested are the HFBCS model (Hartree-Fock BCS) derived from first principles, the ETFSI-Q model (Extended Thomas-Fermi with Strutinsky Integral including shell Quenching) known for its particular successes in the replication of Solar System
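The weighting used above to combine the high- and low-frequency event yields is a one-liner; the abundance arrays here are arbitrary demonstration values, not network-calculation output.

```python
# Mass-weighted combination of the two r-process events, as in the abstract:
# Y(H+L) = (Y(H) + f*Y(L)) / (f + 1), with f the ratio of high-event matter
# to low-event matter produced. Demo abundances are invented.
import numpy as np

def combine_abundances(y_high, y_low, f):
    """Total relative isotopic abundance from the two r-process events."""
    return (np.asarray(y_high) + f * np.asarray(y_low)) / (f + 1.0)

y_high = np.array([1.0, 0.5, 0.2])   # relative abundances, high-frequency event
y_low = np.array([0.2, 0.4, 0.8])    # relative abundances, low-frequency event
print(combine_abundances(y_high, y_low, f=3.0))
```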
Model calculations of extreme ultraviolet gain from laser-irradiated aluminium foils
NASA Astrophysics Data System (ADS)
Pert, G. J.; Tallents, G. J.
1981-05-01
Calculations are presented on the development of gain in expanding aluminum plasmas produced by the irradiation of thin foil targets with laser radiation. The atomic physics of the expanding aluminum plasma is also considered, together with the question of whether such plasmas can indeed be generated by laser irradiation of foil targets. Two-dimensional fluid code calculations are discussed to demonstrate that the model used in the atomic calculations gives a reasonable representation of the expanding laser plasma. It is pointed out that the development of the hydrogen-like ion recombination laser as an X-ray laser requires the use of ions with Z of about 25. Laser action with aluminum at 38.7 Å would be an encouraging step towards X-ray laser action, being about mid-way between the current carbon fiber experiments at 182 Å and true X-ray laser action at about 10 Å.
A new method for modeling rough membrane surface and calculation of interfacial interactions.
Zhao, Leihong; Zhang, Meijia; He, Yiming; Chen, Jianrong; Hong, Huachang; Liao, Bao-Qiang; Lin, Hongjun
2016-01-01
Membrane fouling control necessitates an effective method for assessing interfacial interactions between foulants and a rough-surfaced membrane. This study proposes a new method which includes a rigorous mathematical equation for modeling membrane surface morphology, and a combination of the surface element integration (SEI) method and the composite Simpson's approach for assessing interfacial interactions. The new method provides a complete solution for quantitatively calculating interfacial interactions between foulants and a rough-surfaced membrane. Application of this method in a membrane bioreactor (MBR) showed that high calculation accuracy could be achieved by setting a high segment number, and moreover, that the strength of the three energy components and the energy barrier was remarkably impaired by the existence of roughness on the membrane surface, indicating that membrane surface morphology exerts profound effects on membrane fouling in the MBR. Good agreement between the calculated predictions and the observed fouling phenomena was found, suggesting the feasibility of this method. PMID:26519696
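The composite Simpson's rule mentioned above is a standard quadrature building block; the abstract's note that accuracy improves with segment number is exactly the behavior of this rule, whose error shrinks as h⁴. A minimal self-contained version:

```python
# Composite Simpson's rule on n (even) segments of [a, b]: the workhorse
# quadrature used, per the abstract, inside the surface element integration.
import math

def composite_simpson(f, a, b, n):
    """Approximate the integral of f over [a, b] with n (even) segments."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # Simpson 1-4-2-...-4-1
    return total * h / 3.0

print(composite_simpson(math.sin, 0.0, math.pi, 100))  # close to 2
```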
Study on the calculation models of bus delay at bays using queueing theory and Markov chain.
Sun, Feng; Sun, Li; Sun, Shao-Wei; Wang, Dian-Hai
2015-01-01
Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack a theoretical model for computing efficiency. Therefore, the calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formed, and the equilibrium distribution functions are proposed by applying the embedded Markov chain to the traditional queueing-theory model in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed knowing the characteristics of the dwell-time distribution and the traffic volume in the curb lane at different locations and periods, providing a basis for the efficiency evaluation of bus bays. PMID:25759720
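The entering delay at a bay is fundamentally a queueing quantity. As a deliberately simplified stand-in for the paper's embedded-Markov-chain model, the textbook M/M/1 formulas below give the flavor of how arrival rate and service (dwell) rate determine delay; the rates are illustrative.

```python
# Textbook M/M/1 queue as a simplified illustration of entering delay at a
# single-berth bus bay. This is NOT the paper's embedded-Markov-chain model.

def mm1_delays(lam, mu):
    """Return (utilization, mean wait in queue, mean time in system)
    for Poisson arrivals at rate lam and exponential service at rate mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: need lam < mu")
    rho = lam / mu
    wq = rho / (mu - lam)        # mean wait before entering the berth
    w = 1.0 / (mu - lam)         # wait plus service (dwell) time
    return rho, wq, w

print(mm1_delays(lam=30.0, mu=40.0))  # 30 buses/h arriving, 40/h served
```

With 30 buses/h arriving and 40/h served, the berth is 75% utilized and a bus waits 0.075 h (4.5 min) on average before entering, illustrating how sharply delay grows as utilization approaches 1.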
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
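The shape of a double-Gaussian lateral beam model can be sketched as a narrow primary Gaussian plus a wide, low-amplitude halo Gaussian mixed by a weight w. The sigmas and weight below are illustrative numbers, not fitted beam-line parameters from the paper.

```python
# Radially symmetric double-Gaussian lateral profile: primary core (sigma1)
# plus low-dose halo (sigma2), mixed by weight w. Parameters are invented.
import numpy as np

def double_gaussian(r, w, sigma1, sigma2):
    """2D profile normalized so that its integral over the plane is 1."""
    g1 = np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return (1 - w) * g1 + w * g2

# Check normalization by integrating over the plane: ∫ f(r) 2πr dr = 1
r = np.linspace(0.0, 100.0, 200001)                     # radius, mm
g = double_gaussian(r, w=0.1, sigma1=4.0, sigma2=15.0) * 2 * np.pi * r
total = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r))     # trapezoid rule
print(total)  # ≈ 1.0
```

Because each Gaussian is separately normalized, the halo weight w directly controls the fraction of dose carried by the wide component, which is what allows the second Gaussian to be added without re-normalizing the core.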
Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J. S.; Qin, L.; Xiong, W.; Ma, C.-M.
2004-06-01
The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4 based Monte Carlo user code BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.
Large-scale shell model calculations for structure of Ni and Cu isotopes
NASA Astrophysics Data System (ADS)
Tsunoda, Yusuke; Otsuka, Takaharu; Shimizu, Noritaka; Honma, Michio; Utsuno, Yutaka
2014-09-01
We study the nuclear structure of Ni and Cu isotopes, especially neutron-rich ones in the N ~ 40 region, by Monte Carlo shell model (MCSM) calculations in the pfg9d5 model space (0f7/2, 1p3/2, 0f5/2, 1p1/2, 0g9/2, 1d5/2). Effects of excitation across the N = 40 and other gaps are important for describing properties such as deformation, and we include these effects by using the pfg9d5 model space. As an advantage of the MCSM, we can calculate in this large model space without any truncation. In the MCSM, a wave function is represented as a linear combination of angular-momentum- and parity-projected deformed Slater determinants. We can study the intrinsic shapes of nuclei by using the quadrupole deformations of the MCSM basis states before projection. In doubly-magic 68Ni, the calculation yields oblate and prolate deformed bands as well as the spherical ground state. Such shape coexistence can be explained by the mechanism called Type II shell evolution, driven by changes of configurations within the same nucleus, mainly due to the tensor force.
A calculation model for primary intensity distributions from cylindrically symmetric x-ray lenses
NASA Astrophysics Data System (ADS)
Hristov, Dimitre; Maltz, Jonathan
2008-02-01
A calculation model for the quantitative prediction of primary intensity fluence distributions obtained by the Bragg diffraction focusing of kilovoltage radiation by cylindrical x-ray lenses is presented. The mathematical formalism describes primary intensity distributions from cylindrically-symmetric x-ray lenses, with a planar isotropic radiation source located in a plane perpendicular to the lens axis. The presence of attenuating medium inserted between the lens and the lens focus is accounted for by energy-dependent attenuation. The influence of radiation scattered within the media is ignored. Intensity patterns are modeled under the assumption that photons that are not interacting with the lens are blocked out at any point of interest. The main characteristics of the proposed calculation procedure are that (i) the application of vector formalism allows universal treatment of all cylindrical lenses without the need of explicit geometric constructs; (ii) intensity distributions resulting from x-ray diffraction are described by a 3D generalization of the mosaic spread concept; (iii) the calculation model can be immediately coupled to x-ray diffraction simulation packages such as XOP and Shadow. Numerical simulations based on this model are to facilitate the design of focused orthovoltage treatment (FOT) systems employing cylindrical x-ray lenses, by providing insight about the influence of the x-ray source and lens parameters on quantities of dosimetric interest to radiation therapy.
Seth, Ajay; Delp, Scott L.
2015-01-01
Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is “muscle moment arm,” a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms. PMID:25905111
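For a single generalized coordinate, the classical tendon-excursion definition of moment arm, r(q) = -dL/dq, is the quantity the paper generalizes to multibody joints and wrapping paths. A minimal numerical sketch, using an idealized pulley geometry (the radius and lengths are illustrative, not OpenSim output):

```python
# Tendon-excursion moment arm: r(q) = -dL/dq, estimated numerically from
# sampled musculotendon lengths along a single generalized coordinate.
import numpy as np

def moment_arm(lengths, q):
    """Numerical moment arm from muscle lengths sampled along coordinate q."""
    return -np.gradient(lengths, q)

# Idealized pulley: the tendon wraps a circle of radius R = 0.03 m, so the
# length decreases linearly with joint angle and the moment arm is exactly R.
q = np.linspace(0.0, np.pi / 2, 91)        # joint angle, rad
L = 0.25 - 0.03 * q                        # musculotendon length, m
print(moment_arm(L, q)[:3])                # each entry ≈ 0.03
```

The abstract's point is precisely that this simple derivative stops being well defined once a muscle spans several coupled coordinates or a joint is itself a mechanism, which is why a generalized definition is needed.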
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A
2007-01-01
Space activities in Earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types and quantity of radiation that the astronauts are exposed to is of critical importance in planning space missions. In this paper, calculations of the response of a TEPC to protons and carbon ions are reported. The calculations have been carried out using Monte Carlo track structure simulation codes for the walled and wall-less TEPC counters. The model simulates nonhomogeneous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies for 0.3 MeV-1 GeV protons are presented and compared with the experimental data. Calculations of quality factors (QF) were made using individual track histories. Additionally, calculated absolute frequencies of energy depositions in cylindrical targets, 100 nm in height by 100 nm in diameter, randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, are presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder. PMID:17513858
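The frequency- and dose-averaged lineal energies mentioned above are simple moments of the single-event spectrum: y_i = ε_i / l̄ for each event, y_F is the plain mean, and y_D weights each event by its own size. The event energies and site geometry below are invented for illustration.

```python
# Microdosimetric averages from a list of single-event energy depositions:
# y_i = eps_i / l_bar, then y_F = <y> and y_D = <y^2> / <y>.
import numpy as np

def lineal_energy_averages(eps_keV, mean_chord_um):
    """Return (y_F, y_D) in keV/um from per-event energy depositions."""
    y = np.asarray(eps_keV) / mean_chord_um
    y_f = y.mean()
    y_d = (y**2).mean() / y_f      # dose average weights events by their size
    return y_f, y_d

# Toy event sizes; for a sphere of diameter d the mean chord is 2d/3
events = [0.5, 1.0, 2.0, 4.0]      # keV deposited per event, illustrative
print(lineal_energy_averages(events, mean_chord_um=2.0 / 3.0))
```

Since y_D up-weights large events, y_D ≥ y_F always, and the gap between the two averages is itself a measure of the clustering the abstract discusses.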
MPS solidification model. Analysis and calculation of macrosegregation in a casting ingot
NASA Technical Reports Server (NTRS)
Poirier, D. R.; Maples, A. L.
1985-01-01
Work performed on several existing solidification models for which computer codes and documentation were developed is presented. The models describe the solidification of alloys in which there is a time varying zone of coexisting solid and liquid phases; i.e., the S/L zone. The primary purpose of the models is to calculate macrosegregation in a casting or ingot which results from flow of interdendritic liquid in this S/L zone during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, is modeled as flow through a porous medium. In Model 1, the steady state model, the heat flow characteristics are those of steady state solidification; i.e., the S/L zone is of constant width and it moves at a constant velocity relative to the mold. In Model 2, the unsteady state model, the width and rate of movement of the S/L zone are allowed to vary with time as it moves through the ingot. Each of these models exists in two versions. Models 1 and 2 are applicable to binary alloys; models 1M and 2M are applicable to multicomponent alloys.
Gritti, Fabrice; Guiochon, Georges
2014-08-22
Computer calculations of gradient chromatograms were performed by taking into account the adsorption behavior of the strong eluent in RPLC and the true Henry constant of the analytes. This improves the accuracy of classical gradient calculations, which all assume no affinity of the eluent modifier for the stationary phase and that the linear solvent strength model (LSSM) applies. The excess adsorption isotherm of acetonitrile with respect to water was measured by the minor disturbance method on a Symmetry-C₁₈ RPLC adsorbent. The variations of the Henry constants of a nine-compound mixture with the volume fraction of acetonitrile in the aqueous mobile phase were measured. The equilibrium dispersive model of chromatography combined with orthogonal collocation on finite elements was used to calculate chromatograms of the sample mixture for four gradient times decreasing from 25 to 1 min. The results predict a loss of resolution for the less retained analytes when the gradient time becomes smaller than 4 min. They also predict that this behavior can be eliminated by applying a quadratic gradient profile rather than a classical linear gradient. The predictions were validated by the agreement between the calculated and experimental chromatograms. PMID:24999065
Calculation of the viscosity of binary liquids at various temperatures using Jouyban-Acree model.
Jouyban, Abolghasem; Khoubnasabjafari, Maryam; Vaez-Gharamaleki, Zahra; Fekari, Zohreh; Acree, William Eugene
2005-05-01
Applicability of the Jouyban-Acree model for calculating the absolute viscosity of binary liquid mixtures with respect to temperature and mixture composition is proposed. The correlation ability of the model is evaluated by employing viscosity data for 143 aqueous and non-aqueous liquid mixtures at various temperatures collected from the literature. The results show that the model is able to correlate the data with an overall percentage deviation (PD) of 1.9+/-2.5%. In order to test the prediction capability of the model, three experimental viscosities from the highest and lowest temperatures, along with the viscosities of the neat liquids at all temperatures, were employed to train the model; the viscosity values at other mixture compositions and temperatures were then predicted, and the overall PD obtained is 2.6+/-4.0%. PMID:15863923
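As a rough sketch of how such a correlation works, the Jouyban-Acree expression for a binary mixture, ln η_m,T = f1·ln η1,T + f2·ln η2,T + (f1·f2/T)·Σ J_i (f1−f2)^i, and the percentage deviation (PD) metric can be coded directly. The J constants below are placeholders to be fitted to experimental data; this is not the authors' trained model.

```python
import math

def jouyban_acree_viscosity(f1, eta1, eta2, T, J):
    """Viscosity of a binary mixture from the Jouyban-Acree model:
    ln(eta_m) = f1*ln(eta1) + f2*ln(eta2) + (f1*f2/T) * sum_i J[i]*(f1-f2)**i
    f1: fraction of component 1 (f2 = 1 - f1); T: absolute temperature (K);
    J: model constants fitted to experimental data (placeholders here)."""
    f2 = 1.0 - f1
    ln_eta = f1 * math.log(eta1) + f2 * math.log(eta2)
    ln_eta += (f1 * f2 / T) * sum(Ji * (f1 - f2) ** i for i, Ji in enumerate(J))
    return math.exp(ln_eta)

def percent_deviation(calc, obs):
    """Overall percentage deviation (PD) between calculated and observed values."""
    return 100.0 / len(obs) * sum(abs(c - o) / o for c, o in zip(calc, obs))
```

With all J_i set to zero the model reduces to a log-linear (ideal) mixing rule, which is the baseline the fitted constants improve upon.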
PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations
NASA Astrophysics Data System (ADS)
Elmaghraby, Elsayed K.
2009-09-01
The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single-nucleon emissions, and the statistical probabilities follow from the independence of nuclear decays. The code has proved to be a good tool for providing predictions of energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one-nucleon emission.
Program summary
Program title: PHASE-OTI
Catalogue identifier: AEDN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5858
No. of bytes in distributed program, including test data, etc.: 149 405
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Pentium 4 and Centrino Duo
Operating system: MS Windows
RAM: 128 MB
Classification: 17.12
Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of the pre-equilibrium emission model.
Solution method: Single-neutron emission is treated by assuming that the reaction occurs in successive steps. Each step is called a phase because of the phase-transition nature of the theory. The probability of emission is calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor is used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
Restrictions: The program is restricted to single nucleon emission and nucleon
Necessity of using heterogeneous ellipsoidal Earth model with terrain to calculate co-seismic effect
NASA Astrophysics Data System (ADS)
Cheng, Huihong; Zhang, Bei; Zhang, Huai; Huang, Luyuan; Qu, Wulin; Shi, Yaolin
2016-04-01
Co-seismic deformation and stress changes, which reflect the elasticity of the earth, are very important in earthquake dynamics and also for other issues, such as the evaluation of seismic risk, fracture processes and earthquake triggering. Many researchers have studied dislocation theory and co-seismic deformation, producing half-space homogeneous models, half-space stratified models, spherical stratified models, and so on. In particular, the models of Okada (1992) and Wang (2003, 2006) are widely applied in calculations of co-seismic and post-seismic effects. However, since neither the semi-infinite half-space models nor the layered models take the earth's curvature, heterogeneity or topography into consideration, large errors arise in calculating the co-seismic displacement of a great earthquake over its impacted area. Meanwhile, the computational methods for calculating the co-seismic strain and stress differ between spherical and plane models. Here, we adopted the finite element method, which can deal well with the complex characteristics of rock (such as anisotropy and discontinuities) and with different conditions. We use a mesh adaptation technique to automatically refine the mesh near the fault and adopt an equivalent body force to replace the dislocation source, which avoids the difficulty of handling the discontinuity surface with conventional methods (Zhang et al., 2015). We constructed an earth model that includes the earth's layered structure and curvature; the upper boundary was set as a free surface and the core-mantle boundary was subjected to buoyancy forces. Firstly, based on the precision requirement, we take as a test case a strike-slip fault (fault length 500 km, width 50 km, slip 10 m). Because of the curvature of the Earth, some errors certainly occur in plane coordinates, just as in previous studies (Dong et al., 2014; Sun et al., 2012). However, we also found that: 1) the co
Supersonic flow calculation using a Reynolds-stress and an eddy thermal diffusivity turbulence model
NASA Technical Reports Server (NTRS)
Sommer, T. P.; So, R. M. C.; Zhang, H. S.
1993-01-01
A second-order model for the velocity field and a two-equation model for the temperature field are used to calculate supersonic boundary layers assuming negligible real gas effects. The modeled equations are formulated on the basis of an incompressible assumption and then extended to supersonic flows by invoking Morkovin's hypothesis, which proposes that compressibility effects are completely accounted for by mean density variations alone. In order to calculate the near-wall flow accurately, correction functions are proposed to render the modeled equations asymptotically consistent with the behavior of the exact equations near a wall and, at the same time, display the proper dependence on the molecular Prandtl number. Thus formulated, the near-wall second-order turbulence model for heat transfer is applicable to supersonic flows with different Prandtl numbers. The model is validated against flows with different Prandtl numbers and supersonic flows with free-stream Mach numbers as high as 10 and wall temperature ratios as low as 0.3. Among the flow cases considered, the momentum thickness Reynolds number varies from approximately 4,000 to approximately 21,000. Good correlation with measurements of mean velocity, temperature, and its variance is obtained. Discernible improvements in the law-of-the-wall are observed, especially in the range where the log-law applies.
NASA Astrophysics Data System (ADS)
Ulmer, W.; Schaffner, B.
2011-03-01
We have developed a model for proton depth dose and lateral distributions based on Monte Carlo calculations (GEANT4) and an integration procedure of Bethe-Bloch equation (BBE). The model accounts for the transport of primary and secondary protons, the creation of recoil protons and heavy recoil nuclei as well as lateral scattering of these contributions. The buildup, which is experimentally observed in higher energy depth dose curves, is modeled by an inclusion of two different origins: (1) secondary reaction protons with a contribution of ca. 65% of the buildup (for monoenergetic protons). (2) Landau tails as well as Gaussian type of fluctuations for range straggling effects. All parameters of the model for initially monoenergetic proton beams have been obtained from Monte Carlo calculations or checked by them. Furthermore, there are a few parameters, which can be obtained by fitting the model to the measured depth dose curves in order to describe individual characteristics of the beamline—the most important being the initial energy spread. We find that the free parameters of the depth dose model can be predicted for any intermediate energy from a couple of measured curves.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1986-01-01
Calculated results based on the two chromospheric flare models F1 and F2 of Machado et al. (1980) are presented. Two additional models are included: F1*, which has enhanced temperatures relative to the weak-flare model F1 in the upper photosphere and low chromosphere, and F3, which has enhanced temperatures relative to the strong-flare model F2 in the upper chromosphere. Each model is specified by means of a given variation of the temperature as a function of column mass. The corresponding variation of particle density and the geometrical height scale are determined by assuming hydrostatic equilibrium. The coupled equations of statistical equilibrium and radiative transfer are solved for H, H-, He I-II, C I-IV, Si I-II, Mg I-II, Fe, Al, O I-II, Na, and Ca II. The overall absorption and emission of radiation by lines throughout the spectrum is determined by means of a reduced set of opacities sampled from a compilation of over 10^7 individual lines. It is also shown that the white-light flare continuum may arise from extreme chromospheric overheating as well as from an enhancement of the temperature-minimum region. The radiative cooling rate calculations for our brightest flare model suggest that chromospheric overheating provides enhanced radiation that could cause significant heating deep in the flare atmosphere.
A numerical model for calculating vibration from a railway tunnel embedded in a full-space
NASA Astrophysics Data System (ADS)
Hussein, M. F. M.; Hunt, H. E. M.
2007-08-01
Vibration generated by underground railways transmits to nearby buildings causing annoyance to inhabitants and malfunctioning to sensitive equipment. Vibration can be isolated through countermeasures by reducing the stiffness of railpads, using floating-slab tracks and/or supporting buildings on springs. Modelling of vibration from underground railways has recently gained more importance on account of the need to evaluate accurately the performance of vibration countermeasures before these are implemented. This paper develops an existing model, reported by Forrest and Hunt, for calculating vibration from underground railways. The model, known as the Pipe-in-Pipe model, has been developed in this paper to account for anti-symmetrical inputs and therefore to model tangential forces at the tunnel wall. Moreover, three different arrangements of supports are considered for floating-slab tracks, one which can be used to model directly-fixed slabs. The paper also investigates the wave-guided solution of the track, the tunnel, the surrounding soil and the coupled system. It is shown that the dynamics of the track have significant effect on the results calculated in the wavenumber-frequency domain and therefore an important role on controlling vibration from underground railways.
Large-scale shell-model calculations of nuclei around mass 210
NASA Astrophysics Data System (ADS)
Teruya, E.; Higashiyama, K.; Yoshinaga, N.
2016-06-01
Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For the phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including the multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.
Improved analytical flux surface representation and calculation models for poloidal asymmetries
NASA Astrophysics Data System (ADS)
Collart, T. G.; Stacey, W. M.
2016-05-01
An orthogonalized flux-surface aligned curvilinear coordinate system has been developed from an up-down asymmetric variation of the "Miller" flux-surface equilibrium model. It is found that the new orthogonalized "asymmetric Miller" model representation of equilibrium flux surfaces provides a more accurate match than various other representations of DIII-D [J. L. Luxon, Nucl. Fusion 42, 614-633 (2002)] discharges to flux surfaces calculated using the DIII-D Equilibrium Fitting tokamak equilibrium reconstruction code. The continuity and momentum balance equations were used to develop a system of equations relating asymmetries in plasma velocities, densities, and electrostatic potential in this curvilinear system, and detailed calculations of poloidal asymmetries were performed for a DIII-D discharge.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...
Phenomenological Rashba model for calculating the electron energy spectrum on a cylinder
NASA Astrophysics Data System (ADS)
Savinskiĭ, S. S.; Belosludtsev, A. V.
2007-05-01
The energy spectrum of an electron on the surface of a cylinder is calculated using the Pauli equation with an additional term that takes into account the spin-orbit interaction. This term is taken in the approximation of a phenomenological Rashba model, which provides exact expressions for the wave functions and the electron energy spectrum on the cylinder surface in a static magnetic field.
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
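A toy version of such a constrained fit, assuming a simple exponential kernel and using projected gradient descent in place of the authors' least-squares machinery, can be sketched as follows. Positivity is enforced by clipping and smoothness by a first-difference penalty; the kernel, grids and true spectrum below are invented for illustration.

```python
import math

def fit_spectrum(K, G, lam=1e-3, step=5e-3, iters=2000):
    """Minimize ||K A - G||^2 + lam * sum_j (A[j+1]-A[j])^2 subject to A >= 0,
    by projected gradient descent (positivity enforced by clipping)."""
    n_tau, n_w = len(K), len(K[0])
    A = [1.0 / n_w] * n_w                       # flat, positive initial guess
    for _ in range(iters):
        # residual of the linear model at the current iterate
        r = [sum(K[i][j] * A[j] for j in range(n_w)) - G[i] for i in range(n_tau)]
        grad = [2.0 * sum(K[i][j] * r[i] for i in range(n_tau)) for j in range(n_w)]
        for j in range(n_w):                    # gradient of the smoothness penalty
            if j > 0:
                grad[j] += 2.0 * lam * (A[j] - A[j - 1])
            if j < n_w - 1:
                grad[j] -= 2.0 * lam * (A[j + 1] - A[j])
        A = [max(0.0, a - step * g) for a, g in zip(A, grad)]
    return A

# example: exponential kernel K(tau, w) = exp(-tau * w) and a peaked true spectrum
taus = [0.1 * i for i in range(1, 9)]
ws = [0.5 * j for j in range(1, 9)]
K = [[math.exp(-t * w) for w in ws] for t in taus]
A_true = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
G = [sum(K[i][j] * A_true[j] for j in range(len(ws))) for i in range(len(taus))]
A_fit = fit_spectrum(K, G)
```

The real problem is far more ill-conditioned (noisy Monte Carlo data, near-degenerate kernel), which is exactly why the positivity and smoothness constraints matter.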
Double-step truncation procedure for large-scale shell-model calculations
NASA Astrophysics Data System (ADS)
Coraggio, L.; Gargano, A.; Itaco, N.
2016-06-01
We present a procedure that helps reduce the computational complexity of large-scale shell-model calculations while preserving, as much as possible, the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, in order to locate the relevant degrees of freedom for describing a class of isotopes or isotones, namely the single-particle orbitals that will constitute the new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that effectively retains the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven proton and five neutron single-particle orbitals outside 88Sr. We study the dependence of shell-model results upon different truncations of the original model space for the Zr, Mo, Ru, Pd, Cd, and Sn isotopic chains, showing the reliability of this truncation procedure.
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analyses (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics of up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. PMID:27139005
Influence of polarization and a source model for dose calculation in MRT
Bartzsch, Stefan; Oelfke, Uwe; Lerch, Michael; Petasecca, Marco; Bräuer-Krisch, Elke
2014-04-15
Purpose: Microbeam Radiation Therapy (MRT), an alternative preclinical treatment strategy using spatially modulated synchrotron radiation on a micrometer scale, has the great potential to cure malignant tumors (e.g., brain tumors) while having low side effects on normal tissue. Dose measurement and calculation in MRT is challenging because of the spatial accuracy required and the arising high dose differences. Dose calculation with Monte Carlo simulations is time consuming and their accuracy is still a matter of debate. In particular, the influence of photon polarization has been discussed in the literature. Moreover, it is controversial whether a complete knowledge of phase space trajectories, i.e., the simulation of the machine from the wiggler to the collimator, is necessary in order to accurately calculate the dose. Methods: With Monte Carlo simulations in the Geant4 toolkit, the authors investigate the influence of polarization on the dose distribution and the therapeutically important peak to valley dose ratios (PVDRs). Furthermore, the authors analyze in detail phase space information provided by Martínez-Rovira et al. [“Development and commissioning of a Monte Carlo photon model for the forthcoming clinical trials in microbeam radiation therapy,” Med. Phys. 39(1), 119–131 (2012)] and examine its influence on peak and valley doses. A simple source model is developed using parallel beams and its applicability is shown in a semiadjoint Monte Carlo simulation. Results are compared to measurements and previously published data. Results: Polarization has a significant influence on the scattered dose outside the microbeam field. In the radiation field, however, dose and PVDRs deduced from calculations without polarization and with polarization differ by less than 3%. The authors show that the key consequences from the phase space information for dose calculations are inhomogeneous primary photon flux, partial absorption due to inclined beam incidence outside
[Models for calculating risks as a tool in screening for cardiovascular diseases].
Bryndorf, T E; Petersen, H H; Baastrup, A; Bremmelgaard, A; Videbaek, J
1990-04-16
In connection with screening for risk factors for ischaemic heart disease at Bispebjerg Hospital, we have assessed three different models for calculating risk, applied to our own material. A total of 462 persons participated in the screening and 275 of these were under the age of 65 years. Of these 275, 92 had plasma cholesterol values of 7.0 mmol/l or above and/or smoked over 20 grams of tobacco daily. On comparison of the three risk-calculation models, one American, one British and one Swedish, only moderate agreement was observed: the correlation coefficients varied between 0.75 and 0.89. The reason for this may be that the risk-calculation models are constructed on the basis of statistics from epidemiological investigations in which associations are demonstrated between selected observable factors and ischaemic heart disease. It is thus possible that the factors which we measure and possibly attempt to influence are not pathogenetic. We consider, therefore, that risk scoring should be employed with caution. As causal connections between ischaemic heart disease and cholesterol and smoking, respectively, have been demonstrated with reasonable certainty, we consider it reasonable to screen and intervene for these factors alone. PMID:2330641
First-Principles Calculations, Experimental Study, and Thermodynamic Modeling of the Al-Co-Cr System
Liu, Xuan L.; Gheno, Thomas; Lindahl, Bonnie B.; Lindwall, Greta; Gleeson, Brian; Liu, Zi-Kui
2015-01-01
The phase relations and thermodynamic properties of the condensed Al-Co-Cr ternary alloy system are investigated using first-principles calculations based on density functional theory (DFT) and phase-equilibria experiments that led to X-ray diffraction (XRD) and electron probe micro-analysis (EPMA) measurements. A thermodynamic description is developed by means of the calculations of phase diagrams (CALPHAD) method using experimental and computational data from the present work and the literature. Emphasis is placed on modeling the bcc-A2, B2, fcc-γ, and tetragonal-σ phases in the temperature range of 1173 to 1623 K. Liquid, bcc-A2 and fcc-γ phases are modeled using substitutional solution descriptions. First-principles special quasirandom structures (SQS) calculations predict a large bcc-A2 (disordered)/B2 (ordered) miscibility gap, in agreement with experiments. A partitioning model is then used for the A2/B2 phase to effectively describe the order-disorder transitions. The critically assessed thermodynamic description describes all phase equilibria data well. A2/B2 transitions are also shown to agree well with previous experimental findings. PMID:25875037
A comparison of Viking UVI auroral observations and model calculations of camera responses
Steele, D.P.; McEwen, D.J.; Murphree, J.S.
1995-03-01
The authors have selected a number of events observed by the UV imager on Viking, both in the UV Lyman-Birge-Hopfield (LBH) bands and in the O I 130.4 nm and 135.6 nm lines, for which there were simultaneous DMSP F7 particle data for comparison. These events were selected from times of quiet to moderately active ionospheric conditions with stable electron precipitation around the region of observation. They then performed model calculations of auroral emissions and the corresponding radiative transfer, and folded in the response functions of the UV cameras. Good agreement was achieved for 5 of the 6 events modeled.
Henderson, J D; Olson, R D; Ravis, W R
1985-08-01
A polyexponential curve-stripping program, KIN, is described for use on the HP-41CV programmable calculator. The program may be used in the analysis of plasma-concentration-time curves for a three-compartment intravenous bolus or infusion model with linear elimination processes. The coefficients and hybrid rate constants of the exponential function are then used to compute pharmacokinetic parameters (volume of the central compartment, intercompartmental rate transfer constants), which may be used as initial estimates of model parameters in non-linear regression curve-fitting procedures. PMID:3839870
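Curve stripping (peeling) of this kind can be sketched for the biexponential case: fit the terminal log-linear phase on the late points, subtract it, and fit the residuals for the fast phase. This is a minimal illustration of the technique, not the KIN program itself, and the three-compartment case simply repeats the peeling once more.

```python
import math

def linfit(xs, ys):
    """Least-squares slope and intercept of y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def strip_biexponential(ts, cs, n_terminal):
    """Peel C(t) = A*exp(-alpha*t) + B*exp(-beta*t) with alpha > beta:
    fit the slow phase on the last n_terminal points of ln C vs t,
    subtract it, then fit the fast phase on the early residuals."""
    slope, intercept = linfit(ts[-n_terminal:],
                              [math.log(c) for c in cs[-n_terminal:]])
    beta, B = -slope, math.exp(intercept)
    resid = [(t, c - B * math.exp(-beta * t)) for t, c in zip(ts, cs)]
    early = [(t, r) for t, r in resid if r > 0][: len(ts) - n_terminal]
    slope, intercept = linfit([t for t, _ in early],
                              [math.log(r) for _, r in early])
    return math.exp(intercept), -slope, B, beta   # A, alpha, B, beta
```

As the abstract notes, such stripped estimates are best used as initial guesses for a subsequent non-linear regression rather than as final parameter values.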
Transverse space charge effect calculation in the Synergia accelerator modeling toolkit
Okonechnikov, Konstantin; Amundson, James; Macridin, Alexandru; /Fermilab
2009-09-01
This paper describes a transverse space charge effect calculation algorithm developed in the context of the accelerator modeling toolkit Synergia. An introduction to the space charge problem and a short description of the Synergia modeling toolkit are given. The developed algorithm is explained and its implementation is described in detail. As a result of this work, a new space charge solver was developed and integrated into the Synergia toolkit. The solver gave correct results in comparison with existing Synergia solvers and delivered better performance in the regime where it is applicable.
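A typical first step in a particle-in-cell space charge solver is depositing macroparticle charge onto a grid before solving the Poisson equation for the self-field. A minimal 1D cloud-in-cell (linear-weighting) deposition, shown as an illustrative sketch rather than Synergia's actual solver, looks like:

```python
def deposit_charge_cic(positions, charges, grid_min, dx, n_cells):
    """Cloud-in-cell deposition of macroparticle charge onto a 1D grid of
    n_cells cells (n_cells + 1 nodes); each particle's charge is split
    linearly between the two nodes bracketing it."""
    rho = [0.0] * (n_cells + 1)
    for x, q in zip(positions, charges):
        s = (x - grid_min) / dx
        i = int(s)
        if i >= n_cells:              # particle exactly on the upper edge
            i, f = n_cells - 1, 1.0
        else:
            f = s - i
        rho[i] += q * (1.0 - f)
        rho[i + 1] += q * f
    return rho
```

Linear weighting conserves total charge exactly, which is the basic sanity check for any deposition scheme; a transverse 2D solver does the same with bilinear weights before an FFT or finite-difference Poisson solve.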
Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models
Stiglic, Gregor; Pajnkihar, Majda
2015-01-01
Classical paper-and-pencil risk assessment questionnaires are often accompanied by online versions of the questionnaire to reach a wider population. This study focuses on the loss in risk estimation performance that can result from directly transforming paper questionnaires into online versions, ignoring the more complex and accurate calculations that online calculators can perform. We empirically compare the risk estimation performance between four major diabetes risk calculators and two, more advanced, predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999–2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil based tests, with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) of persons selected for screening. Our results demonstrate a significant difference in performance, with the additional benefit of fewer persons selected for screening, when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression, with an AUC of 0.775 (0.734) and an average 34% (48%) of persons selected for screening. However, generalized boosted regression models might be a better option from the economical point of view, as the proportion selected for screening of 30% (47%) lies significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were
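The AUC figures quoted above can be computed without plotting an ROC curve at all, using the Mann-Whitney interpretation of AUC as the probability that a randomly chosen positive case outscores a randomly chosen negative one (ties counting one half). This self-contained sketch is independent of the NHANES analysis:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the fraction of (positive,
    negative) pairs in which the positive case scores higher, with
    tied pairs contributing one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to random scoring and 1.0 to perfect separation, which is why values around 0.70-0.78, as reported above, represent only moderate discrimination.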
Hoak, T.E. |; Sundberg, K.R.; Ortoleva, P.
1998-12-31
The analysis carried out in the Chemical Interaction of Rocks and Fluids Basin (CIRFB) model describes the chemical and physical evolution of the entire system. One aspect of this is the deformation of the rocks, and its treatment with a rigorous flow and rheological model. This type of analysis depends on knowing the state of the model domain's boundaries as functions of time. In the Andrews and Ector County areas of the Central Basin Platform of West Texas, the authors calculate this shortening with a simple interpretation of the basic motion and a restoration of the Ellenburger formation. Despite its simplicity, this calculation reveals two distinct periods of shortening/extension, a relatively uniform directionality to all the deformation, and the localization of deformation effects to the immediate vicinities of the major faults in the area. Conclusions are drawn regarding the appropriate expressions of these boundary conditions in the CIRFB model and possible implications for exploration.
Two-dimensional model calculation of fluorine-containing reservoir species in the stratosphere
NASA Technical Reports Server (NTRS)
Kaye, Jack A.; Douglass, Anne R.; Jackman, Charles H.; Stolarski, Richard S.; Zander, R.
1991-01-01
Two-dimensional model calculations have been carried out of the distributions of the fluorine-containing reservoir species HF, CF2O, and CFClO. HF constitutes the largest fluorine reservoir in the stratosphere, but CF2O also makes an important contribution to the inorganic fluorine budget. CFClO amounts are most important in the tropical lower stratosphere. HF amounts increase with altitude throughout the stratosphere, while those of CF2O and CFClO fall off above their mixing ratio peaks due to photolysis. The model is in good qualitative agreement with observed vertical profiles of HF and CF2O but tends to underestimate the total column of HF. The calculated CFClO distribution is in good agreement with the very limited data. The disagreement in the HF columns is likely due to small inaccuracies in the model's treatment of lower stratospheric photolysis of chlorofluorocarbons. The model results support the suggestion that CF2O may be heterogeneously converted to HF on the surface of polar stratospheric cloud particles. The model results also suggest that the quantum yield for photolysis of CF2O is near unity.
An empirical model for calculation of the collimator contamination dose in therapeutic proton beams
NASA Astrophysics Data System (ADS)
Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.
2016-02-01
Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator, and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose prediction and measurements were decreased from 10% to 2% using the proposed model. The gamma-index criterion (2%/1 mm) was met for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.
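The gamma-index pass criterion quoted above combines a dose tolerance with a distance-to-agreement (DTA) tolerance. A minimal 1D sketch of a global gamma analysis (profile values and spacing are hypothetical):

```python
import math

def gamma_index(ref, evl, dx_mm, dose_tol=0.02, dta_mm=1.0):
    """1D global gamma analysis: for each reference sample, find the
    minimum combined dose-difference / distance-to-agreement metric
    over the evaluated profile. Samples are spaced dx_mm apart."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evl):
            dose_term = (de - dr) / (dose_tol * d_max)   # 2% of global max
            dist_term = (j - i) * dx_mm / dta_mm          # 1 mm DTA
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

# Identical profiles pass trivially (gamma = 0 everywhere); the pass
# rate is the fraction of points with gamma <= 1.
ref = [0.1, 0.5, 1.0, 0.5, 0.1]
g = gamma_index(ref, list(ref), dx_mm=1.0)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
print(max(g), pass_rate)
```

"More than 90% of pixels" in the abstract corresponds to `pass_rate > 0.9` under the 2%/1 mm tolerances.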
Exciton Model Calculations up to 200 MeV: The Optical Model Points the Way
Duijvestijn, M.C.; Koning, A.J.
2005-05-24
We present a preequilibrium model for nucleons with incident energies from 7 to 200 MeV, for nuclides in the mass range A ≥ 24. This is accomplished by a new global approach for the two-component exciton model which, together with the complementary compound and direct reaction mechanisms, enables a description of continuum energy spectra over the whole outgoing energy range. We develop new forms for the internal transition rates with collision probabilities based on a recent optical-model potential. To connect with conventional semi-classical analyses, we derive from this approach a new energy-dependent form for the average square matrix element M². We include surface effects and multiple preequilibrium emission up to any order. To assess the predictive power of our model, we have tested it against a complete experimental database of (n,xn), (n,xp), (p,xn), and (p,xp) spectra. In this paper we show some examples.
The truth is out there: measured, calculated and modelled benthic fluxes.
NASA Astrophysics Data System (ADS)
Pakhomova, Svetlana; Protsenko, Elizaveta
2016-04-01
In modern Earth science, understanding the processes that form benthic fluxes is of great importance, since these fluxes act as sources or sinks of elements to or from the water body and thus affect the element balance of the water system. There are several ways to assess benthic fluxes, and here we compare the results obtained from chamber experiments, calculated from porewater distributions, and simulated with a model. Benthic fluxes of dissolved elements (oxygen, nitrogen species, phosphate, silicate, alkalinity, iron and manganese species) were studied in the Baltic and Black Seas from 2000 to 2005. Fluxes were measured in situ using chamber incubations (Jch), and at the same time sediment cores were collected to assess the porewater distribution at different depths and calculate diffusive fluxes (Jpw). The model study was carried out with the benthic-pelagic biogeochemical model BROM (an O-N-P-Si-C-S-Mn-Fe redox model), which was applied to simulate the biogeochemical structure of the water column and upper sediment and to assess the vertical fluxes (Jmd). By their behaviour at the water-sediment interface, all studied elements can be divided into three groups: (1) elements whose benthic fluxes are determined by the concentration gradient only (Si, Mn), (2) elements whose fluxes depend on redox conditions in the bottom water (Fe, PO4, NH4), and (3) elements whose fluxes are strongly connected with the fate of organic matter (O2, Alk, NH4). For the first group it was found that measured fluxes are always higher than calculated diffusive fluxes (1.5
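Diffusive fluxes of the kind denoted Jpw above are conventionally estimated from porewater profiles with Fick's first law. A minimal sketch, with hypothetical silicate concentrations and assumed sediment diffusivity and porosity (not values from the study):

```python
def diffusive_flux(c_water, c_porewater, dz, d_s, porosity):
    """Diffusive benthic flux from Fick's first law, J = -phi*D_s*dC/dz,
    using the concentration change over the first dz below the
    sediment-water interface (z positive downward into the sediment).
    Positive J means a flux out of the sediment into the bottom water."""
    gradient = (c_water - c_porewater) / dz
    return -porosity * d_s * gradient

# Hypothetical silicate profile: 50 umol/L in the bottom water,
# 350 umol/L at 1 cm depth; D_s and porosity are assumed values.
J_si = diffusive_flux(50.0, 350.0, 1.0, d_s=1.0e-5, porosity=0.8)
print(J_si > 0.0)  # silicate diffuses out of the sediment
```

For gradient-controlled elements such as Si and Mn, this Jpw is the quantity compared against the chamber flux Jch.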
Nalli, Nicholas R; Minnett, Peter J; van Delst, Paul
2008-07-20
Although published sea surface infrared (IR) emissivity models have gained widespread acceptance for remote sensing applications, discrepancies have been identified against field observations obtained from IR Fourier transform spectrometers at view angles greater than approximately 40 degrees. We therefore propose, in this two-part paper, an alternative approach for calculating surface-leaving IR radiance that treats both emissivity and atmospheric reflection in a systematic yet practical manner. This first part presents the theoretical basis, development, and computations of the proposed model. PMID:18641735
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock-recruitment model is widely used due to its flexibility and ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S_MSY) to a fishery, and the harvest corresponding to it (U_MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S_MSY and U_MSY in terms of the productivity and density-dependent parameters of Ricker's model. PMID:27004147
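The transcendental equation mentioned above admits a closed form in terms of the Lambert W function (to my understanding, the form this paper derives); a sketch with a pure-Python W and hypothetical Ricker parameters, cross-checked by brute force:

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch of the Lambert W function (w*exp(w) = x, x >= 0),
    solved by Newton iteration."""
    w = math.log(1.0 + x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def ricker_msy(a, b):
    """MSY reference points for the Ricker model R = S*exp(a - b*S):
    S_MSY = (1 - W(exp(1 - a))) / b  and  U_MSY = 1 - W(exp(1 - a))."""
    w = lambert_w0(math.exp(1.0 - a))
    return (1.0 - w) / b, 1.0 - w

a, b = 1.5, 0.001  # hypothetical productivity and density dependence
S_msy, U_msy = ricker_msy(a, b)

# Cross-check against brute-force maximization of yield Y(S) = R(S) - S.
grid = [0.1 * s for s in range(1, 20000)]
S_num = max(grid, key=lambda s: s * math.exp(a - b * s) - s)
print(round(S_msy, 1), round(S_num, 1))
```

The closed form and the grid search agree to the grid resolution, which is the point of replacing 30 years of numerical approximation.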
Development of an algebraic stress/two-layer model for calculating thrust chamber flow fields
NASA Technical Reports Server (NTRS)
Chen, C. P.; Shang, H. M.; Huang, J.
1993-01-01
Following the consensus of a workshop on Turbulence Modeling for Liquid Rocket Thrust Chambers, the current effort was undertaken to study the effects of second-order closure on the predictions of thermochemical flow fields. To reduce the instability and computational intensity of the full second-order Reynolds Stress Model, an Algebraic Stress Model (ASM) coupled with a two-layer near-wall treatment was developed. Various test problems, including the compressible boundary layer with adiabatic and cooled walls, recirculating flows, swirling flows and the entire SSME nozzle flow, were studied to assess the performance of the current model. Detailed calculations for the SSME exit wall flow around the nozzle manifold were executed. As to the overall flow predictions, the ASM accounts for non-isotropic turbulence effects, removing a further assumption and allowing more appropriate comparison with experimental data.
NASA Astrophysics Data System (ADS)
Homma, H.; Murayama, T.
We investigate a chemical evolution model that simultaneously explains the chemical compositions and star formation histories (SFHs) of dwarf spheroidal galaxies (dSphs). Recently, wide-field imaging photometry and multi-object spectroscopy have provided a large amount of data. We therefore develop a chemical evolution model based on an SFH derived from photometric observations, and estimate a metallicity distribution function (MDF) for comparison with spectroscopic observations. With this new model we calculate the chemical evolution of four dSphs (Fornax, Sculptor, Leo II, Sextans), and find that a delay time of 0.1 Gyr for type Ia SNe is too short to explain the observed [alpha/Fe] vs. [Fe/H] diagrams.
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography such as diffuse optical tomography. To this end, this paper constructs a space-time coupling model of the radiative transport equation (RTE) with the photon diffusion equation. In the coupling model, the space-time regime of photon migration is divided into ballistic and diffusive regimes, with interaction between the two regimes, to improve the accuracy of the results and the efficiency of the computation. The coupling model provides an accurate description of photon migration in various turbid media over a wide range of optical properties, and reduces the computational load compared with a full calculation of the RTE.
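In the diffusive regime that the coupling model hands off to, the standard infinite-medium point-source solution of the photon diffusion equation gives the familiar exponential decay of fluence. A minimal sketch with assumed tissue-like coefficients (not values from the paper):

```python
import math

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient of the photon diffusion equation,
    mu_eff = sqrt(3*mu_a*(mu_a + mu_s')), governing deep-tissue decay.
    Valid in the diffusive regime, i.e. mu_s' >> mu_a."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def fluence(r, mu_a, mu_s_prime):
    """Steady-state fluence of an isotropic point source in an infinite
    turbid medium under the diffusion approximation."""
    d_coeff = 1.0 / (3.0 * (mu_a + mu_s_prime))
    return math.exp(-mu_eff(mu_a, mu_s_prime) * r) / (4.0 * math.pi * d_coeff * r)

# Assumed tissue-like coefficients, per cm: absorption 0.1, reduced scattering 10.
print(round(mu_eff(0.1, 10.0), 3))
```

Close to the source, where photons are still ballistic, this solution fails; that near-source region is exactly where the RTE side of the coupling is needed.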
NASA Astrophysics Data System (ADS)
Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.
2015-01-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculations of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft, and converts the cloud density rate from the ECCM into cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities in the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. The cloud
NASA Astrophysics Data System (ADS)
Goorley, T.; Kiger, W. S.; Zamenhof, R.
As Boron Neutron Capture Therapy (BNCT) clinical trials are initiated in more countries, new treatment planning software programs are being developed to calculate dose distributions in patient specific models. A reference suite of test problems, i.e., head phantom irradiations and resulting depth-dose curves, would allow quantitative comparison of the treatment planning software. This paper presents sets of central axis depth vs. dose curves calculated with the Monte Carlo radiation transport code MCNP4B for five different representations of the Snyder head phantom. The first is a multi-shell analytic ellipsoidal representation, and the remaining four are voxelized representations with cube edge lengths of 16, 10, 8 and 4 mm. For these calculations, 10 cm diameter monoenergetic and monodirectional neutron and photon beams were incident along the central axes of the models. Individual beams of 0.0253 eV, 1, 2, 10, 100 and 1000 keV neutrons, and 0.2, 0.5, 1, 2, 5, and 10 MeV photons were simulated to high statistical convergence, with statistical error less than 1% in the center of the model. A "generic" epithermal neutron beam, with 1% fast flux contamination and 10% thermal flux contamination, similar to those proposed for BNCT treatments, was also simulated with all five models. Computations for both of the smaller sized voxel models produced thermal neutron, fast neutron, and gamma dose rates within 4% of those from the analytical representation. It is proposed that these data sets be used by the BNCT community for the verification of existing and new BNCT treatment planning software.
Band-gap shrinkage calculations and analytic model for strained bulk InGaAsP
NASA Astrophysics Data System (ADS)
Connelly, Michael J.
2015-02-01
Band-gap shrinkage is an important effect in semiconductor lasers and optical amplifiers. In the former it leads to an increase in the lasing wavelength, and in the latter to an increase in the gain-peak wavelength, as the bias current is increased. The most common model used for carrier-density dependent band-gap shrinkage is a cube-root dependency on carrier density, which is strictly true only for high carrier densities and low temperatures. This simple model involves a material constant which is treated as a fitting parameter. Strained InGaAsP material is commonly used to fabricate polarization-insensitive semiconductor optical amplifiers (SOAs). Most mathematical models for SOAs use the cube-root band-gap shrinkage model. However, because SOAs are often operated over a wide range of drive currents and input optical powers, leading to large variations in carrier density along the amplifier length, for improved model accuracy it is preferable to use band-gap shrinkage calculated from knowledge of the material bandstructure. In this letter the carrier-density dependent band-gap shrinkage for strained InGaAsP is calculated by using detailed non-parabolic conduction and valence band models. The shrinkage dependency on temperature and both tensile and compressive strain is investigated and compared to the cube-root model, from which it shows significant deviation. A simple power model, showing an almost square-root dependency, is derived for carrier densities in the range usually encountered in InGaAsP laser diodes and SOAs.
GPU-based ultra-fast dose calculation using a finite size pencil beam model
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.
2009-10-01
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
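The quantity whose computation is being accelerated above is a linear superposition: each voxel's dose is a weighted sum of per-beamlet dose deposition coefficients. A toy sketch of that structure (the coefficient values are hypothetical; a real FSPB kernel fills the matrix):

```python
def dose_from_beamlets(coeffs, weights):
    """Voxel doses as the weighted superposition of per-beamlet dose
    deposition coefficients, d_i = sum_j c[i][j] * w[j] -- the linear
    structure that makes the calculation data-parallel (one voxel per
    thread in a GPU implementation)."""
    return [sum(c * w for c, w in zip(row, weights)) for row in coeffs]

# Toy 3-voxel, 2-beamlet coefficient matrix (hypothetical numbers).
C = [[1.0, 0.2],
     [0.5, 0.5],
     [0.1, 0.9]]
w = [2.0, 1.0]
d = dose_from_beamlets(C, w)
print(d)
```

IMRT plan optimization repeatedly re-evaluates this product for candidate beamlet weights `w`, which is why sub-second coefficient calculation matters for online re-planning.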
Obtaining model parameters for real materials from ab-initio calculations: Heisenberg exchange
NASA Astrophysics Data System (ADS)
Korotin, Dmitry; Mazurenko, Vladimir; Anisimov, Vladimir; Streltsov, Sergey
An approach to compute exchange parameters of the Heisenberg model in plane-wave based methods is presented. This calculation scheme is based on the Green's function method and Wannier function projection technique. It was implemented in the framework of the pseudopotential method and tested on such materials as NiO, FeO, Li2MnO3, and KCuF3. The obtained exchange constants are in a good agreement with both the total energy calculations and experimental estimations for NiO and KCuF3. In the case of FeO our calculations explain the pressure dependence of the Néel temperature. Li2MnO3 turns out to be a Slater insulator with antiferromagnetic nearest neighbor exchange defined by the spin splitting. The proposed approach provides a unique way to analyze magnetic interactions, since it allows one to calculate orbital contributions to the total exchange coupling and study the mechanism of the exchange coupling. The work was supported by a grant from the Russian Scientific Foundation (Project No. 14-22-00004).
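The "total energy calculations" the Green's-function results are compared against map energy differences of spin configurations onto the Heisenberg form. A minimal sketch for a single pair of spins, with hypothetical DFT energies (the sign convention is an assumption; conventions for J vary):

```python
def exchange_from_energies(e_fm, e_afm, spin):
    """Nearest-neighbour exchange J for a spin pair, mapped from total
    energies of ferro- and antiferromagnetic configurations assuming
    the classical Heisenberg form H = J*S1.S2, so E_FM - E_AFM = 2*J*S^2.
    Convention here: J > 0 means AFM alignment is favoured."""
    return (e_fm - e_afm) / (2.0 * spin * spin)

# Hypothetical total energies (eV) for an S = 1 transition-metal pair;
# the AFM state lying lower gives an antiferromagnetic J of ~10 meV.
J = exchange_from_energies(e_fm=-10.00, e_afm=-10.02, spin=1.0)
print(round(J * 1000.0, 1))  # J in meV
```

The Green's-function approach of the abstract goes beyond this by resolving orbital contributions to J, which a bare energy difference cannot do.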
Three-body calculations for the K ‑ pp system within potential models
NASA Astrophysics Data System (ADS)
Kezerashvili, R. Ya; Tsiklauri, S. M.; Filikhin, I.; Suslov, V. M.; Vlahovic, B.
2016-06-01
We present three-body nonrelativistic calculations within the framework of a potential model for the kaonic cluster K ‑ pp using two methods: the method of hyperspherical harmonics in the momentum representation and the method of Faddeev equations in configuration space. To perform numerical calculations, different NN and antikaon–nucleon interactions are applied. The results of the calculations for the ground-state energy for the K ‑ pp system obtained by both methods are in reasonable agreement. Although the ground-state energy is not sensitive to the pp interaction, it shows very strong dependence on the K ‑ p potential. We show that the dominant clustering of the K ‑ pp system in the configuration Λ(1405) + p allows us to calculate the binding energy to good accuracy within a simple cluster approach for the differential Faddeev equations. The theoretical discrepancies in the binding energy and width for the K ‑ pp system related to the different pp and K ‑ p interactions are addressed.
VISA-II sensitivity study of code calculations: Input and analytical model parameters
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.
1986-11-01
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of the calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. The flaw sensitivities examined were flaw shape, through-wall position, and inspection. Material property sensitivities included the assumed distributions of copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, the flaw jump distance for each computer simulation, and the added error in estimating irradiated properties caused by the trend curve correlation error.
Sharp, K A
1998-01-01
A description is given of a method to calculate the electron transfer reorganization energy (lambda) in proteins using the linear or nonlinear Poisson-Boltzmann (PB) equation. Finite difference solutions to the linear PB equation are then used to calculate lambda for intramolecular electron transfer reactions in the photosynthetic reaction center from Rhodopseudomonas viridis and the ruthenated heme proteins cytochrome c, myoglobin, and cytochrome b and for intermolecular electron transfer between two cytochrome c molecules. The overall agreement with experiment is good considering both the experimental and computational difficulties in estimating lambda. The calculations show that acceptor/donor separation and position of the cofactors with respect to the protein/solvent boundary are equally important and, along with the overall polarizability of the protein, are the major determinants of lambda. In agreement with previous studies, the calculations show that the protein provides a low reorganization environment for electron transfer. Agreement with experiment is best if the protein polarizability is modeled with a low (<8) average effective dielectric constant. The effect of buried waters on the reorganization energy of the photosynthetic reaction center was examined and found to make a contribution ranging from 0.05 eV to 0.27 eV, depending on the donor/acceptor pair. PMID:9512022
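The dependence of lambda on donor/acceptor separation and dielectric response that the PB calculations quantify is captured, in its simplest continuum limit, by the classical Marcus two-sphere formula. A sketch with hypothetical cofactor geometry (the PB method of the abstract generalizes this to realistic protein shapes):

```python
COULOMB_EV_ANGSTROM = 14.4  # e^2 / (4*pi*eps0), in eV*Angstrom

def marcus_lambda(a_donor, a_acceptor, r_da, eps_op, eps_s, dq=1.0):
    """Classical Marcus two-sphere continuum estimate of the outer-sphere
    reorganization energy (eV): donor/acceptor radii and separation in
    Angstrom, optical and static dielectric constants, dq = transferred
    charge in units of e."""
    geometry = 1.0 / (2.0 * a_donor) + 1.0 / (2.0 * a_acceptor) - 1.0 / r_da
    pekar = 1.0 / eps_op - 1.0 / eps_s
    return COULOMB_EV_ANGSTROM * dq * dq * geometry * pekar

# Hypothetical geometry: 3 A cofactor radii, 10 A apart, water-like solvent.
lam = marcus_lambda(3.0, 3.0, 10.0, eps_op=2.0, eps_s=78.0)
print(round(lam, 2))
```

Lowering the static dielectric constant shrinks the Pekar factor and hence lambda, consistent with the abstract's finding that a low-dielectric protein interior provides a low-reorganization environment.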
Experimental Evaluation of The Accuracy of Model Calculated Emission Data For A Motorway
NASA Astrophysics Data System (ADS)
Corsmeier, U.; Fiedler, F.; Kohler, M.; Kalthoff, N.; Vogel, B.; Vogel, H.
Precise emission data are of essential importance for all kinds of atmospheric dispersion models as well as for climate change modelling on every scale. However, in many cases the anthropogenic emissions from traffic, industry and households on one hand and biogenic emissions on the other hand are calculated with unknown certainty. Traffic emissions are usually calculated using emission factors based on idealized driving cycles and statistical data on road traffic. Up to now, calculated emission data for roadways have been compared with real-world emissions in only a few investigations (Ingalls, 1989; Staehelin et al., 1995; Weingartner et al., 1997; Vogel et al., 2000). In particular, no comparisons have been performed for the particulate matter emissions of motorways. In order to check the quality of simulated gaseous and particulate traffic emissions, the project BAB II was designed by the Institut für Meteorologie und Klimaforschung (IMK) at the Forschungszentrum Karlsruhe, Germany, to determine real-world gaseous and particulate traffic emissions by measuring the concentration profiles and profiles of wind at both sides of a highly frequented motorway. In addition, parameters describing the traffic situation (traffic density, driving speed, motor type, type of catalyst) were obtained. It is therefore possible to compare measured and calculated emission data for NOx, CO, individual volatile organic compounds, and size-resolved particulate matter. On May 11, 2001, between 11:00 and 18:00 CEST, the mean wind direction was northeast, perpendicular to the motorway. The resulting mean vertical profiles of CO, NOx and particulate matter during that period are clearly influenced by the road traffic emissions. The plume originating from the traffic emissions is found on the south side of the motorway and reaches a height of 25 m to 30 m. The difference between windward and lee values in the lowest level (8 m) is on average 50 ppb for CO and 15 ppb for NOx. Depending
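Real-world emission rates can be estimated from such windward/lee profile pairs by a mass balance: integrating the concentration excess times the road-perpendicular wind over the plume depth. A sketch with hypothetical profile values (not the measured BAB II data, and with concentrations already converted to mass units):

```python
def line_source_emission(z, delta_c, u_perp):
    """Mass-balance estimate of the emission rate per unit road length:
    trapezoidal integration of (leeward minus windward concentration)
    times the road-perpendicular wind component over the plume depth.
    With delta_c in ug/m^3 and wind in m/s the result is in ug/(m*s)."""
    q = 0.0
    for k in range(len(z) - 1):
        dz = z[k + 1] - z[k]
        q += 0.5 * (delta_c[k] * u_perp[k] + delta_c[k + 1] * u_perp[k + 1]) * dz
    return q

# Hypothetical CO excess profile decaying to zero at the 30 m plume top.
z = [8.0, 15.0, 22.0, 30.0]      # measurement heights, m
dc = [60.0, 35.0, 12.0, 0.0]     # leeward-minus-windward CO, ug/m^3
u = [2.0, 2.5, 3.0, 3.5]         # perpendicular wind component, m/s
Q = line_source_emission(z, dc, u)
print(round(Q, 1))
```

Dividing such an estimate by the observed traffic density yields a per-vehicle emission factor that can be compared with the model-calculated one.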
A comparison of three radiation models for the calculation of nozzle arcs
NASA Astrophysics Data System (ADS)
Dixon, C. M.; Yan, J. D.; Fang, M. T. C.
2004-12-01
Three radiation models, the semi-empirical model based on net emission coefficients (Zhang et al 1987 J. Phys. D: Appl. Phys. 20 386-79), the five-band P1 model (Eby et al 1998 J. Phys. D: Appl. Phys. 31 1578-88), and the method of partial characteristics (Aubrecht and Lowke 1994 J. Phys. D: Appl. Phys. 27 2066-73, Sevast'yanenko 1979 J. Eng. Phys. 36 138-48), are used to calculate the radiation transfer in an SF6 nozzle arc. The temperature distributions computed by the three models are compared with the measurements of Leseberg and Pietsch (1981 Proc. 4th Int. Symp. on Switching Arc Phenomena (Lodz, Poland) pp 236-40) and Leseberg (1982 PhD Thesis RWTH Aachen, Germany). It has been found that all three models give similar distributions of radiation loss per unit time and volume. For arcs burning in axially dominated flow, such as arcs in nozzle flow, the semi-empirical model and the P1 model give accurate predictions when compared with experimental results. The prediction by the method of partial characteristics is poorest. The computational cost is the lowest for the semi-empirical model.
Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
2013-01-01
The Menter Shear Stress Transport (SST) k-ω turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter's baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows, including those with small separations. In shock-boundary layer interaction regions, the SST model produces separations that are too large, while the BSL model is on the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally, and experimental data are also considered to justify raising the value of a1 used for shock separated flows.
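The limiter described above enters through the SST eddy-viscosity definition, commonly written as nu_t = a1*k / max(a1*omega, S*F2). A sketch showing why raising a1 matters only where the limiter is active (the flow-state numbers are hypothetical):

```python
def sst_eddy_viscosity(k, omega, strain_rate, F2, a1=0.31):
    """Menter SST eddy viscosity with the shear-stress limiter:
    nu_t = a1*k / max(a1*omega, S*F2). When S*F2 is small the expression
    reduces to the plain k-omega value k/omega; when the limiter is
    active, the turbulent shear stress is capped at a1*k."""
    return a1 * k / max(a1 * omega, strain_rate * F2)

# Strongly strained region (hypothetical values): the limiter is active,
# so raising a1 directly raises the allowed turbulent shear stress.
nu_low = sst_eddy_viscosity(1.0, 100.0, 500.0, 1.0, a1=0.31)
nu_high = sst_eddy_viscosity(1.0, 100.0, 500.0, 1.0, a1=0.355)
print(nu_low < nu_high)
```

Since shock-induced separations sit in exactly the limiter-active regime, moving a1 from 0.31 toward 0.355 increases the turbulent shear stress there and shrinks the over-predicted separation.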
An anatomically realistic lung model for Monte Carlo-based dose calculations
Liang Liang; Larsen, Edward W.; Chetty, Indrin J.
2007-03-15
Treatment planning for disease sites with large variations of electron density in neighboring tissues requires an accurate description of the geometry. This self-evident statement is especially true for the lung, a highly complex organ having structures with a wide range of sizes that range from about 10^-4 to 1 cm. In treatment planning, the lung is commonly modeled by a voxelized geometry obtained using computed tomography (CT) data at various resolutions. The simplest such model, which is often used for QA and validation work, is the atomic mix or mean density model, in which the entire lung is homogenized and given a mean (volume-averaged) density. The purpose of this paper is (i) to describe a new heterogeneous random lung model, which is based on morphological data of the human lung, and (ii) use this model to assess the differences in dose calculations between an actual lung (as represented by our model) and a mean density (homogenized) lung. Eventually, we plan to use the random lung model to assess the accuracy of CT-based treatment plans of the lung. For this paper, we have used Monte Carlo methods to make accurate comparisons between dose calculations for the random lung model and the mean density model. For four realizations of the random lung model, we used a single photon beam, with two different energies (6 and 18 MV) and four field sizes (1x1, 5x5, 10x10, and 20x20 cm^2). We found a maximum difference of 34% of D_max with the 1x1, 18 MV beam along the central axis (CAX). A "shadow" region distal to the lung, with dose reduction up to 7% of D_max, exists for the same realization. The dose perturbations decrease for larger field sizes, but the magnitude of the differences in the shadow region is nearly independent of the field size. We also observe that, compared to the mean density model, the random structures inside the heterogeneous lung can alter the shape of the isodose lines, leading to a broadening or shrinking of the
NASA Astrophysics Data System (ADS)
Wu, Qiong; Li, Shu-Suo; Ma, Yue; Gong, Sheng-Kai
2012-10-01
The diffusion coefficients of several alloying elements (Al, Mo, Co, Ta, Ru, W, Cr, Re) in Ni are directly calculated using the five-frequency model and first-principles density functional theory. The correlation factors provided by the five-frequency model are explicitly calculated. The calculated diffusion coefficients show excellent agreement with the available experimental data. Both the diffusion pre-factor (D0) and the activation energy (Q) of impurity diffusion are obtained. The diffusion coefficients above 700 K are sorted in the following order: DAl > DCr > DCo > DTa > DMo > DRu > DW > DRe. It is found that there is a positive correlation between the atomic radius of the solute and the jump energy of Ni that results in the rotation of the solute-vacancy pair (E1). The value of E2 - E1 (where E2 is the solute diffusion energy) and the correlation factor also show a positive correlation. The larger atoms in the same series have lower diffusion activation energies and larger diffusion coefficients.
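The reported ordering of diffusivities follows directly from the Arrhenius form D(T) = D0 exp(-Q / kB T): at fixed temperature, a lower activation energy gives a larger diffusion coefficient. A minimal sketch of that relation, using hypothetical D0 and Q values rather than the paper's fitted ones:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K


def diffusion_coefficient(d0_m2s, q_ev, temperature_k):
    """Arrhenius relation D(T) = D0 * exp(-Q / (kB * T))."""
    return d0_m2s * math.exp(-q_ev / (KB_EV * temperature_k))


# Illustrative (hypothetical) pre-factors and activation energies,
# chosen only to mimic the fast-solute/slow-solute contrast:
solutes = {
    "Al": (1.0e-5, 2.7),  # (D0 in m^2/s, Q in eV)
    "Re": (1.0e-5, 3.1),
}

for name, (d0, q) in solutes.items():
    print(name, diffusion_coefficient(d0, q, 1300.0))
```

With equal pre-factors, the solute with the smaller Q (here "Al") diffuses faster at any temperature, matching the qualitative trend in the abstract.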
NASA Astrophysics Data System (ADS)
Espel, Federico Puente
The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo-based coupled system. The presented research comprises the development of models to include thermal-hydraulic feedback in the Monte Carlo method and of speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly, and in the best case, over small spatial zones consisting of a pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous-energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method requires vast computational time. Interest in Monte Carlo methods has increased with the improving capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
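The γ-index analysis used above combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail metric per point. A minimal 1D sketch of a global γ calculation follows; the profiles are illustrative, not the paper's beam data, and clinical tools operate on interpolated 2D/3D dose grids:

```python
import numpy as np


def gamma_index_1d(ref_dose, eval_dose, positions, dose_tol, dist_tol):
    """Global 1D gamma: for each reference point, minimize over all
    evaluated points of sqrt((dd/dose_tol)^2 + (dx/dist_tol)^2).
    dose_tol is a fraction of the reference maximum (global normalization);
    dist_tol is in the same units as positions."""
    norm = dose_tol * ref_dose.max()
    gamma = np.empty_like(ref_dose, dtype=float)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dd = (eval_dose - d_r) / norm        # normalized dose differences
        dx = (positions - x_r) / dist_tol    # normalized distances
        gamma[i] = np.sqrt(dd**2 + dx**2).min()
    return gamma  # a point passes where gamma <= 1


x = np.linspace(0.0, 10.0, 101)           # positions in mm
ref = np.exp(-((x - 5.0) / 2.0) ** 2)     # reference profile (illustrative)
ev = ref * 1.01                           # evaluated profile: 1% dose scaling
g = gamma_index_1d(ref, ev, x, dose_tol=0.03, dist_tol=3.0)
print((g <= 1.0).mean())                  # pass rate
```

With the 3%/3 mm criterion, a uniform 1% dose scaling passes at every point, consistent with how small systematic differences survive a γ analysis.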
NASA Astrophysics Data System (ADS)
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2011-06-01
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiat. Meas. 41, 1080 (2006), doi:10.1016/j.radmeas.2006.07.013]. The JAM interaction model agrees with the HARP experiment [HARP Collaboration, Astropart. Phys. 30, 124 (2008), doi:10.1016/j.astropartphys.2008.07.007] a little better than DPMJET-III [S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of the atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007); M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007)]. Some improvements in the calculation of the atmospheric neutrino flux are also reported.
Tosso, Rodrigo D; Andujar, Sebastian A; Gutierrez, Lucas; Angelina, Emilio; Rodríguez, Ricaurte; Nogueras, Manuel; Baldoni, Héctor; Suvire, Fernando D; Cobo, Justo; Enriz, Ricardo D
2013-08-26
A molecular modeling study on dihydrofolate reductase (DHFR) inhibitors was carried out. By combining molecular dynamics simulations with semiempirical (PM6), ab initio, and density functional theory (DFT) calculations, a simple and generally applicable procedure to evaluate the binding energies of DHFR inhibitors interacting with the human enzyme is reported here, providing a clear picture of the binding interactions of these ligands from both structural and energetic viewpoints. A reduced model for the binding pocket was used. This approach allows us to perform more accurate quantum mechanical calculations as well as to obtain a detailed electronic analysis using the quantum theory of atoms in molecules (QTAIM) technique. Thus, molecular aspects of the binding interactions between inhibitors and DHFR are discussed in detail. A significant correlation between binding energies obtained from DFT calculations and experimental IC₅₀ values was obtained, predicting with acceptable qualitative accuracy the potential inhibitory effect of nonsynthesized compounds. Such a correlation was experimentally corroborated by synthesizing and testing two new inhibitors reported in this paper. PMID:23834278
Linden, D.S.
1993-05-01
The traditional two-fluid model of superconducting conductivity was modified to make it accurate, while remaining fast, for designing and simulating microwave devices. The modification reflects the BCS coherence effects in the conductivity of a superconductor, and is incorporated through the ratio of normal to superconducting electrons. This modified ratio is a simple analytical expression which depends on frequency, temperature and material parameters. This modified two-fluid model allows accurate and rapid calculation of the microwave surface impedance of a superconductor in the clean and dirty limits and in the weak- and strong-coupled regimes. The model compares well with surface resistance data for Nb and provides insight into Nb3Sn and Y1Ba2Cu3O(7-delta). Numerical calculations with the modified two-fluid model are an order of magnitude faster than the quasi-classical program by Zimmermann (1), and two to five orders of magnitude faster than Halbritter's BCS program (2) for surface resistance.
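The surface impedance in a two-fluid picture can be sketched as below; the modification described in the abstract replaces the normal-to-superconducting electron ratio with a BCS-informed, frequency- and temperature-dependent expression, which is not reproduced here. All numbers are hypothetical order-of-magnitude placeholders, not fitted Nb parameters:

```python
import cmath
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m


def surface_impedance(freq_hz, sigma_n, x_n, lambda_london):
    """Two-fluid surface impedance Zs = sqrt(j*w*mu0 / sigma_eff), where
    sigma_eff = x_n*sigma_n - j/(w*mu0*lambda_L^2). x_n is the
    normal-fluid fraction (the ratio the modified model adjusts)."""
    w = 2.0 * math.pi * freq_hz
    sigma_eff = x_n * sigma_n - 1j / (w * MU0 * lambda_london**2)
    return cmath.sqrt(1j * w * MU0 / sigma_eff)


# Hypothetical inputs: 10 GHz, sigma_n ~ 2e8 S/m, 10% normal fraction,
# London penetration depth 40 nm
zs = surface_impedance(1.0e10, 2.0e8, 0.1, 40.0e-9)
print(zs.real, zs.imag)  # surface resistance Rs and reactance Xs (ohm)
```

The sketch reproduces the expected qualitative behavior: the surface resistance is small and positive, and much smaller than the surface reactance, as it should be for a superconductor well below its transition.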
Development of Aerosol Models for Radiative Flux Calculations at ARM Sites
Ogren, John A.; Dutton, Ellsworth G.; McComiskey, Allison C.
2006-09-30
The direct radiative forcing (DRF) of aerosols, the change in net radiative flux due to aerosols in non-cloudy conditions, is an essential quantity for understanding the human impact on climate change. Our work has addressed several key issues that determine the accuracy, and identify the uncertainty, with which aerosol DRF can be modeled. These issues include the accuracy of several radiative transfer models when compared to measurements and to each other in a highly controlled closure study using data from the ARM 2003 Aerosol IOP. The primary focus of our work has been to determine an accurate approach to assigning aerosol properties appropriate for modeling over averaged periods of time and space that represent the observed regional variability of these properties. We have also undertaken a comprehensive analysis of the aerosol properties that contribute most to uncertainty in modeling aerosol DRF, and under what conditions they contribute the most uncertainty. Quantification of these issues enables the community to better state accuracies of radiative forcing calculations and to concentrate efforts in areas that will decrease uncertainties in these calculations in the future.
Evaluation of Generalized Born Model Accuracy for Absolute Binding Free Energy Calculations.
Zeller, Fabian; Zacharias, Martin
2014-06-27
Generalized Born (GB) implicit solvent models are widely used in molecular dynamics simulations to evaluate the interactions of biomolecular complexes. The continuum treatment of the solvent results in significant computational savings in comparison to an explicit solvent representation. It is, however, not clear how accurately the GB approach reproduces the absolute free energies of biomolecular binding. On the basis of induced dissociation by means of umbrella sampling simulations, the absolute binding free energies of small proline-rich peptide ligands and a protein receptor were calculated. Comparative simulations according to the same protocol were performed by employing an explicit solvent model and various GB-type implicit solvent models in combination with a nonpolar surface tension term. The peptide ligands differed in a key residue at the peptide-protein interface, including either a nonpolar, a neutral polar, a positively charged, or a negatively charged group. For the peptides with a neutral polar or nonpolar interface residue, very good agreement between the explicit solvent and GB implicit solvent results was found. Deviations in the main separation free energy contributions are smaller than 1 kcal/mol. In contrast, for peptides with a charged interface residue, significant deviations of 2-4 kcal/mol were observed. The results indicate that recent GB models can compete with explicit solvent representations in total binding free energy calculations as long as no charged residues are present at the binding interface. PMID:24941018
Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model
NASA Technical Reports Server (NTRS)
Kantelis, J. P.; Widnall, S. E.
1986-01-01
A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straightforward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactory results, where the vortex elements move in a chaotic manner. The difficulty is assumed to be due to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet with good results.
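The self-induced motion of such discrete vortex elements follows from the Biot-Savart law; in two dimensions each point vortex contributes a velocity tangential to the circle around it. A minimal sketch follows; the regularizing `core` parameter is an assumption added here to sidestep the singular self-induction the abstract discusses, not part of the paper's panel/filament scheme:

```python
import numpy as np


def induced_velocity(targets, vortices, gammas, core=1.0e-6):
    """2D Biot-Savart: velocity (u, v) at each target point induced by
    point vortices located at `vortices` with circulations `gammas`."""
    u = np.zeros(len(targets))
    v = np.zeros(len(targets))
    for (xv, yv), gamma in zip(vortices, gammas):
        dx = targets[:, 0] - xv
        dy = targets[:, 1] - yv
        r2 = dx**2 + dy**2 + core**2  # regularized squared distance
        u += -gamma * dy / (2.0 * np.pi * r2)
        v += gamma * dx / (2.0 * np.pi * r2)
    return u, v


# A unit-circulation vortex at the origin induces v = 1/(2*pi) at (1, 0):
u, v = induced_velocity(np.array([[1.0, 0.0]]), [(0.0, 0.0)], [1.0])
print(u[0], v[0])
```

Time-stepping positions with these velocities is the basic discrete-vortex simulation; the chaotic edge behavior described in the abstract is what the single-vortex spiral model is introduced to suppress.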
Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.
1999-01-01
Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1995-11-01
During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessment of the risk to the public from such accidents. A simplified PC-based model was developed that predicts the time-integrated air concentration of each radionuclide at any location downwind of the release as a function of time-integrated source strength, using the Gaussian plume model. The solution procedure involves direct analytic integration of the air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on the doses. This paper presents aspects of the model that would be of interest to the prediction of environmental flows and their public consequences.
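The Gaussian plume concentration underlying such a model can be sketched as below, with ground reflection included via a mirror source. The dispersion parameters sigma_y and sigma_z would in practice depend on downwind distance and atmospheric stability class, and every number here is illustrative, not taken from the described code:

```python
import math


def plume_concentration(q_rel, u_wind, y, z, h_eff, sigma_y, sigma_z):
    """Gaussian plume concentration (mass per unit volume) for a
    continuous release rate q_rel at effective height h_eff, wind speed
    u_wind, crosswind offset y, and receptor height z. Ground reflection
    is modeled with an image source at -h_eff."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h_eff) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h_eff) / sigma_z) ** 2))  # mirror source
    return q_rel * lateral * vertical / (2.0 * math.pi * u_wind * sigma_y * sigma_z)


# Illustrative case: 1 kg/s release at 50 m effective height, 5 m/s wind,
# ground-level centerline receptor with assumed sigmas for that distance
c = plume_concentration(1.0, 5.0, y=0.0, z=0.0, h_eff=50.0,
                        sigma_y=80.0, sigma_z=40.0)
print(c)  # kg/m^3
```

Integrating this expression over the release duration, as the abstract describes, yields the time-integrated air concentration from which doses are computed.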
Field evaluation of a two-dimensional hydrodynamic model near boulders for habitat calculation
Waddle, Terry
2010-01-01
Two-dimensional hydrodynamic models are now widely used in aquatic habitat studies. To test the sensitivity of calculated habitat outcomes to limitations of such a model and of typical field data, bathymetry, depth and velocity data were collected for three discharges in the vicinity of two large boulders in the South Platte River (Colorado) and used in the River2D model. Simulated depth and velocity were compared with observed values at 204 locations and the differences in habitat numbers produced by observed and simulated conditions were calculated. The bulk of the differences between simulated and observed depth and velocity values were found to lie within the likely error of measurement. However, the effect of flow simulation outliers on potential habitat outcomes must be considered when using 2D models for habitat simulation. Furthermore, the shape of the habitat suitability relation can influence the effects of simulation errors. Habitat relations with steep slopes in the velocity ranges found in similar study areas are expected to be sensitive to the magnitude of error found here. Comparison of habitat values derived from simulated and observed depth and velocity revealed a small tendency to under-predict habitat values.
Direct calculation of ice homogeneous nucleation rate for a molecular model of water
Haji-Akbari, Amir; Debenedetti, Pablo G.
2015-01-01
Ice formation is ubiquitous in nature, with important consequences in a variety of environments, including biological cells, soil, aircraft, transportation infrastructure, and atmospheric clouds. However, its intrinsic kinetics and microscopic mechanism are difficult to discern with current experiments. Molecular simulations of ice nucleation are also challenging, and direct rate calculations have only been performed for coarse-grained models of water. For molecular models, only indirect estimates have been obtained, e.g., by assuming the validity of classical nucleation theory. We use a path sampling approach to perform, to our knowledge, the first direct rate calculation of homogeneous nucleation of ice in a molecular model of water. We use TIP4P/Ice, the most accurate among existing molecular models for studying ice polymorphs. By using a novel topological approach to distinguish different polymorphs, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice in the early stages of nucleation. In this competition, the cubic polymorph takes over because the addition of new topological structural motifs consistent with cubic ice leads to the formation of more compact crystallites. This is not true for topological hexagonal motifs, which give rise to elongated crystallites that are not able to grow. This leads to transition states that are rich in cubic ice, and not the thermodynamically stable hexagonal polymorph. This mechanism provides a molecular explanation for the earlier experimental and computational observations of the preference for cubic ice in the literature. PMID:26240318
A Comparison of Model Calculation and Measurement of Absorbed Dose for Proton Irradiation. Chapter 5
NASA Technical Reports Server (NTRS)
Zapp, N.; Semones, E.; Saganti, P.; Cucinotta, F.
2003-01-01
With the increase in the amount of time spent EVA that is necessary to complete the construction and subsequent maintenance of ISS, it will become increasingly important for ground support personnel to accurately characterize the radiation exposures incurred by EVA crewmembers. Since exposure measurements cannot be taken within the organs of interest, it is necessary to estimate these exposures by calculation. To validate the methods and tools used to develop these estimates, it is necessary to model experiments performed in a controlled environment. This work is such an effort. A human phantom was outfitted with detector equipment and then placed in American EMU and Orlan-M EVA space suits. The suited phantom was irradiated at the LLUPTF with proton beams of known energies. Absorbed dose measurements were made by the spaceflight operational dosimetrist from JSC at multiple sites in the skin, eye, brain, stomach, and small intestine locations in the phantom. These exposures are then modeled using the BRYNTRN radiation transport code developed at the NASA Langley Research Center, and the CAM (computerized anatomical male) human geometry model of Billings and Yucker. Comparisons of absorbed dose calculations with measurements show excellent agreement. This suggests that there is reason to be confident in the ability of both the transport code and the human body model to estimate proton exposure in ground-based laboratory experiments.
Calculations of inflaton decays and reheating: with applications to no-scale inflation models
Ellis, John; Garcia, Marcos A.G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2015-07-30
We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, w, during the epoch of inflaton decay, the reheating temperature, T{sub reh}, and the number of inflationary e-folds, N{sub ∗}, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index n{sub s} and the tensor-to-scalar perturbation ratio r, converting them into constraints on N{sub ∗}, the inflaton decay rate and other parameters of specific no-scale inflationary models.
Model-based dose calculations for {sup 125}I lung brachytherapy
Sutherland, J. G. H.; Furutani, K. M.; Garces, Y. I.; Thomson, R. M.
2012-07-15
Purpose: Model-based dose calculations (MBDCs) are performed using patient computed tomography (CT) data for patients treated with intraoperative {sup 125}I lung brachytherapy at the Mayo Clinic Rochester. Various metallic artifact correction and tissue assignment schemes are considered and their effects on dose distributions are studied. Dose distributions are compared to those calculated under TG-43 assumptions. Methods: Dose distributions for six patients are calculated using phantoms derived from patient CT data and the EGSnrc user-code BrachyDose. {sup 125}I (GE Healthcare/Oncura model 6711) seeds are fully modeled. Four metallic artifact correction schemes are applied to the CT data phantoms: (1) no correction, (2) a filtered back-projection on a modified virtual sinogram, (3) the reassignment of CT numbers above a threshold in the vicinity of the seeds, and (4) a combination of (2) and (3). Tissue assignment is based on voxel CT number and mass density is assigned using a CT number to mass density calibration. Three tissue assignment schemes with varying levels of detail (20, 11, and 5 tissues) are applied to metallic artifact corrected phantoms. Simulations are also performed under TG-43 assumptions, i.e., seeds in homogeneous water with no interseed attenuation. Results: Significant dose differences (up to 40% for D{sub 90}) are observed between uncorrected and metallic artifact corrected phantoms. For phantoms created with metallic artifact correction schemes (3) and (4), dose volume metrics are generally in good agreement (less than 2% differences for all patients) although there are significant local dose differences. The application of the three tissue assignment schemes results in differences of up to 8% for D{sub 90}; these differences vary between patients. Significant dose differences are seen between fully modeled and TG-43 calculations with TG-43 underestimating the dose (up to 36% in D{sub 90}) for larger volumes containing higher proportions of
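The D{sub 90} metric compared throughout is the minimum dose received by the hottest 90% of the structure volume, i.e. the 10th percentile of the voxel-dose distribution. A minimal sketch with synthetic doses; the distribution and its parameters are hypothetical, not the paper's patient data:

```python
import numpy as np


def d_metric(voxel_doses, percent_volume):
    """D_x: minimum dose received by the hottest x% of the structure,
    equal to the (100 - x)th percentile of the voxel-dose distribution."""
    return float(np.percentile(voxel_doses, 100.0 - percent_volume))


# Synthetic voxel doses for one structure (Gy), purely illustrative:
rng = np.random.default_rng(0)
target = rng.normal(loc=140.0, scale=10.0, size=10000)

d90 = d_metric(target, 90.0)
print(round(d90, 1))
```

Comparing D90 computed from two dose grids of the same structure (e.g. MBDC versus TG-43) is then a single subtraction, which is how the percentage differences in the abstract are formed.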
NASA Astrophysics Data System (ADS)
Ueki, K.; Iwamori, H.
2014-12-01
Partial melting of mantle peridotite is an essential process for both material fractionation and cooling of the Earth. Melt generation in the natural system is an open-system process in terms of both energy and mass, and evolves with time. Thermodynamic modeling is a powerful approach to describe phase relations, mass balance, and energy balance during such melting. This study presents a new thermodynamic model for the calculation of phase relations during the melting of anhydrous spinel lherzolite at pressures between 1-2.5 GPa. The model is based on the total energy minimization algorithm for calculating phase equilibria within multicomponent systems and the thermodynamic configuration of Ueki and Iwamori [2013]. It treats a SiO2-Al2O3-FeO-Fe3O4-MgO-CaO system that includes silicate melt, olivine, clinopyroxene, orthopyroxene, and spinel as possible phases. The thermodynamic parameters for the silicate melt end-member components are newly calibrated with an expanded pressure calibration database. The temperatures and pressures in this newly compiled calibration dataset are 1230-1600 °C and 0.9-3 GPa, corresponding to the stability range of spinel lherzolite. The modeling undertaken during this study reproduces the general features of experimentally determined spinel lherzolite melting phase relations at 1-2.5 GPa, including the solidus temperature, the melt composition, the melting reaction, the degree of melting, and the dF/dT curve. The new thermodynamic model also reproduces phase relations for various bulk compositions, from relatively fertile to depleted spinel lherzolite, and can be used in the modeling of multi-pressure mantle melting in various natural settings.
Very narrow band model calculations of atmospheric fluxes and cooling rates
Bernstein, L.S.; Berk, A.; Acharya, P.K.; Robertson, D.C.
1996-10-15
A new very narrow band model (VNBM) approach has been developed and incorporated into the MODTRAN atmospheric transmittance-radiance code. The VNBM includes a computational spectral resolution of 1 cm{sup {minus}1}, a single-line Voigt equivalent width formalism that is based on the Rodgers-Williams approximation and accounts for the finite spectral width of the interval, explicit consideration of line tails, a statistical line overlap correction, a new sublayer integration approach that treats the effect of the sublayer temperature gradient on the path radiance, and the Curtis-Godson (CG) approximation for inhomogeneous paths. A modified procedure for determining the line density parameter 1/d is introduced, which reduces its magnitude. This results in a partial correction of the VNBM tendency to overestimate the interval equivalent widths. The standard two parameter CG approximation is used for H{sub 2}O and CO{sub 2}, while the Goody three parameter CG approximation is used for O{sub 3}. Atmospheric flux and cooling rate predictions using a research version of MODTRAN, MODR, are presented for H{sub 2}O (with and without the continuum), CO{sub 2}, and O{sub 3} for several model atmospheres. The effect of doubling the CO{sub 2} concentration is also considered. These calculations are compared to line-by-line (LBL) model calculations using the AER, GLA, GFDL, and GISS codes. The MODR predictions fall within the spread of the LBL results. The effects of decreasing the band model spectral resolution are illustrated using CO{sub 2} cooling rate and flux calculations. 36 refs., 18 figs., 1 tab.
Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models
Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.
2008-01-01
Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
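The growth rate λ discussed here is the dominant eigenvalue of the stage-structured projection matrix. A minimal sketch with a hypothetical 3-stage matrix whose vital rates are illustrative, not the study's:

```python
import numpy as np


def growth_rate(projection_matrix):
    """Asymptotic population growth rate λ: the dominant eigenvalue of
    the projection matrix. For a nonnegative primitive matrix this
    eigenvalue is real and positive (Perron-Frobenius), so taking the
    largest real part is safe."""
    eigvals = np.linalg.eigvals(projection_matrix)
    return float(np.max(eigvals.real))


# Hypothetical 3-stage matrix: entry (i, j) is the per-capita
# contribution of stage j to stage i in the next time step.
A = np.array([[0.0, 1.5, 3.0],   # fecundities of stages 2 and 3
              [0.5, 0.0, 0.0],   # seedling-to-juvenile transition
              [0.0, 0.6, 0.8]])  # maturation and adult survival

lam = growth_rate(A)
print(round(lam, 3))
```

Re-estimating the vital rates from a small sample and recomputing λ many times is the basic simulation loop behind the bias analysis: the sampling variance of the entries propagates nonlinearly into λ, which is where Jensen's Inequality enters.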
Improved Frequency Fluctuation Model for Spectral Line Shape Calculations in Fusion Plasmas
NASA Astrophysics Data System (ADS)
Ferri, S.; Calisti, A.; Mossé, C.; Talin, B.; Lisitsa, V.
2010-10-01
A very fast method to calculate spectral line shapes emitted by plasmas, accounting for charged-particle dynamics and the effects of an external magnetic field, is proposed. This method relies on a new formulation of the Frequency Fluctuation Model (FFM), which yields an expression for the dynamic line profile as a functional of the static distribution function of frequencies. This highly efficient formalism, not limited to hydrogen-like systems, makes it possible to calculate pure Stark and Stark-Zeeman line shapes for a wide range of density, temperature and magnetic field values, which is of importance in plasma physics and astrophysics. Various applications of this method are presented for conditions relevant to fusion plasmas.
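The core idea — a dynamic profile built as a functional of the static frequency distribution, with components mixed at a fluctuation rate ν — can be sketched with the generic Markov-exchange line-shape formula. The component frequencies, weights, and widths below are illustrative, and this is the textbook exchange expression rather than the authors' exact FFM reformulation.

```python
import numpy as np

def ffm_profile(omega, omega_k, weights, nu, gamma):
    """Dynamic line shape from static components (omega_k, weights)
    exchanged at rate nu; gamma is the homogeneous width.
    Generic Markov-mixing (exchange) formula, illustrative of the FFM."""
    J = np.sum(weights[:, None] /
               (gamma + nu + 1j * (omega[None, :] - omega_k[:, None])),
               axis=0)
    return np.real(J / (1.0 - nu * J)) / np.pi

# nu = 0 recovers the static (sum-of-Lorentzians) profile; a large nu
# narrows the line toward the mean frequency (motional narrowing).
w = np.linspace(-10.0, 10.0, 2001)
comps, wts = np.array([-2.0, 2.0]), np.array([0.5, 0.5])
static = ffm_profile(w, comps, wts, 0.0, 0.5)
fast   = ffm_profile(w, comps, wts, 50.0, 0.5)
```

The two limits reproduce the qualitative behavior the FFM interpolates between: a static Stark distribution at low fluctuation rate and a single narrowed line at high rate.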
Calculation of the wetting parameter from a cluster model in the framework of nanothermodynamics.
García-Morales, V; Cervera, J; Pellicer, J
2003-06-01
The critical wetting parameter omega(c) determines the strength of interfacial fluctuations in critical wetting transitions. In this Brief Report, we calculate omega(c) from considerations on critical liquid clusters inside a vapor phase. The starting point is a cluster model developed by Hill and Chamberlin in the framework of nanothermodynamics [Proc. Natl. Acad. Sci. USA 95, 12779 (1998)]. Our calculations yield results for omega(c) between 0.52 and 1.00, depending on the degrees of freedom considered. The findings are in agreement with previous experimental results and give an idea of the universal dynamical behavior of the clusters when approaching criticality. We suggest that this behavior is a combination of translation and vortex rotational motion (omega(c)=0.84). PMID:16241275
NASA Astrophysics Data System (ADS)
He, Yuping
2015-03-01
We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values ranging between 1 and 2, which could possibly be further increased by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.
Solar particle events observed at Mars: dosimetry measurements and model calculations
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Saganti, Premkumar B.; Zeitlin, Cary J.; Cucinotta, Francis A.
2004-01-01
During the period from March 13, 2002 to mid-September, 2002, six solar particle events (SPE) were observed by the MARIE instrument onboard the Odyssey Spacecraft in Martian Orbit. These events were observed also by the GOES 8 satellite in Earth orbit, and thus represent the first time that the same SPE have been observed at these separate locations. The characteristics of these SPE are examined, given that the active regions of the solar disc from which the event originated can usually be identified. The dose rates at Martian orbit are calculated, both for the galactic and solar components of the ionizing particle radiation environment. The dose rates due to galactic cosmic rays (GCR) agree well with the HZETRN model calculations. Published by Elsevier Ltd on behalf of COSPAR.
An equivalent circuit model and power calculations for the APS SPX crab cavities.
Berenc, T.
2012-03-21
An equivalent parallel resistor-inductor-capacitor (RLC) circuit with beam loading for a polarized TM110 dipole-mode cavity is developed, and minimum radio-frequency (rf) generator requirements are calculated for the Advanced Photon Source (APS) short-pulse x-ray (SPX) superconducting rf (SRF) crab cavities. A beam-loaded circuit model for polarized TM110-mode crab cavities was derived. The single-cavity minimum steady-state required generator power has been determined for the APS SPX crab cavities for a 200 mA DC storage ring current as a function of external Q for various vertical offsets, including beam tilt and uncontrollable detuning. Calculations to aid machine protection considerations are also given.
Corrections to vibrational transition probabilities calculated from a three-dimensional model.
NASA Technical Reports Server (NTRS)
Stallcop, J. R.
1972-01-01
Corrections to the collision-induced vibration transition probability calculated by Hansen and Pearson from a three-dimensional semiclassical model are examined. These corrections come from the retention of higher order terms in the expansion of the interaction potential and the use of the actual value of the deflection angle in the calculation of the transition probability. It is found that the contribution to the transition cross section from previously neglected potential terms can be significant for short range potentials and for the large relative collision velocities encountered at high temperatures. The correction to the transition cross section obtained from the use of actual deflection angles will not be appreciable unless the change in the rotational quantum number is large.
NASA Astrophysics Data System (ADS)
Li, Zheng; Sohn, Ilyoup; Levin, Deborah A.; Modest, Michael F.
2011-05-01
The current work implemented excited levels of atomic N and the corresponding electron-impact excitation/de-excitation and ionization processes in DSMC. Results show that when excitation models are included, the Stardust 68.9 km re-entry flow has an observable change in the ion number densities and electron temperature. Adding the excited levels of atoms increases the degree of ionization by providing additional intermediate steps to ionization. The extra ionization reactions consume electron energy and reduce the electron temperature. The DSMC number densities of the excited levels are lower than the prediction of a quasi-steady-state (QSS) calculation. Comparison of radiation calculations using electronic excited populations from DSMC and QSS indicates that, at the stagnation point, the radiative heat flux differs by about 20% between DSMC and QSS.
Sensitivity of model calculations to uncertain inputs, with an application to neutron star envelopes
NASA Technical Reports Server (NTRS)
Epstein, R. I.; Gudmundsson, E. H.; Pethick, C. J.
1983-01-01
A method is given for determining the sensitivity of certain types of calculations to the uncertainties in the input physics or model parameters; this method is applicable to problems that involve solutions to coupled, ordinary differential equations. In particular the sensitivity of calculations of the thermal structure of neutron star envelopes to uncertainties in the opacity and equation of state is examined. It is found that the uncertainties in the relationship between the surface and interior temperatures of a neutron star are due almost entirely to the imprecision in the values of the conductive opacity in the region where the ions form a liquid; here the conductive opacity is, for the most part, due to the scattering of electrons from ions.
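The kind of sensitivity analysis described can be sketched on a toy problem: integrate a one-equation stand-in for the coupled structure equations, then measure the logarithmic sensitivity of the output to an uncertain input (here a constant opacity-like parameter) by centered finite differences. The ODE and parameter values are illustrative, not the neutron-star envelope equations.

```python
import numpy as np

def integrate(kappa, n_steps=1000):
    """Toy 'envelope' ODE dT/dx = kappa * sqrt(T), integrated over [0, 1]
    by forward Euler; a stand-in for the coupled structure equations."""
    T = 1.0
    dx = 1.0 / n_steps
    for _ in range(n_steps):
        T += kappa * T**0.5 * dx      # opacity-driven temperature gradient
    return T                          # 'interior' temperature

def log_sensitivity(f, p, eps=1e-4):
    """d ln f / d ln p by centered differences -- the kind of sensitivity
    measure the abstract's method formalizes for coupled ODE systems."""
    up, dn = f(p * (1.0 + eps)), f(p * (1.0 - eps))
    return (np.log(up) - np.log(dn)) / (2.0 * eps)

# Exact answer for this toy: T(1) = (1 + kappa/2)^2, so the sensitivity
# is kappa / (1 + kappa/2) = 0.4 at kappa = 0.5.
print(log_sensitivity(integrate, 0.5))
```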
HEMCO: a versatile software component for calculating and validating emissions in atmospheric models
NASA Astrophysics Data System (ADS)
Keller, C. A.; Long, M. S.; Yantosca, R.; da Silva, A.; Pawson, S.; Jacob, D. J.
2014-12-01
Accurate representation of emissions is essential in global models of atmospheric composition. New and updated emission inventories are continuously being developed by research groups and agencies, reflecting both improving knowledge and actual changes in emissions. Timely incorporation of this new information into atmospheric models is crucial but can involve laborious programming. Here, we present the Harvard-NASA Emission Component version 1.0 (HEMCO), a stand-alone software component for computing emissions in global atmospheric models. HEMCO determines emissions from different sources, regions, and species on a user-defined grid and can combine, overlay, and update a set of data inventories and scale factors, as specified by the user through the HEMCO configuration file. New emission inventories at any spatial and temporal resolution are readily added to HEMCO and can be accessed by the user without any preprocessing of the data files or modification of the source code. Emissions that depend on dynamic source types and local environmental variables such as wind speed or surface temperature are calculated in separate HEMCO extensions. By providing a widely applicable framework for specifying constituent emissions, HEMCO is designed to ease sensitivity studies and model comparisons, as well as inverse modeling in which emissions are adjusted iteratively. So far, we have implemented HEMCO in the GEOS-Chem chemical transport model and in the NASA Goddard Earth Observing System Model (GEOS-5) along with its integrated data assimilation system.
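The combine/overlay/scale logic described above can be sketched with gridded arrays: a base inventory, a higher-priority regional update that overrides it wherever it has data, and a multiplicative scale factor. The grid, values, and NaN-as-missing convention are illustrative; in HEMCO the hierarchy is specified through the configuration file, not in code.

```python
import numpy as np

# Toy 4x5 grid: base inventory, regional update (NaN = no data), scale factor.
base     = np.full((4, 5), 2.0)          # e.g. kg m-2 s-1
regional = np.full((4, 5), np.nan)
regional[1:3, 2:4] = 5.0                 # higher-priority regional inventory
scale    = np.full((4, 5), 1.5)          # e.g. seasonal scale factor

def overlay(base, update):
    """Hierarchical overlay: the higher-priority field wins where defined."""
    return np.where(np.isnan(update), base, update)

emissions = overlay(base, regional) * scale
print(emissions[0, 0], emissions[1, 2])  # 3.0 7.5
```

New inventories slot in by adding another overlay layer, which is the sense in which updated data can be incorporated without touching the model's source code.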
Two-dimensional dynamical drainage-flow model with Monte Carlo transport and diffusion calculations
Garrett, A.J.; Smith, F.G. III
1982-09-01
A simplified drainage flow model was developed from the equations of motion and the mass continuity equation in a terrain-following coordinate system. The equations were reduced to a two-dimensional system by vertically integrating over the drainage layer. A numerical solution for the drainage layer depth and wind field was obtained using a fourth-order finite difference scheme. A Monte Carlo simulation was used to calculate the transport and diffusion of tracer gases. Model simulations of drainage flow have been compared to observations from the 1980 Geysers area experiments. The Geysers area is mountainous, with steep slopes, some of which are steeper than 10°. The model predictions of wind direction are good, but wind speeds are not predicted as accurately. Simulations of perfluorocarbon tracer concentrations were in good agreement with observed values. Maximum tracer concentration was predicted to within a factor of five. While predicted plume arrival was somewhat early, the model closely predicted the duration of the passage for plume concentrations greater than 0.5 ppt. The two-dimensional model was found to work equally well in simulating drainage flows over the Savannah River Plant (SRP) and surrounding terrain with slopes of around 1°. The model correctly predicted that drainage winds at SRP are usually shallower than 60 m, which is the height at which meteorological towers measure the winds in the SRP production areas. The modest computational requirements of the model make it suitable for use in screening potential industrial sites.
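Monte Carlo tracer transport of the kind used above amounts to advecting particles with the local wind and adding a random-walk step whose size encodes turbulent diffusion. The wind field, diffusivity, and release geometry below are illustrative assumptions, not the drainage-flow model's solution.

```python
import numpy as np

rng = np.random.default_rng(1)

def transport(n_particles=5000, n_steps=200, dt=1.0, K=0.5):
    """Monte Carlo tracer transport: advect each particle with the local
    wind and add a random-walk diffusion step of rms size sqrt(2*K*dt)."""
    xy = np.zeros((n_particles, 2))              # release at the origin
    for _ in range(n_steps):
        u = np.column_stack([np.full(n_particles, 1.0),  # 1 m/s downslope
                             0.1 * xy[:, 0]])            # weak cross-flow
        xy += u * dt + rng.normal(0.0, np.sqrt(2.0 * K * dt), xy.shape)
    return xy

cloud = transport()
print("mean downwind distance:", cloud[:, 0].mean())
```

Concentrations are then obtained by binning particle positions on a grid, which is how point measurements such as the tracer samplers would be compared against.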
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract any needed data or analysis from a reliable full-scale parachute model. In some cases, however, parachute engineers may want to quickly perform an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer model that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
Model of the catalytic mechanism of human aldose reductase based on quantum chemical calculations.
Cachau, R. C.; Howard, E. H.; Barth, P. B.; Mitschler, A. M.; Chevrier, B. C.; Lamour, V.; Joachimiak, A.; Sanishvili, R.; Van Zandt, M.; Sibley, E.; Moras, D.; Podjarny, A.; UPR de Biologie Structurale; National Cancer Inst.; Univ. Louis Pasteur; Inst. for Diabetes Discovery, Inc.
2000-01-01
Aldose Reductase is an enzyme involved in diabetic complications, thoroughly studied for the purpose of inhibitor development. The structure of an enzyme-inhibitor complex solved at sub-atomic resolution has been used to develop a model for the catalytic mechanism. This model has been refined using a combination of Molecular Dynamics and Quantum calculations. It shows that the proton donation, the subject of previous controversies, is the combined effect of three residues: Lys 77, Tyr 48 and His 110. Lys 77 polarises the Tyr 48 OH group, which donates the proton to His 110, which becomes doubly protonated. His 110 then moves and donates the proton to the substrate. The key information from the sub-atomic resolution structure is the orientation of the ring and the single protonation of His 110 in the enzyme-inhibitor complex. This model is in full agreement with all available experimental data.
Model calculations of edge dislocation defects and vacancies in α-Iron lattice
NASA Astrophysics Data System (ADS)
Petrov, L.; Troev, T.; Nankov, N.; Popov, E.
2010-01-01
Two models of defects in a perfect α-iron lattice are discussed. In a perfect bcc iron lattice of 42×42×42 ao (ao = 2.87 Å), an edge dislocation was created by displacing half of the bulk by one ao. This generates a small open volume in the middle of the bulk, which increases the positron lifetime (PLT) calculated using the superimposed-atom method of Puska and Nieminen [1]. The calculated PLT of 118 ps for the simple edge-dislocation model is in good agreement with earlier publications and experimental data [2]. Along the dislocation line, one, two and three vacancies were localized; these models yield PLTs of 146, 157 and 167 ps, respectively. The computer simulations were performed using the Finnis-Sinclair (FS) N-body potential.
He, G.; Doolen, G.D.; Chen, S.
1999-12-01
The longitudinal structure function (LSF) and the transverse structure function (TSF) in isotropic turbulence are calculated using a vortex model. The vortex model is composed of Rankine and Burgers vortices with exponential distributions of vortex Reynolds number and vortex radius. This model exhibits a power law in the inertial range and satisfies the minimal condition of isotropy that the second-order exponent of the LSF in the inertial range is equal to that of the TSF. Also observed are differences between longitudinal and transverse structure functions caused by intermittency. These differences are related to their scaling differences, which have been previously observed in experiments and numerical simulations. © 1999 American Institute of Physics.
A model for calculating effects of liquid waste disposal in deep saline aquifers
Intercomp Resource Development and Engineering, Inc.
1976-01-01
A transient, three-dimensional subsurface waste-disposal model has been developed to provide methodology to design and test waste-disposal systems. The model is a finite-difference solution to the pressure, energy, and mass-transport equations. Equation parameters such as viscosity and density are allowed to be functions of the equations' dependent variables. Multiple user options allow the choice of x, y, and z cartesian or r and z radial coordinates, various finite-difference methods, iterative and direct matrix solution techniques, restart options, and various provisions for output display. The addition of well-bore heat and pressure-loss calculations to the model makes available to the ground-water hydrologist the most recent advances from the oil and gas reservoir engineering field. (Woodard-USGS)
NASA Technical Reports Server (NTRS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-01-01
A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were calculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
Calculating the renormalisation group equations of a SUSY model with Susyno
NASA Astrophysics Data System (ADS)
Fonseca, Renato M.
2012-10-01
Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features
Yu, Lan; Zhan, Tingting; Zhan, Xiancheng; Wei, Guocui; Tan, Xiaoying; Wang, Xiaolan; Li, Chengrong
2014-11-01
The osmotic pressure of xylitol solution over a wide concentration range was calculated according to the UNIFAC model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with UNIFAC model calculations from dilute to saturated solution. Results indicate that air humidity osmometry measurements are comparable to UNIFAC model calculations over a wide concentration range by two one-sided tests with multiple-testing corrections. Air humidity osmometry is thus applicable for measuring osmotic pressure, and the osmotic pressure can be calculated from the concentration. PMID:24032449
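The link between an activity-coefficient model and osmotic pressure rests on the standard relation pi = -(R*T/V_w) * ln(a_w), where a_w is the water activity. A UNIFAC calculation would supply the activity coefficient gamma_w; the sketch below simply defaults it to 1 (ideal solution), so the numbers are illustrative rather than the paper's UNIFAC results.

```python
import math

R   = 8.314     # J/(mol K)
T   = 298.15    # K
V_W = 1.8e-5    # m3/mol, molar volume of water

def osmotic_pressure(x_solute, gamma_w=1.0):
    """Osmotic pressure (Pa) from water activity a_w = gamma_w * x_w.
    gamma_w would come from UNIFAC; gamma_w = 1 is the ideal case."""
    a_w = gamma_w * (1.0 - x_solute)
    return -R * T / V_W * math.log(a_w)

# 0.01 mole fraction solute, ideal case: ~1.38 MPa
print(osmotic_pressure(0.01) / 1e6)
```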
NASA Astrophysics Data System (ADS)
Akiyama, S.; Kawaji, K.; Fujihara, S.
2013-12-01
Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. However, separate source models are typically used in ground motion and tsunami simulations, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out a tsunami simulation using the displacement field of oceanic crustal movements calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although the two fault models share a common feature, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body wave. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan with a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)), deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, the tsunami simulations are performed by the finite
A simple model for calculating tsunami flow speed from tsunami deposits
Jaffe, B.E.; Gelfenbuam, G.
2007-01-01
This paper presents a simple model for tsunami sedimentation that can be applied to calculate tsunami flow speed from the thickness and grain size of a tsunami deposit (the inverse problem). For sandy tsunami deposits where grain size and thickness vary gradually in the direction of transport, tsunami sediment transport is modeled as a steady, spatially uniform process. The amount of sediment in suspension is assumed to be in equilibrium with the steady portion of the long-period, slowly varying uprush portion of the tsunami. Spatial flow deceleration is assumed to be small and not to contribute significantly to the tsunami deposit. Tsunami deposits are formed from sediment settling from the water column when flow speeds on land go to zero everywhere at the time of maximum tsunami inundation. There is little erosion of the deposit by return flow because it is a slow flow and is concentrated in topographic lows. Variations in grain size of the deposit are found to have more effect on calculated tsunami flow speed than deposit thickness. The model is tested using field data collected at Arop, Papua New Guinea soon after the 1998 tsunami. Speed estimates of 14 m/s at 200 m inland from the shoreline compare favorably with those from a 1-D inundation model and from application of Bernoulli's principle to water levels on buildings left standing after the tsunami. As evidence that the model is applicable to some sandy tsunami deposits, the model reproduces the observed normal grading and vertical variation in sorting and skewness of a deposit formed by the 1998 tsunami.
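The inverse logic can be caricatured in a few lines: grain size sets the settling velocity (here Stokes' law, adequate only for fine grains), the deposit thickness and flow depth set the depth-averaged concentration the flow must have carried, and the flow speed is whatever shear is needed to keep that load in suspension. All three closures below (the u* ~ w_s/kappa scaling, the reference concentration, and the speed ~ 10*u* drag law) are rough illustrative assumptions, not the paper's formulation; note how strongly the result depends on grain diameter (w_s ~ d^2), consistent with the sensitivity reported above.

```python
def settling_velocity(d, rho_s=2650.0, rho=1025.0, nu=1.0e-6, g=9.81):
    """Stokes settling velocity (m/s) for grain diameter d (m)."""
    return (rho_s - rho) * g * d**2 / (18.0 * rho * nu)

def flow_speed(thickness, d, depth, kappa=0.4, c_ref=0.01, drag=10.0):
    """Crude inverse estimate of tsunami flow speed from a deposit.
    thickness/depth gives the suspended volume concentration the flow
    carried; the shear velocity needed to suspend grains of settling
    velocity w_s is taken as u* ~ (w_s/kappa) scaled by the excess
    concentration, and speed ~ drag * u*. Illustrative closures only."""
    w_s = settling_velocity(d)
    conc = thickness / depth
    u_star = (w_s / kappa) * max(1.0, conc / c_ref)
    return drag * u_star

# 10 cm deposit of 0.2 mm sand settled from a 5 m deep flow
print(flow_speed(0.10, 2.0e-4, 5.0))
```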
Beyond Gaussians: a study of single-spot modeling for scanning proton dose calculation
NASA Astrophysics Data System (ADS)
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2012-02-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field size effects on dose output. In this study, we developed a pencil beam algorithm for scanning proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy.
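The profile structure described — a double-Gaussian core plus a Cauchy-Lorentz term for the low-dose halo of long-range scattered protons — can be sketched as follows. All weights and widths are made-up illustrative numbers, not commissioning data, and the "modified" Cauchy-Lorentz form used clinically may differ from the plain one here.

```python
import numpy as np

def spot_profile(r, w1=1.0, w2=0.05, sigma1=0.4, sigma2=1.2,
                 w3=0.01, gamma=2.0):
    """Radial single-spot dose model: two Gaussians for the core plus a
    Cauchy-Lorentz term for the low-dose halo (illustrative parameters,
    widths in cm)."""
    g1 = w1 * np.exp(-r**2 / (2.0 * sigma1**2))
    g2 = w2 * np.exp(-r**2 / (2.0 * sigma2**2))
    halo = w3 * gamma**2 / (r**2 + gamma**2)   # slowly decaying tail
    return g1 + g2 + halo

r = np.linspace(0.0, 10.0, 101)
profile = spot_profile(r)
print(profile[0], profile[-1])   # halo dominates far from the axis
```

Because the halo falls off only algebraically, summing many spots lets these tails accumulate, which is exactly the field-size effect a Gaussian-only model underestimates.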
A compressible near-wall turbulence model for boundary layer calculations
NASA Technical Reports Server (NTRS)
So, R. M. C.; Zhang, H. S.; Lai, Y. G.
1992-01-01
A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate with additional terms that are explicitly dependent on the compressibility effects is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilational dissipation rate. Thus formulated, all near-wall incompressible flow models can be expressed in terms of the solenoidal dissipation rate and straightforwardly extended to compressible flows. Therefore, the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior, suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.
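A dilatational closure with an explicit turbulent-Mach-number dependence has the generic form of Sarkar-type models, eps_d = alpha * M_t^2 * eps_s; the coefficient below is illustrative, and the paper's proposed form may differ in detail.

```python
def total_dissipation(eps_s, mach_t, alpha=1.0):
    """Total dissipation = solenoidal + dilatational part, with the
    dilatational part modeled as alpha * M_t**2 * eps_s (Sarkar-type
    closure; alpha is an illustrative model constant)."""
    return eps_s * (1.0 + alpha * mach_t**2)

# The incompressible limit M_t -> 0 recovers the solenoidal rate alone,
# which is how the incompressible equations are recovered at constant density.
print(total_dissipation(1.0, 0.0), total_dissipation(1.0, 0.3))
```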
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10(exp -4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10(exp -4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
PFLOW: A 3-D Numerical Modeling Tool for Calculating Fluid-Pressure Diffusion from Coulomb Strain
NASA Astrophysics Data System (ADS)
Wolf, L. W.; Lee, M.; Meir, A.; Dyer, G.; Ma, K.; Chan, C.
2009-12-01
A new 3D time-dependent pore-pressure diffusion model PFLOW is developed to investigate the response of pore fluids to the crustal deformation generated by strong earthquakes in heterogeneous geologic media. Given crustal strain generated by changes in Coulomb stress, this MATLAB-based code uses Skempton's coefficient to calculate the resulting changes in fluid pressure. Pore-pressure diffusion can be tracked over time in a user-defined model space with user-prescribed Neumann or Dirichlet boundary conditions and with spatially variable values of permeability. PFLOW employs linear or quadratic finite elements for spatial discretization and first- or second-order, explicit or implicit finite-difference discretization in time. PFLOW is easily interfaced with output from deformation modeling programs such as Coulomb (Toda et al., 2007) or 3D-DEF (Gomberg and Ellis, 1994). The code is useful for investigating, to first order, the evolution of pore-pressure changes induced by changes in Coulomb stress and their possible relation to water-level changes in wells or changes in stream discharge. It can also be used for student research and classroom instruction. As an example application, we calculate the coseismic pore-pressure changes and diffusion induced by volumetric strain associated with the 1999 Chi-Chi earthquake (Mw = 7.6) in Taiwan. The Chi-Chi earthquake provides a unique opportunity to investigate the spatial and time-dependent poroelastic response of near-field rocks and sediments because there exist extensive observational data of water-level changes and crustal deformation. The integrated model allows us to explore whether changes in Coulomb stress can adequately explain hydrologic anomalies observed in areas such as Taiwan's western foothills and the Choshui River alluvial plain. To calculate coseismic strain, we use the carefully calibrated finite fault-rupture model of Ma et al. (2005) and the deformation modeling code Coulomb 3.1 (Toda et al., 2007).
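The PFLOW workflow (coseismic stress change, undrained pressure change via Skempton's coefficient, then diffusion) can be caricatured in a few lines. This is a 1-D explicit finite-difference toy, not the PFLOW finite-element code itself; the Skempton coefficient and diffusivity are placeholder values.

```python
import numpy as np

def coseismic_pressure(d_sigma_mean, B=0.5):
    """Undrained pore-pressure change from a coseismic mean-stress change
    via Skempton's coefficient B (compression positive; B is a
    placeholder value, typically ~0.5-0.9 for crustal rocks)."""
    return B * d_sigma_mean

def diffuse_1d(p0, c=1.0, dx=100.0, dt=1000.0, steps=200):
    """Explicit finite-difference pore-pressure diffusion with fixed
    (Dirichlet) end values; PFLOW itself uses finite elements in space.
    Stability requires c*dt/dx**2 <= 0.5."""
    s = c * dt / dx**2
    assert s <= 0.5, "explicit scheme unstable"
    p = p0.copy()
    for _ in range(steps):
        p[1:-1] += s * (p[2:] - 2 * p[1:-1] + p[:-2])
    return p
```

Starting from a localized coseismic pressure anomaly, the scheme relaxes the peak and spreads the pressure outward over time, mimicking the post-seismic pressure evolution the abstract describes.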
NASA Astrophysics Data System (ADS)
Zurita-Milla, Raul; Mehdipoor, Hamed; Batarseh, Sana; Ault, Toby; Schwartz, Mark D.
2014-05-01
Models that predict the timing of recurrent biological events play an important role in supporting the systematic study of phenological changes at a variety of spatial and temporal scales. One set of such models are the extended Spring indices (SI-x). These models predict a suite of phenological metrics ("first leaf" and "first bloom," "last freeze" and the "damage index") from temperature data and geographic location (to model the duration of the day). The SI-x models were calibrated using historical phenological and weather observations from the continental US. In particular, the models relied on first leaf and first bloom observations for lilac and honeysuckle and on daily minimum and maximum temperature values from a number of weather stations located near the sites where phenological observations were made. In this work, we study the use of DAYMET (http://daymet.ornl.gov/) to calculate the SI-x models over the continental USA. DAYMET offers daily gridded maximum and minimum temperature values for the period 1980 to 2012. Using an automatic downloader, we downloaded complete DAYMET temperature time series for the over 1100 geographic locations where historical lilac observations were made. The temperature values were parsed and, using the recently available MATLAB code, the SI-x indices were calculated. Subsequently, the predicted first leaf and first bloom dates were compared with historical lilac observations. The RMSE between predicted and observed lilac leaf/bloom dates was calculated after identifying data from the same geographic location and year. Results were satisfactory for the lilac observations in the Eastern US (e.g. the RMSE for the blooming date was about 5 days). However, the correspondence between the observed and predicted lilac values in the West was rather weak (e.g. RMSE for the blooming date of about 22 days). This might indicate that DAYMET temperature data in this region of the US might contain larger uncertainties due to a more
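The evaluation step (matching predictions to observations by site and year, then computing RMSE in days) is straightforward; a minimal sketch, with hypothetical site keys and day-of-year values:

```python
import numpy as np

def rmse_by_site_year(observed, predicted):
    """RMSE (days) between observed and predicted event dates, computed
    only over the (site, year) keys present in both records."""
    keys = observed.keys() & predicted.keys()
    diffs = np.array([observed[k] - predicted[k] for k in sorted(keys)], float)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical day-of-year bloom dates keyed by (site, year).
obs = {("site1", 1995): 120, ("site1", 1996): 118, ("site2", 1995): 131}
pred = {("site1", 1995): 124, ("site1", 1996): 115, ("site2", 1996): 140}
```

Only records sharing both the location and the year contribute, so an unpaired observation (here, site2) is silently excluded, matching the pairing procedure described in the abstract.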
Development and application of a random lung model for dose calculations in radiotherapy
NASA Astrophysics Data System (ADS)
Liang, Liang
Radiotherapy requires accurate dose calculations in the human body, especially in disease sites with large variations of electron density in neighboring tissues, such as the lung. Currently, the lung is modeled by a voxelized geometry interpolated from computed tomography (CT) scans to various resolutions. The simplest such voxelized lung, the atomic mix model, is a homogenized whole lung with a volume-averaged bulk density. However, according to traditional transport theory, even the relatively fine CT voxelization of the lung is not valid, due to the extremely small mean free path (MFP) of the electrons. The purpose of this thesis is to study the impact of the lung's heterogeneities on dose calculations in lung treatment planning. We first extend the traditional atomic mix theory for charged particles by approximating the Boltzmann equation for electrons by its Fokker-Planck (FP) limit, and then applying a formal asymptotic analysis to the resulting Boltzmann-Fokker-Planck (BFP) equation. This analysis raises the length scale for homogenizing a heterogeneous medium from the electron MFP to the much larger electron transport MFP. Then, using the lung's anatomical data and our new atomic mix theory, we build a realistic 2 1/2-D random lung model. The dose distributions for representative realizations of the random lung model are compared to those from the atomic mix approximation of the random lung model, showing that significant perturbations may occur with small field sizes and large lung structures. We also apply our random lung model to a more realistic lung phantom and investigate the effect of CT resolutions on lung treatment planning. We show that, compared to the reference 1 x 1 mm² CT resolution, a 2 x 2 mm² CT resolution is sufficient to voxelize the lung, while significant deviations in dose can be observed with a larger 4 x 4 mm² CT resolution. We use the Monte Carlo method extensively in this thesis, to avoid systematic errors caused by inaccurate heterogeneity corrections
Wang, Junmei; Cieplak, Piotr; Li, Jie; Hou, Tingjun; Luo, Ray; Duan, Yong
2011-01-01
In this work, four types of polarizable models have been developed for calculating interactions between atomic charges and induced point dipoles. These include the Applequist, Thole linear, Thole exponential, and Thole Tinker-like models. The polarizability models have been optimized to reproduce the experimental static molecular polarizabilities obtained from the molecular refraction measurements on a set of 420 molecules reported by Bosque and Sales. We grouped the models into five sets depending on the interaction types, i.e., whether the interactions of two atoms that form a bond, bond angle or dihedral angle are turned off or scaled down. When 1-2 (bonded), 1-3 (separated by two bonds) interactions are turned off and/or 1-4 (separated by three bonds) interactions are scaled down, all the models including the Applequist model achieved similar performance: the average percentage error (APE) ranges from 1.15% to 1.23%, and the average unsigned error (AUE) ranges from 0.143 to 0.158 Å³. When the short-range 1-2, 1-3 and full 1-4 terms are taken into account (Set D models), the APE ranges from 1.30% to 1.58% for the three Thole models whereas the Applequist model (DA) has a significantly larger APE (3.82%). The AUE ranges from 0.166 to 0.196 Å³ for the three Thole models, compared to 0.446 Å³ for the Applequist model. Further assessment using the 70-molecule van Duijnen and Swart data set clearly showed that the developed models are both accurate and highly transferable and are in fact more accurate than the models developed using this particular data set (Set E models). The fact that A, B, and C model sets are notably more accurate than both D and E model sets strongly suggests that the inclusion of 1-2 and 1-3 interactions reduces the transferability and accuracy. PMID:21391553
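The core of the Applequist point-dipole scheme is a linear solve: atomic induced dipoles respond both to the external field and to each other through the dipole-dipole tensor. The sketch below implements the plain (unscreened) Applequist model, i.e. with all pairwise interactions included as in the D-type sets; Thole variants would additionally damp the short-range tensor.

```python
import numpy as np

def molecular_polarizability(coords, alphas):
    """Applequist point-dipole model: build the 3N x 3N matrix A with
    A_ii = alpha_i^{-1} I and A_ij = -T_ij (dipole field tensor
    T_ij = (3 rhat rhat - I)/r^3), invert it, and sum the 3x3 blocks of
    the relay matrix B = A^{-1} to get the molecular polarizability
    tensor. No 1-2/1-3 exclusions or Thole damping are applied here."""
    n = len(alphas)
    A = np.zeros((3 * n, 3 * n))
    for i in range(n):
        A[3 * i:3 * i + 3, 3 * i:3 * i + 3] = np.eye(3) / alphas[i]
        for j in range(n):
            if i == j:
                continue
            rij = coords[j] - coords[i]
            r = np.linalg.norm(rij)
            T = (3 * np.outer(rij, rij) / r**2 - np.eye(3)) / r**3
            A[3 * i:3 * i + 3, 3 * j:3 * j + 3] = -T
    B = np.linalg.inv(A)
    # Response of the total dipole to a uniform field: sum all 3x3 blocks.
    return sum(B[3 * i:3 * i + 3, 3 * j:3 * j + 3]
               for i in range(n) for j in range(n))
```

For two identical atoms of polarizability alpha separated by r along z, the analytic result is 2*alpha/(1 - 2*alpha/r**3) along the axis and 2*alpha/(1 + alpha/r**3) perpendicular to it, i.e. enhanced parallel and reduced perpendicular polarizability, the textbook Applequist behavior.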
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment... MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.209-08 Calculation of vehicle-specific 5-cycle fuel economy values for a model type. (a) Base level....
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment... MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values § 600.209-08 Calculation of vehicle-specific 5-cycle fuel economy values for a model type. (a) Base level....
A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways
NASA Astrophysics Data System (ADS)
Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza
2016-01-01
In this paper a geometric computational model (GCM) has been developed for calculating the effect of longwall face on the extension of excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of gate roadways is investigated based on loading effects due to EDZ and caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of GCM depends on four important factors: (1) geomechanical properties of hanging wall, (2) dip and thickness of coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and characteristics of extended EDZ. Furthermore, taking GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and measured results. Finally, a sensitivity analysis was carried out on the effect of pillar width, bearing capacity of support system and coal seam dip.
Krishnamurthy, C. V.; Shankar, M.; Vardhan, J. Vishnu; Balasubramaniam, Krishnan
2006-03-06
The 2004 ultrasonic benchmark problem requires models to predict, given a reference pulse waveform, the pulse echo response of cylindrical voids of various radii located in an elastic solid for various incidence angles of a transducer immersed in water. We present the results of calculations based on the patch element model, recently developed at CNDE, to determine the response of a side-drilled hole (SDH) in aluminum for specific oblique incidence angles. Patch element model calculations for a scan across the SDH, involving a range of oblique incidence angles, are also presented. Measured pulse-echo scans involving the SDH response under oblique incidence conditions are reported. In addition, through-transmission measurements involving a pinducer as a receiver and an immersion planar probe as a transmitter under oblique incidence conditions are also reported in a defect-free aluminum block. These pinducer-based measurements on a defect-free block are utilised to characterize the fields at the chosen depth. Comparisons are made between predictions and measurements for the pulse-echo response of an SDH.
Calculation and Analysis of Liquid Holdup in Lower Blast Furnace by Model Experiments
NASA Astrophysics Data System (ADS)
Xiong, Wei; Bi, Xue-Gong; Wang, Guo-Qiang; Yang, Fu
2012-06-01
A hydromechanics experiment on the countercurrent flow of gas and liquid simulating the flow conditions in the lower blast furnace was carried out. A cold model of a packed bed with various packing materials and liquids was used to study the holdup of liquid. Correlations for static holdup, dynamic holdup, and total holdup were obtained. A good agreement was found between the calculated and experimental data. A mathematical model simulating the flow fields was applied to study the effect of liquid holdup in blast furnace. The results of the model calculation show that static holdup is the determinant of the total holdup of molten materials when the blast furnace works in stable condition. The slag phase generally reaches flooding holdup ahead of the hot metal. The radial distribution of gas flow is almost not influenced by the holdup of molten materials, but it has a greater influence on the pressure drop. The size of coke has far greater influence on static holdup than liquid properties do. The study is useful for acquiring a deeper understanding of the complex phenomena in the blast furnace and for determining appropriate operational actions under different production conditions.
Calculation and Analysis of Magnetic Gradient Tensor Components of Global Magnetic Models
NASA Astrophysics Data System (ADS)
Schiffler, M.; Queitsch, M.; Schneider, M.; Goepel, A.; Stolz, R.; Krech, W.; Meyer, H. G.; Kukowski, N.
2014-12-01
Global Earth's magnetic field models like the International Geomagnetic Reference Field (IGRF), the World Magnetic Model (WMM) or the High Definition Geomagnetic Model (HDGM) are harmonic analysis regressions to available magnetic observations stored as spherical harmonic coefficients. Input data combine recordings from magnetic observatories, airborne magnetic surveys and satellite data. Recent magnetic satellite missions like SWARM and its predecessors like CHAMP offer high-resolution measurements while providing full global coverage. This motivates extending the theoretical framework of harmonic synthesis to magnetic gradient tensor components. Measurement setups for Full Tensor Magnetic Gradiometry equipped with highly sensitive gradiometers, like the JeSSY STAR system, can directly measure the gradient tensor components; interpreting such measurements requires precise knowledge of the background regional gradients, which can be calculated with this extension. In this study we develop the theoretical framework for calculation of the magnetic gradient tensor components from the harmonic series expansion and apply our approach to the IGRF and HDGM. The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low gradients reflecting the variation from the dipolar character, whereas maps for the HDGM (up to degree N=729) reveal new information about crustal structure, especially across the oceans, and deeply situated ore bodies. From the gradient tensor components, the rotational invariants, the eigenvalues, and the normalized source strength (NSS) are calculated. The NSS focuses on shallower and stronger anomalies. Euler deconvolution using either the tensor components or the NSS applied to the HDGM reveals an estimate of the average source depth for the entire magnetic crust as well as individual plutons and ore bodies. The NSS reveals the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern
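The NSS mentioned above is commonly defined from the ordered eigenvalues of the (symmetric, traceless) magnetic gradient tensor as mu = sqrt(-lam2^2 - lam1*lam3); being built from eigenvalues, it is invariant under rotation of the measurement frame, which is what makes it attractive for interpretation. A minimal sketch:

```python
import numpy as np

def normalized_source_strength(G):
    """NSS of a symmetric, traceless magnetic gradient tensor G:
    mu = sqrt(-lam2**2 - lam1*lam3), with eigenvalues sorted
    lam1 >= lam2 >= lam3. Rotation-invariant by construction."""
    lam = np.sort(np.linalg.eigvalsh(np.asarray(G, float)))[::-1]
    return float(np.sqrt(max(-lam[1] ** 2 - lam[0] * lam[2], 0.0)))
```

Because the tensor is traceless, lam2 = -(lam1 + lam3), so the argument of the square root is non-negative for physically admissible gradient tensors.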
Renormalization effects on the MSSM from a calculable model of a strongly coupled hidden sector
Arai, Masato; Okada, Nobuchika
2011-10-01
We investigate possible renormalization effects on the low-energy mass spectrum of the minimal supersymmetric standard model (MSSM), using a calculable model of a strongly coupled hidden sector. We model the hidden sector by N=2 supersymmetric quantum chromodynamics with gauge group SU(2)×U(1) and N_f=2 matter hypermultiplets, perturbed by a Fayet-Iliopoulos term which breaks the supersymmetry down to N=0 on a metastable vacuum. In the hidden sector the Kaehler potential is renormalized. Upon identifying a hidden sector modulus with the renormalization scale, and extrapolating to the strongly coupled regime using the Seiberg-Witten solution, the contribution from the hidden sector to the MSSM renormalization group flows is computed. For concreteness, we consider a model in which the renormalization effects are communicated to the MSSM sector via gauge mediation. In contrast to the perturbative toy examples of hidden sector renormalization studied in the literature, we find that our strongly coupled model exhibits rather intricate effects on the MSSM soft scalar mass spectrum, depending on how the hidden sector fields are coupled to the messenger fields. This model provides a concrete example in which the low-energy spectrum of MSSM particles that are expected to be accessible in collider experiments is obtained using strongly coupled hidden sector dynamics.
Calculation of optical properties for hot plasmas using a screened hydrogenic model
NASA Astrophysics Data System (ADS)
Rubiano, J. G.; Rodríguez, R.; Florido, R.; Mendoza, M. A.; Gil, J. M.; Martel, P.; Mínguez, E.
2006-06-01
In this work, a hydrogenic version of the code ATOM3R-OP is presented. This flexible code has been developed to obtain optical properties for plasmas in a wide range of densities and temperatures, and the hydrogenic version is intended to be coupled with hydrodynamic codes. The code is structured in three modules devoted to the calculation of the atomic magnitudes, the ionic abundances, and the optical properties, respectively, which are briefly described. Finally, bound-bound opacities and emissivities of a carbon plasma computed with this model are compared with those from more sophisticated self-consistent codes.
A simplified model for calculating early offsite consequences from nuclear reactor accidents
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1988-07-01
A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs.
Onsager and Kaufman's Calculation of the Spontaneous Magnetization of the Ising Model
NASA Astrophysics Data System (ADS)
Baxter, R. J.
2011-11-01
Lars Onsager announced in 1949 that he and Bruria Kaufman had proved a simple formula for the spontaneous magnetization of the square-lattice Ising model, but did not publish their derivation. It was three years later when C.N. Yang published a derivation in Physical Review. In 1971 Onsager gave some clues to his and Kaufman's method, and there are copies of their correspondence in 1950 now available on the Web and elsewhere. Here we review how the calculation appears to have developed, and add a copy of a draft paper, almost certainly by Onsager and Kaufman, that obtains the result.
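The formula Onsager announced (and Yang later derived in print) is compact enough to state and evaluate directly: below the critical temperature, M = (1 - sinh(2*beta*J)^-4)^(1/8), vanishing above the critical point where sinh(2*beta_c*J) = 1.

```python
import math

def spontaneous_magnetization(beta_J):
    """Onsager's spontaneous magnetization for the square-lattice Ising
    model (derivation published by Yang, 1952):
    M = (1 - sinh(2*beta*J)**-4)**(1/8) for T < T_c, and 0 for T >= T_c."""
    s = math.sinh(2.0 * beta_J)
    return (1.0 - s ** -4) ** 0.125 if s > 1.0 else 0.0

# Critical coupling: sinh(2*beta_c*J) = 1, i.e. beta_c*J = ln(1 + sqrt(2))/2.
beta_c_J = 0.5 * math.log(1.0 + math.sqrt(2.0))
```

The exponent 1/8 is the celebrated critical exponent beta of the 2D Ising universality class; the magnetization rises from zero at the critical point and saturates toward 1 at strong coupling.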
Theoretical Model for the Calculation of Optical Properties of Gold Nanoparticles
NASA Astrophysics Data System (ADS)
Mendoza-García, A.; Romero-Depablos, A.; Ortega, M. A.; Paz, J. L.; Echevarría, L.
We have developed an analytical method to describe the optical properties of nanoparticles, whose results are in agreement with the observed experimental behavior according to the size of the nanoparticle under analysis. Our considerations to describe plasmonic absorption and dispersion are based on the combination of the two-level molecular system and the two-dimensional quantum box models. Employing the optical stochastic Bloch equations, we have determined the system's coherence, from which we have calculated expressions for the absorption coefficient and refractive index. The innovation of this methodology is that it allows us to take into account the solvent environment, which induces quantum effects not considered by classical treatments.
Model calculation of oscillatory magnetic breakdown in metals with multiply degenerate bands
NASA Astrophysics Data System (ADS)
Thalmeier, P.; Falicov, L. M.
1981-03-01
We present a model calculation of the oscillatory magnetoresistance in a metal with three degenerate bands. We have in mind the example of body-centered cubic iron where, in the neighborhood of the H point of the Brillouin zone, three bands have multiple intersections and contacts. For magnetic fields along the [011] direction, the Fermi surface in the vicinity of H exhibits a complicated three-band interferometer which leads to complex oscillations in the magnetoresistance. A Fourier analysis of this magnetoresistance reveals that frequencies corresponding to split-beam interference, closed-orbit interference, and mixed type are all present with comparable strength. The connection to the experimental situation is discussed.
NASA Astrophysics Data System (ADS)
Xiong, Xiaozhen; McMillin, Larry M.
2005-01-01
Many current rapid transmittance algorithms, specifically the Optical Path Transmittance (OPTRAN), are based on use of effective transmittances to account for the effects of polychromatic radiation on the transmittance calculations. We document how OPTRAN was modified by replacing the effective transmittance concept with a correction term. Use of the correction term solves some numerical problems that were associated with use of effective transmittances, greatly reduces the line-by-line computational burden, and allows for the efficient inclusion of more gases. This correction method can easily be applied to any other fast models that use the effective transmittance approach.
An approximate framework for quantum transport calculation with model order reduction
Chen, Quan; Li, Jun; Yam, Chiyung; Zhang, Yu; Wong, Ngai; Chen, Guanhua
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
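The projection-based MOR idea (solve the full system at a few training energies, build a basis from the snapshots, then solve only a small projected system at every other energy point) can be sketched generically. This is a toy Galerkin reduction of a parameterized linear system, not the X-formulation or the NEGF equations themselves; the matrices are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)            # well-conditioned SPD base matrix
A1 = np.diag(rng.uniform(0.0, 1.0, n))  # energy-dependent part (affine in E)
b = rng.standard_normal(n)

def solve_full(E):
    """Full n x n solve at energy E (the 'expensive' step)."""
    return np.linalg.solve(A0 + E * A1, b)

# Offline stage: snapshots at a few training energies -> orthonormal basis V.
snapshots = np.column_stack([solve_full(E) for E in (0.0, 0.5, 1.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)

def solve_reduced(E):
    """Online stage: Galerkin-projected k x k solve instead of n x n."""
    Ar = V.T @ (A0 + E * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)
```

When the solution varies smoothly with energy, a handful of snapshots spans it well, and the reduced solve reproduces intermediate-energy solutions at a fraction of the cost, which is the same economics that motivates MOR over the many energy points needed at large bias.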
Microscopic calculation of interacting boson model parameters by potential-energy surface mapping
Bentley, I.; Frauendorf, S.
2011-06-15
A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.
TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams
Gelover, E; Wang, D; Hill, P; Flynn, R; Hyer, D
2014-06-15
Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations using the DCS. Using these data, the lateral distribution for individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be analytically generated. The algorithm was validated by computing dose for a single energy layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
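One simple way to realize a 2D asymmetric Gaussian is to let each axis use a different width on either side of the peak, which captures a beamlet sharpened on its trimmed side. The sigma values below are illustrative placeholders; the actual model fits them per trimmer configuration against the MCNPX data.

```python
import numpy as np

def asymmetric_gaussian_2d(x, y, sig_xl=0.4, sig_xr=0.6, sig_yl=0.5, sig_yr=0.5):
    """2D Gaussian with side-dependent sigmas (cm, placeholder values):
    a different width applies on each side of the peak along each axis,
    so the profile is continuous at the peak but asymmetric overall."""
    sx = np.where(x < 0, sig_xl, sig_xr)
    sy = np.where(y < 0, sig_yl, sig_yr)
    return np.exp(-x ** 2 / (2 * sx ** 2)) * np.exp(-y ** 2 / (2 * sy ** 2))
```

With a smaller sigma on the trimmed side, the dose falls off faster there (a sharper penumbra), while the untrimmed side keeps the broader scatter tail.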
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
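The PCA speedup exploits redundancy in the optical-property profiles: expensive solves are performed only at the ensemble centroid and along the leading principal components, and every other member is reconstructed from its PC scores. The following toy (with an invented stand-in for the RT solver and a synthetic low-rank ensemble) illustrates the first-order reconstruction idea; UPCART additionally bins properties and applies correction factors.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_rt(profile):
    """Stand-in for a costly multiple-scattering RT solve (toy nonlinearity)."""
    return float(np.exp(-profile).sum())

# Synthetic ensemble of optical-property profiles with strong redundancy.
nlev, nprof, k = 50, 500, 2
mean_prof = rng.uniform(0.1, 1.0, nlev)
directions = np.linalg.qr(rng.standard_normal((nlev, k)))[0].T
profiles = mean_prof + (0.05 * rng.standard_normal((nprof, k))) @ directions

# PCA: empirical orthogonal functions and per-profile scores.
center = profiles.mean(axis=0)
_, _, Vt = np.linalg.svd(profiles - center, full_matrices=False)
comps = Vt[:k]
scores = (profiles - center) @ comps.T

# Costly solves only at the centroid and one step along each principal
# component; all 500 ensemble members are then reconstructed to first order.
delta = 0.05
f0 = expensive_rt(center)
df = np.array([(expensive_rt(center + delta * c) - f0) / delta for c in comps])
approx = f0 + scores @ df
```

Here k + 1 = 3 expensive solves replace 500, with errors only at second order in the departures from the centroid, the same trade the PCA RT method makes per spectral bin.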
Large uncertainty in soil carbon modelling related to carbon input calculation method
NASA Astrophysics Data System (ADS)
Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Oleson, Jørgen E.
2016-04-01
A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. This DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems that are applied to identical crop rotations. Average calculated soil C inputs vary widely between allometric equations and range from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is due to the higher yields in the latter treatment, i.e., the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References Taghizadeh-Toosi A et al. (2014) C
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd-nitrogen chemistry. Results obtained for solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) × 10^9 cm^-2 s^-1, corresponding to the dayside net production of N atoms needed for transport.
Accurate calculation and modeling of the adiabatic connection in density functional theory
NASA Astrophysics Data System (ADS)
Teale, A. M.; Coriani, S.; Helgaker, T.
2010-04-01
Using a recently implemented technique for the calculation of the adiabatic connection (AC) of density functional theory (DFT) based on Lieb maximization with respect to the external potential, the AC is studied for atoms and molecules containing up to ten electrons: the helium isoelectronic series, the hydrogen molecule, the beryllium isoelectronic series, the neon atom, and the water molecule. The calculation of AC curves by Lieb maximization at various levels of electronic-structure theory is discussed. For each system, the AC curve is calculated using Hartree-Fock (HF) theory, second-order Møller-Plesset (MP2) theory, coupled-cluster singles-and-doubles (CCSD) theory, and coupled-cluster singles-doubles-perturbative-triples [CCSD(T)] theory, expanding the molecular orbitals and the effective external potential in large Gaussian basis sets. The HF AC curve includes a small correlation-energy contribution in the context of DFT, arising from orbital relaxation as the electron-electron interaction is switched on under the constraint that the wave function is always a single determinant. The MP2 and CCSD AC curves recover the bulk of the dynamical correlation energy and their shapes can be understood in terms of a simple energy model constructed from a consideration of the doubles-energy expression at different interaction strengths. Differentiation of this energy expression with respect to the interaction strength leads to a simple two-parameter doubles model (AC-D) for the AC integrand (and hence the correlation energy of DFT) as a function of the interaction strength. The structure of the triples-energy contribution is considered in a similar fashion, leading to a quadratic model for the triples correction to the AC curve (AC-T). From a consideration of the structure of a two-level configuration-interaction (CI) energy expression of the hydrogen molecule, a simple two-parameter CI model (AC-CI) is proposed to account for the effects of static correlation on the
Comparison of calculation models for determination of the mesopause temperature using SATI images
NASA Astrophysics Data System (ADS)
Atanassov, Atanas Marinov
2011-06-01
The Spectral Airglow Temperature Imager (SATI) is an instrument for ground-based spectroscopic measurements of atmospheric nightglow emissions. The instrument was developed specifically for gravity wave investigation. The measured airglow spectra are matched to synthetic spectra, calculated in advance, to determine the temperature in the mesopause region, where the radiation maximum of some O2 emissions is situated. The synthetic spectra are transformed into a format corresponding to the measured spectra so that they can be matched. This transformation is based on the known values of the refractive index and the central wavelength of the interference filter used. A substantial part of the SATI image processing algorithms is concerned with determining these two filter parameters. The results of the original and newly proposed algorithms for filter parameter calculation, and their importance for the final temperature determination based on O2 (864-868 nm) emission measurements, are presented. Considerable systematic differences (~20 K) between temperatures at different points in the mesopause retrieved by the two algorithms are established. The advantage of the proposed algorithm over the original one is illustrated by the retrieved rotational temperatures and by lower error values. Furthermore, the irregular errors in the nocturnal variation of the temperature retrieved by the original algorithm are absent when the proposed approach is applied. Investigating the errors in the calculations and the stability of the individual components of the processing algorithms and calculation models may help achieve better results and enhance the potential of the SATI instrument.
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Franco, E L; Simons, A R
1986-05-01
Two programs are described for emulating the dynamics of Reed-Frost progressive epidemics on a handheld programmable calculator (HP-41C series). The programs provide a complete record of cases, susceptibles, and immunes at each epidemic period, using either the deterministic formulation or the trough analogue of the mechanical model for the stochastic version. Both programs can compute epidemics that include a constant rate of influx or outflux of susceptibles and single or double infectivity time periods. PMID:3962973
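The deterministic Reed-Frost formulation these programs implement is compact enough to sketch directly; the optional constant influx of susceptibles mirrors the feature described in the abstract (a sketch of the standard recurrence, not the HP-41C code itself):

```python
def reed_frost(cases, susceptibles, p, periods, influx=0.0):
    """Deterministic Reed-Frost chain: in each period every susceptible
    escapes infection from each current case independently with
    probability q = 1 - p, so C_{t+1} = S_t * (1 - q**C_t)."""
    history = [(cases, susceptibles)]
    q = 1.0 - p
    for _ in range(periods):
        new_cases = susceptibles * (1.0 - q ** cases)
        susceptibles = susceptibles - new_cases + influx
        cases = new_cases
        history.append((cases, susceptibles))
    return history
```

With influx = 0 the susceptible pool only declines, and recovered cases accumulate as immunes; a positive influx can sustain recurrent epidemic waves.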
Magnusson, J.; Andersson, J.; Bjoernander, M.; Nordblad, P.; Svedlindh, P.
1995-05-01
Experimental results of the temperature, field, and time dependence of the magnetization in high-temperature superconductors displaying the paramagnetic Meissner effect are compared with numerical results from model calculations. In experiments the relaxation rate of the zero-field-cooled magnetization exhibits novel field-dependent properties and the field-cooled magnetization is found to increase with time. A model based on an ensemble of superconducting loops, each loop containing an ordinary Josephson junction or a π junction, is shown to be able to account for most of the experimental results. The time-dependent magnetization is explained by thermally activated flipping of spontaneous orbital magnetic moments, a dynamical process which is fundamentally different from the flux-creep phenomenon usually observed in type-II superconductors.
Test of nuclear level density inputs for Hauser-Feshbach model calculations
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Hornish, M. J.; Massey, T. N.; Salas, A.
2007-10-15
The energy spectra of neutrons, protons, and α-particles have been measured from the d+59Co and 3He+58Fe reactions leading to the same compound nucleus, 61Ni. The experimental cross sections have been compared to Hauser-Feshbach model calculations using different input level density models. None of them were found to agree with experiment. This points to a serious problem with available level density parametrizations, especially those based on neutron resonance spacings and the density of discrete levels. New level densities and corresponding Fermi-gas parameters have been obtained for reaction product nuclei such as 60Ni, 60Co, and 57Fe.
Transonic viscous flow calculations for a turbine cascade with a two equation turbulence model
NASA Technical Reports Server (NTRS)
Boretti, A. A.
1989-01-01
A numerical method for the study of steady, transonic, turbulent viscous flow through plane turbine cascades is presented. The governing equations are written in Favre-averaged form and closed with a first-order model. The turbulent quantities are expressed according to a two-equation κ-ε model in which low-Reynolds-number and compressibility effects are included. The solution is obtained by using a pseudo-unsteady method with improved perturbation propagation properties. The equations are discretized in space by using a finite volume formulation. An explicit multistage dissipative Runge-Kutta algorithm is then used to advance the flow equations in pseudo-time. First results of calculations compare fairly well with experimental data.
A computer code for calculations in the algebraic collective model of the atomic nucleus
NASA Astrophysics Data System (ADS)
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (-2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Calculation of exact p-values when SNPs are tested using multiple genetic models
2014-01-01
Background Several methods have been proposed to account for multiple comparisons in genetic association studies. However, investigators typically test each of the SNPs using multiple genetic models. Association testing using the Cochran-Armitage test for trend, assuming an additive, dominant, or recessive genetic model, is commonly performed. Thus, each SNP is tested three times. Some investigators report the smallest p-value obtained from the three tests corresponding to the three genetic models, but such an approach inherently leads to inflated type 1 error. Because of the small number of tests (three) and the high correlation (functional dependence) among them, the procedures available for accounting for multiple tests are either too conservative or fail to meet the underlying assumptions (e.g., asymptotic multivariate normality or independence among the tests). Results We propose a method to calculate the exact p-value for each SNP tested using different genetic models. We performed simulations, which demonstrated the control of type 1 error and power gains using the proposed approach. We applied the proposed method to compute the p-value for the polymorphism eNOS -786T>C, which has been shown to be associated with breast cancer risk. Conclusions Our findings indicate that the proposed method should be used to maximize power and control type 1 error when analyzing genetic data using additive, dominant, and recessive models. PMID:24950707
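As an illustration of the multiple-model problem (a Monte-Carlo approximation of the joint null of the three correlated tests, not the exact computation proposed in the paper), the minimum p-value across the additive, dominant, and recessive trend tests can be calibrated by permuting case/control labels. The trend statistic here is the equivalent score-test form Z = sqrt(N)·corr(genotype score, status):

```python
import numpy as np

# Genotype scores for the three standard genetic models.
SCORES = {"additive": (0, 1, 2), "dominant": (0, 1, 1), "recessive": (0, 0, 1)}

def trend_z(x, y):
    """Score-test trend statistic: Z = sqrt(N) * corr(score, status)."""
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc @ xc) * (yc @ yc))
    return 0.0 if denom == 0 else np.sqrt(len(x)) * (xc @ yc) / denom

def minp_monte_carlo(case_counts, control_counts, n_perm=10000, seed=0):
    """Permutation-calibrated p-value for max |Z| over the three models.
    case_counts/control_counts: genotype counts (g=0, 1, 2)."""
    rng = np.random.default_rng(seed)
    geno, status = [], []
    for g in range(3):
        geno += [g] * (case_counts[g] + control_counts[g])
        status += [1] * case_counts[g] + [0] * control_counts[g]
    geno, status = np.array(geno), np.array(status, float)
    xs = [np.array(w, float)[geno] for w in SCORES.values()]
    obs = max(abs(trend_z(x, status)) for x in xs)
    hits = sum(max(abs(trend_z(x, rng.permutation(status))) for x in xs) >= obs
               for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
```

Because the three statistics are computed on the same permuted labels, the correlation among the tests is preserved automatically, which is exactly what Bonferroni-style corrections fail to exploit.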
A three-dimensional tunnel model for calculation of train-induced ground vibration
NASA Astrophysics Data System (ADS)
Forrest, J. A.; Hunt, H. E. M.
2006-07-01
The frequency range of interest for ground vibration from underground urban railways is approximately 20 to 100 Hz. For typical soils, the wavelengths of ground vibration in this frequency range are of the order of the spacing of train axles, the tunnel diameter and the distance from the tunnel to nearby building foundations. For accurate modelling, the interactions between these entities therefore have to be taken into account. This paper describes an analytical three-dimensional model for the dynamics of a deep underground railway tunnel of circular cross-section. The tunnel is conceptualised as an infinitely long, thin cylindrical shell surrounded by soil of infinite radial extent. The soil is modelled by means of the wave equations for an elastic continuum. The coupled problem is solved in the frequency domain by Fourier decomposition into ring modes circumferentially and a Fourier transform into the wavenumber domain longitudinally. Numerical results for the tunnel and soil responses due to a normal point load applied to the tunnel invert are presented. The tunnel model is suitable for use in combination with track models to calculate the ground vibration due to excitation by running trains and to evaluate different track configurations.
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-02-15
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom, neglecting the plaque and interseed effects, is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media, is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model
Using molecular dynamics and quantum mechanics calculations to model fluorescence observables
Speelman, Amy L.; Muñoz-Losa, Aurora; Hinkle, Katie L.; VanBeek, Darren B.; Mennucci, Benedetta; Krueger, Brent P.
2011-01-01
We provide a critical examination of two different methods for generating a donor-acceptor electronic coupling trajectory from a molecular dynamics (MD) trajectory and three methods for sampling that coupling trajectory, allowing the modeling of experimental observables directly from the MD simulation. In the first coupling method we perform a single quantum-mechanical (QM) calculation to characterize the excited state behavior, specifically the transition dipole moment, of the fluorescent probe, which is then mapped onto the configuration space sampled by MD. We then utilize these transition dipoles within the ideal dipole approximation (IDA) to determine the electronic coupling between the probes that mediates the transfer of energy. In the second method we perform a QM calculation on each snapshot and use the complete transition densities to calculate the electronic coupling without need for the IDA. The resulting coupling trajectories are then sampled using three methods ranging from an independent sampling of each trajectory point (the Independent Snapshot Method) to a Markov chain treatment that accounts for the dynamics of the coupling in determining effective rates. The results show that the IDA significantly overestimates the energy transfer rate (by a factor of 2.6) during the portions of the trajectory in which the probes are close to each other. Comparison of the sampling methods shows that the Markov chain approach yields more realistic observables at both high and low FRET efficiencies. Differences between the three sampling methods are discussed in terms of the different mechanisms for averaging over structural dynamics in the system. Convergence of the Markov chain method is carefully examined. Together, the methods for estimating coupling and for sampling the coupling provide a mechanism for directly connecting the structural dynamics modeled by MD with fluorescence observables determined through FRET experiments. PMID:21417498
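The ideal dipole approximation (IDA) coupling used in the first method has the familiar dipole-dipole form V = κ|μ_D||μ_A|/R³ with orientation factor κ. A minimal sketch (atomic units; the specific vectors in the test are arbitrary examples, and a full implementation would read μ and R from each MD snapshot):

```python
import numpy as np

def ida_coupling(mu_d, mu_a, r_d, r_a):
    """Ideal-dipole-approximation electronic coupling (atomic units):
    V = kappa * |mu_d| * |mu_a| / R**3, where
    kappa = e_d.e_a - 3*(e_d.e_R)*(e_a.e_R)."""
    rvec = r_a - r_d
    R = np.linalg.norm(rvec)
    e_r = rvec / R
    e_d = mu_d / np.linalg.norm(mu_d)
    e_a = mu_a / np.linalg.norm(mu_a)
    kappa = e_d @ e_a - 3 * (e_d @ e_r) * (e_a @ e_r)
    return kappa * np.linalg.norm(mu_d) * np.linalg.norm(mu_a) / R**3
```

The 1/R³ divergence of this expression is why the IDA overestimates the coupling when the probes approach each other, as the abstract reports; the transition-density method avoids the point-dipole limit.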
Long-term changes in the mesosphere calculated by a two-dimensional model
NASA Astrophysics Data System (ADS)
Gruzdev, Aleksandr N.; Brasseur, Guy P.
2005-02-01
We have used the interactive two-dimensional model SOCRATES to investigate the thermal and the chemical response of the mesosphere to the changes in greenhouse gas concentrations observed in the past 50 years (CO2, CH4, water vapor, N2O, CFCs), and to specified changes in gravity wave drag and diffusion in the upper mesosphere. When considering the observed increase in the abundances of greenhouse gases for the past 50 years, a cooling of 3-7 K is calculated in the mesopause region together with a cooling of 4-6 K in the middle mesosphere. Changes in the meridional circulation of the mesosphere damp the pure radiative thermal effect of the greenhouse gases. The largest cooling in the winter upper mesosphere-mesopause region occurs when the observed increase in concentrations of greenhouse gases and the strengthening of the gravity wave drag and diffusion are considered simultaneously. Depending on the adopted strengthening of the gravity wave drag and diffusion, a cooling varying from typically 6-10 K to 10-20 K over the past 50 years is predicted in the extratropical upper mesosphere during wintertime. In summer, however, consistent with observations, the thermal response calculated by the model is insignificant in the vicinity of the mesopause. Although the calculated cooling of the winter mesopause is still less than suggested by some observations, these results lead to the conclusion that the increase in the abundances of greenhouse gases alone may not entirely explain the observed temperature trends in the mesosphere. Long-term changes in the dynamics of the middle atmosphere (and the troposphere), including changes in gravity wave activity, may have contributed significantly to the observed long-term changes in thermal structure and chemical composition of the mesosphere.
A model to calculate consistent atmospheric emission projections and its application to Spain
NASA Astrophysics Data System (ADS)
Lumbreras, Julio; Borge, Rafael; de Andrés, Juan Manuel; Rodríguez, Encarnación
Global warming and air quality are headline environmental issues of our time, and policy must preempt negative international effects with forward-looking strategies. As part of the revision of the European National Emission Ceilings Directive, atmospheric emission projections for European Union countries are being calculated. These projections are useful for driving European air quality analyses and supporting wide-scale decision-making. However, when evaluating specific policies and measures at the sectoral level, a more detailed approach is needed. This paper presents an original methodology to evaluate emission projections. Emission projections are calculated for each emitting activity under three scenarios: without measures (business as usual), with measures (baseline) and with additional measures (target). The methodology developed allows the estimation of highly disaggregated, multi-pollutant, consistent emissions for a whole country or region. To ensure consistency with past emissions included in atmospheric emission inventories and coherence among the individual activities, the consistent emission projection (CEP) model incorporates harmonization and integration criteria as well as quality assurance/quality control (QA/QC) procedures. This study includes a sensitivity analysis as a first approach to uncertainty evaluation. The aim of the model presented in this contribution is to support the decision-making process through the assessment of future emission scenarios, taking into account the effect of different detailed technical and non-technical measures, and it may also constitute the basis for air quality modelling. The system is designed to produce the information and formats required by international reporting obligations, and it allows comparing national results with lower-resolution models such as RAINS/GAINS. The methodology has been successfully applied and tested to evaluate Spanish emission projections up to 2020 for 26
NASA Astrophysics Data System (ADS)
Jacob, D.; Palacios, J. J.
2011-01-01
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both cases is given. From the systematic study of nanocontacts made of representative metallic elements, we conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or not precisely defined. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
NASA Astrophysics Data System (ADS)
Margulis, Vl A.; Muryumin, E. E.; Gaiduk, E. A.
2016-05-01
An effective anisotropic tight-binding model is developed to analytically describe the low-energy electronic structure and optical response of phosphorene (a black phosphorus (BP) monolayer). Within the framework of the model, we derive explicit closed-form expressions, in terms of elementary functions, for the elements of the optical conductivity tensor of phosphorene. These relations provide a convenient parametrization of the highly anisotropic optical response of phosphorene, which allows the reflectance, transmittance, and absorbance of this material to be easily calculated as a function of the frequency of the incident radiation at arbitrary angles of incidence. The results of such a calculation are presented for both a free-standing phosphorene layer and the phosphorene layer deposited on a SiO2 substrate, and for the two principal cases of polarization of the incident radiation either parallel to or normal to the plane of incidence. Our findings (e.g., a ‘quasi-Brewster’ effect in the reflectance of the phosphorene/SiO2 overlayer system) pave the way for developing a new, purely optical method of distinguishing BP monolayers.
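The paper's closed-form conductivity expressions are not reproduced here, but the standard step from a sheet conductivity σ to reflectance, transmittance and absorbance of a 2D layer can be sketched for the simplest case of normal incidence (thin-sheet Fresnel coefficients; the paper itself treats arbitrary angles and both polarizations). The numeric σ in the test is graphene's universal value, used only as a sanity check, not a phosphorene value:

```python
Z0 = 376.730  # impedance of free space, ohm

def sheet_rta(sigma, n1=1.0, n2=1.0):
    """Normal-incidence reflectance, transmittance, absorbance of a
    conducting sheet (2D conductivity sigma, siemens; may be complex)
    sandwiched between media of refractive indices n1 and n2."""
    denom = n1 + n2 + Z0 * sigma
    r = (n1 - n2 - Z0 * sigma) / denom   # amplitude reflection coefficient
    t = 2 * n1 / denom                   # amplitude transmission coefficient
    R = abs(r) ** 2
    T = (n2 / n1) * abs(t) ** 2
    return R, T, 1.0 - R - T
```

Setting n2 to the substrate index reproduces the layer-on-SiO2 geometry considered in the paper; the direction-dependent σ of phosphorene then makes R, T, A depend on the in-plane polarization direction.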
A Multilayered Box Model for Calculating Preliminary Remediation Goals in Soil Screening
Shan, Chao; Javandel, Iraj
2004-05-21
In the process of screening a soil against a certain contaminant, we define the health-risk based preliminary remediation goal (PRG) as the contaminant concentration above which some remedial action may be required. PRG is thus the first standard (or guidance) for judging a site. An over-estimated PRG (a too-large value) may cause us to miss some contaminated sites that can threaten human health and the environment. An under-estimated PRG (a too-small value), on the other hand, may lead to unnecessary cleanup and waste tremendous resources. The PRGs for soils are often calculated on the assumption that the contaminant concentration in soil does not change with time. However, that concentration usually decreases with time as a result of different chemical and transport mechanisms. The static assumption thus exaggerates the long-term exposure dose and results in a too-small PRG. We present a box model that considers all important transport processes and obeys the law of mass conservation. We can use the model as a tool to estimate the transient contaminant concentrations in air, soil and groundwater. Using these concentrations in conjunction with appropriate health risk parameters, we may estimate the PRGs for different contaminants. As an example, we calculated the tritium PRG for residential soils. The result is quite different from, but within the range of, the two versions of the corresponding PRG previously recommended by the U.S. EPA.
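A minimal single-box illustration of why the static assumption yields a too-small PRG, assuming a single first-order loss rate (the actual multilayered model tracks transport among air, soil and groundwater, not just one decay constant): if concentration decays as C0·exp(-kt), the average exposure concentration over T years is C0·(1-exp(-kT))/(kT), so the PRG can be relaxed by the inverse of that factor:

```python
import math

def transient_prg(prg_static, loss_rate, exposure_years):
    """Scale a static PRG for first-order contaminant loss (rate k, 1/yr).
    Average concentration over T years is C0*(1-exp(-k*T))/(k*T), so the
    allowable initial concentration grows by the reciprocal factor."""
    kT = loss_rate * exposure_years
    if kT == 0.0:
        return prg_static
    return prg_static * kT / (1.0 - math.exp(-kT))
```

For tritium (half-life about 12.3 yr) over a 30-year residential exposure, this simple correction alone roughly doubles the PRG, illustrating the direction, though not the magnitude, of the difference the full box model produces.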
Calculated dosimetric parameters of the IoGold 125I source model 3631-A.
Wierzbicki, J G; Rivard, M J; Waid, D S; Arterbery, V E
1998-11-01
Basic dosimetric parameters as recommended by the AAPM Task Group No. 43 (TG-43) have been determined for the recently available IoGold 125I brachytherapy seeds. Monte Carlo methods (MCNP) were used to calculate these parameters in water, and the results were compared with soon-to-be-published experimental parameters for 125I IoGold seeds, as well as with parameters for the model 6702 and 6711 125I seeds. These parameters were the radial dose function, anisotropy factor and constant, and the dose rate constant. Using MCNP, values for the radial dose function at 0.5, 2.0, and 5.0 cm were 1.053, 0.877, and 0.443, respectively. The anisotropy factor was 0.975, 0.946, 0.945, and 0.952 at 0.5, 1.0, 2.0, and 5.0 cm, respectively, with an anisotropy constant of 0.95. The IoGold dose rate constant was determined by excluding the low-energy titanium characteristic x rays produced in the IoGold titanium capsule. Using this post-TG-43 revised NIST air kerma methodology, the IoGold dose rate constant was 0.96 cGy h-1 U-1. These calculated parameters for IoGold seeds were compared with those determined experimentally for IoGold seeds, and also with parameters for the model 6702 and 6711 seeds as presented in TG-43. PMID:9829245
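Using the values reported above, the TG-43 point-source approximation gives a quick transverse-axis dose-rate estimate. Two simplifications here are ours, not the paper's: g(1 cm) = 1 is taken by the TG-43 definition, and g(r) is linearly interpolated between the tabulated radii:

```python
import numpy as np

# Radial dose function values reported in the abstract (IoGold 125I, MCNP);
# g(1 cm) = 1 by the TG-43 definition.
G_R = {0.5: 1.053, 1.0: 1.0, 2.0: 0.877, 5.0: 0.443}
LAMBDA = 0.96   # dose rate constant, cGy / (h * U)
PHI_AN = 0.95   # anisotropy constant

def dose_rate(air_kerma_strength, r_cm):
    """TG-43 point-source approximation on the transverse axis:
    D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an, with r0 = 1 cm."""
    radii = sorted(G_R)
    g = np.interp(r_cm, radii, [G_R[r] for r in radii])
    return air_kerma_strength * LAMBDA * (1.0 / r_cm) ** 2 * g * PHI_AN
```

For a 1 U seed this gives 0.912 cGy/h at 1 cm, i.e. the dose rate constant times the anisotropy constant, as expected from the formalism.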
Modelling defects in Ni–Al with EAM and DFT calculations
NASA Astrophysics Data System (ADS)
Bianchini, F.; Kermode, J. R.; De Vita, A.
2016-05-01
We present detailed comparisons between the results of embedded atom model (EAM) and density functional theory (DFT) calculations on defected Ni alloy systems. We find that the EAM interatomic potentials reproduce low-temperature structural properties in both the γ and γ′ phases, and yield accurate atomic forces in bulk-like configurations even at temperatures as high as ∼1200 K. However, they fail to describe more complex chemical bonding, in configurations including defects such as vacancies or dislocations, for which we observe significant deviations between the EAM and DFT forces, suggesting that derived properties such as (free) energy barriers to vacancy migration and dislocation glide may also be inaccurate. Testing against full DFT calculations further reveals that these deviations have a local character, and are typically severe only up to the first or second neighbours of the defect. This suggests that a QM/MM approach can be used to accurately reproduce QM observables, fully exploiting the EAM potential efficiency in the MM zone. This approach could be easily extended to ternary systems for which developing a reliable and fully transferable EAM parameterisation would be extremely challenging, e.g. Ni alloy model systems with a W- or Re-containing QM zone.
A comparison of ozone trends from SME and SBUV satellite observations and model calculations
NASA Technical Reports Server (NTRS)
Rusch, D. W.; Clancy, R. T.
1988-01-01
Data on monthly ozone abundance trends near the stratopause, observed by the Ultraviolet Spectrometer (UVS) on the SME and by the Solar Backscatter Ultraviolet Instrument (SBUV) on NIMBUS-7, are presented for June, September, and January of the years 1982-1986. Globally averaged trends determined from the SME data (-0.5 ± 1.3 percent/yr) were found to fall within the range of model calculations by Rusch and Clancy (1988); the SBUV trends, on the other hand, were found to exceed maximum predicted ozone decreases by a factor of 3 or more. Detailed comparison of the two data sets indicated that an absolute offset of 3 percent/yr accounts for much of the difference between the two trends; the offset is considered to be due to incomplete characterization of the SBUV calibration drift. Both the UVS and SBUV data exhibited similar seasonal and latitudinal variations in ozone trends, which were reproduced by photochemical model calculations that included latitude-dependent NMC temperature trends over the 1982-1986 period.
NASA Astrophysics Data System (ADS)
Tkachenko, E. A.; Postnikov, D. V.; Blesman, A. I.; Polonyankin, D. A.
2016-02-01
The paper justifies the usefulness of preliminary ion implantation, performed before a protective coating is formed by magnetron sputtering, for improving the coating's adhesion and hence its durability. Important characteristics of coatings include the adhesion force and energy. To select the optimal modes of coating formation, materials and equipment, a theoretical method for calculating the adhesion force in binary metallic systems is proposed. The adhesion force and energy depend on the elemental distribution over the depth of the coating and on the single-bond force both in the substrate and in the coating. In addition, the adhesion force is determined by a coefficient that accounts for the reduction in the number of possible bonds and depends on surface purity and the presence of structural defects. The developed model includes all of the above factors. The distribution of elements over the depth of the coating was estimated using a kinetic model of mass transfer by the vacancy mechanism. The paper presents the results of the adhesion force calculation for a chromium coating on the surface of A21382 steel.
A 3-D Theoretical Model for Calculating Plasma Effects in Germanium Detectors
NASA Astrophysics Data System (ADS)
Wei, Wenzhao; Liu, Jing; Mei, Dongming; Cubed Collaboration
2015-04-01
In the detection of WIMP-induced nuclear recoils with Ge detectors, the main background source is the electron recoil produced by natural radioactivity. The capability of discriminating nuclear recoil (n) from electron recoil (γ) is crucial to WIMP searches. Digital pulse shape analysis is an encouraging approach to this discrimination: since nuclei are much heavier than electrons, and a heavier particle generates ionization more densely along its path, a nuclear recoil forms a plasma-like cloud of charge that shields its interior from the influence of the electric field. The time needed for total disintegration of this plasma region is called the plasma time. The plasma time depends on the initial density and radius of the plasma-like cloud, the diffusion constant for charge carriers, and the strength of the electric field. In this work, we developed a 3-D theoretical model for calculating the plasma time in Ge detectors. Using this model, we calculated the plasma time for both nuclear recoils and electron recoils to study the possibility for Ge detectors to realize n/γ discrimination and improve detector sensitivity in detecting low-mass WIMPs. This work is supported in part by NSF grant PHY-0758120, DOE Grant DE-FG02-10ER46709, and the State of South Dakota.
NASA Astrophysics Data System (ADS)
Giannoglou, V.; Stylianidis, E.
2016-06-01
Scoliosis is a 3D deformity of the human spinal column caused by its lateral bending, leading to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research carried out in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four areas, namely: X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
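For reference, the Cobb angle itself is a simple geometric quantity once the two end-vertebrae have been identified; the hard part the surveyed research addresses is locating their endplates automatically in the X-ray. A minimal sketch of the final angle computation, assuming the endplate slopes in the coronal plane are already available:

```python
import math

def cobb_angle(slope_upper, slope_lower):
    """Cobb angle (degrees) between the superior endplate of the upper
    end-vertebra and the inferior endplate of the lower end-vertebra,
    given their slopes in the coronal (frontal) plane."""
    return abs(math.degrees(math.atan(slope_upper) - math.atan(slope_lower)))

# Endplates tilted +15 and -20 degrees from horizontal -> a 35 degree curve.
angle = cobb_angle(math.tan(math.radians(15)), math.tan(math.radians(-20)))
```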
NASA Astrophysics Data System (ADS)
Inaniwa, T.; Kanematsu, N.
2015-01-01
In scanned carbon-ion (C-ion) radiotherapy, some primary C-ions undergo nuclear reactions before reaching the target and the resulting particles deliver doses to regions at a significant distance from the central axis of the beam. The effects of these particles on physical dose distribution are accounted for in treatment planning by representing the transverse profile of the scanned C-ion beam as the superposition of three Gaussian distributions. In the calculation of biological dose distribution, however, the radiation quality of the scanned C-ion beam has been assumed to be uniform over its cross-section, taking the average value over the plane at a given depth (monochrome model). Since these particles, which have relatively low radiation quality, spread widely compared to the primary C-ions, the radiation quality of the beam should vary with radial distance from the central beam axis. To represent its transverse distribution, we propose a trichrome beam model in which primary C-ions, heavy fragments with atomic number Z ≥ 3, and light fragments with Z ≤ 2 are assigned to the first, second, and third Gaussian components, respectively. Assuming a realistic beam-delivery system, we performed computer simulations using Geant4 Monte Carlo code for analytical beam modeling of the monochrome and trichrome models. The analytical beam models were integrated into a treatment planning system for scanned C-ion radiotherapy. A target volume of 20 × 20 × 40 mm3 was defined within a water phantom. A uniform biological dose of 2.65 Gy (RBE) was planned for the target with the two beam models based on the microdosimetric kinetic model (MKM). The plans were recalculated with Geant4, and the recalculated biological dose distributions were compared with the planned distributions. The mean target dose of the recalculated distribution with the monochrome model was 2.72 Gy (RBE), while the dose with the trichrome model was 2.64 Gy (RBE). The monochrome model
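The transverse decomposition described above can be sketched as a weighted superposition of Gaussian components, one per particle class. The weights and widths below are illustrative placeholders, not the fitted values from the Geant4-based beam model:

```python
import math

def transverse_fluence(r, weights, sigmas):
    """Transverse fluence at radius r (mm) from the beam axis, modeled as
    a superposition of three 2-D Gaussians: primary C-ions, heavy
    fragments (Z >= 3), and light fragments (Z <= 2)."""
    return sum(w * math.exp(-r ** 2 / (2 * s ** 2)) / (2 * math.pi * s ** 2)
               for w, s in zip(weights, sigmas))

# Illustrative parameters: primaries narrow and dominant, light fragments
# forming a wide, low-amplitude halo.
w, s = (0.90, 0.06, 0.04), (3.0, 10.0, 25.0)
```

Assigning a distinct radiation quality to each component (rather than one plane-averaged value) is what separates the trichrome model from the monochrome one.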
A finite-difference time-domain technique was used to calculate the specific absorption rate (SAR) at various sites in a heterogeneous block model of man. The block model represented a close approximation to a full-scale heterogeneous phantom model. Both models were comprised of a ...
Modeling a superficial radiotherapy X-ray source for relative dose calculations.
Johnstone, Christopher D; LaFontaine, Richard; Poirier, Yannick; Tambasco, Mauro
2015-01-01
The purpose of this study was to empirically characterize and validate a kilovoltage (kV) X-ray beam source model of a superficial X-ray unit for relative dose calculations in water and assess the accuracy of the British Journal of Radiology Supplement 25 (BJR 25) percentage depth dose (PDD) data. We measured central axis PDDs and dose profiles using an Xstrahl 150 X-ray system. We also compared the measured and calculated PDDs to those in the BJR 25. The Xstrahl source was modeled as an effective point source with varying spatial fluence and spectra. In-air ionization chamber measurements were made along the x- and y-axes of the X-ray beam to derive the spatial fluence and half-value layer (HVL) measurements were made to derive the spatially varying spectra. This beam characterization and resulting source model was used as input for our in-house dose calculation software (kVDoseCalc) to compute radiation dose at points of interest (POIs). The PDDs and dose profiles were measured using 2, 5, and 15 cm cone sizes at 80, 120, 140, and 150 kVp energies in a scanning water phantom using IBA Farmer-type ionization chambers of volumes 0.65 and 0.13 cc, respectively. The percent difference in the computed PDDs compared with our measurements range from -4.8% to 4.8%, with an overall mean percent difference and standard deviation of 1.5% and 0.7%, respectively. The percent difference between our PDD measurements and those from BJR 25 range from -14.0% to 15.7%, with an overall mean percent difference and standard deviation of 4.9% and 2.1%, respectively - showing that the measurements are in much better agreement with kVDoseCalc than BJR 25. The range in percent difference between kVDoseCalc and measurement for profiles was -5.9% to 5.9%, with an overall mean percent difference and standard deviation of 1.4% and 1.4%, respectively. The results demonstrate that our empirically based X-ray source modeling approach for superficial X-ray therapy can be used to accurately
Calculation of Heavy Ion Inactivation and Mutation Rates in Radial Dose Model of Track Structure
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.; Shavers, Mark R.; Katz, Robert
1997-01-01
In the track structure model, the inactivation cross section is found by summing an inactivation probability over all impact parameters from the ion to the sensitive sites within the cell nucleus. The inactivation probability is evaluated by using the dose response of the system to gamma rays and the radial dose of the ions, and may be equal to unity at small impact parameters. We apply the track structure model to recent data with heavy ion beams irradiating biological samples of E. coli, B. subtilis spores, and Chinese hamster (V79) cells. Heavy ions have observed cross sections for inactivation that approach and sometimes exceed the geometric size of the cell nucleus. We show how the effects of inactivation may be taken into account in the evaluation of the mutation cross sections in the track structure model through correlation of sites for gene mutation and cell inactivation. The model is fit to available data for HPRT (hypoxanthine guanine phosphoribosyl transferase) mutations in V79 cells, and good agreement is found. Calculations show the high probability for mutation by relativistic ions due to the radial extension of the ion's track by delta rays. The effects of inactivation on mutation rates make it very unlikely that a single parameter such as LET (linear energy transfer) can be used to specify radiation quality for heavy ion bombardment.
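The summation over impact parameters described above amounts to a radial integral σ = ∫ 2πb P(b) db. A minimal numerical sketch, with a toy inactivation probability standing in for the gamma-ray dose-response and radial-dose inputs the model actually uses:

```python
import math

def inactivation_cross_section(P, b_max=50.0, n=5000):
    """Inactivation cross section sigma = integral of 2*pi*b*P(b) db over
    impact parameter b, evaluated with the trapezoidal rule. P(b) is the
    inactivation probability, which may saturate at unity near the core."""
    db = b_max / n
    ys = [2 * math.pi * (i * db) * P(i * db) for i in range(n + 1)]
    return db * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Toy P(b): certain inactivation inside a 1-um core, Gaussian fall-off
# outside (illustrative only, not the Katz radial-dose form).
sigma = inactivation_cross_section(lambda b: 1.0 if b < 1.0 else math.exp(-(b - 1.0) ** 2))
```

The delta-ray tail in P(b) is why the integrated cross section can exceed the geometric area of the sensitive site.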
Comparative Assessment of Models and Methods To Calculate Grid Electricity Emissions.
Ryan, Nicole A; Johnson, Jeremiah X; Keoleian, Gregory A
2016-09-01
Due to the complexity of power systems, tracking emissions attributable to a specific electrical load is a daunting challenge but essential for many environmental impact studies. Currently, no consensus exists on appropriate methods for quantifying emissions from particular electricity loads. This paper reviews a wide range of the existing methods, detailing their functionality, tractability, and appropriate use. We identified and reviewed 32 methods and models and classified them into two distinct categories: empirical data and relationship models, and power system optimization models. To illustrate the impact of method selection, we calculate the CO2 combustion emissions factors associated with electric-vehicle charging using 10 methods at nine charging station locations around the United States. Across the methods, we found differences of up to 68% from the mean CO2 emissions factor for a given charging site among both marginal and average emissions factors, and up to a 63% difference from the mean across average emissions factors. Our results underscore the importance of method selection and the need for a consensus on approaches appropriate for particular loads and research questions being addressed, in order to achieve results that are more consistent across studies and allow for soundly supported policy decisions. The paper addresses this issue by offering a set of recommendations for determining an appropriate model type on the basis of the load characteristics and study objectives. PMID:27499211
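The distinction between average and marginal emissions factors, which drives much of the spread the authors report, can be illustrated with a toy merit-order dispatch. The plant names, capacities, and emissions factors below are hypothetical:

```python
def average_ef(gen_mwh, ef):
    """Generation-weighted average CO2 emissions factor (kg CO2/MWh)."""
    return sum(gen_mwh[p] * ef[p] for p in gen_mwh) / sum(gen_mwh.values())

def marginal_ef(merit_order, capacity, ef, load_mwh):
    """Emissions factor of the marginal unit: walk the merit-order stack
    until cumulative capacity covers the load; that unit serves the next
    increment of demand (a deliberate simplification of real dispatch)."""
    served = 0.0
    for plant in merit_order:
        served += capacity[plant]
        if served > load_mwh:
            return ef[plant]
    raise ValueError("load exceeds available capacity")

# Hypothetical three-plant system (EFs in kg CO2/MWh).
ef = {"nuclear": 0.0, "gas": 400.0, "coal": 900.0}
capacity = {"nuclear": 500.0, "gas": 300.0, "coal": 400.0}
```

At 600 MWh of load the marginal unit is gas (400 kg/MWh) even though the average factor over the same hour is far lower, showing how an EV-charging study's answer depends on which factor it asks for.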
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these fission fragments are the fission yield. Energy entanglement is neglected in this research. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used: the scission point of the two curves (Rc), the means of the left and right curves (μL and μR), and the deviations of the left and right curves (σL and σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
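A Monte Carlo draw from two intersecting Gaussian curves, as in the toy model, can be sketched as follows. The equal mixture weight and the parameter values in the example are assumptions for illustration, not the paper's fitted values:

```python
import random

def sample_fission_yields(n, mu_L, mu_R, sigma_L, sigma_R, seed=1):
    """Draw n light/heavy fragment masses from an equal-weight mixture of
    the two Gaussian curves (parameters named after the abstract's
    mu_L, mu_R, sigma_L, sigma_R). The complementary fragment of a
    nucleus with A nucleons would carry A - m."""
    rng = random.Random(seed)
    return [rng.gauss(mu_L, sigma_L) if rng.random() < 0.5
            else rng.gauss(mu_R, sigma_R)
            for _ in range(n)]

# Illustrative asymmetric-fission parameters (mass numbers).
yields = sample_fission_yields(10000, mu_L=95, mu_R=140, sigma_L=6, sigma_R=6)
```

Widening σ or shifting μ in such a sampler directly changes the spread and peak positions of the simulated yield distribution, mirroring the sensitivity the abstract reports.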
2014-01-01
Background Exact drug dosing in isolated limb perfusion (ILP) and infusion (ILI) is essential. We developed and evaluated a model for calculating the volume of extremities and compared this model with body weight- and height-dependent parameters. Methods The extremity was modeled by a row of coupled truncated cones. The sizes of the truncated cone bases were derived from the circumference measurements of the extremity at predefined levels (5 cm). The resulting volumes were added. This extremity volume model was correlated to the computed tomography (CT) volume data of the extremity (total limb volume). The extremity volume was also correlated with the patient’s body weight, body mass index (BMI) and ideal body weight (IBW). The no-fat CT limb volume was correlated with the circumference-measured limb volume corrected by the ideal-body-weight to actual-body-weight ratio (IBW corrected-limb-volume). Results The correlation between the CT volume and the volume measured by the circumference was high and significant. There was no correlation between the limb volume and the bare body weight, BMI or IBW. The correlation between the no-fat CT volume and IBW-corrected limb volume was high and significant. Conclusions An appropriate drug dosing in ILP can be achieved by combining the limb volume with the simple circumference measurements and the IBW to body-weight ratio. PMID:24684972
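The truncated-cone model lends itself to a direct implementation: each segment between two circumference measurements is a frustum with volume V = πh/3 (R² + Rr + r²), and the radii are recovered from the circumferences as r = C/2π. A sketch:

```python
import math

def limb_volume(circumferences_cm, h=5.0):
    """Extremity volume (cm^3) from circumference measurements taken every
    h cm along the limb. Each segment is a truncated cone (frustum):
    V = pi*h/3 * (R^2 + R*r + r^2), with radii r = C / (2*pi)."""
    radii = [c / (2 * math.pi) for c in circumferences_cm]
    return sum(math.pi * h / 3.0 * (R * R + R * r + r * r)
               for R, r in zip(radii, radii[1:]))
```

The IBW correction from the abstract would then scale this volume by the ideal-to-actual body-weight ratio before dose calculation.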
Modeling for calculation of vanadium oxide film composition in reactive-sputtering process
Yu He; Jiang Yadong; Wang Tao; Wu Zhiming; Yu Junsheng; Wei Xiongbang
2010-05-15
A modified model describing the changing ratio of vanadium to oxide on the target and substrate as a function of oxygen flow is described. This ratio is extremely sensitive to the deposition conditions during the vanadium oxide (VOx) reactive magnetron-sputtering process. The method in this article extends the previously presented Berg model, in which only a single stoichiometric compound layer was taken into consideration. This work deals with reactive magnetron sputtering of vanadium oxide films with different oxygen contents from a vanadium metal target. The presence of mixed vanadium oxides produced at both the target and substrate surfaces during the reactive-sputtering process is included. It is shown that the model can be used to optimize film composition with respect to oxygen flow in a stable, hysteresis-free reactive-sputtering process. A systematic experimental study of the deposition rate of VOx with respect to target ion current was also made. The theoretical calculations from the model were verified to be in good agreement with the experimental results.
Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.
Edwards, Stephen; Schick, Daniel
2016-04-01
Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model. PMID:26910026
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
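The simple Monte Carlo quantile step can be sketched as follows, using uniform draws over the stated extreme parameter ranges. The paper additionally handles the ordering of parameters within groups and random errors in the dependent variable, both omitted here, and the linear model in the example is hypothetical:

```python
import random

def prediction_interval(model, param_ranges, n=20000, level=0.95, seed=0):
    """Empirical interval for a model output: draw parameter sets
    uniformly from their extreme ranges, propagate each through the
    model, and take the empirical quantiles of the sorted outputs.
    Uniform sampling is a simplifying assumption of this sketch."""
    rng = random.Random(seed)
    outs = sorted(model([rng.uniform(lo, hi) for lo, hi in param_ranges])
                  for _ in range(n))
    k = int((1 - level) / 2 * n)
    return outs[k], outs[n - 1 - k]

# Hypothetical output: head at a point as a linear function of two
# parameters, each known only to lie in [0, 1].
lo, hi = prediction_interval(lambda p: 2 * p[0] + p[1], [(0.0, 1.0), (0.0, 1.0)])
```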
Present state-of-the-art of two-phase flow model calculations
Lyczkowski, R.W.; Ding, Jianmin; Bouillard, J.X.
1992-07-01
Argonne National Laboratory (ANL) has developed two- and three-dimensional computer programs to predict hydrodynamics in complex fluid/solids systems including, for example, atmospheric and pressurized bubbling and circulating fluidized-bed combustors and gasifiers, concentrated suspension (slurry) piping systems, and advanced particle-bed reactors for space-based applications. The computer programs are based upon phenomenological mechanistic models and can predict the frequency of bubble formation, bubble size and growth, bubble rise velocity, solids volume fraction, gas and solids velocities, and low-dimensional chaotic attractors. The results of these hydrodynamic calculations are used as inputs to mechanistic models to predict heat transfer and erosion, and have been used to produce simplified models and guidelines to assist in design and scaling. An extensive coordinated effort involving industry, government, and university laboratory data has served to validate the various models. Babcock & Wilcox (B&W), in close collaboration with ANL, has developed the three-dimensional FORCE2 computer program, which handles both transient and steady-state simulations.
Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite
NASA Astrophysics Data System (ADS)
Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia
2015-12-01
Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption onto minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution, so it is essential to understand the surface adsorptive behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by the Freundlich isotherm model. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using PEST 13.0 together with the PHREEQC 3.1.2 code, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that impurity cations (e.g. Al3+, Fe3+) in the diatomite play a leading role in Pb(II) adsorption, and that complex formation together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.
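The Freundlich isotherm fit mentioned above is commonly done by linear regression in log-log space, since q = Kf·C^(1/n) becomes log q = log Kf + (1/n) log C. A minimal sketch with synthetic data (the PEST/PHREEQC surface-complexation fitting the authors perform is far more involved):

```python
import math

def fit_freundlich(C, q):
    """Fit the Freundlich isotherm q = Kf * C**(1/n) by least-squares
    linear regression in log-log space; returns (Kf, n)."""
    x = [math.log(c) for c in C]
    y = [math.log(v) for v in q]
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return math.exp(ybar - slope * xbar), 1.0 / slope

# Synthetic data generated with Kf = 2.0 and n = 2 recovers its parameters.
C = [0.5, 1.0, 2.0, 5.0, 10.0]
Kf, n = fit_freundlich(C, [2.0 * c ** 0.5 for c in C])
```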
Online calculation of global marine halocarbon emissions in the chemistry climate model EMAC
NASA Astrophysics Data System (ADS)
Lennartz, Sinikka T.; Krysztofiak-Tong, Gisèle; Sinnhuber, Björn-Martin; Marandino, Christa A.; Tegtmeier, Susann; Krüger, Kirstin; Ziska, Franziska; Quack, Birgit
2015-04-01
Marine-produced trace gases such as dibromomethane (CH2Br2), bromoform (CHBr3) and methyl iodide (CH3I) significantly impact tropospheric and stratospheric chemistry. Marine emissions are the dominant source of halocarbons to the atmosphere, and it is therefore crucial to represent them accurately in order to model their impact on atmospheric chemistry. Chemistry climate models are a frequently used tool for quantifying the influence of halocarbons on ozone depletion. In these model simulations, marine emissions of halocarbons have mainly been prescribed from established emission climatologies, thus neglecting the interaction with the actual state of the atmosphere in the model. Here, we calculate marine halocarbon emissions online for the first time by coupling the submodel AIRSEA to the chemistry climate model EMAC. Our method combines prescribed water concentrations with varying atmospheric concentrations derived from the model instead of using fixed emission climatologies. This method has a number of conceptual and practical advantages, as the modelled emissions can respond consistently to changes in temperature, wind speed, possible sea ice cover and atmospheric concentration in the model. Differences between the climatology-based and the new approach (2-18%) result from consideration of the actual, time-varying state of the atmosphere and the consideration of air-side transfer velocities. Extensive comparison to observations from aircraft, ships and ground stations reveals that interactively computing the air-sea flux from prescribed water concentrations leads to equally or more accurate atmospheric concentrations in the model compared to using constant emission climatologies. The effect of considering the actual state of the atmosphere is largest for gases with concentrations close to equilibrium in the surface ocean, such as CH2Br2. Halocarbons with comparably long atmospheric lifetimes, e.g. CH2Br2, are reflected more accurately in EMAC when compared to time
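At its core, the online approach replaces a fixed flux climatology with a bulk air-sea flux computed from prescribed water concentrations and the model's own, time-varying air concentrations. A one-line sketch of that bulk formula (the air-side transfer velocities the abstract mentions are omitted here):

```python
def air_sea_flux(k_w, c_water, c_air, H):
    """Bulk air-sea gas flux F = k_w * (C_w - C_a / H); positive values
    mean outgassing. k_w is the water-side transfer velocity (itself a
    function of wind speed and temperature in the coupled model) and H a
    dimensionless Henry's law constant (gas over water)."""
    return k_w * (c_water - c_air / H)
```

For a gas near equilibrium (C_a ≈ H·C_w) the flux sign can flip as the modelled air concentration evolves, which is exactly why prescribing a fixed emission there is least accurate.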
Evaluating range-expansion models for calculating nonnative species' expansion rate
Preuss, Sonja; Low, Matthew; Cassel-Lundhagen, Anna; Berggren, Åsa
2014-01-01
Species range shifts associated with environmental change or biological invasions are increasingly important study areas. However, quantifying range expansion rates may be heavily influenced by methodology and/or sampling bias. We compared expansion rate estimates of Roesel's bush-cricket (Metrioptera roeselii, Hagenbach 1822), a nonnative species currently expanding its range in south-central Sweden, from range statistic models based on distance measures (mean, median, 95th gamma quantile, marginal mean, maximum, and conditional maximum) and an area-based method (grid occupancy). We used sampling simulations to determine the sensitivity of the different methods to incomplete sampling across the species' range. For periods when we had comprehensive survey data, range expansion estimates clustered into two groups: (1) those calculated from range margin statistics (gamma, marginal mean, maximum, and conditional maximum: ˜3 km/year), and (2) those calculated from the central tendency (mean and median) and the area-based method of grid occupancy (˜1.5 km/year). Range statistic measures differed greatly in their sensitivity to sampling effort; the proportion of sampling required to achieve an estimate within 10% of the true value ranged from 0.17 to 0.9. Grid occupancy and median were most sensitive to sampling effort, and the maximum and gamma quantile the least. If periods with incomplete sampling were included in the range expansion calculations, this generally lowered the estimates (range 16–72%), with exception of the gamma quantile that was slightly higher (6%). Care should be taken when interpreting rate expansion estimates from data sampled from only a fraction of the full distribution. Methods based on the central tendency will give rates approximately half that of methods based on the range margin. The gamma quantile method appears to be the most robust to incomplete sampling bias and should be considered as the method of choice when sampling the entire
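A range-margin expansion estimate reduces to a per-year summary statistic followed by a regression of that statistic on time. A sketch using the maximum-distance statistic (one of the margin measures compared above); the survey data are hypothetical:

```python
def expansion_rate(yearly_distances):
    """Expansion rate (km/yr): least-squares slope of a range statistic
    (here the maximum recorded distance from the introduction point,
    a range-margin measure) regressed against survey year."""
    years = sorted(yearly_distances)
    ys = [max(yearly_distances[y]) for y in years]
    n = len(years)
    xbar, ybar = sum(years) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(years, ys))
            / sum((x - xbar) ** 2 for x in years))

# Hypothetical occurrence distances (km) from three annual surveys.
rate = expansion_rate({2000: [10, 12], 2001: [11, 15], 2002: [13, 18]})
```

Swapping `max` for the mean or median here reproduces the paper's central-tendency estimators, which is why those give roughly half the margin-based rate: they track the distribution's centre, not its advancing edge.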
Wang, Junmei; Cieplak, Piotr; Li, Jie; Cai, Qin; Hsieh, MengJuei; Luo, Ray; Duan, Yong
2012-01-01
In the previous publications of this series, we presented a set of Thole induced dipole interaction models using four types of screening functions. In this work, we document our effort to refine the van der Waals parameters for the Thole polarizable models. Following the philosophy of AMBER force field development, the van der Waals (vdW) parameters were tuned for the Thole model with linear screening function to reproduce both the ab initio interaction energies and the experimental densities of pure liquids. An in-house genetic algorithm was applied to maximize the fitness of “chromosomes”, a function of the root-mean-square errors (RMSE) of interaction energy and liquid density. To efficiently explore the vdW parameter space, a novel approach was developed to estimate the liquid densities for a given vdW parameter set using the mean residue-residue interaction energies through interpolation/extrapolation. This approach allowed the costly molecular dynamics simulations to be performed only at the end of each optimization cycle and eliminated the simulations during the cycle. Test results show notable improvements over the original AMBER FF99 vdW parameter set as indicated by the reduction in errors of the calculated pure liquid density (d), heat of vaporization (Hvap) and hydration energy. The average percent error (APE) of the densities of 59 pure liquids was reduced from 5.33% to 2.97%; the RMSE of Hvap was reduced from 1.98 kcal/mol to 1.38 kcal/mol; the RMSE of solvation free energies of 15 compounds was reduced from 1.56 kcal/mol to 1.38 kcal/mol. For the interaction energies of 1639 dimers, the overall performance of the optimized vdW set is slightly better than the original FF99 vdW set (RMSE of 1.56 versus 1.63 kcal/mol). The optimized vdW parameter set was also evaluated for the exponential screening function used in the Amoeba force field to assess its applicability for different types of screening functions. Encouragingly, comparable
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
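The Gibbs energy minimization idea can be illustrated in one variable with an ideal isomerization A ⇌ B, solved here by golden-section search. This is a pedagogical stand-in, under stated assumptions, for the multiphase, mass-balance-constrained minimization that Reaktoro and GEMS3K perform:

```python
import math

def equilibrate(mu_A, mu_B, tol=1e-12):
    """Equilibrium mole fraction of A for A <=> B in an ideal mixture,
    found by minimizing the dimensionless Gibbs energy
        G(x) = x*(mu_A + ln x) + (1 - x)*(mu_B + ln(1 - x))
    with a golden-section search on x in (0, 1). G is strictly convex,
    so the search converges to the unique minimum."""
    G = lambda x: x * (mu_A + math.log(x)) + (1 - x) * (mu_B + math.log(1 - x))
    phi = (math.sqrt(5) - 1) / 2
    a, b = 1e-9, 1 - 1e-9
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if G(c) < G(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x_eq = equilibrate(0.0, 0.0)  # equal chemical potentials -> x = 0.5
```

The stationarity condition ln(x/(1−x)) = μB − μA recovers the familiar equilibrium-constant relation, the same thermodynamics the full algorithm enforces across many species and phases at every mesh node and time step.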
NASA Technical Reports Server (NTRS)
Barton, Jonathan S.; Hall, Dorothy K.; Sigurosson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.
1999-01-01
Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.
Matching excluded-volume hadron-resonance gas models and perturbative QCD to lattice calculations
NASA Astrophysics Data System (ADS)
Albright, M.; Kapusta, J.; Young, C.
2014-08-01
We match three hadronic equations of state at low energy densities to a perturbatively computed equation of state of quarks and gluons at high energy densities. One of them includes all known hadrons treated as point particles, which approximates attractive interactions among hadrons. The other two include, in addition, repulsive interactions in the form of excluded volumes occupied by the hadrons. A switching function is employed to make the crossover transition from one phase to another without introducing a thermodynamic phase transition. A χ2 fit to accurate lattice calculations with temperature 100
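The switching-function construction can be sketched directly: blend the hadronic and perturbative-QCD pressures with a smooth S(T) that runs from 0 at low temperature to 1 at high temperature, avoiding any thermodynamic phase transition. The functional form and T0 below are assumptions; the paper fits its switching parameters to the lattice data:

```python
import math

def crossover_pressure(T, p_hadron, p_quark, T0=170.0, r=4):
    """Smooth crossover equation of state: p(T) = (1-S)*p_h + S*p_q with
    switching function S(T) = exp(-(T0/T)**r). p_hadron and p_quark are
    callables returning pressure at temperature T (MeV); T0 and r are
    illustrative placeholders for the fitted parameters."""
    S = math.exp(-(T0 / T) ** r)
    return (1 - S) * p_hadron(T) + S * p_quark(T)

# Placeholder pressures, constant for illustration only.
p_had = lambda T: 1.0
p_qgp = lambda T: 10.0
```

Because S and its derivatives are smooth, the blended pressure inherits continuous thermodynamic derivatives, which is the point of using a switching function rather than a Maxwell construction.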
Determination of a silane intermolecular force field potential model from an ab initio calculation
Li, Arvin Huang-Te; Chao, Sheng D.; Chang, Chien-Cheng
2010-12-15
Intermolecular interaction potentials of the silane dimer in 12 orientations have been calculated using Hartree-Fock (HF) self-consistent-field theory and second-order Møller-Plesset (MP2) perturbation theory. We employed Pople's medium-size basis sets [up to 6-311++G(3df,3pd)] and Dunning's correlation-consistent basis sets (up to the triply augmented correlation-consistent polarized valence quadruple-zeta basis set). We found that the minimum-energy orientations were the G and H conformers. We suggest that the Si-H attractions, the size of the central silicon atom, and its electronegativity play essential roles in the weak binding of the silane dimer. The calculated MP2 potential data were employed to parametrize a five-site force field for molecular simulations. The Si-Si, Si-H, and H-H interaction parameters in a pairwise-additive, site-site potential model for silane molecules were regressed from the ab initio energies.
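The final regression step — fitting site-site parameters to ab initio dimer energies — becomes a linear least-squares problem once the radial exponents are fixed. A 12-6 form is assumed here purely for illustration (the paper's actual functional form may differ), and all geometries and "ab initio" energies below are synthetic:

```python
import numpy as np

# Regress a pairwise-additive site-site model
#   E = sum over pairs of type t of (A_t / r^12 - B_t / r^6),
# t in {SiSi, SiH, HH}, against dimer energies.  With fixed exponents the
# model is linear in the six parameters, so ordinary least squares suffices.

rng = np.random.default_rng(0)

def design_row(pair_dists):
    """pair_dists: dict mapping pair type -> array of site-site distances."""
    row = []
    for t in ("SiSi", "SiH", "HH"):
        r = np.asarray(pair_dists[t])
        row += [np.sum(r ** -12), -np.sum(r ** -6)]
    return row

# Synthetic training set: random distances per configuration, noise-free
# energies generated from known "true" parameters so recovery is checkable.
true = np.array([5e4, 80.0, 1e3, 20.0, 50.0, 5.0])   # (A, B) per pair type
rows, energies = [], []
for _ in range(200):
    d = {t: 2.0 + 3.0 * rng.random(n)
         for t, n in (("SiSi", 1), ("SiH", 8), ("HH", 16))}
    row = design_row(d)
    rows.append(row)
    energies.append(np.dot(row, true))

params, *_ = np.linalg.lstsq(np.array(rows), np.array(energies), rcond=None)
```

With noise-free synthetic data the fitted parameters reproduce the generating values; with real ab initio energies the residuals measure how well the pairwise form captures the surface.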
Test of 600 and 750 MeV NN matrix on elastic scattering Glauber model calculations
NASA Astrophysics Data System (ADS)
Brissaud, I.
1980-09-01
The 600 and 750 MeV proton-nucleus elastic scattering cross-section and polarization calculations have been performed in the framework of the Glauber model to test the pp and pn scattering amplitudes deduced from a phase-shift analysis by Bystricky, Lechanoine and Lehar. It is well known that up to now we do not possess a non-phenomenological NN scattering matrix at intermediate energies. However, proton-nucleus scattering analyses are used to extract information about short-range correlations 1), the Δ resonance 2) or the presence of pion condensation 3), etc. Most scattering calculations made at these energies have been done with phenomenological NN amplitudes having a Gaussian q-dependence, A(q) = (kσ/4π)(α + i) exp(-β^2 q^2/2) and C(q) = (kσ/4π) iq(α + i) D exp(-β^2 q^2/2), k and σ being respectively the projectile momentum and the total pN cross section. The parameters α, β and D are poorly known and are adjusted by fitting some specific reactions such as p + 4He elastic scattering 4). Even when these amplitudes provide good fits to the data, our understanding of the dynamics of the scattering remains obscure.
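The Gaussian parametrization above translates directly into code. The parameter values used in testing are arbitrary; the optical theorem, which fixes Im A(0) = kσ/4π for this normalization, provides a built-in consistency check:

```python
import numpy as np

def nn_central(q, k, sigma, alpha, beta):
    """Central amplitude A(q) = (k*sigma/4pi)*(alpha + i)*exp(-beta^2 q^2 / 2).
    With this normalization the optical theorem gives Im A(0) = k*sigma/(4*pi)."""
    return (k * sigma / (4.0 * np.pi)) * (alpha + 1j) * np.exp(-0.5 * beta**2 * q**2)

def nn_spinflip(q, k, sigma, alpha, beta, D):
    """Spin-flip amplitude C(q) = (k*sigma/4pi)*i*q*(alpha + i)*D*exp(-beta^2 q^2 / 2);
    the factor of q makes it vanish in the forward direction."""
    return (k * sigma / (4.0 * np.pi)) * 1j * q * (alpha + 1j) * D * np.exp(-0.5 * beta**2 * q**2)
```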
Cluster form factor calculation in the ab initio no-core shell model
Navratil, Petr
2004-11-01
We derive expressions for cluster overlap integrals, or channel cluster form factors, for ab initio no-core shell model (NCSM) wave functions. These are used to obtain spectroscopic factors and can serve as a starting point for the description of low-energy nuclear reactions. We consider the composite system and the target nucleus to be described in the Slater determinant (SD) harmonic oscillator (HO) basis, while the projectile eigenstate is expanded in the Jacobi-coordinate HO basis. This is the most practical case. The spurious center-of-mass components present in the SD bases are removed exactly, so the calculated cluster overlap integrals are translationally invariant. As an illustration, we present results of cluster form factor calculations for <5He|4He+n>, <5He|3H+d>, <6Li|4He+d>, <6Be|3He+3He>, <7Li|4He+3H>, <7Li|6Li+n>, <8Be|6Li+d>, <8Be|7Li+p>, <9Li|8Li+n>, and <13C|12C+n>, with all the nuclei described by multi-ℏΩ NCSM wave functions.
Wang, Junmei; Cieplak, Piotr; Li, Jie; Wang, Jun; Cai, Qin; Hsieh, MengJuei; Lei, Hongxing; Luo, Ray; Duan, Yong
2011-01-01
In the companion paper, we presented a set of induced dipole interaction models using four types of screening functions, which include the Applequist (no screening), the Thole linear, the Thole exponential, and the Thole Tinker-like (another form of exponential screening function) functions. In this work, we evaluate the performance of the polarizability models using as a benchmark a large set of amino acid analog pairs in conformations frequently observed in protein structures. For each amino acid pair we calculated quantum mechanical interaction energies at the MP2/aug-cc-pVTZ//MP2/6-311++G(d,p) level with the basis set superposition error (BSSE) correction and compared them with molecular mechanics results. Encouragingly, all the polarizable models significantly outperform the additive F94 and F03 models (mimicking AMBER ff94/ff99 and ff03 force fields, respectively) in reproducing the BSSE-corrected quantum mechanical interaction energies. In particular, the root-mean-square errors (RMSEs) for the three Thole models in Set A (where the 1-2 and 1-3 interactions are turned off and all 1-4 interactions are included) are 1.456, 1.417 and 1.406 kcal/mol for Model AL (Thole linear), Model AE (Thole exponential) and Model AT (Thole Tinker-like), respectively. In contrast, the RMSEs are 3.729 and 3.433 kcal/mol for the F94 and F03 models, respectively. A similar trend was observed for the average unsigned errors (AUEs), which are 1.057, 1.025, 1.011, 2.219 and 2.070 kcal/mol for AL, AE, AT, F94/ff99 and F03, respectively. Analyses based on the trend-line slopes indicate that the two fixed-charge models substantially underestimate the relative strengths of non-charge-charge interactions by 24% (F03) and 35% (F94), respectively, whereas the four polarizable models overestimate the relative strengths by 5% (AT), 3% (AL, AE) and 13% (AA), respectively. Agreement was further improved by adjusting the van der Waals parameters. Judging from the notably improved accuracy in comparison to
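The error statistics quoted here (RMSE, AUE, trend-line slope) are straightforward to reproduce for any set of model and reference energies. This sketch assumes a zero-intercept trend line, which is one common convention and may differ in detail from the paper's fitting choice:

```python
import numpy as np

def benchmark(e_mm, e_qm):
    """RMSE, average unsigned error (AUE), and trend-line slope (through
    the origin) of molecular-mechanics energies against reference QM
    interaction energies — the three statistics quoted in the abstract.
    A slope below 1 means the model underestimates relative interaction
    strengths; above 1, it overestimates them."""
    e_mm, e_qm = np.asarray(e_mm, float), np.asarray(e_qm, float)
    diff = e_mm - e_qm
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    aue = float(np.mean(np.abs(diff)))
    slope = float(np.dot(e_mm, e_qm) / np.dot(e_qm, e_qm))  # least-squares, no intercept
    return rmse, aue, slope
```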
The calculational modeling of impurity mass transfer in NPP circuits with liquid metal coolant
NASA Astrophysics Data System (ADS)
Alexeev, V.; Kozlov, F.; Kumaev, V.; Orlova, E.; Klimanova, Yu; Torbenkova, I.
2008-02-01
Three levels of models (one-dimensional, two-dimensional and three-dimensional) are developed for estimating impurity mass transfer in sodium circuit units, together with the corresponding computational programs. In the one-dimensional model the flow-path elements are simulated by annular channels, and the Lagrangian coordinate system is used in the mathematical description of the processes in the channels. The two-dimensional model is based on the porous-body approximation and simulates global spatial distributions of the coolant flow velocity field, coolant and fuel-rod temperatures, and the concentrations of transferred substances. Passive multicomponent impurity transfer is described by a system of differential equations with source and diffusion terms, written for each component and solved by the finite-difference method. The three-dimensional code version is based on a general spatial three-dimensional description of the thermal-hydraulic and mass-transfer processes in fuel-rod bundles; the governing system of finite-difference equations of hydrodynamics and heat exchange is obtained using the control-volume approach. The calculations yield valuable data on corrosion-product transfer in the primary circuit of the BN-600 reactor.
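As a toy analogue of the one-dimensional channel model, a single explicit finite-difference step of an advection-diffusion equation with a source term can be written as follows. The upwind/central discretization, fixed boundary values, and all names are illustrative, not taken from the actual codes:

```python
import numpy as np

def step_impurity(C, u, D, S, dx, dt):
    """One explicit finite-difference step of the 1-D impurity transport
    equation dC/dt + u*dC/dx = D*d2C/dx2 + S (upwind advection for u > 0,
    central diffusion); boundary values are held fixed.  C is the
    concentration profile on a uniform grid with spacing dx."""
    Cn = C.copy()
    adv = -u * (C[1:-1] - C[:-2]) / dx                 # upwind advection
    dif = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2  # central diffusion
    Cn[1:-1] = C[1:-1] + dt * (adv + dif + S)
    return Cn
```

Stability of this explicit scheme requires the usual CFL-type restrictions on dt relative to dx, u and D.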
Symmetry-Adapted Ab Initio Shell Model for Nuclear Structure Calculations
NASA Astrophysics Data System (ADS)
Draayer, J. P.; Dytrych, T.; Launey, K. D.; Langr, D.
2012-05-01
An innovative concept, the symmetry-adapted ab initio shell model, which capitalizes on partial as well as exact symmetries that underpin the structure of nuclei, is discussed. This framework is expected to inform the leading features of nuclear structure and reaction data for light and medium-mass nuclei that are currently inaccessible to theory and experiment and for which predictions of modern phenomenological models often diverge. We use powerful computational and group-theoretical algorithms to perform ab initio configuration-interaction (CI) calculations in a model space spanned by SU(3) symmetry-adapted many-body configurations with the JISP16 nucleon-nucleon interaction. We demonstrate that the results for the ground states of light nuclei up through A = 16 exhibit a strong dominance of low-spin and high-deformation configurations together with an evident symplectic structure. This, in turn, points to the importance of using a symmetry-adapted framework, one based on an LS coupling scheme with the associated spatial configurations organized according to deformation.
NASA Astrophysics Data System (ADS)
Dementyeva, S. O.; Ilin, N. V.; Mareev, E. A.
2015-03-01
Modern methods for predicting thunderstorms and lightning with high-resolution numerical models are considered. An analysis of the Lightning Potential Index (LPI) is performed for various microphysics parameterizations with the Weather Research and Forecasting (WRF) model. The maximum index values are shown to depend significantly on the type of parameterization, which makes it impossible to specify a single threshold LPI, valid across parameterizations, as a criterion for the occurrence of lightning flashes. The topographic LPI maps underestimate the sizes of regions of likely thunderstorm hazard. Calculating the electric field under the assumption that ice and graupel are the main charge carriers is considered as a new lightning-prediction algorithm. The model shows that a potential difference (between the ground and a cloud layer at a given altitude) sufficient to generate a discharge is retained over a larger region than is predicted by the LPI. The main features of the spatial distribution of the electric field and potential agree with observations.
Calculating model for equivalent thermal defocus amount in infrared imaging system
NASA Astrophysics Data System (ADS)
Zhang, Chengshuo; Shi, Zelin; Xu, Baoshu; Feng, Bin
2016-01-01
The main effect of temperature change on an infrared imaging system is the focus shift of the infrared lenses. This paper analyzes the equivalence, in terms of imaging, between a temperature change and a defocus at room temperature. To quantify this equivalence, we define an equivalent thermal defocus amount (ETDA): the distance the photosensitive surface is shifted at room temperature to produce the same effect on imaging as the temperature change. To model the ETDA, the focal shift as a function of temperature is first obtained by solving, with some approximations, the partial differential equations for the thermal effect on the light path. Point spread functions (PSFs) of the thermal effect and of defocus at room temperature are then modeled based on wave aberration. The calculation model for the ETDA is finally established by equating these PSFs, under the condition that the cutoff frequency of the infrared imaging system is much smaller than that of the infrared lens. Experimental results indicate that the ETDA defocus at room temperature has the same influence on imaging as the thermal effect. Prospectively, experiments at high/low temperature can be replaced by room-temperature experiments with the ETDA.
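In the simplest regime, the ETDA reduces to the thermal focal shift itself. This sketch assumes a constant thermo-optical focal-shift coefficient, a strong simplification of the paper's PDE-based model; the geometric blur formula is standard defocus optics, not the paper's wave-aberration PSF treatment:

```python
def etda(dT, dfdT):
    """Equivalent thermal defocus amount: the detector shift at room
    temperature that blurs the image the same way a temperature change
    dT does, assuming the focal shift is linear in temperature with a
    thermo-optical coefficient dfdT (mm/K).  The linearity is an
    assumption of this sketch, not a claim of the paper's full model."""
    return dfdT * dT

def defocus_blur_radius(delta_z, f_number):
    """Geometric blur-spot radius for a defocus delta_z at a given f/#."""
    return abs(delta_z) / (2.0 * f_number)
```

For example, a -0.02 mm/K coefficient over a 30 K warm-up gives a 0.6 mm equivalent defocus, which at f/2 corresponds to a 0.15 mm geometric blur radius.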
A Simplified 1-D Model for Calculating CO2 Leakage through Conduits
Zhang, Y.; Oldenburg, C.M.
2011-02-15
In geological CO2 storage projects, a cap rock is generally needed to prevent CO2 from leaking out of the storage formation. However, the injected CO2 may still encounter discrete flow paths, such as a conductive well or fault (here referred to as conduits), through the cap rock, allowing escape of CO2 from the storage formation. As CO2 migrates upward, it may migrate into the surrounding formations; the mass lost to a formation is called attenuation. This report describes a simplified model to calculate the CO2 mass flux at different locations along the conduit and the amount of attenuation to the surrounding formations. From the comparison among the three model results, we conclude that the steady-state conduit model (SSCM) provides a more accurate solution than the PMC at a given discretization. When there is not a large difference between the permeability of the surrounding formation and that of the conduit, and there is leak-off at the bottom formation (the formation immediately above the CO2 plume), a fine discretization is needed for an accurate solution. Based on this comparison, we propose to use the SSCM in the rapid prototype for now, given that it does not produce spurious oscillations and is already in FORTRAN, so it can easily be made into a dll for use in GoldSim.
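A toy analytic analogue of a steady-state conduit model, with leak-off proportional to the local flux, gives exponential attenuation with height. The single leak-off coefficient here is an illustrative simplification of the report's formation-by-formation treatment:

```python
import math

def conduit_flux(z, q0, leak_coeff):
    """Steady-state CO2 mass flux along a leaking conduit when leak-off to
    the surrounding formations is proportional to the local flux:
        dq/dz = -leak_coeff * q   =>   q(z) = q0 * exp(-leak_coeff * z).
    q0 is the flux entering the conduit at z = 0 (the storage formation)."""
    return q0 * math.exp(-leak_coeff * z)

def attenuation(z1, z2, q0, leak_coeff):
    """Mass flow lost to the formations between depths z1 < z2."""
    return conduit_flux(z1, q0, leak_coeff) - conduit_flux(z2, q0, leak_coeff)
```

Attenuation over adjacent intervals telescopes, so summing per-formation losses recovers the total flux reduction.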
The oxidation of isoprene in the troposphere - Mechanism and model calculations
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Ogliaruso, M. A.; Augustsson, T. R.; Levine, J. S.
1984-01-01
Calculations have been performed for 15 deg N and 45 deg N latitude continental conditions using a one-dimensional, steady state photochemical model that incorporates a chemical mechanism describing the oxidation of isoprene by OH and O3 in the troposphere. At the higher latitude, anthropogenic hydrocarbon emission effects on NO(x) vertical profiles, as well as those of HNO3, overshadow isoprene emissions effects; at the lower latitude, reduced anthropogenic emissions and increased isoprene emissions respectively yield 26 and 4 percent increases in NO(x) and HNO3 column contents. It is suggested that a significant quantity of isoprene goes to the formation of longer carbon chain oxygenated organic species.
Hot-spot model for calculating the threshold for shock initiation of pyrotechnic mixtures
Maiden, D.E.; Nutt, G.L.
1986-05-14
A model for predicting the pressure required to initiate a reaction in pyrotechnic mixtures is described. The pore temperature is determined by calculating the dynamics of pore collapse. An approximate solution for the motion of the pore radius is determined as a function of the pore size, viscosity, yield stress and pressure. The heating of the material surrounding the pore is given by an approximate solution of the heat-conduction equation with a source term accounting for viscoplastic heating as a function of the pore motion. Ignition occurs when the surface temperature of the pore reaches the hot-spot ignition criterion. The hot-spot ignition temperatures for 2Al/Fe2O3, Ti/2B, and Ti/C are determined. Predictions for the ignition pressure of 2Al/Fe2O3 (thermite) are in reasonable agreement with experiment. 18 refs.
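The pore-dynamics step can be caricatured by a purely viscous collapse law. The rate expression below, da/dt = -a * max(p - Y, 0) / (4*eta), is a textbook-style stand-in for the paper's approximate pore-radius solution, not the actual model; all values are in arbitrary but consistent units:

```python
import numpy as np

def pore_radius_history(a0, p, Y, eta, dt, n_steps):
    """Integrate a simple viscoplastic pore-collapse law in the viscous
    limit:  da/dt = -a * max(p - Y, 0) / (4*eta),
    where p is the applied pressure, Y a yield-like threshold below which
    no collapse occurs, and eta the viscosity.  Explicit Euler stepping;
    dt must be small enough that rate*dt << 1."""
    a = np.empty(n_steps + 1)
    a[0] = a0
    rate = max(p - Y, 0.0) / (4.0 * eta)
    for i in range(n_steps):
        a[i + 1] = a[i] * (1.0 - rate * dt)   # explicit Euler step
    return a
```

Faster collapse (higher pressure or lower viscosity) concentrates more viscoplastic heating at the pore surface, which is what drives hot-spot ignition in the full model.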
β-decay half-life of 50V calculated by the shell model
NASA Astrophysics Data System (ADS)
Haaranen, M.; Srivastava, P. C.; Suhonen, J.; Zuber, K.
2014-10-01
In this work we survey the detectability of the β- channel of 50V leading to the first excited 2+ state in 50Cr. The electron-capture (EC) half-life corresponding to the transition of 50V to the first excited 2+ state in 50Ti had been measured earlier. Both of these transitions are 4th-forbidden non-unique. We have performed calculations of all the involved wave functions using the nuclear shell model with the GXPF1A interaction in the full f-p shell. The computed half-life of the EC branch is in good agreement with the measured one. The predicted half-life for the β- branch is of the order of 2×10^19 yr, whereas the present experimental lower limit is 1.5×10^18 yr. We also discuss the experimental layout needed to detect the β- branch decay.
S-model calculations for high-energy-electron-impact double ionization of helium
NASA Astrophysics Data System (ADS)
Gasaneo, G.; Mitnik, D. M.; Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.
2013-04-01
In this paper the double ionization of helium by high-energy electron impact is studied. The corresponding four-body Schrödinger equation is transformed into a set of driven equations containing successive orders in the projectile-target interaction. The transition amplitude obtained from the asymptotic limit of the first-order solution is shown to be equivalent to the familiar first Born approximation. The first-order driven equation is solved within a generalized Sturmian approach for an S-wave (e,3e) model process with high incident energy and small momentum transfer corresponding to published measurements. Two independent numerical implementations, one using spherical and the other hyperspherical coordinates, yield mutual agreement. From our ab initio solution, the transition amplitude is extracted, and single differential cross sections are calculated and could be taken as benchmark values to test other numerical methods in a previously unexplored energy domain.
Ab Initio No-Core Shell Model Calculations Using Realistic Two- and Three-Body Interactions
Navratil, P; Ormand, W E; Forssen, C; Caurier, E
2004-11-30
There has been significant progress in ab initio approaches to the structure of light nuclei. One such method is the ab initio no-core shell model (NCSM). Starting from realistic two- and three-nucleon interactions, this method can predict low-lying levels in p-shell nuclei. In this contribution, we present a brief overview of the NCSM with examples of recent applications. We highlight our study of the parity inversion in 11Be, for which calculations were performed in basis spaces up to 9ℏΩ (dimensions reaching 7×10^8). We also present our latest results for p-shell nuclei using the Tucson-Melbourne (TM) three-nucleon interaction with several proposed parameter sets.
Realistic shell-model calculations and exotic nuclei around 132Sn
Covello, A.; Itaco, N.; Coraggio, L.; Gargano, A.
2008-11-11
We report on a study of exotic nuclei around doubly magic 132Sn in terms of the shell model, employing a realistic effective interaction derived from the CD-Bonn nucleon-nucleon potential. The short-range repulsion of the latter is renormalized by constructing a smooth low-momentum potential, V_low-k, which is used directly as input for the calculation of the effective interaction. In this paper, we focus attention on proton-neutron multiplets in the odd-odd nuclei 134Sb and 136Sb. We show that the behavior of these multiplets is quite similar to that of the analogous multiplets in the counterpart nuclei in the 208Pb region, 210Bi and 212Bi.
Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.
Energy Science and Technology Software Center (ESTSC)
Blann, Marshall
2014-11-01
Version 05. The HMS-ALICE/ALICE codes address the question: what happens when photons, nucleons or clusters/heavy ions of a few 100 keV to several 100 MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n, p, 2H, 3H, 3He, 4He, 6Li) in the reaction, and the residues, the spallation and fission products. The models used are principally Monte Carlo formulations of the Hybrid/Geometry-Dependent Hybrid precompound, Weisskopf-Ewing evaporation and Bohr-Wheeler fission models, and recently a Fermi-statistics break-up model (for light nuclei). The angular distribution calculation relies on the Chadwick-Oblozinsky linear-momentum-conservation model. Output gives residual product yields, and single and double differential cross sections for ejectiles in lab and CM frames. An option allows 1-3 particle-out exclusive cross sections (ENDF format) for all combinations of n, p, alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle emission correlations, and much of this coding is still in place. Recoil product double differential cross sections are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to hard drive; subsequent runs may use this file, use the file with line-editor changes, or begin again with the on-line interrogation.
Frost, G. J.; Fried, Alan; Lee, Y.- N.; Wert, B.; Henry, B.; Drummond, J. R.; Evans, M. J.; Fehsenfeld, Fred C.; Goldan, P. D.; Holloway, J. S.; Hubler, Gerhard F.; Jakoubek, R.; Jobson, B Tom T.; Knapp, K.; Kuster, W. C.; Roberts, J.; Rudolph, Jochen; Ryerson, T. B.; Stohl, A.; Stroud, C.; Sueper, D. T.; Trainer, Michael; Williams, J.
2002-04-18
Formaldehyde (CH2O) measurements from two independent instruments are compared with photochemical box model calculations. The measurements were made on the National Oceanic and Atmospheric Administration P-3 aircraft as part of the 1997 North Atlantic Regional Experiment (NARE 97). The data set considered here consists of air masses sampled between 0 and 8 km over the North Atlantic Ocean which do not show recent influence from emissions or transport. These air masses therefore should be in photochemical steady state with respect to CH2O when constrained by the other P-3 measurements, and methane oxidation was expected to be the predominant source of CH2O in these air masses. For this data set both instruments measured identical CH2O concentrations to within 40 parts per trillion by volume (pptv) on average over the 0-800 pptv range, although differences larger than the combined 2σ total uncertainty estimates were observed between the two instruments in 11% of the data. Both instruments produced higher CH2O concentrations than the model in more than 90% of this data set, with a median measured-modeled [CH2O] difference of 0.13 or 0.18 ppbv (depending on the instrument), or about a factor of 2. Such large differences cannot be accounted for by varying model input parameters within their respective uncertainty ranges. After examining the possible reasons for the model-measurement discrepancy, we conclude that there are probably one or more additional unknown sources of CH2O in the North Atlantic troposphere.
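The photochemical-steady-state assumption invoked for these air masses amounts to balancing CH2O production against its OH-reaction and photolysis sinks. The function below is that generic balance with placeholder arguments; the rate values in the test are illustrative numbers, not the NARE 97 box model's actual inputs:

```python
def ch2o_steady_state(prod_rate, k_oh, oh, j_photo):
    """Photochemical-steady-state CH2O abundance: production (here mainly
    from methane oxidation) balanced by reaction with OH and photolysis,
        [CH2O]_ss = P / (k_OH * [OH] + J).
    Units must be consistent (e.g. P in ppbv/s, k_OH*[OH] and J in 1/s,
    result in ppbv)."""
    return prod_rate / (k_oh * oh + j_photo)
```

A measured [CH2O] persistently above this steady-state value, as reported in the abstract, points to production terms missing from the model.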
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Calculating Fuel Economy Values § 600.209-08 Calculation of vehicle-specific 5-cycle fuel economy values for...
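Fuel economy values in 40 CFR part 600 are combined harmonically, not arithmetically. The sketch below shows the general harmonic-weighting form, using the traditional 0.55/0.45 city/highway split purely as an illustration; it is not the specific vehicle-specific 5-cycle equation of §600.209-08, which involves additional sales-weighted averaging over a model type:

```python
def combined_fuel_economy(city_mpg, highway_mpg, city_weight=0.55):
    """Harmonically weighted combination of city and highway fuel economy,
    the general form used throughout 40 CFR part 600:
        FE = 1 / (w_city/FE_city + w_hwy/FE_hwy).
    The 0.55/0.45 split is the traditional combined-label weighting and
    stands in here for the regulation's full model-type calculation."""
    hwy_weight = 1.0 - city_weight
    return 1.0 / (city_weight / city_mpg + hwy_weight / highway_mpg)
```

The harmonic mean always lies between the two inputs and weights low-economy driving more heavily than an arithmetic average would.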
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu
2014-01-01
The space environment consists of a varying field of radiation particles, including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. The dose delivered by a charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" depends on the energy and type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte Carlo track-structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that severely damaged cells at the Bragg peak are more likely to undergo reproductive death, the so-called "overkill" effect.
NASA Astrophysics Data System (ADS)
Mathews, Alyssa
Emissions from the combustion of fossil fuels are a growing pollution concern throughout the global community, as they have been linked to numerous health issues. The freight transportation sector is a large source of these emissions and is expected to continue growing as globalization persists. Within the US, the expanding development of the natural gas industry is helping to support many industries and leading to increased transportation. High Volume Hydraulic Fracturing (HVHF) is one of the newer advanced extraction techniques that is increasing natural gas and oil reserves dramatically within the US; however, the technique is very resource intensive. HVHF requires large volumes of water and sand per well, which are primarily transported by trucks in rural areas. Trucks are also used to transport waste away from HVHF well sites. This study focused on the emissions generated by the transportation of HVHF materials to remote well sites, their atmospheric dispersion, and the subsequent health impacts. The Geospatial Intermodal Freight Transport (GIFT) model was used in this analysis within ArcGIS to identify roadways with high-volume traffic and emissions. High-traffic road segments were used as emissions sources to determine the atmospheric dispersion of particulate matter using AERMOD, an EPA model that calculates geographic dispersion and concentrations of pollutants. Output from AERMOD was overlaid with census data to determine which communities may be impacted by increased emissions from HVHF transport. The anticipated number of mortalities within the impacted communities was calculated, and the mortality rate from these additional emissions was computed to be 1 in 10 million people for a simulated truck fleet meeting the stricter 2007 emission standards, representing a best-case scenario. Mortality rates due to increased truck emissions from average in-use vehicles, which represent a mixed-age truck fleet, are expected to be higher (1 death per 341,000 people annually).
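Translating dispersed concentrations into expected deaths typically uses a concentration-response function. The log-linear form and every input below are illustrative placeholders, not the study's AERMOD-derived concentrations or its epidemiological coefficients:

```python
import math

def excess_mortality(population, baseline_rate, beta, delta_pm):
    """Expected additional deaths per year in an exposed population from a
    particulate-matter increment delta_pm (ug/m3), using a log-linear
    concentration-response relation:
        delta_M = population * baseline_rate * (1 - exp(-beta * delta_pm)),
    where baseline_rate is the annual baseline mortality rate and beta the
    concentration-response coefficient (per ug/m3).  All inputs here are
    hypothetical."""
    return population * baseline_rate * (1.0 - math.exp(-beta * delta_pm))
```

For small beta*delta_pm this reduces to the familiar linear approximation population * baseline_rate * beta * delta_pm.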
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
The CT scanner models in this dissertation are versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. Organ doses from kV and MV CBCT were also calculated. The dissertation concludes by summarizing areas for future research, including further validation and application of MV CBCT, dose-reporting software, and studies of image-dose correlation.
Drover, Damion Ryan
2011-12-01
One of the largest exports of the Southeast U.S. is forest products. Interest in biofuels from forest biomass has increased recently, leading to more research into better forest-management practices (BMPs). The USDA Forest Service, along with the Oak Ridge National Laboratory, the University of Georgia and Oregon State University, is researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff from saturated areas, which transports excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools makes it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes of the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any desired DEM resolution. Additionally, many existing maps are based on various DEM resolutions, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different DEM resolutions. For example, saturated areas can be under- or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes such as nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated from the derived terrain attributes, slope and specific catchment area, for five different DEM resolutions. The DEMs were resampled from LiDAR, which is a
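The topographic wetness index at the center of the study above has a standard closed form, TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A minimal sketch (all input values hypothetical) shows the calculation and why coarser DEMs, which flatten slopes and enlarge apparent contributing areas, inflate the index:

```python
import numpy as np

def topographic_wetness_index(specific_catchment_area, slope_deg):
    """TWI = ln(a / tan(beta)); a in m^2 per unit contour length, beta in degrees.

    A small floor on tan(beta) keeps flat cells finite.
    """
    a = np.asarray(specific_catchment_area, dtype=float)
    tan_beta = np.maximum(np.tan(np.radians(np.asarray(slope_deg, dtype=float))), 1e-6)
    return np.log(a / tan_beta)

# Coarser DEMs flatten slopes and enlarge apparent contributing areas, both of
# which raise TWI and over-predict saturated source areas.
twi = topographic_wetness_index([10.0, 100.0, 1000.0], [10.0, 5.0, 1.0])
```

Both effects act in the same direction, which is why resampling resolution alone can shift a cell across the saturation threshold.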
NASA Astrophysics Data System (ADS)
Rüstemoğlu, Sevinç; Barutçu, Burak; Menteş, Sibel
2010-05-01
The continued use of fossil fuels as a primary energy source drives carbon emissions and leaves a country's economy vulnerable to large fluctuations in the unit price of energy sources. In recent years, developments in the wind energy sector and supportive renewable energy policies have encouraged both existing and prospective wind farm owners to consider and invest in renewable sources. In this study, the annual production of the 1.8 kW and 30 kW turbines available to the Energy Institute of Istanbul Technical University is calculated with the WAsP and WindPRO flow models, and the wind characteristics of the area are analysed. The meteorological data used in the calculation cover the period from 2 March 2000 to 31 May 2004 and were taken from the meteorological mast ( ) on the Istanbul Technical University campus. The measurements are hourly means taken at 2 m and 10 m heights. The topography, roughness classes and shelter effects are defined in the models to allow accurate extrapolation to the turbine sites. The site lies only about 3.5 km from the Istanbul Bosphorus, but as the WAsP and WindPRO model results show, the Bosphorus effect is interrupted by new buildings and tall forestry. The shelter effect of these tall buildings strongly disturbs the wind flow and reduces the high wind energy potential produced by the Bosphorus effect. This study, which determines the wind characteristics and expected annual production, is important for this project site and therefore gains importance before construction of the wind energy system. Once the system is operating, however, developing energy management skills and forecasting wind speed and direction will become important. At this point, three statistical models, the Kalman filter, the AR model and neural networks, are used to determine the success of each method for correct
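Annual production estimates of the kind WAsP and WindPRO produce reduce, in essence, to integrating a turbine power curve over a fitted Weibull wind-speed distribution. A minimal sketch with a made-up idealized power curve and assumed Weibull parameters (not the study's measured mast data):

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull wind-speed density with shape k and scale c (m/s)."""
    v = np.asarray(v, dtype=float)
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def power_curve_kw(v, rated_kw=30.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Idealized cubic-ramp power curve (hypothetical, not a turbine datasheet)."""
    v = np.asarray(v, dtype=float)
    ramp = rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3
    p = np.where((v >= v_in) & (v < v_rated), ramp, 0.0)
    return np.where((v >= v_rated) & (v <= v_out), rated_kw, p)

def annual_energy_kwh(k, c, dv=0.05):
    """AEP = 8760 h * integral of P(v) f(v) dv, by simple quadrature."""
    v = np.arange(0.0, 30.0 + dv, dv)
    return 8760.0 * float(np.sum(power_curve_kw(v) * weibull_pdf(v, k, c)) * dv)

aep = annual_energy_kwh(k=1.9, c=6.5)      # assumed modest urban wind climate
capacity_factor = aep / (30.0 * 8760.0)    # fraction of rated output achieved
```

The flow models' real work is in producing site-specific Weibull parameters from the mast data after correcting for topography, roughness and shelter; the final energy integral is this simple.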
Azoia, Nuno G; Fernandes, Margarida M; Micaêlo, Nuno M; Soares, Cláudio M; Cavaco-Paulo, Artur
2012-05-01
Molecular dynamics simulations of a keratin/peptide complex were conducted to predict the binding affinity of four different peptides toward human hair. Free energy calculations on the peptides' interaction with the keratin model indicated that electrostatic interactions are the main driving force stabilizing the complex. The molecular mechanics-Poisson-Boltzmann surface area methodology used for the free energy calculations showed that the dielectric constant in the protein's interior plays a major role in the free energy calculations, and the only way to obtain agreement between the free energy calculations and the experimental binding results was to use the average dielectric constant.
NASA Technical Reports Server (NTRS)
Ackerman, Thomas P.; Kinne, Stefan A.; Heymsfield, Andrew J.; Valero, Francisco P. J.
1990-01-01
Several aircraft were employed during the FIRE Cirrus IFO in order to make nearly simultaneous observations of cloud properties and fluxes. A segment of the flight data collected on 28 October 1988 during which the NASA Ames ER-2 overflew the NCAR King Air was analyzed. The ER-2 flew at high altitude making observations of visible and infrared radiances and infrared flux and cloud height and thickness. During this segment, the King Air flew just above the cloud base making observations of ice crystal size and shape, local meteorological variables, and infrared fluxes. While the two aircraft did not collect data exactly coincident in space and time, they did make observations within a few minutes of each other. For this case study, the infrared radiation balance of the cirrus layer is of primary concern. Observations of the upwelling 10 micron radiance, made from the ER-2, can be used to deduce the 10 micron optical depth of the layer. The upwelling broadband infrared flux is also measured from the ER-2. At the same time, the upwelling and downwelling infrared flux at the cloud base is obtained from the King Air measurements. Information on cloud microphysics is also available from the King Air. Using this data in conjunction with atmospheric temperature and humidity profiles from local radiosondes, the necessary inputs for an infrared radiative transfer model can be developed. Infrared radiative transfer calculations are performed with a multispectral two-stream model. The model fluxes at the cloud base and at 19 km are then compared with the aircraft observations to determine whether the model is performing well. Cloud layer heating rates can then be computed from the radiation exchange.
Heterogeneous reactions in sulfuric acid aerosols: A framework for model calculations
Hanson, D.R.; Ravishankara, A.R.; Solomon, S.
1994-02-20
A framework for applying rates of heterogeneous chemical reactions measured in the laboratory to small sulfuric acid aerosols found in the stratosphere is presented. The procedure for calculating the applicable reactive uptake coefficients from laboratory-measured parameters is developed, the necessary laboratory-measured quantities are discussed, and a set of equations for use in models is presented. This approach is demonstrated to be essential for obtaining uptake coefficients for the HOCl+HCl and ClONO{sub 2}+HCl reactions applicable to the stratosphere. In these cases the laboratory-measured uptake coefficients have to be substantially corrected for the small size of the atmospheric aerosol droplets. The measured uptake coefficients for N{sub 2}O{sub 5}+H{sub 2}O and ClONO{sub 2}+H{sub 2}O, as well as those for other heterogeneous reactions, are discussed in the context of this model. Finally, the derived uptake coefficients were incorporated into a two-dimensional dynamical and photochemical model; for the first time, the HCl reactions in sulfuric acid have been included. Substantial direct chlorine activation and consequent ozone destruction are shown to occur due to heterogeneous reactions involving HCl for volcanically perturbed aerosol conditions at high latitudes. Smaller but significant chlorine activation is also predicted for background sulfuric acid aerosol in these regions. The coupling between homogeneous and heterogeneous chemistry is shown to lead to important changes in the concentrations of various reactive species. The basic physical and chemical quantities needed to better constrain the model input parameters are identified.
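The droplet-size correction discussed here takes the standard reacto-diffusive form: the bulk-reaction uptake term is scaled by f(q) = coth(q) − 1/q, where q = r/l and l = sqrt(D/k) is the reacto-diffusive length. A sketch with illustrative (not laboratory-measured) values of D and k:

```python
import numpy as np

def size_correction(radius_cm, d_liq_cm2_s, k_rxn_s):
    """f(q) = coth(q) - 1/q with q = r / l and l = sqrt(D_liq / k_rxn).

    This factor multiplies the bulk reactive-uptake term: f -> 1 for droplets
    much larger than l (laboratory bulk value applies) and f -> q/3 for
    droplets much smaller than l (uptake strongly reduced).
    """
    l = np.sqrt(d_liq_cm2_s / k_rxn_s)   # reacto-diffusive length, cm
    q = radius_cm / l
    return 1.0 / np.tanh(q) - 1.0 / q

# Illustrative only: D and k below are assumed round numbers.
f_large = size_correction(1e-4, 1e-8, 1e4)   # ~1 um droplet: near the bulk limit
f_small = size_correction(3e-8, 1e-8, 1e4)   # droplet << l: strongly reduced
```

This is exactly why uptake coefficients measured on bulk films or large droplets must be corrected downward before being applied to stratospheric aerosol a fraction of a micron across.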
NASA Astrophysics Data System (ADS)
Yamamoto, Daisuke; Marmorini, Giacomo; Danshita, Ippei
2015-01-01
Magnetization processes of spin-1/2 layered triangular-lattice antiferromagnets (TLAFs) under a magnetic field H are studied by means of a numerical cluster mean-field method with a scaling scheme. We find that small antiferromagnetic couplings between the layers give rise to several types of extra quantum phase transitions among different high-field coplanar phases. In particular, a field-induced first-order transition is found to occur at H ≈ 0.7Hs, where Hs is the saturation field, as another common quantum effect of ideal TLAFs in addition to the well-established one-third plateau. Our microscopic model calculation with appropriate parameters shows excellent agreement with experiments on Ba3CoSb2O9 [T. Susuki et al., Phys. Rev. Lett. 110, 267201 (2013)]. Given this fact, we suggest that the Co2+-based compounds may allow for quantum simulations of intriguing properties of this simple frustrated model, such as quantum criticality and supersolid states.
A Lagrangian Approach for Calculating Microsphere Deposition in a One-Dimensional Lung-Airway Model.
Vaish, Mayank; Kleinstreuer, Clement
2015-09-01
Using the open-source software OpenFOAM as the solver, a novel approach to calculating microsphere transport and deposition in a 1D human lung-equivalent trumpet model (TM) is presented. Specifically, for particle deposition in a nonlinear trumpet-like configuration, a new radial force has been developed which, along with the regular drag force, generates particle trajectories toward the wall. The new semi-empirical force is a function of any given inlet volumetric flow rate, micron-particle diameter, and lung volume. Particle deposition fractions (DFs) in the size range from 2 μm to 10 μm are in agreement with experimental datasets for different laminar and turbulent inhalation flow rates as well as total volumes. Typical run times on a single-processor workstation to obtain actual total deposition results at comparable accuracy are 200 times less than those for an idealized whole-lung geometry (i.e., a 3D-1D model with airways up to the 23rd generation in a single path only).
Pressure Calculation in a Compressor Cylinder by a Modified New Helmholtz Modelling
NASA Astrophysics Data System (ADS)
MA, Y.-C.; MIN, O.-K.
2001-06-01
Pressure pulsation is of critical importance in the design of refrigerant compressors, since it degrades performance by increasing over-compression loss and acts as a noise and vibration source. For the numerical analysis of pressure pulsation, the quasi-steady flow equation has been used because of its easy manipulation, being derived from the pressure difference. By considering the dynamic effects of the fluid, a new Helmholtz resonator model was also proposed on the basis of the continuity and momentum equations, consisting of necks and cavities in the flow manifolds. In this paper, a modified new Helmholtz resonator model is introduced to include the gas inertia effect due to the volume decrease in the cavity. Comparisons between the modified new Helmholtz calculations and experimental results show that it is necessary to include the gas inertia effect in predicting the pressure overshoot at the instant of valve opening, and that the modified new Helmholtz model can describe the over-compression phenomena in the compressor cylinder, a phenomenon which hinders noise-source identification in the compressor.
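For orientation, the classical (unmodified) Helmholtz resonator that such models build on has the resonance frequency f = (c/2π)·sqrt(A/(V·L_eff)), with neck area A, cavity volume V, and an end-corrected neck length L_eff. The sketch below uses hypothetical neck and cavity dimensions and an assumed speed of sound in refrigerant vapor, not the paper's compressor data:

```python
import math

def helmholtz_frequency(c, area, volume, neck_len, neck_radius):
    """Classical Helmholtz resonance f = (c / 2*pi) * sqrt(A / (V * L_eff)).

    L_eff adds a common end-correction of ~1.7 * neck radius; all geometry
    values in the example below are invented for illustration.
    """
    l_eff = neck_len + 1.7 * neck_radius
    return (c / (2.0 * math.pi)) * math.sqrt(area / (volume * l_eff))

c = 180.0                        # assumed speed of sound in the vapor, m/s
r = 0.004                        # neck radius, m
f = helmholtz_frequency(c, math.pi * r * r, 2.0e-5, 0.01, r)
```

The paper's modification enters through the cavity term: as the cylinder volume decreases, the gas inertia in the cavity can no longer be neglected, which the classical formula above omits.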
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity of blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in successive reflections between cavity points are followed in detail. The theoretical model is implemented with simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, to determine the distribution of the cavity points where photon absorption takes place. This distribution could be applied, for example, to the study of the influence of thermal gradients on effective emissivity profiles. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
NASA Astrophysics Data System (ADS)
Funk, O.; Pfeilsticker, K.
2003-03-01
This paper addresses the statistics underlying cloudy-sky radiative transfer (RT) by inspecting the distribution of the path lengths of solar photons. Recent studies indicate that this approach is promising, since it might reveal characteristics of the diffusion process underlying atmospheric radiative transfer (Pfeilsticker, 1999). Moreover, it uses an observable that is directly related to atmospheric absorption and is therefore of climatic relevance. However, these studies depend largely on the accuracy of the measurement of the photon path length distribution (PPD). This paper presents a refined analysis method based on high-resolution spectroscopy of the oxygen A-band. The method is validated using Monte Carlo simulations of atmospheric spectra. Additionally, a new method to measure the effective optical thickness of cloud layers, based on fitting the measured differential transmissions with a 1-dimensional (discrete ordinate) RT model, is presented. These methods are applied to measurements conducted during the cloud radar intercomparison campaign CLARE'98, which supplied the detailed cloud structure information required for the further analysis. For some exemplary cases, measured path length distributions and optical thicknesses are presented and backed by detailed RT model calculations. For all cases, reasonable PPDs can be retrieved and the effects of the vertical cloud structure are found. The inferred cloud optical thicknesses are in agreement with liquid water path measurements.
Voxel modeling of rabbits for use in radiological dose rate calculations.
Caffrey, E A; Johansen, M P; Higley, K A
2016-01-01
Radiation dose to biota is generally calculated using Monte Carlo simulations of whole-body ellipsoids with homogeneously distributed radioactivity throughout. More complex anatomical phantoms, termed voxel phantoms, have been developed to test the validity of these simplistic geometric models. In most voxel models created to date, human tissue composition and density values have been used in lieu of biologically accurate values for non-human biota. This has raised questions regarding the effects of variable tissue composition and density on the fraction of radioactive emission energy absorbed within tissues (i.e., the absorbed fraction, AF), along with implications for age-dependent dose rates as organisms mature. The results of this study on rabbits indicate that the variation in composition between two mammalian tissue types (e.g., human vs rabbit bones) made little difference in self-AF (SAF) values (within 5% over most energy ranges). However, variable tissue density (e.g., bone vs liver) can significantly impact SAF values. An examination of differences across life stages revealed increases in SAF with testis and ovary size of over an order of magnitude for photons and several factors for electrons, indicating the potential for increasing dose rates to these sensitive organs as animals mature. AFs for electron energies of 0.1, 0.2, 0.4, 0.5, 0.7, 1.0, 1.5, 2.0, and 4.0 MeV and photon energies of 0.01, 0.015, 0.02, 0.03, 0.05, 0.1, 0.2, 0.5, 1.0, 1.5, 2.0, and 4.0 MeV are provided for eleven rabbit tissues. The data presented in this study can be used to calculate accurate organ dose rates for rabbits and other small mammals, to aid in extending dose results among different mammal species, and to validate the use of ellipsoidal models for regulatory purposes.
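Given tabulated absorbed fractions like those in this study, the organ self-dose rate follows from D = A·E·AF/m plus a unit conversion. A sketch with a hypothetical activity, organ mass and AF (not values from the rabbit tables):

```python
MEV_TO_J = 1.602e-13  # joules per MeV

def organ_dose_rate_gy_per_day(activity_bq, energy_mev, absorbed_fraction, mass_kg):
    """Self-dose rate D = A * E * AF / m, converted from Gy/s to Gy/day.

    All input values in the example are hypothetical, not from this study.
    """
    gy_per_s = activity_bq * energy_mev * absorbed_fraction * MEV_TO_J / mass_kg
    return gy_per_s * 86400.0

# e.g. 1 kBq of a 1 MeV electron emitter in a 10 g organ with AF = 0.5
rate = organ_dose_rate_gy_per_day(1000.0, 1.0, 0.5, 0.010)
```

The study's point is that AF and m both change as the animal matures, so a single adult-phantom dose rate can misstate the dose to small, growing organs such as gonads.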
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
NASA Astrophysics Data System (ADS)
Kim, Myung-Hee Y.; Wu, Honglu; Hada, Megumi; Cucinotta, Francis
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET γ or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by a charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" depends on the energy and type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill". F. A. Cucinotta, I. Plante, A. L. Ponomarev, and M. Y. Kim, Nuclear Interactions in Heavy Ion Transport and Event
NASA Astrophysics Data System (ADS)
Legler, C. R.; Brown, N. R.; Dunbar, R. A.; Harness, M. D.; Nguyen, K.; Oyewole, O.; Collier, W. B.
2015-06-01
The Scaled Quantum Mechanical (SQM) method of scaling calculated force constants to predict vibrational frequencies is expanded to include a broad array of polarized and augmented basis sets based on the split-valence 6-31G and 6-311G basis sets with the B3LYP density functional. Pulay's original choice of a singly polarized 6-31G(d) basis coupled with the B3LYP functional remains the most computationally economical choice for scaled frequency calculations, but it can be improved upon with additional polarization functions and added diffuse functions for complex molecular systems. New scale factors for the B3LYP density functional and the 6-31G, 6-31G(d), 6-31G(d,p), 6-31G+(d,p), 6-31G++(d,p), 6-311G, 6-311G(d), 6-311G(d,p), 6-311G+(d,p), 6-311G++(d,p), 6-311G(2d,p), 6-311G++(2d,p), and 6-311G++(df,p) basis sets are presented. The doubly d-polarized models did not perform as well, and the source of the decreased accuracy was investigated. An alternate system of generating internal coordinates, which uses the out-of-plane wagging coordinate whenever possible, makes vibrational assignments via potential energy distributions more meaningful. Automated software to produce SQM-scaled vibrational calculations from different molecular orbital packages is presented.
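The core SQM operation is scaling the internal-coordinate force-constant matrix as F′ᵢⱼ = √(sᵢsⱼ)·Fᵢⱼ before diagonalization, with one empirical factor per coordinate type. A toy two-coordinate sketch with invented force constants and scale factors (real SQM works on the full Wilson GF problem):

```python
import numpy as np

def sqm_scale(F, s):
    """Scale an internal-coordinate force-constant matrix: F'_ij = sqrt(s_i s_j) F_ij."""
    s = np.asarray(s, dtype=float)
    return np.sqrt(np.outer(s, s)) * np.asarray(F, dtype=float)

# Toy 2x2 system; force constants and scale factors are invented for illustration.
F = np.array([[5.0, 0.3],
              [0.3, 1.2]])
s = np.array([0.92, 0.95])       # per-coordinate factors, typically near 0.9-1.0
Fs = sqm_scale(F, s)
freqs = np.sqrt(np.linalg.eigvalsh(Fs))   # frequencies in arbitrary units
```

Because the scaling is applied per coordinate type rather than as one global frequency factor, systematic errors in stretches, bends and wags can be corrected independently, which is what the basis-set-specific factor tables in the paper provide.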
Kawano, T.; Moeller, P.; Wilson, W. B.
2008-11-15
Theoretical {beta}-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after {beta} decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Combining molecular dynamics and an electrodiffusion model to calculate ion channel conductance.
Wilson, Michael A; Nguyen, Thuy Hien; Pohorille, Andrew
2014-12-14
Establishing the relation between the structures and functions of protein ion channels, which are protein assemblies that facilitate transmembrane ion transport through water-filled pores, is at the forefront of biological and medical sciences. A reliable way to determine whether our understanding of this relation is satisfactory is to reproduce the measured ionic conductance over a broad range of applied voltages. This can be done in molecular dynamics simulations by way of applying an external electric field to the system and counting the number of ions that traverse the channel per unit time. Since this approach is computationally very expensive we develop a markedly more efficient alternative in which molecular dynamics is combined with an electrodiffusion equation. This alternative approach applies if steady-state ion transport through channels can be described with sufficient accuracy by the one-dimensional diffusion equation in the potential given by the free energy profile and applied voltage. The theory refers only to line densities of ions in the channel and, therefore, avoids ambiguities related to determining the surface area of the channel near its endpoints or other procedures connecting the line and bulk ion densities. We apply the theory to a simple, model system based on the trichotoxin channel. We test the assumptions of the electrodiffusion equation, and determine the precision and consistency of the calculated conductance. We demonstrate that it is possible to calculate current/voltage dependence and accurately reconstruct the underlying (equilibrium) free energy profile, all from molecular dynamics simulations at a single voltage. The approach developed here applies to other channels that satisfy the conditions of the electrodiffusion equation. PMID:25494790
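The one-dimensional electrodiffusion step described above has a closed-form steady-state flux: J = D(ρ₀e^{W(0)/kT} − ρ_L e^{W(L)/kT}) / ∫₀ᴸ e^{W(x)/kT}dx, with the applied voltage added as a ramp to the equilibrium free energy profile. A sketch with an invented barrier profile and an assumed diffusion coefficient (not the paper's MD-derived quantities):

```python
import numpy as np

KT = 0.596  # kcal/mol at ~300 K

def current_pa(x, w_kcal, rho0, rho_l, dv_kcal_e, d_diff=10.0):
    """Steady-state 1D electrodiffusion current for a +1e ion (sketch).

    x in Angstrom, w_kcal the equilibrium free energy profile (kcal/mol),
    rho0/rho_l the boundary line densities (ions/Angstrom), dv_kcal_e the
    applied voltage expressed in kcal/(mol e) (100 mV ~ 2.3), d_diff an
    assumed diffusion coefficient in A^2/ns. Returns picoamperes.
    """
    x = np.asarray(x, dtype=float)
    wtot = np.asarray(w_kcal, dtype=float) + dv_kcal_e * (x - x[0]) / (x[-1] - x[0])
    g = np.exp(wtot / KT)
    denom = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))   # trapezoid rule
    flux = d_diff * (rho0 * g[0] - rho_l * g[-1]) / denom         # ions/ns
    return flux * 160.2                                           # 1 e/ns ~ 160.2 pA

# Invented 2 kcal/mol Gaussian barrier in a 30 A pore, symmetric bath densities.
x = np.linspace(0.0, 30.0, 301)
w = 2.0 * np.exp(-((x - 15.0) ** 2) / 18.0)
i_neg = current_pa(x, w, 0.05, 0.05, -2.3)   # ~ -100 mV drives a positive current
```

Note that only line densities of ions in the channel enter, mirroring the paper's point that no bulk-to-channel surface area conversion is needed; the expensive MD work is in obtaining W(x) and D, after which the whole current/voltage curve is one cheap integral per voltage.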
Model calculated global, regional and megacity premature mortality due to air pollution
NASA Astrophysics Data System (ADS)
Lelieveld, J.; Barlas, C.; Giannadaki, D.; Pozzer, A.
2013-07-01
Air pollution by fine particulate matter (PM2.5) and ozone (O3) has increased strongly with industrialization and urbanization. We estimate the premature mortality rates and the years of human life lost (YLL) caused by anthropogenic PM2.5 and O3 in 2005 for epidemiological regions defined by the World Health Organization (WHO). This is based upon high-resolution global model calculations that resolve urban and industrial regions in greater detail compared to previous work. Results indicate that 69% of the global population is exposed to an annual mean anthropogenic PM2.5 concentration of >10 μg m-3 (WHO guideline) and 33% to > 25 μg m-3 (EU directive). We applied an epidemiological health impact function and find that especially in large countries with extensive suburban and rural populations, air pollution-induced mortality rates have been underestimated given that previous studies largely focused on the urban environment. We calculate a global respiratory mortality of about 773 thousand/year (YLL ≈ 5.2 million/year), 186 thousand/year by lung cancer (YLL ≈ 1.7 million/year) and 2.0 million/year by cardiovascular disease (YLL ≈ 14.3 million/year). The global mean per capita mortality caused by air pollution is about 0.1% yr-1. The highest premature mortality rates are found in the Southeast Asia and Western Pacific regions (about 25% and 46% of the global rate, respectively) where more than a dozen of the most highly polluted megacities are located.
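Epidemiological health impact functions of the kind applied here typically take the attributable-fraction form ΔM = y₀·P·(1 − 1/RR), with a log-linear concentration response RR = exp(β·ΔC). A sketch with invented parameter values, not the paper's WHO-region functions:

```python
import math

def attributable_deaths(baseline_rate, population, beta_per_ugm3, delta_c_ugm3):
    """Deaths/yr attributable to a pollutant concentration increment delta_c.

    RR = exp(beta * delta_c); attributable fraction AF = 1 - 1/RR.
    All parameter values in the example below are invented for illustration.
    """
    rr = math.exp(beta_per_ugm3 * delta_c_ugm3)
    return baseline_rate * population * (1.0 - 1.0 / rr)

# e.g. baseline cardiovascular mortality 0.8%/yr, 1 million people, +20 ug/m3 PM2.5
excess = attributable_deaths(0.008, 1.0e6, 0.006, 20.0)
```

Because the result scales with the exposed population, resolving suburban and rural exposure, as the high-resolution model here does, directly raises the estimated national totals relative to urban-only studies.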
Spectra for the A = 6 reactions calculated from a three-body resonance model
NASA Astrophysics Data System (ADS)
Paris, Mark W.; Hale, Gerald M.
2016-06-01
We develop a resonance model of the transition matrix for three-body breakup reactions of the A = 6 system and present calculations for the observed nucleon spectra, which are important for inertial confinement fusion and Big Bang nucleosynthesis (BBN). The model is motivated by the Faddeev approach, in which the T matrix is written as a sum over the distinct Jacobi coordinate systems corresponding to the particle configurations (α, n-n) and (n, n-α) describing the final state. The structure in the spectra comes from the resonances of the two-body subsystems of the three-body final state, namely the singlet (T = 1) nucleon-nucleon (NN) anti-bound resonance, and the Nα resonances designated the ground state (Jπ = 3/2−) and first excited state (Jπ = 1/2−) of the A = 5 systems 5He and 5Li. These resonances are described in terms of single-level, single-channel R-matrix parameters taken from analyses of NN and Nα scattering data. While the resonance parameters are approximately charge symmetric, external charge-dependent effects are included in the penetrabilities, shifts, and hard-sphere phases, and in the level energies to account for internal Coulomb differences. The shapes of the resonance contributions to the spectrum are fixed by other, two-body data, and the only adjustable parameters in the model are the combinatorial amplitudes for the compound system. These are adjusted to reproduce the observed nucleon spectra from measurements at the Omega and NIF facilities. We perform a simultaneous least-squares fit of the tt neutron spectra and the 3He3He proton spectra. Using these amplitudes we predict the α spectra for both reactions at low energies. Significant differences between the tt and 3He3He spectra are due to Coulomb effects.
Statistical equilibrium calculations for silicon in early-type model stellar atmospheres
NASA Technical Reports Server (NTRS)
Kamp, L. W.
1976-01-01
Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of our range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0-B5, luminosity classes III, IV, and V.
Zhang, Guozhi; Luo, Qingming; Zeng, Shaoqun; Liu, Qian
2008-02-01
A new whole-body computational phantom, the Visible Chinese Human (VCH), was developed using high-resolution transversal photographs of a Chinese adult male cadaver. Following segmentation and tridimensional reconstruction, a voxel-based model that faithfully represented the average anatomical characteristics of the Chinese population was established for radiation dosimetry. The vascular system of VCH was fully preserved, and the cadaver specimen was processed in the standing posture. A total of 8,920 slices were obtained by continuous sectioning at 0.2 mm intervals, and 48 organs and tissues were segmented from the tomographic color images at 5440 x 4080 pixel resolution, corresponding to a voxel size of 0.1 x 0.1 x 0.2 mm3. The resulting VCH computational phantom, consisting of 230 x 120 x 892 voxels with a unit volume of 2 x 2 x 2 mm3, was ported into the Monte Carlo code MCNPX2.5 to calculate the conversion coefficients from kerma free-in-air to absorbed dose and to effective dose for external monoenergetic photon beams from 15 keV to 10 MeV under six idealized external irradiation geometries (anterior-posterior, posterior-anterior, left lateral, right lateral, rotational, and isotropic). Organ masses of the VCH model are fairly different from those of other human phantoms. Differences of up to 300% are observed between doses from ICRP 74 data and those of VIP-Man. Detailed information from the VCH model is able to improve the radiological datasets, particularly for the Chinese population, and provide insights into the research of various computational phantoms. PMID:18188046
NASA Astrophysics Data System (ADS)
Mueller, Roland Guenther
1987-06-01
In order to account for subcooled boiling in calculations of neutron physics and thermal hydraulics of light water reactors (where vapor bubbles strongly influence the nuclear chain reaction), a dynamic model is derived from the time-dependent conservation equations. It contains methods for the time-dependent determination of evaporation and condensation heat flow and for the heat transfer coefficient in subcooled boiling. It enables the complete two-phase flow region to be treated consistently. The calculation model was verified using measured data from experiments covering a wide range of thermodynamic boundary conditions. In all cases very good agreement was reached. The results from the coupling of the new calculation model with a neutron kinetics program prove its suitability for the steady-state and transient calculation of reactor cores.
NASA Astrophysics Data System (ADS)
Zheng, Na; Xu, Hai-Bo
2015-10-01
An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)
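The empirical model treats the scattered angular distribution as a Gaussian in the laboratory frame and combines it with nuclear absorption. A minimal sketch of such a transmission estimate is given below; the Highland formula for the multiple-scattering angle and all numerical values are illustrative assumptions, not the paper's parameterization.

```python
import math

def highland_theta0(p_MeV_c, beta, thickness_cm, X0_cm):
    """RMS multiple-Coulomb-scattering angle (radians) from the
    Highland formula, for a proton of momentum p (MeV/c)."""
    x = thickness_cm / X0_cm
    return (14.1 / (p_MeV_c * beta)) * math.sqrt(x) * (1.0 + math.log10(x) / 9.0)

def transmission(thickness_cm, lambda_nuc_cm, theta0, theta_cut):
    """Fraction of protons that survive nuclear absorption AND are
    scattered within the angular cut, assuming a 2-D Gaussian
    angular distribution with RMS width theta0."""
    survive = math.exp(-thickness_cm / lambda_nuc_cm)
    within_cut = 1.0 - math.exp(-theta_cut**2 / (2.0 * theta0**2))
    return survive * within_cut
```

Widening the angular cut raises the transmitted fraction toward the pure nuclear-attenuation limit exp(-L/lambda), which is the qualitative behavior exploited when radiographing with different angular cuts.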
Austrian Carbon Calculator (ACC) - modelling soil carbon dynamics in Austrian soils
NASA Astrophysics Data System (ADS)
Sedy, Katrin; Freudenschuss, Alexandra; Zethner, Gehard; Spiegel, Heide; Franko, Uwe; Gründling, Ralf; Xaver Hölzl, Franz; Preinstorfer, Claudia; Haslmayr, Hans Peter; Formayer, Herbert
2014-05-01
The project is funded by the Klima- und Energiefonds, Austrian Climate Research Programme, 4th call. Climate change will affect plant productivity due to weather extremes. However, adverse effects could be diminished and satisfying production levels maintained with proper soil conditions. To sustain and optimize the potential of agricultural land for plant productivity it will be necessary to focus on preserving and increasing soil organic carbon (SOC). Carbon sequestration in agricultural soils is strongly influenced by management practice, and present management practices tend to speed up carbon loss. Crop rotation, soil cultivation and the management of crop residues are very important measures influencing carbon dynamics and soil fertility. For the future it will be crucial to focus on practical measures to optimize SOC and to improve soil structure. To predict SOC turnover, the application of the existing humus balance model "Carbon Candy Balance" was verified against results from Austrian long-term field experiments and field data of selected farms. Thus the main aim of the project is to generate a carbon balancing tool box that can be applied in different agricultural production regions to assess humus dynamics due to agricultural management practices. The toolbox will allow the selection of specific regional input parameters for calculating the C-balance at field level. Farmers or other interested users can also apply their own field data to receive the result of C-dynamics under certain management practices within the next 100 years. At regional level the impact of predefined changes in agricultural management
Model creation and electronic structure calculation of amorphous hydrogenated boron carbide
NASA Astrophysics Data System (ADS)
Belhadj Larbi, Mohammed
Boron-rich solids are of great interest for many applications; in particular, amorphous hydrogenated boron carbide (a-BC:H) thin films are a leading candidate for numerous applications such as heterostructure materials, neutron detectors, and photovoltaic energy conversion. Despite this importance, the local structural properties of these materials are not well known, and very few theoretical studies of this family of disordered solids exist in the literature. In order to optimize this material for its potential applications, the structure-property relationships need to be discovered. We use a hybrid method in this endeavor (to the best of our knowledge, the first in the literature) to model and calculate the electronic structure of amorphous hydrogenated boron carbide (a-BC:H). A combination of classical molecular dynamics using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and ab initio quantum mechanical simulations using the Vienna ab initio simulation package (VASP) was used to create geometry-optimized models that consist of disordered hydrogenated twelve-vertex boron carbide icosahedra with hydrogenated carbon cross-linkers. Then, the density functional theory (DFT) based orthogonalized linear combination of atomic orbitals (OLCAO) method was used to calculate the total and partial density of states (TDOS, PDOS), the complex dielectric function epsilon, and the radial pair distribution function (RPDF). The RPDF data stand as predictions that may be compared with future experimental electron or neutron diffraction data. Although the electronic structure simulations were not able to demonstrate a band gap of the same nature as that seen in prior experimental work, a general trend of the composition-properties relationship was established. The content of hydrogen and boron was found to be directly proportional to the decrease in the number of available states near the Fermi energy, and inversely proportional to the
NASA Astrophysics Data System (ADS)
Slavinić, Petra; Cvetković, Marko
2016-01-01
The volume calculation of geological structures is one of the primary goals of interest when dealing with exploration or production of oil and gas in general. Most of those calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and the ones obtained from cell-based models. The comparison is illustrated with four models: dome (1/2 of a ball/sphere), elongated anticline, stratigraphic trap due to lateral facies change, and faulted anticline trap. Results show that Simpson's and trapezoidal rules give a very accurate volume calculation even with a few inputs (isopach areas - ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
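The trapezoidal and Simpson's rule volume estimates from equally spaced isopach areas can be sketched as follows. This is an illustrative reimplementation, not the paper's code; the usage check mirrors the dome (hemisphere) test case, for which Simpson's rule is exact because the cross-section area is quadratic in depth.

```python
import math

def trapezoidal_volume(areas, h):
    """Volume from equally spaced cross-section areas (trapezoidal rule)."""
    return h * (areas[0] / 2 + sum(areas[1:-1]) + areas[-1] / 2)

def simpson_volume(areas, h):
    """Simpson's rule; needs an odd number of ordinates (even number of slices)."""
    if len(areas) % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of area ordinates")
    s = areas[0] + areas[-1]
    s += 4 * sum(areas[1:-1:2])   # odd-index ordinates
    s += 2 * sum(areas[2:-1:2])   # even-index interior ordinates
    return h * s / 3

# Hemisphere ("dome") of radius R: area at depth z is pi*(R^2 - z^2)
R, h = 100.0, 25.0
areas = [math.pi * (R**2 - z**2) for z in (0.0, 25.0, 50.0, 75.0, 100.0)]
```

With only five ordinates, Simpson's rule reproduces the exact hemisphere volume 2*pi*R^3/3, while the trapezoidal rule slightly underestimates it, consistent with the paper's finding that few ordinates already give accurate volumes.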
NASA Astrophysics Data System (ADS)
Boberg, P. R.; Smart, D. F.; Shea, M. A.; Tylka, A. J.
2016-01-01
We have determined eight-second averaged geomagnetic transmissions of 36-80 MeV protons for the large Solar Energetic Particle (SEP) events and geomagnetic activity level variations of October 1989 using measurements from the NOAA-10 and GOES-7 satellites. We have compared the geomagnetic transmission measurements with model calculations employing trajectory tracings through the combined International Geomagnetic Reference Field (IGRF) and Kp/Dst modified 1989 Tsyganenko model. We present threshold geomagnetic transmission geographic latitudes and magnetic latitudes, as well as (a) differences between the measured and calculated threshold geographic latitudes and magnetic latitudes and (b) differences between measured and calculated polar pass durations. We find that for less disturbed geomagnetic activity levels, the measured threshold geomagnetic transmission geographic and magnetic latitudes are typically about 1-1.5° equatorward of the calculated geographic and magnetic latitudes, while for larger geomagnetic activity levels, the measured geographic and magnetic latitudes can be about 1.5° poleward of the calculated geographic and magnetic latitudes. For the eight Kp bins, we also compare the mean measured magnetic latitudes as a function of mean Dst with the mean calculated magnetic latitudes, interpolated to the mean measured Dst values. These comparisons of mean magnetic latitudes illustrate the improvement in the accuracy of the model calculations resulting from employing the actual mean measured Dst values.
NASA Astrophysics Data System (ADS)
Kase, Yuki; Yamashita, Haruo; Sakama, Makoto; Mizota, Manabu; Maeda, Yoshikazu; Tameshige, Yuji; Murayama, Shigeyuki
2015-08-01
In the development of an external radiotherapy treatment planning system, the output factor (OPF) is an important value for the monitor unit calculations. We developed a proton OPF calculation model with consideration for the collimator aperture edge to account for the dependence of the OPF on the collimator aperture and distance in proton beam therapy. Five parameters in the model were obtained by fitting with OPFs measured by a pinpoint chamber with the circular radiation fields of various field radii and collimator distances. The OPF model calculation using the fitted model parameters could explain the measurement results to within 1.6% error in typical proton treatment beams with 6- and 12 cm SOBP widths through a range shifter and a circular aperture more than 10.6 mm in radius. The calibration depth dependences of the model parameters were approximated by linear or quadratic functions. The semi-analytical OPF model calculation was tested with various MLC aperture shapes that included circles of various sizes as well as a rectangle, parallelogram, and L-shape for an intermediate proton treatment beam condition. The pre-calculated OPFs agreed well with the measured values, to within 2.7% error up to 620 mm in the collimator distance, though the maximum difference was 5.1% in the case of the largest collimator distance of 740 mm. The OPF calculation model would allow more accurate monitor unit calculations for therapeutic proton beams within the expected range of collimator conditions in clinical use.
NASA Astrophysics Data System (ADS)
Marchand, R.; Purschke, D.; Samson, J.
2013-03-01
Understanding the physics of interaction between satellites and the space environment is essential in planning and exploiting space missions. Several computer models have been developed over the years to study this interaction. In all cases, simulations are carried out in the reference frame of the spacecraft, and effects such as charging and the formation of electrostatic sheaths and wakes are calculated for given conditions of the space environment. In this paper we present a program used to compute magnetic fields and a number of space plasma and space environment parameters relevant to Low Earth Orbit (LEO) spacecraft-plasma interaction modeling. Magnetic fields are obtained from the International Geomagnetic Reference Field (IGRF) and plasma parameters are obtained from the International Reference Ionosphere (IRI) model. All parameters are computed in the spacecraft frame of reference as a function of its six Keplerian elements. They are presented in a format that can be used directly in most spacecraft-plasma interaction models.
Catalogue identifier: AENY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 270308
No. of bytes in distributed program, including test data, etc.: 2323222
Distribution format: tar.gz
Programming language: FORTRAN 90
Computer: Non-specific
Operating system: Non-specific
RAM: 7.1 MB
Classification: 19, 4.14
External routines: IRI, IGRF (included in the package)
Nature of problem: Compute magnetic field components, direction of the Sun, Sun visibility factor and approximate plasma parameters in the reference frame of a Low Earth Orbit satellite.
Solution method: Orbit integration, calls to IGRF and IRI libraries and transformation of coordinates from geocentric to spacecraft
Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima–Zwanzig–Mori time-convolution (TC) and the other on the Tokuyama–Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called “memory kernel” or “generator,” going beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green’s function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
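The key construction, obtaining the TCL generator G(t) = (dPhi/dt) Phi(t)^{-1} from the reduced propagator Phi(t) alone, can be sketched numerically for a toy model: a system qubit coupled to a single bath qubit. The model, parameters, and vectorization convention below are illustrative assumptions, not the resonant level model treated in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Toy "system + single bath qubit" model (all parameters hypothetical)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
w_s, w_b, g = 1.0, 0.9, 0.2
H = 0.5 * w_s * np.kron(sz, I2) + 0.5 * w_b * np.kron(I2, sz) + g * np.kron(sx, sx)
rho_bath = np.array([[1, 0], [0, 0]], dtype=complex)   # bath initially in |0><0|

def reduced_propagator(t):
    """4x4 matrix Phi(t) acting on the column-vectorized system density matrix,
    built by evolving each basis operator and tracing out the bath."""
    U = expm(-1j * H * t)
    cols = []
    for k in range(4):
        E = np.zeros((2, 2), dtype=complex)
        E[k % 2, k // 2] = 1.0                     # column-stacking basis element
        rho_full = U @ np.kron(E, rho_bath) @ U.conj().T
        red = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out bath
        cols.append(red.reshape(4, order='F'))
    return np.column_stack(cols)

def tcl_generator(t, dt=1e-4):
    """G(t) = (d/dt Phi(t)) Phi(t)^{-1}, via central differences; the inverse
    is taken in the small reduced space, as in the TCL construction."""
    phi_dot = (reduced_propagator(t + dt) - reduced_propagator(t - dt)) / (2 * dt)
    return phi_dot @ np.linalg.inv(reduced_propagator(t))
```

Trace preservation of Phi(t) implies that the trace functional lies in the left null space of G(t), which provides a useful numerical consistency check.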
Model calculations for three-dimensional heat conduction in a real tooth
NASA Astrophysics Data System (ADS)
Foth, Hans-Jochen; Luke, Manfred
2003-06-01
To generate the three-dimensional grid net for a real tooth, an extracted tooth was ground in steps of a few millimetres from the top to the root. After each grinding step the exposed cross section was documented by photography, clearly showing all transition lines between enamel, dentin and the pulp. The photographic reprints were used to determine the x-y-z coordinates of selected points representing the transition lines. In a fairly large-scale procedure these points were combined into a three-dimensional net. FEM calculations were carried out to solve the heat equation numerically for the boundary condition that an IR laser pulse hits the surface for laser ablation. Since all the information on the various types of tissue is included in this model, the results give a huge variety of information. For example, the outer shell of enamel can be displayed exclusively to show its inner surface and the temperature distribution as well as the mechanical stress that build up there.
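The heat-equation step can be illustrated by a one-dimensional explicit finite-difference sketch of a pulsed surface heat flux; the material values below are hypothetical (only roughly enamel-like), and the paper's actual calculation is a three-dimensional FEM on the reconstructed grid.

```python
import numpy as np

def heat_pulse_1d(k=0.9, rho=2.9e3, cp=750.0, L=2e-3, n=100,
                  q0=1e7, t_pulse=1e-3, t_end=5e-3):
    """Explicit FTCS solution of rho*cp*dT/dt = k*d2T/dx2 on a slab of
    depth L, with a laser heat flux q0 (W/m^2) applied at x=0 while
    t < t_pulse and an insulated far end.  Returns the temperature
    rise profile at t_end.  All material values are hypothetical."""
    dx = L / (n - 1)
    alpha = k / (rho * cp)
    dt = 0.3 * dx**2 / alpha       # < dx^2/(2*alpha): stable and monotone
    T = np.zeros(n)                # temperature rise above ambient
    t = 0.0
    while t < t_end:
        q = q0 if t < t_pulse else 0.0
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # surface node: ghost-node treatment of the imposed heat flux
        Tn[0] = T[0] + alpha * dt / dx**2 * 2 * (T[1] - T[0]) \
                + 2 * dt * q / (rho * cp * dx)
        Tn[-1] = Tn[-2]            # insulated far end
        T, t = Tn, t + dt
    return T
```

The profile stays monotonically decreasing from the heated surface, as the maximum principle for the heat equation requires; a 3-D FEM version on an irregular tooth mesh follows the same time-stepping logic with a stiffness matrix in place of the finite-difference stencil.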
Urbanek, Diana C; Berg, Mark A
2007-07-28
For coherent Raman spectroscopies, common femtosecond pulses often lie in an intermediate regime: their bandwidth is too wide for measurements in the frequency domain, but their temporal width is too broad for homodyne measurements in the time domain. A recent paper [S. Nath et al., Phys. Rev. Lett. 97, 267401 (2006)] showed that complete Raman spectra can be recovered from intermediate length pulses by using simultaneous time and frequency detection (TFD). Heterodyne detection and a phase-stable local oscillator at the anti-Stokes frequency are not needed with TFD. This paper examines the theory of TFD Raman in more detail; a companion paper tests the results on experimental data. Model calculations illustrate how information on the Raman spectrum is transferred from the frequency domain to the time domain as the pulse width shortens. When data are collected in both dimensions, the Raman spectrum is completely determined to high resolution, regardless of the probe pulse width. The loss of resolution in many femtosecond coherent Raman experiments is due to the restriction to one-dimensional data collection, rather than due to a fundamental restriction based on the pulse width. PMID:17672689
NASA Astrophysics Data System (ADS)
Urbanek, Diana C.; Berg, Mark A.
2007-07-01
For coherent Raman spectroscopies, common femtosecond pulses often lie in an intermediate regime: their bandwidth is too wide for measurements in the frequency domain, but their temporal width is too broad for homodyne measurements in the time domain. A recent paper [S. Nath et al., Phys. Rev. Lett. 97, 267401 (2006)] showed that complete Raman spectra can be recovered from intermediate length pulses by using simultaneous time and frequency detection (TFD). Heterodyne detection and a phase-stable local oscillator at the anti-Stokes frequency are not needed with TFD. This paper examines the theory of TFD Raman in more detail; a companion paper tests the results on experimental data. Model calculations illustrate how information on the Raman spectrum is transferred from the frequency domain to the time domain as the pulse width shortens. When data are collected in both dimensions, the Raman spectrum is completely determined to high resolution, regardless of the probe pulse width. The loss of resolution in many femtosecond coherent Raman experiments is due to the restriction to one-dimensional data collection, rather than due to a fundamental restriction based on the pulse width.
MEASURED AND CALCULATED LOSSES IN A MODEL DIPOLE FOR GSI'S HEAVY ION SYNCHROTRON.
WANDERER,P.; ANERELLA,M.; GANETIS,G.; GHOSH,A.K.; JOSHI,P.; MARONE,A.; MURATORE,J.; ET AL.
2003-06-15
The new heavy ion synchrotron facility proposed by GSI will have two superconducting magnet rings in the same tunnel, with rigidities of 300 T·m and 100 T·m. Fast ramp times are needed. These can cause problems of ac loss and field distortion in the magnets. For the high energy ring, a 1 m model dipole magnet has been built, based on the RHIC dipole design. This magnet was tested under boiling liquid helium in a vertical dewar. The quench current showed very little dependence on ramp rate. The ac losses, measured by an electrical method, were fitted to straight-line plots of loss/cycle versus ramp rate, thereby separating the eddy current and hysteresis components. These results were compared with calculated values, using parameters which had previously been measured on short samples of cable. Reasonably good agreement between theory and experiment was found, although the measured hysteresis loss is higher than expected in ramps to the highest field levels.
Theory of a zone-boundary collective state in Al: A model calculation
NASA Astrophysics Data System (ADS)
Sturm, K.; Oliveira, L. E.
1984-10-01
A two-band model, which previously was used successfully to evaluate the optical absorption in Al, is applied to derive the k- and ω-dependent dielectric function εM(k, ω) for k parallel to the [100] direction with use of degenerate perturbation theory. Within the nearly-free-electron approximation, it is shown that a pair of (200) Bragg planes gives rise to another pole in the energy-loss function Im[−1/ε] and hence to a collective mode. Both the dispersion of the mode throughout the first Brillouin zone and the strength of the mode are evaluated and are found to agree very well with electron-energy-loss spectroscopy data. A detailed discussion of the nature of this mode is given. The mode is of the same origin as the so-called zone-boundary collective state (ZBCS) first proposed by Foo and Hopfield in Na. Comparison is made with a numerical calculation of εM(k, ω) by Singhal for some discrete k values. The general importance of the ZBCS for the understanding of the energy-loss spectrum and for more complicated systems is pointed out.
Start, G E; Cate, J H; Sagendorf, J F; Ackermann, G R; Dickson, C R; Nukari, N H; Thorngren, L G
1985-02-01
The 1981 Idaho Field Experiment was conducted in southeast Idaho over the Upper Snake River Plain. Nine test-day case studies were conducted between July 15 and 30, 1981. Releases of SF6 gaseous tracer were made for 8-hour periods from 46 m above ground. Tracer was sampled hourly, for 12 sequential hours, at about 100 locations within an area 24 km square. Also, a single total integrated sample, of about 30 hours duration, was collected at approximately 100 sites within an area 48 by 72 km (using 6 km spacings). Extensive tower profiles of meteorology at the release point were collected. Rawinsonde, rabal and pibal soundings were collected at 3 to 5 sites. Horizontal, low-altitude winds were monitored using the INEL mesonet. SF6 tracer plume releases were marked with co-located oil fog releases and bi-hourly sequential launches of tetroon pairs. Aerial LIDAR observations of the oil fog plume and airborne samples of SF6 were collected. High-altitude aerial photographs of daytime plumes were also collected. Volume III contains descriptions of the nine intensive measurement days. General meteorological conditions are described, trajectories and their relationships to analyses of gaseous tracer data are discussed, and overviews of test day cases are presented. Calculations using the ARLFRD MESODIF model are included and related to the gaseous tracer data. Finally, a summary and a list of recommendations are presented. 11 references, 39 figures, 4 tables.
Nuclear model calculation and targetry recipe for production of 110mIn.
Kakavand, T; Mirzaii, M; Eslami, M; Karimi, A
2015-10-01
(110m)In is potentially an important positron emitter that can be used in positron emission tomography. In this work, the excitation functions and production yields of the (110)Cd(d, 2n), (111)Cd(d, 3n), (nat)Cd(d, xn), (110)Cd(p, n), (111)Cd(p, 2n), (112)Cd(p, 3n) and (nat)Cd(p, xn) reactions to produce (110m)In were calculated using the nuclear model code TALYS and compared with the experimental data. The yield of isomeric-state production of (110)In was also compared with that of ground-state production to determine the optimal projectile energy range for high-yield production of the metastable state. The results indicate that (110)Cd(p, n)(110m)In is a high-yield reaction with an isomeric ratio (σ(m)/σ(g)) of about 35 within the optimal incident energy range of 15-5 MeV. To make the target, cadmium was electroplated on a copper substrate under varying electroplating conditions such as pH, DC current density, temperature and time. A set of cold tests was also performed on the final sample under several thermal shocks to verify target resistance. The best electroplated cadmium target was irradiated with 15 MeV protons at a current of 100 µA for one hour, and the production yields of (110m)In and other byproducts were measured. PMID:26141297
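The thick-target yield that underlies such optimal-energy-range comparisons is the integral of the cross section over the stopping power as the beam slows from the entrance energy to the exit energy. The sketch below is a generic illustration; the cross-section and stopping-power functions passed in are placeholders, not TALYS output or measured data.

```python
import numpy as np

def thick_target_yield(E_in, E_out, sigma, stopping_power, n_atoms_per_g, n=2000):
    """Reactions per incident particle for a beam slowing from E_in to
    E_out (MeV):  Y = n_atoms_per_g * integral of sigma(E)/S(E) dE,
    with sigma(E) in cm^2 and S(E) = -dE/d(rho*x) in MeV cm^2/g
    (mass stopping power).  Trapezoidal integration on an energy grid."""
    E = np.linspace(E_out, E_in, n)
    f = sigma(E) / stopping_power(E)
    return n_atoms_per_g * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
```

For a constant cross section and constant stopping power the integral reduces to n_atoms_per_g * sigma * (E_in - E_out) / S, which makes a convenient sanity check; realistic use would insert tabulated excitation functions and stopping powers over the 15-5 MeV window.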
Modeling status and needs for temperature calculations within spent fuel disposal containers
Sullivan, T.M.; Pescatore, C.
1989-10-01
The Brookhaven National Laboratory (BNL) Waste Materials and Environment Modeling (WMEM) Program has been assigned the task of helping the DOE formulate and certify analytical tools needed to support and/or strengthen the Waste Package Licensing strategy. One objective of the WMEM program is to perform qualitative and quantitative analyses of processes related to the internal waste package environment, e.g., temperature, radiolysis effects, presence of moisture, etc. The primary objective of this report is to present the findings of a literature review of work pertinent to predicting intact waste package internal temperatures under spent fuel isolation conditions. Therefore, it is assumed that a repository scale thermal analysis has been conducted and the exterior temperature of the waste package is known. Thus, the problem reduces to one determined by the waste package and its properties. Secondary objectives of this report are to identify key parameters and methodologies for performing the thermal analysis within intact waste containers, and identify sources of uncertainty in these calculations. 37 refs., 6 figs., 2 tabs.
Czuryło, Edward A.; Hellweg, Thomas; Eimer, Wolfgang; Da̧browska, Renata
1997-01-01
The size and the shape of caldesmon as well as its 50-kDa central and 19-kDa C-terminal fragments were investigated by photon correlation spectroscopy. The hydrodynamic radii, which have been calculated from the experimentally obtained translational diffusion coefficients, are 9.8 nm, 6.0 nm, and 2.9 nm, respectively. Moreover, the experimental values for the translational diffusion coefficients are compared with results obtained from hydrodynamic model calculations. Detailed models for the structure of caldesmon in solution are derived. The contour length is about 64 nm for all of the models used for caldesmon. PMID:9017208
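The standard conversion from a measured translational diffusion coefficient to a hydrodynamic radius is the Stokes-Einstein relation. A small sketch follows; the example D value and the water-at-20-degrees-C viscosity are assumptions chosen only to land near the quoted 9.8 nm, not values taken from the paper.

```python
import math

def hydrodynamic_radius(D, T=293.15, eta=1.002e-3):
    """Stokes-Einstein: R_h = kB*T / (6*pi*eta*D).
    D in m^2/s, eta in Pa*s (water at 20 C by default); returns metres."""
    kB = 1.380649e-23   # Boltzmann constant, J/K
    return kB * T / (6.0 * math.pi * eta * D)

# A protein with D ~ 2.19e-11 m^2/s gives R_h close to 9.8 nm (illustrative)
```

Because R_h scales as 1/D, the smaller fragments with faster diffusion naturally yield the smaller radii reported in the abstract.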
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between nodes, including cycles and loops. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same
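The recursive top-down cut-set expansion described above can be sketched for the fault-tree case. This is an illustrative reimplementation, not the CUTSETS code; handling of digraph cycles and the maximum-cut-set-size cutoff are omitted.

```python
def minimal_cut_sets(tree, node):
    """Recursive top-down cut-set expansion for a fault tree.
    `tree` maps each gate name to ('AND'|'OR', [children]); names absent
    from `tree` are basic events.  Returns a list of frozensets,
    smallest cut sets first."""
    if node not in tree:
        return [frozenset([node])]          # a basic event is its own cut set
    gate, children = tree[node]
    child_sets = [minimal_cut_sets(tree, c) for c in children]
    if gate == 'OR':
        # any child's cut set fails the gate
        sets = [s for cuts in child_sets for s in cuts]
    else:
        # AND: cross-product union of one cut set from each child
        sets = [frozenset()]
        for cuts in child_sets:
            sets = [s | c for s in sets for c in cuts]
    # keep only minimal sets: drop any superset of a smaller kept set
    minimal = []
    for s in sorted(set(sets), key=len):
        if not any(m <= s for m in minimal):
            minimal.append(s)
    return minimal
```

For TOP = AND(A, OR(B, C)) this returns the cut sets {A, B} and {A, C}; for TOP = OR(A, AND(A, B)) the non-minimal set {A, B} is absorbed by {A}, matching the minimality property defined in the abstract.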
Yang, Jie; Tang, Grace; Zhang, Pengpeng; Hunt, Margie; Lim, Seng B.; LoSasso, Thomas; Mageras, Gig
2016-01-01
Hypofractionated treatments generally increase the complexity of a treatment plan due to the more stringent constraints of normal tissues and target coverage. As a result, treatment plans contain more modulated MLC motions that may require extra efforts for accurate dose calculation. This study explores methods to minimize the differences between in-house dose calculation and actual delivery of hypofractionated volumetric-modulated arc therapy (VMAT), by focusing on arc approximation and tongue-and-groove (TG) modeling. For dose calculation, the continuous delivery arc is typically approximated by a series of static beams with an angular spacing of 2°. This causes significant error when there is large MLC movement from one beam to the next. While increasing the number of beams will minimize the dose error, calculation time will increase significantly. We propose a solution by inserting two additional apertures at each of the beam angles for dose calculation. These additional apertures were interpolated at two-thirds of a degree before and after each beam. Effectively, there were a total of three MLC apertures at each beam angle, and the weighted average fluence from the three apertures was used for calculation. Because the number of beams was kept the same, calculation time was only increased by about 6%–8%. For a lung plan, areas of high local dose differences (> 4%) between film measurement and calculation with one aperture were significantly reduced in calculation with three apertures. Ion chamber measurement also showed similar results, where improvements were seen with calculations using additional apertures. Dose calculation accuracy was further improved for TG modeling by developing a sampling method for beam fluence matrix. Single element point sampling for fluence transmitted through MLC was used for our fluence matrix with 1 mm resolution. For Varian HDMLC, grid alignment can cause fluence sampling error. To correct this, transmission volume averaging was
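The three-aperture scheme can be sketched as linear interpolation of MLC leaf positions at two-thirds of a degree before and after each control point; the per-aperture fluences would then be averaged. This is an illustrative sketch, not the clinical planning code, and the clamping behavior at the arc ends is an assumption.

```python
import numpy as np

def three_apertures(angles, leaf_positions, delta=2.0 / 3.0):
    """For each control point at gantry angle a, return leaf positions at
    (a - delta, a, a + delta) degrees, linearly interpolated between
    neighbouring control points and clamped at the arc ends.
    `leaf_positions` has shape (n_control_points, n_leaves)."""
    angles = np.asarray(angles, dtype=float)
    leaves = np.asarray(leaf_positions, dtype=float)
    result = []
    for a in angles:
        triple = []
        for shift in (-delta, 0.0, +delta):
            t = float(np.clip(a + shift, angles[0], angles[-1]))
            pos = np.array([np.interp(t, angles, leaves[:, j])
                            for j in range(leaves.shape[1])])
            triple.append(pos)
        result.append(triple)
    return result
```

For leaves moving linearly with gantry angle, the mean of the three interpolated apertures equals the control-point aperture, so the extra apertures matter exactly where leaf motion between control points is non-uniform, which is where the single-aperture approximation fails.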
Yang, Jie; Tang, Grace; Zhang, Pengpeng; Hunt, Margie; Lim, Seng B; LoSasso, Thomas; Mageras, Gig
2016-01-01
Hypofractionated treatments generally increase the complexity of a treatment plan due to the more stringent constraints of normal tissues and target coverage. As a result, treatment plans contain more modulated MLC motions that may require extra efforts for accurate dose calculation. This study explores methods to minimize the differences between in-house dose calculation and actual delivery of hypofractionated volumetric-modulated arc therapy (VMAT), by focusing on arc approximation and tongue-and-groove (TG) modeling. For dose calculation, the continuous delivery arc is typically approximated by a series of static beams with an angular spacing of 2°. This causes significant error when there is large MLC movement from one beam to the next. While increasing the number of beams will minimize the dose error, calculation time will increase significantly. We propose a solution by inserting two additional apertures at each of the beam angles for dose calculation. These additional apertures were interpolated at two-thirds of a degree before and after each beam. Effectively, there were a total of three MLC apertures at each beam angle, and the weighted average fluence from the three apertures was used for calculation. Because the number of beams was kept the same, calculation time was only increased by about 6%-8%. For a lung plan, areas of high local dose differences (> 4%) between film measurement and calculation with one aperture were significantly reduced in calculation with three apertures. Ion chamber measurement also showed similar results, where improvements were seen with calculations using additional apertures. Dose calculation accuracy was further improved for TG modeling by developing a sampling method for beam fluence matrix. Single element point sampling for fluence transmitted through MLC was used for our fluence matrix with 1 mm resolution. For Varian HDMLC, grid alignment can cause fluence sampling error. To correct this, transmission volume averaging was
A Comparative Study of Power and Sample Size Calculations for Multivariate General Linear Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2003-01-01
Repeated measures and longitudinal studies arise often in social and behavioral science research. During the planning stage of such studies, the calculations of sample size are of particular interest to the investigators and should be an integral part of the research projects. In this article, we consider the power and sample size calculations for…
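As a minimal illustration of the planning-stage calculation the paper addresses, the sketch below computes power and sample size for a two-sided two-sample z-test, a simple univariate special case of the multivariate GLM setting (the paper's own formulas are more general; the helper names here are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Inverse normal CDF by bisection (adequate for planning use)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample z-test for a mean difference
    delta with common SD sigma (normal approximation)."""
    z_a = norm_ppf(1.0 - alpha / 2.0)
    shift = delta / (sigma * math.sqrt(2.0 / n_per_group))
    return norm_cdf(shift - z_a) + norm_cdf(-shift - z_a)

def sample_size(delta, sigma, power=0.8, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while two_sample_power(delta, sigma, n, alpha) < power:
        n += 1
    return n
```

For example, detecting a half-SD difference at 80% power and alpha = 0.05 requires 63 subjects per group under this approximation.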
Fast calculation with point-based method to make CGHs of the polygon model
NASA Astrophysics Data System (ADS)
Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji
2014-02-01
Holography is a three-dimensional display technology in which light waves from an object are recorded and reconstructed using a hologram. Computer-generated holograms (CGHs), which are made by simulating light propagation on a computer, can represent virtual objects. However, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using a fast Fourier transform (FFT); the calculation of complex objects composed of multiple polygons requires as many FFTs, so the calculation time unfortunately becomes enormous. In contrast, while the point-based method makes it easy to express complex objects, it still requires an enormous calculation time. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and the CGH can be calculated independently for each pixel. However, expressing a planar object by the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm to express planar objects by the point-based method with a GPU. The proposed method accelerates calculation by obtaining the distance between a pixel and a point light source from the adjacent point light source by a difference method. Under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions only. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
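The difference method described above can be sketched for the squared distance along one equally spaced row of object points: the second difference of r² between adjacent points is the constant 2Δx², so each new squared distance needs only two additions. A minimal sketch, assuming a single row of coplanar points (the paper's GPU implementation and exact recurrence may differ):

```python
def squared_distances_direct(px, py, xs, y0, z):
    """Reference: squared pixel-to-point distances, one multiply-heavy
    evaluation per object point."""
    return [(px - x) ** 2 + (py - y0) ** 2 + z * z for x in xs]

def squared_distances_diff(px, py, x0, dx, n, y0, z):
    """Difference method: for object points x_j = x0 + j*dx, the
    second difference of r^2 is the constant 2*dx^2, so each new
    squared distance costs only two additions."""
    d = (px - x0) ** 2 + (py - y0) ** 2 + z * z   # d_0, computed once
    first = -2.0 * dx * (px - x0) + dx * dx       # d_1 - d_0
    second = 2.0 * dx * dx                        # constant 2nd diff
    out = [d]
    for _ in range(n - 1):
        d += first        # advance squared distance by one point
        first += second   # advance the first difference
        out.append(d)
    return out
```

The recurrence replaces a square root and several multiplications per point with additions, which is what makes dense planar point clouds tractable on a GPU.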
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Rood, Richard B.; Jackman, Charles H.; Weaver, Clark J.
1994-01-01
Two-dimensional (zonally averaged) photochemical models are commonly used to calculate ozone changes due to various perturbations. These include calculating the ozone change expected from changes in lower-stratospheric composition due to the exhaust of a fleet of supersonic aircraft flying in the lower stratosphere. However, zonal asymmetries are anticipated to be important to this sort of calculation. The aircraft are expected to be restricted from flying over land at supersonic speed because of sonic booms, so the pollutant source will not be zonally symmetric. There is loss of pollutant through stratosphere/troposphere exchange, but these processes are spatially and temporally inhomogeneous. Asymmetry in the pollutant distribution contributes to the uncertainty in ozone changes calculated with two-dimensional models. Pollutant distributions for integrations of at least 1 year of continuous pollutant emissions along flight corridors are calculated using a three-dimensional chemistry and transport model. These distributions indicate the importance of asymmetry in the pollutant distributions for evaluating the impact of stratospheric aircraft on ozone. The implications of such pollutant asymmetries for assessment calculations are discussed, considering both homogeneous and heterogeneous reactions.
NASA Astrophysics Data System (ADS)
Haider, Clifton R.; Manduca, Armando; Camp, Jon J.; Fletcher, Joel G.; Robb, Richard A.; Bharucha, Adil E.
2006-03-01
The rectum can distend to accommodate stool and contracts in response to distention during defecation. Rectal motor dysfunctions are implicated in the pathophysiology of functional defecation disorders and fecal incontinence. These rectal motor functions can be studied by intra-luminal measurement of pressure by manometry, or of pressure combined with volume during rectal balloon distention. Pressure-volume (p-v) relationships provide a global index of rectal mechanical properties. However, balloon distention alone does not measure luminal radius or wall thickness, which are necessary to compute wall tension and stress, respectively. It has been suggested that the elastic modulus, which is the linear slope of the stress-strain relationship, is a more accurate measure of wall stiffness. Also, measurements of compliance may not reflect differences in rectal diameter between subjects prior to inflation, and imaging is necessary to determine whether, as has been suggested, rectal pressure-volume relationships are affected by extra-rectal structures. We have developed a technique to measure rectal stress:strain relationships in humans by simultaneous magnetic resonance imaging (MRI) during rectal balloon distention. After a conditioning distention, a rectal balloon was distended with water from 0 to 400 ml in 50 ml steps and imaged at each step with MRI. The fluid-filled balloon was segmented from each volume, and the phase-ordered binary volumes were transformed into a geometric characterization of the inflated rectal surface. Taken together with measurements of balloon pressure and of rectal wall thickness, this model of the rectal surface was used to calculate regional values of curvature, tension, strain, and stress for the rectum. In summary, this technique has the unique ability to non-invasively measure the rectal stress:strain relationship and also to determine whether rectal expansion is limited by extra-rectal structures. This functional information allows the direct clinical analysis
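As a rough illustration of how stress and strain can be derived from pressure, radius, and wall thickness, the sketch below uses the thin-walled sphere form of Laplace's law and a least-squares slope for the elastic modulus. This is a simplified stand-in for the paper's regional surface-model calculation, and the function names are hypothetical:

```python
def wall_stress(p, r, h):
    """Circumferential stress of a thin-walled sphere (Laplace's law):
    sigma = p * r / (2 * h). A simplified stand-in for the regional
    rectal-wall calculation described in the abstract."""
    return p * r / (2.0 * h)

def strain(r, r0):
    """Circumferential strain relative to the undistended radius."""
    return (r - r0) / r0

def elastic_modulus(pressures, radii, h, r0):
    """Elastic modulus as the least-squares slope of the
    stress-strain relationship."""
    xs = [strain(r, r0) for r in radii]
    ys = [wall_stress(p, r, h) for p, r in zip(pressures, radii)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

In practice the radii and wall thickness would come from the segmented MRI volumes at each inflation step, and the pressures from the balloon catheter.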
Calculation of Two-Phase Navier-Stokes Flows Using Phase-Field Modeling
NASA Astrophysics Data System (ADS)
Jacqmin, David
1999-10-01
Phase-field models provide a way to model fluid interfaces as having finite thickness. This can allow the computation of interface movement and deformation on fixed grids. This paper applies phase-field modeling to the computation of two-phase incompressible Navier-Stokes flows. The Navier-Stokes equations are modified by the addition of the continuum forcing -C∇φ, where C is the composition variable and φ is C's chemical potential. The equation for interface advection is replaced by a continuum advective-diffusion equation, with diffusion driven by C's chemical potential gradients. The paper discusses how solutions to these equations approach those of the original sharp-interface Navier-Stokes equations as the interface thickness ɛ and the diffusivity both go to zero. The basic flow physics of phase-field interfaces is discussed. Straining flows can thin or thicken an interface and this must be resisted by a high enough diffusion. On the other hand, too large a diffusion will overly damp the flow. These two constraints result in an upper bound for the diffusivity of O(ɛ) and a lower bound of O(ɛ^2). Within these two bounds, the phase-field Navier-Stokes equations appear to generate an O(ɛ) error relative to the exact sharp-interface equations. An O(h^2/ɛ^2) numerical method is introduced that is energy conserving in the sense that creation of interface energy by convection is always balanced by an equal decrease in kinetic energy caused by surface tension forcing. An O(h^4/ɛ^4) compact scheme is introduced that takes advantage of the asymptotic, comparatively smooth, behavior of the chemical potential. For O(ɛ) accurate phase-field models the optimum path to convergence for this scheme appears to be ɛ ∝ h^(4/5). The asymptotic rate of convergence corresponding to this is O(h^(4/5)), but results at practical resolutions show that the practical convergence of the method is generally considerably faster than linear. Extensive analysis and computations show that
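A way to see the diffuse-interface idea concretely: for the standard double-well free energy f = (C^2 - 1)^2 / 4 (a common normalization, not necessarily the paper's), the equilibrium profile across a flat interface is C(x) = tanh(x / (√2 ɛ)), on which the chemical potential φ = C^3 - C - ɛ^2 C'' vanishes. The sketch below verifies this numerically with central differences:

```python
import math

def equilibrium_profile(x, eps):
    """Equilibrium composition across a diffuse interface for the
    standard double-well free energy f = (C^2 - 1)^2 / 4 (a common
    normalization; the paper's constants may differ)."""
    return math.tanh(x / (math.sqrt(2.0) * eps))

def chemical_potential(xs, C, eps):
    """phi = C^3 - C - eps^2 * C'' on a uniform grid, with C''
    approximated by central differences; phi should vanish (up to
    discretization error) on the equilibrium profile."""
    h = xs[1] - xs[0]
    phi = []
    for i in range(1, len(xs) - 1):
        cpp = (C[i + 1] - 2.0 * C[i] + C[i - 1]) / (h * h)
        phi.append(C[i] ** 3 - C[i] - eps * eps * cpp)
    return phi
```

Since the -C∇φ forcing is driven by gradients of φ, an interface at equilibrium exerts no spurious force; deviations from this profile (e.g. thinning by a straining flow) generate the restoring diffusion the abstract describes.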
NASA Astrophysics Data System (ADS)
Granton, Patrick V.; Verhaegen, Frank
2013-05-01
Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continues to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation and positioning, and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use either in investigating improvements in collimator design or in faster dose calculations. MC models using BEAMnrc were developed for circular and square field sizes from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot, modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSFMC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (ΦAM) and PSFAM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in a water and a mouse phantom differing only by the source used (PSFMC versus PSFAM) were found to agree within 7% and 4% for the smallest 1 and 2 mm collimators, respectively, and within 1% for all other field sizes based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions. Beam profile differences due to collimator alignment were found in both small and large collimators sensitive to shifts of 1
Use of the nuclear model code GNASH to calculate cross section data at energies up to 100 MeV
Young, P.G.; Chadwick, M.B.; Bosoian, M.
1992-12-01
The nuclear theory code GNASH has been used to calculate nuclear data for incident neutrons, protons, and deuterons at energies up to 100 MeV. Several nuclear models and theories are important in the 10-100 MeV energy range, including Hauser-Feshbach statistical theory, spherical and deformed optical models, preequilibrium theory, nuclear level densities, fission theory, and direct reaction theory. In this paper we summarize general features of the models in GNASH and describe the methodology utilized to determine relevant model parameters. We illustrate the significance of several of the models and include comparisons with experimental data for certain target materials that are important in applications.
NASA Technical Reports Server (NTRS)
Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard
1991-01-01
Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and Diendorfer-Uman (DU) models, with the channel-base current assumed in Nucci et al. on the one hand and the channel-base current assumed in Diendorfer and Uman on the other. The characteristics of the field wave shapes are shown to be very sensitive to the channel-base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS model. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
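The transmission-line family of return-stroke models referred to above can be illustrated by the channel current distribution they assume: the sketch below shows the generic form i(z, t) = A(z) · i(0, t - z/v), with optional exponential attenuation A(z) = exp(-z/λ) as in the MTLE variant. This is a textbook form under stated assumptions, not the paper's specific parameterization:

```python
import math

def tl_current(i_base, z, t, v, lam=None):
    """Channel current in transmission-line return-stroke models:
    the channel-base current i_base (a function of time) propagates
    upward at speed v, optionally attenuated exponentially with
    height (lam is the decay length; None means no attenuation,
    as in the unmodified transmission-line form)."""
    if t < z / v:
        return 0.0  # the current front has not yet reached height z
    atten = 1.0 if lam is None else math.exp(-z / lam)
    return atten * i_base(t - z / v)
```

The remote electric and magnetic fields are then obtained by integrating contributions of i(z, t) over the channel with the appropriate retarded times, which is where the sensitivity to the assumed channel-base current noted in the abstract enters.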
Bondarenko, V A; Mitrikas, V G
2007-01-01
The model of a geometrical human body phantom developed for calculating the shielding functions of representative points of the body organs and systems is similar to the anthropomorphic phantom. This phantom can be integrated with the shielding model of the ISS Russian orbital segment to analyze the radiation loads on crewmembers in different compartments of the vehicle. Calculating the doses absorbed by the body systems in terms of the representative points shows that the doses depend essentially on the spatial orientation of the phantom (eye direction). It also enables evaluation of the absorbed dose from the shielding functions as a mean over the representative points and phantom orientations. PMID:18672518
Higher Order Mode Power Calculation of the 56 MHz SRF Cavity
Choi,E.
2008-08-01
In this report, the higher order mode (HOM) power dissipated in the load of the 56 MHz RF cavity is calculated. The HOM frequencies and Q factors with the inserted HOM damper are obtained from simulations with the MWS and SLAC codes.
Howard, David M.; Kearfott, Kimberlee J.; Wilderman, Scott J.
2011-01-01
High computational requirements restrict the use of Monte Carlo algorithms for dose estimation in a clinical setting, despite the fact that they are considered more accurate than traditional methods. The goal of this study was to compare mean tumor absorbed dose estimates using the unit density sphere model incorporated in OLINDA with previously reported dose estimates from Monte Carlo simulations using the dose planning method (DPM) Monte Carlo particle transport algorithm. The dataset (57 tumors, 19 lymphoma patients who underwent SPECT/CT imaging during I-131 radioimmunotherapy) included tumors of varying size, shape, and contrast. OLINDA calculations were first carried out using the baseline tumor volume and residence time from SPECT/CT imaging during 6 days post-tracer and 8 days post-therapy. Next, the OLINDA calculation was split over multiple time periods and summed to get the total dose, which accounted for the changes in tumor size. Results from the second calculation were compared with results determined by coupling SPECT/CT images with the DPM Monte Carlo algorithm. Results from the OLINDA calculation accounting for changes in tumor size were almost always higher (median 22%, range −1%–68%) than the results from OLINDA using the baseline tumor volume because of tumor shrinkage. There was good agreement (median −5%, range −13%–2%) between the OLINDA results and the self-dose component from the Monte Carlo calculations, indicating that tumor shape effects are a minor source of error when using the sphere model. However, because the sphere model ignores cross-irradiation, the OLINDA calculation significantly underestimated (median 14%, range 2%–31%) the total tumor absorbed dose compared with Monte Carlo. These results show that when the quantity of interest is the mean tumor absorbed dose, the unit density sphere model is a practical alternative to Monte Carlo for some applications. For applications requiring higher accuracy, computer-intensive Monte
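The time-split OLINDA calculation described above amounts to summing, over the imaging periods, a mass-dependent sphere-model S value times the cumulated activity in that period. A minimal sketch; the interface and the toy inverse-mass S value are hypothetical (OLINDA's actual S values come from its sphere-model tables):

```python
def total_dose(periods, s_factor):
    """Mean absorbed dose summed over time periods with changing
    tumor mass: each period contributes S(mass) * cumulated activity.
    `periods` is a list of (mass, cumulated_activity) pairs and
    `s_factor` maps mass to a sphere-model S value (hypothetical
    interface; real S values come from OLINDA's tables)."""
    return sum(s_factor(mass) * a_tilde for mass, a_tilde in periods)
```

Splitting the calculation this way is what lets a shrinking tumor accrue a higher dose than the baseline-volume estimate, as reported above.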