NASA Astrophysics Data System (ADS)
Zhang, ZhenHua
2016-07-01
The high-spin rotational properties of two-quasiparticle bands in the doubly-odd 166Ta are analyzed using the cranked shell model with pairing correlations treated by a particle-number conserving method, in which the blocking effects are taken into account exactly. The experimental moments of inertia and alignments and their variations with the rotational frequency ħω are reproduced very well by the particle-number conserving calculations, which provides reliable support for the configuration assignments made for these bands in previous works. The backbendings in these two-quasiparticle bands are analyzed through the calculated occupation probabilities and the contributions of each orbital to the total angular momentum alignments. The moments of inertia and alignments for the Gallagher-Moszkowski partners of these observed two-quasiparticle rotational bands are also predicted.
Relativistic shell model calculations
NASA Astrophysics Data System (ADS)
Furnstahl, R. J.
1986-06-01
Shell model calculations are discussed in the context of a relativistic model of nuclear structure based on renormalizable quantum field theories of mesons and baryons (quantum hadrodynamics). The relativistic Hartree approximation to the full field theory, with parameters determined from bulk properties of nuclear matter, predicts a shell structure in finite nuclei. Particle-hole excitations in finite nuclei are described in an RPA calculation based on this QHD ground state. The particle-hole interaction is prescribed by the Hartree ground state, with no additional parameters. Meson retardation is neglected in deriving the RPA equations, but it is found to have negligible effects on low-lying states. The full Dirac matrix structure is maintained throughout the calculation; no nonrelativistic reductions are made. Despite sensitive cancellations in the ground state calculation, reasonable excitation spectra are obtained for light nuclei. The effects of including charged mesons, problems with heavy nuclei, and prospects for improved and extended calculations are discussed.
Calculations of signature for Dy, Er, Yb nuclei
Mueller, W.F.; Jensen, H.J.; Reviot, W.
1993-10-01
Energy signature splitting Δe′ of rotational bands depends sensitively on deformation, pair correlations, and the Fermi level in the particular nucleus. Calculating Δe′ is therefore very useful in understanding the experimentally observed properties of such bands. In principle, one can extract Δe′ from Total Routhian Surface (TRS) calculations as well as from the Cranked Shell Model (CSM). However, the available codes are not based on a fully self-consistent treatment of all critical parameters: deformation, pairing, and Fermi level. The TRS calculations, while modeling the deformation in a "realistic" manner as a function of rotational frequency and changes in the quasiparticle configuration, have deficiencies, particularly in the treatment of pairing. The CSM codes, on the other hand, estimate pairing and the location of the Fermi level more precisely than the TRS codes, but work under the assumption of a constant deformation. We have developed a method to calculate Δe′ that utilizes the most advanced features of both types of codes. This ensures that the best parameter values are used as input for calculating the routhians. As a test, we have used a series of odd-A Dy, Er, and Yb nuclei around A = 160 and compared the results for the νi13/2 shell with experimental data on Δe′. Details of our method will be discussed and the comparison will be presented.
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Numerical Calculation of Model Rocket Trajectories.
ERIC Educational Resources Information Center
Keeports, David
1990-01-01
Discussed is the use of model rocketry to teach the principles of Newtonian mechanics. Included are forces involved; calculations for vertical launches; two-dimensional trajectories; and variations in mass, drag, and launch angle. (CW)
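The kind of calculation the article describes (thrust, gravity, and drag for a vertical launch) can be sketched with a simple time-stepping integration. All numbers below are illustrative placeholders, not values from the article, and propellant mass loss is ignored:

```python
MASS = 0.1          # kg, assumed constant rocket mass (hypothetical)
THRUST = 5.0        # N, motor thrust during the burn (hypothetical)
BURN_TIME = 1.0     # s, motor burn duration (hypothetical)
DRAG_COEFF = 0.001  # N*s^2/m^2, lumped 0.5*rho*Cd*A drag factor (hypothetical)
G = 9.81            # m/s^2, gravitational acceleration
DT = 0.01           # s, Euler integration step

def vertical_flight():
    """Euler-integrate a vertical launch until the rocket returns to the pad."""
    t, v, y = 0.0, 0.0, 0.0
    apogee = 0.0
    while y >= 0.0:
        thrust = THRUST if t < BURN_TIME else 0.0
        drag = DRAG_COEFF * v * abs(v)      # quadratic drag, opposing motion
        a = (thrust - drag) / MASS - G      # Newton's second law
        v += a * DT
        y += v * DT
        t += DT
        apogee = max(apogee, y)
    return apogee, t

apogee, flight_time = vertical_flight()
```

Varying `DRAG_COEFF`, `MASS`, or the thrust profile reproduces the classroom exercises the article mentions; a two-dimensional trajectory only requires carrying a second velocity component and a launch angle.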
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full utilization for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
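As a hedged illustration of the general idea (projection-based reduction; this is not the paper's hybrid method for coupled codes), a reduced basis can be extracted from a matrix of model-output snapshots with an SVD. All dimensions and data below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: 200-dimensional outputs from 30 model runs,
# constructed to have approximately rank-3 structure plus small noise.
basis_true = rng.standard_normal((200, 3))
snapshots = basis_true @ rng.standard_normal((3, 30)) \
    + 1e-6 * rng.standard_normal((200, 30))

# Dominant input-output directions from the singular value decomposition.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3                       # retained rank, chosen from the singular value decay
U_r = U[:, :r]              # reduced basis

# A new full-order vector is compressed to r coordinates and lifted back.
x = basis_true @ rng.standard_normal(3)
x_reduced = U_r.T @ x       # dimension 200 -> 3
x_approx = U_r @ x_reduced  # nearly exact here because x lies in the rank-3 subspace
```

Repeated executions then operate on the 3-dimensional coordinates rather than the 200-dimensional state, which is the cost reduction the abstract refers to.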
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Modeling, calculating, and analyzing multidimensional vibrational spectroscopies.
Tanimura, Yoshitaka; Ishizaki, Akihito
2009-09-15
Spectral line shapes in a condensed phase contain information from various dynamic processes that modulate the transition energy, such as microscopic dynamics, inter- and intramolecular couplings, and solvent dynamics. Because nonlinear response functions are sensitive to the complex dynamics of chemical processes, multidimensional vibrational spectroscopies can separate these processes. In multidimensional vibrational spectroscopy, the nonlinear response functions of a molecular dipole or polarizability are measured using ultrashort pulses to monitor inter- and intramolecular vibrational motions. Because the complex profile of such signals depends on the many dynamic and structural aspects of a molecular system, researchers would like to have a theoretical understanding of these phenomena. In this Account, we explore and describe the roles of different physical phenomena that arise from the peculiarities of the system-bath coupling in multidimensional spectra. We also present simple analytical expressions for a weakly coupled multimode Brownian system, which we use to analyze the results obtained by the experiments and simulations. To calculate the nonlinear optical response, researchers commonly use a particular form of a system Hamiltonian fit to the experimental results. The optical responses of molecular vibrational motions have been studied in either an oscillator model or a vibrational energy state model. In principle, both models should give the same results as long as the energy states are chosen to be the eigenstates of the oscillator model. The energy state model can provide a simple description of nonlinear optical processes because the diagrammatic Liouville space theory developed for electronically resonant spectroscopies can easily handle the three or four energy states involved in high-frequency vibrations. However, the energy state model breaks down if we include the thermal excitation and relaxation processes in the dynamics to put the system in a
Isomer ratio calculations using modeled discrete levels
Gardner, M.A.; Gardner, D.G.; Hoff, R.W.
1984-10-16
Isomer ratio calculations were made for the reactions 175Lu(n,γ)176m,gLu, 175Lu(n,2n)174m,gLu, 237Np(n,2n)236m,gNp, 241Am(n,γ)242m,gAm, and 243Am(n,γ)244m,gAm using modeled level structures in the deformed, odd-odd product nuclei. The hundreds of discrete levels and their gamma-ray branching ratios provided by the modeling are necessary to achieve agreement with experiment. Many rotational bands must be included in order to obtain a sufficiently representative selection of K quantum numbers. The levels of each band must be extended to appropriately high values of angular momentum.
Density functional calculations on model tyrosyl radicals.
Himo, F; Gräslund, A; Eriksson, L A
1997-01-01
A gradient-corrected density functional theory approach (PWP86) has been applied, together with large basis sets (IGLO-III), to investigate the structure and hyperfine properties of model tyrosyl free radicals. In nature, these radicals are observed in, e.g., the charge transfer pathways in photosystem II (PSII) and in ribonucleotide reductases (RNRs). By comparing spin density distributions and proton hyperfine couplings with experimental data, it is confirmed that the tyrosyl radicals present in the proteins are neutral. It is shown that hydrogen bonding to the phenoxyl oxygen atom, when present, causes a reduction in spin density on O and a corresponding increase on C4. Calculated proton hyperfine coupling constants for the beta-protons show that the alpha-carbon is rotated 75-80 degrees out of the plane of the ring in PSII and Salmonella typhimurium RNR, but only 20-30 degrees in, e.g., Escherichia coli, mouse, herpes simplex, and bacteriophage T4-induced RNRs. Furthermore, based on the present calculations, we have revised the empirical parameters used in the experimental determination of the oxygen spin density in the tyrosyl radical in E. coli RNR and of the ring carbon spin densities, from measured hyperfine coupling constants. PMID: 9083661
A Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
2000-01-01
This paper describes a radiative transfer model developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. We use a newly developed k-distribution model for both the thermal and solar parts of the spectrum. We employ a generalized two-stream approximation for the scattering by aerosol and clouds. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. We perform several calculations focusing primarily on the question of absorption of solar radiation by gases and aerosols. We estimate the accuracy of the k-distribution to be approx. 1 W/sq m for the gaseous absorption in the solar spectrum. We estimate the accuracy of the two-stream method to be 3-12 W/sq m for the downward solar flux and 1-5 W/sq m for the upward solar flux at the top of atmosphere, depending on the optical depth of the aerosol layer. We also show that the effect of ignoring aerosol absorption on the downward solar flux at the surface is 50 W/sq m for the TARFOX aerosol for an optical depth of 0.5 and 150 W/sq m for a highly absorbing mineral aerosol. Thus, we conclude that the uncertainty introduced by the aerosol solar radiative properties (and merely assuming some "representative" model) can be considerably larger than the error introduced by the use of a two-stream method.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-10-01
batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
Radiative accelerations for evolutionary model calculations
Richer, J.; Michaud, G.; Rogers, F.; Iglesias, C.; Turcotte, S.; LeBlanc, F.
1998-01-01
Monochromatic opacities from the OPAL database have been used to calculate radiative accelerations for the 21 included chemical species. The 10^4 frequencies used are sufficient to calculate the radiative accelerations of many elements for T > 10^5 K, using frequency sampling. This temperature limit is higher for less abundant elements. As the abundances of Fe, He, or O are varied, the radiative acceleration of other elements changes, since abundant elements modify the frequency dependence of the radiative flux and the Rosseland opacity. Accurate radiative accelerations for a given element can only be obtained by allowing the abundances of the species that contribute most to the Rosseland opacity to vary during the evolution and recalculating the radiative accelerations and the Rosseland opacity as the evolution proceeds. There are physical phenomena that cannot be included in the calculations if one uses only the OPAL data. For instance, one should correct for the momentum given to the electron in a photoionization. Such effects are evaluated using atomic data from the Opacity Project, and correction factors are given.
Model potential calculations of lithium transitions.
NASA Technical Reports Server (NTRS)
Caves, T. C.; Dalgarno, A.
1972-01-01
Semi-empirical potentials are constructed that have eigenvalues close in magnitude to the binding energies of the valence electron in lithium. The potentials include the long range polarization force between the electron and the core. The corresponding eigenfunctions are used to calculate dynamic polarizabilities, discrete oscillator strengths, photoionization cross sections and radiative recombination coefficients. A consistent application of the theory imposes a modification on the transition operator, but its effects are small for lithium. The method presented can be regarded as a numerical generalization of the widely used Coulomb approximation.
CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING
Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...
Model calculates wax deposition for North Sea oils
Majeed, A.; Bringedal, B.; Overa, S.
1990-06-18
A model for calculation of wax formation and deposition in pipelines and process equipment has been developed along with a new method for wax-equilibrium calculations using input from TBP distillation cuts. Selected results from the wax formation and deposition model have been compared with laboratory data from wax equilibrium and deposition experiments, and there have been some field applications of the model.
Models for Automated Tube Performance Calculations
C. Brunkhorst
2002-12-12
High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
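One way to realize such a subroutine, sketched here under the assumption of the textbook 3/2-power triode law rather than the paper's curve-fitted models, is a closed-form plate-current function of the element voltages. The perveance `k` and amplification factor `mu` below are hypothetical values, not taken from any tube data sheet:

```python
def triode_plate_current(v_grid, v_plate, k=2e-6, mu=20.0):
    """Plate current (A) from grid and plate voltages (V) via the
    classical triode relation I = k * (Vg + Vp/mu)^(3/2)."""
    drive = v_grid + v_plate / mu
    # Below cutoff (negative effective drive) no current flows.
    return k * drive ** 1.5 if drive > 0.0 else 0.0
```

A function of this form can be called iteratively against a load line, as the abstract describes, to solve for the operating point of an amplifier into a given loading impedance.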
Renormalization-group calculation of excitation properties for impurity models
NASA Astrophysics Data System (ADS)
Yoshida, M.; Whitaker, M. A.; Oliveira, L. N.
1990-05-01
The renormalization-group method developed by Wilson to calculate thermodynamical properties of dilute magnetic alloys is generalized to allow the calculation of dynamical properties of many-body impurity Hamiltonians. As a simple illustration, the impurity spectral density for the resonant-level model (i.e., the U=0 Anderson model) is computed. As a second illustration, for the same model, the longitudinal relaxation rate for a nuclear spin coupled to the impurity is calculated as a function of temperature.
Quantum Biological Channel Modeling and Capacity Calculation
Djordjevic, Ivan B.
2012-01-01
Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There have been many attempts to explain the structure of the genetic code and the transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of the biological channel based on codon base kets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during the DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against the corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards future study of quantum DNA error correction, developing a quantum mechanical model of aging, developing quantum mechanical models for tumors/cancer, and study of intracellular dynamics in general. PMID: 25371271
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Cucinotta, F. A.; Zeitlin, C. J.; Cleghorn, T. F.
2004-01-01
The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft that is currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model calculated predictions. Model calculations are compared with the MARIE measurements of dose, dose-equivalent values, along with the available particle flux distribution. Model calculated particle flux includes GCR elemental composition of atomic number, Z = 1-28 and mass number, A = 1-58. Particle flux calculations specific for the current MARIE mapping period are reviewed and presented.
Method and models for R-curve instability calculations
NASA Technical Reports Server (NTRS)
Orange, Thomas W.
1988-01-01
This paper presents a simple method for performing elastic R-curve instability calculations. For a single material-structure combination, the calculations can be done on some pocket calculators. On microcomputers and larger machines, the method permits the development of a comprehensive program having libraries of driving-force equations for different configurations and R-curve model equations for different materials. The paper also presents several model equations for fitting to experimental R-curve data, both linear elastic and elastoplastic. The models are fit to data from the literature to demonstrate their viability.
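A minimal numerical sketch of an elastic R-curve instability calculation in this spirit (not the paper's program): for a driving force of the form K = sigma*sqrt(pi*a), the stress needed to hold the crack at length a is sigma(a) = K_R(a - a0)/sqrt(pi*a), growth is stable while sigma(a) rises, and instability occurs at its maximum (the tangency point). The power-law R-curve parameters below are hypothetical:

```python
import math

def instability_point(a0=0.01, k0=30.0, beta=200.0, n=0.5):
    """Scan crack lengths a > a0 for the maximum of
    sigma(a) = K_R(a - a0) / sqrt(pi * a), i.e. the instability point.
    Units assumed: a in m, K in MPa*sqrt(m), sigma in MPa (hypothetical)."""
    best_sigma, best_a = 0.0, a0
    a = a0 + 1e-6
    while a < 10 * a0:
        kr = k0 + beta * (a - a0) ** n       # assumed power-law R-curve
        sigma = kr / math.sqrt(math.pi * a)  # stress sustaining K = K_R
        if sigma > best_sigma:
            best_sigma, best_a = sigma, a
        a += 1e-5
    return best_sigma, best_a
```

Swapping in a different driving-force equation or R-curve model changes only the two lines inside the loop, which is the library structure the paper envisions.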
Campbell, David L.; Watts, Raymond D.
1978-01-01
Program listing, instructions, and example problems are given for 12 programs for the interpretation of geophysical data, for use on Hewlett-Packard models 67 and 97 programmable hand-held calculators. These are (1) gravity anomaly over a 2D prism with ≤ 9 vertices--Talwani method; (2) magnetic anomaly (ΔT, ΔV, or ΔH) over a 2D prism with ≤ 8 vertices--Talwani method; (3) total-field magnetic anomaly profile over a thick sheet/thin dike; (4) single dipping seismic refractor--interpretation and design; (5) ≤ 4 dipping seismic refractors--interpretation; (6) ≤ 4 dipping seismic refractors--design; (7) vertical electrical sounding over ≤ 10 horizontal layers--Schlumberger or Wenner forward calculation; (8) vertical electrical sounding--Dar Zarrouk calculations; (9) magnetotelluric plane-wave apparent conductivity and phase angle over ≤ 9 horizontal layers--forward calculation; (10) petrophysics--a.c. electrical parameters; (11) petrophysics--elastic constants; (12) digital convolution with a ≤ 10-length filter.
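Of the twelve routines, program (12) restates cleanly in modern code. A minimal sketch of a full discrete convolution with a short filter (no HP-67/97 specifics assumed):

```python
def convolve(signal, filt):
    """Full discrete convolution of a signal with a short filter
    (program (12) used filters of length <= 10)."""
    out = [0.0] * (len(signal) + len(filt) - 1)
    for i, s in enumerate(signal):
        for j, f in enumerate(filt):
            out[i + j] += s * f  # accumulate each shifted, scaled copy
    return out
```

For example, convolving `[1, 2, 3]` with the two-tap filter `[1, 1]` gives `[1, 3, 5, 3]`, the running pairwise sums with tapered ends.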
Model calculations of nuclear data for biologically-important elements
Chadwick, M.B.; Blann, M.; Reffo, G.; Young, P.G.
1994-05-01
We describe calculations of neutron-induced reactions on carbon and oxygen for incident energies up to 70 MeV, the relevant clinical energy range for neutron radiation therapy. Our calculations using the FKK-GNASH, GNASH, and ALICE codes are compared with experimental measurements, and their usefulness for modeling reactions on biologically important elements is assessed.
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
D.M. Jolley
2001-12-18
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3, and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculations
D. M. Jolley
2001-09-24
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3, and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculation
D. M. Jolley
2001-10-31
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3, and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model, and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
Microbial Communities Model Parameter Calculation for TSPA/SR
D. Jolley
2001-07-16
This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section II-5.3. Second, this calculation provides the information necessary to supersede DTN MO9909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN, MO0012MAJIONIS.000, that is intended to replace the currently cited DTN GS980908312322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit and eclipse light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 s with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman.
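For orientation, the kind of model batman evaluates can be sketched in the small-planet approximation, where the blocked flux is the planet's area times the local normalized stellar intensity. This is a rough stand-in, not batman's exact integration algorithm, and all parameter values are hypothetical:

```python
import math

def quadratic_intensity(r, u1, u2):
    """Quadratic limb-darkening profile I(r)/I(0) at radial position r in [0, 1)."""
    mu = math.sqrt(1.0 - r * r)
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

def transit_depth(z, rp, u1, u2):
    """Fractional flux decrement for a small planet (rp << 1) whose center is at
    projected separation z (stellar radii). In the small-planet approximation the
    blocked flux is rp^2 times the local intensity, normalized by the
    disk-integrated intensity pi*(1 - u1/3 - u2/6). Grazing geometries are
    handled only crudely here, by clamping r just inside the limb."""
    if z >= 1.0 + rp:              # planet fully off the stellar disk
        return 0.0
    norm = 1.0 - u1 / 3.0 - u2 / 6.0
    r = min(z, 1.0 - 1e-9)
    return rp * rp * quadratic_intensity(r, u1, u2) / norm

# Mid-transit depth for a hot-Jupiter-like case (hypothetical parameters):
depth = transit_depth(0.0, 0.1, 0.1, 0.3)
```

batman itself integrates over the occulted area to a user-set truncation error; the sketch above breaks down for planets that are not small compared to the star.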
Nonlinear triggered lightning models for use in finite difference calculations
NASA Technical Reports Server (NTRS)
Rudolph, Terence; Perala, Rodney A.; Ng, Poh H.
1989-01-01
Two nonlinear triggered lightning models have been developed for use in finite difference calculations. Both are based on three-species air chemistry physics and couple the nonlinearly calculated air conductivity to Maxwell's equations. The first model is suitable for use in three-dimensional modeling and has been applied to the analysis of triggered lightning on the NASA F106B Thunderstorm Research Aircraft. The model calculates number densities of positive ions, negative ions, and electrons as a function of time and space through continuity equations, including convective derivative terms. The set of equations is closed by using experimentally determined mobilities, and the mobilities are also used to determine the air conductivity. Results from the model's application to the F106B are shown. The second model is two-dimensional and incorporates an enhanced air chemistry formulation. Momentum conservation equations replace the mobility assumption of the first model. Energy conservation equations for neutrals, heavy ions, and electrons are also used. Energy transfer into molecular vibrational modes is accounted for. The purpose of the enhanced model is to include the effects of temperature in the air breakdown, a necessary step if the model is to simulate more than the very earliest stages of breakdown. Therefore, the model also incorporates a temperature-dependent electron avalanche rate. Results from the model's application to breakdown around a conducting ellipsoid placed in an electric field are shown.
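A heavily simplified, zero-dimensional sketch of the first model's structure follows: species continuity equations closed by mobilities, with the same mobilities yielding the conductivity. The rate coefficients and mobilities are round-number placeholders, not the experimentally determined values used in the paper, and all transport and convective terms are omitted:

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def step(ne, npos, nneg, dt, S=1.0e6, beta=1.0e-13, eta=1.0e5):
    """One explicit Euler step of zero-dimensional continuity equations:
      dne/dt   = S - beta*ne*npos - eta*ne     (ionization, recombination, attachment)
      dnneg/dt = eta*ne - beta*nneg*npos
      dnpos/dt = S - beta*ne*npos - beta*nneg*npos
    S, beta, eta are illustrative placeholders, not the paper's values."""
    dne = S - beta * ne * npos - eta * ne
    dnneg = eta * ne - beta * nneg * npos
    dnpos = S - beta * ne * npos - beta * nneg * npos
    return ne + dt * dne, npos + dt * dnpos, nneg + dt * dnneg

def conductivity(ne, npos, nneg, mu_e=4.0e-2, mu_ion=2.0e-4):
    """Air conductivity (S/m): sigma = e*(ne*mu_e + (npos + nneg)*mu_ion)."""
    return E_CHARGE * (ne * mu_e + (npos + nneg) * mu_ion)

ne = npos = nneg = 1.0e9      # initial number densities, m^-3 (hypothetical)
for _ in range(1000):
    ne, npos, nneg = step(ne, npos, nneg, dt=1.0e-9)
sigma = conductivity(ne, npos, nneg)
```

Note that the rate terms conserve net charge by construction: the electron and negative-ion gains sum to exactly the positive-ion gain at every step.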
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
A model of incomplete chromatic adaptation for calculating corresponding colors
Fairchild, M.D.
1990-01-01
A new mathematical model of chromatic adaptation for calculating corresponding colors across changes in illumination is formulated and tested. This model consists of a modified von Kries transform that accounts for incomplete levels of adaptation. The model predicts that adaptation will be less complete as the saturation of the adapting stimulus increases and more complete as the luminance of the adapting stimulus increases. The model is tested with experimental results from two different studies and found to be significantly better at predicting corresponding colors than other proposed models. This model represents a first step toward the specification of color appearance across varying conditions.
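The general shape of an incomplete von Kries transform can be sketched as below. The single degree-of-adaptation factor D is a generic stand-in; the 1990 model derives its cone-specific factors from the luminance and saturation of the adapting stimulus, which this sketch does not reproduce:

```python
def incomplete_von_kries(lms, lms_white, D):
    """Von Kries-style adaptation with degree of adaptation D in [0, 1].
    D = 1 reproduces the classical (complete) von Kries transform, dividing
    each cone signal by the corresponding white-point signal; D = 0 leaves
    the cone signals untouched. Intermediate D blends the two, modeling
    incomplete adaptation."""
    return tuple(c * (D / w + (1.0 - D)) for c, w in zip(lms, lms_white))

# Corresponding-colors sketch: discount an illuminant-A-like white only
# partially (D = 0.8). All cone and white-point values are hypothetical.
stim = (0.40, 0.35, 0.20)
white_a = (1.10, 1.00, 0.35)
adapted = incomplete_von_kries(stim, white_a, D=0.8)
```

A corresponding color under a second illuminant would then be obtained by applying the inverse transform with that illuminant's white point.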
Calculating osmotic pressure according to nonelectrolyte Wilson nonrandom factor model.
Li, Hui; Zhan, Tingting; Zhan, Xiancheng; Wang, Xiaolan; Tan, Xiaoying; Guo, Yiping; Li, Chengrong
2014-08-01
The osmotic pressure of NaCl solutions was determined by the air humidity in equilibrium (AHE) method. The relationship between the osmotic pressure and the concentration was explored theoretically, and the osmotic pressure was calculated according to the nonelectrolyte Wilson nonrandom factor (N-Wilson-NRF) model from the concentration. The results indicate that the calculated osmotic pressure is comparable to the measured one.
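For orientation, the ideal-solution baseline that nonideal models such as N-Wilson-NRF correct can be computed from the van't Hoff relation. This is a back-of-envelope check, not the model used in the paper:

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_osmotic_pressure(molarity, temperature_k, i=2):
    """Ideal-solution (van't Hoff) osmotic pressure in Pa:
    pi = i*c*R*T, with c in mol m^-3 and i the van't Hoff factor
    (i = 2 for fully dissociated NaCl). Nonideal models correct this
    baseline via activity coefficients."""
    c = molarity * 1000.0  # mol/L -> mol/m^3
    return i * c * R * temperature_k

# 0.15 mol/L NaCl near body temperature (hypothetical example values):
pi_pa = vant_hoff_osmotic_pressure(0.15, 310.0)   # ~7.7e5 Pa, i.e. ~7.6 atm
```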
Shell-model calculations of nuclear-charge radii
McGrory, J.B.; Brown, B.A.
1982-01-01
Shell-model calculations of charge radius differences in the Pb isotopes are discussed. Core quadrupole oscillations are found to be significant factors in the calculations. Existing data on the ²¹⁰Pb isotope shift and the B(E2) strengths in ²¹⁰Pb are shown to be inconsistent. Ground-state correlation effects in light nuclei (i.e., O and Ca isotopes) introduce odd-even staggering effects and other qualitative features in agreement with existing data.
Adherence of Model Molecules to Silica Surfaces: First Principle Calculations
NASA Astrophysics Data System (ADS)
Nuñez, Matías; Prado, Miguel Oscar
The adherence of "model molecules" methylene blue and eosin Y (positively and negatively charged, respectively) to crystalline SiO2 surfaces is studied from first-principles calculations at the DFT level. Adsorption energies are calculated which follow the experimental trends obtained elsewhere (Rivera et al., 2013). We study the quantum nature of the electronic charge transfer between the surface and the molecules, showing the localized and delocalized patterns associated with the repulsive and attractive cases, respectively.
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. Presented below is the newest, sixth version of the method. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths; the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the developed computer program is already being applied successfully in design practice.
Separated transonic airfoil flow calculations with a nonequilibrium turbulence model
NASA Technical Reports Server (NTRS)
King, L. S.; Johnson, D. A.
1985-01-01
Navier-Stokes transonic airfoil calculations based on a recently developed nonequilibrium, turbulence closure model are presented for a supercritical airfoil section at transonic cruise conditions and for a conventional airfoil section at shock-induced stall conditions. Comparisons with experimental data are presented which show that this nonequilibrium closure model performs significantly better than the popular Baldwin-Lomax and Cebeci-Smith equilibrium algebraic models when there is boundary-layer separation that results from the inviscid-viscous interactions.
Strange hadronic loops of the proton: A quark model calculation
Paul Geiger; Nathan Isgur
1996-10-01
Nontrivial qq̄ sea effects have their origin in the low-Q² dynamics of strong QCD. The authors present here a quark model calculation of the contribution of ss̄ pairs arising from a complete set of OZI-allowed strong Y*K* hadronic loops to the net spin of the proton, to its charge radius, and to its magnetic moment. The calculation is performed in an "unquenched quark model" which has been shown to preserve the spectroscopic successes of the naive quark model and to respect the OZI rule. They speculate that an extension of the calculation to the nonstrange sea will show that most of the "missing spin" of the proton is in orbital angular momenta.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other, more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
Urban, Miroslav; Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína; Hobza, Pavel
2015-01-22
We present results using the OVOS approach (Optimized Virtual Orbitals Space), aimed at enhancing the effectiveness of coupled cluster calculations. This approach makes it possible to reduce the total computer time required for large-scale CCSD(T) calculations by about a factor of ten when the original full virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations that were formerly prohibitively demanding. We focus on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the electron affinity of uracil as affected by solvation, are also briefly mentioned.
Comparison of statistical model calculations for stable isotope neutron capture
NASA Astrophysics Data System (ADS)
Beard, M.; Uberseder, E.; Crowter, R.; Wiescher, M.
2014-09-01
It is a well-observed result that different nuclear input models sensitively affect Hauser-Feshbach (HF) cross-section calculations. Less well-known, however, are the effects on calculations originating from nonmodel aspects, such as experimental data truncation and transmission function energy binning, as well as code-dependent aspects, such as the definition of level-density matching energy and the inclusion of shell correction terms in the level-density parameter. To investigate these aspects, Maxwellian-averaged neutron capture cross sections (MACS) at 30 keV have been calculated using the well-established statistical Hauser-Feshbach model codes TALYS and NON-SMOKER for approximately 340 nuclei. For the same nuclei, MACS predictions have also been obtained using two new HF codes, CIGAR and SAPPHIRE. Details of these two codes, which have been developed to contain an overlapping set of identically implemented nuclear physics input models, are presented. It is generally accepted that HF calculations are valid to within a factor of 3. It was found that this factor is dependent on both model and nonmodel details, such as the coarseness of the transmission function energy binning and data truncation, as well as variances in details regarding the implementation of level-density parameter, backshift, matching energy, and giant dipole strength function parameters.
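The quantity being compared, a Maxwellian-averaged cross section, can be sketched numerically as below. For a 1/v cross section the average at kT must reduce to σ(kT) exactly, which makes a convenient self-check. This is a toy quadrature in arbitrary units, not the scheme used in any of the four codes:

```python
import math

def macs(sigma, kT, n=20000, emax_factor=40.0):
    """Maxwellian-averaged cross section,
       MACS(kT) = (2/sqrt(pi)) * (1/(kT)^2) * integral of sigma(E)*E*exp(-E/kT) dE,
    evaluated by the composite trapezoidal rule on [0, emax_factor*kT]."""
    emax = emax_factor * kT
    h = emax / n
    total = 0.0
    for i in range(n + 1):
        E = i * h
        f = sigma(E) * E * math.exp(-E / kT) if E > 0 else 0.0
        total += f if 0 < i < n else 0.5 * f   # trapezoid end-point weights
    return (2.0 / math.sqrt(math.pi)) * total * h / (kT * kT)

# A 1/v cross section: MACS(kT) reduces analytically to sigma(kT).
sigma_1v = lambda E: 1.0 / math.sqrt(E) if E > 0 else 0.0
m = macs(sigma_1v, kT=0.030)   # "30 keV" if E is taken in MeV (units arbitrary)
```

A constant cross section gives the other classic limit, MACS = (2/√π)·σ.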
New calculations in Dirac gaugino models: operators, expansions, and effects
NASA Astrophysics Data System (ADS)
Carpenter, Linda M.; Goodman, Jessica
2015-07-01
In this work we calculate important one-loop SUSY-breaking parameters in models with Dirac gauginos, which are implied by the existence of heavy messenger fields. We find that these SUSY-breaking effects are all related by a small number of parameters, and thus the general theory is tightly predictive. In order to make the most accurate analyses of one-loop effects, we introduce calculations using an expansion in the SUSY-breaking messenger mass, rather than relying on postulating the forms of effective operators. We use this expansion to calculate one-loop contributions to gaugino masses, non-holomorphic SM adjoint masses, new A-like and B-like terms, and linear terms. We also test the Higgs potential in such models, and calculate one-loop contributions to the Higgs mass in certain limits of R-symmetric models, finding a very large contribution in many regions of the parameter space where Higgs fields couple to standard model adjoint fields.
Preliminary Modulus and Breakage Calculations on Cellulose Models
Technology Transfer Automated Retrieval System (TEKTRAN)
The Young’s modulus of polymers can be calculated by stretching molecular models with the computer. The molecule is stretched and the derivative of the changes in stored potential energy for several displacements, divided by the molecular cross-section area, is the stress. The modulus is the slope o...
The role of hand calculations in ground water flow modeling.
Haitjema, Henk
2006-01-01
Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time to learn about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoid unnecessary complexity, improve reliability, and save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
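Two of the hand calculations mentioned above are essentially one-liners. The formulas are standard; the numerical values here are hypothetical:

```python
import math

def leakage_length(k, H, c):
    """Characteristic leakage length lambda = sqrt(k*H*c) [m], with k the
    aquifer hydraulic conductivity (m/d), H the aquifer thickness (m), and
    c the resistance of the leaky layer (d). Surface water features much
    farther than a few lambda from a well interact only weakly with it."""
    return math.sqrt(k * H * c)

def capture_zone_width(Q, q0):
    """Asymptotic (far-upstream) width W = Q/q0 [m] of the capture zone of a
    well pumping Q (m^3/d) in uniform ambient flow with discharge q0 per
    unit width (m^2/d) -- a quick water-balance check on computer-generated
    capture zones."""
    return Q / q0

lam = leakage_length(k=10.0, H=20.0, c=500.0)   # ~316 m
W = capture_zone_width(Q=500.0, q0=0.25)        # 2000 m
```

If a model's delineated capture zone is much wider or narrower than Q/q0, the ambient flow field or the pumping rate deserves a second look before trusting the contours.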
Model calculations for diffuse molecular clouds. [interstellar hydrogen cloud model
NASA Technical Reports Server (NTRS)
Glassgold, A. E.; Langer, W. D.
1974-01-01
A steady state isobaric cloud model is developed. The pressure, thermal, electrical, and chemical balance equations are solved simultaneously with a simple one-dimensional approximation to the equation of radiative transfer appropriate to diffuse clouds. Cooling is mainly by CII fine structure transitions, and a variety of heating mechanisms are considered. Particular attention is given to the abundance variation of H2. Inhomogeneous density distributions are obtained because of the attenuation of the interstellar UV field and the conversion from atomic to molecular hydrogen. The effects of changing the model parameters are described and the applicability of the model to OAO-3 observations is discussed. Good qualitative agreement with the fractional H2 abundance determinations has been obtained. The observed kinetic temperatures near 80 K can also be achieved by grain photoelectron heating. The problem of the electron density is solved taking special account of the various hydrogen ions as well as heavier ones.
Calculations of multiquark functions in effective models of strong interaction
Jafarov, R. G.; Rochev, V. E.
2013-09-15
In this paper we present our results of the investigation of multiquark equations in the Nambu-Jona-Lasinio model with SU(2) chiral symmetry in the mean-field expansion. To formulate the mean-field expansion we have used an iteration scheme for the solution of the Schwinger-Dyson equations with a fermion bilocal source. We have considered the equations for the Green functions of the Nambu-Jona-Lasinio model up to the third step of this iteration scheme. To calculate the higher-order corrections to the mean-field approximation, we propose the method of the Legendre transformation with respect to the bilocal source, which allows the symmetry constraints related to the chiral Ward identity to be taken into account effectively. We also discuss the problem of calculating the multiquark functions in the mean-field expansion for Nambu-Jona-Lasinio-type models with other types of multifermion sources.
Feasibility of supersonic diode pumped alkali lasers: Model calculations
Barmashenko, B. D.; Rosenwaks, S.
2013-04-08
The feasibility of supersonic operation of diode pumped alkali lasers (DPALs) is studied for Cs and K atoms applying model calculations, based on a semi-analytical model previously used for studying static and subsonic flow DPALs. The operation of supersonic lasers is compared with that measured and modeled in subsonic lasers. The maximum power of supersonic Cs and K lasers is found to be higher than that of subsonic lasers with the same resonator and alkali density at the laser inlet by 25% and 70%, respectively. These results indicate that for scaling-up the power of DPALs, supersonic expansion should be considered.
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
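The core reduction idea, projection onto an active subspace with a quantifiable error bound, can be illustrated with a POD/SVD sketch. The tolerance-based truncation below conveys the flavor of error-bounded ROM but is not the dissertation's algorithm:

```python
import numpy as np

def pod_basis(snapshots, tol):
    """Proper orthogonal decomposition basis, truncated so that the discarded
    singular values bound the relative (Frobenius-norm) projection error by
    a user-defined tolerance: sum of discarded s_k^2 <= tol^2 * sum of all s_k^2."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    total = np.sum(s**2)
    err2 = 0.0
    r = len(s)
    for k in range(len(s) - 1, -1, -1):       # discard smallest modes first
        if err2 + s[k]**2 > (tol**2) * total:
            break
        err2 += s[k]**2
        r = k
    return U[:, :r]

rng = np.random.default_rng(0)
# synthetic snapshot matrix with intrinsic rank 3 (100 state dims, 50 snapshots)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 50))
B = pod_basis(X, tol=1e-8)
# projecting onto the basis reproduces the snapshots to within the tolerance
err = np.linalg.norm(X - B @ (B.T @ X)) / np.linalg.norm(X)
```

Confining subsequent evaluations to the span of B is what makes the surrogate cheap; the singular-value tail is what makes its error quantifiable.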
Revised method for calculating cloud densities in equilibrium models
NASA Astrophysics Data System (ADS)
Wong, M. H.; Atreya, S. K.; Kuhn, W. R.
2013-12-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are simple but still useful for several reasons. They calculate the wet adiabatic lapse rate, they determine saturation-limited mixing ratios of condensing species, and they calculate the stabilizing effect of latent heat release and molecular weight stratification. Equilibrium cloud condensation models (ECCMs) also calculate a type of condensate density, a condensate "unit density", that only equates to cloud density under specific circumstances, because microphysics and dynamics are not considered in ECCMs. Unit densities are calculated for every model altitude by requiring that condensed material remains at the level where it condenses. Many ECCMs in use trace their heritage to Weidenschilling and Lewis (1973; Icarus 20, 465-476; hereafter WL73), which contains an error that affects only the calculation of condensate unit density. The error led to densities too high by a factor of the atmospheric scale height divided by unit length, which is about 3x10^6 at Jupiter's ammonia cloud level. We will describe the condensate unit density calculation error in WL73, and provide a new algorithm based on the local change in vapor mixing ratio, rather than the difference between integrated column masses as in WL73. The new algorithm satisfies conservation of mass. Using a simple scaling law to parameterize dynamics in terms of updraft speed and duration, condensate unit densities from ECCMs can be converted to cloud densities. We validate the technique for the terrestrial case, by comparing model predictions with representative densities of cirrus and cumulus clouds. For cirrus and cumulus updraft parameters, respectively, we find cloud densities of 0.01-0.2 g m^-3 and 0.8-7 g m^-3, in excellent agreement with observations and models of terrestrial clouds of these types. Implications for models of planetary and exoplanetary atmospheres will be discussed. [This material is based upon
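The "local change in vapor mixing ratio" idea can be illustrated with a deliberately simplified terrestrial example: the condensate released per unit volume as a saturated parcel moves between adjacent model levels. The published algorithm and its constants differ in detail, and the numbers below are only illustrative:

```python
import math

def sat_mixing_ratio(T, p, L=2.5e6, Rv=461.5, es0=611.0, T0=273.15, eps=0.622):
    """Saturation mass mixing ratio from a Clausius-Clapeyron fit
    (terrestrial water values; SI units throughout)."""
    es = es0 * math.exp(L / Rv * (1.0 / T0 - 1.0 / T))
    return eps * es / (p - es)

def condensate_per_layer(T1, p1, T2, p2, rho_air):
    """Condensate mass per unit volume (kg m^-3) released when a saturated
    parcel moves from level 1 to level 2: rho_air times the local decrease
    in saturation mixing ratio. A simplified reading of the corrected
    unit-density idea, not the paper's exact algorithm."""
    dq = sat_mixing_ratio(T1, p1) - sat_mixing_ratio(T2, p2)
    return max(dq, 0.0) * rho_air

# warm cumulus-like layer: ~1 K of cooling near 290 K, air density ~1 kg/m^3
rho_c = condensate_per_layer(290.0, 9.0e4, 289.0, 8.9e4, rho_air=1.05)
```

This toy case yields roughly 0.8 g m⁻³, of the same order as the cumulus densities quoted in the abstract.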
Free energy calculations for a flexible water model.
Habershon, Scott; Manolopoulos, David E
2011-11-28
In this work, we consider the problem of calculating the classical free energies of liquids and solids for molecular models with intramolecular flexibility. We show that thermodynamic integration from the fully-interacting solid of interest to a Debye crystal reference state, with anisotropic harmonic interactions derived from the Hessian of the original crystal, provides a straightforward route to calculating the Gibbs free energy of the solid. To calculate the molecular liquid free energy, it is essential to correctly account for contributions from both intermolecular and intramolecular motion; we employ thermodynamic integration to a Lennard-Jones reference fluid, coupled with direct evaluation of the molecular ro-vibrational partition function. These approaches are used to study the low-pressure classical phase diagram of the flexible q-TIP4P/F water model. We find that, while the experimental ice-I/liquid and ice-III/liquid coexistence lines are described reasonably well by this model, the ice-II phase is predicted to be metastable. In light of this finding, we go on to examine how the coupling between intramolecular flexibility and intermolecular interactions influences the computed phase diagram by comparing our results with those of the underlying rigid-body water model. PMID:21887423
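The thermodynamic-integration step can be illustrated on a toy problem where the ensemble average is known in closed form. In real calculations the average comes from molecular dynamics sampling and the reference is the Debye crystal or Lennard-Jones fluid, not a single oscillator:

```python
import math

def ti_delta_F(k0, k1, kT=1.0, n=2000):
    """Classical thermodynamic integration between two 1-D harmonic potentials
    U_i = k_i x^2 / 2 along U_lambda = (1-lambda) U_0 + lambda U_1, using the
    exact ensemble average <dU/dlambda> = (k1 - k0)/2 * <x^2> with
    <x^2> = kT / k_lambda, integrated by the trapezoidal rule. Exact answer:
    Delta F = (kT/2) ln(k1/k0)."""
    total = 0.0
    h = 1.0 / n
    for i in range(n + 1):
        lam = i * h
        k_lam = (1.0 - lam) * k0 + lam * k1
        integrand = 0.5 * (k1 - k0) * kT / k_lam
        total += integrand if 0 < i < n else 0.5 * integrand
    return total * h

dF = ti_delta_F(1.0, 4.0)   # exact answer: 0.5*ln(4) ~ 0.6931
```

Replacing the analytic average with a sampled one, and the oscillator with the Debye crystal or Lennard-Jones reference, recovers the structure of the free-energy calculations described above.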
Synthetic vision and emotion calculation in intelligent virtual human modeling.
Zhao, Y; Kang, J; Wright, D K
2007-01-01
The virtual human technique can already provide vivid and believable human behaviour in more and more scenarios. Virtual humans are expected to replace real humans in hazardous situations to undertake tests and feed back valuable information. This paper introduces a virtual human with a novel collision-based synthetic vision, a short-term memory model, and the capability to implement emotion calculation and decision making. The virtual character based on this model can 'see' what is in its field of view (FOV) and remember those objects. A group of affective computing equations is then introduced; these equations are implemented in a proposed emotion calculation process to generate emotions for intelligent virtual humans. PMID:17487108
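A minimal angle-and-range FOV test of the kind such vision models start from might look like this. The paper's collision-based vision additionally handles occlusion and short-term memory, which this sketch omits, and all parameter values are hypothetical:

```python
import math

def in_fov(eye, forward, target, fov_deg=120.0, max_dist=50.0):
    """Crude synthetic-vision test: is `target` inside the viewing cone of a
    character at `eye` looking along `forward`? Angle-plus-range test only;
    no occlusion (collision) check."""
    dx = [t - e for t, e in zip(target, eye)]
    dist = math.sqrt(sum(c * c for c in dx))
    if dist == 0.0 or dist > max_dist:
        return dist == 0.0          # coincident point counts as seen
    fnorm = math.sqrt(sum(c * c for c in forward))
    cos_angle = sum(a * b for a, b in zip(dx, forward)) / (dist * fnorm)
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

seen = in_fov(eye=(0, 0, 0), forward=(1, 0, 0), target=(10, 3, 0))
```

Objects passing this test would then be ray-tested against scene geometry for occlusion before being written into the short-term memory model.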
ILNCSIM: improved lncRNA functional similarity calculation model
You, Zhu-Hong; Huang, De-Shuang; Chan, Keith C.C.
2016-01-01
Increasing observations have indicated that lncRNAs play a significant role in various critical biological processes and in the development and progression of various human diseases. Constructing lncRNA functional similarity networks could benefit the development of computational models for inferring lncRNA functions and identifying lncRNA-disease associations. However, little effort has been devoted to quantifying lncRNA functional similarity. In this study, we developed an Improved LNCRNA functional SIMilarity calculation model (ILNCSIM) based on the assumption that lncRNAs with similar biological functions tend to be involved in similar diseases. The main improvement comes from combining the concept of information content with the hierarchical structure of disease directed acyclic graphs for disease similarity calculation. ILNCSIM was combined with the previously proposed model of Laplacian Regularized Least Squares for lncRNA-Disease Association to further evaluate its performance. As a result, the new model obtained reliable performance in leave-one-out cross validation (AUCs of 0.9316 and 0.9074 based on the MNDR and Lnc2cancer databases, respectively) and 5-fold cross validation (AUCs of 0.9221 and 0.9033 for the MNDR and Lnc2cancer databases), significantly improving on the prediction performance of previous models. It is anticipated that ILNCSIM could serve as an effective lncRNA function prediction model for future biomedical research. PMID:27028993
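An information-content similarity on a disease DAG, the building block ILNCSIM improves upon, can be sketched with a Lin-style score. This is a generic scheme in the same spirit, not ILNCSIM's exact formula, and the toy DAG and IC values are invented:

```python
def ancestors(dag, node):
    """All ancestors of `node` (including itself) in a child -> parents DAG."""
    seen = {node}
    stack = [node]
    while stack:
        for parent in dag.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def lin_similarity(dag, ic, a, b):
    """Lin-style semantic similarity between two disease terms:
    2*IC(most informative common ancestor) / (IC(a) + IC(b)),
    where IC is the information content of a term (e.g. -log of its
    annotation frequency)."""
    common = ancestors(dag, a) & ancestors(dag, b)
    if not common:
        return 0.0
    mica = max(ic[t] for t in common)
    return 2.0 * mica / (ic[a] + ic[b])

# toy DAG and hypothetical IC values
dag = {"melanoma": ["skin cancer"], "basal cell carcinoma": ["skin cancer"],
       "skin cancer": ["cancer"], "cancer": []}
ic = {"cancer": 0.5, "skin cancer": 2.0,
      "melanoma": 4.0, "basal cell carcinoma": 3.5}
s = lin_similarity(dag, ic, "melanoma", "basal cell carcinoma")
```

Aggregating such term-term scores over the disease sets of two lncRNAs yields an lncRNA-lncRNA functional similarity.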
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as in trials for non-Hodgkin lymphoma. The widely used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for the survival times of uncured patients and a logistic regression model is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in short-term survival and/or the cure fraction. Furthermore, we also investigate, through numerical examples, the impacts of accrual methods and the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. PMID:22786805
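The no-cure PH baseline that the paper generalizes is the standard Schoenfeld events formula, computable with the standard-library normal quantile. Treat this as the baseline only; the paper's cure-model formula modifies it:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, p=0.5):
    """Required number of events under the standard (no-cure) PH model:
      D = (z_{1-alpha/2} + z_{power})^2 / (p*(1-p)*(ln hr)^2),
    with hr the target hazard ratio and p the allocation fraction. As the
    abstract notes, applying this with a cure fraction present can under-
    or over-size the trial."""
    z = NormalDist().inv_cdf
    return (z(1.0 - alpha / 2.0) + z(power)) ** 2 / (
        p * (1.0 - p) * math.log(hr) ** 2
    )

events = schoenfeld_events(hr=0.67)   # ~196 events for a 33% hazard reduction
```

Dividing the required events by the anticipated event probability (which a cure fraction lowers) converts this into a number of patients, which is exactly where ignoring the cure rate distorts the design.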
IBAR: Interacting boson model calculations for large system sizes
NASA Astrophysics Data System (ADS)
Casperson, R. J.
2012-04-01
Scaling the system size of the interacting boson model-1 (IBM-1) into the realm of hundreds of bosons has many interesting applications in the field of nuclear structure, most notably quantum phase transitions in nuclei. We introduce IBAR, a new software package for calculating the eigenvalues and eigenvectors of the IBM-1 Hamiltonian, for large numbers of bosons. Energies and wavefunctions of the nuclear states, as well as transition strengths between them, are calculated using these values. Numerical errors in the recursive calculation of reduced matrix elements of the d-boson creation operator are reduced by using an arbitrary precision mathematical library. This software has been tested for up to 1000 bosons using comparisons to analytic expressions. Comparisons have also been made to the code PHINT for smaller system sizes. Catalogue identifier: AELI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 28 734 No. of bytes in distributed program, including test data, etc.: 4 104 467 Distribution format: tar.gz Programming language: C++ Computer: Any computer system with a C++ compiler Operating system: Tested under Linux RAM: 150 MB for 1000 boson calculations with angular momenta of up to L=4 Classification: 17.18, 17.20 External routines: ARPACK (http://www.caam.rice.edu/software/ARPACK/) Nature of problem: Construction and diagonalization of large Hamiltonian matrices, using reduced matrix elements of the d-boson creation operator. Solution method: Reduced matrix elements of the d-boson creation operator have been stored in data files at machine precision, after being recursively calculated with higher than machine precision. The Hamiltonian matrix is calculated and diagonalized, and the requested transition strengths are calculated
Infrared lens thermal effect: equivalent focal shift and calculating model
NASA Astrophysics Data System (ADS)
Zhang, Cheng-shuo; Shi, Zelin; Feng, Bin; Xu, Bao-shu
2014-11-01
It is well known that the focal shift of an infrared lens is the major factor degrading imaging quality when the temperature changes. To determine the relationship between temperature change and focal shift, we first derive partial differential equations for the thermal effect on the light path using a ray-trace method. The approximate solution of these PDEs shows that the focal shift is proportional to the temperature change, and a formula for computing the proportionality factor is given. To gain deeper insight into the thermal effect on infrared lenses, we represent it equivalently as a defocus produced by an image-plane shift at constant temperature; on this basis the equivalent focal shift (EFS) is defined and a model for calculating it is proposed. To verify the EFS and its calculation model, a physical experimental platform was built, comprising a motorized linear stage with built-in controller, a blackbody, a target, a collimator, an IR detector, a computer, and other devices. The experimental results indicate that the EFS, i.e. an image-plane shift at constant temperature, has the same influence on the infrared lens as the thermal effect, and that the calculation model is correct.
Freeway travel speed calculation model based on ETC transaction data.
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
The real-time operating condition of freeway traffic is gradually becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzes the structure of ETC transaction data and presents the data preprocessing procedure. Then, a dual-level travel speed calculation model is established for different sample sizes. To ensure a sufficient sample size, ETC data from entry-exit toll plaza pairs spanning more than one road segment are used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speed are introduced in the model. Finally, the model is verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrate an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model helps improve freeway operation monitoring and freeway management, and provides useful information for freeway travelers. PMID:25580107
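The abstract does not specify how the reduction coefficient α enters the dual-level model; a minimal sketch of the underlying toll-to-toll speed estimate, with α as a hypothetical multiplicative correction for plaza dwell time, might look like this:

```python
from datetime import datetime

def segment_speed(records, distance_km, alpha=0.95):
    """Estimate mean travel speed (km/h) for a toll-plaza pair from ETC
    transactions. `records` is a list of (entry_time, exit_time) pairs;
    `alpha` is a hypothetical reduction coefficient standing in for the
    paper's (unspecified) correction for time spent at the plazas."""
    speeds = []
    for t_in, t_out in records:
        hours = (t_out - t_in).total_seconds() / 3600.0
        if hours > 0:  # discard malformed transactions
            speeds.append(alpha * distance_km / hours)
    return sum(speeds) / len(speeds) if speeds else None
```

A vehicle covering a 50 km plaza pair in 30 minutes yields a raw speed of 100 km/h, reduced to 95 km/h by the illustrative α above.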
A PROPOSED BENCHMARK PROBLEM FOR SCATTER CALCULATIONS IN RADIOGRAPHIC MODELLING
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-03
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is simple enough to be handled by various codes without strong requirements on geometry representation capabilities; focuses on few, or even a single, aspect of the problem at hand, to ease interpretation and to keep compounded errors from compensating one another; yields a quantitative result; and is experimentally accessible. In this paper we address code validation for one aspect of radiographic modelling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, so the scatter calculation is important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows comparison with reference codes such as MCNP. We compare the results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insight into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
Atmospheric neutrino flux calculation using the NRLMSISE-00 atmospheric model
NASA Astrophysics Data System (ADS)
Honda, M.; Athar, M. Sajjad; Kajita, T.; Kasahara, K.; Midorikawa, S.
2015-07-01
We extend our calculation of the atmospheric neutrino fluxes to the polar and tropical regions. It is well known that the air density profiles in the polar and tropical regions differ from those in the mid-latitude region, and there are also large seasonal variations in the polar region. In this extension, we use the NRLMSISE-00 global atmospheric model [J. M. Picone, J. Geophys. Res. 107, SIA 15 (2002)], replacing the U.S. Standard Atmosphere 1976 model, which has no positional or seasonal variations. With the NRLMSISE-00 atmospheric model, we study the atmospheric neutrino flux in the polar and tropical regions, including seasonal variations. The geomagnetic model used in our calculations, the International Geomagnetic Reference Field (IGRF), appears to be sufficiently accurate in the polar regions as well. The polar and equatorial regions, however, are the two extremes of the IGRF model, and their magnetic field configurations differ greatly from each other; note that the equatorial region generally coincides with the tropical region. We study the effect of the geomagnetic field on the atmospheric neutrino flux in these extreme regions.
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation causing large errors in the system matrices for nearly cylindrical cones.
Hydrothermal hydration of Martian crust: illustration via geochemical model calculations.
Griffith, L L; Shock, E L
1997-04-25
If hydrothermal systems existed on Mars, hydration of crustal rocks may have had the potential to affect the water budget of the planet. We have conducted geochemical model calculations to investigate the relative roles of host rock composition, temperature, water-to-rock ratio, and initial fluid oxygen fugacity on the mineralogy of hydrothermal alteration assemblages, as well as the effectiveness of alteration to store water in the crust as hydrous minerals. In order to place calculations for Mars in perspective, models of hydrothermal alteration of three genetically related Icelandic volcanics (a basalt, andesite, and rhyolite) are presented, together with results for compositions based on SNC meteorite samples (Shergotty and Chassigny). Temperatures from 150 degrees C to 250 degrees C, water-to-rock ratios from 0.1 to 1000, and two initial fluid oxygen fugacities are considered in the models. Model results for water-to-rock ratios less than 10 are emphasized because they are likely to be more applicable to Mars. In accord with studies of low-grade alteration of terrestrial rocks, we find that the major controls on hydrous mineral production are host rock composition and temperature. Over the range of conditions considered, the alteration of Shergotty shows the greatest potential for storing water as hydrous minerals, and the alteration of Icelandic rhyolite has the lowest potential. PMID:11541456
Hydrothermal hydration of Martian crust: illustration via geochemical model calculations
NASA Technical Reports Server (NTRS)
Griffith, L. L.; Shock, E. L.
1997-01-01
If hydrothermal systems existed on Mars, hydration of crustal rocks may have had the potential to affect the water budget of the planet. We have conducted geochemical model calculations to investigate the relative roles of host rock composition, temperature, water-to-rock ratio, and initial fluid oxygen fugacity on the mineralogy of hydrothermal alteration assemblages, as well as the effectiveness of alteration to store water in the crust as hydrous minerals. In order to place calculations for Mars in perspective, models of hydrothermal alteration of three genetically related Icelandic volcanics (a basalt, andesite, and rhyolite) are presented, together with results for compositions based on SNC meteorite samples (Shergotty and Chassigny). Temperatures from 150 degrees C to 250 degrees C, water-to-rock ratios from 0.1 to 1000, and two initial fluid oxygen fugacities are considered in the models. Model results for water-to-rock ratios less than 10 are emphasized because they are likely to be more applicable to Mars. In accord with studies of low-grade alteration of terrestrial rocks, we find that the major controls on hydrous mineral production are host rock composition and temperature. Over the range of conditions considered, the alteration of Shergotty shows the greatest potential for storing water as hydrous minerals, and the alteration of Icelandic rhyolite has the lowest potential.
A simplified analytical random walk model for proton dose calculation
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
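The "distance-squared law" mentioned above is characteristic of Fermi-Eyges-type transport, where the lateral variance at depth z accumulates angular scattering power from shallower depths weighted by the squared lever arm. A toy numerical sketch of that relation (the scattering-power function is an illustrative stand-in, not the paper's model):

```python
def lateral_variance(z, scattering_power, n=1000):
    """Numerically integrate sigma^2(z) = int_0^z (z - z')^2 T(z') dz'
    by the midpoint rule. T(z') is the angular scattering power in
    rad^2/cm; z and z' are depths in cm."""
    h = z / n
    total = 0.0
    for i in range(n):
        zp = (i + 0.5) * h  # cell midpoint
        total += (z - zp) ** 2 * scattering_power(zp) * h
    return total
```

For constant T the integral reduces to T z^3 / 3, which provides a quick check of the quadrature.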
A simple model of throughput calculation for single screw
NASA Astrophysics Data System (ADS)
Béreaux, Yves; Charmeau, Jean-Yves; Moguedet, Maël
2007-04-01
Being able to predict the throughput of a single-screw extruder, or the metering time of an injection moulding machine, for a given screw geometry, set of processing conditions, and polymeric material is important for both practical and design purposes. Our simple model shows that the screw geometry is the most important parameter, followed by polymer rheology and processing conditions; melting properties and melting length appear to intervene to a lesser extent. The calculation hinges on viewing the entire screw as a pump conveying a solid and a molten fraction. The evolution of the solid fraction is the essence of the plastication process, but under particular circumstances its influence on the throughput is nil. This allows us to obtain a very good estimate of the throughput and of the pressure development along the screw. Our calculations are compared with different sets of experiments available in the literature, showing consistent agreement in both throughput and pressure with published data.
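The "screw as a pump" view reduces, in the simplest Newtonian metering-zone treatment found in extrusion textbooks, to drag flow minus pressure flow. The sketch below shows that textbook baseline only; the paper's model additionally tracks the solid fraction along the screw, and the parameter values in the check are illustrative.

```python
def metering_throughput(W, H, V, mu, dPdz):
    """Textbook Newtonian estimate for the metering zone of a single
    screw: volumetric throughput = drag flow - pressure flow.
    W: channel width (m), H: channel depth (m), V: barrel velocity
    along the channel (m/s), mu: melt viscosity (Pa.s),
    dPdz: down-channel pressure gradient (Pa/m). Returns m^3/s."""
    drag = W * H * V / 2.0
    pressure = W * H ** 3 * dPdz / (12.0 * mu)
    return drag - pressure
```

With a zero pressure gradient the throughput is pure drag flow, W*H*V/2; a positive down-channel pressure gradient (pumping against back pressure) reduces it.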
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented that is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied both to the integration of the hydrostatic equation and to the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important topics in photogrammetry; it aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that combines the RANSAC method with the direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values that arises when the collinearity equations are used. The results also show that our strategy can reject gross errors and leads to an accurate and efficient way of obtaining the exterior orientation elements.
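The RANSAC strategy applied here to space resection can be sketched generically: fit a model to a random minimal sample, count inliers within a threshold, and keep the largest consensus set. The DLT resection fit itself (recovering the projection matrix, typically via an SVD) is omitted for brevity; a simple line model stands in so the loop is runnable.

```python
import random

def ransac(data, fit, residual, min_samples, threshold, iters=200, seed=0):
    """Generic RANSAC loop: fit on a random minimal sample, count
    inliers, keep the best consensus set, then refit on it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit(rng.sample(data, min_samples))
        if model is None:  # degenerate sample, skip
            continue
        inliers = [d for d in data if residual(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit(best_inliers), best_inliers

# Toy stand-in model: 2D line y = a*x + b fit by least squares.
def fit_line(pts):
    n = len(pts)
    if n < 2:
        return None
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    den = n * sxx - sx * sx
    if den == 0:  # all x equal: degenerate for this parameterization
        return None
    a = (n * sxy - sx * sy) / den
    return a, (sy - a * sx) / n

def line_residual(model, pt):
    a, b = model
    return abs(pt[1] - (a * pt[0] + b))
```

Swapping `fit_line`/`line_residual` for a DLT fit and a reprojection residual gives the resection variant the paper describes.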
Analytical Jacobian Calculation in RT Model Including Polarization Effect
NASA Astrophysics Data System (ADS)
Okabayashi, Y.; Yoshida, Y.; Ota, Y.
2014-12-01
The greenhouse gas observing satellite "GOSAT", launched in January 2009, has been observing the global distribution of CO2 and CH4. The TANSO-FTS mounted on GOSAT measures the two polarized components (called "P" and "S") of the short wavelength infrared (SWIR) spectrum reflected from the earth's surface. In NIES, the column-averaged dry air mole fractions of CO2 and CH4 (XCO2 and XCH4) are retrieved from the SWIR spectra. However, the observed polarization information is not effectively utilized in the retrieval process owing to the large computational cost of a vector RT model; instead, polarization-synthesized spectra and a scalar RT model are used in the operational processing. Optical path length modification due to aerosol scattering is known to be the major error source for XCO2 and XCH4 retrieval from SWIR spectra. Because aerosol scattering changes the polarization state of light, using the observed polarization spectra effectively in the retrieval process is expected to provide more accurate or additional aerosol information and thereby improve the retrieval accuracy of XCO2 and XCH4. In addition, the Jacobian matrix is important for retrieval algorithm design, information content analysis, sensitivity analysis, and error analysis before actual observed data are processed. However, when an RT model including polarization effects is used in the retrieval process, the computational cost of the Jacobian matrix calculation in maximum a posteriori retrieval is significantly large, so efficient calculation of the analytical Jacobian is necessary. As a first step, we are implementing an analytical Jacobian calculation function in the vector RT model "Pstar". The RT scheme of Pstar is based on a hybrid method comprising the discrete ordinate and matrix operator methods: the reflection/transmission matrices and source vectors are obtained for each vertical layer through the discrete ordinate solution, and the vertically inhomogeneous system is constructed using the matrix operator method. Because the delta
Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose
NASA Technical Reports Server (NTRS)
Welton, Andrew; Lee, Kerry
2010-01-01
While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than on the ground. It is important to model pre-flight how spacecraft shielding designs reduce the radiation effective dose, and to determine whether a danger to humans is present. Calculating effective dose, however, requires dose equivalent calculations: dose equivalent takes into account both the absorbed dose of radiation and the biological effectiveness of the ionizing radiation. This is important for preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for the relevant shielding. The shielding geometry used in the dose calculations is a layered slab design consisting of aluminum, polyethylene, and water, where water simulates the soft tissues that compose the human body. The results will show how the shielding performs with various thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.
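The quantities the abstract distinguishes are related simply: dose equivalent weights absorbed dose by a quality factor for the radiation type, and effective dose is a tissue-weighted sum of dose equivalents. A minimal sketch (the numerical weights in the check are illustrative ICRP-style values, not those used in the study):

```python
def dose_equivalent(absorbed_dose_gy, quality_factor):
    """Dose equivalent (Sv) = absorbed dose (Gy) x quality factor,
    the radiation-type weighting for biological effectiveness."""
    return absorbed_dose_gy * quality_factor

def effective_dose(tissue_doses):
    """Effective dose (Sv) as a tissue-weighted sum of dose
    equivalents; `tissue_doses` is a list of (weight, H_Sv) pairs."""
    return sum(w * h for w, h in tissue_doses)
```

For example, 10 mGy of a high-LET component with quality factor 20 contributes 0.2 Sv of dose equivalent before tissue weighting.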
Stealth Dark Matter: Model, lattice calculations, and constraints
NASA Astrophysics Data System (ADS)
Schaich, David; Lattice Strong Dynamics Collaboration
2016-03-01
A new strongly coupled dark sector can produce a well-motivated and phenomenologically interesting composite dark matter candidate. I will review a model recently proposed by the Lattice Strong Dynamics Collaboration in which the composite dark matter is naturally ``stealthy'': although its constituents are charged, the composite particle itself is electroweak neutral, with vanishing magnetic moment and charge radius. This results in an extraordinarily small direct detection cross section, dominated by the dimension-7 electromagnetic polarizability interaction. I will present direct detection constraints on the model that rely on our non-perturbative lattice calculations of the polarizability, as well as complementary constraints from collider experiments. Collider bounds require the stealth dark matter mass to be m > 300 GeV, while its cross section for spin-independent scattering with xenon is smaller than the coherent neutrino scattering background for m > 700 GeV.
Calculations of hot gas ingestion for a STOVL aircraft model
NASA Technical Reports Server (NTRS)
Fricker, David M.; Holdeman, James D.; Vanka, Surya P.
1992-01-01
Hot gas ingestion problems for Short Take-Off, Vertical Landing (STOVL) aircraft are typically approached with empirical methods and experience. In this study, the hot gas environment around a STOVL aircraft was modeled as multiple jets in crossflow with inlet suction. The flow field was calculated with a Navier-Stokes, Reynolds-averaged, turbulent, 3D computational fluid dynamics code using a multigrid technique. A simple model of a STOVL aircraft with four choked jets at 1000 K was studied at various heights, headwind speeds, and thrust splay angles in a modest parametric study. Scientific visualization of the computed flow field shows a pair of vortices in front of the inlet. This and other qualitative aspects of the flow field agree well with experimental data.
Folding model calculations for 6He+12C elastic scattering
NASA Astrophysics Data System (ADS)
Awad, A. Ibraheem
2016-03-01
In the framework of the double folding model, we used the α+2n and di-triton configurations for the nuclear matter density of the 6He nucleus to generate the real part of the optical potential for the system 6He+12C. As an alternative, we also use the high energy approximation to generate the optical potential for the same system. The derived potentials are employed to analyze the elastic scattering differential cross section at energies of 38.3, 41.6 and 82.3 MeV/u. For the imaginary part of the potential we adopt the squared Woods-Saxon form. The obtained results are compared with the corresponding measured data as well as with available results in the literature. The calculated total reaction cross sections are investigated and compared with the optical limit Glauber model description.
High-energy photoelectron diffraction: model calculations and future possibilities
NASA Astrophysics Data System (ADS)
Winkelmann, Aimo; Fadley, Charles S.; Garcia de Abajo, F. Javier
2008-11-01
We discuss the theoretical modeling of x-ray photoelectron diffraction (XPD) with hard x-ray excitation at up to 20 keV, using the dynamical theory of electron diffraction to illustrate the characteristic aspects of the diffraction patterns resulting from such localized emission sources in a multilayer crystal. We show via dynamical calculations for diamond, Si and Fe that the dynamical theory predicts well the available current data for lower energies around 1 keV, and that the patterns for energies above about 1 keV are dominated by Kikuchi bands, which are created by the dynamical scattering of electrons from lattice planes. The origin of the fine structure in such bands is discussed from the point of view of atomic positions in the unit cell. The profiles and positions of the element-specific photoelectron Kikuchi bands are found to be sensitive to lattice distortions (e.g. a 1% tetragonal distortion) and the position of impurities or dopants with respect to lattice sites. We also compare the dynamical calculations with results from a cluster model that is more often used to describe lower energy XPD. We conclude that hard XPD (HXPD) should be capable of providing unique bulk-sensitive structural information for a wide variety of complex materials in future experiments.
Recent Developments in No-Core Shell-Model Calculations
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review we highlight, in particular, results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.
Level structure of {sup 101}Tc investigated by means of massive transfer reactions
Dejbakhsh, H.; Mouchaty, G.; Schmitt, R.P.
1991-07-01
The structure of {sup 101}Tc has been studied using the {sup 100}Mo({sup 7}Li,{alpha}2{ital n}) reaction at 49 MeV. Both particle-{gamma} and particle--{gamma}-{gamma} coincidence experiments were performed. The intensities of {gamma} rays both in and out of the reaction plane were measured to obtain information on the {Delta}{ital I} of the transitions. A new band based on the {pi}{ital g}{sub 9/2} configuration was identified for the first time. To investigate the shape coexistence or configuration dependent deformation in this nucleus, interacting-boson-fermion-model and cranked shell-model calculations have been performed. Cranked shell-model calculations were used to interpret the states at higher excitation energies.
Semiclassical model calculations of NH⋯N H-bonds
NASA Astrophysics Data System (ADS)
Luck, W. A. P.; Wess, T.
1992-07-01
Using the ab initio MO-LCAO-SCF-CISD method with a 6-31G** basis we calculated, for 30 different N⋯N distances, the NH potentials of an [H2NH⋯NH2]- model complex containing a linear NH⋯N H-bond. Based on these potentials we quantized the NH stretching vibration by the semiclassical Planck-Sommerfeld phase-integral method. Anomalous vibrational spectroscopic behaviour, such as H-bond energies, NH frequencies, anharmonicity, the relation between frequencies and H-bond energies (Badger-Bauer rule), the correlation between NH distances and N⋯N distances, and the NH/ND isotope ratios of the fundamental NH and ND stretching vibrations, could be established theoretically over the whole N⋯N distance region, and the proton-transfer mechanism is discussed from the classical point of view. The results are similar to the OH⋯O properties and demonstrate that spectroscopic anomalies of strong H-bonds are not necessarily based on tunnel effects. The success of these results demonstrates, in addition, the usefulness of calculating quantized levels easily from known potentials using the phase integral, without needing to solve the Schrödinger equation.
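The phase-integral quantization used above solves the Bohr-Sommerfeld-type action condition ∮ p dq = (n + 1/2)h on a known potential, with no Schrödinger solver. A minimal numerical sketch in units with ħ = 1, using a harmonic toy potential rather than the paper's ab initio NH potentials:

```python
from math import sqrt, pi

def action_integral(E, V, m=1.0, xlo=-6.0, xhi=6.0, n=4000):
    """Classical action S(E) = oint p dq = 2 * integral of
    sqrt(2 m (E - V(x))) over the classically allowed region,
    evaluated by the midpoint rule."""
    h = (xhi - xlo) / n
    s = 0.0
    for i in range(n):
        x = xlo + (i + 0.5) * h
        dv = E - V(x)
        if dv > 0:  # classically allowed region only
            s += sqrt(2.0 * m * dv) * h
    return 2.0 * s

def semiclassical_level(n_quantum, V, e_lo=1e-6, e_hi=50.0):
    """Bisect for the energy satisfying S(E) = (n + 1/2) * 2*pi
    (with hbar = 1, Planck's constant h = 2*pi)."""
    target = (n_quantum + 0.5) * 2.0 * pi
    for _ in range(60):
        e_mid = 0.5 * (e_lo + e_hi)
        if action_integral(e_mid, V) < target:
            e_lo = e_mid
        else:
            e_hi = e_mid
    return 0.5 * (e_lo + e_hi)
```

For V(x) = x²/2 the quantized levels reproduce the harmonic-oscillator energies n + 1/2, for which the semiclassical result happens to be exact.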
Assessment of Some Atomization Models Used in Spray Calculations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Bulzin, Dan
2011-01-01
The paper presents results from a validation study undertaken as part of NASA's Fundamental Aeronautics initiative on high-altitude emissions, in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As part of this investigation we validated the models against four different cases, investigating the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure-swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in aerospace applications. The validation is based on 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results show reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate the droplet sizes in the inner radial region of a cylindrical jet.
Model calculations for enhanced fluorescence in photonic crystal phosphor.
Min, Kyungtaek; Choi, Yun-Kyoung; Jeon, Heonsu
2012-01-30
We propose a novel photonic structure, based on the photonic crystal (PC) effect, which our simulations show yields improved fluorescence efficiency from embedded phosphor. Specifically, the phosphor pumping efficiency can be significantly improved by tuning the pump photon energy to a photonic band edge (PBE) of the PC phosphor. We confirm this theoretically by calculating the optical properties of one-dimensional PC phosphor structures using the transfer-matrix method and the plane-wave expansion method. For a particular model structure based on a quantum-dot phosphor, the fluorescence enhancement factor was estimated to be as high as 6.9 for a monochromatic pump source and 2.2 for a broad-bandwidth (20 nm) pump source.
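The transfer-matrix method cited above multiplies 2x2 characteristic matrices, one per layer, to get the optical response of a 1D stack. A minimal normal-incidence sketch for lossless dielectric layers (the layer index and thickness in the check are illustrative, not the paper's quantum-dot structure):

```python
import cmath

def layer_matrix(n, d, wl):
    """Characteristic matrix of one dielectric layer at normal
    incidence: refractive index n, thickness d, vacuum wavelength wl
    (same length units for d and wl)."""
    delta = 2 * cmath.pi * n * d / wl  # phase thickness
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def transmittance(layers, wl, n_in=1.0, n_out=1.0):
    """Stack transmittance: multiply the layer matrices in order, then
    apply the standard amplitude-transmission formula for the stack
    between semi-infinite media n_in and n_out."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = matmul(M, layer_matrix(n, d, wl))
    (m11, m12), (m21, m22) = M
    t = 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t) ** 2
```

A single quarter-wave layer of index 2 between air half-spaces gives T = 0.64, matching the textbook reflectance R = ((1 - n²)/(1 + n²))² = 0.36 for that case.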
Quantum plasmonics: from jellium models to ab initio calculations
NASA Astrophysics Data System (ADS)
Varas, Alejandro; García-González, Pablo; Feist, Johannes; García-Vidal, F. J.; Rubio, Angel
2016-08-01
Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is the nanoparticle dimer, extensively studied using both classical and quantum prescriptions; only very recently, however, have fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers been carried out. Here, we review recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool for simulating the time evolution of plasmonic modes, providing fundamental insight into the underlying microscopic mechanisms.
Shape evolution of yrast-band in 78Kr
NASA Astrophysics Data System (ADS)
Joshi, P. K.; Jain, H. C.; Palit, R.; Mukherjee, G.; Nagaraj, S.
2002-03-01
Lifetimes have been measured up to the I=22 + level in the yrast positive-parity band for 78Kr using the recoil distance and lineshape analysis methods. The B(E2) and Qt values obtained from these measurements show a significant drop with increasing spin. The band crossings and the observed variation in Qt are understood through cranked shell-model, TRS and configuration-dependent shell-correction calculations assuming an oblate deformation for 78Kr at low spins.
Full waveform modelling and misfit calculation using the VERCE platform
NASA Astrophysics Data System (ADS)
Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas
2016-04-01
simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.
MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model
NASA Technical Reports Server (NTRS)
James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert
2008-01-01
The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are composed primarily of protons (approx. 84.5%) and alpha particles (approx. 14.7%), while the remainder comprises massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport of over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.
Effective Inflow Conditions for Turbulence Models in Aerodynamic Calculations
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Rumsey, Christopher L.
2007-01-01
The selection of inflow values at boundaries far upstream of an aircraft is considered, for one- and two-equation turbulence models. Inflow values are distinguished from the ambient values near the aircraft, which may be much smaller. Ambient values should be selected first, and inflow values that will lead to them after the decay second; this is not always possible, especially for the time scale. The two-equation decay during the approach to the aircraft is shown; often, the time scale has been set too short for this decay to be calculated accurately on typical grids. A simple remedy for both issues is to impose floor values for the turbulence variables outside the viscous sublayer, and it is argued that overriding the equations in this manner is physically justified. Selecting laminar ambient values is easy if the boundary layers are to be tripped, but a more common practice is to seek ambient values that will cause immediate transition in boundary layers. This opens up a wide range of values, and selection criteria are discussed. The turbulent Reynolds number, or ratio of eddy viscosity to laminar viscosity, has a huge dynamic range that makes it unwieldy; it has been widely misused, particularly by codes that set upper limits on it. The value of turbulent kinetic energy in a wind tunnel or the atmosphere is also of dubious value as an input to the model. Concretely, the ambient eddy viscosity must be small enough to preserve potential cores in small geometry features, such as flap gaps. The ambient frequency scale should also be small enough, compared with shear rates in the boundary layer. Specific values are recommended and demonstrated for airfoil flows.
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.
Molecular modeling study of chiral drug crystals: lattice energy calculations.
Li, Z J; Ojala, W H; Grant, D J
2001-10-01
The lattice energies of a number of chiral drugs with known crystal structures were calculated using the Dreiding II force field. The lattice energies, including van der Waals, Coulombic, and hydrogen-bonding energies, of homochiral and racemic crystals of some ephedrine derivatives and of several other chiral drugs are compared. The calculated energies are correlated with experimental data to probe the underlying intermolecular forces responsible for the formation of racemic species, racemic conglomerates, or racemic compounds, termed chiral discrimination. Comparison of the calculated energies among ephedrine derivatives reveals that a greater Coulombic energy corresponds to a higher melting temperature, while a greater van der Waals energy corresponds to a larger enthalpy of fusion. For seven pairs of homochiral and racemic compounds, correlation of the differences between the two forms in the calculated energies and experimental enthalpy of fusion suggests that van der Waals interactions play a key role in chiral discrimination in the crystalline state. For salts of the chiral drugs, the counterions diminish chiral discrimination by increasing the Coulombic interactions. This result may explain why salt forms favor the formation of racemic conglomerates, thereby facilitating the resolution of racemates.
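As an illustration of the energy partition described above (a van der Waals well plus a Coulombic term), here is a minimal pairwise lattice-energy sketch with a Dreiding-like 12-6 potential. The parameter values, combining rules, and cutoff are placeholder assumptions, not the published Dreiding II values; real lattice-energy calculations also require periodic summation and explicit hydrogen-bond terms.

```python
import numpy as np

COULOMB_K = 332.0637  # kcal*Angstrom/(mol*e^2), standard conversion for q_i*q_j/r

def pair_energy(r, d0, r0, qi, qj):
    """Illustrative Dreiding-style pair energy (kcal/mol): a 12-6 van der
    Waals well of depth d0 at separation r0, plus a Coulomb term."""
    vdw = d0 * ((r0 / r) ** 12 - 2.0 * (r0 / r) ** 6)
    coul = COULOMB_K * qi * qj / r
    return vdw + coul

def lattice_energy(coords, params, charges, cutoff=10.0):
    """Sum pair energies over all unique atom pairs within a cutoff.
    coords: (N,3) positions in Angstrom; params: list of (d0, r0); charges in e."""
    coords = np.asarray(coords, dtype=float)
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            if r < cutoff:
                d0 = np.sqrt(params[i][0] * params[j][0])  # geometric combining
                r0 = 0.5 * (params[i][1] + params[j][1])   # arithmetic combining
                e += pair_energy(r, d0, r0, charges[i], charges[j])
    return e
```

For two neutral atoms placed at their equilibrium separation, the sum reduces to minus the well depth, a quick sanity check on the implementation.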
National Stormwater Calculator - Version 1.1 (Model)
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...
Calculation of the 3D density model of the Earth
NASA Astrophysics Data System (ADS)
Piskarev, A.; Butsenko, V.; Poselov, V.; Savin, V.
2009-04-01
The study of the Earth's crust is part of an investigation aimed at extension of the Russian Federation continental shelf in the Sea of Okhotsk. The gathered data allow the Sea of Okhotsk area located outside the exclusive economic zone of the Russian Federation to be considered a natural continuation of Russian territory. The Sea of Okhotsk is an Epi-Mesozoic platform with a Pre-Cenozoic heterogeneous folded basement of polycyclic development and a sedimentary cover mainly composed of Paleocene-Neogene-Quaternary deposits. Results of processing and complex interpretation of seismic, gravity, and aeromagnetic data along profile 2-DV-M, together with analysis of available geological and geophysical information on the Sea of Okhotsk region, allowed calculation of an Earth crust model. Four layers stand out (bottom-up) in the structure of the crust: granulite-basic (density 2.90 g/cm3), granite-gneiss (density 2.60-2.76 g/cm3), volcanogenic-sedimentary (2.45 g/cm3) and sedimentary (density 2.10 g/cm3). The last is absent on the continent and observed only in the water area. The density of the upper mantle is taken as 3.30 g/cm3. The observed gravity anomalies are mostly related to the surface relief of the above-mentioned layers or to density variations of the granite-metamorphic basement, so outlining the basement blocks of different constitution preceded the modeling. This operation was executed after double Fourier spectrum analysis of the gravity and magnetic anomalies and subsequent compilation of synthetic anomaly maps related to the basement density and magnetic heterogeneity. According to bathymetry data, the Sea of Okhotsk can be subdivided into three mega-blocks. Taking into consideration that the central Sea of Okhotsk area is aseismic, i.e. isostatically compensated, it is evident that the crustal structure of these three blocks differs. The South-Okhotsk depression is characterized by sea depths of 3200-3300 m. The Moho surface in this area is at
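For scale, the density contrasts quoted above translate into observable gravity anomalies even in the crudest (infinite horizontal slab) approximation. This sketch is not the authors' Fourier-based workflow, and the 1 km thickness is an arbitrary example.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_anomaly_mgal(delta_rho, thickness_m):
    """Gravity anomaly of an infinite horizontal slab (Bouguer approximation),
    dg = 2*pi*G*delta_rho*h, returned in mGal (1 mGal = 1e-5 m/s^2).
    delta_rho in kg/m^3, thickness in m."""
    return 2.0 * math.pi * G * delta_rho * thickness_m / 1e-5

# Example: a 1 km thick sedimentary body (2.10 g/cm^3) replacing
# granite-gneiss (take 2.68 g/cm^3): density contrast of -580 kg/m^3
dg = slab_anomaly_mgal((2.10 - 2.68) * 1000.0, 1000.0)  # about -24 mGal
```

An anomaly of tens of mGal is well within the sensitivity of marine gravity surveys, which is why the layer relief dominates the observed field.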
A simplified model for unstable temperature field calculation of gas turbine rotor
NASA Astrophysics Data System (ADS)
He, Guangxin
1989-06-01
A simplified model is presented for calculating the unstable temperature field of a cooled turbine rotor by the finite element method. In the simplified model, the outer radius of the computational domain is chosen smaller than the radius of the bottom of the fir-tree root groove, and an equivalent heat release coefficient is introduced at that boundary. The calculation can thus be treated as an axisymmetric problem and carried out on a microcomputer. The simplified model has been used to calculate the unstable temperature field during the start-up of a rotor. A comparison with the three-dimensional calculated result shows that the simplified model is satisfactory.
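The reduction described above amounts to radial heat conduction with an equivalent convective boundary condition. Below is a minimal explicit finite-difference stand-in for the paper's finite element treatment; all material properties, dimensions, and boundary values are illustrative assumptions, not the rotor data.

```python
import numpy as np

def rotor_temperature(nr=31, R=0.5, alpha=1.2e-5, k=25.0, h_eq=500.0,
                      T0=300.0, T_gas=800.0, dt=1.0, steps=3600):
    """Explicit finite-difference solution of radial heat conduction
    dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr) on 0 <= r <= R, with an
    equivalent convective boundary -k dT/dr = h_eq (T - T_gas) at r = R
    and symmetry (dT/dr = 0) at the axis. Returns (r, T) after `steps`
    steps of length dt. Stability requires dt <= dr^2 / (2 alpha)."""
    r = np.linspace(0.0, R, nr)
    dr = r[1] - r[0]
    T = np.full(nr, T0, dtype=float)
    biot = dr * h_eq / k          # local Biot number of the boundary cell
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt * (
            (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
            + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1]))
        Tn[0] = Tn[1]                                    # symmetry at the axis
        Tn[-1] = (Tn[-2] + biot * T_gas) / (1.0 + biot)  # convective surface
        T = Tn
    return r, T
```

During a simulated start-up the surface leads the bore, reproducing the thermal lag that drives start-up stresses in a real rotor.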
[An empirical model for calculating electron dose distributions].
Leistner, H; Schüler, W
1990-01-01
For irradiation planning, dose distributions in radiation fields are generally calculated from measured depth-dose and cross-profile distributions. In electron fields the measurement effort for this is particularly high, because these distributions must be measured for all occurring irradiation parameters and at many different tissue depths. At least for the 6-10 MeV electron radiation of the linear accelerator Neptun 10p, it can be shown that all required distributions can be calculated from a single separately measured depth-dose and cross-profile distribution. For this purpose the depth-dose distribution and the measured edge fall-off of the cross-profile are tabulated, and the abscissas are subjected to a linear transformation x' = k·x. For the depth-dose distribution the transformation factor k depends on electron energy only; for the cross-profile it additionally depends on tissue depth and source-surface distance. PMID:2356295
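The core of the method is the abscissa transformation x' = k·x applied to a single measured curve. The sketch below applies that rescaling to a toy depth-dose curve; the Gaussian shape and the value of k are illustrative, not measured data.

```python
import numpy as np

def rescale_distribution(x_ref, d_ref, k):
    """Given a reference distribution d_ref sampled at depths x_ref,
    return its values on the same grid after the abscissa transformation
    x' = k * x. A point at transformed depth x corresponds to reference
    depth x / k, evaluated here by linear interpolation."""
    return np.interp(np.asarray(x_ref) / k, x_ref, d_ref)

x = np.linspace(0.0, 5.0, 101)             # depth grid in cm (illustrative)
d = np.exp(-((x - 2.0) / 1.0) ** 2)        # toy depth-dose curve, peak at 2 cm
d_scaled = rescale_distribution(x, d, k=1.2)  # k = 1.2 is a hypothetical factor
```

With k = 1.2 the dose maximum shifts from 2.0 cm to 2.4 cm, i.e. the whole curve is stretched along the depth axis rather than re-measured.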
Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response), which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
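The abstract notes that CLEAR computes travel velocity on a road segment as a function of its vehicle density. CLEAR's actual relation is not given here; a common textbook choice, shown purely as an assumption, is the linear Greenshields speed-density model.

```python
def greenshields_speed(density, free_speed=88.0, jam_density=200.0):
    """Linear Greenshields relation: v = vf * (1 - rho / rho_jam).
    Units are illustrative (km/h and vehicles/km/lane); the free-flow
    speed and jam density are assumed values, not CLEAR's parameters."""
    if density >= jam_density:
        return 0.0
    return free_speed * (1.0 - density / jam_density)

def segment_travel_time(length_km, density):
    """Travel time in hours on a segment at the given vehicle density;
    an infinitely long delay once the segment reaches jam density."""
    v = greenshields_speed(density)
    return float('inf') if v == 0.0 else length_km / v
```

A useful property of this relation is that flow (density times speed) peaks at half the jam density, which is why evacuation throughput collapses when segments are overloaded past that point.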
Calculation of the Aerodynamic Behavior of the Tilt Rotor Aeroacoustic Model (TRAM) in the DNW
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance and airloads for helicopter mode operation, as well as calculated induced and profile power. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
An assessment of artificial damping models for aeroacoustic calculations
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham
1995-01-01
We present a study of the effect of artificial dissipation models on nonlinear wave computations using several high-order schemes. Our motivation is to assess the effectiveness of artificial dissipation models and their suitability for aeroacoustic computations. We solve three model problems in one dimension using the Euler equations. Initial conditions are chosen to generate nonlinear waves in the computational domain. We examine various dissipation models in central difference schemes such as the Dispersion Relation Preserving (DRP) scheme and the standard fourth- and sixth-order schemes. We also make a similar study with the fourth-order MacCormack scheme due to Gottlieb and Turkel.
Code System for Calculating Ion Track Condensed Collision Model.
1997-05-21
Version 00 ICOM calculates the transport characteristics of ion radiation for application to radiation protection, dosimetry and microdosimetry, and the radiation physics of solids. Ions in the range Z=1-92 are handled. The energy range for protons is 0.001-10,000 MeV; for other ions it is 0.001-100 MeV/nucleon. Computed quantities include stopping powers and ranges; spatial, angular and energy distributions of particle current and fluence; spatial distributions of the absorbed dose; and spatial distributions of thermalized ions.
NASA Astrophysics Data System (ADS)
Dekker, C. M.; Sliggers, C. J.
To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria the developers of these models and producers of the software packages in this field can assure and account for the quality of their products. In this way critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming—including user-friendliness, and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The operation of solidification model 1 is described. Model 1 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of horizontal axisymmetric bidirectional solidification. The calculation is restricted to steady-state solidification; there is no variation in final local average composition in the direction of isotherm movement. The physics of the model are given.
The Martian Plasma Environment: Model Calculations and Observations
NASA Astrophysics Data System (ADS)
Lichtenegger, H. I. M.; Dubinin, E.; Schwingenschuh, K.; Riedler, W.
Based on a modified version of the model of an induced Martian magnetosphere developed by Luhmann (1990), the dynamics and spatial distribution of different planetary ion species are examined. Three main regions are identified: a cloud of ions travelling along cycloidal trajectories, a plasma mantle, and a plasma sheet. The latter predominantly consists of oxygen ions of ionospheric origin with minor portions of light particles. Comparison of model results with Phobos-2 observations shows reasonable agreement.
Calculation of screening masses in a chiral quark model
Li Xiangdong; Li Hu; Shakin, C.M.; Sun Qing
2004-10-01
We consider a simple model for the coordinate-space vacuum polarization function, which is often parametrized in terms of a screening mass. We discuss the circumstances in which the value m_sc = πT is obtained for the screening mass. In the model considered here, that result is obtained when the momenta in the relevant vacuum polarization integral are small with respect to the first Matsubara frequency.
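The origin of this limiting value can be sketched in one line: at temperature T a fermion's Euclidean frequencies are quantized at odd multiples of πT, and the lowest of these governs the slowest possible spatial decay of the correlator (a textbook argument assuming the lowest Matsubara mode dominates, stated here only to motivate the abstract's condition):

```latex
\omega_n = (2n+1)\pi T \quad (n = 0, 1, 2, \dots), \qquad
\Pi(x) \sim e^{-m_{\mathrm{sc}}\, x}
\quad\text{with}\quad m_{\mathrm{sc}} = \pi T ,
```

which is precisely the regime the abstract identifies: loop momenta small compared with the first Matsubara frequency.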
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons, employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
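The scaling SEUS is built on (emission rate, canyon width, dispersive velocity scale, background) has the generic form C = C_b + Q / (W·u_d). The sketch below assumes a quadrature combination of wind- and traffic-induced turbulence with placeholder constants; SEUS's own two empirical parameters were fitted to full-scale data and are not reproduced here.

```python
import math

def canyon_concentration(Q, W, u_wind, u_traffic, background, a=0.1, b=0.3):
    """Semi-empirical street-canyon concentration sketch:
    C = C_b + Q / (W * u_d), with a dispersive velocity scale u_d that
    combines wind-driven and traffic-produced turbulence in quadrature.

    Q: emission rate per unit street length (e.g. ug m^-1 s^-1)
    W: canyon width (m); u_wind, u_traffic: velocity scales (m/s)
    a, b: placeholder dimensionless parameters (SEUS fits its own)."""
    u_d = math.sqrt((a * u_wind) ** 2 + (b * u_traffic) ** 2)
    return background + Q / (W * u_d)
```

The quadrature form captures the qualitative behaviour in the abstract: at low wind speeds traffic turbulence sets a floor on dispersion, while at wind speeds above about 2 m/s the wind term dominates and concentrations fall.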
Model Calculations of Continuous-Wave Laser Ionization of Krypton
Bret D. Cannon
1999-07-27
This report describes modeling of a scheme that uses continuous-wave (CW) lasers to ionize selected isotopes of krypton with high isotopic selectivity. The models predict that combining this ionization scheme with mass spectrometric measurement of the resulting ions can be the basis for ultra-sensitive methods to measure 85Kr in the presence of a 10^11 excess of the stable krypton isotopes. Two experimental setups are considered in this model: the first is for krypton as a static gas, the second for krypton in an atomic beam. In the static gas experiment, for a total krypton pressure of 10^-4 torr and 10 W of power in the cavity, the model predicts a total krypton ion current of 4.6 x 10^8 s^-1 and, for a 85Kr/Kr ratio of 10^-11, a 85Kr ion current of 3.5 s^-1, or about 10,000 per hour. The atomic beam setup allows higher isotopic selectivity; the model predicts a 85Kr ion current of 18 s^-1, or 65,000 per hour.
Speeding Up Calculations Of The Non-equilibrium Ionization Model
NASA Astrophysics Data System (ADS)
Ji, Li; Noble, M.; Schulz, N. S.; Nowak, M. A.; Marshall, H. L.
2008-03-01
By taking advantage of the atomic data and physics from the equilibrium photoionization model of XSTAR, we are extending our non-equilibrium collisional ionization code to photoionized plasmas. The expanded model will allow us to study processes in a wide range of astrophysical scenarios, such as colliding winds in X-ray binaries, outflows in AGNs, and shock flows in the IGM, but presents significant challenges. Chief among these are that the new model is expensive to compute and difficult to compare directly with HETG observations. We discuss how parallelism and modular software techniques are being brought to bear on these problems, in the context of several applications: (1) emission measure analysis for the accretion disk corona of Her X-1, using XSTAR within the Parallel Virtual Machine; (2) using ISIS for direct ionization analysis and line diagnostics of the plane shock model, via our dynamically loadable interface to selected routines from the XSPEC vpshock model; and (3) computing atomic rates directly in ISIS by way of our dynamically loadable XSTAR module.
Thorne, B.J.
1990-10-01
Early attempts at estimating stress-wave damage due to blasting by use of finite element calculations met with limited success due to numerical instabilities that prevented calculations from being carried to late times. An improved damage model allows finite element calculations that remain stable at late times. Reasonable agreement between crater profiles calculated with this model using the PRONTO finite element program and excavated crater profiles from blasting experiments in granite demonstrates a successful application of this model. Detailed instructions for use of this new damage model with the PRONTO finite element programs are included. 18 refs., 16 figs.
Improved Dielectric Solvation Model for Electronic Structure Calculations
Chipman, Daniel M.
2015-12-16
This project was originally funded for the three year period from 09/01/2009 to 08/31/2012. Subsequently a No-Cost Extension was approved for a revised end date of 11/30/2013. The primary goals of the project were to develop continuum solvation models for nondielectric short-range interactions between solvent and solute that arise from dispersion, exchange, and hydrogen bonding. These goals were accomplished and are reported in the five peer-reviewed journal publications listed in the bibliography below. The secondary goals of the project included derivation of analytic gradients for the models, improvement of the cavity integration scheme, application of the models to the core-level spectroscopy of water, and several other miscellaneous items. These goals were not accomplished because they depended on completion of the primary goals, after which there was a lack of time for any additional effort.
Fluorescein as a model molecular calculator with reset capability
NASA Astrophysics Data System (ADS)
Margulies, David; Melman, Galina; Shanzer, Abraham
2005-10-01
The evolution of molecules capable of performing Boolean operations has come a long way since the inception of the first molecular AND logic gate, followed by other logic functions such as XOR and INHIBIT, and has reached the stage where these tiny processors execute arithmetic calculations. Molecular logic gates that process a variety of chemical inputs can now be loaded with arrays of logic functions, enabling even a single molecular species to execute distinct algebraic operations: addition and subtraction. However, unlike electronic or optical signals, the accumulation of chemical inputs prevents chemical arithmetic systems from resetting. Consequently, a set of solutions is required to complete even the simplest arithmetic cycle. It has been suggested that these limitations can be overcome by washing off the input signals from solid supports. An alternative approach, which does not require solvent exchange or incorporation of bulk surfaces, is to reset the arithmetic system chemically. Ultimately, this is how some biological systems regenerate. Here we report a highly efficient and exceptionally simple molecular arithmetic system, based on a plain fluorescein dye, capable of performing a full scale of elementary addition and subtraction algebraic operations. This system can be reset following each separate arithmetic step. The ability to selectively eradicate chemical inputs brings us closer to the realization of chemical computation.
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were obtained using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Suomi NPP VIIRS Striping Analysis using Radiative Transfer Model Calculations
NASA Astrophysics Data System (ADS)
Wang, Z.; Cao, C.
2015-12-01
Modern satellite radiometers such as VIIRS have many detectors with slightly different relative spectral responses (RSR). These differences can introduce artifacts such as striping in the imagery. In recent studies we analyzed the striping pattern related to detector-level RSR differences in the VIIRS Thermal Emissive Bands (TEB) M15 and M16, including a line-by-line radiative transfer model (LBLRTM) detector-level response study and an onboard detector stability evaluation using the solar diffuser. We now extend these analyses to the Reflective Solar Bands (RSB) using the MODTRAN atmospheric radiative transfer model (RTM) for detector-level radiance simulation. Previous studies analyzed the striping pattern in images of VIIRS ocean color and RSB reflectance; further study of the root cause of striping is still needed. In this study, we use the MODTRAN model at a spectral resolution of 1 cm^-1 under different atmospheric conditions for the VIIRS RSB, for example band M1, centered at 410 nm, which is used for ocean color product retrieval. The impact of detector-level RSR differences, atmospheric dependency, and solar geometry on striping in VIIRS SDR imagery will be investigated. The cumulative histogram method used successfully for the TEB striping analysis will be used to quantify the striping. These analyses help S-NPP and J1 better understand the root cause of VIIRS image artifacts and reduce the uncertainties in geophysical retrievals to meet user needs.
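The cumulative-histogram comparison mentioned for the TEB analysis can be sketched as empirical quantile matching between one detector's samples and a reference detector's. This generic implementation and the toy Gaussian "radiances" are assumptions for illustration, not the study's actual procedure or data.

```python
import numpy as np

def match_cumulative_histogram(values, reference):
    """Map `values` so that its empirical cumulative histogram matches
    that of `reference` (quantile matching): each sample is replaced by
    the reference value at the same empirical CDF position."""
    values = np.asarray(values, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Empirical CDF position of each input value within its own sample
    ranks = np.argsort(np.argsort(values))
    quantiles = (ranks + 0.5) / len(values)
    # Look up the same quantiles in the reference distribution
    return np.quantile(reference, quantiles)

rng = np.random.default_rng(0)
ref = rng.normal(300.0, 2.0, 5000)   # reference detector radiances (toy units)
det = rng.normal(301.5, 2.0, 5000)   # "striped" detector with a 1.5-unit bias
corrected = match_cumulative_histogram(det, ref)
```

After matching, the detector's distribution coincides with the reference's, so any residual row-to-row offset (the stripe amplitude) can be read off directly from the pre-matching quantile differences.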
Time-partitioning simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.
1987-01-01
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, with each updating of the solution grid at a different time point. The technique is limited by neither the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two processor Cray X-MP/24 computer.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessment of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality or impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutron and ion) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. ©2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Multiscale modeling approach for calculating grain-boundary energies from first principles
Shenderova, O.A.; Brenner, D.W.; Nazarov, A.A.; Romanov, A.E.; Yang, L.H.
1998-02-01
A multiscale modeling approach is proposed for calculating energies of tilt-grain boundaries in covalent materials from first principles over an entire misorientation range for given tilt axes. The method uses energies from density-functional calculations for a few key structures as input into a disclination structural-units model. This approach is demonstrated by calculating energies of ⟨001⟩-symmetrical tilt-grain boundaries in diamond. © 1998 The American Physical Society.
The Io mass-loading disk: Model calculations
NASA Astrophysics Data System (ADS)
Wang, Yongli; Russell, Christopher T.; Raeder, Joachim
2001-11-01
The observations of ion cyclotron waves up to 0.5 RJ beyond the orbit of Io are best explained by the presence of a thin disk of fast neutrals whose ionization and pickup provide the free energy for the waves. We extend the model of Wilson and Schneider [1999] in order to explain the observed properties of this mass-loading region, especially the most recent Galileo observations near Io. In the extended model, some of the molecules of sulfur compounds in Io's exobase are first ionized by photoionization, impact ionization, and charge exchange. These charged particles are accelerated in the corotation electric field associated with the motion of the magnetized Io torus plasma that is moving through its exosphere. After a period of acceleration the heavy ions are neutralized by charge exchange with other exospheric neutral particles or combined with local electrons. These newly neutralized particles continue with high velocities similar to those of their former charged state, moving only under the influence of the gravity fields of Jupiter and Io, not affected by the electric and magnetic field. If they do not impact Io or its atmosphere, these neutral particles can propagate large distances across the magnetic field before they are reionized. Eventually, the reionized particles are lost by dissociation. Characteristic Io mass-loading particle distributions, such as high torus plasma density outside Io's orbit and the lower density inside, and the directional feature of the mass-loading neutral cloud, are qualitatively reproduced in the model. Meanwhile, the configuration of the mass-loading region, in which the ion cyclotron waves are observed, is obtained, and the results are consistent with Galileo wave observations. Three parameters are found to control the structure of the neutral and ion loading disks: the characteristic lifetimes of the initially created ions, of the neutral molecules, and of the ions generated by neutral particle reionization. In addition
Efficient distance calculation using the spherically-extended polytope (s-tope) model
NASA Technical Reports Server (NTRS)
Hamlin, Gregory J.; Kelley, Robert B.; Tornero, Josep
1991-01-01
An object representation scheme which allows for Euclidean distance calculation is presented. The object model extends the polytope model by representing objects as the convex hull of a finite set of spheres. An algorithm for calculating distances between objects is developed which is linear in the total number of spheres specifying the two objects.
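The abstract does not reproduce the linear-time algorithm itself, but the geometric idea can be hinted at with a much cruder bound: every sphere lies inside the convex hull it helps define, so any sphere-to-sphere gap bounds the hull-to-hull distance from above. A minimal Python sketch (the function names and the pairwise scan are illustrative stand-ins, not the paper's method):

```python
import math

def sphere_pair_distance(c1, r1, c2, r2):
    """Gap between two spheres; negative means they overlap."""
    return math.dist(c1, c2) - (r1 + r2)

def stope_distance_upper_bound(spheres_a, spheres_b):
    """Upper bound on the distance between two s-topes.

    Each s-tope is the convex hull of its spheres, so the hull-to-hull
    distance can never exceed any single sphere-to-sphere gap.  The
    paper's algorithm instead optimizes over convex combinations of
    the spheres; this pairwise scan is only a conservative stand-in.
    """
    return min(
        sphere_pair_distance(ca, ra, cb, rb)
        for ca, ra in spheres_a
        for cb, rb in spheres_b
    )

# Two "capsules", each the convex hull of two spheres
a = [((0.0, 0.0, 0.0), 1.0), ((2.0, 0.0, 0.0), 1.0)]
b = [((0.0, 5.0, 0.0), 0.5), ((2.0, 5.0, 0.0), 0.5)]
print(stope_distance_upper_bound(a, b))  # 3.5
```

For these parallel capsules the bound happens to equal the exact distance; in general it only bounds it from above.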
NASA Astrophysics Data System (ADS)
Preobrazhenskii, M. P.; Rudakov, O. B.
2016-01-01
A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the model proposed were calculated for a series of solutions. The correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter value of the proposed model is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.
Diurnal heating of ocean: Experiments and model calculations
Tsvetkov, A.V.; Kudryavtsev, Y.N.; Grodsky, S.A.
1994-12-31
Presented are the results of an investigation of diurnal heating of the ocean upper layer, in which absorption of insolation in the near-surface layer leads to formation of the diurnal thermocline. Turbulence suppression below the heated layer means that the momentum flux incoming from the atmosphere does not propagate below the diurnal thermocline, in which the main current velocity shears are concentrated. Motion of this layer is determined by wind stress and the Coriolis force. In the evening, as the depth of the heated layer grows, the layer begins to decelerate rapidly. Groups of internal waves are recorded in the diurnal thermocline; shear instability of the thermocline may be the source of their generation. Experimental data are analyzed with a model based on the concept of diurnal-thermocline shear instability. The combined analysis reveals peculiarities of the heated-layer dynamics over a wide range of wind velocities and Coriolis parameters, and yields semi-empirical dependences suitable for practical use.
Model Calculations of Ocean Acidification at the End Cretaceous
NASA Astrophysics Data System (ADS)
Tyrrell, T.; Merico, A.; Armstrong McKay, D. I.
2014-12-01
Most episodes of ocean acidification (OA) in Earth's past were either too slow or too minor to provide useful lessons for understanding the present. The end-Cretaceous event (66 Mya) is special in this sense, both because of its rapid onset and also because many calcifying species (including 100% of ammonites and >95% of calcareous nannoplankton and planktonic foraminifera) went extinct at this time. We used box models of the ocean carbon cycle to evaluate whether impact-generated OA could feasibly have been responsible for the calcifier mass extinctions. We simulated several proposed consequences of the asteroid impact: (1) vaporisation of gypsum (CaSO4) and carbonate (CaCO3) rocks at the point of impact, producing sulphuric acid and CO2 respectively; (2) generation of NOx by the impact pressure wave and other sources, producing nitric acid; (3) release of CO2 from wildfires, biomass decay and disinterring of fossil organic carbon and hydrocarbons; and (4) ocean stirring leading to introduction into the surface layer of deep water with elevated CO2. We simulated additions over: (A) a few years (e-folding time of 6 months), and also (B) a few days (e-folding time of 10 hours) for SO4 and NOx, as recently proposed by Ohno et al (2014. Nature Geoscience, 7:279-282). Sulphuric acid as a consequence of gypsum vaporisation was found to be the most important acidifying process. Results will also be presented of the amounts of SO4 required to make the surface ocean become extremely undersaturated (Ωcalcite<0.5) for different e-folding times and combinations of processes. These will be compared to estimates in the literature of how much SO4 was actually released.
Micro-mutual-dipolar model for rapid calculation of forces between paramagnetic colloids.
Du, Di; Biswal, Sibani Lisa
2014-09-01
Typically, the force between paramagnetic particles in a uniform magnetic field is calculated using either dipole-based models or the Maxwell stress tensor combined with Laplace's equation for magnetostatics. Dipole-based models are fast but involve many assumptions, leading to inaccuracies in determining forces for clusters of particles. The Maxwell stress tensor yields an exact force calculation, but solving Laplace's equation is very time consuming. Here, we present a more elaborate dipole-based model: the micro-mutual-dipolar model. Our model has a time complexity that is similar to that of other dipole-based models but is much more accurate especially when used to calculate the force of small aggregates. Using this model, we calculate the force between two paramagnetic spheres in a uniform magnetic field and a circular rotational magnetic field and compare our results with those of other models. The forces for three-particle and ten-particle systems dispersed in two-dimensional (2D) space are examined using the same model. We also apply this model to calculate the force between two paramagnetic disks dispersed in 2D space. The micro-mutual-dipolar model is demonstrated to be useful for force calculations in dynamic simulations of small clusters of particles for which both accuracy and efficiency are desirable. PMID:25314567
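As background for the comparison the authors describe, the fixed-dipole approximation that mutual-dipole models improve on follows directly from the dipole-dipole interaction energy. A hedged sketch (the bead moment, the separation, and the radial-force-only treatment are illustrative assumptions, not the paper's model):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def fixed_dipole_radial_force(m, r, theta):
    """Radial force between two equal point dipoles of moment m (A*m^2),
    both aligned with a uniform field, at separation r (m) and angle
    theta between the field and the line of centres.

    From U = (mu0 * m^2 / (4 pi r^3)) * (1 - 3 cos^2 theta) and
    F_r = -dU/dr.  Negative = attraction, positive = repulsion.
    """
    return 3 * MU0 * m**2 * (1 - 3 * math.cos(theta) ** 2) / (4 * math.pi * r**4)

m = 1e-12  # hypothetical moment of a micron-scale paramagnetic bead
r = 2e-6   # centre-to-centre separation (m)
print(fixed_dipole_radial_force(m, r, 0.0) < 0)           # True (head-to-tail attracts)
print(fixed_dipole_radial_force(m, r, math.pi / 2) > 0)   # True (side-by-side repels)
```

The mutual-dipole refinement discussed in the abstract additionally lets each induced moment respond to its neighbours' fields, which this fixed-moment sketch ignores.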
Kase, Yuki; Kanai, Tatsuaki; Matsufuji, Naruhiro; Furusawa, Yoshiya; Elsässer, Thilo; Scholz, Michael
2008-01-01
Both the microdosimetric kinetic model (MKM) and the local effect model (LEM) can be used to calculate the surviving fraction of cells irradiated by high-energy ion beams. In this study, amorphous track structure models instead of the stochastic energy deposition are used for the MKM calculation, and it is found that the MKM calculation is useful for predicting the survival curves of the mammalian cells in vitro for ³He-, ¹²C- and ²⁰Ne-ion beams. The survival curves are also calculated by two different implementations of the LEM, which inherently used an amorphous track structure model. The results calculated in this manner show good agreement with the experimental results especially for the modified LEM. These results are compared to those calculated by the MKM. Comparison of the two models reveals that both models require three basic constituents: target geometry, photon survival curve and track structure, although the implementation of each model is significantly different. In the context of the amorphous track structure model, the difference between the MKM and LEM is primarily the result of different approaches calculating the biological effects of the extremely high local dose in the center of the ion track. PMID:18182686
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
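The physical basis described above can be hinted at with a toy version of the temperature correction: transmissivity is hydraulic conductivity times thickness, and conductivity scales inversely with dynamic viscosity. A sketch assuming pure water and a standard Vogel-type viscosity fit (the program's actual handling of dissolved solids and overburden pressure is not reproduced here):

```python
def water_viscosity(T_kelvin):
    """Dynamic viscosity of pure water (Pa*s), a common empirical
    Vogel-type fit.  Dissolved solids, which the documented program
    also accounts for, are ignored in this sketch."""
    return 2.414e-5 * 10 ** (247.8 / (T_kelvin - 140.0))

def transmissivity(k, b, T_kelvin, rho=1000.0, g=9.81):
    """Aquifer transmissivity T = K*b = (k*rho*g/mu) * b, in m^2/s,
    where k is intrinsic permeability (m^2) and b aquifer thickness (m)."""
    mu = water_viscosity(T_kelvin)
    return k * rho * g / mu * b

# Warmer water is less viscous, so transmissivity rises with temperature
t20 = transmissivity(k=1e-12, b=50.0, T_kelvin=293.15)
t60 = transmissivity(k=1e-12, b=50.0, T_kelvin=333.15)
print(t60 > t20)  # True
```

This is the sense in which temperature-dependent viscosity gives the calibration a physical basis: the spatial pattern of transmissivity is anchored to permeability and thickness rather than fitted freely.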
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on the time domain are applied to the longitudinal dispersion coefficient (Ex) and BOD decay rate (Kx) in the BOD model for the tidal Foshan River. The derivations of the inverse calculation have been established separately on the basis of the different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, Kx is more sensitive to the models than Ex, and different data sets of Ex and Kx hardly affect the precision of the models. PMID:25026574
A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2007-01-01
The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
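For the fixed-regression case the article calls well known, the power of the overall F test follows from the noncentral F distribution. A sketch using SciPy (the noncentrality convention lambda = f2*n is one common choice; the article's unified approach may parameterize it differently):

```python
from scipy.stats import f, ncf

def fixed_regression_power(n, p, f2, alpha=0.05):
    """Power of the overall F test in a fixed-predictor multiple
    regression with n observations, p predictors and Cohen's effect
    size f2, using the noncentrality convention lambda = f2 * n."""
    df1, df2 = p, n - p - 1
    crit = f.ppf(1 - alpha, df1, df2)           # critical value under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # upper-tail mass under H1

# Power grows with sample size for a fixed effect size
print(fixed_regression_power(100, 3, 0.15) < fixed_regression_power(200, 3, 0.15))  # True
```

The random-regression case the article focuses on treats the predictors as sampled too, which changes the distribution of the test statistic and hence the power function; that refinement is not sketched here.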
S-values calculated from a tomographic head/brain model for brain imaging
NASA Astrophysics Data System (ADS)
Chao, Tsi-chian; Xu, X. George
2004-11-01
A tomographic head/brain model was developed from the Visible Human images and used to calculate S-values for brain imaging procedures. This model contains 15 segmented sub-regions including caudate nucleus, cerebellum, cerebral cortex, cerebral white matter, corpus callosum, eyes, lateral ventricles, lenses, lentiform nucleus, optic chiasma, optic nerve, pons and middle cerebellar peduncle, skull CSF, thalamus and thyroid. S-values for C-11, O-15, F-18, Tc-99m and I-123 have been calculated using this model and a Monte Carlo code, EGS4. Comparison of the calculated S-values with those calculated from the MIRD (1999) stylized head/brain model shows significant differences. In many cases, the stylized head/brain model resulted in smaller S-values (as much as 88%), suggesting that the doses to a specific patient similar to the Visible Man could have been underestimated using the existing clinical dosimetry.
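For context, an S value in the MIRD formalism is the mean absorbed dose to a target region per nuclear decay in a source region. A toy version of the bookkeeping (the emission energies, absorbed fractions and target mass below are placeholders, not values from the study):

```python
def s_value(mean_energies_joules, absorbed_fractions, target_mass_kg):
    """MIRD-style S value in Gy per decay:
        S = sum_i Delta_i * phi_i / m_target,
    where Delta_i is the mean energy emitted per decay for emission
    type i (J) and phi_i is the fraction of that energy absorbed in
    the target region."""
    return sum(d * phi for d, phi in
               zip(mean_energies_joules, absorbed_fractions)) / target_mass_kg

# Placeholder numbers: a two-emission decay scheme and a 1.4 kg target
s = s_value([1.0e-13, 0.5e-13], [0.8, 0.1], 1.4)
print(s > 0)  # True
```

The Monte Carlo work in the abstract is precisely the hard part elided here: computing the absorbed fractions phi_i in anatomically realistic, segmented regions rather than stylized shapes.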
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 Protection of Environment, vol. 29 (2010-07-01): Calculation of fuel economy values for a model type. Environmental Protection Agency (continued), Energy Policy: Fuel Economy and Carbon-Related Exhaust Emissions of Motor Vehicles; Fuel Economy Regulations for 1977 and Later Model Year Automobiles, Procedures for Calculating Fuel Economy.
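The core of the procedure referenced here is sales-weighted harmonic averaging: fuel consumed per mile, not miles per gallon, is what adds across vehicle configurations. A sketch of that averaging step only (the full § 600.207-86 procedure has additional steps and roundings not shown):

```python
def harmonic_sales_weighted_mpg(sales, mpg):
    """Sales-weighted harmonic mean fuel economy for a model type:
    total sales divided by the sum of sales/mpg over configurations.
    Harmonic averaging is used because gallons per mile, not mpg,
    is additive across the fleet."""
    assert len(sales) == len(mpg)
    return sum(sales) / sum(s / m for s, m in zip(sales, mpg))

# 600 units at 30 mpg and 400 units at 20 mpg
print(harmonic_sales_weighted_mpg([600, 400], [30.0, 20.0]))  # 25.0
```

Note the result (25.0 mpg) is lower than the arithmetic sales-weighted mean (26 mpg); the harmonic mean always weights thirstier configurations more heavily.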
A Workstation Farm Optimized for Monte Carlo Shell Model Calculations : Alphleet
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Shimizu, N.; Haruyama, S.; Honma, M.; Mizusaki, T.; Taketani, A.; Utsuno, Y.; Otsuka, T.
We have built a workstation farm named "Alphleet", consisting of 140 Compaq Alpha 21264 CPUs, for Monte Carlo Shell Model (MCSM) calculations. It has achieved more than 90% scalable performance on 140 CPUs for MCSM calculations parallelized with PVM, and 61.2 Gflops on the LINPACK benchmark.
Application of Dynamic Grey-Linear Auto-regressive Model in Time Scale Calculation
NASA Astrophysics Data System (ADS)
Yuan, H. T.; Don, S. W.
2009-01-01
Because of the influence of different noise sources and other factors, the behavior of an atomic clock is very complex. In order to forecast the rate of an atomic clock accurately, it is necessary to study and design a model that calculates its rate in the near future. With this rate, the clock can be used in the calculation of local atomic time and in the steering of local universal time. In this paper a new forecast model, the dynamic grey-linear auto-regressive model, is studied, and the precision of the new model is given. The new model is tested against real data from the National Time Service Center.
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 0.25-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance for hover and helicopter mode operation, and airloads for helicopter mode. Calculated induced power, profile power, and wake geometry provide additional information about the aerodynamic behavior. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
NASA Technical Reports Server (NTRS)
Maples, A. L.
1980-01-01
The software developed for the solidification model is presented. A link between the calculations and the FORTRAN code is provided, primarily in the form of global flow diagrams and data structures. A complete listing of the solidification code is given.
Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes
NASA Astrophysics Data System (ADS)
Qi, Chong; Jia, L. Y.; Fu, G. J.
2016-07-01
Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 194-206Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized-seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configurations in this region.
Bougher, S.W. ); Gerard, J.C. ); Stewart, A.I.F.; Fesen, C.G. )
1990-05-01
Pioneer Venus (PV) orbiter ultraviolet spectrometer (OUVS) images of the nightside airglow in the (0, 1) δ band of nitric oxide showed a maximum whose average location was at 0200 local solar time just south of the equator. The average airglow brightness calculated over a portion of the nightside for 35 early orbits during the Pioneer Venus mission was a factor of 4 lower than this maximum. Recent recalibration of the PV OUVS instrument and reanalysis of the data yield new values for this statistical maximum (1.9 ± 0.6 kR) and the nightside average (400-460 ± 120 R) nightglow. This emission is produced by radiative recombination of N and O atoms transported from their source on the dayside to the nightside by the Venus thermospheric circulation. The Venus Thermospheric General Circulation Model (VTGCM) has been extended to incorporate odd nitrogen chemistry in order to examine the dynamical and chemical processes required to give rise to this emission. Its predictions of dayside N atom densities are also compared with empirical models based on Pioneer Venus measurements. Calculations are presented corresponding to OUVS data taken during solar maximum. The average production of nitrogen atoms on the dayside is about 9.0 × 10⁹ atoms cm⁻² s⁻¹. Approximately 30% of this dayside source is required for transport to the nightside to yield the observed dark-disk nightglow features. The statistical location and intensity of the bright spot are well reproduced, as well as the altitude of the airglow layer. The importance of the large-scale transport and eddy diffusion on the global N(⁴S) distribution is also evaluated.
NUCLEAR PHYSICS: Challenge on the Astrophysical R-Process Calculation with Nuclear Mass Models
NASA Astrophysics Data System (ADS)
Sun, Bao-Hua; Meng, Jie
2008-07-01
Our understanding of the rapid neutron capture nucleosynthesis process in the universe depends on the reliability of nuclear mass predictions. Motivated by the newly developed mass table in the relativistic mean field (RMF) theory, we investigate the influence of mass models on the r-process calculations, assuming the same astrophysical conditions. The different model predictions for the so far experimentally unreachable nuclei lead to significant deviations in the calculated r-process abundances.
Analytical approach to calculation of response spectra from seismological models of ground motion
Safak, Erdal
1988-01-01
An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.
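The random-vibration step mentioned above typically estimates the expected peak response from spectral moments of the response power spectral density, e.g. with Davenport's peak factor. A sketch under that assumption (the paper's exact treatment of nonstationarity and peak correlation is not reproduced):

```python
import math

def spectral_moment(freqs, psd, k):
    """k-th spectral moment m_k = integral of (2*pi*f)^k * S(f) df,
    computed with the trapezoidal rule over a one-sided PSD in Hz."""
    total = 0.0
    for i in range(len(freqs) - 1):
        y0 = (2 * math.pi * freqs[i]) ** k * psd[i]
        y1 = (2 * math.pi * freqs[i + 1]) ** k * psd[i + 1]
        total += 0.5 * (y0 + y1) * (freqs[i + 1] - freqs[i])
    return total

def expected_peak(freqs, psd, duration):
    """Davenport's expected peak of a stationary Gaussian response:
    sigma * (sqrt(2 ln(nu0*T)) + 0.5772 / sqrt(2 ln(nu0*T))),
    with nu0 the mean zero-crossing rate sqrt(m2/m0)/(2*pi)."""
    m0 = spectral_moment(freqs, psd, 0)
    m2 = spectral_moment(freqs, psd, 2)
    nu0 = math.sqrt(m2 / m0) / (2 * math.pi)
    s = math.sqrt(2 * math.log(nu0 * duration))
    return math.sqrt(m0) * (s + 0.5772 / s)
```

For a band-limited flat PSD and a 20 s duration this yields a peak factor a little above 3, the familiar order of magnitude for peak factors of broadband Gaussian motions.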
Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.
2008-04-01
A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for clear sky scenarios previously examined by the radiative transfer model intercomparison project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m⁻² of LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m⁻² of LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths of a K d⁻¹ in the troposphere and within 0.15 K d⁻¹ in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.
Evaluation model calculations with the water reactor analysis package (WRAP-EM)
Gregory, M.V.; Beranek, F.
1982-01-01
The Water Reactor Analysis Package-Evaluation Model (WRAP-EM) is a modular system of computer codes designed to provide the safety analyst with the capability of performing complete loss-of-coolant calculations for both pressurized- and boiling-water reactor systems. The system provides a licensing-type calculation capability and thus contains most of the Nuclear Regulatory Commission-Approved EM options, as described in the Code of Federal Regulations, Title 10, Part 50, Appendix K. All phases of an accident (blowdown, refill, and reflood) are modeled. The WRAP consists of modified versions of five preexisting codes (RELAP4/MOD5, GAPCON, FRAP, MOXY, and NORCOOL), the necessary interfaces to permit automatic transition from one code to the next during the transient calculations, plus a host of user-convenience features to aid the analyst faced with a multitude of EM calculations. The WRAP has been verified against both calculated and experimental results.
2007-07-09
Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double-differential cross sections of light-particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei, and their distribution in both energy and angle. Since it currently considers the emission of at most two particles in any given reaction, it is most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained.
New quark-model calculations of photo- and electroproduction of N* and Δ* resonances
Capstick, Simon
1992-06-01
An introduction is given to the calculation of resonance electromagnetic couplings in the nonrelativistic quark model. Recent improvements brought about by the inclusion of relativistic corrections to the transition operator are described. We show how such calculations may be further improved by the use of relativized-model wave functions, a modestly increased effective quark mass, and an ab initio calculation of the signs of the Nπ decay amplitudes of the resonances. A summary is given of the results for the photocouplings of all nonstrange baryons, as well as for certain amplitude ratios in electroproduction.
Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model
Kawano, Toshihiko; Moller, Peter; Wilson, William B
2008-01-01
Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emission from an excited daughter nucleus after β decay to the granddaughter residual is calculated more accurately than in previous evaluations, including all the microscopic nuclear-structure information, such as the Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with the evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Simoncini, David; Nakata, Hiroya; Ogata, Koji; Nakamura, Shinichiro; Zhang, Kam Yj
2015-02-01
Protein structure prediction directly from sequences is a very challenging problem in computational biology. One of the most successful approaches employs stochastic conformational sampling to search an empirically derived energy function landscape for the global energy minimum state. Due to the errors in the empirically derived energy function, the lowest energy conformation may not be the best model. We have evaluated the use of energy calculated by the fragment molecular orbital method (FMO energy) to assess the quality of predicted models and its ability to identify the best model among an ensemble of predicted models. The fragment molecular orbital method implemented in GAMESS was used to calculate the FMO energy of predicted models. When tested on eight protein targets, we found that the model ranking based on FMO energies is better than that based on empirically derived energies when there is sufficient diversity among these models. This model diversity can be estimated prior to the FMO energy calculations. Our result demonstrates that the FMO energy calculated by the fragment molecular orbital method is a practical and promising measure for the assessment of protein model quality and the selection of the best protein model among many generated.
Basic study on Reynolds stress model calculations - Application of TVD scheme
NASA Astrophysics Data System (ADS)
Yamamoto, Makoto
1993-07-01
The Reynolds stress turbulence model has been investigated for more than 15 years. However, it has not been used as widely as the k-epsilon model. One reason is that the Reynolds equations are numerically unstable because of their Eulerian characteristics, and thus it is necessary to introduce a stabilizing procedure when solving them. Attention is given to the applicability of the TVD scheme, which has been used successfully in compressible flow calculations, to incompressible Reynolds stress turbulence model calculations. A zero-pressure-gradient turbulent boundary layer on a flat plate was calculated using the TVD scheme. It was found that the TVD scheme contributes to the stabilization of the calculation, especially in cases where the spatial resolution is not sufficient.
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
The paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the computational domain, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model is described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work compares the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically for a model seal segment at different levels of wear.
The Calculation of Theoretical Chromospheric Models and the Interpretation of the Solar Spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1998-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are: to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.
Calculation of individual isotope equilibrium constants for implementation in geochemical models
Thorstenson, Donald C.; Parkhurst, David L.
2002-01-01
Theory is derived from the work of Urey to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by K = α^n, where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for the relevant molecules and ion pairs in the gas, aqueous, liquid, and solid phases (denoted by the subscripts g, aq, l, and s, respectively). These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculation of the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases.
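The Urey relation above can be sketched in a few lines; the function name and the example fractionation factor are illustrative choices, not values taken from the paper.

```python
# Sketch of the Urey relation K = alpha**n, linking the equilibrium constant
# of an isotope exchange reaction to the fractionation factor alpha, where n
# is the number of atoms of the element exchanged in equivalent positions.
# (Function name and example numbers are illustrative, not from the paper.)

def exchange_equilibrium_constant(alpha: float, n: int) -> float:
    """Equilibrium constant of an isotope exchange reaction, given the
    fractionation factor alpha and the number n of exchanged atoms."""
    return alpha ** n

# Example: an assumed fractionation factor of 1.030 with two equivalent
# atom positions gives K = 1.030**2
K = exchange_equilibrium_constant(1.030, 2)
```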
SAMPLE AOR CALCULATION USING ANSYS FULL PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document describes the ANSYS parametric 360-degree model for single-shell tank SX and provides a sample calculation for the analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric full model of the single-shell tank (SST) SX to deal with asymmetric loading conditions and to provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompasses the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document describes the ANSYS slice parametric model for single-shell tank SX and provides a sample calculation for the analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model of the single-shell tank (SST) SX and to provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompasses the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document describes the ANSYS axisymmetric parametric model for single-shell tank SX and provides a sample calculation for the analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model of the single-shell tank (SST) SX and to provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompasses the existing tank load conditions, and evaluates stresses and deformations throughout the tank and the surrounding soil mass.
The contrast model method for the thermodynamical calculation of air-air wet heat exchanger
NASA Astrophysics Data System (ADS)
Yuan, Xiugan; Mei, Fang
1989-02-01
A 'contrast model' method for the thermodynamic calculation of air-air crossflow wet heat exchangers with initial air condensation is presented. Contrast-model equations are derived from the actual heat exchanger equations as well as from imaginary ones; it is then shown that the enthalpy efficiency of the contrast-model equations is analogous to the temperature efficiency of a dry heat exchanger. Conditions are noted under which thermodynamic calculations for wet and dry heat exchangers can be unified.
Zanboori, E; Rostamy-Malkhalifeh, M; Jahanshahloo, G R; Shoja, N
2014-01-01
There are a number of methods for ranking decision making units (DMUs), among which calculating super-efficiency and then ranking the units by the obtained super-efficiency scores is both valid and efficient. Since most of the proposed models do not provide the Pareto-efficient projection, a model is developed and presented in this paper with which the Pareto-efficient projection is obtained in addition to the super-efficiency score. Moreover, the model is unit invariant, is always feasible, and makes the amount of inefficiency effective in ranking.
Benchmark calculation of no-core Monte Carlo shell model in light nuclei
Abe, T.; Shimizu, N.; Maris, P.; Vary, J. P.; Otsuka, T.; Utsuno, Y.
2011-05-06
The Monte Carlo shell model is applied for the first time to no-core shell model calculations in light nuclei. The results are compared with those of the full configuration interaction method. The agreement between them is within a few percent at most.
Accurate calculation of conductive conductances in complex geometries for spacecraft thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat flow between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of the two new methods.
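For contrast with the finite-element-based methods the paper proposes, the common textbook conductance estimate that such methods improve upon can be sketched as follows; the formula G = kA/L is the standard one-dimensional estimate, and the material values and names below are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the classical one-dimensional conductive conductance
# G = k * A / L used in Thermal Lumped Parameter models. This is the kind of
# estimate that breaks down for complex geometries, motivating the paper's
# Extended Far Field and Mid-Section methods. Numbers below are illustrative.

def conductive_conductance(k: float, area: float, length: float) -> float:
    """Conductance in W/K for thermal conductivity k (W/m.K), cross-section
    area (m^2), and conduction path length (m)."""
    return k * area / length

# Example: an aluminium-like strut (k ~ 200 W/m.K), 1 cm^2 cross-section,
# 10 cm long, gives G = 0.2 W/K
G = conductive_conductance(200.0, 1e-4, 0.1)
```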
A model for calculating heat transfer coefficient concerning ethanol-water mixtures condensation
NASA Astrophysics Data System (ADS)
Wang, J. S.; Yan, J. J.; Hu, S. H.; Yang, Y. S.
2010-03-01
In this research, a heat transfer coefficient (HTC) is calculated by combining filmwise theory with the dropwise notion for ethanol-water mixture condensation. A new model, including ethanol concentration, vapor pressure and velocity, is developed by introducing a characteristic coefficient to combine the two theories mentioned above. Under different concentrations, pressures and velocities, the calculation is compared with experiment. It turns out that the calculated values are in good agreement with the experimental results; the maximal error is within ±30.1%. In addition, the model is applied to related experiments in other literature and the values obtained agree well with the reported results.
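The blending idea can be illustrated with a minimal sketch; the linear combination and the role given to the characteristic coefficient here are assumptions for illustration only, not the paper's actual correlation, and all numbers are invented.

```python
# Illustrative sketch only: a characteristic coefficient c weighting dropwise
# and filmwise contributions to a combined condensation heat transfer
# coefficient. The linear form and all values are assumptions, not the
# correlation actually developed in the paper.

def combined_htc(h_filmwise: float, h_dropwise: float, c: float) -> float:
    """Blend film and drop condensation HTCs (W/m^2.K) with a characteristic
    coefficient c in [0, 1]: c = 0 -> pure filmwise, c = 1 -> pure dropwise."""
    if not 0.0 <= c <= 1.0:
        raise ValueError("characteristic coefficient must lie in [0, 1]")
    return c * h_dropwise + (1.0 - c) * h_filmwise

# Assumed example values: film ~6 kW/m^2.K, drop ~60 kW/m^2.K, c = 0.25
h = combined_htc(h_filmwise=6.0e3, h_dropwise=60.0e3, c=0.25)
```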
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL is, the lower the residual wavefront error. Spherical IOLs are only able to correct for defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as shown by calculating customized aspheric IOLs.
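The real ray-tracing step mentioned in the Methods can be sketched with the vector form of Snell's law, the refraction a ray tracer applies at each optical interface; the refractive indices and geometry below are illustrative, not values from the individual-eye model.

```python
import numpy as np

# Hedged sketch of one refraction step in a real (non-paraxial) ray tracer,
# using the vector form of Snell's law. Indices and geometry are illustrative.

def refract(incident, normal, n1, n2):
    """Refract a unit direction 'incident' at a surface with unit 'normal'
    (oriented against the incoming ray), going from index n1 into n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -np.dot(incident, normal)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# A ray hitting a cornea-like interface (n = 1.0 -> 1.376, an assumed value)
# at 30 degrees incidence
i = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
t = refract(i, np.array([0.0, 0.0, 1.0]), 1.0, 1.376)
```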
Shell model based Coulomb excitation γ-ray intensity calculations in 107Sn
NASA Astrophysics Data System (ADS)
DiJulio, D. D.; Cederkall, J.; Ekström, A.; Fahlander, C.; Hjorth-Jensen, M.
2012-10-01
In this work, we present recent shell model calculations, based on a realistic nucleon-nucleon interaction, for the light 107, 109Sn nuclei. By combining the calculations with the semi-classical Coulomb excitation code GOSIA, a set of γ-ray intensities has been generated. The calculated intensities are compared with the data from recent Coulomb excitation studies in inverse kinematics at the REX-ISOLDE facility with the nucleus 107Sn. The results are discussed in the context of the ordering of the single-particle orbits relative to 100Sn.
NASA Astrophysics Data System (ADS)
Piringer, Martin; Knauder, Werner; Petz, Erwin; Schauberger, Günther
2016-09-01
Direction-dependent separation distances to avoid odour annoyance, calculated with the Gaussian Austrian Odour Dispersion Model AODM and the Lagrangian particle diffusion model LASAT at two sites, are analysed and compared. The relevant short-term peak odour concentrations are calculated with a stability-dependent peak-to-mean algorithm. The same emission and meteorological data, but model-specific atmospheric stability classes, are used. The estimate of atmospheric stability is obtained from three-axis ultrasonic anemometers using the standard deviations of the three wind components and the Obukhov stability parameter. The results are demonstrated for the Austrian villages Reidling and Weissbach, with very different topographical surroundings and meteorological conditions. Both the differences in the wind and stability regimes and the decrease of the peak-to-mean factors with distance lead to deviations in the separation distances between the two sites. The Lagrangian model, due to its model physics, generally calculates larger separation distances. For worst-case calculations necessary for environmental impact assessment studies, the use of a Lagrangian model is therefore to be preferred over that of a Gaussian model. The study and findings relate to the Austrian odour impact criteria.
Calculations of Diffuser Flows with an Anisotropic K-Epsilon Model
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T.-H.
1995-01-01
A newly developed anisotropic K-epsilon model is applied to calculate three axisymmetric diffuser flows with or without separation. The new model uses a quadratic stress-strain relation and satisfies the realizability conditions, i.e., it ensures both the positivity of the turbulent normal stresses and the Schwarz' inequality between any fluctuating velocities. Calculations are carried out with a finite-element method. A second-order accurate, bounded convection scheme and sufficiently fine grids are used to ensure numerical credibility of the solutions. The standard K-epsilon model is also used in order to highlight the performance of the new model. Comparison with the experimental data shows that the anisotropic K-epsilon model performs consistently better than does the standard K-epsilon model in all of the three test cases.
Code System to Calculate Nuclear Reaction Cross Sections by Evaporation Model.
2000-11-27
Version: 00. Both STAPRE and STAPREF are included in this package. STAPRE calculates energy-averaged cross sections for nuclear reactions with emission of particles and gamma rays and fission. The models employed are the evaporation model, with inclusion of pre-equilibrium decay, and a gamma-ray cascade model. Angular momentum and parity conservation are accounted for. The major improvement over the 1976 STAPRE program relates to the level density approach, implemented in subroutine ZSTDE: the generalized superfluid model is incorporated, with Boltzmann-gas modeling of the intrinsic state density and semi-empirical modeling of few-quasiparticle effects in the total level density at equilibrium and saddle deformations of actinide nuclei. In addition to the activation cross sections, particle and gamma-ray production spectra are calculated. Isomeric state populations and production cross sections for gamma rays from low excited levels are obtained, too. For fission, a single- or a double-humped barrier may be chosen.
Development of a patient-specific model for calculation of pulmonary function
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Ding, Mingyue; Movsas, Benjamin; Chetty, Indrin J.
2011-06-01
The purpose of this paper is to develop a patient-specific finite element model (FEM) to calculate the pulmonary function of lung cancer patients for evaluation of radiation treatment. The lung model was created with in-house developed FEM software, with region-specific parameters derived from a four-dimensional CT (4DCT) image. The model was used first to calculate changes in air volume and elastic stress in the lung, and then to calculate regional compliance, defined as the change in air volume corrected by its associated stress. The results have shown that the resultant compliance images can reveal the regional elastic properties of lung tissue, and could be useful for radiation treatment planning and assessment.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1984-01-01
Models and spectra of sunspots were studied because they are important to energy balance and variability discussions. Sunspot observations in the ultraviolet region 140 to 168 nm were obtained by the NRL High Resolution Telescope and Spectrograph. Extensive photometric observations of sunspot umbrae and penumbrae in 10 channels covering the wavelength region 387 to 3800 nm were made. Cool star opacities and model atmospheres were computed. The Sun is the first test case, both to check the opacity calculations against the observed solar spectrum, and to check the purely theoretical model calculation against the observed solar energy distribution. Line lists were finally completed for all the molecules that are important in computing statistical opacities for energy balance and for radiative rate calculations in the Sun (except perhaps for sunspots). Because many of these bands are incompletely analyzed in the laboratory, the energy levels are not well enough known to predict wavelengths accurately for spectrum synthesis and for detailed comparison with the observations.
Schick, W.C. Jr.; Milani, S.; Duncombe, E.
1980-03-01
A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model.
Tabulation of Mie scattering calculation results for microwave radiative transfer modeling
NASA Technical Reports Server (NTRS)
Yeh, Hwa-Young M.; Prasad, N.
1988-01-01
In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
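The table-lookup strategy described above can be sketched as follows; the placeholder Mie routine, the grid ranges, and the function names are assumptions for illustration, standing in for the real Mie codes and table structure used in the paper.

```python
import numpy as np

# Illustrative sketch of the table-lookup idea: precompute expensive Mie
# quantities once on a temperature grid (for a fixed frequency), then replace
# the Mie code inside the radiative transfer loop with cheap interpolation.
# 'expensive_mie_extinction' is a stand-in for a real Mie routine.

def expensive_mie_extinction(temperature_k: float) -> float:
    # Placeholder for a costly Mie series evaluation (assumed linear here
    # purely so the sketch is self-contained and checkable).
    return 1.0e-3 * (1.0 + 0.01 * (temperature_k - 273.15))

# Build the table once...
grid_t = np.linspace(233.0, 313.0, 81)
table = np.array([expensive_mie_extinction(t) for t in grid_t])

# ...then look values up instead of recomputing inside the transfer loop.
def mie_extinction_lookup(temperature_k: float) -> float:
    return float(np.interp(temperature_k, grid_t, table))
```

Because the placeholder function is linear, the interpolated values here match the direct computation exactly; with real Mie quantities, the grid spacing controls the interpolation error traded for the reported factor-of-7-to-16 speedup.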
Eged, Katalin; Kis, Zoltán; Voigt, Gabriele
2006-01-01
After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. For evaluating this exposure pathway, three main model requirements are needed: (i) to calculate the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) to describe the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) to combine all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches of photon transport. Moreover, two different philosophies of the dose calculation, the "location factor method" and a combination of the relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of the model features.
Modification of the Simons model for calculation of nonradial expansion plumes
NASA Technical Reports Server (NTRS)
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The Simons model is a simple model for calculating the expansion plumes of rockets and thrusters and is a widely used engineering tool for the determination of spacecraft impingement effects. The model assumes that the density of the plume decreases radially from the nozzle exit. Although a high degree of success has been achieved in modeling plumes with moderate Mach numbers, the accuracy obtained under certain conditions is unsatisfactory. A modification made to the model that allows effective description of nonradial behavior in plumes is presented, and the conditions under which its use is preferred are prescribed.
Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow
NASA Astrophysics Data System (ADS)
Kemerink, G. J.; Pleiter, F.
1986-08-01
The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
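Any recoilless-fraction calculation ultimately evaluates the standard Lamb-Mössbauer relation f = exp(-k²⟨x²⟩); a minimal sketch follows, with an assumed mean-square displacement standing in for the quantity the Kagan-Maslow model actually supplies.

```python
import math

# Hedged sketch of the Lamb-Moessbauer factor f = exp(-k^2 <x^2>) underlying
# recoilless-fraction calculations. The Kagan-Maslow model provides the
# mean-square displacement <x^2>; here it is simply an assumed number.

HBAR_C_EV_ANGSTROM = 1973.27  # hbar*c in eV*Angstrom

def recoilless_fraction(gamma_energy_ev: float, msd_angstrom2: float) -> float:
    """Lamb-Moessbauer factor for a gamma energy (eV) and the mean-square
    displacement of the emitting atom along the photon direction (A^2)."""
    k = gamma_energy_ev / HBAR_C_EV_ANGSTROM  # photon wavenumber in 1/A
    return math.exp(-(k ** 2) * msd_angstrom2)

# The 57Fe 14.4 keV line with an assumed <x^2> of 0.002 A^2 gives f ~ 0.9
f = recoilless_fraction(14.4e3, 0.002)
```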
A model of the circulating blood for use in radiation dose calculations
Hui, T.E.; Poston, J.W. Sr.
1987-12-31
Over the last few years there has been a significant increase in the use of radionuclides in leukocyte, platelet, and erythrocyte imaging procedures. Radiopharmaceuticals used in these procedures are confined primarily to the blood, have short half-lives, and irradiate the body as they move through the circulatory system. There is a need for a model of the circulatory system in an adult human which can be used to provide radiation absorbed dose estimates for these procedures. A simplified model has been designed assuming a static circulatory system and including the major organs of the body. The model has been incorporated into the MIRD phantom, and calculations have been completed for a number of exposure situations and radionuclides of clinical importance. The model will be discussed in detail and results of calculations using this model will be presented.
Model Calculations with Excited Nuclear Fragmentations and Implications of Current GCR Spectra
NASA Astrophysics Data System (ADS)
Saganti, Premkumar
As a result of the fragmentation process in nuclei, energy from the excited states may also contribute to the radiation damage to the cell structure. Radiation-induced damage to the human body from the excited states of oxygen and several other nuclei and their fragments is of concern in the context of the measured abundances of the current galactic cosmic ray (GCR) environment. Nuclear shell model based calculations of the Selective-Core (Saganti-Cucinotta) approach are being expanded for O-16 fragmenting into N-15 with a proton knockout and into O-15 with a neutron knockout, and are very promising. In our ongoing expansion of these nuclear fragmentation model calculations and assessments, we present some of the prominent nuclei interactions from a total of 190 isotopes that were identified for the current model expansion, based on the Quantum Multiple Scattering Fragmentation Model (QMSFRG) of Cucinotta. Radiation transport model calculations with the implementation of these energy-level spectral characteristics are expected to enhance the understanding of radiation damage at the cellular level. Implications of these excited energy spectral calculations in the assessment of radiation damage to the human body may provide an enhanced understanding of space radiation risk assessment.
NASA Astrophysics Data System (ADS)
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best-estimate calculation, which has been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. This demonstrated the capacity of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
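The Wilks sizing step mentioned above can be sketched directly; assuming the standard first-order two-sided Wilks formula, the 95% coverage / 95% confidence requirement quoted in the abstract yields a sample size of 93 code runs.

```python
# Sketch of the first-order two-sided Wilks formula: find the smallest number
# of code runs n such that the interval [min, max] of the sampled outputs
# covers a fraction 'coverage' of the output distribution with probability at
# least 'confidence'. This is the standard rule; the paper's exact variant is
# not restated here.

def wilks_two_sided(coverage: float = 0.95, confidence: float = 0.95) -> int:
    n = 2
    while True:
        # probability that the sample extremes of n runs bracket the
        # desired coverage fraction (first-order, two-sided)
        beta = 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1)
        if beta >= confidence:
            return n
        n += 1

# The 95%/95% two-sided case quoted in the abstract requires 93 runs
n_runs = wilks_two_sided(0.95, 0.95)  # -> 93
```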
Radiative Transfer Calculation of Light Curves and Spectra for Type Ia SNe Models
NASA Astrophysics Data System (ADS)
De, Soma; Baron, E.; Timmes, F.; Hauschildt, P.
2011-01-01
We present calculations of the light curves and spectra from a suite of Type Ia supernova models, ranging from standard single-degenerate scenarios to double-degenerate collisions. We use the fully relativistic radiative transfer code PHOENIX for our calculations, which is time dependent in both the radiative transfer and the rate equations. A simple hydrodynamic calculation is used to treat conservation of energy of the gas and the radiation together, and to allow different time scales for gas and radiation. Between two time steps in the light curve calculation, the correct distribution of the total energy change between gas and radiation is obtained by iteratively solving the radiative transfer equation, and hence the new temperature, in the new time step. In our work we explore systematic relationships between the mass of 56Ni produced, the mass of silicon-group elements produced, the white dwarf metallicity, and the mass of unburned material.
Space Radiation Dose Calculations for the Space Experiment Matroshka-R Modelling Conditions
NASA Astrophysics Data System (ADS)
Shurshakov, Vyacheslav; Kartashov, Dmitrij; Tolochek, Raisa
Space radiation dose calculations for the modelling conditions of the space experiment Matroshka-R are presented in this report. The experiment was carried out onboard the ISS from 2004 to 2014. Dose measurements were realized both outside the ISS, on the outer surface of the Service Module with the MTR facility, and in the ISS compartments with anthropomorphic and spherical phantoms and the protective curtain facility. A newly applied approach is used to calculate the shielding probability functions for complex-shape objects, in which the object surface is composed of a set of disjoint adjacent triangles that fully cover the surface. Using the simplified Matroshka-R shielding geometry models of the space station compartments, the space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms, and for an additional shielding installed in the compartment, are calculated. There is good agreement between the data obtained in the experiment and the calculated ones, within the experimental accuracy of about 10%. Thus the calculation method used has been successfully verified with the Matroshka-R experiment data. The suggested method can be recommended for modelling of radiation loads on crewmembers, for estimation of the efficiency of additional shielding in space station compartments, and for pre-flight estimations of radiation shielding in future space missions.
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic orbitals (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved in calculating the spin splitting. The Hamiltonian of the 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in the bulk zincblende semiconductors GaAs and InSb are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2011-06-15
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiation Measurements 41, 1080 (2006).]. The JAM interaction model agrees with the HARP experiment [HARP Collaboration, Astropart. Phys. 30, 124 (2008).] a little better than DPMJET-III [S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252.]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007).] [M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007).]. Some improvements in the calculation of atmospheric neutrino flux are also reported.
Modelling lateral beam quality variations in pencil kernel based photon dose calculations.
Nyholm, T; Olofsson, J; Ahnesjö, A; Karlsson, M
2006-08-21
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
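The sector-of-concentric-circles sampling described above can be sketched as follows; the ring radii and sector count are illustrative, and a real implementation would additionally weight each sample by its sector area and evaluate energy fluence and HVL-based beam quality at each point.

```python
import numpy as np

# Hedged sketch of the sampling geometry described in the abstract: energy
# fluence and beam quality are sampled in sectors of concentric circles
# around the calculation point before the pencil kernel integration.
# Ring radii and sector count below are illustrative assumptions.

def sector_sample_points(x0, y0, ring_radii, n_sectors):
    """Return the (x, y) midpoints of the sectors formed by consecutive
    ring radii, arranged around the calculation point (x0, y0)."""
    points = []
    for r_in, r_out in zip(ring_radii[:-1], ring_radii[1:]):
        r_mid = 0.5 * (r_in + r_out)  # radial midpoint of this annulus
        for k in range(n_sectors):
            phi = 2.0 * np.pi * (k + 0.5) / n_sectors  # sector midangle
            points.append((x0 + r_mid * np.cos(phi),
                           y0 + r_mid * np.sin(phi)))
    return points

# 2 annuli x 8 sectors of sample positions around the origin
pts = sector_sample_points(0.0, 0.0, [1.0, 2.0, 3.0], 8)
```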
Modelling lateral beam quality variations in pencil kernel based photon dose calculations
NASA Astrophysics Data System (ADS)
Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.
2006-08-01
Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
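The sector-based kernel integration described above can be sketched in a few lines: beam quality (HVL) is looked up as a function of off-axis distance, and field values are averaged over sectors of concentric circles around the calculation point. The linear HVL falloff and all numeric values below are illustrative placeholders, not the generic relation fitted in the paper.

```python
import numpy as np

def hvl_off_axis(r_cm, hvl0_mm, slope_mm_per_cm=-0.1):
    """Hypothetical linear HVL falloff with off-axis distance r_cm.
    Stands in for the paper's generic HVL-vs-position relation."""
    return hvl0_mm + slope_mm_per_cm * r_cm

def sector_average(f, x0, y0, radii, n_sectors=12):
    """Average a field f(x, y) (e.g. energy fluence or a kernel value)
    over sectors of concentric circles centred on the calculation
    point (x0, y0), mimicking the sector-sampling integration."""
    total, n = 0.0, 0
    for r in radii:
        for k in range(n_sectors):
            phi = 2.0 * np.pi * (k + 0.5) / n_sectors
            total += f(x0 + r * np.cos(phi), y0 + r * np.sin(phi))
            n += 1
    return total / n
```

For a spatially constant field the sector average reproduces the constant, which is a convenient sanity check of the sampling geometry.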
Nuclear shell model calculations of the spin-dependent neutralino- nucleus cross sections
Ressell, M.T.; Aufderheide, M.B.; Bloom, S.D.; Mathews, G.J.; Resler, D.A.; Griest, K.
1992-11-01
We describe nuclear shell model calculations of the spin-dependent elastic cross sections of supersymmetric particles on several nuclei, including 73Ge and 29Si, which are being used in the construction of dark matter detectors. To check the accuracy of the wave functions we have calculated excited state energy spectra, magnetic moments, and spectroscopic factors for each of the nuclei. Our results differ significantly from previous estimates based upon the independent single particle shell model and the odd group model. These differences are especially evident if the naive quark model estimates of the quark contribution to nucleon spin are correct. We also discuss the modifications that occur when finite momentum transfer between the neutralino and nucleus is included.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. The corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant. The vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
NASA Technical Reports Server (NTRS)
Clark, A. L.
1985-01-01
Integral property calculation is an important application for solid modeling systems. Algorithms for computing integral properties for various solid representation schemes are fairly well known. It is important for designers and users of solid modeling systems to understand the behavior of such algorithms. Specifically, the trade-off between execution time and accuracy is critical to effective use of integral property calculation. The average behavior of two algorithms for Constructive Solid Geometry (CSG) representations is investigated. Experimental results from the PADL-2 solid modeling system show that coarse decompositions can be used to predict execution time and error estimates for finer decompositions. Exploiting this predictability allows effective use of the algorithms in a solid modeling system.
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1985-01-01
Solar chromospheric models are described. The models included are based on the observed spectrum, and on the assumption of hydrostatic equilibrium. The calculations depend on realistic solutions of the radiative transfer and statistical equilibrium equations for optically thick lines and continua, and on including the effects of large numbers of lines throughout the spectrum. Although spectroheliograms show that the structure of the chromosphere is highly complex, one-dimensional models of particular features are reasonably successful in matching observed spectra. Such models were applied to the interpretation of chromospheric observations.
Mobli, Mehdi; Abraham, Raymond J
2005-03-01
A model based on classical concepts is derived to describe the effect of the nitro group on proton chemical shifts. The calculated chemical shifts are then compared to ab initio (GIAO) calculated chemical shifts. The accuracy of the two models is assessed using proton chemical shifts of a set of rigid organic nitro compounds that are fully assigned in CDCl3 at 700 MHz. The two methods are then used to evaluate the accuracy of different popular post-SCF methods (B3LYP and MP2) and molecular mechanics methods (MMX and MMFF94) in calculating the molecular structure of a set of sterically crowded nitro aromatic compounds. Both models perform well on the rigid molecules used as a test set, although when using the GIAO method a general overestimation of the deshielding of protons near the nitro group is observed. The analysis of the sterically crowded molecules shows that the very popular B3LYP/6-31G(d,p) method produces very poor twist angles for these compounds, and that using a larger basis set [6-311++G(2d,p)] gives much more reasonable results. The MP2 calculations, on the other hand, overestimate the twist angles, which for these compounds compensates for the deshielding effect generally observed for protons near electronegative atoms when using the GIAO method at the B3LYP/6-311++G(2d,p) level. The most accurate results are found when the structures are calculated at the B3LYP/6-311++G(2d,p) level of theory and the chemical shifts are calculated using the CHARGE program based on classical models.
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials, and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecasting tools.
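The Poynting flux step mentioned above amounts to crossing the modeled electric field with the magnetic perturbation associated with the FACs. A minimal sketch of that single step follows; the empirical models themselves are not reproduced, and the field values in the usage note are typical magnitudes, not model output.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def poynting_flux(E, dB):
    """DC Poynting flux S = (E x dB) / mu0 from the ionospheric electric
    field E (V/m) and the magnetic perturbation dB (T) produced by the
    field-aligned currents. Returns the flux vector in W/m^2."""
    return np.cross(E, dB) / MU0
```

For a typical 20 mV/m electric field and a 200 nT perpendicular magnetic perturbation, this gives a field-aligned flux of a few mW/m², the right order of magnitude for auroral-zone Joule heating.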
The "little ice age": northern hemisphere average observations and model calculations.
Robock, A
1979-12-21
Numerical energy balance climate model calculations of the average surface temperature of the Northern Hemisphere for the past 400 years are compared with a new reconstruction of the past climate. Forcing with volcanic dust produces the best simulation, whereas expressing the solar constant as a function of the envelope of the sunspot number gives very poor results.
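A zero-dimensional toy version of such an energy balance calculation is easy to write down: volcanic dust enters as a reduced shortwave transmission. All constants below are illustrative textbook values, not those of the hemispheric model used in the study.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def ebm_step(T, S=1361.0, albedo=0.3, emissivity=0.61,
             dust_transmission=1.0, C=2.0e8, dt=86400.0):
    """One explicit Euler step of C dT/dt = tau*S/4*(1-a) - eps*sigma*T^4,
    where tau = dust_transmission < 1 mimics volcanic aerosol forcing.
    C is an effective heat capacity per unit area (J m^-2 K^-1)."""
    absorbed = dust_transmission * S / 4.0 * (1.0 - albedo)
    emitted = emissivity * SIGMA * T ** 4
    return T + dt * (absorbed - emitted) / C

def equilibrate(T0=288.0, n_steps=200000, **kw):
    """Integrate daily steps until the surface temperature settles."""
    T = T0
    for _ in range(n_steps):
        T = ebm_step(T, **kw)
    return T
```

With the defaults this relaxes to roughly 288 K; lowering the transmission by a few per cent cools the equilibrium by 1-2 K, qualitatively the volcanic response discussed above.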
Power and Sample Size Calculations for Multivariate Linear Models with Random Explanatory Variables
ERIC Educational Resources Information Center
Shieh, Gwowen
2005-01-01
This article considers the problem of power and sample size calculations for normal outcomes within the framework of multivariate linear models. The emphasis is placed on the practical situation that not only the values of response variables for each subject are just available after the observations are made, but also the levels of explanatory…
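For the simpler fixed-predictor case, power for the overall F-test in a linear model can be computed from the noncentral F distribution; a sketch using SciPy is below. The random-predictor adjustment that is the article's focus is not included, and the effect-size convention (Cohen's f²) is an assumption.

```python
from scipy import stats

def power_f_test(n, p, f2, alpha=0.05):
    """Power of the overall F-test in a linear model with p predictors,
    n observations, and Cohen effect size f^2, treating the predictors
    as fixed (the random-predictor correction is omitted)."""
    df1, df2 = p, n - p - 1
    ncp = f2 * n                              # noncentrality parameter
    f_crit = stats.f.ppf(1.0 - alpha, df1, df2)
    return 1.0 - stats.ncf.cdf(f_crit, df1, df2, ncp)
```

Sample size calculation then reduces to increasing n until the returned power exceeds the target (e.g. 0.8).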
A Simple Model to Calculate Leaf Area Index from Lidar Data
NASA Astrophysics Data System (ADS)
Riano, D.; Sanchez-Pena, J.; Patricio, M.; Valladares, F.; Greenberg, J.; Ustin, S. L.
2006-12-01
Empirical relationships are generally established between Lidar data and field Leaf Area Index (LAI) measurements. Such relationships are site-specific, requiring recalibration to obtain LAI when the forest structure varies. This paper presents a more holistic LAI model based on how laser pulses penetrate the vegetation canopy. This simple model retrieves the leaf angle distribution in order to calculate LAI, assuming leaves follow an ellipsoidal distribution, according to Beer's law. Lidar data within a maximum radius were selected for each site, to match field LAI measurements obtained with fish-eye photos. Elevation above the ground was calculated for each laser pulse using a digital ground model generated from the Lidar data itself. The penetration into the canopy of each laser pulse was calculated based on the distance to the ground and the maximum height within the selected radius for each site. Several Lidar flight lines with different angles of incidence were processed for each site. The model calculated LAI based on the changes in angle of incidence and penetration rate, after adjusting using minimum mean squared error estimators. Results were compared with field estimates
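The inversion at the heart of such a model can be sketched by combining Beer's law with Campbell's extinction coefficient for an ellipsoidal leaf angle distribution; the paper's actual multi-angle fitting is not reproduced here.

```python
import math

def ellipsoidal_K(theta, x):
    """Campbell's (1986) extinction coefficient for an ellipsoidal leaf
    angle distribution with ratio parameter x (x = 1 is spherical), at
    beam zenith angle theta (radians)."""
    return math.sqrt(x ** 2 + math.tan(theta) ** 2) / (
        x + 1.774 * (x + 1.182) ** -0.733)

def lai_from_gap_fraction(gap_fraction, theta, x=1.0):
    """Invert Beer's law P = exp(-K * LAI), where P is the fraction of
    lidar pulses penetrating to the ground at scan angle theta."""
    return -math.log(gap_fraction) / ellipsoidal_K(theta, x)
```

For a spherical distribution at nadir, K is about 0.5, so a 37% penetration rate corresponds to an LAI near 2.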
The 'Little Ice Age' - Northern Hemisphere average observations and model calculations
NASA Technical Reports Server (NTRS)
Robock, A.
1979-01-01
Numerical energy balance climate model calculations of the average surface temperature of the Northern Hemisphere for the past 400 years are compared with a new reconstruction of the past climate. Forcing with volcanic dust produces the best simulation, whereas expressing the solar constant as a function of the envelope of the sunspot number gives very poor results.
Martelli, Saulo; Kersh, Mariana E; Pandy, Marcus G
2015-10-15
The determination of femoral strain in post-menopausal women is important for studying bone fragility. Femoral strain can be calculated using a reference musculoskeletal model scaled to participant anatomies (referred to as scaled-generic) combined with finite-element models. However, anthropometric errors committed while scaling affect the calculation of femoral strains. We assessed the sensitivity of femoral strain calculations to scaled-generic anthropometric errors. We obtained CT images of the pelves and femora of 10 healthy post-menopausal women and collected gait data from each participant during six weight-bearing tasks. Scaled-generic musculoskeletal models were generated using skin-mounted marker distances. Image-based models were created by modifying the scaled-generic models using muscle and joint parameters obtained from the CT data. Scaled-generic and image-based muscle and hip joint forces were determined by optimisation. A finite-element model of each femur was generated from the CT images, and both image-based and scaled-generic principal strains were computed in 32 regions throughout the femur. The intra-participant regional RMS error increased from 380 με (R2=0.92, p<0.001) to 4064 με (R2=0.48, p<0.001), representing 5.2% and 55.6% of the tensile yield strain in bone, respectively. The peak strain difference increased from 2821 με in the proximal region to 34,166 με at the distal end of the femur. The inter-participant RMS error throughout the 32 femoral regions was 430 με (R2=0.95, p<0.001), representing 5.9% of bone tensile yield strain. We conclude that scaled-generic models can be used for determining cohort-based averages of femoral strain whereas image-based models are better suited for calculating participant-specific strains throughout the femur.
Shell-Model Calculations of f p-shell Nuclei with Realistic NN Interactions
Qi, C.; Xu, F. R.
2010-05-12
Shell model calculations have been carried out to study the structure and decay properties of nuclei in the f p shell. The shell-model effective Hamiltonian is constructed from the high-precision CD-Bonn NN potential using the folded-diagram renormalization method without any empirical modification. The Hamiltonian gives a reasonable description of spectroscopic properties in nuclei around the N = Z line.
Goorley, J T; Kiger, W S; Zamenhof, R G
2002-02-01
As clinical trials of Neutron Capture Therapy (NCT) are initiated in the U.S. and other countries, new treatment planning codes are being developed to calculate detailed dose distributions in patient-specific models. The thorough evaluation and comparison of treatment planning codes is a critical step toward the eventual standardization of dosimetry, which, in turn, is an essential element for the rational comparison of clinical results from different institutions. In this paper we report development of a reference suite of computational test problems for NCT dosimetry and discuss common issues encountered in these calculations to facilitate quantitative evaluations and comparisons of NCT treatment planning codes. Specifically, detailed depth-kerma rate curves were calculated using the Monte Carlo radiation transport code MCNP4B for four different representations of the modified Snyder head phantom: an analytic, multishell, ellipsoidal model, and voxel representations of this model with cubic voxel sizes of 16, 8, and 4 mm. Monoenergetic and monodirectional beams of 0.0253 eV, 1, 2, 10, 100, and 1000 keV neutrons, and 0.2, 0.5, 1, 2, 5, and 10 MeV photons were individually simulated to calculate kerma rates to a statistical uncertainty of <1% (1 std. dev.) in the center of the head model. In addition, a "generic" epithermal neutron beam with a broad neutron spectrum, similar to epithermal beams currently used or proposed for NCT clinical trials, was computed for all models. The thermal neutron, fast neutron, and photon kerma rates calculated with the 4 and 8 mm voxel models were within 2% and 4%, respectively, of those calculated for the analytical model. The 16 mm voxel model produced unacceptably large discrepancies for all dose components. The effects from different kerma data sets and tissue compositions were evaluated. Updating the kerma data from ICRU 46 to ICRU 63 data produced less than 2% difference in kerma rate profiles. The depth-dose profile data
A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions
Watson, E.E.; Stabin, M.G.
1986-01-01
Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs.
Murty, D G; Smith, W L; Woolf, H M; Hayden, C M
1993-03-20
An evaluation of two different atmospheric transmittance models is performed by using radiance data from the High-resolution Infrared Radiation Sounder (HIRS) instrument onboard the National Oceanic and Atmospheric Administration's NOAA-9 satellite and the airborne high-resolution interferometer sounder (HIS) instrument. Synthetic radiances have been derived from collocated radiosondes by using the television infrared observation satellite (TIROS) operational vertical sounder (TOVS) operational transmittance model and the fast atmospheric signature code (FASCOD2) line-by-line transmittance model for comparison with the two independent instrument observations. Radiance observations in various spectral channels from the HIRS and HIS instruments along with the synthetic radiances derived from the FASCOD2 and operational TOVS transmittance models are used for the performance evaluation. The results of the comparison reveal a significant discrepancy between 707 and 717 cm(-1) in the radiance calculation for both models. Excellent agreement is observed between observation and calculation for the lower tropospheric long-wave temperature sounding channels. Serious problems are noted with the modeling of water vapor in the operational TOVS transmittance model. In addition, poor performance by FASCOD2 is revealed for the short-wavelength N(2)O-CO(2) HIRS spectral channels. In general the operational TOVS transmittance model is found to be only slightly inferior to the FASCOD2 model. Regarding the performance of the instruments, observations from the NOAA-9 HIRS and the aircraft HIS are comparable in terms of their agreement with theoretical computations.
NASA Technical Reports Server (NTRS)
Lytle, John K.; Harloff, Gary J.; Hsu, Andrew T.
1990-01-01
Previous calculations of jet-in-crossflow problems have been sensitive to the turbulence and artificial viscosity models and to the grid. Consequently, the eddy viscosity model in the PARC3D code was modified to consider the turbulent jet by switching from the Baldwin-Lomax (1978) model to an axisymmetric jet model. A modified artificial viscosity model has been utilized and evaluated in this study as well. The new model includes cell size scaling and a directional dependence in the coefficients. Computational results from PARC3D demonstrate the effects of the viscosity models on the pressure distribution fore and aft of the jet and the ability of the adaptive grid scheme to adjust to the three-dimensional gradients around the jet.
The role of convective model choice in calculating the climate impact of doubling CO2
NASA Technical Reports Server (NTRS)
Lindzen, R. S.; Hou, A. Y.; Farrell, B. F.
1982-01-01
The role of the parameterization of vertical convection in calculating the climate impact of doubling CO2 is assessed using both one-dimensional radiative-convective vertical models and the latitude-dependent Hadley-baroclinic model of Lindzen and Farrell (1980). Both the conventional 6.5 K/km and the moist-adiabat adjustments are compared with a physically-based, cumulus-type parameterization. The model with parameterized cumulus convection has much less sensitivity than the 6.5 K/km adjustment model at low latitudes, a result that can to some extent be imitated by the moist-adiabat adjustment model. However, when averaged over the globe, the use of the cumulus-type parameterization in a climate model reduces sensitivity by only approximately 34% relative to models using 6.5 K/km convective adjustment. Interestingly, the use of the cumulus-type parameterization appears to eliminate the possibility of a runaway greenhouse.
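The simplest of the schemes compared above, the fixed 6.5 K/km adjustment, amounts to sweeping a temperature profile and capping its lapse rate. A minimal sketch, anchored at the surface and without the energy-conserving redistribution a real radiative-convective model would apply:

```python
def convective_adjustment(T, dz, gamma=6.5e-3):
    """Limit the lapse rate of a temperature profile T (K, surface
    first, layer spacing dz in metres) to the critical value gamma
    (K/m). Layers cooling faster than gamma with height are warmed
    up to the critical profile; stable layers are left untouched."""
    T = list(T)
    for k in range(1, len(T)):
        floor = T[k - 1] - gamma * dz
        if T[k] < floor:
            T[k] = floor
    return T
```

In a radiative-convective model this sweep would be applied after each radiative time step, so the equilibrium profile never exceeds the critical lapse rate.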
Turner, D.R.; Pabalan, R.T.
1999-01-01
Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.
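The last step described above, feeding correlated log-normal sorption-parameter PDFs into a PA sampling routine, can be sketched with a correlated log-normal draw. The distribution parameters in the test below are placeholders, not the fitted values from the water-chemistry screening.

```python
import numpy as np

def sample_correlated_kd(n, mu_log, sigma_log, corr, seed=0):
    """Draw n correlated log-normal sorption parameters (e.g. Kd for
    Np(V) and U(VI)) for probabilistic performance assessment.
    mu_log / sigma_log: means and std devs of log10(Kd) per nuclide;
    corr: correlation matrix between the nuclides' log10(Kd) values."""
    rng = np.random.default_rng(seed)
    sigma = np.asarray(sigma_log)
    cov = np.asarray(corr) * np.outer(sigma, sigma)   # covariance of logs
    logs = rng.multivariate_normal(mu_log, cov, size=n)
    return 10.0 ** logs
```

Sampling in log space preserves both the imposed correlation and the positivity of the Kd values, which is why the abstract's log-normal PDFs pair naturally with correlated Latin-hypercube or Monte Carlo routines.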
Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn
Yakhelef, A.; Bouldjedri, A.
2012-06-27
New shell model calculations for the even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn, have been performed. The calculations have been carried out using the Windows version of NuShell-MSU. The two-body matrix elements (TBMEs) of the effective interaction between valence nucleons are obtained from the renormalized two-body effective interaction based on the G-matrix derived from the CD-Bonn nucleon-nucleon potential. The single-particle energies of the proton and neutron valence-space orbitals are defined from the available spectra of the lightest odd isotopes of Sb and Sn, respectively.
Development of a New Shielding Model for JB-Line Dose Rate Calculations
Buckner, M.R.
2001-08-09
This report describes the shielding model development for the JB-Line Upgrade project. The product of this effort is a simple-to-use but accurate method of estimating the personnel dose expected for various operating conditions on the line. The current techniques for shielding calculations use transport codes such as ANISN which, while accurate for geometries that can be well approximated as one-dimensional slabs, cylinders, or spheres, fall short in calculating configurations in which two- or three-dimensional effects (e.g., streaming) play a role in the dose received by workers.
Bonn potential and shell-model calculations for N=126 isotones
Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.; Kuo, T. T. S.
1999-12-01
We have performed shell-model calculations for the N=126 isotones {sup 210}Po, {sup 211}At, and {sup 212}Rn using a realistic effective interaction derived from the Bonn-A nucleon-nucleon potential by means of a G-matrix folded-diagram method. The calculated binding energies, energy spectra, and electromagnetic properties show remarkably good agreement with the experimental data. The results of this paper complement those of our previous study on neutron hole Pb isotopes, confirming that realistic effective interactions are now able to reproduce with quantitative accuracy the spectroscopic properties of complex nuclei.
NASA Astrophysics Data System (ADS)
Mahajan, Ruchi; Behera, B. R.; Pal, Santanu
2015-01-01
Statistical model calculations for evaporation residue and fission cross-sections are performed for the 210Po nucleus populated via the 18O + 192Os reaction in the excitation energy range of 52.43-83.51 MeV. Experimental fusion cross-sections are fitted using the CCFULL code. Evaporation residue and fission cross-sections are then fitted using the Bohr-Wheeler formalism, including shell effects in the level density and fission barrier by using a scaling factor (Kf) in the range of 1.0 to 0.75. The results of the calculations are in good agreement with the experimental data.
Poston, J.W.
1989-12-31
This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations. The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966, about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn, one-, five-, ten-, and fifteen-year-old, were derived from the adult phantom. The "pediatric" models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the other models, nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing "individual" pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, the phantoms had limited use and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.
NASA Astrophysics Data System (ADS)
Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal management technology Team
A facile property-calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are one of the promising systems that can use heat energy efficiently, because they can generate cooling energy using relatively low-temperature heat. Properties of adsorption chillers are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer rate and the adsorption/desorption rate. In our model, the dependence of adsorption chiller properties on the heat source temperatures was represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of the heat source temperatures, which represent the differences between equilibrium cycles and real cycles that stem from kinetic adsorption processes. We found that the present approximated equilibrium model could calculate the properties of adsorption chillers (driving energy, cooling energy, COP, etc.) under various driving conditions quickly and accurately, within average errors of 6% compared to experimental data.
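An idealised equilibrium cycle of the kind described reduces to bookkeeping of the uptake swing and the heat inputs. A sketch with illustrative enthalpy and heat-capacity values (not the paper's working-pair data, and with bed sensible heating as the only loss kept):

```python
def equilibrium_cop(dw, h_evap, h_ads, m_sorbent, cp_sorbent, dT_regen):
    """COP of an idealised equilibrium adsorption-chiller cycle.
    dw: uptake swing (kg refrigerant per kg sorbent);
    h_evap, h_ads: evaporation and adsorption enthalpies (J/kg);
    dT_regen: temperature swing of the sorbent bed during regeneration.
    Cooling output is the evaporated refrigerant; driving heat is the
    desorption heat plus the sensible heat of the bed."""
    q_cool = dw * m_sorbent * h_evap
    q_drive = dw * m_sorbent * h_ads + m_sorbent * cp_sorbent * dT_regen
    return q_cool / q_drive
```

Because h_ads exceeds h_evap and the bed must be reheated each cycle, the equilibrium COP stays below 1, consistent with typical single-effect adsorption chillers.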
Wang, Junmei; Hou, Tingjun
2012-05-25
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests. T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
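The additive surface-area form described above can be sketched as follows. The exact functional form combining SAS and BSAS through a single balancing parameter k is a guess at the paper's model, and the weights in the test are hypothetical, not the fitted values.

```python
def wsas_entropy(atoms, weights, k):
    """Weighted solvent-accessible surface area (WSAS) estimate of a
    conformational entropy term. `atoms` is an iterable of
    (atom_type, sas, bsas) tuples in A^2; `weights` maps each atom
    type to its fitted weight; k balances exposed vs buried area.
    Every atom contributes, whether buried or exposed."""
    return sum(weights[t] * (sas + k * bsas) for t, sas, bsas in atoms)
```

The appeal of this form is that SAS/BSAS evaluation is orders of magnitude cheaper than normal-mode analysis, which is the bottleneck the paper sets out to remove.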
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. It is based on object-oriented programming, and its development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency in in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
A novel model for calculating the inter-electrode capacitance of wedge-strip anode.
Zhao, Airong; Ni, Qiliang
2016-04-01
The wedge-strip anode (WSA) detector has been widely used in particle detection. In this work, a novel model for calculating the inter-electrode capacitance of a WSA is proposed on the basis of conformal transformations and the partial capacitance method. Using the model, the inter-electrode capacitance within a single period was calculated in addition to the total inter-electrode capacitance. The effects of the WSA design parameters on the inter-electrode capacitance were then systematically analyzed. It is found that the inter-electrode capacitance increases monotonically with insulated gap width and substrate permittivity, but not with the period. To validate the model, two round WSAs were manufactured using picosecond laser micro-machining technology. The deviations between the theoretical and experimental results were 9%-15%, better than those obtained with the ANSYS software. PMID:27131648
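Conformal-mapping treatments of planar electrodes typically reduce each electrode pair to a strip geometry whose per-unit-length capacitance involves complete elliptic integrals, C = ε0·εr·K(k)/K(k′) with k′ = √(1 − k²). As a rough illustration only (this is a generic coplanar-strip result, not the paper's actual WSA model), K can be evaluated with the arithmetic-geometric mean:

```python
import math

def agm(a, b, tol=1e-15):
    # Arithmetic-geometric mean, used to evaluate K(k).
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def ellip_k(k):
    # Complete elliptic integral of the first kind:
    # K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

def coplanar_capacitance(k, eps_r=1.0):
    # Per-unit-length capacitance (F/m) of a conformally mapped strip
    # pair, C = eps0 * eps_r * K(k) / K(k'); k encodes the geometry.
    eps0 = 8.8541878128e-12  # F/m
    kp = math.sqrt(1.0 - k * k)
    return eps0 * eps_r * ellip_k(k) / ellip_k(kp)
```

At k = 1/√2 the geometry is self-complementary (k = k′), so the ratio K(k)/K(k′) equals one and C reduces to ε0·εr.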
Analytical model for release calculations in solid thin-foils ISOL targets
NASA Astrophysics Data System (ADS)
Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.
2016-10-01
A detailed analytical model has been developed to simulate isotope-release curves from thin-foil ISOL targets. It involves separate modeling of diffusion and effusion inside the target. The former has been modeled using both Fick's first and second laws. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with the calculated in-target production.
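The convolution step can be sketched numerically. The single-exponential diffusion term and the double-exponential effusion constants below are illustrative stand-ins, not the paper's fitted values:

```python
import numpy as np

# Release curve as the convolution of a diffusion delay-time
# distribution p_diff(t) with a double-exponential effusion delay
# distribution p_eff(t). All rate constants are invented.

t = np.linspace(0.0, 10.0, 1001)       # time grid (s)
dt = t[1] - t[0]

lam_d = 1.0                             # diffusion rate constant (1/s)
p_diff = lam_d * np.exp(-lam_d * t)     # simple one-exponential diffusion term

# Double-exponential effusion delay (fast + slow components), normalized.
a, lam1, lam2 = 0.7, 5.0, 0.5
p_eff = a * lam1 * np.exp(-lam1 * t) + (1 - a) * lam2 * np.exp(-lam2 * t)

# Release curve = discrete convolution of the two distributions.
release = np.convolve(p_diff, p_eff)[: t.size] * dt
```

The resulting curve rises from zero, peaks at a delay set by the slower process, and decays; its time integral approaches unity as the time window grows.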
Molecular Modeling for Calculation of Mechanical Properties of Epoxies with Moisture Ingress
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Frankland, Sarah J.; Hinkley, J. A.; Gates, T. S.
2009-01-01
Atomistic models of epoxy structures were built in order to assess the effect of crosslink degree, moisture content and temperature on the calculated properties of a representative generic epoxy. Each atomistic model had approximately 7000 atoms and was contained within a periodic boundary condition cell with edge lengths of about 4 nm. Four atomistic models were built spanning a range of crosslink degree and moisture content. Each of these structures was simulated at three temperatures: 300 K, 350 K, and 400 K. Elastic constants were calculated for these structures by monitoring the stress tensor as a function of strain deformations applied to the periodic boundary conditions. The mechanical properties showed reasonably consistent behavior with respect to these parameters. The moduli decreased with decreasing crosslink degree and with increasing temperature. The moduli also generally decreased with increasing moisture content, although this effect was not as consistent as that seen for temperature and crosslink degree.
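Extracting a modulus from such simulated stress-strain data amounts to a least-squares slope fit in the linear-elastic regime. The data points here are synthetic, not from the paper:

```python
# In the linear-elastic regime, stress = E * strain; the Young's
# modulus estimate is the least-squares slope of stress vs. strain.

strain = [0.000, 0.005, 0.010, 0.015, 0.020]
stress = [0.000, 0.016, 0.031, 0.047, 0.062]   # GPa, synthetic values

n = len(strain)
xbar = sum(strain) / n
ybar = sum(stress) / n
E = (sum((x - xbar) * (y - ybar) for x, y in zip(strain, stress))
     / sum((x - xbar) ** 2 for x in strain))   # modulus estimate (GPa)
```

In practice each strain point would come from a separate deformed periodic cell, with the stress tensor averaged over the simulation trajectory.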
Construction of new skin models and calculation of skin dose coefficients for electron exposures
NASA Astrophysics Data System (ADS)
Yeom, Yeon Soo; Kim, Chan Hyeong; Nguyen, Thang Tat; Choi, Chansoo; Han, Min Cheol; Jeong, Jong Hwi
2016-08-01
The voxel-type reference phantoms of the International Commission on Radiological Protection (ICRP), due to their limited voxel resolutions, cannot represent the 50-μm-thick radiosensitive target layer of the skin necessary for skin dose calculations. Alternatively, in ICRP Publication 116, the dose coefficients (DCs) for the skin were calculated approximately, averaging the absorbed dose over the entire skin depth of the ICRP phantoms. This approximation is valid for highly penetrating radiations such as photons and neutrons, but not for weakly penetrating radiations like electrons, due to the high gradient of the dose distribution in the skin. To address this limitation, the present study introduces skin polygon-mesh (PM) models, which were produced by converting the skin models of the ICRP voxel phantoms to a high-quality PM format and adding a 50-μm-thick radiosensitive target layer to the skin models. The constructed skin PM models were then implemented in the Geant4 Monte Carlo code to calculate the skin DCs for external exposures to electrons. The calculated values were then compared with the skin DCs of ICRP Publication 116. The results of the present study show that for high-energy electrons (≥ 1 MeV), the ICRP-116 skin DCs are indeed in good agreement with the skin DCs calculated in the present study. For low-energy electrons (< 1 MeV), however, significant discrepancies were observed, and the ICRP-116 skin DCs underestimated the skin dose by as much as 15 times for some energies. Moreover, despite the small tissue weighting factor of the skin (wT = 0.01), the discrepancies in the skin dose were found to result in significant discrepancies in the effective dose, demonstrating that the effective DCs in ICRP-116 are not reliable for external exposure to electrons.
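Why a small weighting factor can still matter follows directly from the ICRP weighted sum E = Σ_T w_T·H_T: a 15-fold error in the skin term is multiplied by w_T = 0.01 but can still dominate when the other contributions are small, as for weakly penetrating radiation. The dose values in this sketch are invented:

```python
# Illustrative ICRP-style effective dose sum, E = sum_T w_T * H_T.
# h_other lumps all non-skin weighted contributions together.

W_SKIN = 0.01  # ICRP tissue weighting factor for skin

def effective_dose(h_other, h_skin):
    """Effective dose (Sv) from a lumped non-skin term plus the skin term."""
    return h_other + W_SKIN * h_skin

e_approx = effective_dose(h_other=0.2, h_skin=1.0)    # skin dose underestimated
e_better = effective_dose(h_other=0.2, h_skin=15.0)   # 15x larger skin dose
```

With these made-up numbers the effective dose shifts from 0.21 to 0.35, a two-thirds increase driven entirely by the skin term.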
Direct comparison between two γ-alumina structural models by DFT calculations
Ferreira, Ary R.; Martins, Mateus J.F.; Konstantinova, Elena; Capaz, Rodrigo B.; Souza, Wladmir F.; Chiaro, Sandra Shirley X.; Leitao, Alexandre A.
2011-05-15
We selected two important γ-alumina models proposed in the literature, a spinel-like one and a nonspinel one, to perform a theoretical comparison. Using ab initio calculations, the models were compared regarding their thermodynamic stability, lattice vibrational modes, and bulk electronic properties. The spinel-like model is thermodynamically more stable by 4.55 kcal/mol per formula unit on average from 0 to 1000 K. The main difference between the models lies in their simulated infrared spectra, with the spinel-like model showing the best agreement with experimental data. Analysis of the electronic density of states and of the charge transfer between atoms reveals the similarity of the electronic structure of the two models, despite some minor differences. -- Graphical abstract: Two γ-alumina bulk models selected in this work for a comparison focusing on the electronic structure and thermodynamics of the systems: (a) the nonspinel model and (b) the spinel-like model. Highlights: → There is still a debate about the γ-alumina structure in the literature. → Models of surfaces are constructed from different bulk structural models. → Two models commonly used in the literature were selected and compared. → One model better reproduces the experimental data. → Both present a similar electronic structure.
On calculating the transfer of carbon-13 in reservoir models of the carbon cycle
TANS, PIETER P.
1980-10-01
An approach to calculating the transfer of isotopic tracers in reservoir models is outlined that takes into account the effects of isotopic fractionation at phase boundaries without any significant approximations. Simultaneous variations in both the rare isotopic tracer and the total elemental (the sum of its isotopes) concentration are considered. The proposed procedure is applicable to most models of the carbon cycle and a four-box model example is discussed. Although the exact differential equations are non-linear, a simple linear approximation exists that gives insight into the nature of the solution. The treatment will be in terms of isotopic ratios which are the directly measured quantities.
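A minimal two-box version of this treatment tracks the total elemental concentration and the isotope ratio in each reservoir, with fractionation factors applied to the fluxes across the phase boundary. The rate constants, initial ratios, and fractionation factors below are illustrative only, and the integrator is a plain explicit Euler step rather than the paper's exact formulation:

```python
def step(c, r, k12, k21, a12, a21, dt):
    """One explicit Euler step for a two-box isotopic tracer model.

    c        -- total concentrations [c1, c2]
    r        -- isotope ratios (rare/total) [r1, r2]
    k12, k21 -- first-order exchange rate constants (1/time)
    a12, a21 -- fractionation factors on the 1->2 and 2->1 fluxes
    """
    f12, f21 = k12 * c[0], k21 * c[1]                # total fluxes
    g12, g21 = a12 * f12 * r[0], a21 * f21 * r[1]    # rare-isotope fluxes
    c_new = [c[0] + dt * (f21 - f12), c[1] + dt * (f12 - f21)]
    m_new = [c[0] * r[0] + dt * (g21 - g12),         # rare-isotope amounts
             c[1] * r[1] + dt * (g12 - g21)]
    return c_new, [m_new[0] / c_new[0], m_new[1] / c_new[1]]

c, r = [1.0, 1.0], [0.0112, 0.0108]   # invented initial state
for _ in range(1000):
    c, r = step(c, r, k12=0.3, k21=0.2, a12=0.998, a21=1.0, dt=0.01)
```

Because the fluxes are antisymmetric between the boxes, both the total element and the total rare isotope are conserved exactly at every step, mirroring the "no significant approximations" property claimed for the exact treatment.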
The impact of nuclear mass models on r-process nucleosynthesis network calculations
NASA Astrophysics Data System (ADS)
Vaughan, Kelly
2002-10-01
An insight into various nucleosynthesis processes is gained by modelling the process with network calculations. My project focuses on r-process network calculations, where the r-process is nucleosynthesis via rapid neutron capture, thought to take place in high-entropy supernova bubbles. One of the main uncertainties of the simulations is the nuclear physics input. My project investigates the role that nuclear masses play in the resulting abundances. The network code involves rapid (n,γ) capture reactions in competition with photodisintegration and β decay onto seed nuclei. In order to fully analyze the effects of nuclear mass models on the relative isotopic abundances, calculations were done with the network code, keeping the initial environmental parameters constant throughout. The supernova model investigated is that of Qian et al. (1996), in which two r-processes, of high and low frequency, with seed nucleus ⁹⁰Se and fixed luminosity (L_νe(0)·r₇(0)² ≈ 8.77), contribute to the nucleosynthesis of the heavier elements. These two r-processes, however, do not contribute equally to the total abundance observed. The total isotopic abundance produced from both events was therefore calculated as Y(H+L) = (Y(H) + f·Y(L))/(f + 1), where Y(H) denotes the relative isotopic abundance produced in the high-frequency event, Y(L) corresponds to the low-frequency event, and f is the ratio of high-event matter to low-event matter produced. Having established reliable, fixed parameters, the network code was run using data files containing parameters such as the mass excess, neutron separation energy, β decay rates and neutron capture rates based on three different nuclear mass models. The mass models tested are the HFBCS model (Hartree-Fock BCS) derived from first principles, the ETFSI-Q model (Extended Thomas-Fermi with Strutinsky Integral including shell Quenching) known for its particular successes in the replication of Solar System
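The combined-abundance formula, Y(H+L) = (Y(H) + f·Y(L))/(f + 1), is simple to apply directly; the abundances and f below are arbitrary example values:

```python
def combined_abundance(y_high, y_low, f):
    """Total relative isotopic abundance from high- and low-frequency
    r-process events, weighted by the matter ratio f between the events."""
    return (y_high + f * y_low) / (f + 1.0)

y = combined_abundance(y_high=0.8, y_low=0.2, f=3.0)
```

For f = 3 the result is a 1:3 weighted mean, (0.8 + 3·0.2)/4 = 0.35.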
Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.
Zhao, Fuman; Rogers, William J; Mannan, M Sam
2010-02-15
Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when the apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the lower flammability limit of a hydrocarbon mixture is quantitatively correlated to its final flame temperature under non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicate that CFT modeling can represent experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's Law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for Le Chatelier's Law. PMID:19819067
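Le Chatelier's Law, which the CFT model is said to justify, gives the mixture limit as LFL_mix = 1/Σ(y_i/LFL_i), where y_i are the fuel mole fractions. A sketch, with component LFL values quoted only as typical literature numbers:

```python
def lfl_mixture(fractions, lfls):
    """Lower flammability limit (vol%) of a fuel mixture by
    Le Chatelier's rule: LFL_mix = 1 / sum_i(y_i / LFL_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return 1.0 / sum(y / l for y, l in zip(fractions, lfls))

# 60/40 methane-propane blend; LFLs of ~5.0 and ~2.1 vol% are commonly
# quoted values, used here purely for illustration.
lfl = lfl_mixture([0.6, 0.4], [5.0, 2.1])
```

The blend limit (about 3.2 vol%) falls between the two pure-component limits, closer to the component present in the larger amount on a 1/LFL basis.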
A new method for modeling rough membrane surface and calculation of interfacial interactions.
Zhao, Leihong; Zhang, Meijia; He, Yiming; Chen, Jianrong; Hong, Huachang; Liao, Bao-Qiang; Lin, Hongjun
2016-01-01
Membrane fouling control necessitates the establishment of an effective method to assess interfacial interactions between foulants and a rough membrane surface. This study proposed a new method which includes a rigorous mathematical equation for modeling membrane surface morphology, combined with the surface element integration (SEI) method and the composite Simpson's approach for assessment of interfacial interactions. The new method provides a complete solution for quantitatively calculating interfacial interactions between foulants and rough membrane surfaces. Application of this method in a membrane bioreactor (MBR) showed that high calculation accuracy could be achieved by setting a high segment number, and moreover that the strength of the three energy components and the energy barrier was remarkably impaired by the existence of roughness on the membrane surface, indicating that membrane surface morphology exerts profound effects on membrane fouling in the MBR. Good agreement between the calculated predictions and observed fouling phenomena was found, suggesting the feasibility of this method.
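The composite Simpson's approach used for the interaction-energy integration can be illustrated generically; the integrand here is a stand-in, not the actual SEI energy kernel:

```python
def composite_simpson(f, a, b, n):
    """Integrate f over [a, b] with n equal subintervals (n must be even).

    Composite Simpson's rule: h/3 * [f(a) + 4*odd + 2*even + f(b)].
    Accuracy improves rapidly with the segment number n (error ~ h^4).
    """
    assert n % 2 == 0, "n must be even"
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3.0

area = composite_simpson(lambda x: x ** 2, 0.0, 1.0, 100)  # exact value: 1/3
```

The error scales as h⁴, which is why the abstract's observation holds: raising the segment number quickly drives the quadrature error below other modeling uncertainties.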
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
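A double-Gaussian lateral beam model of the kind described represents the dose at off-axis radius r as a weighted sum of a narrow primary Gaussian and a wide halo Gaussian. The weight and widths below are illustrative placeholders, not fitted beam-line parameters:

```python
import math

def gauss2d_radial(r, sigma):
    # Normalized 2D Gaussian evaluated at radius r.
    return math.exp(-r * r / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

def double_gaussian_dose(r, w=0.9, sigma1=0.4, sigma2=2.0):
    """Lateral dose profile (arbitrary units) at radius r (cm):
    weight w on the narrow primary component, (1 - w) on the wide halo."""
    return (w * gauss2d_radial(r, sigma1)
            + (1 - w) * gauss2d_radial(r, sigma2))
```

Near the axis the narrow component dominates; a few sigma out, only the halo term survives, which is exactly the low-dose contribution a single-Gaussian model misses.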
Study on the calculation models of bus delay at bays using queueing theory and Markov chain.
Sun, Feng; Sun, Li; Sun, Shao-Wei; Wang, Dian-Hai
2015-01-01
Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack theoretical models for computing efficiency. Therefore, calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it was found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formed, and the equilibrium distribution functions are proposed by applying the embedded Markov chain to the traditional model of queueing theory in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed, knowing the characteristics of the dwell time distribution and the traffic volume in the curb lane at different locations and during different periods. This can provide a basis for the efficiency evaluation of bus bays. PMID:25759720
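The entering delay in the paper comes from an embedded-Markov-chain queueing model; as a textbook baseline only (not the paper's model), the steady-state M/M/1 mean wait in queue illustrates the kind of quantity being derived:

```python
def mm1_wait_in_queue(lam, mu):
    """Mean steady-state wait in queue for an M/M/1 system (seconds):
    Wq = rho / (mu - lam), with rho = lam / mu.

    lam -- bus arrival rate (buses/s), mu -- berth service rate (buses/s).
    """
    assert lam < mu, "queue is unstable when lam >= mu"
    rho = lam / mu
    return rho / (mu - lam)

# Illustrative numbers: 40 buses/h arriving, a berth serving 60 buses/h.
wq = mm1_wait_in_queue(lam=40 / 3600.0, mu=60 / 3600.0)
```

With these rates the utilization is 2/3 and the mean entering wait is 120 s; real bus-bay dwell times are far from exponential, which is why the paper resorts to an embedded Markov chain rather than this closed form.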
Seth, Ajay; Delp, Scott L.
2015-01-01
Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is “muscle moment arm,” a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms. PMID:25905111
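A common generalized definition, consistent with the tendon-excursion method, takes the moment arm as the derivative of muscle-tendon length with respect to the generalized coordinate. Here is a toy single-joint sketch with hypothetical geometry (sign conventions differ between tools, so only the magnitude is meaningful here):

```python
import math

def muscle_length(q, a=0.30, b=0.05):
    """Length (m) of a straight-line muscle spanning a hinge joint,
    attached a (m) and b (m) from the joint center, at joint angle q (rad).
    Law of cosines; the geometry is invented for illustration."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(q))

def moment_arm(q, h=1e-6):
    """Tendon-excursion moment arm estimate |dL/dq| via central differences."""
    return abs(muscle_length(q + h) - muscle_length(q - h)) / (2 * h)
```

For this geometry the derivative has the closed form dL/dq = a·b·sin(q)/L, so the numeric estimate can be checked directly; OpenSim computes the analogous quantity automatically once the musculoskeletal model, including wrapping surfaces and multi-joint paths, is defined.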
Nikjoo, H; Uehara, S; Pinsky, L; Cucinotta, Francis A
2007-01-01
Space activities in earth orbit or in deep space pose challenges to the estimation of risk factors for both astronauts and instrumentation. In space, risk from exposure to ionising radiation is one of the main factors limiting manned space exploration. Therefore, characterising the radiation environment in terms of the types and quantity of radiation that the astronauts are exposed to is of critical importance in planning space missions. In this paper, calculations of the response of a tissue-equivalent proportional counter (TEPC) to protons and carbon ions are reported. The calculations have been carried out using Monte Carlo track structure simulation codes for walled and wall-less TEPC counters. The model simulates nonhomogeneous tracks in the sensitive volume of the counter and accounts for direct and indirect events. Calculated frequency- and dose-averaged lineal energies for 0.3 MeV-1 GeV protons are presented and compared with experimental data. Calculations of quality factors (QF) were made using individual track histories. Additionally, calculations of the absolute frequencies of energy depositions in cylindrical targets, 100 nm in height by 100 nm in diameter, randomly positioned and oriented in water irradiated with 1 Gy of protons of energy 0.3-100 MeV, are presented. The distributions show the clustering properties of protons of different energies in a 100 nm by 100 nm cylinder. PMID:17513858
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
MPS solidification model. Analysis and calculation of macrosegregation in a casting ingot
NASA Technical Reports Server (NTRS)
Poirier, D. R.; Maples, A. L.
1985-01-01
Work performed on several existing solidification models for which computer codes and documentation were developed is presented. The models describe the solidification of alloys in which there is a time varying zone of coexisting solid and liquid phases; i.e., the S/L zone. The primary purpose of the models is to calculate macrosegregation in a casting or ingot which results from flow of interdendritic liquid in this S/L zone during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, is modeled as flow through a porous medium. In Model 1, the steady state model, the heat flow characteristics are those of steady state solidification; i.e., the S/L zone is of constant width and it moves at a constant velocity relative to the mold. In Model 2, the unsteady state model, the width and rate of movement of the S/L zone are allowed to vary with time as it moves through the ingot. Each of these models exists in two versions. Models 1 and 2 are applicable to binary alloys; models 1M and 2M are applicable to multicomponent alloys.
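The porous-medium flow in the S/L zone is conventionally described by Darcy's law, with the interdendritic liquid driven by the pressure gradient (from solidification contractions) and by gravity acting on density gradients. A one-line sketch with illustrative property values, not the models' actual parameters:

```python
def darcy_velocity(K, mu, dpdx, rho, g=9.81):
    """Superficial velocity (m/s) of interdendritic liquid along a
    vertical axis x (positive up), from Darcy's law:
        v = -(K / mu) * (dP/dx - rho * g)

    K    -- permeability of the dendritic network (m^2)
    mu   -- liquid dynamic viscosity (Pa.s)
    dpdx -- pressure gradient (Pa/m)
    rho  -- local liquid density (kg/m^3)
    """
    return -(K / mu) * (dpdx - rho * g)

# Invented but order-of-magnitude-plausible values for a steel-like melt:
v = darcy_velocity(K=1e-11, mu=1.3e-3, dpdx=-5.0e4, rho=7.0e3)
```

The permeability K varies strongly with the local liquid fraction, which is what couples this flow law to the evolving S/L zone and ultimately produces macrosegregation.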
Off-center spherical model for dosimetry calculations in chick brain tissue
Gonzalez, G.; Nearing, J.C.; Spiegel, R.J.; Joines, W.T.
1986-01-01
This paper presents calculations for the electric field and absorbed power density distribution in chick brain tissue inside a test tube, using an off-center spherical model. It is shown that the off-center spherical model overcomes many of the limitations of the concentric spherical model, and permits a more realistic modeling of the brain tissue as it sits in the bottom of the test tube surrounded by buffer solution. The effect of the unequal amount of buffer solution above the upper and below the lower surfaces of the brain is analyzed. The field distribution is obtained in terms of a rapidly converging series of zonal harmonics. A method that permits the expansion of spherical harmonics about an off-center origin in terms of spherical harmonics at the origin is developed to calculate in closed form the electric field distribution. Numerical results are presented for the absorbed power density distribution at a carrier frequency of 147 MHz. It is shown that the absorbed power density increases toward the bottom of the brain surface. Scaling relations are developed by keeping the electric field intensity in the brain tissue the same at two different frequencies. Scaling relations inside, as well as outside, the brain surface are given. The scaling relation distribution is calculated as a function of position, and compared to the scaling relations obtained in the concentric spherical model. It is shown that the off-center spherical model yields scaling ratios in the brain tissue that lie between the extreme values predicted by the concentric and isolated spherical models.
PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations
NASA Astrophysics Data System (ADS)
Elmaghraby, Elsayed K.
2009-09-01
The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single-nucleon emissions, and the statistical probabilities come from the independence of nuclear decays. The code has proved to be a good tool for providing predictions of energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one-nucleon emission.
Program summary
Program title: PHASE-OTI
Catalogue identifier: AEDN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5858
No. of bytes in distributed program, including test data, etc.: 149 405
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Pentium 4 and Centrino Duo
Operating system: MS Windows
RAM: 128 MB
Classification: 17.12
Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of the pre-equilibrium emission model.
Solution method: Single-neutron emission was treated by assuming occurrence of the reaction in successive steps. Each step is called a phase because of the phase-transition nature of the theory. The probability of emission was calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor was used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
Restrictions: The program is restricted to single nucleon emission and nucleon
Necessity of using heterogeneous ellipsoidal Earth model with terrain to calculate co-seismic effect
NASA Astrophysics Data System (ADS)
Cheng, Huihong; Zhang, Bei; Zhang, Huai; Huang, Luyuan; Qu, Wulin; Shi, Yaolin
2016-04-01
Co-seismic deformation and stress changes, which reflect the elasticity of the earth, are very important in earthquake dynamics and are also relevant to other issues, such as the evaluation of seismic risk, the fracture process, and the triggering of earthquakes. Many researchers have studied dislocation theory and co-seismic deformation, producing the half-space homogeneous model, the half-space stratified model, the spherical stratified model, and so on. In particular, the models of Okada (1992) and Wang (2003, 2006) are widely applied in calculating co-seismic and post-seismic effects. However, since neither the semi-infinite half-space model nor the layered model takes the earth's curvature, heterogeneity, or topography into consideration, large errors arise in calculating the co-seismic displacement of a great earthquake over its impacted area. Meanwhile, the computational methods for calculating co-seismic strain and stress differ between spherical and plane models. Here, we adopted the finite element method, which can handle the complex characteristics of rock (such as anisotropy and discontinuities) and different conditions. We use a mesh-adaptive technique to automatically refine the mesh at the fault and adopt an equivalent volume force to replace the dislocation source, which avoids the difficulty of handling the discontinuity surface with conventional methods (Zhang et al., 2015). We constructed an earth model that included the earth's layered structure and curvature; the upper boundary was set as a free surface and the core-mantle boundary was subjected to buoyancy forces. Firstly, based on the precision requirement, we take as a testing model a strike-slip fault (fault length 500 km, width 50 km, slippage 10 m). Because of the curvature of the Earth, some errors certainly occur in plane coordinates, just as in previous studies (Dong et al., 2014; Sun et al., 2012). However, we also found that: 1) the co
Calculation of Forming Limits for Sheet Metal using an Enhanced Continuous Damage Fracture Model
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc-Trung; Kim, Dae-Young; Kim, Heon Young
2011-08-01
An enhanced continuous damage fracture model was introduced in this paper to calculate forming limits of sheet metal. The fracture model is a combination of a fracture criterion and a continuum damage constitutive law: a modified McClintock void growth fracture criterion was incorporated into a coupled damage-plasticity Gurson-type constitutive law. In addition, by introducing a Lode-angle-dependent parameter to define the loading asymmetry condition, the shear effect was phenomenologically taken into account. The proposed fracture model was implemented through user subroutines in commercial finite element software. The model was calibrated and validated against uniaxial tension, shear, and notched-specimen tests. Application of the fracture model to the LDH tests was discussed and the simulation results were compared with the experimental data.
NASA Astrophysics Data System (ADS)
de Pater, I.
1986-11-01
The models on which the present simulation calculations of Jupiter's thermal emission at radio wavelengths are based are constrained by recent VLA observations at cm wavelengths of the limb-darkening characteristics and zone-belt structure of the planet. Ammonia gas is noted to be superabundant by a factor of nearly 2 relative to the solar value at pressure levels greater than 2.2 bar. The present model atmosphere calculations, when compared with the new radio data, confirm the cloud scenario proposed by Bjoraker (1985) and West et al. (1986). Further evidence is found to support Ingersoll and Cuzzi's (1969) theory of the dynamics of Jupiter's cloud bands, in the guise of a mechanism triggering zonal wind motions.
Improved analytical flux surface representation and calculation models for poloidal asymmetries
NASA Astrophysics Data System (ADS)
Collart, T. G.; Stacey, W. M.
2016-05-01
An orthogonalized flux-surface aligned curvilinear coordinate system has been developed from an up-down asymmetric variation of the "Miller" flux-surface equilibrium model. It is found that the new orthogonalized "asymmetric Miller" model representation of equilibrium flux surfaces provides a more accurate match than various other representations to the flux surfaces of DIII-D [J. L. Luxon, Nucl. Fusion 42, 614-633 (2002)] discharges calculated using the DIII-D Equilibrium Fitting tokamak equilibrium reconstruction code. The continuity and momentum balance equations were used to develop a system of equations relating asymmetries in plasma velocities, densities, and electrostatic potential in this curvilinear system, and detailed calculations of poloidal asymmetries were performed for a DIII-D discharge.
Large-scale shell-model calculations of nuclei around mass 210
NASA Astrophysics Data System (ADS)
Teruya, E.; Higashiyama, K.; Yoshinaga, N.
2016-06-01
Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including the multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.
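The phenomenological interaction described above has the schematic pairing-plus-quadrupole form (a sketch in standard notation; details such as the multipole-pairing terms are omitted):

```latex
\hat{H} \;=\; \sum_{a} \varepsilon_a\, \hat{c}^{\dagger}_a \hat{c}_a
\;-\; G_0\, \hat{P}^{\dagger}\hat{P}
\;-\; \frac{\chi}{2} \sum_{M} :\hat{Q}^{\dagger}_{2M}\hat{Q}_{2M}:
```

where ε_a are the single-particle energies of the six orbitals listed, P† is the monopole-pair creation operator, and Q_2M the quadrupole operator; the multipole-pairing interactions the authors include would enter as additional P†_LM P_LM terms of higher multipolarity.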
Supersonic flow calculation using a Reynolds-stress and an eddy thermal diffusivity turbulence model
NASA Technical Reports Server (NTRS)
Sommer, T. P.; So, R. M. C.; Zhang, H. S.
1993-01-01
A second-order model for the velocity field and a two-equation model for the temperature field are used to calculate supersonic boundary layers, assuming negligible real-gas effects. The modeled equations are formulated on the basis of an incompressible assumption and then extended to supersonic flows by invoking Morkovin's hypothesis, which proposes that compressibility effects are completely accounted for by mean density variations alone. In order to calculate the near-wall flow accurately, correction functions are proposed to render the modeled equations asymptotically consistent with the behavior of the exact equations near a wall and, at the same time, display the proper dependence on the molecular Prandtl number. Thus formulated, the near-wall second-order turbulence model for heat transfer is applicable to supersonic flows with different Prandtl numbers. The model is validated against flows with different Prandtl numbers and supersonic flows with free-stream Mach numbers as high as 10 and wall temperature ratios as low as 0.3. Among the flow cases considered, the momentum-thickness Reynolds number varies from approximately 4,000 to approximately 21,000. Good correlation with measurements of mean velocity, temperature, and its variance is obtained. Discernible improvements in the law-of-the-wall are observed, especially in the range where the log-law applies.
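Morkovin's hypothesis is usually made concrete through density-weighted scalings of the mean profile. A minimal sketch (the standard van Driest velocity transformation, my own illustration rather than the authors' code) is:

```python
def van_driest_transform(u, rho, rho_wall):
    """Density-weighted (van Driest) velocity, u_vd = integral of sqrt(rho/rho_w) du,
    evaluated with the trapezoidal rule over discrete profile samples."""
    u_vd = [0.0]
    for i in range(1, len(u)):
        # trapezoidal average of the density weight between adjacent samples
        w = 0.5 * ((rho[i] / rho_wall) ** 0.5 + (rho[i - 1] / rho_wall) ** 0.5)
        u_vd.append(u_vd[-1] + w * (u[i] - u[i - 1]))
    return u_vd
```

For constant density the transform reduces to the identity, consistent with the incompressible limit the modeled equations start from.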
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1986-01-01
Calculated results based on the two chromospheric flare models F1 and F2 of Machado et al. (1980) are presented. Two additional models are included: F1*, which has enhanced temperatures relative to the weak-flare model F1 in the upper photosphere and low chromosphere, and F3, which has enhanced temperatures relative to the strong-flare model F2 in the upper chromosphere. Each model is specified by means of a given variation of the temperature as a function of column mass. The corresponding variation of particle density and the geometrical height scale are determined by assuming hydrostatic equilibrium. The coupled equations of statistical equilibrium and radiative transfer are solved for H, H-, He I-II, C I-IV, Si I-II, Mg I-II, Fe, Al, O I-II, Na, and Ca II. The overall absorption and emission of radiation by lines throughout the spectrum is determined by means of a reduced set of opacities sampled from a compilation of over 10^7 individual lines. It is also shown that the white-light flare continuum may arise from extreme chromospheric overheating as well as from an enhancement of the minimum-temperature region. The radiative cooling rate calculations for our brightest flare model suggest that chromospheric overheating provides enhanced radiation that could cause significant heating deep in the flare atmosphere.
A numerical model for calculating vibration from a railway tunnel embedded in a full-space
NASA Astrophysics Data System (ADS)
Hussein, M. F. M.; Hunt, H. E. M.
2007-08-01
Vibration generated by underground railways transmits to nearby buildings causing annoyance to inhabitants and malfunctioning to sensitive equipment. Vibration can be isolated through countermeasures by reducing the stiffness of railpads, using floating-slab tracks and/or supporting buildings on springs. Modelling of vibration from underground railways has recently gained more importance on account of the need to evaluate accurately the performance of vibration countermeasures before these are implemented. This paper develops an existing model, reported by Forrest and Hunt, for calculating vibration from underground railways. The model, known as the Pipe-in-Pipe model, has been developed in this paper to account for anti-symmetrical inputs and therefore to model tangential forces at the tunnel wall. Moreover, three different arrangements of supports are considered for floating-slab tracks, one which can be used to model directly-fixed slabs. The paper also investigates the wave-guided solution of the track, the tunnel, the surrounding soil and the coupled system. It is shown that the dynamics of the track have significant effect on the results calculated in the wavenumber-frequency domain and therefore an important role on controlling vibration from underground railways.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
Influence of polarization and a source model for dose calculation in MRT
Bartzsch, Stefan Oelfke, Uwe; Lerch, Michael; Petasecca, Marco; Bräuer-Krisch, Elke
2014-04-15
Purpose: Microbeam Radiation Therapy (MRT), an alternative preclinical treatment strategy using spatially modulated synchrotron radiation on a micrometer scale, has the great potential to cure malignant tumors (e.g., brain tumors) while having low side effects on normal tissue. Dose measurement and calculation in MRT is challenging because of the spatial accuracy required and the arising high dose differences. Dose calculation with Monte Carlo simulations is time consuming and their accuracy is still a matter of debate. In particular, the influence of photon polarization has been discussed in the literature. Moreover, it is controversial whether a complete knowledge of phase space trajectories, i.e., the simulation of the machine from the wiggler to the collimator, is necessary in order to accurately calculate the dose. Methods: With Monte Carlo simulations in the Geant4 toolkit, the authors investigate the influence of polarization on the dose distribution and the therapeutically important peak to valley dose ratios (PVDRs). Furthermore, the authors analyze in detail phase space information provided by Martínez-Rovira et al. ["Development and commissioning of a Monte Carlo photon model for the forthcoming clinical trials in microbeam radiation therapy," Med. Phys. 39(1), 119-131 (2012)] and examine its influence on peak and valley doses. A simple source model is developed using parallel beams and its applicability is shown in a semiadjoint Monte Carlo simulation. Results are compared to measurements and previously published data. Results: Polarization has a significant influence on the scattered dose outside the microbeam field. In the radiation field, however, dose and PVDRs deduced from calculations without polarization and with polarization differ by less than 3%. The authors show that the key consequences from the phase space information for dose calculations are inhomogeneous primary photon flux, partial absorption due to inclined beam incidence outside
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
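The fitting procedure described above can be sketched as follows (notation mine, not the authors'): discretizing the spectral weight A(ω_j) on a frequency grid, the imaginary-time data are related to it through the standard fermionic kernel, and the constrained least-squares functional penalizes roughness while enforcing positivity,

```latex
G(\tau_i) \;\simeq\; \sum_j K(\tau_i,\omega_j)\,A(\omega_j)\,\Delta\omega,
\qquad
K(\tau,\omega) \;=\; \frac{e^{-\tau\omega}}{1+e^{-\beta\omega}},
```

```latex
\min_{A_j \ge 0}\;
\sum_i \frac{\bigl[\bar{G}(\tau_i) - \sum_j K(\tau_i,\omega_j)\,A_j\,\Delta\omega\bigr]^2}{\sigma_i^2}
\;+\; \lambda \sum_j \bigl(A_{j+1} - 2A_j + A_{j-1}\bigr)^2,
```

where \(\bar{G}\) and \(\sigma_i\) are the Monte Carlo averages and their statistical errors, and λ controls the smoothness constraint.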
A quark model calculation of γγ→ππ including final-state interactions
H.G. Blundell; S. Godfrey; G. Hay; Eric Swanson
2000-02-01
A quark model calculation of the processes γγ→π+π− and γγ→π0π0 is performed. At tree level, only charged pions couple to the initial-state photons, and neutral pions are not expected in the final state. However, a small but significant cross section is observed. We demonstrate that this may be accounted for by a rotation in isospin space induced by final-state interactions.
Model calculations of the relative effects of CFCs and their replacements on stratospheric ozone
NASA Technical Reports Server (NTRS)
Fisher, Donald A.; Hales, Charles H.; Filkin, David L.; Ko, Malcolm K. W.; Sze, N. Dak
1990-01-01
Because chlorine has been linked to the destruction of stratospheric ozone, the use of many fully halogenated compounds, such as the chlorofluorocarbons CFC-11 and -12, is restricted by international agreement. Hydrohalocarbons are under intensive development as replacements for CFCs. Because they contain hydrogen, these gases are susceptible to tropospheric destruction, which significantly shortens their atmospheric lifetimes. Model calculations show that chlorine-containing hydrohalocarbons have less effect on ozone, by an order of magnitude, than their regulated counterparts.
Lattice dynamics and spin-phonon interactions in multiferroic RMn2O5: Shell model calculations
NASA Astrophysics Data System (ADS)
Litvinchuk, A. P.
2009-08-01
The results of shell model lattice dynamics calculations for multiferroic RMn2O5 materials (space group Pbam) are reported. Theoretical even-parity eigenmode frequencies are compared with those obtained experimentally in polarized Raman scattering experiments for R = Ho, Dy. Analysis of the displacement patterns allows identification of the vibrational modes that facilitate spin-phonon coupling by modulating the Mn-Mn exchange interaction, and provides an explanation of the observed anomalous temperature behavior of the phonons.
A deterministic partial differential equation model for dose calculation in electron radiotherapy
NASA Astrophysics Data System (ADS)
Duclous, R.; Dubroca, B.; Frank, M.
2010-07-01
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
Double-step truncation procedure for large-scale shell-model calculations
NASA Astrophysics Data System (ADS)
Coraggio, L.; Gargano, A.; Itaco, N.
2016-06-01
We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, in order to locate the relevant degrees of freedom to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that retains effectively the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven proton and five neutron single-particle orbitals outside 88Sr. We study the dependence of shell-model results upon different truncations of the original model space for the Zr, Mo, Ru, Pd, Cd, and Sn isotopic chains, showing the reliability of this truncation procedure.
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates.
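The DK-versus-IK distinction can be illustrated on a toy planar two-link limb (an illustration of the computational difference only, not of either gait model; segment lengths are arbitrary): DK maps joint angles forward to marker positions, while IK recovers joint angles from marker positions, here in closed form.

```python
import math

L1, L2 = 0.4, 0.4  # illustrative segment lengths (m)

def forward(theta1, theta2):
    """DK: joint angles -> end-point ("marker") position."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """IK: marker position -> joint angles (closed form, elbow-down branch)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp against rounding
    k1, k2 = L1 + L2 * math.cos(theta2), L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

In a real gait model the IK step is posed as a least-squares fit of many noisy markers subject to the joint constraints of the anatomical model, which is why the choice of anatomical model dominates the differences reported above.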
First-Principles Calculations, Experimental Study, and Thermodynamic Modeling of the Al-Co-Cr System
Liu, Xuan L.; Gheno, Thomas; Lindahl, Bonnie B.; Lindwall, Greta; Gleeson, Brian; Liu, Zi-Kui
2015-01-01
The phase relations and thermodynamic properties of the condensed Al-Co-Cr ternary alloy system are investigated using first-principles calculations based on density functional theory (DFT) and phase-equilibria experiments that led to X-ray diffraction (XRD) and electron probe micro-analysis (EPMA) measurements. A thermodynamic description is developed by means of the calculations of phase diagrams (CALPHAD) method using experimental and computational data from the present work and the literature. Emphasis is placed on modeling the bcc-A2, B2, fcc-γ, and tetragonal-σ phases in the temperature range of 1173 to 1623 K. Liquid, bcc-A2 and fcc-γ phases are modeled using substitutional solution descriptions. First-principles special quasirandom structures (SQS) calculations predict a large bcc-A2 (disordered)/B2 (ordered) miscibility gap, in agreement with experiments. A partitioning model is then used for the A2/B2 phase to effectively describe the order-disorder transitions. The critically assessed thermodynamic description describes all phase equilibria data well. A2/B2 transitions are also shown to agree well with previous experimental findings. PMID:25875037
Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...
Hoak, T.E. |; Sundberg, K.R.; Ortoleva, P.
1998-12-31
The analysis carried out in the Chemical Interaction of Rocks and Fluids Basin (CIRFB) model describes the chemical and physical evolution of the entire system. One aspect of this is the deformation of the rocks, and its treatment with a rigorous flow and rheological model. This type of analysis depends on knowing the state of the model domain's boundaries as functions of time. In the Andrews and Ector County areas of the Central Basin Platform of West Texas, the authors calculate this shortening with a simple interpretation of the basic motion and a restoration of the Ellenburger formation. Despite its simplicity, this calculation reveals two distinct periods of shortening/extension, a relatively uniform directionality to all the deformation, and the localization of deformation effects to the immediate vicinities of the major faults in the area. Conclusions are drawn regarding the appropriate expressions of these boundary conditions in the CIRFB model and possible implications for exploration.
Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models.
Stiglic, Gregor; Pajnkihar, Majda
2015-01-01
Classical paper-and-pencil based risk assessment questionnaires are often accompanied by online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to online versions of risk estimation calculators, which ignores the more complex and accurate calculations that online calculators can perform. We empirically compare the risk estimation performance of four major diabetes risk calculators and two more advanced predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999-2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil based tests, with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) of persons selected for screening. Our results demonstrate a significant difference in performance, with the additional benefit of fewer persons selected for screening, when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression, with an AUC of 0.775 (0.734) and an average of 34% (48%) of persons selected for screening. However, generalized boosted regression models might be a better option from an economic point of view, as the proportion selected for screening, 30% (47%), is significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were
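The AUC figures quoted above are rank statistics; a minimal sketch of how such a value is computed from screening scores (a pure-Python illustration, not the study's code):

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case outranks a
    randomly chosen negative one; ties count one half (equivalent to the
    trapezoidal area under the ROC curve)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.699 thus means a randomly chosen diabetic respondent receives a higher risk score than a randomly chosen non-diabetic one about 70% of the time.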
The calculation of theoretical chromospheric models and predicted OSO 8 spectra
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1975-01-01
Theoretical solar chromospheric and photospheric models are computed for use in analyzing OSO 8 spectra. The Vernazza, Avrett, and Loeser (1976) solar model is updated and self-consistent non-LTE number densities for H I, He I, He II, C I, Mg I, Al I, Si I, and H(-) are produced. These number densities are used in the calculation of a theoretical solar spectrum from 90 to 250 nm, including approximately 7000 lines in non-LTE. More than 60,000 lines of other elements are treated with approximate source functions.
Transverse space charge effect calculation in the Synergia accelerator modeling toolkit
Okonechnikov, Konstantin; Amundson, James; Macridin, Alexandru; /Fermilab
2009-09-01
This paper describes a transverse space charge effect calculation algorithm developed in the context of the accelerator modeling toolkit Synergia. An introduction to the space charge problem and a short description of the Synergia modeling toolkit are given. The developed algorithm is explained and its implementation is described in detail. As a result of this work, a new space charge solver was developed and integrated into the Synergia toolkit. The solver produced results consistent with existing Synergia solvers and delivered better performance in the regime where it is applicable.
Calculation of plane turbulent Couette-Poiseuille flows with a modified k-ɛ model
NASA Astrophysics Data System (ADS)
Gretler, W.; Meile, W.
1997-10-01
Suitable modifications to the k-ɛ model are proposed for the calculation of turbulent Couette-Poiseuille flows. In the case of pure Couette flow, a logarithmic expression for the turbulent kinetic energy could be derived which is valid over the entire fully turbulent region. The basic idea for the numerical computations is to abandon the concept of a constant cμ. For Couette-type flows, proper distributions of this model parameter could be found. In Poiseuille-type flows, the application of an extended eddy-diffusivity approach for the turbulent shear stress leads to results which agree satisfactorily with the measurements.
Chambers, Alex; Rajantie, Arttu
2008-02-01
If light scalar fields are present at the end of inflation, their nonequilibrium dynamics such as parametric resonance or a phase transition can produce non-Gaussian density perturbations. We show how these perturbations can be calculated using nonlinear lattice field theory simulations and the separate universe approximation. In the massless preheating model, we find that some parameter values are excluded while others lead to acceptable but observable levels of non-Gaussianity. This shows that preheating can be an important factor in assessing the viability of inflationary models.
A two-dimensional point-kernel model for dose calculations in a glovebox array
Kornreich, D.E.; Dooley, D.E.
1999-06-01
An associated paper details a model of a room containing gloveboxes using the industry-standard dose equivalent (dose) estimation tool MCNP. Such tools provide an excellent means for obtaining relatively reliable estimates of radiation transport in a complicated geometric structure. However, creating the input deck that models the complicated geometry is equally complicated. Therefore, an alternative tool is desirable that provides reasonably accurate dose estimates in complicated geometries for use in engineering-scale dose analyses. In the past, several tools that use the point-kernel model for estimating dose equivalents have been constructed (those referenced are only a small sample of similar tools). This new tool, the Photon and Neutron Dose Equivalent Model Of Nuclear materials Integrated with an Uncomplicated geometry Model (PANDEMONIUM), combines point-kernel and diffusion theory calculation routines with a simple geometry construction tool. PANDEMONIUM uses Visio™ to draw a glovebox array in the room, including hydrogenous shields, sources, and detectors. This simplification in geometric rendering limits the tool to two-dimensional geometries (and one-dimensional particle transport calculations).
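As a rough illustration of the point-kernel idea that such tools build on (not PANDEMONIUM's actual routines), the uncollided photon flux from an isotropic point source behind a slab shield combines inverse-square geometric falloff, exponential attenuation, and a buildup factor for the scattered component; all constants below are illustrative:

```python
import math

# Minimal point-kernel sketch: flux from an isotropic point source falls
# off as 1/(4*pi*r^2), a slab shield attenuates by exp(-mu*t), and a
# buildup factor B restores part of the scattered component.

def point_kernel_flux(source_strength, r_cm, mu_per_cm=0.0, t_cm=0.0, buildup=1.0):
    """Photon flux (particles/cm^2/s) at distance r from a point source."""
    geometric = source_strength / (4.0 * math.pi * r_cm ** 2)
    return geometric * buildup * math.exp(-mu_per_cm * t_cm)

# Unshielded flux at 100 cm from a 1e6 photon/s source:
print(point_kernel_flux(1e6, 100.0))
```

A dose estimate then follows by folding this flux with a flux-to-dose conversion factor; summing such kernels over many source points is what makes the approach tractable in complicated geometries.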
An empirical model for calculation of the collimator contamination dose in therapeutic proton beams.
Vidal, M; De Marzi, L; Szymanowski, H; Guinement, L; Nauraye, C; Hierso, E; Freud, N; Ferrand, R; François, P; Sarrut, D
2016-02-21
Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose prediction and measurements were decreased from 10% to 2% using the proposed model. Gamma-index (2%/1 mm) was respected for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.
Two-dimensional model calculation of fluorine-containing reservoir species in the stratosphere
NASA Technical Reports Server (NTRS)
Kaye, Jack A.; Douglass, Anne R.; Jackman, Charles H.; Stolarski, Richard S.; Zander, R.
1991-01-01
Two-dimensional model calculations have been carried out of the distributions of the fluorine-containing reservoir species HF, CF2O, and CFClO. HF constitutes the largest fluorine reservoir in the stratosphere, but CF2O also makes an important contribution to the inorganic fluorine budget. CFClO amounts are most important in the tropical lower stratosphere. HF amounts increase with altitude throughout the stratosphere, while those of CF2O and CFClO fall off above their mixing ratio peaks due to photolysis. The model is in good qualitative agreement with observed vertical profiles of HF and CF2O but tends to underestimate the total column of HF. The calculated CFClO distribution is in good agreement with the very limited data. The disagreement in the HF columns is likely due to small inaccuracies in the model's treatment of lower stratospheric photolysis of chlorofluorocarbons. The model results support the suggestion that CF2O may be heterogeneously converted to HF on the surface of polar stratospheric cloud particles. The model results also suggest that the quantum yield for photolysis of CF2O is near unity.
NASA Astrophysics Data System (ADS)
Ebert, H.; Mankovsky, S.; Chadova, K.; Polesya, S.; Minár, J.; Ködderitzsch, D.
2015-04-01
A scheme is presented that is based on the alloy analogy model and allows one to account for thermal lattice vibrations as well as spin fluctuations when calculating response quantities in solids. Various models to deal with spin fluctuations are discussed concerning their impact on the resulting temperature-dependent magnetic moment, longitudinal conductivity, and Gilbert damping parameter. It is demonstrated that, by using the Monte Carlo (MC) spin configuration as input, the alloy analogy model is capable of reproducing the results of MC simulations on the average magnetic moment within all spin fluctuation models under discussion. On the other hand, the response quantities are much more sensitive to the spin fluctuation model. Separate calculations accounting for the thermal effect due to either lattice vibrations or spin fluctuations show that they give comparable contributions to the electrical conductivity and Gilbert damping. However, comparison to results accounting for both thermal effects demonstrates violation of Matthiessen's rule, showing the nonadditive effect of lattice vibrations and spin fluctuations. The results obtained for bcc Fe and fcc Ni are compared with the experimental data, showing rather good agreement for the temperature-dependent electrical conductivity and the Gilbert damping parameter.
The truth is out there: measured, calculated and modelled benthic fluxes.
NASA Astrophysics Data System (ADS)
Pakhomova, Svetlana; Protsenko, Elizaveta
2016-04-01
In modern Earth science, understanding the processes that form benthic fluxes is of great importance, since these fluxes act as sources or sinks of elements to or from the water body and affect the element balance of the water system. There are several ways to assess benthic fluxes, and here we compare the results obtained by chamber experiments, calculated from porewater distributions, and simulated with a model. Benthic fluxes of dissolved elements (oxygen, nitrogen species, phosphate, silicate, alkalinity, iron and manganese species) were studied in the Baltic and Black Seas from 2000 to 2005. Fluxes were measured in situ using chamber incubations (Jch), and at the same time sediment cores were collected to assess the porewater distribution at different depths and calculate diffusive fluxes (Jpw). The model study was carried out with the benthic-pelagic biogeochemical model BROM (an O-N-P-Si-C-S-Mn-Fe redox model), which was applied to simulate the biogeochemical structure of the water column and upper sediment and to assess the vertical fluxes (Jmd). By their behaviour at the water-sediment interface, all studied elements can be divided into three groups: (1) elements whose benthic fluxes are determined by the concentration gradient only (Si, Mn), (2) elements whose fluxes depend on redox conditions in the bottom water (Fe, PO4, NH4), and (3) elements whose fluxes are strongly connected with the fate of organic matter (O2, Alk, NH4). For the first group it was found that measured fluxes are always higher than calculated diffusive fluxes (1.5
NASA Astrophysics Data System (ADS)
Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.
2015-01-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculations of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft, and converts the cloud density rate from the ECCM into cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities at the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. The cloud
Johnson, M.W.
1990-01-01
A comparison of electron densities calculated from the Utah State University First-Principles Ionospheric Model with simultaneous observations taken at the Sondrestrom, Millstone, and Arecibo incoherent-scatter radars was undertaken to better understand the response of the ionosphere at these longitudinally similar yet latitudinally separated locations. The comparison included over 50 days distributed over 3 1/2 years, roughly symmetric about the last solar minimum in 1986. The overall trend of the comparison was that, to first order, the model reproduces electron densities responding to diurnal, seasonal, geomagnetic, and solar-cycle variations for all three radars. However, some model-observation discrepancies were found. These include failure of the model to correctly produce an evening peak at Millstone, fall-spring equinox differences at Sondrestrom, tidal structure at Arecibo, and daytime NmF2 values at Arecibo.
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography such as diffuse optical tomography. To this end, this paper constructs a space-time coupling model of the radiative transport equation (RTE) with the photon diffusion equation. In the coupling model, the space-time regime of photon migration is divided into ballistic and diffusive regimes, with interaction between the two regimes, to improve the accuracy of the results and the efficiency of computation. The coupling model provides an accurate description of photon migration in various turbid media over a wide range of optical properties, and reduces computational load compared with a full calculation of the RTE.
Development of an algebraic stress/two-layer model for calculating thrust chamber flow fields
NASA Astrophysics Data System (ADS)
Chen, C. P.; Shang, H. M.; Huang, J.
1993-07-01
Following the consensus of a workshop on Turbulence Modeling for Liquid Rocket Thrust Chambers, the current effort was undertaken to study the effects of second-order closure on predictions of thermochemical flow fields. To reduce the instability and computational intensity of the full second-order Reynolds Stress Model, an Algebraic Stress Model (ASM) coupled with a two-layer near-wall treatment was developed. Various test problems, including the compressible boundary layer with adiabatic and cooled walls, recirculating flows, swirling flows, and the entire SSME nozzle flow, were studied to assess the performance of the current model. Detailed calculations for the SSME exit wall flow around the nozzle manifold were executed. For the overall flow predictions, the ASM accounts for non-isotropic turbulence effects, removing a further assumption and allowing more appropriate comparison with experimental data.
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock-recruitment model is widely used due to its flexibility and ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S_MSY) to a fishery, and the harvest corresponding to it (U_MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S_MSY and U_MSY in terms of the productivity and density-dependent parameters of Ricker's model.
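The explicit solution referred to here involves the Lambert W function: writing the Ricker model as R = S·exp(a − bS), maximizing yield Y = R − S leads to U_MSY = 1 − W(e^(1−a)) and S_MSY = U_MSY/b. A sketch under that reading, with illustrative parameter values and a small Newton solver for W:

```python
import math

# Explicit Ricker MSY reference points via the Lambert W function
# (principal branch, computed with Newton iterations). The values of
# a (log productivity) and b (density dependence) are illustrative.

def lambert_w(z, tol=1e-12):
    """Principal-branch Lambert W for z > 0: solves w * exp(w) = z."""
    w = math.log(1.0 + z)  # good starting guess for positive z
    for _ in range(50):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def ricker_msy(a, b):
    """(S_MSY, U_MSY) for the Ricker model R = S * exp(a - b*S)."""
    u = 1.0 - lambert_w(math.exp(1.0 - a))
    return u / b, u

a, b = 1.5, 1e-4  # hypothetical stock parameters
s_msy, u_msy = ricker_msy(a, b)
# At S_MSY the marginal yield dY/dS = exp(a - b*S)*(1 - b*S) - 1 vanishes.
print(s_msy, u_msy)
```

The test below checks the defining first-order condition directly rather than any tabulated value.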
Palmer, David S; Sergiievskyi, Volodymyr P; Jensen, Frank; Fedorov, Maxim V
2010-07-28
We report on the results of testing the reference interaction site model (RISM) for the estimation of the hydration free energy of druglike molecules. The optimum model was selected after testing of different RISM free energy expressions combined with different quantum mechanics and empirical force-field methods of structure optimization and atomic partial charge calculation. The final model gave a systematic error with a standard deviation of 2.6 kcal/mol for a test set of 31 molecules selected from the SAMPL1 blind challenge set [J. P. Guthrie, J. Phys. Chem. B 113, 4501 (2009)]. After parametrization of this model to include terms for the excluded volume and the number of atoms of different types in the molecule, the root mean squared error for a test set of 19 molecules was less than 1.2 kcal/mol.
Comparison of inverse dynamics calculated by two- and three-dimensional models during walking.
Alkjaer, T; Simonsen, E B; Dyhre-Poulsen, P
2001-04-01
The purpose of the study was to compare joint moments calculated by a two- (2D) and a three-dimensional (3D) inverse dynamics model to examine how the different approaches influenced the joint moment profiles. Fifteen healthy male subjects participated in the study. A five-camera video system recorded the subjects as they walked across two force plates. The subjects were invited to approach a walking speed of 4.5 km/h. The ankle, knee and hip joint moments in the sagittal plane were calculated by 2D and 3D inverse dynamics analysis and compared. Despite the uniform walking speed (4.53 km/h) and similar footwear, relatively large inter-individual variations were found in the joint moment patterns during the stance phase. The differences between individuals were present in both the 2D and 3D analysis. For the entire sample of subjects the overall time course pattern of the ankle, knee and hip joint moments was almost identical in 2D and 3D. However, statistically significant differences were observed in the magnitude of the moments, which could be explained by differences in the joint centre location and joint axes used in the two approaches. In conclusion, there were differences between the magnitude of the joint moments calculated by 2D and 3D inverse dynamics but the inter-individual variation was not affected by the different models. The simpler 2D model seems therefore appropriate for human gait analysis. However, comparisons of gait data from different studies are problematic if the calculations are based on different approaches. A future perspective for solving this problem could be to introduce a standard proposal for human gait analysis.
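A quasi-static sketch of the simpler 2D approach: if the foot segment's mass and inertia are ignored, the sagittal-plane ankle moment reduces to the 2D cross product of the vector from the ankle joint centre to the centre of pressure with the ground reaction force. The numbers below are illustrative, not the study's data:

```python
# Quasi-static 2D ankle moment during stance, neglecting foot-segment
# mass and inertia: M = r x F (z-component), with r from the ankle joint
# centre to the centre of pressure and F the ground reaction force.

def ankle_moment_2d(ankle_xy, cop_xy, grf_xy):
    rx = cop_xy[0] - ankle_xy[0]
    ry = cop_xy[1] - ankle_xy[1]
    return rx * grf_xy[1] - ry * grf_xy[0]  # N*m

# An 800 N vertical GRF applied 0.12 m anterior to the ankle joint:
print(ankle_moment_2d((0.0, 0.1), (0.12, 0.0), (0.0, 800.0)))
```

The full inverse dynamics used in the study additionally propagates segment accelerations and inertial terms up the kinematic chain; this sketch only shows why the joint centre location (here `ankle_xy`) directly scales the computed moment, which is the source of the 2D/3D magnitude differences reported above.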
Zhang, Haiyang; Tan, Tianwei; van der Spoel, David
2015-11-10
Evaluation of solvation (binding) free energies with implicit solvent models in different dielectric environments, for biological simulations as well as high-throughput ligand screening, remains a challenging endeavor. In order to address how well implicit solvent models approximate explicit ones, we examined four generalized Born models (GB(Still), GB(HCT), GB(OBC)I, and GB(OBC)II) for determining the dimerization free energy (ΔG(0)) of β-cyclodextrin monomers in 17 implicit solvents with dielectric constants (D) ranging from 5 to 80, and compared the results to previous free energy calculations with explicit solvents (Zhang et al., J. Phys. Chem. B 2012, 116, 12684-12693). The comparison indicates that neglecting the environmental dependence of Born radii appears acceptable for such calculations involving cyclodextrin and that the GB(Still) and GB(OBC)I models yield a reasonable estimation of ΔG(0), although the details of binding are quite different from explicit solvents. Large discrepancies between implicit and explicit solvent models occur in high-dielectric media with strong hydrogen bond (HB) interruption properties. ΔG(0) with the GB models is shown to correlate strongly with 2(D-1)/(2D+1) (R(2) ∼ 0.90), in line with the Onsager reaction field (Onsager, J. Am. Chem. Soc. 1936, 58, 1486-1493), but to be very sensitive to D (D < 10) as well. Both high-dielectric environments where hydrogen bonds are of interest and low-dielectric media such as protein binding pockets and membrane interiors therefore need to be considered with caution in GB-based calculations. Finally, a literature analysis of Gibbs energies of solvation of small molecules in organic liquids shows that the Onsager relation does not hold for real molecules, since the correlation between ΔG(0) and 2(D-1)/(2D+1) is low for most solutes. Interestingly, explicit solvent calculations of the solvation free energy (Zhang et al., J. Chem. Inf. Model. 2015, 55, 1192-1201) reproduce the weak
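The Onsager reaction-field factor 2(D−1)/(2D+1) against which ΔG(0) is correlated saturates quickly with D, which is why the GB results are sensitive to the dielectric constant mainly below D ≈ 10. A one-liner makes the shape concrete:

```python
# Onsager reaction-field factor 2(D-1)/(2D+1): steep for small dielectric
# constants, saturating toward 1 as D grows large.

def onsager_factor(d):
    return 2.0 * (d - 1.0) / (2.0 * d + 1.0)

for d in (2, 5, 10, 40, 80):
    print(d, round(onsager_factor(d), 4))
```

Going from D = 2 to D = 5 changes the factor far more than going from D = 40 to D = 80, matching the reported sensitivity at low D.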
Li, Hanshan
2016-05-01
This paper investigates a method for calculating the optical characteristics of space targets in order to improve the performance and sensitivity of the photoelectric detection target. Based on the detection principle of the photoelectric detection target and the geometrical relationship of the detection screen thickness, a spectral characteristic model of the space target is set up using surface element mesh analysis and the bidirectional reflectance distribution function. A function is derived for the incident radiation energy at the entrance pupil of the optical lens in the detection area of the photoelectric detection target, from which the total spectral radiant intensity of the space target is determined. The paper also derives the minimum flux detectable by the photoelectric detection target based on the definition of detection sensitivity, and presents the change in the target's radiation energy when entering the detection area at different incident angles. Lastly, it demonstrates the spectral illuminance of an optical detection system under different radiation wavelengths and reflection angles, as well as the change in the target's spectral radiant intensity when passing through the detection screen area at different incident angles from the same distance.
GPU-based ultra-fast dose calculation using a finite size pencil beam model
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.
2009-10-01
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposition coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
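A toy sketch of the pencil-beam superposition (in plain Python rather than CUDA; the kernel shape and weights are made up): the dose at each voxel is an independent weighted sum over per-beamlet kernels, which is exactly the data-parallel structure that maps well to one-GPU-thread-per-voxel execution:

```python
import math

# Toy finite-size pencil beam (FSPB) superposition on a 1D voxel row:
# each beamlet deposits a Gaussian lateral spread scaled by its weight.
# A real engine precomputes these dose deposition coefficients per
# beamlet/voxel pair; the kernel here is purely illustrative.

def fspb_dose(voxel_x, beamlet_x, weights, sigma=2.0):
    dose = []
    for x in voxel_x:  # each voxel is independent -> data-parallel
        d = sum(w * math.exp(-0.5 * ((x - bx) / sigma) ** 2)
                for bx, w in zip(beamlet_x, weights))
        dose.append(d)
    return dose

voxels = [i * 1.0 for i in range(11)]   # voxel centers, mm
beamlets = [4.0, 5.0, 6.0]              # beamlet centers, mm
dose = fspb_dose(voxels, beamlets, [1.0, 1.0, 1.0])
print(dose.index(max(dose)))  # peak falls at the field center
```

Because no voxel's result depends on another's, the outer loop parallelizes trivially, which is the property behind the reported 200-400x GPU speedups.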
Three-body calculations for the K−pp system within potential models
NASA Astrophysics Data System (ADS)
Kezerashvili, R. Ya; Tsiklauri, S. M.; Filikhin, I.; Suslov, V. M.; Vlahovic, B.
2016-06-01
We present three-body nonrelativistic calculations within the framework of a potential model for the kaonic cluster K−pp using two methods: the method of hyperspherical harmonics in the momentum representation and the method of Faddeev equations in configuration space. To perform numerical calculations, different NN and antikaon-nucleon interactions are applied. The results of the calculations for the ground-state energy for the K−pp system obtained by both methods are in reasonable agreement. Although the ground-state energy is not sensitive to the pp interaction, it shows very strong dependence on the K−p potential. We show that the dominant clustering of the K−pp system in the configuration Λ(1405) + p allows us to calculate the binding energy to good accuracy within a simple cluster approach for the differential Faddeev equations. The theoretical discrepancies in the binding energy and width for the K−pp system related to the different pp and K−p interactions are addressed.
NASA Astrophysics Data System (ADS)
Wu, Qiong; Li, Shu-Suo; Ma, Yue; Gong, Sheng-Kai
2012-10-01
The diffusion coefficients of several alloying elements (Al, Mo, Co, Ta, Ru, W, Cr, Re) in Ni are directly calculated using the five-frequency model and first-principles density functional theory. The correlation factors provided by the five-frequency model are explicitly calculated. The calculated diffusion coefficients show excellent agreement with the available experimental data. Both the diffusion pre-factor (D0) and the activation energy (Q) of impurity diffusion are obtained. The diffusion coefficients above 700 K are sorted in the following order: D_Al > D_Cr > D_Co > D_Ta > D_Mo > D_Ru > D_W > D_Re. It is found that there is a positive correlation between the atomic radius of the solute and the jump energy of Ni that results in the rotation of the solute-vacancy pair (E1). The value of E2 - E1 (where E2 is the solute diffusion energy) and the correlation factor each also show a positive correlation. The larger atoms in the same series have lower diffusion activation energies and faster diffusion coefficients.
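A pre-factor D0 and activation energy Q define an Arrhenius-type diffusivity D(T) = D0·exp(−Q/(kB·T)), so at fixed temperature the solute with the lower Q diffuses faster (for equal D0). A sketch with placeholder values, not the paper's first-principles numbers:

```python
import math

# Arrhenius diffusivity D(T) = D0 * exp(-Q / (kB * T)). D0 and Q below
# are hypothetical placeholders; the point is that a lower activation
# energy gives the faster diffuser at a fixed temperature.

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def diffusivity(d0_m2s, q_ev, temp_k):
    return d0_m2s * math.exp(-q_ev / (KB_EV * temp_k))

# Two hypothetical solutes with the same pre-factor:
d_fast = diffusivity(1e-5, 2.5, 1000.0)  # lower Q
d_slow = diffusivity(1e-5, 3.0, 1000.0)  # higher Q
print(d_fast > d_slow)
```

Evaluating such expressions for each solute at T > 700 K is what produces a ranking like the D_Al > ... > D_Re ordering quoted in the abstract.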
NASA Astrophysics Data System (ADS)
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.
2011-06-01
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiat. Meas. 41, 1080 (2006), 10.1016/j.radmeas.2006.07.013]. The JAM interaction model agrees with the HARP experiment [HARP Collaboration, Astropart. Phys. 30, 124 (2008), 10.1016/j.astropartphys.2008.07.007] a little better than DPMJET-III [S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of the atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007), 10.1103/PhysRevD.75.043005; M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007), 10.1103/PhysRevD.75.043006]. Some improvements in the calculation of the atmospheric neutrino flux are also reported.
NASA Astrophysics Data System (ADS)
Espel, Federico Puente
The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback in the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel-based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly and, in the best case, over small spatial zones consisting of a pin cell), which are functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometric complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computing technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous-energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. Interest in Monte Carlo methods has increased with the improved capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods.
Kan, An-Kang; Cao, Dan; Zhang, Xue-Lai
2015-04-01
Accurately predicting the effective thermal conductivity of fibrous materials is highly desirable but remains a challenging task. In this paper, the microstructure of porous fiber materials is analyzed, approximated and modeled on the basis of the statistical self-similarity of fractal theory. A fractal model is presented to accurately calculate the effective thermal conductivity of fibrous porous materials. Taking the two-phase heat transfer effect into account, the existing statistical microscopic geometrical characteristics are analyzed and the Hertzian contact solution is introduced to calculate the thermal resistance of contact points. Using the fractal method, the impacts of various factors, including the porosity, fiber orientation, fractal diameter and dimension, rarefied air pressure, bulk thermal conductivity coefficient, thickness and environmental conditions, on the effective thermal conductivity are analyzed. The calculation results show that the fiber orientation angle makes the effective thermal conductivity of the material anisotropic, and a normal distribution is introduced into the mathematical function. The effective thermal conductivity of fibrous material increases with the fiber fractal diameter, fractal dimension and rarefied air pressure within the material, but decreases with increasing vacancy porosity.
Calculation of heat flux through a wall containing a cavity: Comparison of several models
NASA Astrophysics Data System (ADS)
Park, J. E.; Kirkpatrick, J. R.; Tunstall, J. N.; Childs, K. W.
1986-02-01
This paper describes the calculation of the heat transfer through the standard stud wall structure of a residential building. The wall cavity contains no insulation. Results from five test cases are presented. The first four represent progressively more complicated approximations to the heat transfer through and within a hollow wall structure. The fifth adds the model components necessary to severely inhibit the radiative energy transport across the empty cavity. Flow within the wall cavity is calculated from the Navier-Stokes equations and the energy conservation equation for an ideal gas using an improvement to the Implicit-Compressible Eulerian (ICE) algorithm of Harlow and Amsden. An algorithm is described to efficiently couple the fluid flow calculations to the radiation-conduction model for the solid portions of the system. Results indicate that conduction through sill plates contributes less than 2% of the total heat transferred through a composite wall. All of the other elements (conduction through wallboard, sheathing, and siding; convection from siding and wallboard to ambients; and radiation across the wall cavity) are required to accurately predict the heat transfer through a wall. Addition of a foil liner on one inner surface of the wall cavity reduces the total heat transferred by almost 50%.
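The foil-liner result above can be illustrated with a far simpler tool than the paper's coupled CFD-radiation model: a one-dimensional resistance network in which cavity convection and radiation act in parallel. All layer resistances, film coefficients, and emissivities below are illustrative assumptions, not the paper's inputs.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def cavity_h_rad(eps1, eps2, t_mean):
    """Linearized radiation coefficient across a parallel-plate cavity."""
    eps_eff = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    return 4.0 * eps_eff * SIGMA * t_mean**3

def wall_heat_flux(t_in, t_out, r_solid, h_conv_cavity, h_rad):
    """Heat flux (W/m^2): solid layers in series with the cavity, where
    convection and radiation act in parallel across the air gap."""
    r_cavity = 1.0 / (h_conv_cavity + h_rad)
    return (t_in - t_out) / (r_solid + r_cavity)

# Plain cavity (painted surfaces, eps ~ 0.9) vs. foil-lined (eps ~ 0.05).
t_mean = 283.0  # K, mean cavity temperature
q_plain = wall_heat_flux(293.0, 273.0, 0.5, 2.0, cavity_h_rad(0.9, 0.9, t_mean))
q_foil = wall_heat_flux(293.0, 273.0, 0.5, 2.0, cavity_h_rad(0.05, 0.9, t_mean))
print(q_plain, q_foil)  # the foil sharply cuts the radiative contribution
```

The low-emissivity liner suppresses the radiative branch of the cavity resistance, which is why the paper finds that removing radiation transport roughly halves the total heat transfer.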
Wijesinghe, R S; Tepley, N
1997-01-01
In our previous model, we ascertained that the large amplitude waves (LAWs) reported by Barkley and coworkers (1990) in time series magnetoencephalography (MEG) recordings from migraine patients could be simulated and compared with the recorded signals using a simple plane volume conductor model (Tepley and Wijesinghe 1996). In this paper, we model LAWs with the help of a more complicated yet more reliable four-sphere model. This mathematical model again assumes that the LAWs arise from the propagation of spreading cortical depression (SCD) across a sulcus, and the simulated signals are more similar to the recorded signals than those obtained from our previous model. SCD propagates slowly across the cortex in all species in which it has been observed. In our model, current dipoles represent the excitable neurons in the cortex, and the magnetic fields created by these individual dipoles are calculated using a four-sphere model. The magnetic field arising from the excited area of cortex is obtained by summing the fields due to these individual dipoles. Sulci shapes are represented by simple mathematical formulae. PMID:9104830
Unifying Algebraic and Large-Scale Shell-Model Approaches in Nuclear Structure Calculations
NASA Astrophysics Data System (ADS)
Draayer, Jerry P.
1997-04-01
The shell model is the most robust theory for addressing nuclear structure questions. Unfortunately, it is only as good as the input hamiltonian and the appropriateness of the selected model space, and both of these elements usually prove to be a significant challenge. There are three basic theories: 1) algebraic models, boson and fermion, which focus on symmetries, exact and approximate, of a hamiltonian and usually use model spaces that are severely truncated; 2) numerically oriented schemes that accommodate larger spaces but rely on special techniques and algorithms for producing convergent results; and 3) models that employ statistical concepts, like statistical spectroscopy of the 70s and 80s and Monte Carlo methods of the 90s, schemes that are not limited by the usual dimensionality considerations. These three approaches and their various realizations and extensions, with their pluses and minuses, will be considered. In addition, opportunities that exist for defining a scheme that employs the best of all three approaches to yield a symmetry adapted theory that is not limited to simplified spaces and hamiltonians and yet remains tractable even for large-scale calculations of the type that are required for testing a theory against experimental data and for predicting new physical phenomena will be explored. Special attention will be focused on unifying themes linking the shell-model with the simpler and yet highly successful mean-field and collective-model theories. As a example of the latter, some recent results using the symplectic shell model will be presented.
Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
2013-01-01
The Menter Shear Stress Transport (SST) k-ω turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter's baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows, including those with small separations. In shock - boundary layer interaction regions, the SST model produces separations that are too large, while the BSL model is on the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally, and experimental data are also considered to justify raising the value of a1 used for shock separated flows.
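The limiter discussed above enters the model through the eddy viscosity, commonly written as nu_t = a1 k / max(a1 ω, S F2). A small sketch of that expression, with arbitrary illustrative inputs, shows why raising a1 matters only where the limiter is active (strongly strained regions such as shock interactions):

```python
def sst_eddy_viscosity(k, omega, strain_rate, f2, a1=0.31):
    """SST-style eddy viscosity: nu_t = a1*k / max(a1*omega, S*F2).
    k: turbulent kinetic energy, omega: specific dissipation rate,
    strain_rate: strain-rate magnitude S, f2: blending function F2."""
    return a1 * k / max(a1 * omega, strain_rate * f2)

# Strongly strained region (S*F2 > a1*omega): the limiter is active, so a
# larger a1 directly allows more turbulent shear stress.
nu_031 = sst_eddy_viscosity(k=1.0, omega=100.0, strain_rate=500.0, f2=1.0, a1=0.31)
nu_0355 = sst_eddy_viscosity(k=1.0, omega=100.0, strain_rate=500.0, f2=1.0, a1=0.355)
print(nu_031, nu_0355)

# Mildly strained region (S*F2 < a1*omega): the limiter is inactive and the
# result reduces to k/omega, independent of a1.
nu_mild = sst_eddy_viscosity(k=1.0, omega=100.0, strain_rate=10.0, f2=1.0)
print(nu_mild)
```

This is only the limiter algebra, not a flow solver; it illustrates the mechanism by which the recalibrated a1 changes shock-separated predictions while leaving attached, mildly strained flows untouched.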
Calculations of inflaton decays and reheating: with applications to no-scale inflation models
Ellis, John; Garcia, Marcos A.G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2015-07-30
We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, w, during the epoch of inflaton decay, the reheating temperature, T_reh, and the number of inflationary e-folds, N_*, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index n_s and the tensor-to-scalar perturbation ratio r, converting them into constraints on N_*, the inflaton decay rate and other parameters of specific no-scale inflationary models.
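The link between the inflaton decay rate and T_reh can be sketched with the standard instantaneous-decay estimate, T_reh ~ (90 / (π² g_*))^(1/4) (Γ M_P)^(1/2). This is a textbook approximation, not the paper's full numerical treatment, and the decay rate used below is an arbitrary placeholder.

```python
import math

M_P = 2.435e18  # reduced Planck mass, GeV

def t_reh(gamma, g_star=915.0 / 4.0):
    """Instantaneous-decay reheating temperature in GeV.
    gamma: inflaton decay rate in GeV; g_star defaults to the MSSM value."""
    return (90.0 / (math.pi**2 * g_star)) ** 0.25 * math.sqrt(gamma * M_P)

# Weaker couplings (smaller Gamma) reheat at lower temperatures, which in
# turn lowers the inferred number of e-folds N_*.
print(t_reh(1e-24), t_reh(1e-20))
```

The scaling T_reh ∝ Γ^(1/2) is why the Yukawa-coupled and gravitational-strength decay scenarios in the abstract lead to very different reheating histories and hence different N_*.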
A Comparison of Model Calculation and Measurement of Absorbed Dose for Proton Irradiation. Chapter 5
NASA Technical Reports Server (NTRS)
Zapp, N.; Semones, E.; Saganti, P.; Cucinotta, F.
2003-01-01
With the increase in the amount of time spent EVA that is necessary to complete the construction and subsequent maintenance of ISS, it will become increasingly important for ground support personnel to accurately characterize the radiation exposures incurred by EVA crewmembers. Since exposure measurements cannot be taken within the organs of interest, it is necessary to estimate these exposures by calculation. To validate the methods and tools used to develop these estimates, it is necessary to model experiments performed in a controlled environment. This work is such an effort. A human phantom was outfitted with detector equipment and then placed in American EMU and Orlan-M EVA space suits. The suited phantom was irradiated at the LLUPTF with proton beams of known energies. Absorbed dose measurements were made by the spaceflight operational dosimetrist from JSC at multiple sites in the skin, eye, brain, stomach, and small intestine locations in the phantom. These exposures are then modeled using the BRYNTRN radiation transport code developed at the NASA Langley Research Center, and the CAM (computerized anatomical male) human geometry model of Billings and Yucker. Comparisons of absorbed dose calculations with measurements show excellent agreement. This suggests that there is reason to be confident in the ability of both the transport code and the human body model to estimate proton exposure in ground-based laboratory experiments.
Development of Aerosol Models for Radiative Flux Calculations at ARM Sites
Ogren, John A.; Dutton, Ellsworth G.; McComiskey, Allison C.
2006-09-30
The direct radiative forcing (DRF) of aerosols, the change in net radiative flux due to aerosols in non-cloudy conditions, is an essential quantity for understanding the human impact on climate change. Our work has addressed several key issues that determine the accuracy, and identify the uncertainty, with which aerosol DRF can be modeled. These issues include the accuracy of several radiative transfer models when compared to measurements and to each other in a highly controlled closure study using data from the ARM 2003 Aerosol IOP. The primary focus of our work has been to determine an accurate approach to assigning aerosol properties appropriate for modeling over averaged periods of time and space that represent the observed regional variability of these properties. We have also undertaken a comprehensive analysis of the aerosol properties that contribute most to uncertainty in modeling aerosol DRF, and under what conditions they contribute the most uncertainty. Quantification of these issues enables the community to better state accuracies of radiative forcing calculations and to concentrate efforts in areas that will decrease uncertainties in these calculations in the future.
Field evaluation of a two-dimensional hydrodynamic model near boulders for habitat calculation
Waddle, Terry
2010-01-01
Two-dimensional hydrodynamic models are now widely used in aquatic habitat studies. To test the sensitivity of calculated habitat outcomes to limitations of such a model and of typical field data, bathymetry, depth and velocity data were collected for three discharges in the vicinity of two large boulders in the South Platte River (Colorado) and used in the River2D model. Simulated depth and velocity were compared with observed values at 204 locations and the differences in habitat numbers produced by observed and simulated conditions were calculated. The bulk of the differences between simulated and observed depth and velocity values were found to lie within the likely error of measurement. However, the effect of flow simulation outliers on potential habitat outcomes must be considered when using 2D models for habitat simulation. Furthermore, the shape of the habitat suitability relation can influence the effects of simulation errors. Habitat relations with steep slopes in the velocity ranges found in similar study areas are expected to be sensitive to the magnitude of error found here. Comparison of habitat values derived from simulated and observed depth and velocity revealed a small tendency to under-predict habitat values.
Direct calculation of ice homogeneous nucleation rate for a molecular model of water.
Haji-Akbari, Amir; Debenedetti, Pablo G
2015-08-25
Ice formation is ubiquitous in nature, with important consequences in a variety of environments, including biological cells, soil, aircraft, transportation infrastructure, and atmospheric clouds. However, its intrinsic kinetics and microscopic mechanism are difficult to discern with current experiments. Molecular simulations of ice nucleation are also challenging, and direct rate calculations have only been performed for coarse-grained models of water. For molecular models, only indirect estimates have been obtained, e.g., by assuming the validity of classical nucleation theory. We use a path sampling approach to perform, to our knowledge, the first direct rate calculation of homogeneous nucleation of ice in a molecular model of water. We use TIP4P/Ice, the most accurate among existing molecular models for studying ice polymorphs. By using a novel topological approach to distinguish different polymorphs, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice in the early stages of nucleation. In this competition, the cubic polymorph takes over because the addition of new topological structural motifs consistent with cubic ice leads to the formation of more compact crystallites. This is not true for topological hexagonal motifs, which give rise to elongated crystallites that are not able to grow. This leads to transition states that are rich in cubic ice, and not the thermodynamically stable hexagonal polymorph. This mechanism provides a molecular explanation for the earlier experimental and computational observations of the preference for cubic ice in the literature.
Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model
NASA Technical Reports Server (NTRS)
Kantelis, J. P.; Widnall, S. E.
1986-01-01
A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straightforward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactory results, with the vortex elements moving in a chaotic manner. The difficulty is assumed to be due to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and due to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet with good results.
Lito, Patrícia F; Magalhães, Ana L; Gomes, José R B; Silva, Carlos M
2013-05-17
In this work, a new model is presented for the accurate calculation of binary diffusivities (D12) of solutes infinitely diluted in gas, liquid and supercritical solvents. It is based on a Lennard-Jones (LJ) model and contains two parameters: the molecular diameter of the solvent and a diffusion activation energy. The model is universal, since it is applicable to polar, weakly polar, and non-polar solutes and/or solvents over wide ranges of temperature and density. Its validation was accomplished with the largest database ever compiled, namely 487 systems with 8293 points in total, covering polar (180 systems/2335 points) and non-polar or weakly polar (307 systems/5958 points) mixtures, for which the average errors were 2.65% and 2.97%, respectively. With regard to the physical states of the systems, the average deviations achieved were 1.56% for gaseous (73 systems/1036 points), 2.90% for supercritical (173 systems/4398 points), and 2.92% for liquid (241 systems/2859 points) systems. Furthermore, the model exhibited excellent prediction ability. Ten expressions from the literature were adopted for comparison, but provided worse results or were not applicable to polar systems. A spreadsheet for D12 calculation is provided online for users in the Supplementary Data.
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1995-11-01
During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessment of the risk to the public from such accidents. A simplified PC-based model was developed that predicts time-integrated air concentration of each radionuclide at any location from release as a function of time integrated source strength using the Gaussian plume model. The solution procedure involves direct analytic integration of air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on the doses. This paper presents aspects of the model that would be of interest to the prediction of environmental flows and their public consequences.
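The core of the dispersion calculation described above is the standard Gaussian plume formula with a ground-reflection image term. A minimal sketch follows; in practice the dispersion widths sigma_y and sigma_z grow with downwind distance and atmospheric stability class, but they are fixed illustrative values here, and none of the deposition, decay, or wake corrections from the model are included.

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (source units per m^3)
    at crosswind offset y and height z, for a continuous point source of
    strength q, wind speed u, and effective release height h."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    # Ground reflection: add an image source at height -h.
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level concentration for an illustrative release.
c = plume_concentration(q=1.0, u=5.0, y=0.0, z=0.0, h=50.0,
                        sigma_y=80.0, sigma_z=40.0)
print(c)
```

The model in the abstract integrates this kind of expression analytically over time and position; the sketch only shows the spatial kernel being integrated.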
Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.
1999-01-01
Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.
Linden, D.S.
1993-05-01
The traditional two-fluid model of superconducting conductivity was modified to make it accurate, while remaining fast, for designing and simulating microwave devices. The modification reflects the BCS coherence effects in the conductivity of a superconductor and is incorporated through the ratio of normal to superconducting electrons. This modified ratio is a simple analytical expression which depends on frequency, temperature and material parameters. The modified two-fluid model allows accurate and rapid calculation of the microwave surface impedance of a superconductor in the clean and dirty limits and in the weak- and strong-coupled regimes. The model compares well with surface resistance data for Nb and provides insight into Nb3Sn and YBa2Cu3O(7-δ). Numerical calculations with the modified two-fluid model are an order of magnitude faster than the quasi-classical program by Zimmermann (1), and two to five orders of magnitude faster than Halbritter's BCS program (2) for surface resistance.
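The structure of a two-fluid surface-impedance calculation can be sketched in a few lines: Z_s = sqrt(iμ0ω / (σ1 - iσ2)), with σ1 carried by the normal fraction and σ2 by the superfluid fraction. The paper's modification replaces the normal-fraction ratio with a BCS-informed expression; here the classic Gorter-Casimir (T/Tc)^4 ratio stands in as a placeholder, so the numbers illustrate only the shape of the calculation, not the modified model itself.

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def surface_impedance(freq, t, tc, sigma_n, lambda_l):
    """Two-fluid surface impedance Z_s = sqrt(i*mu0*omega/(sigma1 - i*sigma2)).
    freq in Hz, t/tc temperatures in K, sigma_n normal-state conductivity in
    S/m, lambda_l the London penetration depth in m."""
    omega = 2.0 * math.pi * freq
    f_n = (t / tc) ** 4                  # normal fraction (placeholder model)
    sigma1 = f_n * sigma_n               # normal-fluid (lossy) conductivity
    sigma2 = (1.0 - f_n) / (MU0 * omega * lambda_l**2)  # superfluid response
    return cmath.sqrt(1j * MU0 * omega / (sigma1 - 1j * sigma2))

# Nb-like illustrative inputs at 10 GHz: the surface resistance Re(Z_s)
# rises steeply as T approaches Tc.
zs_low = surface_impedance(10e9, 2.0, 9.2, 2e7, 40e-9)
zs_high = surface_impedance(10e9, 8.0, 9.2, 2e7, 40e-9)
print(zs_low.real, zs_high.real)
```

The speed advantage cited in the abstract comes precisely from this closed-form evaluation, versus numerically solving the full BCS or quasi-classical expressions.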
Experimental verification of a Monte Carlo-based MLC simulation model for IMRT dose calculation
Tyagi, Neelam; Moran, Jean M.; Litzenberg, Dale W.; Bielajew, Alex F.; Fraass, Benedick A.; Chetty, Indrin J.
2007-02-15
Inter- and intra-leaf transmission and head scatter can play significant roles in intensity modulated radiation therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head, be accurately modeled. In this paper, we have used the Monte Carlo (MC) method to develop a comprehensive model of the Varian 120-leaf MLC and have compared it against measurements in homogeneous phantom geometries under different IMRT delivery circumstances. We have developed a geometry module within the DPM MC code to simulate the detailed MLC design and the collimating jaws. Tests consisting of leakage, leaf positioning and static MLC shapes were performed to verify the accuracy of transport within the MLC model. The calculations show agreement within 2% in the high dose region for both film and ion-chamber measurements for these static shapes. Clinical IMRT treatment plans for the breast [both segmental MLC (SMLC) and dynamic MLC (DMLC)], prostate (SMLC) and head and neck split fields (SMLC) were also calculated and compared with film measurements. This range of cases was chosen to investigate the accuracy of the model as a function of modulation in the beamlet pattern, beamlet width, and field size. The overall agreement is within 2%/2 mm of the film data for all IMRT beams except the head and neck split field, which showed differences up to 5% in the high dose regions. Various sources of uncertainties in these comparisons are discussed.
Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models
Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.
2008-01-01
Background: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
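A toy version of the sampling experiment described above: λ is the dominant eigenvalue of the projection matrix (found here by power iteration), and re-estimating the survival entries from small binomial samples spreads the estimated λ. The 3-stage matrix is an invented example, not the paper's demographic data.

```python
import random
import statistics

# Invented 3-stage projection matrix: row 0 holds fecundities, the
# sub-diagonal entries hold survival/transition probabilities.
A_TRUE = [[0.0, 1.5, 3.0],
          [0.5, 0.0, 0.0],
          [0.0, 0.4, 0.8]]

def growth_rate(a, iters=200):
    """Dominant eigenvalue (lambda) by power iteration; valid here because
    the projection matrix is non-negative."""
    v = [1.0, 1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(3)) for row in a]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

def sampled_lambda(n, rng):
    """Replace each survival entry by a binomial estimate from n individuals."""
    a = [row[:] for row in A_TRUE]
    for i, j in [(1, 0), (2, 1), (2, 2)]:
        a[i][j] = sum(rng.random() < A_TRUE[i][j] for _ in range(n)) / n
    return growth_rate(a)

rng = random.Random(0)
spread_small = statistics.pstdev(sampled_lambda(10, rng) for _ in range(200))
spread_large = statistics.pstdev(sampled_lambda(1000, rng) for _ in range(200))
print(spread_small, spread_large)  # sampling spread shrinks as n grows
```

Because λ is a nonlinear function of the vital rates, the wider spread at small n is exactly the mechanism through which Jensen's Inequality produces the bias the paper quantifies.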
Large-scale shell model calculations for even-even 62-66Fe isotopes
NASA Astrophysics Data System (ADS)
Srivastava, P. C.; Mehrotra, I.
2009-10-01
The recently measured experimental data from the Legnaro National Laboratories on neutron-rich even isotopes 62-66Fe with A = 62, 64, 66 have been interpreted in the framework of a large-scale shell model. Calculations have been performed with the newly derived effective interaction GXPF1A in the full fp space without truncation. The experimental data are very well explained for 62Fe, satisfactorily reproduced for 64Fe, and poorly fitted for 66Fe. The increasing collectivity reflected in the experimental data when approaching N = 40 is not reproduced by the calculated values. This indicates that whereas the considered valence space is adequate for 62Fe, the inclusion of higher orbits from the sdg shell is required to describe 66Fe.
Solar particle events observed at Mars: dosimetry measurements and model calculations.
Cleghorn, Timothy F; Saganti, Premkumar B; Zeitlin, Cary J; Cucinotta, Francis A
2004-01-01
During the period from March 13, 2002, to mid-September 2002, six solar particle events (SPE) were observed by the MARIE instrument onboard the Odyssey spacecraft in Martian orbit. These events were also observed by the GOES 8 satellite in Earth orbit, and thus represent the first time that the same SPE have been observed at these two separate locations. The characteristics of these SPE are examined, given that the active regions of the solar disc from which the events originated can usually be identified. The dose rates at Martian orbit are calculated for both the galactic and solar components of the ionizing particle radiation environment. The dose rates due to galactic cosmic rays (GCR) agree well with the HZETRN model calculations.
Calculating model of light transmission efficiency of diffusers attached to a lighting cavity.
Sun, Ching-Cherng; Chien, Wei-Ting; Moreno, Ivan; Hsieh, Chih-To; Lin, Mo-Cha; Hsiao, Shu-Li; Lee, Xuan-Hao
2010-03-15
A lighting cavity is a reflecting box with light sources inside. Its exit side is covered with a diffuser plate to mix and distribute light, which addresses a key issue of luminaires, display backlights, and other illumination systems. We derive a simple but precise formula for the optical efficiency of diffuser plates attached to a light cavity. We overcome the complexity of the scattering theory and the difficulty of the multiple calculations involved, by carrying out the calculation with a single ray of light that statistically represents all the scattered rays. We constructed and tested several optical cavities using light-emitting diodes, bulk-scattering diffusers, white scatter sheets, and silver coatings. All measurements are in good agreement with predictions from our optical model.
Sensitivity of model calculations to uncertain inputs, with an application to neutron star envelopes
NASA Technical Reports Server (NTRS)
Epstein, R. I.; Gudmundsson, E. H.; Pethick, C. J.
1983-01-01
A method is given for determining the sensitivity of certain types of calculations to the uncertainties in the input physics or model parameters; this method is applicable to problems that involve solutions to coupled, ordinary differential equations. In particular the sensitivity of calculations of the thermal structure of neutron star envelopes to uncertainties in the opacity and equation of state is examined. It is found that the uncertainties in the relationship between the surface and interior temperatures of a neutron star are due almost entirely to the imprecision in the values of the conductive opacity in the region where the ions form a liquid; here the conductive opacity is, for the most part, due to the scattering of electrons from ions.
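A minimal sketch of the kind of sensitivity analysis described above, using a toy linear cooling law rather than the neutron-star envelope equations (the ODE, parameter value, and tolerances are illustrative). The logarithmic sensitivity of the solution to an input parameter is estimated by centered finite differences; for dT/dx = -k*T the analytic value at x = 1 is d ln T / d ln k = -k:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy analogue of the paper's setting (not the neutron-star equations): a
# single linear cooling law dT/dx = -k*T with T(0) = 1, whose solution
# T(1) = exp(-k) gives the logarithmic sensitivity d ln T(1)/d ln k = -k.
def T_end(k):
    sol = solve_ivp(lambda x, T: -k * T, (0.0, 1.0), [1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def log_sensitivity(k, eps=1e-4):
    """Centered finite-difference estimate of d ln T(1) / d ln k."""
    up, dn = T_end(k * (1 + eps)), T_end(k * (1 - eps))
    return (np.log(up) - np.log(dn)) / (2 * eps)

k = 2.0
S = log_sensitivity(k)   # analytic value is -k = -2
```

A sensitivity near -2 flags that a 1% uncertainty in k produces roughly a 2% uncertainty in the end state, which is the kind of statement the paper's method makes about opacity uncertainties.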
An equivalent circuit model and power calculations for the APS SPX crab cavities.
Berenc, T.
2012-03-21
An equivalent parallel resistor-inductor-capacitor (RLC) circuit with beam loading for a polarized TM110 dipole-mode cavity is developed, and minimum radio-frequency (rf) generator requirements are calculated for the Advanced Photon Source (APS) short-pulse x-ray (SPX) superconducting rf (SRF) crab cavities. From the beam-loaded circuit model, the single-cavity minimum steady-state generator power is determined for a storage-ring current of 200 mA DC as a function of external Q for various vertical offsets, including beam tilt and uncontrollable detuning. Calculations to aid machine protection considerations are also given.
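The steady-state generator-power calculation can be sketched with the standard beam-loaded equivalent-circuit result for a cavity with loaded quality factor QL, shunt impedance parameter R/Q, beam current Ib at phase phi, and detuning df. The numbers below are illustrative, not the APS SPX design values, and the formula is the generic textbook expression rather than the paper's derivation:

```python
import numpy as np

# Standard equivalent-circuit generator power for a beam-loaded cavity
# (generic form; illustrative parameters, not the APS SPX design values).
def generator_power(Vc, RoQ, QL, Ib, phi, df, f0):
    """Forward power (W) needed to hold voltage Vc against beam and detuning."""
    ib_term = RoQ * QL * Ib / Vc
    inphase = 1.0 + ib_term * np.cos(phi)              # resistive beam loading
    quadrature = 2.0 * QL * df / f0 + ib_term * np.sin(phi)  # detuning term
    return Vc**2 / (4.0 * RoQ * QL) * (inphase**2 + quadrature**2)

# With no beam and no detuning the power reduces to Vc^2 / (4 (R/Q) QL):
P0 = generator_power(Vc=1e6, RoQ=50.0, QL=1e6, Ib=0.0, phi=0.0,
                     df=0.0, f0=2.8e9)
P_detuned = generator_power(Vc=1e6, RoQ=50.0, QL=1e6, Ib=0.0, phi=0.0,
                            df=100.0, f0=2.8e9)
```

Scanning QL (i.e., external Q) for fixed beam loading and detuning reproduces the familiar minimum in required generator power that the abstract refers to.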
POF misalignment model based on the calculation of the radiation pattern using the Hankel transform.
Mateo, J; Losada, M A; López, A
2015-03-23
Here, we propose a method to estimate misalignment losses that is based on calculating the radiated angular power distribution as light propagates through space from the fiber far-field pattern (FFP), simplifying and speeding up the calculations with the Hankel transform. This method gives good estimates of combined transversal and longitudinal losses at short, intermediate, and long offset distances. In addition, the same methodology can be adapted to describe not only the scalar loss but also its angular dependence caused by misalignments. We show that this approach can be applied to upgrade a connector matrix included in a propagation model that is integrated into simulation software. In this way, we assess the effects of misalignments at different points in the link and are able to predict the performance of different layouts at the system level.
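For an axially symmetric far-field pattern, the 2-D Fourier transform underlying free-space propagation reduces to a zero-order Hankel transform, which is the simplification the abstract exploits. The sketch below evaluates that transform by direct quadrature (a brute-force stand-in for the fast algorithms the paper uses; the test function is illustrative):

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Zero-order Hankel transform of a radially symmetric function f(r):
#   F(q) = 2*pi * Integral_0^inf f(r) * J0(2*pi*q*r) * r dr
# (a sketch of the reduction, not the paper's fiber-specific model).
def hankel0(f, q, r_max=10.0):
    val, _ = quad(lambda r: f(r) * j0(2 * np.pi * q * r) * r,
                  0.0, r_max, limit=200)
    return 2 * np.pi * val

# Under this convention a Gaussian is its own Hankel transform, which gives
# a convenient correctness check:
gauss = lambda r: np.exp(-np.pi * r**2)
F_half = hankel0(gauss, 0.5)      # should be close to exp(-pi * 0.25)
```

In practice a discrete (quasi-fast) Hankel transform replaces the quadrature so the radiated pattern can be evaluated on a whole grid of spatial frequencies at once.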
Calculation of the wetting parameter from a cluster model in the framework of nanothermodynamics.
García-Morales, V; Cervera, J; Pellicer, J
2003-06-01
The critical wetting parameter omega(c) determines the strength of interfacial fluctuations in critical wetting transitions. In this Brief Report, we calculate omega(c) from considerations on critical liquid clusters inside a vapor phase. The starting point is a cluster model developed by Hill and Chamberlin in the framework of nanothermodynamics [Proc. Natl. Acad. Sci. USA 95, 12779 (1998)]. Our calculations yield results for omega(c) between 0.52 and 1.00, depending on the degrees of freedom considered. The findings are in agreement with previous experimental results and give an idea of the universal dynamical behavior of the clusters when approaching criticality. We suggest that this behavior is a combination of translation and vortex rotational motion (omega(c)=0.84). PMID:16241275
NASA Technical Reports Server (NTRS)
Livne, Eli
1989-01-01
A method is presented for generating mode shapes for model order reduction in a way that leads to accurate calculation of eigenvalue derivatives and eigenvalues for a class of control augmented structures. The method is based on treating degrees of freedom where control forces act or masses are changed in a manner analogous to that used for boundary degrees of freedom in component mode synthesis. It is especially suited for structures controlled by a small number of actuators and/or tuned by a small number of concentrated masses whose positions are predetermined. A control augmented multispan beam with closely spaced natural frequencies is used for numerical experimentation. A comparison with reduced-order eigenvalue sensitivity calculations based on the normal modes of the structure shows that the method presented produces significant improvements in accuracy.
NASA Astrophysics Data System (ADS)
Li, Zheng; Sohn, Ilyoup; Levin, Deborah A.; Modest, Michael F.
2011-05-01
The current work implemented excited levels of atomic N and the corresponding electron-impact excitation/de-excitation and ionization processes in DSMC. Results show that when excitation models are included, the Stardust 68.9 km re-entry flow exhibits an observable change in the ion number densities and electron temperature. Adding the excited levels of atoms increases the degree of ionization by providing additional intermediate steps to ionization. The extra ionization reactions consume electron energy and reduce the electron temperature. The DSMC number densities of excited levels are lower than the predictions of the quasi-steady-state (QSS) calculation. Comparison of radiation calculations using electronic excited populations from DSMC and QSS indicates that, at the stagnation point, the radiative heat flux differs by about 20% between DSMC and QSS.
A new pencil beam model for photon dose calculations in heterogeneous media.
Zhang, P; Simon, A; De Crevoisier, R; Haigron, P; Nassef, M H; Li, B; Shu, H
2014-11-01
The pencil beam method is commonly used for dose calculations in intensity-modulated radiation therapy (IMRT). In this study, we propose a novel pencil-beam model for calculating photon dose distributions in heterogeneous media. To avoid any oblique kernel-related bias and reduce computation time, dose distributions were computed in a spherical coordinate system based on the pencil kernels of different distances from source to surface (DSS). We employed two different dose calculation methods: the superposition method and the fast Fourier transform convolution (FFTC) method. To render the superposition method more accurate, we scaled the depth-directed component by moving the position of the entry point and altering the DSS value for a given beamlet. The lateral components were thus directly corrected by the density scaling method along the spherical shell without taking the densities from the previous layers into account. Significant computation time could be saved by performing the FFTC calculations on each spherical shell, disregarding density changes in the lateral direction. The proposed methods were tested on several phantoms, including lung- and bone-type heterogeneities. We compared them with Monte Carlo (MC) simulation for several field sizes with 6 MV photon beams. Our results revealed mean absolute deviations <1% for the proposed superposition method. Compared to the AAA algorithm, this method improved dose calculation accuracy by at least 0.3% in heterogeneous phantoms. The FFTC method was approximately 40 times faster than the superposition method. However, compared with MC, mean absolute deviations were <3% for the FFTC method.
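The FFTC idea can be sketched on a single 2-D shell: dose is the convolution of the beamlet fluence with a lateral pencil kernel, which an FFT evaluates in near-linear time. The Gaussian kernel, grid, and field below are illustrative, not the paper's commissioning data:

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of FFT-based convolution on one "shell": dose = fluence (x) kernel.
# Illustrative kernel and grid, not the paper's beam data.
n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")
kernel = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))   # lateral pencil kernel
kernel /= kernel.sum()                            # normalize to unit integral

fluence = np.zeros((n, n))
fluence[24:40, 24:40] = 1.0                       # a square 16x16 field

dose = fftconvolve(fluence, kernel, mode="same")
# Energy is conserved (kernel integrates to 1) and the central-axis dose is
# below 1 because lateral scatter carries dose outside the field edge.
```

Performing this per shell while ignoring lateral density changes is what buys the roughly 40x speedup over superposition quoted above, at the cost of a few percent accuracy.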
Al Abed, Amr; Yin, Shijie; Suaning, Gregg J; Lovell, Nigel H; Dokos, Socrates
2012-01-01
Computational models are valuable tools that can be used to aid the design and test the efficacy of electrical stimulation strategies in prosthetic vision devices. In continuum models of retinal electrophysiology, the effective extracellular potential can be considered as an approximate measure of the electrotonic loading a neuron's dendritic tree exerts on the soma. A convolution-based method is presented to calculate the local spatial average of the effective extracellular loading in retinal ganglion cells (RGCs) in a continuum model of the retina which includes an active RGC tissue layer. The method can be used to study the effect of the dendritic tree size on the activation of RGCs by electrical stimulation using a hexagonal arrangement of electrodes (hexpolar) placed in the suprachoroidal space.
Model of the catalytic mechanism of human aldose reductase based on quantum chemical calculations.
Cachau, R. C.; Howard, E. H.; Barth, P. B.; Mitschler, A. M.; Chevrier, B. C.; Lamour, V.; Joachimiak, A.; Sanishvili, R.; Van Zandt, M.; Sibley, E.; Moras, D.; Podjarny, A.; UPR de Biologie Structurale; National Cancer Inst.; Univ. Louis Pasteur; Inst. for Diabetes Discovery, Inc.
2000-01-01
Aldose Reductase is an enzyme involved in diabetic complications, thoroughly studied for the purpose of inhibitor development. The structure of an enzyme-inhibitor complex solved at sub-atomic resolution has been used to develop a model for the catalytic mechanism. This model has been refined using a combination of Molecular Dynamics and Quantum calculations. It shows that the proton donation, the subject of previous controversies, is the combined effect of three residues: Lys 77, Tyr 48 and His 110. Lys 77 polarises the Tyr 48 OH group, which donates the proton to His 110, which becomes doubly protonated. His 110 then moves and donates the proton to the substrate. The key information from the sub-atomic resolution structure is the orientation of the ring and the single protonation of His 110 in the enzyme-inhibitor complex. This model is in full agreement with all available experimental data.
NASA Technical Reports Server (NTRS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-01-01
A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were calculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
A model for calculating effects of liquid waste disposal in deep saline aquifers
Intercomp Resource Development and Engineering, Inc.
1976-01-01
A transient, three-dimensional subsurface waste-disposal model has been developed to provide methodology to design and test waste-disposal systems. The model is a finite-difference solution to the pressure, energy, and mass-transport equations. Equation parameters such as viscosity and density are allowed to be functions of the equations' dependent variables. Multiple user options allow the choice of x, y, and z cartesian or r and z radial coordinates, various finite-difference methods, iterative and direct matrix solution techniques, restart options, and various provisions for output display. The addition of well-bore heat and pressure-loss calculations to the model makes available to the ground-water hydrologist the most recent advances from the oil and gas reservoir engineering field. (Woodard-USGS)
Calculating the renormalisation group equations of a SUSY model with Susyno
NASA Astrophysics Data System (ADS)
Fonseca, Renato M.
2012-10-01
Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to quickly perform an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
HEMCO: a versatile software component for calculating and validating emissions in atmospheric models
NASA Astrophysics Data System (ADS)
Keller, C. A.; Long, M. S.; Yantosca, R.; da Silva, A.; Pawson, S.; Jacob, D. J.
2014-12-01
Accurate representation of emissions is essential in global models of atmospheric composition. New and updated emission inventories are continuously being developed by research groups and agencies, reflecting both improving knowledge and actual changes in emissions. Timely incorporation of this new information into atmospheric models is crucial but can involve laborious programming. Here, we present the Harvard-NASA Emission Component version 1.0 (HEMCO), a stand-alone software component for computing emissions in global atmospheric models. HEMCO determines emissions from different sources, regions, and species on a user-defined grid and can combine, overlay, and update a set of data inventories and scale factors, as specified by the user through the HEMCO configuration file. New emission inventories at any spatial and temporal resolution are readily added to HEMCO and can be accessed by the user without any preprocessing of the data files or modification of the source code. Emissions that depend on dynamic source types and local environmental variables such as wind speed or surface temperature are calculated in separate HEMCO extensions. By providing a widely applicable framework for specifying constituent emissions, HEMCO is designed to ease sensitivity studies and model comparisons, as well as inverse modeling in which emissions are adjusted iteratively. So far, we have implemented HEMCO in the GEOS-Chem chemical transport model and in the NASA Goddard Earth Observing System Model (GEOS-5) along with its integrated data assimilation system.
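The overlay-and-scale logic described above can be sketched as follows. This is an illustration of the concept, not HEMCO's actual interface: a higher-priority regional inventory replaces the base inventory wherever it has data, and scale factors multiply the combined field:

```python
import numpy as np

# Minimal sketch of inventory overlay and scaling (illustrative grid and
# values; not HEMCO's API or data formats).
base = np.full((4, 4), 2.0)          # global inventory, e.g. kg/m2/s
regional = np.full((4, 4), np.nan)   # NaN marks "no regional data here"
regional[0:2, 0:2] = 5.0             # regional inventory covers one corner

def overlay(low_priority, high_priority):
    """Higher-priority inventory wins wherever it is defined."""
    return np.where(np.isnan(high_priority), low_priority, high_priority)

scale = 0.5                          # e.g. a seasonal scale factor
emis = overlay(base, regional) * scale
```

In HEMCO itself the hierarchy, regridding, and scale factors are declared in the configuration file, so new inventories slot in without touching model source code.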
A compressible near-wall turbulence model for boundary layer calculations
NASA Technical Reports Server (NTRS)
So, R. M. C.; Zhang, H. S.; Lai, Y. G.
1992-01-01
A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate with additional terms that are explicitly dependent on the compressibility effects is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilational dissipation rate. Thus formulated, all near-wall incompressible flow models can be expressed in terms of the solenoidal dissipation rate and straightforwardly extended to compressible flows. Therefore, the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior, thus suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.
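The dissipation split can be illustrated with one common turbulent-Mach-number closure, in which the dilational part scales as Mt squared times the solenoidal part. The coefficient and the quadratic form below are a generic choice for illustration, not the paper's fitted model:

```python
import numpy as np

# Sketch of the dissipation split: eps = eps_s + eps_d, with the dilational
# part modeled as an explicit function of turbulent Mach number Mt.
# eps_d = alpha * Mt**2 * eps_s is one common closure (alpha = 1 here);
# the paper's specific model coefficients are not reproduced.
def total_dissipation(eps_s, Mt, alpha=1.0):
    eps_d = alpha * Mt**2 * eps_s   # vanishes in the incompressible limit
    return eps_s + eps_d

eps_s = 1.0
# Incompressible behavior is recovered as Mt -> 0:
eps_low = total_dissipation(eps_s, 0.0)
eps_high = total_dissipation(eps_s, 0.3)
```

The key structural property, which the abstract emphasizes, is that the dilational contribution switches off smoothly as Mt goes to zero, so the incompressible model is recovered in the constant-density limit.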
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2013-01-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
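The single-spot lateral profile described above can be sketched as a double Gaussian plus a modified Cauchy-Lorentz term for the low-dose halo. All weights, widths, and the halo parameter below are illustrative placeholders, not commissioning values:

```python
import numpy as np

# Sketch of a single-spot lateral dose profile: primary Gaussian (core),
# secondary Gaussian (near halo), and a Cauchy-Lorentz tail (far halo).
# All parameters are illustrative, not fitted commissioning data.
def spot_profile(r, w1=0.90, s1=0.4, w2=0.08, s2=1.2, w3=0.02, gamma=2.0):
    g1 = w1 * np.exp(-r**2 / (2 * s1**2))     # core
    g2 = w2 * np.exp(-r**2 / (2 * s2**2))     # near halo
    cl = w3 / (1.0 + (r / gamma)**2)          # slowly decaying far halo
    return g1 + g2 + cl

r = np.linspace(0, 10, 501)
d = spot_profile(r)
# Far from the axis the Gaussians vanish and the Cauchy-Lorentz term
# dominates; summed over many spots this tail drives the field-size effect.
```

Because the Lorentzian tail falls off only algebraically, its integrated contribution grows with field size, which is exactly why a pure double-Gaussian fit mispredicts output for large fields.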
A simple model for calculating tsunami flow speed from tsunami deposits
Jaffe, B.E.; Gelfenbuam, G.
2007-01-01
This paper presents a simple model for tsunami sedimentation that can be applied to calculate tsunami flow speed from the thickness and grain size of a tsunami deposit (the inverse problem). For sandy tsunami deposits where grain size and thickness vary gradually in the direction of transport, tsunami sediment transport is modeled as a steady, spatially uniform process. The amount of sediment in suspension is assumed to be in equilibrium with the steady portion of the long-period, slowly varying uprush portion of the tsunami. Spatial flow deceleration is assumed to be small and not to contribute significantly to the tsunami deposit. Tsunami deposits are formed from sediment settling from the water column when flow speeds on land go to zero everywhere at the time of maximum tsunami inundation. There is little erosion of the deposit by return flow because it is a slow flow and is concentrated in topographic lows. Variations in grain size of the deposit are found to have more effect on calculated tsunami flow speed than deposit thickness. The model is tested using field data collected at Arop, Papua New Guinea, soon after the 1998 tsunami. Speed estimates of 14 m/s at 200 m inland from the shoreline compare favorably with those from a 1-D inundation model and from application of Bernoulli's principle to water levels on buildings left standing after the tsunami. As evidence that the model is applicable to some sandy tsunami deposits, the model reproduces the observed normal grading and vertical variation in sorting and skewness of a deposit formed by the 1998 tsunami.
PFLOW: A 3-D Numerical Modeling Tool for Calculating Fluid-Pressure Diffusion from Coulomb Strain
NASA Astrophysics Data System (ADS)
Wolf, L. W.; Lee, M.; Meir, A.; Dyer, G.; Ma, K.; Chan, C.
2009-12-01
A new 3D time-dependent pore-pressure diffusion model, PFLOW, is developed to investigate the response of pore fluids to the crustal deformation generated by strong earthquakes in heterogeneous geologic media. Given crustal strain generated by changes in Coulomb stress, this MATLAB-based code uses Skempton's coefficient to calculate the resulting changes in fluid pressure. Pore-pressure diffusion can be tracked over time in a user-defined model space with user-prescribed Neumann or Dirichlet boundary conditions and with spatially variable values of permeability. PFLOW employs linear or quadratic finite elements for spatial discretization and first-order or second-order, explicit or implicit finite-difference discretization in time. PFLOW is easily interfaced with output from deformation modeling programs such as Coulomb (Toda et al., 2007) or 3D-DEF (Gomberg and Ellis, 1994). The code is useful for investigating to first order the evolution of pore-pressure changes induced by changes in Coulomb stress and their possible relation to water-level changes in wells or changes in stream discharge. It can also be used for student research and classroom instruction. As an example application, we calculate the coseismic pore-pressure changes and diffusion induced by volumetric strain associated with the 1999 Chi-Chi earthquake (Mw = 7.6) in Taiwan. The Chi-Chi earthquake provides a unique opportunity to investigate the spatial and time-dependent poroelastic response of near-field rocks and sediments because there exist extensive observational data of water-level changes and crustal deformation. The integrated model allows us to explore whether changes in Coulomb stress can adequately explain hydrologic anomalies observed in areas such as Taiwan's western foothills and the Choshui River alluvial plain. To calculate coseismic strain, we use the carefully calibrated finite fault-rupture model of Ma et al. (2005) and the deformation modeling code Coulomb 3.1 (Toda et al., 2007).
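The physics can be sketched in one dimension: an undrained coseismic pressure step set by Skempton's coefficient B diffuses away under an explicit finite-difference scheme with Dirichlet boundaries. This is an illustrative analogue of the calculation, not the PFLOW code itself, and the material values are made up:

```python
import numpy as np

# 1-D analogue of the PFLOW calculation (illustrative, not the actual code):
# undrained pore-pressure change dp = -B * dsigma_kk / 3, then diffusion.
B, dsigma_kk = 0.7, -3.0e5        # Skempton's coefficient; mean-stress change (Pa)
p = np.zeros(101)                  # pressure change along a 1-km profile (Pa)
p[40:60] = -B * dsigma_kk / 3.0    # coseismic step in the strained zone

c, dx, dt = 1.0, 10.0, 10.0        # diffusivity (m^2/s), grid (m), step (s)
alpha = c * dt / dx**2             # explicit scheme stable for alpha <= 0.5
p0_total = p.sum()

for _ in range(500):               # p held at 0 on both boundaries (Dirichlet)
    p[1:-1] = p[1:-1] + alpha * (p[2:] - 2.0 * p[1:-1] + p[:-2])
```

The peak pressure decays and the anomaly spreads toward the boundaries, which is the time history one would compare against well water-level records.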
NASA Astrophysics Data System (ADS)
Zurita-Milla, Raul; Mehdipoor, Hamed; Batarseh, Sana; Ault, Toby; Schwartz, Mark D.
2014-05-01
Models that predict the timing of recurrent biological events play an important role in supporting the systematic study of phenological changes at a variety of spatial and temporal scales. One set of such models are the extended Spring indices (SI-x). These models predict a suite of phenological metrics ("first leaf" and "first bloom," "last freeze" and the "damage index") from temperature data and geographic location (to model the duration of the day). The SI-x models were calibrated using historical phenological and weather observations from the continental US. In particular, the models relied on first leaf and first bloom observations for lilac and honeysuckle and on daily minimum and maximum temperature values from a number of weather stations located near the sites where the phenological observations were made. In this work, we study the use of DAYMET (http://daymet.ornl.gov/) to calculate the SI-x models over the continental USA. DAYMET offers daily gridded maximum and minimum temperature values for the period 1980 to 2012. Using an automatic downloader, we downloaded complete DAYMET temperature time series for the more than 1100 geographic locations where historical lilac observations were made. The temperature values were parsed and, using the recently available MATLAB code, the SI-x indices were calculated. Subsequently, the predicted first leaf and first bloom dates were compared with the historical lilac observations. The RMSE between predicted and observed lilac leaf/bloom dates was calculated after matching data from the same geographic location and year. Results were satisfactory for the lilac observations in the Eastern US (e.g., the RMSE for the blooming date was about 5 days). However, the correspondence between the observed and predicted lilac values in the West was rather weak (e.g., an RMSE for the blooming date of about 22 days). This might indicate that DAYMET temperature data in this region of the US contain larger uncertainties due to a more
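The validation step described above, matching predicted and observed day-of-year values by site and year before computing the RMSE, can be sketched as follows. The site names and dates below are made up, not the DAYMET or lilac records:

```python
import numpy as np

# Match predictions and observations on (site, year) keys, then compute RMSE.
# Made-up day-of-year values for illustration only.
predicted = {("site_a", 1990): 120, ("site_a", 1991): 118, ("site_b", 1990): 135}
observed  = {("site_a", 1990): 124, ("site_b", 1990): 132, ("site_b", 1992): 140}

keys = sorted(set(predicted) & set(observed))   # same site and same year only
diff = np.array([predicted[k] - observed[k] for k in keys], dtype=float)
rmse = np.sqrt(np.mean(diff**2))
```

Restricting the comparison to matched site-years is what keeps interannual variability from inflating the error estimate.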
Wang, Junmei; Hou, Tingjun
2012-01-01
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, no matter whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS, the product of temperature T and conformational entropy S, was calculated in those tests; T was always set to 298.15 K throughout the text. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean squared correlation coefficient (R2) was 0.56. As to the 20 complexes, the TS changes
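The WSAS construction described above (per-atom-type weights applied to exposed and buried surface areas, balanced by a global parameter k) admits a simple sketch. The abstract does not give the exact functional form or any fitted parameters, so the form below and every number in it are made-up illustrations:

```python
def wsas_entropy(atoms, weights, k):
    """Toy WSAS-style conformational entropy: each atom contributes its
    atom-type weight applied to its exposed (SAS) and buried (BSAS) surface
    areas, with a global parameter k balancing the two contributions.
    One plausible form; the paper's fitted parameterization may differ.
    """
    return sum(weights[atom_type] * (sas + k * bsas)
               for atom_type, sas, bsas in atoms)

# Hypothetical atoms: (atom_type, SAS in A^2, BSAS in A^2).
atoms = [("C.3", 10.0, 5.0), ("O.2", 8.0, 2.0), ("C.3", 0.0, 15.0)]
weights = {"C.3": 0.12, "O.2": 0.20}  # made-up per-atom-type weights
print(round(wsas_entropy(atoms, weights, k=0.5), 3))
```

Note that a fully buried atom (the third one above) still contributes, matching the statement that all atoms are summed whether buried or exposed.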
Direct Calculation of the Rate of Homogeneous Ice Nucleation for a Molecular Model of Water
NASA Astrophysics Data System (ADS)
Haji-Akbari, Amir; Debenedetti, Pablo
Ice formation is ubiquitous in nature, with important consequences in many systems and environments. However, its intrinsic kinetics and mechanism are difficult to discern with experiments. Molecular simulations of ice nucleation are also challenging due to sluggish structural relaxation and the large nucleation barriers, and direct calculations of homogeneous nucleation rates have only been achieved for mW, a monoatomic coarse-grained model of water. For the more realistic molecular models, only indirect estimates have been obtained by assuming the validity of classical nucleation theory. Here, we use a coarse-grained variant of a path sampling approach known as forward-flux sampling to perform the first direct calculation of the homogeneous nucleation rate for TIP4P/Ice, which is the most accurate water model for studying ice polymorphs. By using a novel topological order parameter, we are able to identify a freezing mechanism that involves a competition between cubic and hexagonal ice polymorphs. In this competition, cubic ice wins as its growth leads to more compact crystallites
A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways
NASA Astrophysics Data System (ADS)
Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza
2016-01-01
In this paper a geometric computational model (GCM) has been developed for calculating the effect of the longwall face on the extension of the excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of the gate roadways is investigated based on loading effects due to the EDZ and the caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of the GCM depends on four important factors: (1) geomechanical properties of the hanging wall, (2) dip and thickness of the coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of the EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and the characteristics of the extended EDZ. Furthermore, taking the GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at the Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and measured results. Finally, a sensitivity analysis was carried out on the effects of pillar width, the bearing capacity of the support system, and coal seam dip.
Wang, Junmei; Cieplak, Piotr; Li, Jie; Hou, Tingjun; Luo, Ray; Duan, Yong
2011-03-31
In this work, four types of polarizable models have been developed for calculating interactions between atomic charges and induced point dipoles: the Applequist, Thole linear, Thole exponential, and Thole Tinker-like models. The polarizability models have been optimized to reproduce the experimental static molecular polarizabilities obtained from molecular refraction measurements on a set of 420 molecules reported by Bosque and Sales. We grouped the models into five sets depending on the interaction types, that is, whether the interactions of two atoms that form a bond, bond angle, or dihedral angle are turned off or scaled down. When 1-2 (bonded) and 1-3 (separated by two bonds) interactions are turned off, 1-4 (separated by three bonds) interactions are scaled down, or both, all models including the Applequist model achieved similar performance: the average percentage error (APE) ranges from 1.15 to 1.23%, and the average unsigned error (AUE) ranges from 0.143 to 0.158 Å³. When the short-range 1-2, 1-3, and full 1-4 terms are taken into account (set D models), the APE ranges from 1.30 to 1.58% for the three Thole models, whereas the Applequist model (DA) has a significantly larger APE (3.82%). The AUE ranges from 0.166 to 0.196 Å³ for the three Thole models, compared with 0.446 Å³ for the Applequist model. Further assessment using the 70-molecule van Duijnen and Swart data set clearly showed that the developed models are both accurate and highly transferable, and in fact have smaller errors than the models developed using this particular data set (set E models). The fact that the A, B, and C model sets are notably more accurate than both the D and E model sets strongly suggests that the inclusion of 1-2 and 1-3 interactions reduces transferability and accuracy.
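The APE and AUE figures of merit quoted above are straightforward to compute; the polarizability values below are invented for illustration, not drawn from the 420-molecule set:

```python
def ape(calc, expt):
    """Average percentage error (%) of calculated vs experimental values."""
    return 100.0 * sum(abs(c - e) / e for c, e in zip(calc, expt)) / len(expt)

def aue(calc, expt):
    """Average unsigned error, in the units of the data (here A^3)."""
    return sum(abs(c - e) for c, e in zip(calc, expt)) / len(expt)

# Hypothetical calculated vs experimental molecular polarizabilities (A^3).
calc = [10.2, 5.1, 7.9]
expt = [10.0, 5.0, 8.0]
print(round(ape(calc, expt), 2), round(aue(calc, expt), 3))
```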
Systematical calculation of α decay half-lives with a generalized liquid drop model
NASA Astrophysics Data System (ADS)
Bao, Xiaojun; Zhang, Hongfei; Zhang, Haifei; Royer, G.; Li, Junqing
2014-01-01
A systematic calculation of α decay half-lives is presented for even-even nuclei from Te up to the Z = 118 isotopes. The potential energy governing α decay has been determined within a liquid drop model including proximity effects between the α particle and the daughter nucleus and taking into account the experimental Q value. The α decay half-lives have been deduced from the WKB barrier penetration probability. The α decay half-lives obtained agree reasonably well with the experimental data.
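The WKB step described above can be sketched with a toy barrier: integrate the classical action through the forbidden region, exponentiate to get the penetration probability, and convert to a half-life with an assault frequency. The potential, assault frequency, and all numbers below are illustrative stand-ins, not the paper's generalized liquid drop model:

```python
import math

def wkb_half_life(V, r_in, r_out, mu_c2, Q, nu, n=2000):
    """Half-life from the WKB barrier penetration probability:
    P = exp(-(2/hbar) * integral of sqrt(2*mu*(V(r) - Q)) dr),
    T_1/2 = ln(2) / (nu * P), with nu an assumed assault frequency (1/s).
    mu_c2 is the reduced mass energy in MeV; distances are in fm.
    """
    HBARC = 197.327  # hbar*c in MeV*fm
    dr = (r_out - r_in) / n
    action = 0.0
    for i in range(n):  # midpoint rule over the classically forbidden region
        r = r_in + (i + 0.5) * dr
        action += math.sqrt(max(2.0 * mu_c2 * (V(r) - Q), 0.0)) * dr
    P = math.exp(-2.0 * action / HBARC)
    return math.log(2) / (nu * P)

# Toy point-Coulomb barrier for an alpha particle and a Z = 84 daughter
# (e^2 = 1.44 MeV*fm); every number here is a made-up illustration.
V = lambda r: 1.44 * 2 * 84 / r
t_half = wkb_half_life(V, r_in=9.0, r_out=50.0, mu_c2=3730.0, Q=6.0, nu=1e21)
print(t_half > 0)
```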
Microscopic calculation of interacting boson model parameters by potential-energy surface mapping
Bentley, I.; Frauendorf, S.
2011-06-15
A coherent state technique is used to generate an interacting boson model (IBM) Hamiltonian energy surface which is adjusted to match a mean-field energy surface. This technique allows the calculation of IBM Hamiltonian parameters, prediction of properties of low-lying collective states, as well as the generation of probability distributions of various shapes in the ground state of transitional nuclei, the last two of which are of astrophysical interest. The results for krypton, molybdenum, palladium, cadmium, gadolinium, dysprosium, and erbium nuclei are compared with experiment.
A simplified model for calculating early offsite consequences from nuclear reactor accidents
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1988-07-01
A personal computer-based model, SMART, has been developed that uses an integral approach for calculating early offsite consequences from nuclear reactor accidents. The solution procedure uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast-running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity/uncertainty analyses using SMART are presented. 34 refs., 21 figs., 24 tabs.
An approximate framework for quantum transport calculation with model order reduction
Chen, Quan; Li, Jun; Yam, Chiyung; Zhang, Yu; Wong, Ngai; Chen, Guanhua
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
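The projection-based MOR ingredient above can be illustrated on a plain linear system. This generic Galerkin sketch conveys only the reduce-solve-lift idea, not the paper's X-formulation or its NEGF-specific nonlinear reduction:

```python
import numpy as np

def galerkin_reduce(A, b, V):
    """Solve A x = b approximately by Galerkin projection onto span(V):
    form the k x k reduced system (V^T A V) y = V^T b, solve it, and
    lift the solution back with x ~ V y.
    """
    Ar = V.T @ A @ V            # reduced operator (k x k)
    br = V.T @ b                # reduced right-hand side
    y = np.linalg.solve(Ar, br)
    return V @ y

rng = np.random.default_rng(0)
n, k = 200, 20
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # well-conditioned test matrix
b = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, k)))    # orthonormal reduction basis
x_mor = galerkin_reduce(A, b, V)
print(x_mor.shape)
```

The payoff is that the dense solve happens in k dimensions instead of n, which is the same economy the paper seeks across its many energy points.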
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2003-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. For many years such correlation has been performed for helicopter rotors (rotors designed for edgewise flight), but correlation activities for tiltrotors have been limited, in part by the absence of appropriate measured data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) now provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will present calculations of airloads, wake geometry, and performance, including correlation with TRAM DNW measurements. The calculations were obtained using CAMRAD II, which is a modern rotorcraft comprehensive analysis, with advanced models intended for application to tiltrotor aircraft as well as helicopters. Comprehensive analyses have received extensive correlation with performance and loads measurements on helicopter rotors. The proposed paper is part of an initial effort to perform an equally extensive correlation with tiltrotor data. The correlation will establish the level of predictive capability achievable with current technology; identify the limitations of the current aerodynamic, wake, and structural models of tiltrotors; and lead to recommendations for research to extend tiltrotor aeromechanics analysis capability. The purpose of the Tilt Rotor Aeroacoustic Model (TRAM) experimental project is to provide data necessary to validate
Renormalization effects on the MSSM from a calculable model of a strongly coupled hidden sector
Arai, Masato; Okada, Nobuchika
2011-10-01
We investigate possible renormalization effects on the low-energy mass spectrum of the minimal supersymmetric standard model (MSSM), using a calculable model of a strongly coupled hidden sector. We model the hidden sector by N=2 supersymmetric quantum chromodynamics with gauge group SU(2)×U(1) and N_f = 2 matter hypermultiplets, perturbed by a Fayet-Iliopoulos term which breaks the supersymmetry down to N=0 on a metastable vacuum. In the hidden sector the Kähler potential is renormalized. Upon identifying a hidden sector modulus with the renormalization scale, and extrapolating to the strongly coupled regime using the Seiberg-Witten solution, the contribution from the hidden sector to the MSSM renormalization group flows is computed. For concreteness, we consider a model in which the renormalization effects are communicated to the MSSM sector via gauge mediation. In contrast to the perturbative toy examples of hidden sector renormalization studied in the literature, we find that our strongly coupled model exhibits rather intricate effects on the MSSM soft scalar mass spectrum, depending on how the hidden sector fields are coupled to the messenger fields. This model provides a concrete example in which the low-energy spectrum of MSSM particles that are expected to be accessible in collider experiments is obtained using strongly coupled hidden sector dynamics.
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Precipitation in Microalloyed Steel by Model Alloy Experiments and Thermodynamic Calculations
NASA Astrophysics Data System (ADS)
Frisk, Karin; Borggren, Ulrika
2016-10-01
Precipitation in microalloyed steel has been studied by applying thermodynamic calculations based on a description of the Gibbs energies of the individual phases over the full multicomponent composition range. To validate and improve the thermodynamic description, new experimental investigations of the phase separation in the cubic carbides/nitrides/carbonitrides in alloys containing Nb, V, Mo, and Cr, have been performed. Model alloys were designed to obtain equilibrium carbides/carbonitrides that are sufficiently large for measurements of compositions, making it possible to study the partitioning of the elements into different precipitates, showing distinctly different composition sets. The reliability of the calculations, when applied to multicomponent alloys, was tested by comparing with published experimental studies of precipitation in microalloyed steel. It is shown that thermodynamic calculations accurately describe the observed precipitation sequences. Further, they can reproduce several important features of precipitation processes in microalloyed steel such as the partitioning of Mo between matrix and precipitates and the variation of precipitate compositions depending on precipitation temperature.
Large uncertainty in soil carbon modelling related to carbon input calculation method
NASA Astrophysics Data System (ADS)
Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Oleson, Jørgen E.
2016-04-01
A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. The DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems that are applied to identical crop rotations. Average calculated soil C inputs vary widely between allometric equations and range from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is due to the higher yields in the latter treatment, i.e. the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method used to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References Taghizadeh-Toosi A et al. (2014) C
TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams
Gelover, E; Wang, D; Hill, P; Flynn, R; Hyer, D
2014-06-15
Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations using the DCS. Using these data, the lateral distribution for individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be analytically generated. The algorithm was validated by computing dose for a single energy layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian-modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
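A 2D asymmetric Gaussian lateral profile of the kind described can be sketched as below. Using a different sigma on each side of a shifted peak is one plausible parameterization of a trimmed, laterally shifted beamlet, not necessarily the paper's fitted form:

```python
import math

def asym_gauss_2d(x, y, x0, y0, sx_neg, sx_pos, sy_neg, sy_pos):
    """2D asymmetric Gaussian: a different sigma on each side of the
    (laterally shifted) peak along each principal axis, so a trimmed
    edge can fall off faster than the untrimmed edge.
    """
    sx = sx_neg if x < x0 else sx_pos
    sy = sy_neg if y < y0 else sy_pos
    return math.exp(-0.5 * ((x - x0) / sx) ** 2) * \
           math.exp(-0.5 * ((y - y0) / sy) ** 2)

# Peak value is 1 at the shifted center; the +x side (trimmed, sigma 0.2 cm)
# falls off faster than the -x side (sigma 0.5 cm). All numbers illustrative.
print(asym_gauss_2d(0.2, 0.0, 0.2, 0.0, 0.5, 0.2, 0.5, 0.5))
```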
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
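The core PCA idea, representing many binned optical-property states by a few leading empirical orthogonal functions so that expensive multiple-scattering runs are needed only for a handful of representative states, can be sketched as follows. This is a generic illustration, not UPCART itself:

```python
import numpy as np

def pca_reconstruct(X, n_components):
    """Compress a stack of binned optical-property vectors with PCA:
    project the mean-centered data onto the leading empirical orthogonal
    functions (right singular vectors) and reconstruct from those scores.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T       # a few numbers per state
    return mean + scores @ Vt[:n_components]

rng = np.random.default_rng(1)
basis = rng.standard_normal((3, 50))        # 3 underlying degrees of freedom
X = rng.standard_normal((500, 3)) @ basis   # 500 "optical states", 50 bins each
X_hat = pca_reconstruct(X, n_components=3)
print(np.allclose(X, X_hat, atol=1e-8))     # redundant data: 3 EOFs suffice
```

In an RT setting, the costly solver would then be run only for states built from the leading components, with correction factors recovering the full radiation field.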
Non-LTE kinetics modeling of krypton ions: Calculations of radiative cooling coefficients
NASA Astrophysics Data System (ADS)
Chung, H.-K.; Fournier, K. B.; Lee, R. W.
2006-06-01
For plasmas containing high-Z ions the energy loss due to radiative processes can be important in understanding energy distributions and spectral characteristics. Since high-Z plasmas occur over a wide range of temperature and density conditions, a general non-LTE population kinetics description is required to provide a qualitative and quantitative description of radiative energy loss. We investigate radiative properties of non-LTE krypton plasmas with a collisional-radiative (CR) model constructed from detailed atomic data. This work makes two extensions beyond previous non-LTE kinetics models. First, this model explicitly treats the dielectronic recombination (DR) channels. Second, this model allows one to investigate the higher electron density regimes found commonly in laboratory plasmas. This more comprehensive approach enables the study of population kinetics in a general manner and will provide a systematic guide for reducing a complex model to a simpler one. Specifically, we present calculations of radiative cooling coefficients of krypton ions as a function of electron density in the optically thin limit. Total, soft X-ray (1.6 keV ≤ E ≤ 12 keV), and hard X-ray (E ≥ 12 keV) radiative cooling coefficients are given for the plasma conditions 0.6 keV ≤ Te ≤ 10 keV and 10^14 cm^-3 ≤ Ne ≤ 10^24 cm^-3. The ionic radiative cooling coefficients provided are sufficient to allow users to construct the total rate from given charge state distributions. Steady-state calculations of the average charge state at given Te and Ne values are also presented.
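The stated use of the tabulated ionic coefficients, constructing a total cooling rate from a given charge state distribution, reduces to a population-weighted sum. All numbers below are hypothetical placeholders, not values from the paper's tables:

```python
def total_cooling(ionic_coeffs, charge_fractions):
    """Total radiative cooling coefficient as the population-weighted sum
    of per-charge-state coefficients. Units follow the inputs (e.g. the
    per-ion coefficients might be in erg cm^3 s^-1).
    """
    return sum(ionic_coeffs[q] * f for q, f in charge_fractions.items())

# Hypothetical per-charge-state coefficients and a charge state distribution
# (fractions sum to 1) for three neighboring krypton charge states.
ionic_coeffs = {24: 1.0e-18, 25: 2.0e-18, 26: 4.0e-18}
charge_fractions = {24: 0.2, 25: 0.5, 26: 0.3}
total = total_cooling(ionic_coeffs, charge_fractions)
print(total)
```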
CHEMEOS: a new chemical-picture-based model for plasma equation-of-state calculations
Hakel, P.; Kilcrease, D. P.
2004-01-01
We present the results of a new plasma equation-of-state (EOS) model currently under development at the Atomic and Optical Theory Group (T-4) in Los Alamos. This model is based on the chemical picture of the plasma and uses the free-energy-minimization technique and the occupation-probability formalism. The model is constructed as a combination of ideal and non-ideal contributions to the total Helmholtz free energy of the plasma including the effects of plasma microfields, strong coupling, and the hard-sphere description of the finite sizes of atomic species with bound electrons. These types of models have been recognized as a convenient and computationally inexpensive tool for modeling local-thermal-equilibrium (LTE) plasmas for a broad range of temperatures and densities. We calculate the thermodynamic characteristics of the plasma (such as pressure and internal energy), and populations and occupation probabilities of atomic bound states. In addition to a smooth truncation of partition functions necessary for extracting ion populations from the system of Saha-type equations, the occupation probabilities can also be used for the merging of Rydberg line series into their associated bound-free edges. In the low-density, high-temperature regimes the plasma effects are adequately described by the Debye-Hückel model and its corresponding contribution to the total Helmholtz free energy of the plasma. In strongly-coupled plasmas, however, the Debye-Hückel approximation is no longer appropriate. In order to extend the validity of our EOS model to strongly-coupled plasmas while maintaining the analytic nature of our model, we adopt fits to the plasma free energy based on hypernetted-chain and Monte Carlo simulations. Our results for hydrogen are compared to other theoretical models. Hydrogen has been selected as a test-case on which improvements in EOS physics are benchmarked before analogous upgrades are included for any element in the EOS part of the new Los Alamos
NASA Astrophysics Data System (ADS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fessen, C. G.
1990-05-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) × 10^9 cm^-2 s^-1, corresponding to the dayside net production of N atoms needed for transport.
Effect of photochemical models on calculated equilibria and cooling rates in the stratosphere
NASA Technical Reports Server (NTRS)
Blake, D.; Lindzen, R. S.
1973-01-01
The determination of the relaxation time of a temperature perturbation in the stratosphere must take into account the effects of the absorption of solar energy by ozone, while the ozone density itself is dependent on temperature. A photochemical model, consisting of continuity equations for each of 29 constituents with reaction rates and adjustment times, is used to obtain the vertical distribution of ozone. The altitude range from 35 to 60 km in non-Arctic regions is shown to be in approximate joint radiative-photochemical equilibrium. The temperature and ozone distributions are thus well buffered. Modest changes in ozone and temperature were calculated on the basis of this model from large changes in cooling rates and reaction rates, and the results are shown to be more in line with actual observations. The importance of estimating the mixing ratios for NO and H2O is emphasized.
Long-term changes in the mesosphere calculated by a two-dimensional model
NASA Astrophysics Data System (ADS)
Gruzdev, Aleksandr N.; Brasseur, Guy P.
2005-02-01
We have used the interactive two-dimensional model SOCRATES to investigate the thermal and the chemical response of the mesosphere to the changes in greenhouse gas concentrations observed in the past 50 years (CO2, CH4, water vapor, N2O, CFCs), and to specified changes in gravity wave drag and diffusion in the upper mesosphere. When considering the observed increase in the abundances of greenhouse gases for the past 50 years, a cooling of 3-7 K is calculated in the mesopause region together with a cooling of 4-6 K in the middle mesosphere. Changes in the meridional circulation of the mesosphere damp the pure radiative thermal effect of the greenhouse gases. The largest cooling in the winter upper mesosphere-mesopause region occurs when the observed increase in concentrations of greenhouse gases and the strengthening of the gravity wave drag and diffusion are considered simultaneously. Depending on the adopted strengthening of the gravity wave drag and diffusion, a cooling varying from typically 6-10 K to 10-20 K over the past 50 years is predicted in the extratropical upper mesosphere during wintertime. In summer, however, consistent with observations, the thermal response calculated by the model is insignificant in the vicinity of the mesopause. Although the calculated cooling of the winter mesopause is still less than suggested by some observations, these results lead to the conclusion that the increase in the abundances of greenhouse gases alone may not entirely explain the observed temperature trends in the mesosphere. Long-term changes in the dynamics of the middle atmosphere (and the troposphere), including changes in gravity wave activity, may have contributed significantly to the observed long-term changes in the thermal structure and chemical composition of the mesosphere.
Using molecular dynamics and quantum mechanics calculations to model fluorescence observables
Speelman, Amy L.; Muñoz-Losa, Aurora; Hinkle, Katie L.; VanBeek, Darren B.; Mennucci, Benedetta; Krueger, Brent P.
2011-01-01
We provide a critical examination of two different methods for generating a donor-acceptor electronic coupling trajectory from a molecular dynamics (MD) trajectory and three methods for sampling that coupling trajectory, allowing the modeling of experimental observables directly from the MD simulation. In the first coupling method we perform a single quantum-mechanical (QM) calculation to characterize the excited state behavior, specifically the transition dipole moment, of the fluorescent probe, which is then mapped onto the configuration space sampled by MD. We then utilize these transition dipoles within the ideal dipole approximation (IDA) to determine the electronic coupling between the probes that mediates the transfer of energy. In the second method we perform a QM calculation on each snapshot and use the complete transition densities to calculate the electronic coupling without need for the IDA. The resulting coupling trajectories are then sampled using three methods ranging from an independent sampling of each trajectory point (the Independent Snapshot Method) to a Markov chain treatment that accounts for the dynamics of the coupling in determining effective rates. The results show that the IDA significantly overestimates the energy transfer rate (by a factor of 2.6) during the portions of the trajectory in which the probes are close to each other. Comparison of the sampling methods shows that the Markov chain approach yields more realistic observables at both high and low FRET efficiencies. Differences between the three sampling methods are discussed in terms of the different mechanisms for averaging over structural dynamics in the system. Convergence of the Markov chain method is carefully examined. Together, the methods for estimating coupling and for sampling the coupling provide a mechanism for directly connecting the structural dynamics modeled by MD with fluorescence observables determined through FRET experiments. PMID:21417498
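The ideal dipole approximation used in the first coupling method can be sketched as below: the donor-acceptor coupling follows from the two transition dipoles and the interprobe separation vector. The standard unit prefactor is omitted and the dipole and geometry values are illustrative, not taken from the MD trajectory:

```python
import math

def ida_coupling(mu_d, mu_a, r_vec):
    """Donor-acceptor electronic coupling in the ideal dipole approximation:
    V = [mu_d . mu_a - 3 (mu_d . r_hat)(mu_a . r_hat)] / R^3,
    with r_vec the donor-to-acceptor separation vector (prefactor omitted).
    """
    R = math.sqrt(sum(c * c for c in r_vec))
    r_hat = [c / R for c in r_vec]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    orientation = dot(mu_d, mu_a) - 3.0 * dot(mu_d, r_hat) * dot(mu_a, r_hat)
    return orientation / R ** 3

# Collinear head-to-tail dipoles: the orientation factor is -2, so the
# coupling is -2 / R^3 for unit dipoles.
mu_d = [1.0, 0.0, 0.0]
mu_a = [1.0, 0.0, 0.0]
print(ida_coupling(mu_d, mu_a, [10.0, 0.0, 0.0]))
```

Because the 1/R^3 point-dipole form ignores the spatial extent of the transition densities, it is exactly this quantity that the abstract reports as overestimating the transfer rate at close probe separations.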
Using molecular dynamics and quantum mechanics calculations to model fluorescence observables.
Speelman, Amy L; Muñoz-Losa, Aurora; Hinkle, Katie L; VanBeek, Darren B; Mennucci, Benedetta; Krueger, Brent P
2011-04-28
We provide a critical examination of two different methods for generating a donor-acceptor electronic coupling trajectory from a molecular dynamics (MD) trajectory and three methods for sampling that coupling trajectory, allowing the modeling of experimental observables directly from the MD simulation. In the first coupling method we perform a single quantum-mechanical (QM) calculation to characterize the excited state behavior, specifically the transition dipole moment, of the fluorescent probe, which is then mapped onto the configuration space sampled by MD. We then utilize these transition dipoles within the ideal dipole approximation (IDA) to determine the electronic coupling between the probes that mediates the transfer of energy. In the second method we perform a QM calculation on each snapshot and use the complete transition densities to calculate the electronic coupling without need for the IDA. The resulting coupling trajectories are then sampled using three methods ranging from an independent sampling of each trajectory point (the independent snapshot method) to a Markov chain treatment that accounts for the dynamics of the coupling in determining effective rates. The results show that the IDA significantly overestimates the energy transfer rate (by a factor of 2.6) during the portions of the trajectory in which the probes are close to each other. Comparison of the sampling methods shows that the Markov chain approach yields more realistic observables at both high and low FRET efficiencies. Differences between the three sampling methods are discussed in terms of the different mechanisms for averaging over structural dynamics in the system. Convergence of the Markov chain method is carefully examined. Together, the methods for estimating coupling and for sampling the coupling provide a mechanism for directly connecting the structural dynamics modeled by MD with fluorescence observables determined through FRET experiments.
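Since the record above leans on the ideal dipole approximation (IDA), a minimal numeric sketch may help; the Python function below (our own naming, with illustrative dipole magnitudes and separation, not values from the study) evaluates the IDA point-dipole coupling V = [μ_D·μ_A − 3(μ_D·n̂)(μ_A·n̂)]/(4πε₀R³):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def ida_coupling(mu_d, mu_a, r_vec):
    """Donor-acceptor coupling (J) in the ideal dipole approximation:
    V = (mu_d.mu_a - 3*(mu_d.n)*(mu_a.n)) / (4*pi*eps0*R^3),
    with n the unit vector along the donor->acceptor separation r_vec (m)."""
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    numerator = np.dot(mu_d, mu_a) - 3.0 * np.dot(mu_d, n) * np.dot(mu_a, n)
    return numerator / (4.0 * np.pi * EPS0 * r**3)
```

The point-dipole form breaks down when the probe separation becomes comparable to the probe size, which is consistent with the factor-of-2.6 rate overestimate the abstract reports for close approaches.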
Power Calculations for General Linear Multivariate Models Including Repeated Measures Applications.
Muller, Keith E; Lavange, Lisa M; Ramey, Sharon Landesman; Ramey, Craig T
1992-12-01
Recently developed methods for power analysis expand the options available for study design. We demonstrate how easily the methods can be applied by (1) reviewing their formulation and (2) describing their application in the preparation of a particular grant proposal. The focus is a complex but ubiquitous setting: repeated measures in a longitudinal study. Describing the development of the research proposal allows us to demonstrate the steps needed to conduct an effective power analysis. Discussion of the example also highlights issues that typically must be considered in designing a study. First, we discuss the motivation for using detailed power calculations, focusing on multivariate methods in particular. Second, we survey available methods for the general linear multivariate model (GLMM) with Gaussian errors and recommend those based on F approximations. The treatment includes coverage of the multivariate and univariate approaches to repeated measures, MANOVA, ANOVA, multivariate regression, and univariate regression. Third, we describe the design of the power analysis for the example, a longitudinal study of a child's intellectual performance as a function of mother's estimated verbal intelligence. Fourth, we present the results of the power calculations. Fifth, we evaluate the tradeoffs in using reduced designs and tests to simplify power calculations. Finally, we discuss the benefits and costs of power analysis in the practice of statistics. We make three recommendations: (1) align the design and hypothesis of the power analysis with the planned data analysis, as closely as practical; (2) embed any power analysis in a defensible sensitivity analysis; (3) have the extent of the power analysis reflect the ethical, scientific, and monetary costs. We conclude that power analysis catalyzes the interaction of statisticians and subject matter specialists. Using the recent advances for power analysis in linear models can further invigorate the interaction. PMID:24790282
A computer code for calculations in the algebraic collective model of the atomic nucleus
NASA Astrophysics Data System (ADS)
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
A three-dimensional tunnel model for calculation of train-induced ground vibration
NASA Astrophysics Data System (ADS)
Forrest, J. A.; Hunt, H. E. M.
2006-07-01
The frequency range of interest for ground vibration from underground urban railways is approximately 20 to 100 Hz. For typical soils, the wavelengths of ground vibration in this frequency range are of the order of the spacing of train axles, the tunnel diameter and the distance from the tunnel to nearby building foundations. For accurate modelling, the interactions between these entities therefore have to be taken into account. This paper describes an analytical three-dimensional model for the dynamics of a deep underground railway tunnel of circular cross-section. The tunnel is conceptualised as an infinitely long, thin cylindrical shell surrounded by soil of infinite radial extent. The soil is modelled by means of the wave equations for an elastic continuum. The coupled problem is solved in the frequency domain by Fourier decomposition into ring modes circumferentially and a Fourier transform into the wavenumber domain longitudinally. Numerical results for the tunnel and soil responses due to a normal point load applied to the tunnel invert are presented. The tunnel model is suitable for use in combination with track models to calculate the ground vibration due to excitation by running trains and to evaluate different track configurations.
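The wavelength claim in the opening sentences can be checked directly from the shear-wave speed c_s = √(G/ρ); the soil stiffness and density below are generic soft-soil values we assume, not numbers from the paper:

```python
def shear_wavelength(shear_modulus, density, freq_hz):
    """Shear wavelength lambda = c_s / f, with c_s = sqrt(G / rho)."""
    c_s = (shear_modulus / density) ** 0.5
    return c_s / freq_hz

# Assumed soft-soil values: G = 7.2e7 Pa, rho = 1800 kg/m^3 -> c_s = 200 m/s.
# Between 20 and 100 Hz the wavelength then spans 10 m down to 2 m, i.e. the
# order of axle spacing, tunnel diameter and tunnel-to-foundation distances.
```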
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2003-04-01
A method for the calculation of source-receptor (s-r) relationships (sensitivity of a trace substance concentration at some place and time to emission at some place and time) with Lagrangian particle models has been derived and presented previously (Air Pollution Modeling and its Application XIV, Proc. of ITM Boulder 2000). Now, the generalisation to any linear s-r relationship, including dry and wet deposition, decay etc., is presented. It was implemented in the model FLEXPART and tested extensively in idealised set-ups. These tests turned out to be very useful for finding minor model bugs and inaccuracies, and can be recommended generally for model testing. Recently, a convection scheme has been integrated in FLEXPART which was also tested. Both source and receptor can be specified in mass mixing ratio or mass units. Properly taking care of this is quite relevant for sources and receptors at different levels in the atmosphere. Furthermore, we present a test with the transport of aerosol-bound Caesium-137 from the areas contaminated by the Chernobyl disaster to Stockholm during one month.
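Whatever the underlying physics, once backward runs have filled in a source-receptor matrix, applying it is pure linear algebra: receptor values are the matrix applied to the emission vector. A toy sketch (all numbers invented):

```python
import numpy as np

def receptor_values(srm, emissions):
    """Apply a source-receptor matrix: c[r] = sum_s SRM[r, s] * E[s].
    Rows index receptors, columns index sources; units must be chosen
    consistently (mass vs. mass mixing ratio), as stressed above."""
    return np.asarray(srm, dtype=float) @ np.asarray(emissions, dtype=float)
```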
Model calculating annual mean atmospheric dispersion factor for coastal site of nuclear power plant.
Hu, E B; Chen, J Y; Yao, R T; Zhang, M S; Gao, Z R; Wang, S X; Jia, P R; Liao, Q L
2001-07-01
This paper describes an atmospheric dispersion field experiment performed at a coastal nuclear power plant site in eastern China during 1995 to 1996. The three-dimensional joint frequencies were obtained from hourly observations of wind and temperature on a 100 m tower; the frequency of "event days of land and sea breezes" was determined from observations of surface wind and land and sea breezes; and the diffusion parameters were obtained from turbulence measurements and wind tunnel simulation tests. A new model for calculating the annual mean atmospheric dispersion factor for a coastal nuclear power plant site is developed and established. This model considers not only the effects of mixing release and the mixed layer but also the effects of the internal boundary layer and the variation of diffusion parameters with distance from the coast. A comparison between the new model and the current model shows that the ratio of the annual mean atmospheric dispersion factor given by the new model to that of the current one is about 2.0.
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M.
2014-02-15
Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with ¹²⁵I, ¹⁰³Pd, or ¹³¹Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media, is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model
A Multilayered Box Model for Calculating Preliminary Remediation Goals in Soil Screening
Shan, Chao; Javandel, Iraj
2004-05-21
In the process of screening a soil against a certain contaminant, we define the health-risk based preliminary remediation goal (PRG) as the contaminant concentration above which some remedial action may be required. PRG is thus the first standard (or guidance) for judging a site. An over-estimated PRG (a too-large value) may cause us to miss some contaminated sites that can threaten human health and the environment. An under-estimated PRG (a too-small value), on the other hand, may lead to unnecessary cleanup and waste tremendous resources. The PRGs for soils are often calculated on the assumption that the contaminant concentration in soil does not change with time. However, that concentration usually decreases with time as a result of different chemical and transport mechanisms. The static assumption thus exaggerates the long-term exposure dose and results in a too-small PRG. We present a box model that considers all important transport processes and obeys the law of mass conservation. We can use the model as a tool to estimate the transient contaminant concentrations in air, soil and groundwater. Using these concentrations in conjunction with appropriate health risk parameters, we may estimate the PRGs for different contaminants. As an example, we calculated the tritium PRG for residential soils. The result is quite different from, but within the range of, the two versions of the corresponding PRG previously recommended by the U.S. EPA.
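The core point, that a static concentration assumption inflates long-term exposure, can be illustrated with a one-box, first-order-loss sketch (our own simplification of the multilayered model; the 12.32-year tritium half-life is a standard value, and the 30-year exposure window is an assumption):

```python
import math

def time_averaged_concentration(c0, loss_rate, years):
    """Mean concentration over an exposure window when C(t) = c0*exp(-k*t);
    the static assumption corresponds to returning c0 unchanged."""
    if loss_rate == 0.0:
        return c0
    kt = loss_rate * years
    return c0 * (1.0 - math.exp(-kt)) / kt

k_tritium = math.log(2.0) / 12.32  # 1/yr, radioactive decay only
```

Over a 30-year window the averaged concentration is roughly half the static value, so a PRG derived from the static assumption would come out about a factor of two too small, which is the direction of bias the abstract describes.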
NASA Astrophysics Data System (ADS)
Giannoglou, V.; Stylianidis, E.
2016-06-01
Scoliosis is a 3D deformity of the human spinal column, caused by bending of the column, that leads to pain as well as aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis, concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
NASA Astrophysics Data System (ADS)
Margulis, Vl A.; Muryumin, E. E.; Gaiduk, E. A.
2016-05-01
An effective anisotropic tight-binding model is developed to analytically describe the low-energy electronic structure and optical response of phosphorene (a black phosphorus (BP) monolayer). Within the framework of the model, we derive explicit closed-form expressions, in terms of elementary functions, for the elements of the optical conductivity tensor of phosphorene. These relations provide a convenient parametrization of the highly anisotropic optical response of phosphorene, which allows the reflectance, transmittance, and absorbance of this material to be easily calculated as a function of the frequency of the incident radiation at arbitrary angles of incidence. The results of such a calculation are presented for both a free-standing phosphorene layer and the phosphorene layer deposited on a SiO₂ substrate, and for the two principal cases of polarization of the incident radiation either parallel to or normal to the plane of incidence. Our findings (e.g., a ‘quasi-Brewster’ effect in the reflectance of the phosphorene/SiO₂ overlayer system) pave the way for developing a new, purely optical method of distinguishing BP monolayers.
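Given any sheet conductivity σ(ω), reflectance and transmittance follow from standard thin-sheet boundary conditions; the sketch below is the simple isotropic, normal-incidence special case (not the authors' anisotropic closed forms), and the graphene universal conductivity e²/4ħ is used only as a numeric sanity check:

```python
EPS0_C = 8.8541878128e-12 * 2.99792458e8  # eps0 * c = 1/Z0, in siemens

def sheet_rta(sigma, n1=1.0, n2=1.0):
    """Power reflectance, transmittance and absorbance for a conducting
    sheet (real sheet conductivity sigma, in S) between media n1 and n2,
    at normal incidence."""
    s = sigma / EPS0_C                  # dimensionless sheet conductivity
    t = 2.0 * n1 / (n1 + n2 + s)        # transmission amplitude
    r = (n1 - n2 - s) / (n1 + n2 + s)   # reflection amplitude
    R = r * r
    T = (n2 / n1) * t * t
    return R, T, 1.0 - R - T
```

For a free-standing sheet with the graphene value, the absorbance comes out near the well-known πα ≈ 2.3%, which is a convenient check that the prefactors are right.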
Comparison of different models for EBCD calculation in the TJ-II Stellarator
NASA Astrophysics Data System (ADS)
García-Regaña, J. M.; Castejón, F.; Cappa, A.; Marushchenko, N. B.; Tereshchenko, M.
2010-06-01
In this work, we have compared different linear methods for estimating the electron Bernstein current drive (EBCD). The expressions for the current drive efficiency have been plugged into the ray tracing code TRUBA, which was used in previous works for electron Bernstein wave (EBW) heating studies in the TJ-II stellarator. This device is taken here as an example for the comparison. The driven current is calculated for different densities and temperatures, as well as launching directions of the heating beam, which is a critical issue in the O-X-B mode conversion scenario considered in TJ-II. The range of applicability of each model is discussed. The influence of the Ohkawa, relativistic and frictional trapping effects on the total current generated is studied by comparing the results obtained by pairs of models that include and neglect those effects. The Ohkawa effect turns out to be the least important. Although the relativistic effects are not negligible, the main disagreement between the results arises from whether momentum conservation is included and whether frictional trapping effects are neglected. The total EBCD current drive efficiency calculated is in all cases greater than the experimental ECCD efficiency previously measured in TJ-II. The results presented in this work provide a guideline for future experiments in this device.
A model to calculate consistent atmospheric emission projections and its application to Spain
NASA Astrophysics Data System (ADS)
Lumbreras, Julio; Borge, Rafael; de Andrés, Juan Manuel; Rodríguez, Encarnación
Global warming and air quality are headline environmental issues of our time and policy must preempt negative international effects with forward-looking strategies. As part of the revision of the European National Emission Ceilings Directive, atmospheric emission projections for European Union countries are being calculated. These projections are useful to drive European air quality analyses and to support wide-scale decision-making. However, when evaluating specific policies and measures at sectoral level, a more detailed approach is needed. This paper presents an original methodology to evaluate emission projections. Emission projections are calculated for each emitting activity that has emissions under three scenarios: without measures (business as usual), with measures (baseline) and with additional measures (target). The methodology developed allows the estimation of highly disaggregated multi-pollutant, consistent emissions for a whole country or region. In order to assure consistency with past emissions included in atmospheric emission inventories and coherence among the individual activities, the consistent emission projection (CEP) model incorporates harmonization and integration criteria as well as quality assurance/quality check (QA/QC) procedures. This study includes a sensitivity analysis as a first approach to uncertainty evaluation. The aim of the model presented in this contribution is to support decision-making process through the assessment of future emission scenarios taking into account the effect of different detailed technical and non-technical measures and it may also constitute the basis for air quality modelling. The system is designed to produce the information and formats related to international reporting requirements and it allows performing a comparison of national results with lower resolution models such as RAINS/GAINS. The methodology has been successfully applied and tested to evaluate Spanish emission projections up to 2020 for 26
Modeling a superficial radiotherapy X-ray source for relative dose calculations.
Johnstone, Christopher D; LaFontaine, Richard; Poirier, Yannick; Tambasco, Mauro
2015-05-08
The purpose of this study was to empirically characterize and validate a kilovoltage (kV) X-ray beam source model of a superficial X-ray unit for relative dose calculations in water and assess the accuracy of the British Journal of Radiology Supplement 25 (BJR 25) percentage depth dose (PDD) data. We measured central axis PDDs and dose profiles using an Xstrahl 150 X-ray system. We also compared the measured and calculated PDDs to those in the BJR 25. The Xstrahl source was modeled as an effective point source with varying spatial fluence and spectra. In-air ionization chamber measurements were made along the x- and y-axes of the X-ray beam to derive the spatial fluence, and half-value layer (HVL) measurements were made to derive the spatially varying spectra. This beam characterization and resulting source model was used as input for our in-house dose calculation software (kVDoseCalc) to compute radiation dose at points of interest (POIs). The PDDs and dose profiles were measured using 2, 5, and 15 cm cone sizes at 80, 120, 140, and 150 kVp energies in a scanning water phantom using IBA Farmer-type ionization chambers of volumes 0.65 and 0.13 cc, respectively. The percent difference in the computed PDDs compared with our measurements ranges from -4.8% to 4.8%, with an overall mean percent difference and standard deviation of 1.5% and 0.7%, respectively. The percent difference between our PDD measurements and those from BJR 25 ranges from -14.0% to 15.7%, with an overall mean percent difference and standard deviation of 4.9% and 2.1%, respectively, showing that the measurements are in much better agreement with kVDoseCalc than BJR 25. The range in percent difference between kVDoseCalc and measurement for profiles was -5.9% to 5.9%, with an overall mean percent difference and standard deviation of 1.4% and 1.4%, respectively. The results demonstrate that our empirically based X-ray source modeling approach for superficial X-ray therapy can be used to accurately
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
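A sketch of the logistic-regression case under the recipe stated above (two equal groups whose logits differ by the slope times twice the covariate SD; splitting symmetrically about the overall logit approximately preserves the expected event count — the function name and the two-proportion normal approximation are our choices):

```python
import math
from scipy.stats import norm

def logistic_power_two_sample(beta, sd_x, p_overall, n_total, alpha=0.05):
    """Approximate power for H0: beta = 0 in logistic regression via the
    equivalent two-sample problem: two groups of n/2 subjects whose
    logits differ by beta * 2 * sd_x, tested as a difference of
    proportions with a two-sided normal test."""
    delta = beta * 2.0 * sd_x
    logit = math.log(p_overall / (1.0 - p_overall))
    p1 = 1.0 / (1.0 + math.exp(-(logit - delta / 2.0)))
    p2 = 1.0 / (1.0 + math.exp(-(logit + delta / 2.0)))
    n = n_total / 2.0
    se = math.sqrt(p1 * (1.0 - p1) / n + p2 * (1.0 - p2) / n)
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    z = abs(p2 - p1) / se
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)
```

When beta = 0 the two groups coincide and the computed "power" reduces to the significance level, a quick consistency check.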
Modeling for calculation of vanadium oxide film composition in reactive-sputtering process
Yu He; Jiang Yadong; Wang Tao; Wu Zhiming; Yu Junsheng; Wei Xiongbang
2010-05-15
A modified model describing the changing ratio of vanadium to oxygen on the target and substrate as a function of oxygen flow is described. This ratio is extremely sensitive to the deposition conditions during the vanadium oxide (VOₓ) reactive magnetron-sputtering process. The method in this article is an extension of the previously presented Berg model, in which only a single-stoichiometry compound layer was taken into consideration. This work deals with reactive magnetron sputtering of vanadium oxide films with different oxygen contents from a vanadium metal target. The presence of mixed vanadium oxides produced at both the target and substrate surfaces during the reactive-sputtering process is included. It is shown that the model can be used for the optimization of film composition with respect to oxygen flow in a stable, hysteresis-free reactive-sputtering process. A systematic experimental study of the deposition rate of VOₓ with respect to target ion current was also made. It was verified that the theoretical calculations from the model are in good agreement with the experimental results.

CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
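The quantile step can be sketched generically: draw parameter sets, push them through the model, optionally add dependent-variable error, and read off empirical quantiles (all names and distributions below are our illustrative choices, not the paper's ground-water application):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_interval(model, draw_params, error_sd=0.0, n_draws=20000, level=0.95):
    """Monte Carlo confidence/prediction interval for a model output:
    with error_sd = 0 this is a confidence interval from parameter
    uncertainty alone; error_sd > 0 adds random error in the dependent
    variable, giving a (wider) prediction interval."""
    out = np.array([model(draw_params()) for _ in range(n_draws)])
    if error_sd > 0.0:
        out = out + rng.normal(0.0, error_sd, size=n_draws)
    tail = (1.0 - level) / 2.0
    return np.quantile(out, [tail, 1.0 - tail])
```

With an identity model and standard-normal parameters the 95% interval should sit near ±1.96, and adding dependent-variable error must widen it, mirroring the paper's observation about prediction versus confidence intervals.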
Comparative Assessment of Models and Methods To Calculate Grid Electricity Emissions.
Ryan, Nicole A; Johnson, Jeremiah X; Keoleian, Gregory A
2016-09-01
Due to the complexity of power systems, tracking emissions attributable to a specific electrical load is a daunting challenge but essential for many environmental impact studies. Currently, no consensus exists on appropriate methods for quantifying emissions from particular electricity loads. This paper reviews a wide range of the existing methods, detailing their functionality, tractability, and appropriate use. We identified and reviewed 32 methods and models and classified them into two distinct categories: empirical data and relationship models and power system optimization models. To illustrate the impact of method selection, we calculate the CO2 combustion emissions factors associated with electric-vehicle charging using 10 methods at nine charging station locations around the United States. Across the methods, we found an up to 68% difference from the mean CO2 emissions factor for a given charging site among both marginal and average emissions factors and up to a 63% difference from the average across average emissions factors. Our results underscore the importance of method selection and the need for a consensus on approaches appropriate for particular loads and research questions being addressed in order to achieve results that are more consistent across studies and allow for soundly supported policy decisions. The paper addresses this issue by offering a set of recommendations for determining an appropriate model type on the basis of the load characteristics and study objectives.
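The average-versus-marginal distinction driving those spreads can be made concrete with two toy estimators (the regression-on-deltas marginal estimator is one common empirical choice, not a specific method from the review; all numbers in the check are invented):

```python
import numpy as np

def average_emissions_factor(generation_mwh, emissions_t):
    """System-average factor (t CO2 / MWh): total emissions over total generation."""
    return float(np.sum(emissions_t) / np.sum(generation_mwh))

def marginal_emissions_factor(generation_mwh, emissions_t):
    """Empirical marginal factor: zero-intercept regression of hour-to-hour
    emission changes on hour-to-hour generation changes."""
    dg = np.diff(generation_mwh)
    de = np.diff(emissions_t)
    return float(np.sum(dg * de) / np.sum(dg * dg))
```

In a system where a clean peaker follows load on top of dirtier constant baseload, the marginal factor tracks the peaker while the average factor blends both, so the two can differ substantially for the same hours, as the abstract's electric-vehicle example illustrates.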
Present state-of-the-art of two-phase flow model calculations
NASA Astrophysics Data System (ADS)
Lyczkowski, R. W.; Ding, Jianmin; Bouillard, J. X.
1992-07-01
Argonne National Laboratory (ANL) has developed two- and three-dimensional computer programs to predict hydrodynamics in complex fluid/solids systems including atmospheric and pressurized bubbling and circulating fluidized-bed combustors and gasifiers, concentrated suspension (slurry) piping systems and advanced particle-bed reactors for space-based applications, for example. The computer programs are based upon phenomenological mechanistic models and can predict frequency of bubble formation, bubble size and growth, bubble rise-velocity, solids volume fraction, gas and solids velocities and low-dimensional chaotic attractors. The results of these hydrodynamic calculations are used as inputs to mechanistic models to predict heat transfer and erosion and have been used to produce simplified models and guidelines to assist in design and scaling. An extensive coordinated effort involving industry, government, and university laboratory data has served to validate the various models. Babcock & Wilcox (B&W), in close collaboration with ANL, has developed the three-dimensional FORCE2 computer program which is both transient as well as steady state.
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average of the light fission yield is in the range of 90
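The two-Gaussian picture can be sampled directly; a sketch with invented peak positions and widths (the truncated figures in the abstract suggest a light peak near A ≈ 90–100, so the values in the check are only placeholders):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_fission_yields(n, mu_l, mu_r, sigma_l, sigma_r):
    """Draw n fragment mass numbers from the toy model's two intersecting
    Gaussians, choosing the light or heavy peak with equal probability."""
    heavy = rng.random(n) < 0.5
    light_draws = rng.normal(mu_l, sigma_l, n)
    heavy_draws = rng.normal(mu_r, sigma_r, n)
    return np.where(heavy, heavy_draws, light_draws)
```

Varying σ_L, σ_R or the peak means μ_L, μ_R directly changes the spread and asymmetry of the sampled distribution, which is the sensitivity the Monte Carlo study above reports.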
2014-01-01
Background Exact drug dosing in isolated limb perfusion (ILP) and infusion (ILI) is essential. We developed and evaluated a model for calculating the volume of extremities and compared this model with body weight- and height-dependent parameters. Methods The extremity was modeled as a row of coupled truncated cones. The sizes of the truncated cone bases were derived from circumference measurements of the extremity at predefined levels (5 cm apart), and the resulting volumes were added. This extremity volume model was correlated with the computed tomography (CT) volume data of the extremity (total limb volume). The extremity volume was also correlated with the patient's body weight, body mass index (BMI) and ideal body weight (IBW). The no-fat CT limb volume was correlated with the circumference-measured limb volume corrected by the ideal-body-weight to actual-body-weight ratio (IBW-corrected limb volume). Results The correlation between the CT volume and the volume measured by the circumference method was high and significant. There was no correlation between the limb volume and the bare body weight, BMI or IBW. The correlation between the no-fat CT volume and the IBW-corrected limb volume was high and significant. Conclusions Appropriate drug dosing in ILP can be achieved by combining the limb volume derived from simple circumference measurements with the IBW to body-weight ratio. PMID:24684972
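The coupled-truncated-cone model described above can be sketched directly: each pair of consecutive circumference measurements defines one frustum, and the frustum volumes are summed. The 5 cm spacing comes from the abstract; the circumference values below are invented for illustration.

```python
import math

# Sketch of the cone-frustum limb-volume model: the limb is approximated by
# stacked truncated cones whose base radii come from circumference
# measurements taken every 5 cm. Circumference values are illustrative.
def limb_volume(circumferences_cm, spacing_cm=5.0):
    """Sum frustum volumes between consecutive circumference measurements.

    Frustum volume: V = (pi*h/3) * (r1^2 + r1*r2 + r2^2), with r = C / (2*pi).
    Returns volume in cm^3 (1 cm^3 = 1 mL).
    """
    radii = [c / (2.0 * math.pi) for c in circumferences_cm]
    volume = 0.0
    for r1, r2 in zip(radii, radii[1:]):
        volume += (math.pi * spacing_cm / 3.0) * (r1 * r1 + r1 * r2 + r2 * r2)
    return volume

# Example: circumferences (cm) measured from ankle to thigh, every 5 cm.
measurements = [22, 23, 26, 30, 34, 38, 42, 46, 50, 53]
print(round(limb_volume(measurements)))  # total limb volume in mL
```

When all circumferences are equal the formula reduces to a cylinder volume, which is a convenient sanity check on the implementation.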
Calculation of Heavy Ion Inactivation and Mutation Rates in Radial Dose Model of Track Structure
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.; Shavers, Mark R.; Katz, Robert
1997-01-01
In the track structure model, the inactivation cross section is found by summing an inactivation probability over all impact parameters from the ion to the sensitive sites within the cell nucleus. The inactivation probability is evaluated by using the dose response of the system to gamma rays and the radial dose of the ions, and may be equal to unity at small impact parameters. We apply the track structure model to recent data with heavy ion beams irradiating biological samples of E. coli, B. subtilis spores, and Chinese hamster (V79) cells. Heavy ions have observed inactivation cross sections that approach, and sometimes exceed, the geometric size of the cell nucleus. We show how the effects of inactivation may be taken into account in the evaluation of the mutation cross sections in the track structure model through correlation of sites for gene mutation and cell inactivation. The model is fit to available data for HPRT (hypoxanthine guanine phosphoribosyl transferase) mutations in V79 cells, and good agreement is found. Calculations show a high probability of mutation by relativistic ions due to the radial extension of the ion's track by delta rays. The effects of inactivation on mutation rates make it very unlikely that a single parameter such as LET (linear energy transfer) can be used to specify radiation quality for heavy ion bombardment.
Present state-of-the-art of two-phase flow model calculations
Lyczkowski, R.W.; Ding, Jianmin; Bouillard, J.X.
1992-07-01
Argonne National Laboratory (ANL) has developed two- and three-dimensional computer programs to predict hydrodynamics in complex fluid/solids systems, including atmospheric and pressurized bubbling and circulating fluidized-bed combustors and gasifiers, concentrated suspension (slurry) piping systems, and advanced particle-bed reactors for space-based applications. The computer programs are based upon phenomenological mechanistic models and can predict bubble formation frequency, bubble size and growth, bubble rise velocity, solids volume fraction, gas and solids velocities, and low-dimensional chaotic attractors. The results of these hydrodynamic calculations are used as inputs to mechanistic models to predict heat transfer and erosion, and have been used to produce simplified models and guidelines to assist in design and scaling. An extensive coordinated effort involving industry, government, and university laboratory data has served to validate the various models. Babcock & Wilcox (B&W), in close collaboration with ANL, has developed the three-dimensional FORCE2 computer program, which handles both transient and steady-state calculations.
Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite
NASA Astrophysics Data System (ADS)
Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia
2015-12-01
Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption onto minerals (e.g. diatomite) is an important means of controlling aqueous pollution in the environment, so it is essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on the calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) can be well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes together, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that the impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in the Pb(II) adsorption, and that complex formation together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.
Online calculation of global marine halocarbon emissions in the chemistry climate model EMAC
NASA Astrophysics Data System (ADS)
Lennartz, Sinikka T.; Krysztofiak-Tong, Gisèle; Sinnhuber, Björn-Martin; Marandino, Christa A.; Tegtmeier, Susann; Krüger, Kirstin; Ziska, Franziska; Quack, Birgit
2015-04-01
Marine produced trace gases such as dibromomethane (CH2Br2), bromoform (CHBr3) and methyl iodide (CH3I) significantly impact tropospheric and stratospheric chemistry. Marine emissions are the dominant source of halocarbons to the atmosphere, and it is therefore crucial to represent them accurately in order to model their impact on atmospheric chemistry. Chemistry climate models are a frequently used tool for quantifying the influence of halocarbons on ozone depletion. In these model simulations, marine emissions of halocarbons have mainly been prescribed from established emission climatologies, thus neglecting the interaction with the actual state of the atmosphere in the model. Here, we calculate marine halocarbon emissions online for the first time by coupling the submodel AIRSEA to the chemistry climate model EMAC. Our method combines prescribed water concentrations with varying atmospheric concentrations derived from the model instead of using fixed emission climatologies. This method has a number of conceptual and practical advantages, as the modelled emissions can respond consistently to changes in temperature, wind speed, possible sea ice cover and atmospheric concentration in the model. Differences between the climatology-based and the new approach (2-18%) result from consideration of the actual, time-varying state of the atmosphere and of air-side transfer velocities. Extensive comparison to observations from aircraft, ships and ground stations reveals that interactively computing the air-sea flux from prescribed water concentrations leads to equally or more accurate atmospheric concentrations in the model compared to using constant emission climatologies. The effect of considering the actual state of the atmosphere is largest for gases with concentrations close to equilibrium in the surface ocean, such as CH2Br2. Halocarbons with comparably long atmospheric lifetimes, e.g. CH2Br2, are reflected more accurately in EMAC when compared to time
Evaluating range-expansion models for calculating nonnative species' expansion rate.
Preuss, Sonja; Low, Matthew; Cassel-Lundhagen, Anna; Berggren, Asa
2014-07-01
Species range shifts associated with environmental change or biological invasions are increasingly important study areas. However, quantifying range expansion rates may be heavily influenced by methodology and/or sampling bias. We compared expansion rate estimates of Roesel's bush-cricket (Metrioptera roeselii, Hagenbach 1822), a nonnative species currently expanding its range in south-central Sweden, from range statistic models based on distance measures (mean, median, 95th gamma quantile, marginal mean, maximum, and conditional maximum) and an area-based method (grid occupancy). We used sampling simulations to determine the sensitivity of the different methods to incomplete sampling across the species' range. For periods when we had comprehensive survey data, range expansion estimates clustered into two groups: (1) those calculated from range margin statistics (gamma, marginal mean, maximum, and conditional maximum: ˜3 km/year), and (2) those calculated from the central tendency (mean and median) and the area-based method of grid occupancy (˜1.5 km/year). Range statistic measures differed greatly in their sensitivity to sampling effort; the proportion of sampling required to achieve an estimate within 10% of the true value ranged from 0.17 to 0.9. Grid occupancy and median were most sensitive to sampling effort, and the maximum and gamma quantile the least. If periods with incomplete sampling were included in the range expansion calculations, this generally lowered the estimates (range 16-72%), with exception of the gamma quantile that was slightly higher (6%). Care should be taken when interpreting rate expansion estimates from data sampled from only a fraction of the full distribution. Methods based on the central tendency will give rates approximately half that of methods based on the range margin. The gamma quantile method appears to be the most robust to incomplete sampling bias and should be considered as the method of choice when sampling the entire
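As a rough illustration of how the distance-based range statistics lead to different expansion-rate estimates, the sketch below compares a few of them on invented survey data; the 95th "gamma quantile" is approximated here by a plain 95th percentile, and no real survey data are used.

```python
import statistics

# Illustrative comparison of distance-based range statistics: given distances
# (km) of occurrence records from an introduction site in two survey years,
# each statistic yields its own expansion-rate estimate. Data are invented;
# the "q95" here is a plain percentile, not the paper's gamma quantile.
def range_statistics(distances):
    d = sorted(distances)
    n = len(d)
    return {
        "mean": statistics.mean(d),            # central-tendency measure
        "median": statistics.median(d),        # central-tendency measure
        "q95": d[min(n - 1, int(0.95 * n))],   # range-margin measure
        "max": d[-1],                          # range-margin measure
    }

def expansion_rates(year1, year2, dt_years):
    """Per-statistic expansion rate (km/year) between two surveys."""
    s1, s2 = range_statistics(year1), range_statistics(year2)
    return {k: (s2[k] - s1[k]) / dt_years for k in s1}

# Invented survey distances (km), five years apart.
survey_2005 = [2, 5, 8, 9, 12, 15, 18, 20, 22, 25]
survey_2010 = [4, 8, 12, 15, 18, 22, 26, 30, 34, 40]
rates = expansion_rates(survey_2005, survey_2010, 5.0)
for stat, rate in rates.items():
    print(f"{stat:>6}: {rate:.2f} km/year")
```

Even on these made-up numbers the margin-based statistics (max, q95) give roughly twice the rate of the central-tendency ones, mirroring the two clusters the abstract reports.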
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Determination of a silane intermolecular force field potential model from an ab initio calculation
Li, Arvin Huang-Te; Chao, Sheng D.; Chang, Chien-Cheng
2010-12-15
Intermolecular interaction potentials of the silane dimer in 12 orientations have been calculated by using the Hartree-Fock (HF) self-consistent field theory and the second-order Moeller-Plesset (MP2) perturbation theory. We employed basis sets from Pople's medium-size basis sets [up to 6-311++G(3df, 3pd)] and Dunning's correlation consistent basis sets (up to the triply augmented correlation-consistent polarized valence quadruple-zeta basis set). We found that the minimum energy orientations were the G and H conformers. We suggest that the Si-H attractions, the size of the central silicon atom, and its electronegativity play essential roles in the weak binding of the silane dimer. The calculated MP2 potential data were employed to parametrize a five-site force field for molecular simulations. The Si-Si, Si-H, and H-H interaction parameters in a pairwise-additive, site-site potential model for silane molecules were regressed from the ab initio energies.
Formation enthalpies of Al-Fe-Zr-Nd system calculated by using geometric and Miedema's models
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wang, Rongcheng; Tao, Xiaoma; Guo, Hui; Chen, Hongmei; Ouyang, Yifang
2015-04-01
Formation enthalpy is important for the phase stability and amorphous-forming ability of alloys. The formation enthalpies of Fe17RE2 (RE=Ce, Pr, Nd, Gd and Er) obtained by Miedema's theory are in good agreement with experiment. The dependence of the formation enthalpy on the Al concentration for the intermetallic (AlxFe1-x)17Nd2 has been calculated by Miedema's theory and the geometric model. The solid solubility of Al in (AlxFe1-x)17Nd2 is consistent with the concentration dependence of the formation enthalpy. The mixing enthalpies of liquid alloys and the formation enthalpies of alloys in the Al-Fe-Zr-Nd system have been predicted. The calculated mixing enthalpy indicates that adding Fe or Nd monotonically decreases the magnitude of the enthalpy. The formation enthalpies of the Al-Fe-Zr-Nd system indicate that the shape of the enthalpy contour map changes while the Al content is less than 50.0 at% and then remains unchanged apart from a decrease in magnitude. The formation enthalpy of Al-Fe-Zr-Nd increases with increasing Fe and/or Nd content. The negative formation enthalpy indicates that the Al-Fe-Zr-Nd system has a high amorphous-forming ability and a wide amorphous-forming range. Certain contents of Zr and/or Al are beneficial for the formation of Al-Fe-Zr-Nd intermetallics.
Simplified model of an O-ring-driven liquid-filled lens for calculating focal length
NASA Astrophysics Data System (ADS)
Lin, Chih-Wei; Shaw, Dein
2009-07-01
The purpose of this study was to develop a mathematical model that could be used to obtain the approximate focal length of O-ring-driven liquid-filled lenses. An O-ring-driven liquid-filled lens is composed of a base plate, a glass-covered liquid reservoir, a pliable membrane, an O-ring, a spring, and three actuators. The movement of the ring changes the focal length or the focus position. In previous studies, the commercial software ANSYS was used to find the membrane deformation and ZEMAX was used to find the focal length; the procedures used in those studies are complicated and generally require considerable design work. The proposed mathematical method employs the principle of liquid volume conservation to simplify the calculation that approximates the focal length of the lens. The result was confirmed with ZEMAX to ensure that the method is practicable. Consequently, the focal lengths of lenses with different ring thicknesses, radii, and membrane squeezing depths can be calculated immediately.
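The volume-conservation idea can be sketched in a few lines: liquid displaced by pressing the O-ring down is assumed to reform as a spherical cap over the central aperture, whose radius of curvature then gives the focal length through the thin plano-convex lens formula f = R/(n - 1). This is a hedged simplification of the approach described above, not the paper's actual model; all dimensions and the refractive index are illustrative assumptions.

```python
import math

# Minimal sketch, assuming the displaced liquid forms a spherical cap and
# the thin-lens formula f = R/(n - 1) applies. All numbers are illustrative.

def cap_height_from_volume(v, a):
    """Solve V = (pi*h/6)*(3*a^2 + h^2) for the cap height h by bisection."""
    lo, hi = 0.0, 2.0 * a
    for _ in range(100):
        h = 0.5 * (lo + hi)
        if math.pi * h * (3 * a * a + h * h) / 6.0 < v:
            lo = h
        else:
            hi = h
    return 0.5 * (lo + hi)

def focal_length(ring_inner_r, ring_outer_r, squeeze_depth, aperture_r, n_liquid):
    """Focal length for a given O-ring squeeze depth (all lengths in mm)."""
    # Volume displaced by pressing the annular O-ring region down uniformly.
    dv = math.pi * (ring_outer_r**2 - ring_inner_r**2) * squeeze_depth
    h = cap_height_from_volume(dv, aperture_r)
    # Sphere radius of a cap with aperture radius a and height h.
    radius_of_curvature = (aperture_r**2 + h * h) / (2.0 * h)
    return radius_of_curvature / (n_liquid - 1.0)

# Example: 8-10 mm O-ring annulus pressed 0.5 mm, 6 mm aperture, water-like liquid.
print(round(focal_length(8.0, 10.0, 0.5, 6.0, 1.33), 1))
```

Deeper squeezing yields a taller cap, a smaller radius of curvature, and hence a shorter focal length, which is the tuning behavior the lens design exploits.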
NASA Technical Reports Server (NTRS)
Barton, Jonathan S.; Hall, Dorothy K.; Sigurosson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.
1999-01-01
Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.
Model calculated global, regional and megacity premature mortality due to air pollution
NASA Astrophysics Data System (ADS)
Lelieveld, J.; Barlas, C.; Giannadaki, D.; Pozzer, A.
2013-03-01
Air pollution by fine particulate matter (PM2.5) and ozone (O3) has increased strongly with industrialization and urbanization. We estimated the premature mortality rates and the years of human life lost (YLL) caused by anthropogenic PM2.5 and O3 in 2005 for epidemiological regions defined by the World Health Organization. We carried out high-resolution global model calculations to resolve urban and industrial regions in greater detail compared to previous work. We applied a health impact function to estimate premature mortality for people of 30 yr and older, using parameters derived from epidemiological cohort studies. Our results suggest that especially in large countries with extensive suburban and rural populations, air pollution-induced mortality rates have previously been underestimated. We calculate a global respiratory mortality of about 773 thousand yr-1 (YLL ≈ 5.2 million yr-1), 186 thousand yr-1 by lung cancer (YLL ≈ 1.7 million yr-1) and 2.0 million yr-1 by cardiovascular disease (YLL ≈ 14.3 million yr-1). The global mean per capita mortality caused by air pollution is about 0.1 % yr-1. The highest premature mortality rates are found in the Southeast Asia and Western Pacific regions (about 25% and 46% of the global rate, respectively) where more than a dozen of the most highly polluted megacities are located.
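A health impact function of the general kind referred to above can be sketched as follows. The log-linear concentration-response form and every number below are illustrative assumptions, not the paper's parameterization.

```python
import math

# Hedged sketch of a generic health-impact function: excess PM2.5 exposure
# is mapped to a relative risk RR via a log-linear concentration-response,
# and the attributable fraction (RR - 1)/RR scales the baseline mortality
# of the population aged 30+. All numbers are illustrative assumptions.
def premature_mortality(pm25, threshold, beta, baseline_rate, population_30plus):
    """Annual premature deaths attributable to PM2.5 above a threshold."""
    excess = max(0.0, pm25 - threshold)   # ug/m^3 above the counterfactual level
    rr = math.exp(beta * excess)          # log-linear relative risk
    attributable_fraction = (rr - 1.0) / rr
    return attributable_fraction * baseline_rate * population_30plus

# Illustrative megacity: 60 ug/m^3 annual PM2.5, 5 million adults aged 30+,
# baseline cardiovascular mortality 0.8% per year (all values hypothetical).
deaths = premature_mortality(pm25=60.0, threshold=7.5, beta=0.006,
                             baseline_rate=0.008, population_30plus=5_000_000)
print(round(deaths))
```

The structure makes clear why suburban and rural populations matter: the estimate scales linearly with the exposed population, so coarse grids that miss moderately polluted populous areas understate the total.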
NASA Astrophysics Data System (ADS)
Stumpf, Harald
2006-09-01
Based on the assumption that electroweak bosons, leptons and quarks possess a substructure of elementary fermionic constituents, in previous papers the effect of CP-symmetry breaking on the effective dynamics of these particles was calculated. Motivated by the phenomenological procedure in this paper, isospin symmetry breaking will be added and the physical consequences of these calculations will be discussed. The dynamical law of the fermionic constituents is given by a relativistically invariant nonlinear spinor field equation with local interaction, canonical quantization, selfregularization and probability interpretation. The corresponding effective dynamics is derived by algebraic weak mapping theorems. In contrast to the commonly applied modifications of the quark mass matrices, CP-symmetry breaking is introduced into this algebraic formalism by an inequivalent vacuum with respect to the CP-invariant case, represented by a modified spinor field propagator. This leads to an extension of the standard model as effective theory which contains besides the "electric" electroweak bosons additional "magnetic" electroweak bosons and corresponding interactions. If furthermore the isospin invariance of the propagator is broken too, it will be demonstrated in detail that in combination with CP-symmetry breaking this induces a considerable modification of electroweak nuclear reaction rates.
Generic models of deep formation water calculated with PHREEQC using the "gebo"-database
NASA Astrophysics Data System (ADS)
Bozau, E.; van Berk, W.
2012-04-01
To identify processes during the use of formation waters for geothermal energy production, an extended hydrogeochemical thermodynamic database (named the "gebo"-database) for the well-known and commonly used software PHREEQC has been developed by collecting and inserting data from the literature. The following solution master species: Fe(+2), Fe(+3), S(-2), C(-4), Si, Zn, Pb, and Al are added to the database "pitzer.dat", which is provided with the code PHREEQC. According to the solution master species, the necessary solution species and phases (solid phases and gases) are implemented. Furthermore, temperature and pressure adaptations of the mass action law constants, Pitzer parameters for the calculation of activity coefficients in waters of high ionic strength, and solubility equilibria among gaseous and aqueous species of CO2, methane, and hydrogen sulphide are implemented in the "gebo"-database. Combined with the "gebo"-database, the code PHREEQC can be used to test the behaviour of highly concentrated solutions (e.g. formation waters, brines). Chemical changes caused by temperature and pressure gradients as well as by the exposure of the water to the atmosphere and technical equipment can be modelled. To check the plausibility of additional and adapted data/parameters, experimental solubility data from the literature (e.g. for sulfate and carbonate minerals) are compared to modelled mineral solubilities at elevated levels of Total Dissolved Solids (TDS), temperature, and pressure. First results show good matches between modelled and experimental mineral solubilities for barite, celestite, anhydrite, and calcite in high-TDS waters, indicating the plausibility of the additional and adapted data and parameters. Furthermore, chemical parameters of geothermal wells in the North German Basin are used to test the "gebo"-database. The analysed water composition (starting with the main cations and anions) is calculated by thermodynamic equilibrium reactions of pure water with the minerals found in
Wang, Junmei; Cieplak, Piotr; Li, Jie; Cai, Qin; Hsieh, Meng-Juei; Luo, Ray; Duan, Yong
2012-06-21
In the previous publications of this series, we presented a set of Thole induced dipole interaction models using four types of screening functions. In this work, we document our effort to refine the van der Waals parameters for the Thole polarizable models. Following the philosophy of AMBER force field development, the van der Waals (vdW) parameters were tuned for the Thole model with the linear screening function to reproduce both the ab initio interaction energies and the experimental densities of pure liquids. An in-house genetic algorithm was applied to maximize the fitness of "chromosomes", which is a function of the root-mean-square errors (RMSE) of interaction energy and liquid density. To efficiently explore the vdW parameter space, a novel approach was developed to estimate the liquid densities for a given vdW parameter set using the mean residue-residue interaction energies through interpolation/extrapolation. This approach allowed the costly molecular dynamics simulations to be performed only at the end of each optimization cycle, eliminating the simulations during the cycle. Test results show notable improvements over the original AMBER FF99 vdW parameter set, as indicated by the reduction in errors of the calculated pure liquid densities (d), heats of vaporization (H(vap)), and hydration energies. The average percent error (APE) of the densities of 59 pure liquids was reduced from 5.33 to 2.97%; the RMSE of H(vap) was reduced from 1.98 to 1.38 kcal/mol; the RMSE of solvation free energies of 15 compounds was reduced from 1.56 to 1.38 kcal/mol. For the interaction energies of 1639 dimers, the overall performance of the optimized vdW set is slightly better than that of the original FF99 vdW set (RMSE of 1.56 versus 1.63 kcal/mol). The optimized vdW parameter set was also evaluated with the exponential screening function used in the Amoeba force field to assess its applicability to different types of screening functions. Encouragingly, comparable performance was
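The genetic-algorithm fitting loop can be illustrated with a minimal sketch. The Lennard-Jones energy form and the synthetic reference data below are stand-ins for the Thole-model energies and ab initio targets; none of this is the paper's actual setup.

```python
import random

# Minimal GA sketch: each "chromosome" is a candidate (epsilon, r_min) vdW
# pair, and fitness is the negative RMSE against reference energies. The
# Lennard-Jones form and reference data are illustrative stand-ins.
REF_R = [3.0, 3.5, 4.0, 4.5, 5.0, 6.0]        # distances (Angstrom)

def lj(r, eps, rmin):
    """Lennard-Jones energy with well depth eps at distance rmin."""
    x = (rmin / r) ** 6
    return eps * (x * x - 2.0 * x)

REF_E = [lj(r, 0.30, 3.8) for r in REF_R]     # synthetic "ab initio" targets

def rmse(eps, rmin):
    return (sum((lj(r, eps, rmin) - e) ** 2 for r, e in zip(REF_R, REF_E))
            / len(REF_R)) ** 0.5

def fit_ga(generations=60, pop_size=30, seed=7):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 1.0), rng.uniform(3.0, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: rmse(*c))
        parents = pop[: pop_size // 2]              # selection: keep fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            (e1, r1), (e2, r2) = rng.sample(parents, 2)
            eps = 0.5 * (e1 + e2) + rng.gauss(0, 0.02)   # crossover + mutation
            rmin = 0.5 * (r1 + r2) + rng.gauss(0, 0.05)
            children.append((max(1e-3, eps), max(2.0, rmin)))
        pop = parents + children
    return min(pop, key=lambda c: rmse(*c))

best = fit_ga()
print(f"eps = {best[0]:.3f}, rmin = {best[1]:.3f}, rmse = {rmse(*best):.4f}")
```

In the real workflow the expensive fitness evaluation (liquid-density simulation) is exactly what the paper's interpolation trick avoids during the loop; here the cheap analytic RMSE plays that role.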
S-model calculations for high-energy-electron-impact double ionization of helium
NASA Astrophysics Data System (ADS)
Gasaneo, G.; Mitnik, D. M.; Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.
2013-04-01
In this paper the double ionization of helium by high-energy electron impact is studied. The corresponding four-body Schrödinger equation is transformed into a set of driven equations containing successive orders in the projectile-target interaction. The transition amplitude obtained from the asymptotic limit of the first-order solution is shown to be equivalent to the familiar first Born approximation. The first-order driven equation is solved within a generalized Sturmian approach for an S-wave (e,3e) model process with high incident energy and small momentum transfer corresponding to published measurements. Two independent numerical implementations, one using spherical and the other hyperspherical coordinates, yield mutual agreement. From our ab initio solution, the transition amplitude is extracted, and single differential cross sections are calculated and could be taken as benchmark values to test other numerical methods in a previously unexplored energy domain.
Realistic shell-model calculations and exotic nuclei around {sup 132}Sn
Covello, A.; Itaco, N.; Coraggio, L.; Gargano, A.
2008-11-11
We report on a study of exotic nuclei around doubly magic {sup 132}Sn in terms of the shell model employing a realistic effective interaction derived from the CD-Bonn nucleon-nucleon potential. The short-range repulsion of the latter is renormalized by constructing a smooth low-momentum potential, V{sub low-k}, that is used directly as input for the calculation of the effective interaction. In this paper, we focus attention on proton-neutron multiplets in the odd-odd nuclei {sup 134}Sb, {sup 136}Sb. We show that the behavior of these multiplets is quite similar to that of the analogous multiplets in the counterpart nuclei in the {sup 208}Pb region, {sup 210}Bi and {sup 212}Bi.
β-decay half-life of V50 calculated by the shell model
NASA Astrophysics Data System (ADS)
Haaranen, M.; Srivastava, P. C.; Suhonen, J.; Zuber, K.
2014-10-01
In this work we survey the detectability of the β⁻ channel of ⁵⁰V leading to the first excited 2⁺ state in ⁵⁰Cr. The electron-capture (EC) half-life corresponding to the transition of ⁵⁰V to the first excited 2⁺ state in ⁵⁰Ti had been measured earlier. Both of the mentioned transitions are 4th-forbidden non-unique. We have performed calculations of all the involved wave functions by using the nuclear shell model with the GXPF1A interaction in the full f-p shell. The computed half-life of the EC branch is in good agreement with the measured one. The predicted half-life for the β⁻ branch is in the range of ≈2×10¹⁹ yr, whereas the present experimental lower limit is 1.5×10¹⁸ yr. We also discuss the experimental layout needed to detect the β⁻-branch decay.
Ab Initio No-Core Shell Model Calculations Using Realistic Two- and Three-Body Interactions
Navratil, P; Ormand, W E; Forssen, C; Caurier, E
2004-11-30
There has been significant progress in ab initio approaches to the structure of light nuclei. One such method is the ab initio no-core shell model (NCSM). Starting from realistic two- and three-nucleon interactions, this method can predict low-lying levels in p-shell nuclei. In this contribution, we present a brief overview of the NCSM with examples of recent applications. We highlight our study of the parity inversion in {sup 11}Be, for which calculations were performed in basis spaces up to 9ℏΩ (dimensions reaching 7 x 10{sup 8}). We also present our latest results for the p-shell nuclei using the Tucson-Melbourne TM three-nucleon interaction with several proposed parameter sets.
Flow aerodynamics modeling of an MHD swirl combustor - Calculations and experimental verification
NASA Technical Reports Server (NTRS)
Gupta, A. K.; Beer, J. M.; Louis, J. F.; Busnaina, A. A.; Lilley, D. G.
1981-01-01
The paper describes a computer code for calculating the flow dynamics of a constant-density flow in the second-stage, trumpet-shaped nozzle section of a two-stage MHD swirl combustor for application to a disk generator. The primitive-variable (pressure-velocity), finite-difference computer code has been developed for the computation of inert, nonreacting turbulent swirling flows in an axisymmetric MHD model swirl combustor. The method and program involve a staggered grid system for axial and radial velocities, and a line relaxation technique for the efficient solution of the equations. The code produces as output a flow field map of the nondimensional stream function and the axial and swirl velocities. It was found that, for seed injected at the entrance to the second-stage combustor, the best injection location for obtaining a uniform distribution at the combustor exit is the central position.
Symmetry-Adapted Ab Initio Shell Model for Nuclear Structure Calculations
NASA Astrophysics Data System (ADS)
Draayer, J. P.; Dytrych, T.; Launey, K. D.; Langr, D.
2012-05-01
An innovative concept, the symmetry-adapted ab initio shell model, that capitalizes on partial as well as exact symmetries that underpin the structure of nuclei, is discussed. This framework is expected to inform the leading features of nuclear structure and reaction data for light and medium mass nuclei, which are currently inaccessible by theory and experiment and for which predictions of modern phenomenological models often diverge. We use powerful computational and group-theoretical algorithms to perform ab initio CI (configuration-interaction) calculations in a model space spanned by SU(3) symmetry-adapted many-body configurations with the JISP16 nucleon-nucleon interaction. We demonstrate that the results for the ground states of light nuclei up through A = 16 exhibit a strong dominance of low-spin and high-deformation configurations together with an evident symplectic structure. This, in turn, points to the importance of using a symmetry-adapted framework, one based on an LS coupling scheme with the associated spatial configurations organized according to deformation.
A Simplified 1-D Model for Calculating CO2 Leakage through Conduits
Zhang, Y.; Oldenburg, C.M.
2011-02-15
In geological CO₂ storage projects, a cap rock is generally needed to prevent CO₂ from leaking out of the storage formation. However, the injected CO₂ may still encounter discrete flow paths such as a conductive well or fault (here referred to as conduits) through the cap rock, allowing escape of CO₂ from the storage formation. As CO₂ migrates upward, it may migrate into the surrounding formations. The amount of mass that is lost to the formations is called attenuation. This report describes a simplified model to calculate the CO₂ mass flux at different locations along the conduit and the amount of attenuation to the surrounding formations. From the comparison among the three model results, we conclude that the steady-state conduit model (SSCM) provides a more accurate solution than the PMC at a given discretization. When there is not a large difference between the permeability of the surrounding formation and the permeability of the conduit, and there is leak-off at the bottom formation (the formation immediately above the CO₂ plume), a fine discretization is needed for an accurate solution. Based on this comparison, we propose to use the SSCM in the rapid prototype for now, given that it does not produce spurious oscillations and is already in FORTRAN, and therefore can easily be made into a DLL for use in GoldSim.
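As a rough illustration of the flux/attenuation bookkeeping described above (not the report's actual SSCM equations), leak-off can be treated as a first-order loss along the conduit, so the upward flux decays exponentially with height; the rate coefficient `lam` is a hypothetical parameter:

```python
import math

def conduit_flux(m0, lam, z):
    """CO2 mass flux remaining at height z (m) in a leaky conduit,
    assuming first-order leak-off with rate coefficient lam (1/m)."""
    return m0 * math.exp(-lam * z)

def attenuation(m0, lam, z):
    """Mass lost to the surrounding formations between the inlet and height z."""
    return m0 - conduit_flux(m0, lam, z)
```

With no leak-off (`lam = 0`) the flux is conserved along the conduit; larger `lam` shifts more mass into the surrounding formations before the conduit outlet.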
Frost, G. J.; Fried, Alan; Lee, Y.- N.; Wert, B.; Henry, B.; Drummond, J. R.; Evans, M. J.; Fehsenfeld, Fred C.; Goldan, P. D.; Holloway, J. S.; Hubler, Gerhard F.; Jakoubek, R.; Jobson, B Tom T.; Knapp, K.; Kuster, W. C.; Roberts, J.; Rudolph, Jochen; Ryerson, T. B.; Stohl, A.; Stroud, C.; Sueper, D. T.; Trainer, Michael; Williams, J.
2002-04-18
Formaldehyde (CH2O) measurements from two independent instruments are compared with photochemical box model calculations. The measurements were made on the National Oceanic and Atmospheric Administration P-3 aircraft as part of the 1997 North Atlantic Regional Experiment (NARE 97). The data set considered here consists of air masses sampled between 0 and 8 km over the North Atlantic Ocean which do not show recent influence from emissions or transport. These air masses therefore should be in photochemical steady state with respect to CH2O when constrained by the other P-3 measurements, and methane oxidation was expected to be the predominant source of CH2O in these air masses. For this data set both instruments measured identical CH2O concentrations to within 40 parts per trillion by volume (pptv) on average over the 0–800 pptv range, although differences larger than the combined 2σ total uncertainty estimates were observed between the two instruments in 11% of the data. Both instruments produced higher CH2O concentrations than the model in more than 90% of this data set, with a median measured-modeled [CH2O] difference of 0.13 or 0.18 ppbv (depending on the instrument), or about a factor of 2. Such large differences cannot be accounted for by varying model input parameters within their respective uncertainty ranges. After examining the possible reasons for the model-measurement discrepancy, we conclude that there are probably one or more additional unknown sources of CH2O in the North Atlantic troposphere.
Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.
Blann, Marshall
2014-11-01
Version 05. The HMS-ALICE/ALICE codes address the question: what happens when photons, nucleons, or clusters/heavy ions of a few hundred keV to several hundred MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n, p, ²H, ³H, ³He, ⁴He, ⁶Li) in the reaction, and the residues, the spallation and fission products. The models used are principally Monte Carlo formulations of the hybrid/geometry-dependent hybrid precompound model, Weisskopf-Ewing evaporation, Bohr-Wheeler fission, and, recently, a Fermi-statistics break-up model (for light nuclei). The angular distribution calculation relies on the Chadwick-Oblozinsky linear momentum conservation model. Output gives residual product yields, and single and double differential cross sections for ejectiles in lab and CM frames. An option allows 1-3 particle-out exclusive cross sections (ENDF format) for all combinations of n, p, alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle emission correlations, and much of this coding is still in place. Recoil product double-differential cross sections are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to the hard drive. Subsequent runs may use this file, use the file with line-editor changes, or begin again with the on-line interrogation.
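The Weisskopf-Ewing evaporation stage mentioned above produces emission spectra whose generic shape can be sketched, very schematically, as N(ε) ∝ ε·exp(−ε/T), which peaks at the nuclear temperature T. This toy form ignores the inverse cross section and level-density details that the actual code includes:

```python
import math

def evaporation_spectrum(eps, temperature):
    """Schematic Weisskopf-Ewing-like evaporation spectrum (arbitrary units):
    N(eps) ~ eps * exp(-eps/T). The peak falls at eps = T."""
    return eps * math.exp(-eps / temperature)
```

The rising factor ε reflects the growing emission phase space, while the Boltzmann factor suppresses high-energy ejectiles.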
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu
2014-01-01
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by the charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill".
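The sharp rise of dose at the Bragg peak follows from the stopping power increasing as the ion slows. A toy continuous-slowing-down sketch (not the GERM code's transport model; `k` is an arbitrary constant in a schematic 1/E stopping-power law) reproduces this qualitative shape:

```python
def bragg_curve(E0, k=1.0, dx=0.01):
    """Toy depth-dose profile for an ion of initial energy E0, assuming a
    schematic stopping power dE/dx = k/E, so the energy deposited per
    unit depth rises as the ion slows (the Bragg peak)."""
    E, x = E0, 0.0
    depth, dose = [], []
    while E > 0.05 * E0:  # stop tracking near the end of range
        dEdx = k / E
        depth.append(x)
        dose.append(dEdx)
        E -= min(E, dEdx * dx)
        x += dx
    return depth, dose
```

In this toy picture the maximum energy deposition per unit depth occurs at the deepest tracked point, mirroring the physical Bragg peak near the end of the ion's range.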
NASA Astrophysics Data System (ADS)
Mathews, Alyssa
Emissions from the combustion of fossil fuels are a growing pollution concern throughout the global community, as they have been linked to numerous health issues. The freight transportation sector is a large source of these emissions and is expected to continue growing as globalization persists. Within the US, the expanding development of the natural gas industry is helping to support many industries and leading to increased transportation. High Volume Hydraulic Fracturing (HVHF) is one of the newer advanced extraction techniques that is increasing natural gas and oil reserves dramatically within the US; however, the technique is very resource intensive. HVHF requires large volumes of water and sand per well, which are primarily transported by trucks in rural areas. Trucks are also used to transport waste away from HVHF well sites. This study focused on the emissions generated by the transportation of HVHF materials to remote well sites, their dispersion, and the subsequent health impacts. The Geospatial Intermodal Freight Transport (GIFT) model was used in this analysis within ArcGIS to identify roadways with high-volume traffic and emissions. High-traffic road segments were used as emissions sources to determine the atmospheric dispersion of particulate matter using AERMOD, an EPA model that calculates the geographic dispersion and concentrations of pollutants. Output from AERMOD was overlaid with census data to determine which communities may be impacted by increased emissions from HVHF transport. The anticipated number of mortalities within the impacted communities was calculated; the mortality rate from these additional emissions was computed to be 1 in 10 million people annually for a simulated truck fleet meeting the stricter 2007 emission standards, representing a best-case scenario. Mortality rates due to increased truck emissions from average in-use vehicles, which represent a mixed-age truck fleet, are expected to be higher (1 death per 341,000 people annually).
ERIC Educational Resources Information Center
Ramananantoandro, Ramanantsoa
1988-01-01
Presented is a description of a BASIC program, to be used on an IBM microcomputer, for calculating and plotting synthetic seismic-reflection traces for multilayered earth models. Discusses finding raypaths for given source-receiver offsets using the "shooting method" and calculating the corresponding travel times. (Author/CW)
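For a single flat reflector, the travel-time portion of such a program reduces to the hyperbolic moveout relation t(x) = sqrt(t0² + (x/v)²), where t0 is the two-way zero-offset time. A minimal sketch of that relation (in Python rather than the article's BASIC, with hypothetical argument names):

```python
import math

def reflection_time(offset, velocity, thickness):
    """Two-way reflection travel time (s) for one flat layer of the given
    thickness (m) and P-wave velocity (m/s), at source-receiver offset (m)."""
    t0 = 2.0 * thickness / velocity        # zero-offset two-way time
    return math.sqrt(t0**2 + (offset / velocity) ** 2)
```

A multilayer version would trace a raypath through each interface (the "shooting method" the article describes); this single-layer case is the closed-form building block.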
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.; Scott, Courtney J.; Shia, Run-Lie; Jackman, Charles; Fleming, Eric; Considine, David; Kinnison, Douglas; Connell, Peter; Rotman, Douglas
1998-01-01
In summary: (1) Some chemical differences in the background atmosphere are surprisingly large (NOy). (2) Differences in model transport explain the majority of the intermodel differences in the absence of PSCs. (3) With PSCs, large differences exist in predicted O3 depletion between models with the same transport. (4) The AER/LLNL model calculates more O3 depletion in the NH than the LLNL model. (5) The AER/GSFC model cannot match the calculated O3 depletion of the GSFC model in the SH. (6) Results are sensitive to interannual temperature variations (at least in the NH).
Drover, Damion Ryan
2011-12-01
One of the largest exports of the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management BMPs. The USDA Forest Service, along with Oak Ridge National Laboratory, the University of Georgia, and Oregon State University, is researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff from saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools makes it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any desired DEM resolution. Additionally, many existing maps are derived from various DEM resolutions, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different-resolution DEMs. For example, saturated areas can be under- or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes such as nitrogen, carbon, bulk density, and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a
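The topographic wetness index referred to above is conventionally TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A minimal sketch of that calculation for a single grid cell:

```python
import math

def wetness_index(specific_catchment_area, slope_deg):
    """Topographic wetness index TWI = ln(a / tan(beta)) for one cell.

    specific_catchment_area: upslope contributing area per unit contour
    length (m); slope_deg: local slope in degrees (must be > 0).
    """
    slope_rad = math.radians(slope_deg)
    return math.log(specific_catchment_area / math.tan(slope_rad))
```

Flat, high-accumulation cells get large TWI values (likely saturated variable source areas); steep, low-accumulation cells get small ones. Both inputs are derived from the DEM, which is why the index is sensitive to DEM resolution.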
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
CT scanner models in this dissertation were versatile and accurate tools for estimating dose to different patient phantoms undergoing various CT procedures. The organ doses from kV and MV CBCT were also calculated. This dissertation finally summarizes areas where future research can be performed including MV CBCT further validation and application, dose reporting software and image and dose correlation study.
Ulicny, Jozef; Leulliot, Nicolas; Ghomi, Mahmoud; Grajcar, Lydie; Baron, Marie-Helene; Jobic, Herve
1999-06-15
Geometry optimization as well as harmonic force field calculations at HF and DFT levels of theory have been performed in order to elucidate the ground state properties of anthrone and emodin, two polycyclic conjugated molecules considered as hypericin model compounds. NIS, IR and FT-Raman spectra of these compounds have been recorded to validate the calculated results (geometry and vibrational modes). Calculated NIS spectra using the lowest energy conformers are in agreement with experiment. In addition, the intramolecular H-bonds in emodin predicted by the calculations can be evidenced using IR spectra as a function of temperature.
Azoia, Nuno G; Fernandes, Margarida M; Micaêlo, Nuno M; Soares, Cláudio M; Cavaco-Paulo, Artur
2012-05-01
Molecular dynamics simulations of a keratin/peptide complex have been conducted to predict the binding affinity of four different peptides toward human hair. Free energy calculations of the peptides' interaction with the keratin model suggested that electrostatic interactions are the main driving force stabilizing the complex. The molecular mechanics Poisson-Boltzmann surface area methodology used for the free energy calculations showed that the dielectric constant in the protein's interior plays a major role in the free energy calculations, and the only way to obtain agreement between the free energy calculations and the experimental binding results was to use the average dielectric constant.
Legler, C R; Brown, N R; Dunbar, R A; Harness, M D; Nguyen, K; Oyewole, O; Collier, W B
2015-06-15
The Scaled Quantum Mechanical (SQM) method of scaling calculated force constants to predict theoretically calculated vibrational frequencies is expanded to include a broad array of polarized and augmented basis sets based on the split valence 6-31G and 6-311G basis sets with the B3LYP density functional. Pulay's original choice of a single polarized 6-31G(d) basis coupled with a B3LYP functional remains the most computationally economical choice for scaled frequency calculations, but it can be improved upon with additional polarization functions and added diffuse functions for complex molecular systems. The new scale factors for the B3LYP density functional and the 6-31G, 6-31G(d), 6-31G(d,p), 6-31G+(d,p), 6-31G++(d,p), 6-311G, 6-311G(d), 6-311G(d,p), 6-311G+(d,p), 6-311G++(d,p), 6-311G(2d,p), 6-311G++(2d,p), and 6-311G++(df,p) basis sets are shown. The doubly d-polarized models did not perform as well, and the source of the decreased accuracy was investigated. An alternate system of generating internal coordinates, which uses the out-of-plane wagging coordinate whenever possible, makes vibrational assignments via potential energy distributions more meaningful. Automated software to produce SQM-scaled vibrational calculations from different molecular orbital packages is presented. PMID:25766474
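Since a harmonic frequency scales as the square root of its force constant (ν ∝ √(k/μ)), scaling a force constant by a factor s scales the frequency by √s. A minimal sketch of this relationship (the 0.81 scale factor below is illustrative, not one of the paper's fitted values):

```python
import math

def scale_frequency(raw_freq_cm1, force_constant_scale):
    """Frequency implied by SQM-style force-constant scaling: multiplying
    a force constant by s multiplies the harmonic frequency by sqrt(s),
    since nu is proportional to sqrt(k/mu)."""
    return math.sqrt(force_constant_scale) * raw_freq_cm1
```

In the full SQM procedure, different classes of internal coordinates get different scale factors, and the scaled force-constant matrix is re-diagonalized; this one-liner captures only the diagonal limiting case.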
Load effects calculation according to EN 1991-2 Load Model 1
NASA Astrophysics Data System (ADS)
Slavchev, V.
2015-01-01
In the present paper, a fast calculation method for determining the load effects of EN 1991-2 LM1 is presented. The influence-line method was used for the calculation. Both bending moments and shear forces at a typical section can be easily calculated using a single table.
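For a simply supported beam, the influence line for the midspan bending moment is triangular with peak ordinate L/4, and the effect of a set of axle loads is the sum of load × ordinate. A minimal sketch under that textbook assumption (the 300 kN axle pair below is illustrative of an LM1-style tandem placed symmetrically about midspan, not the paper's tabulated procedure):

```python
def midspan_moment_influence(x, span):
    """Influence-line ordinate for the bending moment at midspan of a
    simply supported beam, for a unit load at position x (0 <= x <= span)."""
    if not 0.0 <= x <= span:
        return 0.0
    return x / 2.0 if x <= span / 2.0 else (span - x) / 2.0

def moment_from_axles(axle_loads, axle_positions, span):
    """Midspan bending moment: sum of axle load times influence ordinate."""
    return sum(q * midspan_moment_influence(x, span)
               for q, x in zip(axle_loads, axle_positions))
```

The distributed (UDL) part of LM1 would add q times the area under the influence line; only the concentrated tandem part is sketched here.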
Yamamoto, Daisuke; Marmorini, Giacomo; Danshita, Ippei
2015-01-16
Magnetization processes of spin-1/2 layered triangular-lattice antiferromagnets (TLAFs) under a magnetic field H are studied by means of a numerical cluster mean-field method with a scaling scheme. We find that small antiferromagnetic couplings between the layers give rise to several types of extra quantum phase transitions among different high-field coplanar phases. In particular, a field-induced first-order transition is found to occur at H ≈ 0.7H_s, where H_s is the saturation field, as another common quantum effect of ideal TLAFs in addition to the well-established one-third plateau. Our microscopic model calculation with appropriate parameters shows excellent agreement with experiments on Ba₃CoSb₂O₉ [T. Susuki et al., Phys. Rev. Lett. 110, 267201 (2013)]. Given this fact, we suggest that Co²⁺-based compounds may allow for quantum simulations of intriguing properties of this simple frustrated model, such as quantum criticality and supersolid states. PMID:25635561
Kidon, Lyran; Wilner, Eli Y; Rabani, Eran
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on Nakajima-Zwanzig-Mori time-convolution (TC) and the other on the Tokuyama-Mori time-convolutionless (TCL) formulations provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
Magnetic design calculation and FRC formation modeling for the field reversed experiment liner
Dorf, L. A.; Intrator, T. P.; Renneke, R.; Hsu, S. C.; Wurden, G. A.; Awe, T.; Siemon, R.; Semenov, V. E.
2008-10-01
Integrated magnetic modeling and design are important to meet the requirements for (1) formation, (2) translation, and (3) compression of a field reversed configuration (FRC) for magnetized target fusion. Off-the-shelf solutions do not exist for many generic design issues. A predictive capability for time-dependent magnetic diffusion in realistically complicated geometry is essential in designing the experiment. An eddy-current code was developed and used to compute the mutual inductances between driven magnetic coils and passive magnetic shields (flux excluder plates) to calculate the self-consistent axisymmetric magnetic fields during the first two stages. The plasma in the formation stage was modeled as an immobile solid cylinder with selectable constant resistivity and magnetic flux that was free to readjust itself. It was concluded that (1) use of experimentally obtained anomalously large plasma resistivity in magnetic diffusion simulations is sufficient to predict magnetic reconnection and FRC formation, (2) comparison of predicted and experimentally observed timescales for FRC Ohmic decay shows good agreement, and (3) for the typical range of resistivities, the magnetic null radius decay rate scales linearly with resistivity. The last result can be used to predict the rate of change in magnetic flux outside of the separatrix (equal to the back-emf loop voltage), and thus estimate a minimum θ-coil loop voltage required to form an FRC.
Chen, Ding-Jiang; Sun, Si-Yang; Jia, Ying-Na; Chen, Jia-Bo; Lü, Jun
2013-01-01
Based on the hydrological differences between point source (PS) and nonpoint source (NPS) pollution processes and the major influencing mechanism of in-stream retention processes, a bivariate statistical model was developed relating river phosphorus load to river water flow rate and temperature. Using the four model coefficients calibrated and validated from in-stream monitoring data, monthly phosphorus input loads to the river from PS and NPS can be easily determined by the model. Compared to current hydrological methods, this model takes the in-stream retention process and the upstream inflow term into consideration; it thus improves knowledge of phosphorus pollution processes and can meet the requirements of both district-based and watershed-based water quality management patterns. Using this model, the total phosphorus (TP) input load to the Changle River in Zhejiang Province was calculated. Results indicated that the annual total TP input load was (54.6 ± 11.9) t·a⁻¹ in 2004-2009, with upstream water inflow, PS, and NPS contributing 5% ± 1%, 12% ± 3%, and 83% ± 3%, respectively. The cumulative NPS TP input load during the high-flow periods in summer (i.e., June, July, August, and September) accounted for 50% ± 9% of the annual amount, increasing the algal bloom risk in downstream water bodies. The annual in-stream TP retention load was (4.5 ± 0.1) t·a⁻¹ and occupied 9% ± 2% of the total input load. The cumulative in-stream TP retention load during the summer periods (i.e., June-September) accounted for 55% ± 2% of the annual amount, indicating that the in-stream retention function plays an important role in seasonal TP transport and transformation processes. This bivariate statistical model requires only commonly available in-stream monitoring data (i.e., river phosphorus load, water flow rate, and temperature), with no special software knowledge required; it thus offers researchers and managers a cost-effective tool for
Combining molecular dynamics and an electrodiffusion model to calculate ion channel conductance.
Wilson, Michael A; Nguyen, Thuy Hien; Pohorille, Andrew
2014-12-14
Establishing the relation between the structures and functions of protein ion channels, which are protein assemblies that facilitate transmembrane ion transport through water-filled pores, is at the forefront of biological and medical sciences. A reliable way to determine whether our understanding of this relation is satisfactory is to reproduce the measured ionic conductance over a broad range of applied voltages. This can be done in molecular dynamics simulations by way of applying an external electric field to the system and counting the number of ions that traverse the channel per unit time. Since this approach is computationally very expensive we develop a markedly more efficient alternative in which molecular dynamics is combined with an electrodiffusion equation. This alternative approach applies if steady-state ion transport through channels can be described with sufficient accuracy by the one-dimensional diffusion equation in the potential given by the free energy profile and applied voltage. The theory refers only to line densities of ions in the channel and, therefore, avoids ambiguities related to determining the surface area of the channel near its endpoints or other procedures connecting the line and bulk ion densities. We apply the theory to a simple, model system based on the trichotoxin channel. We test the assumptions of the electrodiffusion equation, and determine the precision and consistency of the calculated conductance. We demonstrate that it is possible to calculate current/voltage dependence and accurately reconstruct the underlying (equilibrium) free energy profile, all from molecular dynamics simulations at a single voltage. The approach developed here applies to other channels that satisfy the conditions of the electrodiffusion equation. PMID:25494790
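Under the paper's stated assumption, the steady-state flux follows from the 1-D electrodiffusion (Smoluchowski) equation, which has the closed form J = D[c(0)e^{U(0)/kT} − c(L)e^{U(L)/kT}] / ∫₀ᴸ e^{U(x)/kT} dx. A minimal numerical sketch with a position-independent diffusion coefficient (an additional simplification; the function names are hypothetical):

```python
import math

def steady_state_flux(c0, cL, U, D, L, n=1000, kT=1.0):
    """Steady-state 1-D electrodiffusion flux through a channel of length L.

    U(x) is the total potential (free energy profile plus applied voltage),
    c0 and cL are the line densities at the two ends, D is a constant
    diffusion coefficient. The denominator integral uses the trapezoid rule.
    """
    dx = L / n
    integral = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0   # trapezoid-rule end weights
        integral += w * math.exp(U(i * dx) / kT) * dx
    return D * (c0 * math.exp(U(0.0) / kT)
                - cL * math.exp(U(L) / kT)) / integral
```

For a flat potential this reduces to Fick's law, J = D(c0 − cL)/L; a barrier in U(x) suppresses the flux through the exponential weight in the denominator, which is the mechanism the paper exploits to reconstruct the free energy profile from currents.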
Mathematical model and calculation of water-cooling efficiency in a film-filled cooling tower
NASA Astrophysics Data System (ADS)
Laptev, A. G.; Lapteva, E. A.
2016-10-01
Different approaches to simulation of momentum, mass, and energy transfer in packed beds are considered. The mathematical model of heat and mass transfer in a wetted packed bed for turbulent gas flow and laminar wave counter flow of the fluid film in sprinkler units of a water-cooling tower is presented. The packed bed is represented as the set of equivalent channels with correction to twisting. The idea put forward by P. Kapitsa on representation of waves on the interphase film surface as elements of the surface roughness in interaction with the gas flow is used. The temperature and moisture content profiles are found from the solution of differential equations of heat and mass transfer written for the equivalent channel with the volume heat and mass source. The equations for calculation of the average coefficients of heat emission and mass exchange in regular and irregular beds with different contact elements, as well as the expression for calculation of the average turbulent exchange coefficient are presented. The given formulas determine these coefficients for the known hydraulic resistance of the packed bed element. The results of solution of the system of equations are presented, and the water temperature profiles are shown for different sprinkler units in industrial water-cooling towers. The comparison with experimental data on thermal efficiency of the cooling tower is made; this allows one to determine the temperature of the cooled water at the output. The technical solutions on increasing the cooling tower performance by equalization of the air velocity profile at the input and creation of an additional phase contact region using irregular elements "Inzhekhim" are considered.
Model calculated global, regional and megacity premature mortality due to air pollution
NASA Astrophysics Data System (ADS)
Lelieveld, J.; Barlas, C.; Giannadaki, D.; Pozzer, A.
2013-07-01
Air pollution by fine particulate matter (PM2.5) and ozone (O3) has increased strongly with industrialization and urbanization. We estimate the premature mortality rates and the years of human life lost (YLL) caused by anthropogenic PM2.5 and O3 in 2005 for epidemiological regions defined by the World Health Organization (WHO). This is based upon high-resolution global model calculations that resolve urban and industrial regions in greater detail compared to previous work. Results indicate that 69% of the global population is exposed to an annual mean anthropogenic PM2.5 concentration of >10 μg m-3 (WHO guideline) and 33% to > 25 μg m-3 (EU directive). We applied an epidemiological health impact function and find that especially in large countries with extensive suburban and rural populations, air pollution-induced mortality rates have been underestimated given that previous studies largely focused on the urban environment. We calculate a global respiratory mortality of about 773 thousand/year (YLL ≈ 5.2 million/year), 186 thousand/year by lung cancer (YLL ≈ 1.7 million/year) and 2.0 million/year by cardiovascular disease (YLL ≈ 14.3 million/year). The global mean per capita mortality caused by air pollution is about 0.1% yr-1. The highest premature mortality rates are found in the Southeast Asia and Western Pacific regions (about 25% and 46% of the global rate, respectively) where more than a dozen of the most highly polluted megacities are located.
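Health impact functions of the kind applied above typically take a log-linear relative-risk form. The sketch below illustrates the arithmetic only; the coefficient, threshold, baseline rate, and example numbers are illustrative assumptions, not the parameters or results of the study:

```python
import math

def attributable_mortality(population, base_rate, conc, c0=7.5, beta=0.0064):
    """Excess deaths per year from a log-linear concentration-response
    function RR = exp(beta * (C - c0)) for concentrations C above a
    threshold c0 (ug/m3).  All parameter values here are illustrative."""
    rr = math.exp(beta * max(conc - c0, 0.0))
    attributable_fraction = (rr - 1.0) / rr
    return population * base_rate * attributable_fraction

# Hypothetical city of ten million, baseline cardiovascular mortality
# 0.004/yr, annual mean PM2.5 of 35 ug/m3:
excess = attributable_mortality(10e6, 0.004, 35.0)
```

The attributable fraction (RR - 1)/RR caps the excess at the baseline mortality, and concentrations at or below the threshold contribute nothing.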
Spectra for the A = 6 reactions calculated from a three-body resonance model
NASA Astrophysics Data System (ADS)
Paris, Mark W.; Hale, Gerald M.
2016-06-01
We develop a resonance model of the transition matrix for three-body breakup reactions of the A = 6 system and present calculations for the nucleon observed spectra, which are important for inertial confinement fusion and Big Bang nucleosynthesis (BBN). The model is motivated by the Faddeev approach where the form of the T matrix is written as a sum of the distinct Jacobi coordinate systems corresponding to particle configurations (α, n-n) and (n, n-α) to describe the final state. The structure in the spectra comes from the resonances of the two-body subsystems of the three-body final state, namely the singlet (T = 1) nucleon-nucleon (NN) anti-bound resonance, and the Nα resonances designated the ground state (Jπ = 3/2⁻) and first excited state (Jπ = 1/2⁻) of the A = 5 systems 5He and 5Li. These resonances are described in terms of single-level, single-channel R-matrix parameters that are taken from analyses of NN and Nα scattering data. While the resonance parameters are approximately charge symmetric, external charge-dependent effects are included in the penetrabilities, shifts, and hard-sphere phases, and in the level energies to account for internal Coulomb differences. The shapes of the resonance contributions to the spectrum are fixed by other, two-body data and the only adjustable parameters in the model are the combinatorial amplitudes for the compound system. These are adjusted to reproduce the observed nucleon spectra from measurements at the Omega and NIF facilities. We perform a simultaneous, least-squares fit of the tt neutron spectra and the 3He3He proton spectra. Using these amplitudes we make a prediction of the α spectra for both reactions at low energies. Significant differences in the tt and 3He3He spectra are due to Coulomb effects.
Zhang, Guozhi; Luo, Qingming; Zeng, Shaoqun; Liu, Qian
2008-02-01
A new whole-body computational phantom, the Visible Chinese Human (VCH), was developed using high-resolution transversal photographs of a Chinese adult male cadaver. Following the segmentation and tridimensional reconstruction, a voxel-based model that faithfully represented the average anatomical characteristics of the Chinese population was established for radiation dosimetry. The vascular system of VCH was fully preserved, and the cadaver specimen was processed in the standing posture. A total of 8,920 slices were obtained by continuous sectioning at 0.2 mm intervals, and 48 organs and tissues were segmented from the tomographic color images at 5440 x 4080 pixel resolution, corresponding to a voxel size of 0.1 x 0.1 x 0.2 mm3. The resulting VCH computational phantom, consisting of 230 x 120 x 892 voxels with a unit volume of 2 x 2 x 2 mm3, was ported into the Monte Carlo code MCNPX2.5 to calculate the conversion coefficients from kerma free-in-air to absorbed dose and to effective dose for external monoenergetic photon beams from 15 keV to 10 MeV under six idealized external irradiation geometries (anterior-posterior, posterior-anterior, left lateral, right lateral, rotational, and isotropic). Organ masses of the VCH model are fairly different from those of other human phantoms. Differences of up to 300% are observed between doses from ICRP 74 data and those of VIP-Man. Detailed information from the VCH model is able to improve the radiological datasets, particularly for the Chinese population, and provide insights into the research of various computational phantoms. PMID:18188046
Statistical equilibrium calculations for silicon in early-type model stellar atmospheres
NASA Technical Reports Server (NTRS)
Kamp, L. W.
1976-01-01
Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of our range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0-B5, luminosity classes III, IV, and V.
Austrian Carbon Calculator (ACC) - modelling soil carbon dynamics in Austrian soils
NASA Astrophysics Data System (ADS)
Sedy, Katrin; Freudenschuss, Alexandra; Zethner, Gerhard; Spiegel, Heide; Franko, Uwe; Gründling, Ralf; Xaver Hölzl, Franz; Preinstorfer, Claudia; Haslmayr, Hans Peter; Formayer, Herbert
2014-05-01
The project is funded by the Klima- und Energiefonds, Austrian Climate Research Programme (4th call). Authors: Katrin Sedy, Alexandra Freudenschuss, Gerhard Zethner (Environment Agency Austria), Heide Spiegel (Austrian Agency for Health and Food Safety), Uwe Franko, Ralf Gründling (Helmholtz Centre for Environmental Research). Climate change will affect plant productivity due to weather extremes. However, adverse effects could be diminished and satisfactory production levels maintained with proper soil conditions. To sustain and optimize the potential of agricultural land for plant productivity, it will be necessary to focus on preserving and increasing soil organic carbon (SOC). Carbon sequestration in agricultural soils is strongly influenced by management practice, and current management practices tend to accelerate carbon loss. Crop rotation, soil cultivation and the management of crop residues are very important measures influencing carbon dynamics and soil fertility. For the future it will be crucial to focus on practical measures to optimize SOC and to improve soil structure. To predict SOC turnover, the existing humus balance model "Carbon Candy Balance" was verified against results from Austrian long-term field experiments and field data of selected farms. The main aim of the project is thus to generate a carbon balancing toolbox that can be applied in different agricultural production regions to assess humus dynamics under agricultural management practices. The toolbox will allow the selection of specific regional input parameters for calculating the C-balance at field level. Farmers or other interested users can also apply their own field data to obtain the C-dynamics under certain management practices over the next 100 years. At regional level the impact of predefined changes in agricultural management
NASA Astrophysics Data System (ADS)
Slavinić, Petra; Cvetković, Marko
2016-01-01
The volume calculation of geological structures is one of the primary goals of interest when dealing with exploration or production of oil and gas in general. Most of those calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and those obtained from cell-based models. The comparison is illustrated with four models: dome (half of a ball/sphere), elongated anticline, stratigraphic trap due to lateral facies change, and faulted anticline trap. Results show that Simpson's and trapezoidal rules give a very accurate volume calculation even with a few inputs (isopach areas, i.e., ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, i.e., less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
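The two quadrature rules compared above take equally spaced isopach areas (contour areas from base to crest) and a contour interval. A minimal sketch with hypothetical function names:

```python
def volume_trapezoidal(areas, h):
    """Bulk volume from equally spaced isopach areas (base to crest),
    contour interval h, by the trapezoidal rule."""
    return h * (areas[0] / 2.0 + sum(areas[1:-1]) + areas[-1] / 2.0)

def volume_simpson(areas, h):
    """Simpson's rule over the same ordinates; needs an odd count."""
    if len(areas) % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of ordinates")
    return h / 3.0 * (areas[0] + 4 * sum(areas[1:-1:2])
                      + 2 * sum(areas[2:-1:2]) + areas[-1])
```

For a structure whose area varies quadratically with depth (e.g. A(z) = 1 - z^2, a paraboloid), Simpson's rule reproduces the exact volume, which is consistent with the paper's observation that a few ordinates already give very accurate volumes.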
NASA Astrophysics Data System (ADS)
Kase, Yuki; Yamashita, Haruo; Sakama, Makoto; Mizota, Manabu; Maeda, Yoshikazu; Tameshige, Yuji; Murayama, Shigeyuki
2015-08-01
In the development of an external radiotherapy treatment planning system, the output factor (OPF) is an important value for the monitor unit calculations. We developed a proton OPF calculation model with consideration for the collimator aperture edge to account for the dependence of the OPF on the collimator aperture and distance in proton beam therapy. Five parameters in the model were obtained by fitting with OPFs measured by a pinpoint chamber with the circular radiation fields of various field radii and collimator distances. The OPF model calculation using the fitted model parameters could explain the measurement results to within 1.6% error in typical proton treatment beams with 6- and 12 cm SOBP widths through a range shifter and a circular aperture more than 10.6 mm in radius. The calibration depth dependences of the model parameters were approximated by linear or quadratic functions. The semi-analytical OPF model calculation was tested with various MLC aperture shapes that included circles of various sizes as well as a rectangle, parallelogram, and L-shape for an intermediate proton treatment beam condition. The pre-calculated OPFs agreed well with the measured values, to within 2.7% error up to 620 mm in the collimator distance, though the maximum difference was 5.1% in the case of the largest collimator distance of 740 mm. The OPF calculation model would allow more accurate monitor unit calculations for therapeutic proton beams within the expected range of collimator conditions in clinical use.
Model calculations of the underwater noise of breaking waves and comparison with experiment.
Deane, Grant B; Stokes, M Dale
2010-06-01
A model for the underwater noise of whitecaps is presented and compared with the noise measured beneath plunging seawater laboratory waves. The noise from a few hundred hertz up to at least 80 kHz is assumed to be due to the pulses of sound radiated by bubbles formed within a breaking wave crest. The total noise level and its dependence on frequency are a function of bubble creation rate, bubble damping factor and an 'acoustical skin depth' associated with scattering and absorption by the bubble plume formed within the crest. Calculation of breaking wave noise is made using estimates of these factors, which are made independently of the noise itself. The results are in good agreement with wave noise measured in a laboratory flume when compensated for reverberation. A closed-form, analytical expression for the wave noise is presented, which shows a -11/6 power-law dependence of noise level on frequency, in good agreement with the -10/6 scaling law commonly observed in the open ocean.
Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on Nakajima–Zwanzig–Mori time-convolution (TC) and the other on the Tokuyama–Mori time-convolutionless (TCL) formulations provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so called “memory kernel” or “generator,” going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green’s function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
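The construction the abstract alludes to can be written compactly (standard TCL notation, ours rather than the authors'): the reduced density matrix σ(t) evolves under a time-local generator built from the reduced propagator alone,

```latex
\dot{\sigma}(t) = \mathcal{G}(t)\,\sigma(t),
\qquad \sigma(t) = \mathcal{U}(t)\,\sigma(0),
\qquad \mathcal{G}(t) = \dot{\mathcal{U}}(t)\,\mathcal{U}^{-1}(t),
```

so only the reduced-space propagator U(t), obtainable from system observables, and its inverse are required, rather than the inversion of a super-operator in the full Hilbert space that the canonical TCL route demands.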
Review of Flow Models in GOTH_SNF for Spent Fuel MCO Calculations
John R. Kirkpatrick; Chris A. Dahl
2003-09-01
The present report is one of a series of three. The series provides an independent technical review of certain aspects of the GOTH_SNF code that is used for accident analysis of the multicanister overpack (MCO) that is proposed for permanent storage of spent nuclear fuel in the planned repository at Yucca Mountain, Nevada. The work documented in the present report and its two companions was done under the auspices of the National Spent Nuclear Fuel Program. The other reports in the series are DOE/SNF/REP-088 and DOE/SNF/REP-089. This report analyzes the model for flow through the fuel elements that is documented in the SNF report titled MCO Work Book GOTH_SNF Input Data (Reference 1). Reference 1 combined the multiple parallel paths through which the hot gases flow vertically inside the MCO into simpler paths. This report examines the assumptions used to combine the paths and concludes that there are other ways to combine the paths than the one used by GOTH_SNF. Two alternatives are analyzed, and the results are compared to those from the model used in GOTH_SNF. Both alternatives produced a higher pressure drop from the top to the bottom of the flow channel for a given flow velocity than did the approximation used in GOTH_SNF. Therefore, for a given pressure drop, the flow velocity given by the GOTH_SNF approximation will be lower than that from either of the two alternatives. The practical consequences of the differences in flow rate are not obvious. One way to evaluate the consequences is to repeat an important MCO calculation on GOTH_SNF using an altered hydraulic diameter (the one that produces the highest pressure drop for a given flow velocity) and see if the conclusions about the safety of the MCO are changed.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between loops in the graphs. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same
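The recursive top-down combination of child cut sets described above can be sketched in a few lines. This is a generic illustration of the technique, not the CUTSETS code itself (which is object-oriented, handles digraphs, and supports size limits):

```python
def minimal_cut_sets(tree, node):
    """Minimal cut sets of a fault tree by recursive top-down parsing.
    `tree` maps a gate name to ("AND" | "OR", [children]); names absent
    from `tree` are basic events."""
    if node not in tree:                        # leaf: a basic failure event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [minimal_cut_sets(tree, c) for c in children]
    if gate == "OR":                            # any child's cut set suffices
        sets = [s for cs in child_sets for s in cs]
    else:                                       # AND: one cut set from each child
        sets = [frozenset()]
        for cs in child_sets:
            sets = [a | b for a in sets for b in cs]
    sets = list(dict.fromkeys(sets))            # drop duplicates
    # minimality: discard any cut set that strictly contains another
    return [s for s in sets if not any(t < s for t in sets)]
```

For example, with TOP = AND(G1, B3) and G1 = OR(B1, B2), the minimal cut sets are {B1, B3} and {B2, B3}; the final filter is what removes non-minimal supersets such as {B1, B2} when {B1} alone already causes the top event.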
NASA Technical Reports Server (NTRS)
ROBERT
1922-01-01
This report presents an attempt to develop a law which will permit the use of results obtained on small models in a tunnel for the calculation of full-sized airplanes, or, if it exists, a law of similitude relating the air forces on a full-sized airplane to those on a reduced-scale model.
Yang, Jie; Tang, Grace; Zhang, Pengpeng; Hunt, Margie; Lim, Seng B.; LoSasso, Thomas; Mageras, Gig
2016-01-01
Hypofractionated treatments generally increase the complexity of a treatment plan due to the more stringent constraints of normal tissues and target coverage. As a result, treatment plans contain more modulated MLC motions that may require extra efforts for accurate dose calculation. This study explores methods to minimize the differences between in-house dose calculation and actual delivery of hypofractionated volumetric-modulated arc therapy (VMAT), by focusing on arc approximation and tongue-and-groove (TG) modeling. For dose calculation, the continuous delivery arc is typically approximated by a series of static beams with an angular spacing of 2°. This causes significant error when there is large MLC movement from one beam to the next. While increasing the number of beams will minimize the dose error, calculation time will increase significantly. We propose a solution by inserting two additional apertures at each of the beam angles for dose calculation. These additional apertures were interpolated at two-thirds of a degree before and after each beam. Effectively, there were a total of three MLC apertures at each beam angle, and the weighted average fluence from the three apertures was used for calculation. Because the number of beams was kept the same, calculation time was only increased by about 6%–8%. For a lung plan, areas of high local dose differences (> 4%) between film measurement and calculation with one aperture were significantly reduced in calculation with three apertures. Ion chamber measurement also showed similar results, where improvements were seen with calculations using additional apertures. Dose calculation accuracy was further improved for TG modeling by developing a sampling method for the beam fluence matrix. Single-element point sampling for fluence transmitted through the MLC was used for our fluence matrix with 1 mm resolution. For the Varian HDMLC, grid alignment can cause fluence sampling error. To correct this, transmission volume averaging was
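The three-aperture idea can be sketched for a single leaf pair in one dimension. Everything here is an illustrative assumption: leaf transmission is ignored, the apertures are weighted equally, and the interpolation fraction of 1/3 follows from placing the extra apertures two-thirds of a degree from a beam with 2° spacing:

```python
import numpy as np

def aperture_fluence(left, right, grid):
    """1-D open-field fluence for a single leaf pair: 1 between the leaf
    tips, 0 under the leaves (leaf transmission ignored for simplicity)."""
    return ((grid >= left) & (grid <= right)).astype(float)

def three_aperture_fluence(prev_ap, this_ap, next_ap, grid, frac=1.0 / 3.0):
    """Average fluence at one control point from three apertures: the
    planned (left, right) pair plus two pairs linearly interpolated a
    fraction `frac` of the way toward the neighbouring control points."""
    lerp = lambda a, b: a + frac * (b - a)
    before = (lerp(this_ap[0], prev_ap[0]), lerp(this_ap[1], prev_ap[1]))
    after = (lerp(this_ap[0], next_ap[0]), lerp(this_ap[1], next_ap[1]))
    stack = [aperture_fluence(l, r, grid) for l, r in (before, this_ap, after)]
    return sum(stack) / 3.0
```

The averaged fluence smooths the abrupt field-edge jumps between control points: points swept by the moving leaves between beams receive fractional fluence instead of all-or-nothing.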
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Calculation of fuel economy values for... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later...
A Comparative Study of Power and Sample Size Calculations for Multivariate General Linear Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2003-01-01
Repeated measures and longitudinal studies arise often in social and behavioral science research. During the planning stage of such studies, the calculations of sample size are of particular interest to the investigators and should be an integral part of the research projects. In this article, we consider the power and sample size calculations for…
42 CFR 425.604 - Calculation of savings under the one-sided model.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... For each performance year, CMS determines whether the estimated average per capita Medicare... calculate an ACO's per capita expenditures for each performance year. (6) Calculations of the ACO's... payment, the ACO's average per capita Medicare expenditures for the performance year must be below...
42 CFR 425.604 - Calculation of savings under the one-sided model.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... For each performance year, CMS determines whether the estimated average per capita Medicare... calculate an ACO's per capita expenditures for each performance year. (6) Calculations of the ACO's... payment, the ACO's average per capita Medicare expenditures for the performance year must be below...
42 CFR 425.604 - Calculation of savings under the one-sided model.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... For each performance year, CMS determines whether the estimated average per capita Medicare... calculate an ACO's per capita expenditures for each performance year. (6) Calculations of the ACO's... payment, the ACO's average per capita Medicare expenditures for the performance year must be below...
Tomasko, D.
1985-11-01
Sensitivity studies were performed for the Sandia Strategic Petroleum Reserve (SPR) thermal model. Analyses of the results obtained indicate that the following models are essential for correct temperature prediction: a counter-flow heat exchanger model, a mixing model, and a model for interfacial heat transfer between the saturated brine and the cavern crude oil. The thermal model was found to be fairly insensitive to the boundary conditions used at the extremities of the calculational mesh, as well as to enhanced heat transfer at the bottom of the cavern due to convection across the SPR porous media. The thermal calculations were most sensitive to variations in the thermal conductivity of the surrounding salt and the initial temperatures of the fluid in the caverns. Effects caused by uncertainties in the initial temperature of the brine were reduced by using a thermal log performed near the onset of oil fill. 7 refs., 88 figs., 4 tabs.
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Rood, Richard B.; Jackman, Charles H.; Weaver, Clark J.
1994-01-01
Two-dimensional (zonally averaged) photochemical models are commonly used for calculations of ozone changes due to various perturbations. These include calculating the ozone change expected as a result of change in the lower stratospheric composition due to the exhaust of a fleet of supersonic aircraft flying in the lower stratosphere. However, zonal asymmetries are anticipated to be important to this sort of calculation. The aircraft are expected to be restricted from flying over land at supersonic speed due to sonic booms, thus the pollutant source will not be zonally symmetric. There is loss of pollutant through stratosphere/troposphere exchange, but these processes are spatially and temporally inhomogeneous. Asymmetry in the pollutant distribution contributes to the uncertainty in the ozone changes calculated with two dimensional models. Pollutant distributions for integrations of at least 1 year of continuous pollutant emissions along flight corridors are calculated using a three dimensional chemistry and transport model. These distributions indicate the importance of asymmetry in the pollutant distributions to evaluation of the impact of stratospheric aircraft on ozone. The implications of such pollutant asymmetries to assessment calculations are discussed, considering both homogeneous and heterogeneous reactions.
Model-based calculations of off-axis ratio of conic beams for a dedicated 6 MV radiosurgery unit
Yang, J. N.; Ding, X.; Du, W.; Pino, R.
2010-10-15
Purpose: Because the small-radius photon beams shaped by cones in stereotactic radiosurgery (SRS) lack lateral electronic equilibrium, and because of a detector's finite cross section, direct experimental measurement of dosimetric data for these beams can be subject to large uncertainties. As the dose calculation accuracy of a treatment planning system largely depends on how well the dosimetric data are measured during the machine's commissioning, there is a critical need for an independent method to validate measured results. Therefore, the authors studied model-based calculation as an approach to validate measured off-axis ratios (OARs). Methods: The authors previously used a two-component analytical model to calculate central axis dose and associated dosimetric data (e.g., scatter factors and tissue-maximum ratio) in a water phantom and found excellent agreement between the calculated and the measured central axis doses for small 6 MV SRS conic beams. The model was based on that of Nizin and Mooij ["An approximation of central-axis absorbed dose in narrow photon beams," Med. Phys. 24, 1775-1780 (1997)] but was extended to account for apparent attenuation, spectral differences between broad and narrow beams, and the need for stricter scatter dose calculations for clinical beams. In this study, the authors applied Clarkson integration to this model to calculate OARs for conic beams. OARs were calculated for selected cones with radii from 0.2 to 1.0 cm. To allow comparisons, the authors also directly measured OARs using stereotactic diode (SFD), microchamber, and film dosimetry techniques. The calculated results were machine-specific and independent of direct measurement data for these beams. Results: For these conic beams, the calculated OARs were in excellent agreement with the data measured using an SFD. The discrepancies in radii and in 80%-20% penumbra were each within 0.01 cm. Using SFD-measured OARs as the reference data, the authors found that the
NASA Astrophysics Data System (ADS)
Nesterenok, A. V.; Naidenov, V. O.
2015-12-01
The interaction of primary cosmic rays with the Earth's atmosphere is investigated using the simulation toolkit GEANT4. Two reference lists of physical processes, QGSP_BIC_HP and FTFP_BERT_HP, are used in the simulations of the cosmic ray cascade in the atmosphere. The cosmic ray neutron fluxes are calculated for a mean level of solar activity, high geomagnetic latitudes, and sea level. The calculated fluxes are compared with the published results of other analogous simulations and with experimental data.
Howard, David M; Kearfott, Kimberlee J; Wilderman, Scott J; Dewaraja, Yuni K
2011-10-01
High computational requirements restrict the use of Monte Carlo algorithms for dose estimation in a clinical setting, despite the fact that they are considered more accurate than traditional methods. The goal of this study was to compare mean tumor absorbed dose estimates using the unit density sphere model incorporated in OLINDA with previously reported dose estimates from Monte Carlo simulations using the dose planning method (DPM) particle transport algorithm. The dataset (57 tumors, 19 lymphoma patients who underwent SPECT/CT imaging during I-131 radioimmunotherapy) included tumors of varying size, shape, and contrast. OLINDA calculations were first carried out using the baseline tumor volume and residence time from SPECT/CT imaging during 6 days post-tracer and 8 days post-therapy. Next, the OLINDA calculation was split over multiple time periods and summed to get the total dose, which accounted for the changes in tumor size. Results from the second calculation were compared with results determined by coupling SPECT/CT images with DPM Monte Carlo algorithms. Results from the OLINDA calculation accounting for changes in tumor size were almost always higher (median 22%, range -1%-68%) than the results from OLINDA using the baseline tumor volume because of tumor shrinkage. There was good agreement (median -5%, range -13%-2%) between the OLINDA results and the self-dose component from Monte Carlo calculations, indicating that tumor shape effects are a minor source of error when using the sphere model. However, because the sphere model ignores cross-irradiation, the OLINDA calculation significantly underestimated (median 14%, range 2%-31%) the total tumor absorbed dose compared with Monte Carlo. These results show that when the quantity of interest is the mean tumor absorbed dose, the unit density sphere model is a practical alternative to Monte Carlo for some applications. For applications requiring higher accuracy, computer-intensive Monte Carlo calculation is
Bondarenko, V A; Mitrikas, V G
2007-01-01
The model of a geometrical human body phantom developed for calculating the shielding functions of representative points of the body organs and systems is similar to the anthropomorphic phantom. This form of phantom can be integrated with the shielding model of the ISS Russian orbital segment to enable analysis of the radiation loading of crewmembers in different compartments of the vehicle. Calculation of doses absorbed by the body systems in terms of the representative points makes it clear that doses essentially depend on the phantom's spatial orientation (eye direction). It also enables evaluation of the absorbed dose from the shielding functions as the mean over the representative points and phantom orientation.
NASA Technical Reports Server (NTRS)
Decreau, P. M. E.; Lemaire, J.; Chappell, C. R.; Waite, J. H., Jr.
1986-01-01
The paper analyzes 28 plasmapause crossings made by the DE1 satellite in the night local time sector (from January to March 1982). Different signatures obtained by the Retarding Ion Mass Spectrometer instrument have been used for this analysis. The observed plasmapause positions (Lpp) have been organized as a function of geomagnetic indices. They are compared with the empirical relationship deduced by Carpenter and Parks (1973) from whistler observations. Moreover, the dependence of Lpp on Kp has been inferred from model calculations using Kp-dependent electric and magnetic fields derived from McIlwain's (1974) E3H electric field model and M2 magnetic field model, respectively. Stationary models, as well as time-dependent ones, have been used to determine the positions of the plasmapause. The results of the model calculations are compared to the observations.
NASA Technical Reports Server (NTRS)
Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard
1991-01-01
Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and Diendorfer-Uman (DU) models with the channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS model. Also, the DU model is theoretically extended to include an arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
NASA Astrophysics Data System (ADS)
Schartmann, M.; Meisenheimer, K.; Camenzind, M.; Wolf, S.; Henning, Th.
2005-07-01
We explore physically self-consistent models of dusty molecular tori in Active Galactic Nuclei (AGN) with the goal of interpreting VLTI observations and fitting high resolution mid-IR spectral energy distributions (SEDs). The input dust distribution is analytically calculated by assuming hydrostatic equilibrium between pressure forces - due to the turbulent motion of the gas clouds - and gravitational and centrifugal forces as a result of the contribution of the nuclear stellar distribution and the central black hole. For a fully three-dimensional treatment of the radiative transfer problem through the tori we employ the Monte Carlo code MC3D. We find that in homogeneous dust distributions the observed mid-infrared emission is dominated by the inner funnel of the torus, even when observing along the equatorial plane. Therefore, the stratification of the distribution of dust grains - both in terms of size and composition - cannot be neglected. In the current study we only include the effect of different sublimation radii which significantly alters the SED in comparison to models that assume an average dust grain property with a common sublimation radius, and suppresses the silicate emission feature at 9.7 μm. In this way we are able to fit the mean SED of both type I and type II AGN very well. Our fit of special objects for which high angular resolution observations (≤0.3″) are available indicates that the hottest dust in NGC 1068 reaches the sublimation temperature while the maximum dust temperature in the low-luminosity AGN Circinus falls short of 1000 K.
Nuclear shell model calculations of neutralino-nucleus cross sections for 29Si and 73Ge
Ressell, M. T.; Aufderheide, M. B.; Bloom, S. D.; Griest, K.; Mathews, G. J.; Resler, D. A. (Institute of Geophysics and Planetary Physics, Lawrence Livermore National Laboratory, Livermore, California 94550; Physics Department, University of California, San Diego, La Jolla, California 92093; N-Division/Physical Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550)
1993-12-15
We present the results of detailed nuclear shell model calculations of the spin-dependent elastic cross section for neutralinos scattering from 29Si and 73Ge. The calculations were performed in large model spaces which adequately describe the configuration mixing in these two nuclei. As tests of the computed nuclear wave functions we have calculated several nuclear observables, compared them with the measured values, and found good agreement. In the limit of zero momentum transfer we find scattering matrix elements in agreement with previous estimates for 29Si but significantly different from previous work for 73Ge. A modest quenching, in accord with shell model studies of other heavy nuclei, has been included to bring the measured and calculated values of the magnetic moment for 73Ge into agreement. Even with this quenching, the calculated scattering rate is roughly a factor of 2 higher than the best previous estimates; without quenching, the rate is a factor of 4 higher. This implies a higher sensitivity for germanium dark matter detectors. We also investigate the role of finite momentum transfer upon the scattering response for both nuclei and find that it can significantly change the expected rates. We close with a brief discussion of the effects of some of the non-nuclear uncertainties upon the matrix elements.
Berg, Michael; Luzi, Samuel; Trang, Pham Thi Kim; Viet, Pham Hung; Giger, Walter; Stüben, Doris
2006-09-01
Arsenic removal efficiencies of 43 household sand filters were studied in rural areas of the Red River Delta in Vietnam. Simultaneously, raw groundwater from the same households and an additional 31 tubewells was sampled to investigate arsenic coprecipitation with hydrous ferric iron from solution, i.e., without contact to sand surfaces. From the groundwaters containing 10-382 µg/L As, <0.1-48 mg/L Fe, <0.01-3.7 mg/L P, and 0.05-3.3 mg/L Mn, similar average removal rates of 80% and 76% were found for the sand filter and coprecipitation experiments, respectively. The filtering process requires only a few minutes. Removal efficiencies of Fe, phosphate, and Mn were >99%, 90%, and 71%, respectively. The concentration of dissolved iron in groundwater was the decisive factor for the removal of arsenic. Residual arsenic levels below 50 µg/L were achieved by 90% of the studied sand filters, and 40% were even below 10 µg/L. Fe/As ratios of ≥50 or ≥250 were required to ensure arsenic removal to levels below 50 or 10 µg/L, respectively. Phosphate concentrations >2.5 mg P/L slightly hampered the sand filter and coprecipitation efficiencies. Interestingly, the overall arsenic elimination was higher than predicted from model calculations based on sorption constants determined from coprecipitation experiments with artificial groundwater. This observation is assumed to result from As(III) oxidation involving Mn, microorganisms, and possibly dissolved organic matter present in the natural groundwaters. Clear evidence of lowered arsenic burden for people consuming sand-filtered water is demonstrated from hair analyses. The investigated sand filters proved to operate quickly and robustly for a broad range of groundwater compositions and are thus also a viable option for mitigation in other arsenic-affected regions. An estimation conducted for Bangladesh indicates that a median residual level of 25 µg/L arsenic could be reached in 84% of the polluted
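The empirical Fe/As thresholds reported above can be encoded directly. This is only a lookup of the study's rule of thumb, not a sorption model; the unit handling (Fe in mg/L, As in µg/L, compared as a mass ratio) is an assumption stated in the code.

```python
# Encodes only the two empirical thresholds reported above: Fe/As mass
# ratios of >= 50 and >= 250 were needed for residual arsenic below
# 50 and 10 ug/L, respectively. This is a rule-of-thumb lookup, not a
# sorption model; Fe is taken in mg/L and As in ug/L.

def residual_as_ceiling(fe_mg_per_l, as_ug_per_l):
    """Residual-As ceiling (ug/L) suggested by the Fe/As mass ratio."""
    ratio = (fe_mg_per_l * 1000.0) / as_ug_per_l  # convert Fe to ug/L
    if ratio >= 250.0:
        return 10
    if ratio >= 50.0:
        return 50
    return None  # ratio too low for a reliable prediction

print(residual_as_ceiling(25.0, 100.0))  # ratio 250 -> 10
```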
Douillard, J M; Henry, M
2003-07-15
A very simple route to calculation of the surface energy of solids is proposed because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained with ice, if one compares with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.
Microscopic optical model calculations of 4He, 12C-nucleus absorption cross sections
NASA Technical Reports Server (NTRS)
Dubey, R. R.; Khandelwal, G. S.; Cucinotta, F. A.; Wilson, J. W.
1996-01-01
Calculations of absorption cross sections using a microscopic first-order optical potential for heavy-ion scattering are compared with experiments. In-medium nucleon-nucleon (NN) cross sections were used to calculate the two-body scattering amplitude. A medium-modified first-order optical potential was obtained for heavy-ion scattering using the in-medium two-body scattering amplitude. A partial wave expansion of the Lippmann-Schwinger equation in momentum space was used to calculate the absorption cross sections for various systems. The results are presented for the absorption cross sections for 4He-nucleus and 12C-nucleus scattering systems and are compared with the experimental values in the energy range 18-83A MeV. The use of the in-medium NN cross sections is found to result in significant reduction of the free space absorption cross sections in agreement with experiment.
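The last step of such a partial-wave calculation reduces to a standard formula. The sketch below uses made-up S-matrix moduli; in the paper the S_l follow from the medium-modified first-order optical potential, which is not reproduced here.

```python
import math

# Generic partial-wave absorption cross section,
#   sigma_abs = (pi / k^2) * sum_l (2l + 1) * (1 - |S_l|^2),
# shown here with made-up S-matrix moduli. The paper obtains S_l from a
# medium-modified first-order optical potential (not reproduced here).

def absorption_cross_section(k_fm, s_moduli):
    """k in fm^-1; s_moduli lists |S_l| for l = 0, 1, ...; result in fm^2."""
    return (math.pi / k_fm**2) * sum(
        (2 * l + 1) * (1.0 - s * s) for l, s in enumerate(s_moduli)
    )

# strong absorption at low l, transparency at high l (illustrative values)
sigma_fm2 = absorption_cross_section(k_fm=1.0, s_moduli=[0.1, 0.2, 0.5, 0.9, 1.0])
sigma_mb = sigma_fm2 * 10.0  # 1 fm^2 = 10 mb
```

A fully transparent channel (|S_l| = 1) contributes nothing, so in-medium modifications that push the |S_l| toward unity reduce the absorption cross section, consistent with the reduction relative to free space reported above.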
NASA Technical Reports Server (NTRS)
Lim, J. T.; Raper, C. D., Jr.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)
1989-01-01
A simple mathematical model for calculating the concentration of mobile carbon skeletons in the shoot of soya bean plants [Glycine max (L.) Merrill cv. Ransom] was built to examine the suitability of measured net photosynthetic rates (PN) for calculation of saccharide flux into the plant. The results suggest that either measurement of instantaneous PN overestimated saccharide influx or respiration rates utilized in the model were underestimated. If neither of these is the case, end-product inhibition of photosynthesis or waste respiration through the alternative pathway should be included in modelling of CH2O influx or efflux; and even if either of these is the case, the model output at a low coefficient of leaf activity indicates that PN still may be controlled by either end-product inhibition or alternative respiration.
Model Comparisons For Space Solar Cell End-Of-Life Calculations
NASA Astrophysics Data System (ADS)
Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel
2011-10-01
Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single and multijunction solar cell technologies. Two general semi-empirical approaches were used to generate these EOL calculation results: 1) the JPL equivalent fluence (EQFLUX) and 2) the NRL displacement damage dose (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.
Chen, S. Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.
1991-01-01
The SOLID computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding by clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a database of approximately 500 radionuclides.
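One of the features listed above, shielding by a clean soil cover, can be sketched minimally as exponential attenuation. The real model uses buildup factors and depth-distributed sources; the mass attenuation coefficient below is an assumed, illustrative number, not a SOLID database value.

```python
import math

# Minimal sketch of one SOLID-like feature: reduction of the external
# dose rate by a clean soil cover, modeled as narrow-beam exponential
# attenuation. The real code uses buildup factors and depth-distributed
# sources; the mu/rho value used below is an assumed, illustrative number.

def covered_dose(dose_uncovered, mu_rho_cm2_g, density_g_cm3, cover_cm):
    """Dose after shielding by a clean cover layer of given thickness."""
    return dose_uncovered * math.exp(-mu_rho_cm2_g * density_g_cm3 * cover_cm)

# ~0.662 MeV photons in soil: mu/rho on the order of 0.08 cm^2/g (assumed)
shielded = covered_dose(1.0, 0.078, 1.6, 15.0)
```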
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values §...
Technology Transfer Automated Retrieval System (TEKTRAN)
Soil heat flux at the surface (G0) is strongly influenced by whether the soil is shaded or sunlit, and therefore can have large spatial variability for incomplete vegetation cover, such as across the interrows of row crops. Most practical soil-plant-atmosphere energy balance models calculate G0 as a...
FDTD calculations of SAR for child voxel models in different postures between 10 MHz and 3 GHz.
Findlay, R P; Lee, A-K; Dimbylow, P J
2009-08-01
Calculations of specific energy absorption rate (SAR) have been performed on the rescaled NORMAN 7-y-old voxel model and the Electronics and Telecommunications Research Institute (ETRI) child 7-y-old voxel model in the standing arms down, arms up and sitting postures. These calculations were for plane-wave exposure under isolated and grounded conditions between 10 MHz and 3 GHz. It was found that there was little difference at each resonant frequency between the whole-body averaged SAR values calculated for the NORMAN and ETRI 7-y-old models for each of the postures studied. However, when compared with the arms down posture, raising the arms increased the SAR by up to 25%. Electric field values required to produce the International Commission on Non-Ionizing Radiation Protection and Institute of Electrical and Electronic Engineers public basic restriction were calculated, and compared with reference levels for the different child models and postures. These showed that, under certain worst-case exposure conditions, the reference levels may not be conservative.
Shell model calculations in the lead region: 205Hg, 205Tl, 211Po, and 211Bi
Silvestre-Brac, B.; Boisson, J.P.
1981-08-01
Exact shell model calculations for nuclei consisting of three nonidentical particles outside the 208Pb closed-shell core have been performed using a basis that contains correlated pairs. Two kinds of effective interactions are tested and the results are compared with experiment. The possibility of high-spin isomeric states is suggested for the nuclei studied.
Jang Siyoung; Vassiliev, Oleg N.; Liu, H. Helen; Mohan, Radhe; Siebers, Jeffrey V.
2006-03-15
A multileaf collimator (MLC) model, 'MATMLC', was developed to simulate MLCs for Monte Carlo (MC) dose calculations of intensity-modulated radiation therapy (IMRT). This model describes MLCs using matrices of regions, each of which can be independently defined for its material and geometry, allowing flexibility in simulating MLCs from various manufacturers. The free parameters relevant to the dose calculations with this MLC model included MLC leaf density, interleaf air gap, and leaf geometry. To commission the MLC model and its free parameters for the Varian Millennium MLC-120 (Varian Oncology Systems, Palo Alto, CA), we used the following leaf patterns: (1) MLC-blocked fields to test the effects of leaf transmission and leakage; (2) picket-fence fields to test the effects of the interleaf air gap and tongue-groove design; and (3) abutting-gap fields to test the effects of rounded leaf ends. Transmission ratios and intensity maps for these leaf patterns were calculated with various sets of modeling parameters to determine their dosimetric effects, sensitivities, and their optimal combinations to give the closest agreement with measured results. Upon commissioning the MLC model, we computed dose distributions for clinical IMRT plans using the MC system and verified the results with those from ion chamber and thermoluminescent dosimeter measurements in water phantoms and anthropomorphic phantoms. This study showed that the MLC transmission ratios were strongly dependent on both leaf density and the interleaf air gap. The effect of interleaf air gap and tongue-groove geometry can be determined most effectively through fence-type MLC patterns. Using the commissioned MLC model, we found that the calculated dose from the MC system agreed with the measured data within clinically acceptable criteria from low- to high-dose regions, showing that the model is acceptable for clinical applications.
Sugiyama, A.; Nakayama, T.; Kato, M.; Maruyama, Y.
1997-08-01
A two-dimensional rate equation model, taking into consideration the transverse absorption loss of pump laser power, is proposed to evaluate the characteristics of a dye laser amplifier with a large input laser beam diameter pumped by high-average-power copper vapor lasers. The calculations are in good agreement with the measurements taken with a Rhodamine 6G dye, and the model can be used for evaluation of the dye concentration at any wavelength. © 1997 Optical Society of America
NASA Technical Reports Server (NTRS)
Kamaratos, E.
1985-01-01
A statistical model, the local plasma approximation, is considered for calculating the logarithmic mean excitation energy for the stopping power of chemically bound particles, taking chemical bonding into consideration. This statistical model is applied to molecular hydrogen and leads to results that suggest a value for the logarithmic mean excitation energy of molecular hydrogen that is larger than the accepted experimental and theoretical values.
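The local plasma approximation reduces the mean excitation energy to a density-weighted average of the local plasma frequency. The sketch below is generic, not the paper's calculation: a hydrogen-like 1s density stands in for the molecular electron density, and gamma = sqrt(2) is the usual Lindhard scaling factor.

```python
import math

# Sketch of the local plasma approximation (LPA) for the mean excitation
# energy I:  ln I = (1/Z) * Int n(r) ln(gamma * hbar*omega_p(r)) dV,
# with omega_p the local plasma frequency. A hydrogen-like 1s density is
# used as a stand-in for the molecular density (an assumption; the paper
# treats molecular hydrogen with bonding effects). Atomic units inside,
# result in eV.

def mean_excitation_energy_ev(z=1.0, a=1.0, gamma=math.sqrt(2.0),
                              r_max=30.0, steps=20000):
    hartree_ev = 27.2114
    dr = r_max / steps
    log_sum = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr                                 # midpoint rule
        n = z / (math.pi * a**3) * math.exp(-2.0 * r / a)  # 1s density (a.u.)
        hw_p = math.sqrt(4.0 * math.pi * n)                # hbar*omega_p (a.u.)
        log_sum += n * math.log(gamma * hw_p * hartree_ev) * 4.0 * math.pi * r * r * dr
    return math.exp(log_sum / z)

i_ev = mean_excitation_energy_ev()  # on the order of 15-20 eV for this density
```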
Assessment Of Semi-Empirical Dynamic Stall Models For Turboprop Stall Calculations
NASA Technical Reports Server (NTRS)
Kaza, K. R. V.; Reddy, T. S. R.
1989-01-01
Report presents comparison of stall-flutter responses obtained from three semiempirical dynamic stall models. Part of effort to develop models for stall-flutter analysis of highly loaded propellers (advanced turboprops). Available models of dynamic stall applied to simple structural models to study extent of validity and to select appropriate model for application to advanced turboprops. Conclusion of study is that operating environment of advanced turboprop favors conditions of light stall, in which loads induced by vortices are not severe.
Empirical rate equation model and rate calculations of hydrogen generation for Hanford tank waste
HU, T.A.
1999-07-13
Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of two to three.
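The three-term structure described above can be sketched as a sum of contributions. The functional forms (an Arrhenius thermal term, a dose-rate-proportional radiolysis term, an area-proportional corrosion term) and every coefficient are placeholders, not the actual Hanford correlations.

```python
import math

# Hedged sketch of the rate-equation structure described above: total
# hydrogen generation as the sum of a thermal (chemical) term, a
# radiolysis term, and a corrosion term. The functional forms and all
# coefficients are placeholders, not the actual Hanford correlations.

def h2_generation_rate(temp_k, dose_rate, toc, wetted_area,
                       a_chem=1.0e3, e_act=8.0e4, g_h2=0.1, k_corr=1.0e-6):
    r_gas = 8.314  # J/(mol K)
    chemical = a_chem * toc * math.exp(-e_act / (r_gas * temp_k))  # thermolysis
    radiolysis = g_h2 * dose_rate                                  # G-value style
    corrosion = k_corr * wetted_area                               # steel corrosion
    return chemical + radiolysis + corrosion
```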
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., as described in 40 CFR 86.084-21 or 40 CFR 86.1844-01 as applicable. (4) Vehicle configuration fuel... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES...
Soil heat flux calculation for sunlit and shaded surfaces under row crops: 2. Model Test
Technology Transfer Automated Retrieval System (TEKTRAN)
A method to calculate surface soil heat flux (G0) as a function of net radiation to the soil (RN,S) was developed that accounts for positional variability across a row crop interrow. The method divides the interrow into separate sections, which may be shaded, partially sunlit, or fully sunlit, and c...
ERIC Educational Resources Information Center
Weigand, Hans-Georg
2009-01-01
A long-term project (2005-2012) was started to test the use of symbolic calculators (SCs) in Bavarian grammar schools (Germany) in Grade 10, 11 and 12. The results showed the necessity of having better diagnostics of the students' ability to understand the mathematics and their ability to work with the SC. How does a student understand particular…
Badhwar, G D; Atwell, W
1999-06-01
The dose rate dynamics of the October 19-20, 1989 solar energetic particle (SPE) event as observed by the Liulin instrument onboard the Mir orbital station was analyzed in light of new calculations of the geomagnetic cutoff and improved estimates of the >100 MeV energy spectra from the GOES satellite instrument. The new calculations were performed using the as-flown Mir orbital trajectory and include time variations of the cutoff rigidity due to changes in the Kp index. Although the agreement of the total event-integrated calculated dose with the measured dose is good, it results from some measured dose-time profiles being higher and others lower than the model calculations. These results point to the need to include the diurnal variation of the geomagnetic cutoff, and modifications of the cutoffs due to variations in Kp, in model calculations. Understanding of such events in light of the upcoming construction of the International Space Station during the period of maximum solar activity needs to be vigorously pursued.
Yang, X.
1998-12-31
Modeling ground motions from multi-shot, delay-fired mining blasts is important to the understanding of their source characteristics such as spectrum modulation. MineSeis is a MATLAB® (a computer language) Graphical User Interface (GUI) program developed for the effective modeling of these multi-shot mining explosions. The program provides a convenient and interactive tool for modeling studies. Multi-shot, delay-fired mining blasts are modeled as the time-delayed linear superposition of identical single shot sources in the program. These single shots are in turn modeled as the combination of an isotropic explosion source and a spall source. Mueller and Murphy's (1971) model for underground nuclear explosions is used as the explosion source model. A modification of Anandakrishnan et al.'s (1997) spall model is developed as the spall source model. Delays both due to the delay-firing and due to the single-shot location differences are taken into account in calculating the time delays of the superposition. Both synthetic and observed single-shot seismograms can be used to construct the superpositions. The program uses MATLAB GUI for input and output to facilitate user interaction with the program. With user provided source and path parameters, the program calculates and displays the source time functions, the single shot synthetic seismograms and the superimposed synthetic seismograms. In addition, the program provides tools so that the user can manipulate the results, such as filtering, zooming and creating hard copies.
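The time-delayed linear superposition described above is straightforward to sketch (here in Python rather than MATLAB). Shot-location delays would simply be added to the firing delays; a uniform inter-shot delay and a toy single-shot waveform are assumed.

```python
import numpy as np

# Sketch of the delay-fired superposition described above: a multi-shot
# blast seismogram built as the time-shifted sum of one single-shot
# synthetic. Shot-location delays would be added to the firing delays in
# the same way; the waveform and delays below are toy assumptions.

def superpose(single, delays_s, dt):
    """Sum time-shifted copies of a single-shot trace (sample-aligned)."""
    shifts = [int(round(d / dt)) for d in delays_s]
    out = np.zeros(len(single) + max(shifts))
    for s in shifts:
        out[s:s + len(single)] += single
    return out

dt = 0.004                                           # 250 Hz sampling
t = np.arange(0.0, 1.0, dt)
single = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)  # toy single-shot trace
trace = superpose(single, delays_s=[0.0, 0.025, 0.05, 0.075], dt=dt)
```

The regular delay spacing is what imprints the spectral modulation mentioned above: the superposition multiplies the single-shot spectrum by a comb-like delay factor.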
Ishida, Toyokazu
2008-09-17
To further understand the catalytic role of the protein environment in the enzymatic process, the author has analyzed the reaction mechanism of the Claisen rearrangement of Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modelings, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.
Near-LTE linear response calculations with a collisional-radiative model for He-like Al ions
More, R.M.; Kato, T.
1998-01-06
We investigate the non-equilibrium atomic kinetics using a collisional-radiative (CR) model modified to include line absorption. Steady-state emission is calculated for He-like aluminum ions immersed in a specified radiation field having fixed deviations from a Planck spectrum. The net emission is interpreted in terms of NLTE population changes. The calculation provides an NLTE response matrix, and in agreement with a general relation of non-equilibrium thermodynamics, the response matrix is symmetric. We compute the response matrix for 1% and 50% changes in the photon temperature and find linear response over a surprisingly large range.
More, R.; Kato, T.
1998-04-06
We investigate non-equilibrium atomic kinetics using a collisional-radiative model modified to include line absorption. Steady-state emission is calculated for He-like aluminum immersed in a specified radiation field having fixed deviations from a Planck spectrum. The calculated net emission is presented as a NLTE response matrix. In agreement with a rigorous general rule of non-equilibrium thermodynamics, the linear response is symmetric. We compute the response matrix for 1% and ±50% changes in the photon temperature and find linear response over a surprisingly large range.
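The symmetry check described above can be illustrated generically: build a response matrix by finite-difference perturbations and test it for symmetry. The toy linear model below stands in for the collisional-radiative code, which is not reproduced here.

```python
import numpy as np

# Generic sketch of the linear-response symmetry check described above:
# build R_ij = d(emission_i)/d(perturbation_j) by central finite
# differences and test R for symmetry. The toy linear model stands in
# for the collisional-radiative calculation.

def response_matrix(model, x0, eps=1e-6):
    n = len(x0)
    r = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        r[:, j] = (model(x0 + dx) - model(x0 - dx)) / (2.0 * eps)
    return r

a = np.array([[2.0, 0.3], [0.3, 1.0]])  # symmetric coupling (toy)
model = lambda x: a @ x                 # linear stand-in for the CR code
r = response_matrix(model, np.zeros(2))
print(np.allclose(r, r.T))              # True: the response is symmetric
```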
NASA Astrophysics Data System (ADS)
Kaplan, Abdullah; Capali, Veli; Ozdogan, Hasan
2015-07-01
Implementation of projects of new-generation nuclear power plants requires solving materials science and technological issues in the development of reactor materials. Melts of heavy metals (Pb, Bi, and Pb-Bi), due to their nuclear and thermophysical properties, are candidate coolants for fast reactors and accelerator-driven systems (ADS). In this study, α-, γ-, p-, n- and 3He-induced fission cross sections of the 209Bi target nucleus at high energies have been investigated for the (α,f), (γ,f), (p,f), (n,f) and (3He,f) reactions using different fission reaction models. The Mamdouh Table, Sierk, Rotating Liquid Drop, and Fission Path theoretical fission barrier models of the TALYS 1.6 code have been used for the fission cross section calculations. The calculated results have been compared with experimental data taken from the EXFOR database. The TALYS 1.6 Sierk model calculations generally exhibit good agreement with the experimental measurements for all reactions used in this study.
Influence of a detailed model of man on proton depth/dose calculation
NASA Technical Reports Server (NTRS)
Kase, P. G.
1972-01-01
The development of a detailed radiation shielding model of man is discussed. This model will be used to plan for manned space missions in which sensitive human tissues may be subjected to excessive radiation. The model has two configurations: standing and seated. More than 2500 individual elements were used to depict the external conformation, skeleton, and principal organs. The model is briefly described and several examples of its application to mission planning are given.
Maudlin, P.J.; Bingert, J.F.; House, J.W.
1997-04-01
Taylor impact tests using specimens cut from a rolled plate of Ta were conducted. The Ta was well-characterized in terms of flow stress and crystallographic texture. A piece-wise yield surface was interrogated from this orthotropic texture, and used in EPIC-95 3D simulations of the Taylor test. Good agreement was realized between the calculations and the post-test geometries in terms of major and minor side profiles and impact-interface footprints.
Use precise calculation models to operate or design refinery gas treating systems
1996-07-01
Amine simulators using rate-based calculation methodology can show refinery operators how to treat more acid gas with existing equipment. These simulators can rate the performance and design of an existing unit by evaluating tray size, downcomer configuration, column diameter, weir height, tray depth and operation with a particular solvent. In addition, these simulators can optimize plant designers' solvent selection and equipment sizing in grassroots applications.
Griffin, S; Marcus, A; Schulz, T; Walker, S
1999-01-01
The integrated exposure uptake biokinetic (IEUBK) model, recommended for use by the U.S. Environmental Protection Agency at residential Superfund sites to predict potential risks to children from lead exposure and to establish lead remediation levels, requires an interindividual geometric standard deviation (GSDi) as an essential input parameter. The GSDi quantifies the variability of blood lead concentrations for children exposed to similar environmental concentrations of lead. Estimates of potential risks are directly related to the GSDi, and therefore the GSDi directly impacts the scope of remediation at Superfund sites. Site-specific GSDi can be calculated for sites where blood lead and environmental lead have been measured. This paper uses data from blood and environmental lead studies conducted at the Bingham Creek and Sandy, Utah, Superfund sites to calculate GSDi using regression modeling, box modeling, and structural equation modeling. GSDis were calculated using various methods for treating values below the analytical method detection and quantitation limits. Treatment of nonquantifiable blood lead concentrations affected the GSDi more than the statistical method used to calculate the GSDi. For any given treatment, the different statistical methods produced similar GSDis. Because of the uncertainties associated with data in the blood lead studies, we recommend that a range of GSDis be used when analyzing site-specific risks associated with exposure to environmental lead instead of a single estimate. Because the different statistical methods produce similar GSDis, we recommend a simple procedure to calculate site-specific GSDi from a scientifically sound blood and environmental lead study. PMID:10339449
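A GSDi of the kind described above can be illustrated as the exponential of the standard deviation of log(observed/predicted) blood lead, which is one regression-style route; the paper's regression, box, and structural equation models differ in how "predicted" is obtained. All data values below are invented.

```python
import math

# Illustrative GSDi computation: the interindividual geometric standard
# deviation as exp of the sample standard deviation of
# log(observed / predicted) blood lead. All data values are invented;
# the paper's three statistical methods differ in how "predicted"
# concentrations are obtained.

def gsd_i(observed, predicted):
    logs = [math.log(o / p) for o, p in zip(observed, predicted)]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(math.sqrt(var))

observed = [3.2, 5.1, 4.4, 7.9, 2.8]   # blood lead, ug/dL (invented)
predicted = [4.0, 4.0, 4.0, 6.0, 3.0]  # model predictions (invented)
print(round(gsd_i(observed, predicted), 2))
```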
NASA Astrophysics Data System (ADS)
Mihailovic, D. T.; Alapaty, K.; Lalic, B.; Arsenic, I.; Rajkovic, B.; Malinovic, S.
2004-10-01
A method for estimating profiles of turbulent transfer coefficients inside a vegetation canopy and their use in calculating the air temperature inside tall grass canopies in land surface schemes for environmental modeling is presented. The proposed method, based on K theory, is assessed using data measured in a maize canopy. The air temperature inside the canopy is determined diagnostically by a method based on detailed consideration of 1) calculations of turbulent fluxes, 2) the shape of the wind and turbulent transfer coefficient profiles, and 3) calculation of the aerodynamic resistances inside tall grass canopies. An expression for calculating the turbulent transfer coefficient inside sparse tall grass canopies is also suggested, including modification of the corresponding equation for the wind profile inside the canopy. The proposed calculations of K-theory parameters are tested using the Land Air Parameterization Scheme (LAPS). Model outputs of air temperature inside the canopy for 8-17 July 2002 are compared with micrometeorological measurements inside a sunflower field at the Rimski Sancevi experimental site (Serbia). To demonstrate how changes in the specification of canopy density affect the simulation of air temperature inside tall grass canopies and, thus, alter the growth of PBL height, numerical experiments are performed with LAPS coupled with a one-dimensional PBL model over a sunflower field. To examine how the turbulent transfer coefficient inside tall grass canopies over a large domain represents the influence of the underlying surface on the air layer above, sensitivity tests are performed using a coupled system consisting of the NCEP Nonhydrostatic Mesoscale Model and LAPS.
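The relationship between a K profile and the aerodynamic resistances mentioned above can be sketched with a commonly used exponential form. This form and all parameter values are illustrative assumptions, not necessarily the paper's expression.

```python
import math

# Illustrative K-theory sketch (an assumed common form, not necessarily
# the paper's expression): the turbulent transfer coefficient decays
# exponentially with depth into the canopy,
#   K(z) = K(h) * exp(-n * (1 - z/h)),
# and the aerodynamic resistance between two levels is Int dz / K(z).

def k_profile(z, h, k_top, n=2.5):
    """Transfer coefficient (m^2/s) at height z inside a canopy of height h."""
    return k_top * math.exp(-n * (1.0 - z / h))

def aero_resistance(z1, z2, h, k_top, n=2.5, steps=1000):
    """Aerodynamic resistance (s/m) between z1 and z2, midpoint quadrature."""
    dz = (z2 - z1) / steps
    return sum(dz / k_profile(z1 + (i + 0.5) * dz, h, k_top, n)
               for i in range(steps))

# resistance from mid-canopy to canopy top of a 2 m canopy (toy numbers)
r_a = aero_resistance(0.5, 2.0, h=2.0, k_top=0.2)
```

Increasing the extinction parameter n (a denser canopy) raises the resistance and decouples the in-canopy air from the layer above, which is the canopy-density sensitivity examined in the experiments described above.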
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1993-01-01
New turbulence modeling options recently implemented for the 3D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include: the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include: well documented and easy to use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, multiple solid boundaries treatment, and fully vectorized L-U solver for one- and two-equation models. Good agreement is obtained between the computational results and experimental data. Sensitivity of the compressible turbulent solutions with the method of y(+) computation, the turbulent length scale correction, and some compressibility corrections are examined in detail. Test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1993-01-01
New turbulence modeling options recently implemented for the 3-D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include well-documented and easy-to-use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, treatment of multiple solid boundaries, and a fully vectorized L-U solver for one- and two-equation models. Validation test cases include the incompressible and compressible flat-plate turbulent boundary layers, turbulent developing S-duct flow, and glancing shock wave/turbulent boundary layer interaction. Good agreement is obtained between the computational results and experimental data. The sensitivity of the compressible turbulent solutions to the method of y+ computation, the turbulent length scale correction, and some compressibility corrections is examined in detail. The test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3-D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.
Kroeger, Ingo; Stadtmueller, Benjamin; Wagner, Christian; Weiss, Christian; Temirov, Ruslan; Tautz, F. Stefan; Kumpf, Christian
2011-12-21
The understanding and control of epitaxial growth of organic thin films is of crucial importance in order to optimize the performance of future electronic devices. In particular, the start of submonolayer growth plays an important role since it often determines the structure of the first layer and subsequently of the entire molecular film. We have investigated the structure formation of 3,4,9,10-perylene-tetracarboxylic dianhydride and copper-phthalocyanine molecules on Au(111) using pair-potential calculations based on van der Waals and electrostatic intermolecular interactions. The results are compared with the fundamental lateral structures known from experiment, and excellent agreement is found for these weakly interacting systems. Furthermore, the calculations are even suitable for chemisorptive adsorption, as demonstrated for copper-phthalocyanine/Cu(111), if the influence of charge transfer between substrate and molecules is known and the corresponding charge redistribution in the molecules can be estimated. The calculations are of general applicability for molecular adsorbate systems which are dominated by electrostatic and van der Waals interactions.
NASA Astrophysics Data System (ADS)
Grebeshkov, V. V.; Smolyakov, V. M.
2012-05-01
A 16-constant additive scheme for calculating the physicochemical properties of the saturated monoalcohols CH4O-C9H20O was derived by decomposing the triangular numbers of Pascal's triangle, based on the similarity of subgraphs in the molecular graphs (MGs) of the homologous series of these alcohols. Using this scheme for the calculation of properties of saturated monoalcohols as an example, it was shown that each coefficient of the scheme (in other words, the number of ways to impose a chain of a definite length i1, i2, … on a molecular graph) is the result of the decomposition of the triangular numbers of Pascal's triangle. A linear dependence was found within the adopted classification of structural elements. The sixteen parameters of the scheme were written as linear combinations of 17 parameters. The enthalpies of vaporization L°(298 K) of those saturated monoalcohols CH4O-C9H20O for which there were no experimental data were calculated. It was shown that the parameters are not chosen randomly when using the given procedure for constructing an additive scheme by decomposing the triangular numbers of Pascal's triangle.
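The central combinatorial quantity in such additive schemes, the number of ways a chain of a given length can be imposed on a molecular graph, can be illustrated with a short path-counting sketch. The helper below is hypothetical and generic (simple DFS path enumeration), not the authors' decomposition procedure.

```python
def count_chains(adjacency, length):
    """Count the ways a chain of `length` edges can be embedded in a
    molecular graph (hydrogen-suppressed skeleton). Illustrative only;
    not the Pascal-triangle decomposition of the paper."""
    count = 0

    def dfs(node, remaining, visited):
        nonlocal count
        if remaining == 0:
            count += 1
            return
        for nxt in adjacency[node]:
            if nxt not in visited:
                dfs(nxt, remaining - 1, visited | {nxt})

    for start in adjacency:
        dfs(start, length, {start})
    return count // 2  # each undirected path is found from both ends

# n-butane carbon skeleton: 0-1-2-3
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For a linear chain of n atoms the counts are n-1 paths of one edge, n-2 of two edges, and so on, which is the kind of regular pattern the Pascal-triangle decomposition exploits.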
A new model for the calculation and prediction of solar proton fluences
NASA Technical Reports Server (NTRS)
Feynman, Joan; Gabriel, Stephen B.
1990-01-01
A new predictive engineering model for the >10 MeV and >30 MeV solar proton environment at Earth is reviewed. The data used are from observations made from 1956 through 1985. In this data set, the distinction between 'ordinary events' and 'anomalously large events' that was required in earlier models disappeared. This permitted the use of statistical analysis methods developed for ordinary events on the entire data set. The >10 MeV fluences with the new model are about twice those expected on the basis of earlier models. At energies greater than 30 MeV, the old and new models agree.
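The statistical flavor of such fluence models can be sketched with a Monte Carlo toy: event occurrence treated as a Poisson process and individual event fluences drawn from a heavy-tailed (here lognormal) distribution, with a mission fluence quoted at a confidence level. All distributions and parameter values below are invented for illustration; this is not the model of the abstract.

```python
import random

def mission_fluence(years_active, mean_events_per_year, mu, sigma,
                    q=0.95, trials=4000, seed=1):
    """Monte Carlo sketch of a statistical solar-proton fluence model:
    Poisson event counts over the active period, lognormal per-event
    fluences. Returns the total fluence not exceeded with probability q.
    All parameters are illustrative placeholders."""
    rng = random.Random(seed)
    lam = mean_events_per_year * years_active  # expected events, whole period
    totals = []
    for _ in range(trials):
        # sample a Poisson count by summing exponential waiting times
        n, t = 0, 0.0
        while True:
            t += rng.expovariate(lam) if lam > 0 else float("inf")
            if t > 1.0:
                break
            n += 1
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    totals.sort()
    return totals[int(q * (len(totals) - 1))]

p95 = mission_fluence(7, 0.8, mu=20.0, sigma=1.2, q=0.95)
```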
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1990-01-01
A multigrid method is presented for calculating turbulent jets in crossflow. Fairly rapid convergence is obtained with the k-epsilon turbulence model, but computations with a full Reynolds stress turbulence model (RSM) are not yet very efficient. Grid dependency tests show that there are slight differences between results obtained on the two finest grid levels. Computations using the RSM are significantly different from those with the k-epsilon model and compare better to experimental data. Some work is still required to improve the efficiency of the computations with the RSM.
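The multigrid idea relied on above can be illustrated on a toy 1-D Poisson problem: smooth the error, restrict the residual to a coarser grid, solve there recursively, interpolate the correction back, and smooth again. This is a generic textbook sketch (weighted Jacobi, full-weighting restriction, linear interpolation), not the paper's flow solver.

```python
import numpy as np

def v_cycle(u, f, h):
    """One multigrid V-cycle for -u'' = f on [0, 1] with zero Dirichlet
    boundaries. Grid sizes must be of the form 2^k + 1."""
    def smooth(u, f, h, iters=3, w=2.0 / 3.0):
        for _ in range(iters):  # weighted Jacobi relaxation
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    u = smooth(u, f, h)
    if len(u) <= 3:
        return u
    # residual r = f - A u on the fine grid
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    # full-weighting restriction to the coarse grid
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # recursive coarse-grid correction
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    # linear interpolation of the correction back to the fine grid
    e = np.zeros_like(u)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return smooth(u, f, h)
```

Each V-cycle damps all error frequencies at once, which is the source of the rapid convergence the abstract reports for the k-epsilon computations.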
NASA Astrophysics Data System (ADS)
Talha, Nora; Bouazza, Benyounes; Guen Bouazza, Ahlam; Kadoun, Abd-Ed-Daim
2016-07-01
Steady-state electron properties are investigated in 6H-SiC at various temperatures, using Monte Carlo simulation, in which the band structure model is a major component when dealing with high fields. The aim of this work is to optimize the number of valleys involved in the simulation program in order to obtain accurate results while improving the calculation efficiency. For high fields, a five-valley model was found to be more accurate than a three-valley model and as efficient as the full-band method, while requiring much less computer time.
Ignition calculations using a reduced coupled-mode electron-ion energy exchange model
NASA Astrophysics Data System (ADS)
Garbett, W. J.; Chapman, D. A.
2016-03-01
Coupled-mode models for electron-ion energy exchange can predict large deviations from standard binary collision models in some regimes. A recently developed reduced coupled-mode model for electron-ion energy exchange, which accurately reproduces full numerical results over a wide range of density and temperature space, has been implemented in the Nym hydrocode and used to assess the impact on ICF capsule fuel assembly and performance. Simulations show a lack of sensitivity to the model, consistent with results from a range of simpler alternative models. Since the coupled-mode model is conceptually distinct from models based on binary collision theory, this result provides increased confidence that uncertainty in electron-ion energy exchange will not impact ignition attempts.
Mitrikas, V G
2015-01-01
Monitoring of the radiation loading on cosmonauts requires calculation of absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle that differ in shielding properties and lack means of radiation measurement. The paper discusses different aspects of computational modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in one or another compartment over the previous period of International Space Station operation. It was demonstrated that doses measured by real or personal dosimeters can be used to calculate effective dose values. Correct estimation of the accumulated effective dose can be ensured by taking into account the time course of the space radiation quality factor. PMID:26292419
NASA Astrophysics Data System (ADS)
Seijo, Luis
1995-05-01
Presented in this paper is a practical implementation of the use of the Wood-Boring Hamiltonian [Phys. Rev. B 18, 2701 (1978)] in atomic and molecular ab initio core model potential (AIMP) calculations, as a means to include spin-orbit relativistic effects, in addition to the mass-velocity and Darwin operators, which were already included in the spin-free version of the relativistic AIMP method. Calculations on the neutral and singly ionized atoms of the halogen elements and the sixth-row p-elements Tl-Rn are presented, as well as on the one or two lowest lying states of the diatomic molecules HX, HX+ (X = F, Cl, Br, I, At), TlH, PbH, BiH, and PoH. The calculated spin-orbit splittings and bonding properties are of stable, good quality, of the size that can be expected from an effective potential method.
NASA Astrophysics Data System (ADS)
Zhang, F. H.; Zhou, G. D.; Cui, W. Y.; Zhang, B.
2013-01-01
An investigation of the distribution of neutron exposures in low-mass asymptotic giant branch (AGB) stars is presented, based on the s-process nucleosynthesis model with the ¹²C(α,n)¹⁶O reaction occurring under radiative conditions in the interpulse phases. The model parameters, such as the fractional overlap of two successive convective thermal pulses (r), the mass fraction of the ¹³C pocket in the He intershell (q), and the mass of the effective ¹³C in the ¹³C pocket, vary with pulse number. Taking these factors into account, a method for calculating the distribution of neutron exposures in the He intershell is presented. This method has the virtues of simplicity and universality. Using this method, the exposure distributions of a stellar model for a star with a mass of 3 M⊙ and solar metallicity are calculated. The results suggest that, under the reasonable assumption that the ¹³C pocket has a uniform composition, the final exposure distribution can still be approximated by an exponential law. For a stellar model with a fixed initial mass and metallicity, there is a definite relation between the mean neutron exposure τ₀ and the neutron exposure Δτ per interpulse phase: τ₀ = 0.434·λ(q₁, q₂, …, q_(mmax+1), r₁, r₂, …, r_(mmax+1))·Δτ, where mmax is the total number of thermal pulses with the third dredge-up episode, and the proportionality coefficient λ can be determined through an exponential curve fit to the final exposure distribution. This new formula quantitatively unifies the classical model with the stellar model in terms of the distribution of neutron exposures, and allows the classical model to continue to offer guidance and constraints for s-process numerical calculations in stellar models.
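The exponential exposure distribution at the heart of the classical picture can be sketched directly: if a fraction r of the irradiated material survives each thermal pulse and each pulse adds an exposure Δτ, the mass fraction with k pulses is (1-r)r^k, whose envelope is exponential with mean exposure τ₀ = -Δτ/ln r. This is the textbook classical-model limit, not the paper's pulse-by-pulse stellar calculation with varying q and r.

```python
import math

def exposure_distribution(r, dtau, kmax=50):
    """Classical s-process exposure distribution sketch: a fraction r of
    material survives each pulse, so the mass fraction having experienced
    k pulses is (1 - r) * r**k, with neutron exposure k * dtau. The
    continuous envelope is exponential with mean tau0 = -dtau / ln(r)."""
    weights = [(1 - r) * r**k for k in range(kmax)]
    exposures = [k * dtau for k in range(kmax)]
    tau0 = -dtau / math.log(r)
    return exposures, weights, tau0

exps, w, tau0 = exposure_distribution(r=0.6, dtau=0.1)
```

The discrete mean exposure is Δτ·r/(1-r), and the exponential fit to the envelope recovers τ₀, mirroring the curve-fitting step described in the abstract.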
NASA Astrophysics Data System (ADS)
Merker, L.; Weichselbaum, A.; Costi, T. A.
2012-08-01
Recent developments in the numerical renormalization group (NRG) allow the construction of the full density matrix (FDM) of quantum impurity models [see A. Weichselbaum and J. von Delft, Phys. Rev. Lett. 99, 076402 (2007)] by using the completeness of the eliminated states introduced by F. B. Anders and A. Schiller [Phys. Rev. Lett. 95, 196801 (2005)]. While these developments prove particularly useful in the calculation of transient response and finite-temperature Green's functions of quantum impurity models, they may also be used to calculate thermodynamic properties. In this paper, we assess the FDM approach to thermodynamic properties by applying it to the Anderson impurity model. We compare the results for the susceptibility and specific heat to both the conventional approach within the NRG and to exact Bethe ansatz results. We also point out a subtlety in the calculation of the susceptibility (in a uniform field) within the FDM approach. Finally, we show numerically that for the Anderson model, the susceptibilities in response to a local and a uniform magnetic field coincide in the wide-band limit, in accordance with the Clogston-Anderson compensation theorem.
Rothman, A.C.
1980-01-01
Two of the most powerful theoretical constraints on gauge theories of the weak and electromagnetic interactions are calculability of the generalized Cabibbo mixing angles and Natural Flavor Conservation (NFC) in gauge boson and Higgs mediated neutral currents. Much of the work in these areas has been done in the context of the standard SU(2) x U(1) gauge model. Calculability is defined here in a precise way for an arbitrary gauge model with an unbroken U(1) symmetry (WET) for the first time, and its implications are explored. Also in the context of an arbitrary WET, it is found that NFC requires all quarks of a given charge and helicity to transform identically under the gauge group. The question as to whether a WET that obeys the fiats of NFC can support calculable mixing angles is answered in the negative. Similar results have been obtained for the standard model. This thesis addresses other outstanding problems in these areas, as well as formulating and examining a new left-right symmetric gauge model of the weak and electromagnetic interactions which exploits the gauge group SU(2)_L x SU(2)_R x U(1) employed first by Pati, Salam, and Mohapatra.
Denegri, Bernard; Matić, Mirela; Kronja, Olga
2014-08-14
The most comprehensive nucleofugality scale, based on the correlation of solvolytic rate constants of benzhydrylium derivatives, has recently been proposed by Mayr and co-workers (Acc. Chem. Res., 2010, 43, 1537-1549). In this work, the possibility of employing quantum chemical calculations in the further determination of nucleofugality (Nf) parameters of leaving groups is explored. Whereas the heterolytic transition state of benzhydryl carboxylates cannot be optimized by quantum chemical calculations, an alternative model reaction is examined in order to obtain nucleofugality parameters of various aliphatic carboxylates, which can properly be included in the current nucleofugality scale. For that purpose, ground and transition state structures have been optimized for the proposed model reaction, which involves anchimerically assisted heterolytic dissociation of cis-2,3-dihydroxycyclopropyl trans-carboxylates. The validity of the model reaction, as well as of the applied DFT methods in the presence of the IEFPCM solvation model, is verified by correlating calculated free energies of activation of the model reaction with literature experimental data for solvolysis of reference dianisylmethyl carboxylates. For this purpose the performance of several functionals (including the popular B3LYP) is examined, among which M06-2X gives the best results. The very good correlation indicates acceptably accurate relative reactivities of aliphatic carboxylates, and enables the estimation of rate constants for solvolysis of other dianisylmethyl carboxylates in aqueous ethanol mixtures, from which the corresponding Nf parameters are determined using Mayr's equation mentioned above. In addition, DFT calculations confirm the previous experimental observation that the abilities of aliphatic carboxylate leaving groups in solution are governed by the inductive effect of substituents attached to the carboxyl group. PMID:24964919
Cashmore, Jason; Golubev, Sergey; Dumont, Jose Luis; Sikora, Marcin; Alber, Markus; Ramtohul, Mark
2012-06-15
Purpose: A linac delivering intensity-modulated radiotherapy (IMRT) can benefit from a flattening filter free (FFF) design, which offers higher dose rates and reduced accelerator head scatter compared with conventional (flattened) delivery. This reduction in scatter simplifies beam modeling, and combining a Monte Carlo dose engine with a FFF accelerator could potentially increase dose calculation accuracy. The objective of this work was to model a FFF machine using an adapted version of a previously published virtual source model (VSM) for Monte Carlo calculations and to verify its accuracy. Methods: An Elekta Synergy linear accelerator operating at 6 MV has been modified to enable irradiation both with and without the flattening filter (FF). The VSM has been incorporated into a commercially available treatment planning system (Monaco™ v3.1) as VSM 1.6. Dosimetric data were measured to commission the treatment planning system (TPS), and the VSM was adapted to account for the lack of angular differential absorption and general beam hardening. The model was then tested using standard water phantom measurements and also by creating IMRT plans for a range of clinical cases. Results: The results show that the VSM implementation handles the FFF beams very well, with an uncertainty between measurement and calculation of <1%, which is comparable to conventional flattened beams. All IMRT beams passed standard quality assurance tests, with >95% of all points passing gamma analysis (γ < 1) using a 3%/3 mm tolerance. Conclusions: The virtual source model for flattened beams was successfully adapted to flattening filter free beam production. Water phantom and patient-specific QA measurements show excellent results, and comparisons of IMRT plans generated in conventional and FFF mode are underway to assess dosimetric uncertainties and possible improvements in dose calculation and delivery.
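The gamma analysis used for the QA pass rates above combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D version (clinical implementations are 2-D/3-D and interpolate between points) can be sketched as follows; doses are assumed normalized so the 3% tolerance is a fraction of 1.0.

```python
import math

def gamma_index(ref, meas, dx, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma analysis sketch (3%/3 mm): for each reference point the
    gamma value is the minimum, over measured points, of the combined
    dose-difference / distance-to-agreement metric. dx is the point
    spacing in mm. Simplified relative to clinical tools."""
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / dose_tol          # dose difference term
            dist = (j - i) * dx / dist_tol     # distance term
            best = min(best, math.hypot(dd, dist))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma < 1 (the usual pass criterion)."""
    return sum(g < 1.0 for g in gammas) / len(gammas)
```

For identical reference and measured profiles, every gamma value is zero and the pass rate is 100%.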
NASA Astrophysics Data System (ADS)
Ueunten, Kevin K.
With the scheduled 30 September 2015 integration of Unmanned Aerial Systems (UAS) into the national airspace, the Federal Aviation Administration (FAA) is concerned with UAS capabilities to sense and avoid conflicts. Since the operator is outside the cockpit, the proposed collision awareness plugin (CAPlugin), based on probability and error propagation, conservatively predicts potential conflicts with other aircraft and airspaces, thus increasing the operator's situational awareness. The conflict predictions are calculated using a forward state estimator (FSE) and a conflict calculator. Predicting an aircraft's position, modeled as a mixed Gaussian distribution, is the FSE's responsibility. Furthermore, the FSE supports aircraft engaged in the following three flight modes: free flight, flight path following, and orbits. The conflict calculator uses the FSE result to calculate the conflict probability between an aircraft and an airspace or another aircraft. Finally, the CAPlugin determines the highest conflict probability and warns the operator. In addition to discussing the FSE free flight, FSE orbit, and airspace conflict calculator algorithms, this thesis describes how each algorithm is implemented and tested. Lastly, two simulations demonstrate the CAPlugin's capabilities.
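The aircraft-to-aircraft conflict probability described above can be illustrated with a Monte Carlo toy: each aircraft's position estimate is a 2-D Gaussian (a single component with diagonal covariance here, rather than the thesis's mixed Gaussian), and the conflict probability is the chance the two aircraft come within a separation threshold. This is a sketch of the idea, not the CAPlugin code; all names and values are invented.

```python
import math
import random

def conflict_probability(mu_a, var_a, mu_b, var_b, sep, n=20000, seed=7):
    """Monte Carlo estimate of the probability that two aircraft with
    independent 2-D Gaussian position estimates (means mu_*, per-axis
    variances var_*) are closer than `sep`. Illustrative sketch only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ax = rng.gauss(mu_a[0], math.sqrt(var_a[0]))
        ay = rng.gauss(mu_a[1], math.sqrt(var_a[1]))
        bx = rng.gauss(mu_b[0], math.sqrt(var_b[0]))
        by = rng.gauss(mu_b[1], math.sqrt(var_b[1]))
        if math.hypot(ax - bx, ay - by) < sep:
            hits += 1
    return hits / n
```

A mixed-Gaussian state estimate would simply sample the mixture component first, then the position, leaving the rest of the calculation unchanged.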
Numerical models for ac loss calculation in large-scale applications of HTS coated conductors
NASA Astrophysics Data System (ADS)
Quéval, Loïc; Zermeño, Víctor M. R.; Grilli, Francesco
2016-02-01
Numerical models are powerful tools to predict the electromagnetic behavior of superconductors. In recent years, a variety of models have been successfully developed to simulate high-temperature-superconducting (HTS) coated conductor tapes. While the models work well for the simulation of individual tapes or relatively small assemblies, their direct applicability to devices involving hundreds or thousands of tapes, e.g., coils used in electrical machines, is questionable. Indeed, the simulation time and memory requirement can quickly become prohibitive. In this paper, we develop and compare two different models for simulating realistic HTS devices composed of a large number of tapes: (1) the homogenized model simulates the coil using an equivalent anisotropic homogeneous bulk with specifically developed current constraints to account for the fact that each turn carries the same current; (2) the multi-scale model parallelizes and reduces the computational problem by simulating only several individual tapes at significant positions of the coil's cross-section using appropriate boundary conditions to account for the field generated by the neighboring turns. Both methods are used to simulate a coil made of 2000 tapes, and compared against the widely used H-formulation finite-element model that includes all the tapes. Both approaches allow faster simulations of large numbers of HTS tapes by 1-3 orders of magnitude, while maintaining good accuracy of the results. Both models can therefore be used to design and optimize large-scale HTS devices. This study provides key advancements with respect to previous versions of both models. The homogenized model is extended from simple stacks to large arrays of tapes. For the multi-scale model, the importance of the choice of the current distribution used to generate the background field is underlined; the error in ac loss estimation resulting from the most obvious choice of starting from a uniform current distribution is revealed.
NASA Astrophysics Data System (ADS)
Noji, H.
This study investigates the losses in a two-conducting-layer REBCO cable fabricated by researchers at Furukawa Electric Co., Ltd. The losses were calculated using a combination of my electric circuit (EC) model with a two-dimensional finite element method (2D FEM). The helical pitches of the tapes in each layer, P1 and P2, were adjusted to equalize the current in both cable layers, although the loss calculation initially assumed infinite helical pitches and the same current in each layer. The results showed that the losses depended on the relative tape-position angle between the layers (θ/θ'), because the vertical field between adjacent tapes in the same layer varied with θ/θ'. When simulating the real cable, the helical pitches were adjusted and the layer currents were calculated with the EC model. These currents were input to the 2D FEM to compute the losses. The losses changed along the cable length because the difference between P1 and P2 altered θ/θ' along this direction. The average angle-dependent and position-dependent losses were equal and closely approximated the measured losses. As an example of reducing the loss in this cable, the angle and the helical pitches were fixed at θ/θ' = 0.5 and P1 = P2 = 100 mm (S-direction). The calculation under these conditions indicated that the loss is about one order of magnitude lower than the measurement.
NASA Astrophysics Data System (ADS)
Liu, Lang
2015-05-01
The unitary correlation operator method (UCOM) and the similarity renormalization group theory (SRG) are compared and discussed in the framework of no-core Monte Carlo shell model (MCSM) calculations for 3H and 4He. The treatment of spurious center-of-mass motion by Lawson's prescription is performed in the MCSM calculations. The results with both transformed interactions show good suppression of spurious center-of-mass motion with proper values of the Lawson prescription parameter βc.m. The UCOM potentials achieve faster convergence of the ground-state total energy than the SRG potentials in the MCSM calculations, which differs from the cases in no-core shell model (NCSM) calculations. These differences are discussed and analyzed in terms of the truncation schemes of the MCSM and NCSM, as well as the properties of the SRG and UCOM potentials. Supported by the Fundamental Research Funds for the Central Universities (JUSRP1035) and the National Natural Science Foundation of China (11305077)
Comment on 'Model calculation of the scanned field enhancement factor of CNTs'.
Zhbanov, A I; Lee, Yong-Gu; Pogorelov, E G; Chang, Yia-Chung
2010-09-01
The model proposed by Ahmad and Tripathi (2006 Nanotechnology 17 3798) demonstrates that the field enhancement factor of carbon nanotubes (CNTs) reaches a maximum at a certain length. Here, we show that this behavior should not occur and suggest our correction to this model. PMID:20689163
UNCERTAINTY AND THE JOHNSON-ETTINGER MODEL FOR VAPOR INTRUSION CALCULATIONS
The Johnson-Ettinger Model is widely used for assessing the impacts of contaminated vapors on residential air quality. Typical use of this model relies on a suite of estimated data, with few site-specific measurements. Software was developed to provide the public with automate...
A Classroom Note on: Modeling Functions with the TI-83/84 Calculator
ERIC Educational Resources Information Center
Lubowsky, Jack
2011-01-01
In Pre-Calculus courses, students are taught the composition and combination of functions to model physical applications. However, when combining two or more functions into a single more complicated one, students may lose sight of the physical picture which they are attempting to model. A block diagram, or flow chart, in which each block…
UAH mathematical model of the variable polarity plasma ARC welding system calculation
NASA Technical Reports Server (NTRS)
Hung, R. J.
1994-01-01
Significant advantages of the Variable Polarity Plasma Arc (VPPA) welding process include faster welding, fewer repairs, less joint preparation, reduced weldment distortion, and absence of porosity. A mathematical model is presented to analyze the VPPA welding process. Results of the mathematical model were compared with the experimental observations made by the GDI team.
Yeh, Hsu-Chi; Phalen, R.F.; Chang, I.
1995-12-01
The National Council on Radiation Protection and Measurements (NCRP) in the United States and the International Commission on Radiological Protection (ICRP) have been independently reviewing and revising respiratory tract dosimetry models for inhaled radioactive aerosols. The newly proposed NCRP respiratory tract dosimetry model represents a significant change in philosophy from the old ICRP Task Group model. The proposed NCRP model describes respiratory tract deposition, clearance, and dosimetry for radioactive substances inhaled by workers and the general public and is expected to be published soon. In support of the NCRP proposed model, ITRI staff members have been developing computer software. Although this software is still incomplete, the deposition portion has been completed and can be used to calculate inhaled particle deposition within the respiratory tract for particle sizes from as small as radon and radon progeny (≈1 nm) to particles larger than 100 μm. Recently, ICRP published their new dosimetric model for the respiratory tract, ICRP66. Based on ICRP66, the National Radiological Protection Board of the UK developed PC-based software, LUDEP, for calculating particle deposition and internal doses. The purpose of this report is to compare the calculated respiratory tract deposition of particles using the NCRP/ITRI model and the ICRP66 model, under the same particle size distribution and breathing conditions. In summary, the general trends of the deposition curves for the two models were similar.
Bypass Transitional Flow Calculations Using a Navier-Stokes Solver and Two-Equation Models
NASA Technical Reports Server (NTRS)
Liuo, William W.; Shih, Tsan-Hsing; Povinelli, L. A. (Technical Monitor)
2000-01-01
Bypass transitional flows over a flat plate were simulated using a Navier-Stokes solver and two-equation models. A new model for bypass transition, which occurs in cases with high free-stream turbulence intensity (TI), is described. The new transition model is developed by adding an intermittency correction function to an existing two-equation turbulence model. The advantages of using the Navier-Stokes equations, as opposed to boundary-layer equations, in bypass transition simulations are also illustrated. The results for two test flows over a flat plate with different levels of free-stream turbulence intensity are reported. Comparisons with the experimental measurements show that the new model can capture very well both the onset and the length of bypass transition.
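The intermittency-correction idea can be sketched as a blending factor γ that ramps from 0 (laminar) to 1 (fully turbulent) across the transition region and scales the eddy viscosity of the underlying two-equation model. The specific ramp shape and the Reynolds-number bounds below are invented for illustration; they are not the paper's correction function.

```python
import math

def intermittency(rex, rex_onset, rex_end):
    """Illustrative intermittency factor gamma as a function of the local
    Reynolds number Re_x: 0 before transition onset, 1 after completion,
    a smooth exponential ramp in between (an assumed shape)."""
    if rex <= rex_onset:
        return 0.0
    if rex >= rex_end:
        return 1.0
    xi = (rex - rex_onset) / (rex_end - rex_onset)
    return 1.0 - math.exp(-5.0 * xi**2)

def effective_eddy_viscosity(mu_t, rex, rex_onset=3e5, rex_end=3e6):
    """The turbulence model's eddy viscosity is simply scaled by gamma."""
    return intermittency(rex, rex_onset, rex_end) * mu_t
```

Because γ multiplies the eddy viscosity, the solver recovers a laminar boundary layer upstream of onset and the unmodified turbulence model downstream, with the transition length controlled by the ramp.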
Reflected kinetics model for nuclear space reactor kinetics and control scoping calculations
Washington, K.E.
1986-05-01
The objective of this research is to develop a model that offers an alternative to the point kinetics (PK) modelling approach in the analysis of space reactor kinetics and control studies. Modelling effort will focus on the explicit treatment of control drums as reactivity input devices so that the transition to automatic control can be smoothly done. The proposed model is developed for the specific integration of automatic control and the solution of the servo mechanism problem. The integration of the kinetics model with an automatic controller will provide a useful tool for performing space reactor scoping studies for different designs and configurations. Such a tool should prove to be invaluable in the design phase of a space nuclear system from the point of view of kinetics and control limitations.
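The point kinetics baseline that the proposed model offers an alternative to can be sketched with the standard one-delayed-group PK equations; a control-drum model would close the loop by making the reactivity ρ a function of drum angle. The parameter values in the example are generic illustrative numbers, not from any specific space reactor design.

```python
def point_kinetics_step(n, c, rho, beta, Lambda, lam, dt):
    """One explicit Euler step of the one-delayed-group point kinetics
    equations:
        dn/dt = ((rho - beta) / Lambda) * n + lam * c
        dc/dt = (beta / Lambda) * n - lam * c
    n: neutron population, c: delayed-precursor concentration,
    rho: reactivity, beta: delayed-neutron fraction,
    Lambda: neutron generation time, lam: precursor decay constant."""
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

# at zero reactivity, the steady state c = beta * n / (lam * Lambda) holds
beta, Lambda, lam = 0.0065, 1e-4, 0.08
n, c = 1.0, beta * 1.0 / (lam * Lambda)
```

A reflected-kinetics model would replace the single-point neutron balance with coupled core and reflector regions, but the control problem is driven by the same reactivity feedback structure.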
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiao-Wen; Jupp, David L. B.
1991-01-01
The bidirectional radiance or reflectance of a forest or woodland can be modeled using principles of geometric optics and Boolean models for random sets in three-dimensional space. This model may be defined at two levels. At the scene level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components as weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of scene components depend upon the directions of irradiance and exitance, the model accounts for the hotspot that is well known in leaf and tree canopies.
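The area-weighted mixing at the scene level is a direct computation. The component reflectances and proportions below are invented for illustration; in the model itself the proportions are derived from the geometric-optics and Boolean-model calculations for given sun and view directions.

```python
def scene_reflectance(proportions, reflectances):
    """Scene reflectance as the area-weighted sum of the four
    geometric-optics components: sunlit canopy, shadowed canopy,
    sunlit background, shadowed background (in that order here)."""
    assert abs(sum(proportions) - 1.0) < 1e-9, "areal proportions must sum to 1"
    return sum(a * r for a, r in zip(proportions, reflectances))

# example: mostly sunlit canopy in the field of view (invented values)
R = scene_reflectance([0.55, 0.10, 0.25, 0.10], [0.08, 0.02, 0.20, 0.05])
```

The hotspot arises because the shadowed proportions shrink toward zero as the view direction approaches the sun direction, raising R.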
REMAP: A reaction transport model for isotope ratio calculations in porous media
NASA Astrophysics Data System (ADS)
Chernyavsky, Boris M.; Wortmann, Ulrich G.
2007-02-01
Reactive transport modeling has become an important tool in geochemistry and has recently been expanded to isotopic studies as well. However, there is currently no publicly available code specifically tailored to isotopic studies. We therefore present here a computer program for 1-D reactive transport modeling of bacterially mediated isotope fractionation processes in porous media. Our numerical method allows the modeling of both stationary and time-dependent processes and implements various boundary condition types, including floating boundaries. The model specifically handles cases where the substrate is fully consumed, either at the lower boundary or at any point within the model. We provide a detailed analysis of our implementation as well as some example cases. The program is designed to be run on Matlab or Octave, which are available for all major operating systems.
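The core numerical pattern of such a code, 1-D advection-diffusion-reaction for two isotopologues with a kinetic fractionation factor, can be sketched with an explicit finite-difference step. This is a generic illustration of the kind of scheme involved, not REMAP's implementation; boundary handling, stoichiometry, and the rate law are simplified, and all values are assumptions.

```python
def step_isotopes(c32, c34, D, v, k, alpha, dx, dt):
    """One explicit finite-difference step of 1-D advection-diffusion-
    reaction for two isotopologues of a substrate (e.g. 32S- and
    34S-bearing sulfate). The heavy species reacts more slowly by the
    fractionation factor alpha < 1, so the residual pool becomes
    isotopically heavy. Boundary cells are held fixed (Dirichlet)."""
    def step(c, rate):
        new = c[:]
        for i in range(1, len(c) - 1):
            diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
            adv = -v * (c[i] - c[i - 1]) / dx   # first-order upwind (v > 0)
            new[i] = c[i] + dt * (diff + adv - rate * c[i])
        return new
    return step(c32, k), step(c34, k * alpha)
```

Stepping from a uniform profile, the interior c34/c32 ratio rises above its initial value, which is the fractionation signal such models are built to track.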
Calculated mineral precipitation upon evaporation of a model Martian groundwater near 0°C
NASA Technical Reports Server (NTRS)
Debraal, J. D.; Reed, M. H.; Plumlee, G. S.
1992-01-01
Previously, the effect of weathering a basalt of Shergotty meteorite composition with pure water buffered at martian atmospheric values of CO2 and O2 was calculated, in order to place constraints upon the composition of martian groundwater and to determine possible equilibrium mineral assemblages. A revised calculation of the composition of the aqueous phase in the weathering reaction as a function of the amount of basalt titrated into the solution is shown. The concentrations of sulfate and chloride ions increase in the solution from high water/rock ratios (w/r) on the left to low water/rock ratios on the right, until at w/r = 1, where 1 kg of basalt has been titrated, the sulfate concentration is 1564 ppm and the chloride concentration is 104 ppm. The resulting fluid is dominated by sulfate and sodium, with bicarbonate and chloride at about the same concentration. This solution was evaporated in an attempt to determine whether the resulting evaporite can explain the Viking XRF data. The program CHILLER was used to evaporate this solution at 0.1°C.
A comparison of models used for calculation of RFLP pattern frequencies.
Herrin, G
1992-11-01
In recent years the application of DNA typing information to criminal investigations has gained widespread acceptance. The primary method currently in use relies on length variation of DNA restriction fragments between individuals; these variations are identified using variable number tandem repeat (VNTR) DNA probes. As this technology becomes more widely used, it is crucial that scientifically valid methods of interpreting the significance of a DNA typing result be adopted. The method chosen should not only give a reliable approximation of the statistical likelihood of a particular RFLP pattern occurring, but should also be easy to present in court and for the court to understand. In this manuscript, five methods of calculating the frequency of occurrence of an RFLP pattern are presented: fixed bin genotype, floating bin phenotype, floating bin genotype, the National Research Council (NRC) method using fixed bins, and the NRC method using floating bins. The calculations discussed here demonstrate that the fixed bin genotype method produces a frequency very similar to that obtained from floating bin phenotypes. In addition, regardless of the method chosen or the database size, any particular banding pattern over several loci was found to be very rare in the population.
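As a rough illustration of the product-rule arithmetic underlying such frequency estimates (not the paper's exact binning procedures), the sketch below computes a multi-locus pattern frequency from per-locus allele (bin) frequencies under Hardy-Weinberg assumptions; all numbers are hypothetical.

```python
def genotype_freq(p, q=None):
    """Single-locus genotype frequency: p^2 (homozygote) or 2pq (heterozygote)."""
    return p * p if q is None else 2 * p * q

def pattern_freq(loci):
    """Multiply genotype frequencies over independent loci.

    loci: list of (p,) or (p, q) allele-frequency tuples."""
    f = 1.0
    for alleles in loci:
        f *= genotype_freq(*alleles)
    return f

# e.g. three heterozygous loci with bin frequencies in the 5-10% range
f = pattern_freq([(0.05, 0.08), (0.10, 0.06), (0.07, 0.09)])
```

Because the per-locus frequencies multiply, even modest single-locus frequencies yield a very small multi-locus pattern frequency, consistent with the rarity noted above.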
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Badavi, Francis F.
1993-01-01
Nuclear fragmentation cross sections of Silberberg and Tsao, which are more accurate for a hydrogen target, were implemented in the database of a galactic cosmic ray transport code (HZETRN) to replace those of Rudstam. Sample calculations were made for the transported galactic cosmic ray flux through a liquid hydrogen shield at solar minimum conditions to determine the effect of this change. The transported flux based on the Silberberg-Tsao semiempirical formalism contains fewer high-LET (linear energy transfer) components but more low-LET components than the results based on Rudstam's formalism, and this disparity deepens as the shield thickness increases. A comparison of the results obtained using both energy-dependent and energy-independent cross sections of Silberberg and Tsao indicates that the energy-independent assumption underestimates the high-LET flux above 100 keV/micron by approximately 40 percent for a 15-g/cm(sup 2) thickness of liquid hydrogen. Similar results were obtained in a previous study when both energy-dependent and energy-independent cross sections of Rudstam were considered. Nonetheless, the present study found that an energy-independent calculation would be best accomplished by using Rudstam's cross sections, as has been done in the past for various engineering applications.
NASA Astrophysics Data System (ADS)
Greco, Cristina; Yiang, Ying; Kremer, Kurt; Chen, Jeff; Daoulas, Kostas
Polymer liquid crystals, apart from traditional applications as high-strength materials, are important for new technologies, e.g., organic electronics. Their study often invokes mesoscale models, parameterized to reproduce thermodynamic properties of the real material. Such top-down strategies require advanced simulation techniques that accurately predict the thermodynamics of mesoscale models as a function of their characteristic features and parameters. Here a recently developed model describing nematic polymers as worm-like chains interacting through soft directional potentials is considered. We present a special thermodynamic integration scheme that delivers free energies in particle-based Monte Carlo simulations of this model while avoiding thermodynamic singularities. Conformational and structural properties, as well as Helmholtz free energies, are reported as a function of interaction strength. They are compared with state-of-the-art SCF calculations invoking a continuum analog of the same model, demonstrating the role of liquid packing and fluctuations.
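A minimal sketch of the general thermodynamic-integration idea (not the paper's special scheme): the free-energy difference between two interaction strengths is obtained by integrating the ensemble average of the potential-energy derivative with respect to the coupling parameter lambda. The sampled values below are an analytic stand-in for the quantity a Monte Carlo run would estimate.

```python
import numpy as np

def free_energy_difference(dU_dlam, lambdas):
    """Trapezoidal integration of <dU/dlambda> samples over lambda."""
    d = np.diff(lambdas)
    avg = 0.5 * (dU_dlam[1:] + dU_dlam[:-1])
    return float(np.sum(d * avg))

lams = np.linspace(0.0, 1.0, 21)
samples = -2.0 * lams  # toy stand-in for MC averages at each lambda
dF = free_energy_difference(samples, lams)
```

In practice each `samples[i]` would come from an equilibrated simulation at coupling `lams[i]`; the scheme described above is designed so that this integrand stays finite along the path.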
NASA Technical Reports Server (NTRS)
Thompson, Anne M.; Stewart, Richard W.
1994-01-01
Monte Carlo methods are frequently applied to the evaluation of uncertainties in models with multiple inputs that themselves have associated imprecisions. In the case of photochemical models used to evaluate changes in O3 or OH, inputs analyzed include rate coefficients measured in the laboratory, and chemical and physical constituents measured in the atmosphere. The Monte Carlo method was used with the 1-dimensional GSFC tropospheric photochemical model to examine uncertainty propagation to calculation of NO(x) and the major oxidants (O3, OH) in the upper troposphere. In all cases chemical kinetics inputs are varied and the NO(x) perturbation of a subsonic fleet is simulated. A series of model runs is used to explore sensitivities of model-computed parameters to other parameters: heterogeneous processes, the uncertainty in upper tropospheric H2O vapor measurements, aircraft emissions at different latitudes.
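A minimal sketch of this style of Monte Carlo uncertainty propagation, assuming lognormally distributed rate coefficients and a toy stand-in for the photochemical model; all names and values are illustrative, not those of the GSFC model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate(k0, f, n):
    """n lognormal samples around k0 with multiplicative uncertainty factor f."""
    return k0 * np.exp(rng.normal(0.0, np.log(f), n))

def model(k1, k2):
    """Toy stand-in for a photochemical calculation: steady-state partitioning."""
    return k1 / (k1 + k2)

n = 5000
out = model(sample_rate(1e-12, 1.3, n), sample_rate(5e-13, 1.5, n))
lo, hi = np.percentile(out, [2.5, 97.5])  # 95% uncertainty interval on the output
```

The spread between `lo` and `hi` quantifies how the input imprecisions propagate to the computed quantity, which is the role the Monte Carlo runs play in the study above.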
Umarova, Zhanat; Botayeva, Saule; Yegenova, Aliya; Usenova, Aisaule
2015-05-15
This article formulates the main thermodynamic aspects of modeling diffusion transfer in molecular sieves, using the dissipation function as the basic notion. A differential equation connecting the volume flow with the change in concentration of the captured component is derived, yielding expressions for the change in concentration of the captured component and for the membrane detection coefficient. In addition, a systems approach to describing gas separation in ultraporous membranes is developed, with micro- and meso-levels of mathematical modeling distinguished. At the micro-level, the non-ideality of the separated system is taken into account, including the departure from Fick's law of diffusion. At the meso-level, a method for calculating selectivity that accounts for the fractal structure of the membranes is developed. A calculation algorithm and its software implementation are presented.
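As a baseline for the departure from Fick's law mentioned above, the ideal steady-state Fickian flux across a membrane can be sketched as follows (toy values, not the article's model):

```python
def fick_flux(D, c_hi, c_lo, thickness):
    """Steady-state Fick's-law flux J = D * (c_hi - c_lo) / L across a membrane."""
    return D * (c_hi - c_lo) / thickness

# Illustrative values: D in m^2/s, concentrations in mol/m^3, thickness in m
J = fick_flux(D=1e-9, c_hi=1.0, c_lo=0.0, thickness=1e-4)  # flux in mol m^-2 s^-1
```

The micro-level model described above corrects this ideal linear relation for the non-ideality of the separated system.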
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards.
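The two residue-counting models can be sketched as follows; the toy sequence and the mapping from residue counts to absolute quantitation values are illustrative assumptions, not the study's calibration.

```python
def binding_residues(seq, include_his=False):
    """Count residues assumed to bind Coomassie G-250: Arg + Lys (M1),
    optionally also His (M2)."""
    residues = set("RK") | ({"H"} if include_his else set())
    return sum(1 for aa in seq.upper() if aa in residues)

seq = "MKRHHGLKR"                             # hypothetical sequence
m1 = binding_residues(seq)                    # Method 1: Arg + Lys
m2 = binding_residues(seq, include_his=True)  # Method 2: Arg + Lys + His
```

For His-rich proteins the two counts diverge, which is consistent with the M1 model's overestimation reported above.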