Sample records for model calculation based

  1. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, owing to the lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in the case of hardware changes. In contrast, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine the results: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid calculation and the TPS was within 3%/3 mm for over 98% of the volume in the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
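    The three-step scheme above amounts to adding a measurement-derived correction to the model dose. A minimal sketch (hypothetical 1-D depth-dose numbers, not clinical data; the function names are illustrative):

```python
import numpy as np

def commission_correction(measured, model_calculated):
    # Tabulate measurement-minus-model differences; these play the
    # role of the commissioning data for the Delta-DRT calculator.
    return measured - model_calculated

def hybrid_dose(d_model, d_delta_drt):
    # Step 3 of the abstract: D = D-model + D-DeltaDRT
    return d_model + d_delta_drt

# Toy depth-dose values (made up for illustration)
measured = np.array([1.00, 0.95, 0.88])
cccs = np.array([0.98, 0.96, 0.90])

delta = commission_correction(measured, cccs)
dose = hybrid_dose(cccs, delta)  # recovers the measurement in water
```

    For a patient geometry, the correction term would instead be evaluated by the ΔDRT calculator on the patient CT, so the correction reflects the machine while the CCCS term models heterogeneity.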

  2. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculations in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken-nodes cost vector (MBNCV), is proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships among relays, which may lead to more iterative calculation workload during setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived, and the dependency relationships in multi-loop networks are then modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for multi-loop networks is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples are then calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.

  3. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Massively repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires a large number of SPICE simulations, which account for the largest share of the runtime. In this paper, a new method is proposed to address this issue. The key idea is to build an efficient mixture surrogate model over both the design variables and the process variables: SPICE simulation provides a set of sample points, and these points are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to increase the speed of the yield calculation; it is suitable for high-dimensional process variables and multi-performance applications.
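    The workflow described here (a few expensive simulations, a cheap trained surrogate, then a large Monte Carlo run on the surrogate) can be sketched as follows. This toy uses an ordinary least-squares fit rather than the paper's lasso-trained mixture model, and a made-up closed-form stand-in for the SPICE metric:

```python
import numpy as np

rng = np.random.default_rng(0)

def spice_metric(x):
    # Stand-in for an expensive SPICE-simulated performance metric
    # (e.g. a read margin); purely illustrative.
    return 1.0 - 0.8 * x[:, 0] ** 2 + 0.3 * x[:, 1]

# 1) Small set of "simulated" training samples over design/process vars
X = rng.normal(size=(200, 2))
y = spice_metric(X)

# 2) Train a cheap surrogate on simple basis functions
phi = lambda x: np.column_stack([x[:, 0] ** 2, x[:, 1], np.ones(len(x))])
coef, *_ = np.linalg.lstsq(phi(X), y, rcond=None)

# 3) Large Monte Carlo run on the surrogate instead of SPICE
Xmc = rng.normal(size=(100_000, 2))
fail_rate = np.mean(phi(Xmc) @ coef < 0.0)  # failure = metric below 0
```

    Only step 1 pays the simulator cost; step 3 evaluates the surrogate, which is why surrogate-based yield optimization scales.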

  4. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system for electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of the modal fields excited in the waveguides, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on those statistical parameters, from which predictions of communications capability may be made.

  5. Simulation and analysis of main steam control system based on heat transfer calculation

    NASA Astrophysics Data System (ADS)

    Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai

    2018-05-01

    In this paper, the 300 MW boiler of a thermal power plant was studied, and MATLAB was used to write a calculation program for the heat transfer process between the main steam and the boiler flue gas; the amount of spray water needed to keep the main steam at the target temperature was then calculated. The heat transfer calculation program was next introduced into a Simulink simulation platform for a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis of the main steam temperature, but also adapts to changes in boiler load.
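    At its core, holding the main steam at a target temperature with spray water is a steady-state energy balance. A minimal sketch (the enthalpy values are illustrative placeholders, not taken from the paper):

```python
def spray_water_flow(m_steam, h_steam_in, h_target, h_water):
    # Attemperator mass/energy balance: the spray flow (kg/s) that
    # brings superheated steam down to the target enthalpy:
    #   m_spray = m_steam * (h_in - h_target) / (h_target - h_water)
    return m_steam * (h_steam_in - h_target) / (h_target - h_water)

# Hypothetical values for a 300 MW unit (enthalpies in kJ/kg)
m_spray = spray_water_flow(m_steam=250.0, h_steam_in=3450.0,
                           h_target=3400.0, h_water=700.0)
```

    A control system like the one in the paper would recompute this balance as flue-gas conditions and boiler load change.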

  6. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE PAGES

    Follett, R. K.; Edgell, D. H.; Froula, D. H.; ...

    2017-10-20

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  8. Modeling of the metallic port in breast tissue expanders for photon radiotherapy.

    PubMed

    Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui

    2018-03-30

    The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations with ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to 7.5 g/cm³, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found that the TPS-calculated point doses based on the new model agreed with TLD measurements within 5.0% and were more accurate than doses calculated with the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while it introduces a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of the American Association of Physicists in Medicine.
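    The density-tuning step described here can be pictured as a one-dimensional search over candidate override densities for the port. The numbers below are invented for illustration and are not the study's measurement data:

```python
# One hypothetical IC point measurement (arbitrary dose units) versus
# TPS-calculated doses for several candidate port densities (g/cm^3).
measured_dose = 95.2
calculated_dose = {6.5: 96.8, 7.0: 96.0, 7.5: 95.3, 8.0: 94.1}

# Pick the density that minimizes the calculation-measurement mismatch.
best_density = min(calculated_dose,
                   key=lambda d: abs(calculated_dose[d] - measured_dose))
```

    In practice the mismatch would be aggregated over several measurement points and beam arrangements before choosing the density.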

  9. Influence of channel base current and varying return stroke speed on the calculated fields of three important return stroke models

    NASA Technical Reports Server (NTRS)

    Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard

    1991-01-01

    Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and Diendorfer-Uman (DU) models, with the channel base current assumed in Nucci et al. on the one hand and the channel base current assumed in Diendorfer and Uman on the other. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS model. Also, the DU model is theoretically extended to include an arbitrarily varying return stroke speed with height. A brief discussion is presented of the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.

  10. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte Carlo methods to calculate the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte Carlo code PHITS for this purpose. JCDS-FX makes it possible to build a precise voxel model consisting of pixel-based voxel cells as small as 0.4 × 0.4 × 2.0 mm³ in order to perform high-accuracy dose estimation, e.g. for calculating the dose distribution in a human body. However, such miniaturization of the voxel size increases the calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte Carlo calculations for human geometry efficiently. We therefore devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method can reduce calculation time substantially while preserving the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Jim Bouchard

    Over a 12-month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin hypercube sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • development of time-dependent fire heat release rate profiles (required as input to CFAST), • calculation of fire severity factors based on CFAST detailed fire modeling, and • calculation of fire non-suppression probabilities.

  12. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
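    Group additivity itself is simple: a partial molar property of a polypeptide is estimated as the sum of its constituent-group contributions. The group values below are placeholders, not the HKF or peptide-based parameters discussed in the paper:

```python
# Placeholder group contributions to the partial molar heat capacity
# (J K^-1 mol^-1); values are illustrative only.
GROUP_CP = {"peptide backbone": 35.0, "CH2": 90.0, "CH3": 130.0}

def additive_property(group_counts, group_values):
    # Sum n_i * g_i over all constituent groups of the molecule.
    return sum(n * group_values[g] for g, n in group_counts.items())

# A hypothetical fragment: three backbone units plus two side-chain groups
cp = additive_property({"peptide backbone": 3, "CH2": 2, "CH3": 1}, GROUP_CP)
```

    The disagreement the paper reports is about the parameter sets, not this additive scheme, which both models share.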

  13. Pricing of premiums for equity-linked life insurance based on joint mortality models

    NASA Astrophysics Data System (ADS)

    Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.

    2018-03-01

    Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables alone has become less relevant for premium calculation. To overcome this problem, we use a combined mortality model, determined in this study from the Indonesian Mortality Table 2011, to obtain the probabilities of death and survival. In this research, we build the combined mortality model from the Weibull, inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we calculate the value of the claim to be paid and the premium price numerically. By calculating equity-linked life insurance premiums properly, it is expected that no party will be disadvantaged by inaccurate calculation results.

  14. Predicted phototoxicities of carbon nano-material by quantum mechanical calculations

    EPA Science Inventory

    The purpose of this research is to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on the quantum mechanical (ab initio) calculations on these carbon-based materials and compa...

  15. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. The approach is based on a known current wave-pair model representing the lightning current wave, either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The computed electric fields are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law), compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity; combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.

  16. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Calculation of FTP-based and HFET-based fuel economy values for a model type. 40 CFR 600.208-08, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), ENERGY POLICY, FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES, Fuel Economy Regulations fo...

  17. Using experimental data to test an n -body dynamical model coupled with an energy-based clusterization algorithm at low incident energies

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Puri, Rajeev K.

    2018-03-01

    Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of an energy-based clusterization algorithm, the simulated annealing clusterization algorithm (SACA), in describing the experimental data on charge distributions and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.

  18. CELSS scenario analysis: Breakeven calculations

    NASA Technical Reports Server (NTRS)

    Mason, R. M.

    1980-01-01

    A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
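    The breakeven idea reduces to comparing a fixed regenerative-system mass against linearly accumulating resupply mass. A deliberately minimal sketch (masses and rates are invented; the report's model tracks the mass requirements of individual food production components):

```python
def breakeven_time(regenerative_mass, resupply_rate):
    # Mission duration at which cumulative resupply mass for a
    # non-regenerative system equals the fixed mass of a regenerative
    # one: t = M_regen / (dm/dt)_resupply.
    return regenerative_mass / resupply_rate

# Hypothetical numbers: a 12 t regenerative plant vs 5 kg/day resupply
t_breakeven_days = breakeven_time(12000.0, 5.0)
```

    Missions longer than this breakeven time favor the regenerative system on a pure mass basis.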

  19. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), caused by the algorithm adopted for the VC model; 2) discretization error (DE), usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, with TE as the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface, so that the theoretical true value is easily obtained and the interference of data errors is eliminated. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
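    The two volume rules named above extend the familiar 1-D quadrature weights to a grid via a tensor product. A sketch on a Gauss synthetic surface, whose true volume over the plane is pi (grid size and spacing are hypothetical, not the paper's DEMs):

```python
import numpy as np

def volume_trapezoid(z, h):
    # Trapezoidal double rule: tensor product of 1-D trapezoid weights.
    w = np.ones(z.shape[0]); w[0] = w[-1] = 0.5
    v = np.ones(z.shape[1]); v[0] = v[-1] = 0.5
    return h * h * (w @ z @ v)

def volume_simpson(z, h):
    # Simpson's double rule; needs an odd number of points per axis.
    def wts(n):
        w = np.ones(n)
        w[1:-1:2] = 4.0
        w[2:-1:2] = 2.0
        return w / 3.0
    return h * h * (wts(z.shape[0]) @ z @ wts(z.shape[1]))

# Gauss synthetic surface z = exp(-(x^2 + y^2)) on a 101 x 101 grid
n, h = 101, 0.1
x = np.arange(n) * h - 5.0
X, Y = np.meshgrid(x, x, indexing="ij")
Z = np.exp(-(X**2 + Y**2))

v_t = volume_trapezoid(Z, h)
v_s = volume_simpson(Z, h)   # analytic volume over the plane is pi
```

    Comparing either result against the analytic value isolates the ME and DE contributions the paper discusses, since the synthetic surface has no data error.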

  20. CDMBE: A Case Description Model Based on Evidence

    PubMed Central

    Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing

    2015-01-01

    By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suitable for the continental law system, is proposed to describe criminal cases. The logic of the model adopts credibility logical reasoning and performs evidence-based reasoning quantitatively. To be consistent with practical inference rules, five types of relationship and a set of rules are defined to calculate the credibility of assumptions from the credibility and supportability of the related evidence. Experiments show that the model can capture users' reasoning in a diagram and that the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006
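    As one illustration of rule-based credibility combination, a noisy-OR style rule over (credibility, supportability) pairs could look like the following. This is a plausible stand-in for how such a rule might work, not the five CDMBE relationship types themselves:

```python
def assumption_credibility(evidences):
    # evidences: list of (credibility, supportability) pairs in [0, 1].
    # Noisy-OR combination: the assumption remains unsupported only if
    # every piece of evidence independently fails to support it.
    p_unsupported = 1.0
    for credibility, supportability in evidences:
        p_unsupported *= 1.0 - credibility * supportability
    return 1.0 - p_unsupported

cred = assumption_credibility([(0.9, 0.8), (0.7, 0.5)])
```

    Each additional supporting evidence can only raise the combined credibility, which matches the intuition behind cumulative support.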

  1. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be obtained accurately using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  2. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    NASA Astrophysics Data System (ADS)

    Joachimiak, Damian; Krzyślak, Piotr

    2015-06-01

    This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the calculated region, the size of the mesh, characterized by the parameter y+, has been analyzed and the selection of the turbulence model is described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from the measurements and calculated numerically for a model seal segment at different levels of wear.

  3. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED), and triplet energy differences (TED). In these investigations, we encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  4. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux-trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different stages. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to compute the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed based on this calculation model. As an example, a two-stage flux-trapping generator is simulated using this code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied, for design purposes, to predict the performance of other flux-trapping cascaded flux compression generators with complex structures such as conical stator or conical armature sections.
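    The core of such an equivalent-circuit model is the generator equation d(LI)/dt + RI = 0 with a prescribed, collapsing inductance and a loss coefficient on the compression term. A crude explicit-Euler sketch (the waveform and coefficient values are invented, not the authors' code):

```python
def fcg_current(L, dLdt, R, I0, t_end, dt=1e-7, beta=1.0):
    # Integrate L dI/dt = -(beta * dL/dt + R) * I, where beta <= 1 is
    # a flux-conservation coefficient standing in for intrinsic losses.
    t, I = 0.0, I0
    while t < t_end:
        I += dt * (-(beta * dLdt(t) + R) * I / L(t))
        t += dt
    return I

# Armature run: inductance collapses linearly from 10 uH to 1 uH in 100 us
L0, L1, T = 10e-6, 1e-6, 100e-6
L = lambda t: L0 + (L1 - L0) * min(t, T) / T
dLdt = lambda t: (L1 - L0) / T if t < T else 0.0

I_final = fcg_current(L, dLdt, R=0.0, I0=1.0e3, t_end=T)
# lossless analytic current gain is L0 / L1 = 10
```

    With R > 0 or beta < 1 the gain drops below the ideal L0/L1 ratio, which is how the model captures the measured losses.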

  5. Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC

    NASA Astrophysics Data System (ADS)

    Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina

    2016-11-01

    New state-specific vibrational-translational energy exchange and dissociation models, based on ab initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shock-wave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.

  6. The crustal structure in the transition zone between the western and eastern Barents Sea

    NASA Astrophysics Data System (ADS)

    Shulgin, Alexey; Mjelde, Rolf; Faleide, Jan Inge; Høy, Tore; Flueh, Ernst; Thybo, Hans

    2018-04-01

    We present a crustal-scale seismic profile in the Barents Sea based on new data. Wide-angle seismic data were recorded along a 600 km long profile at 38 ocean bottom seismometer and 52 onshore station locations. The modeling uses the joint refraction/reflection tomography approach where co-located multi-channel seismic reflection data constrain the sedimentary structure. Further, forward gravity modeling is based on the seismic model. We also calculate net regional erosion based on the calculated shallow velocity structure.

  7. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is the calculation of intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including the axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical-optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using computer-scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL, the lower the residual wavefront error. Spherical IOLs are only able to correct the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by the calculation of customized aspheric IOLs.
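    Real ray tracing through the eye's refracting surfaces repeatedly applies the vector form of Snell's law at each interface. A self-contained sketch (the flat interface and refractive indices are illustrative; a full model traces curved corneal and IOL surfaces to the retina):

```python
import numpy as np

def refract(d, n, n1, n2):
    # Vector form of Snell's law: d is the unit incident direction,
    # n the unit surface normal pointing against the incident ray,
    # n1/n2 the refractive indices on each side. Returns None on
    # total internal reflection.
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Ray hitting a flat interface at 0.3 rad incidence, entering a
# cornea-like medium (n = 1.376)
d = np.array([0.0, np.sin(0.3), -np.cos(0.3)])
n = np.array([0.0, 0.0, 1.0])
t = refract(d, n, 1.0, 1.376)  # sin(theta_t) = sin(0.3) / 1.376
```

    Chaining such refractions surface by surface, and optimizing the IOL surface parameters to minimize the residual wavefront error, is the pattern the paper's optimization procedures follow.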

  8. Three-Dimensional Electron Beam Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Shiu, Almon Sowchee

    The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
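    The pencil-beam picture underlying both algorithms can be illustrated by superposing Gaussian pencil kernels into a broad-field lateral profile. The field width and lateral spread below are illustrative assumptions, not commissioned beam data:

```python
import numpy as np

# Toy pencil-beam superposition: a broad-field lateral profile is the sum
# of Gaussian pencil kernels (sigma is an invented spread at one depth).
def pencil_kernel(x, x0, sigma):
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-60.0, 60.0, 241)           # mm, scoring positions
beam_centers = np.arange(-20.0, 20.5, 0.5)  # mm, a 40 mm wide field
sigma = 5.0                                 # mm, lateral spread at this depth
profile = sum(pencil_kernel(x, x0, sigma) for x0 in beam_centers)
profile /= profile.max()                    # normalize to the central value
```

    Inside the field the superposed profile is flat, and it falls to roughly half the central value at the geometric field edge, which is the familiar pencil-beam behaviour.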

  9. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

    The recently updated guidelines for dosimetry in brachytherapy in TG-186 recommend the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is heterogeneous in both density and composition. Dosimetric calculations for seven APBI patients treated with the Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods, and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be modeled in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries; these techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43 based dose distributions and evaluated using dose ratio maps and DVH metrics, with changes in skin and PTV dose highlighted. All simulated heterogeneous models showed reduced doses for the DVH metrics, dependent on the method of dose reporting and patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result. Peak skin dose was also reduced, by 10%-15%, due to the absence of backscatter, which is not accounted for in TG-43; the balloon applicator also contributed to the reduced dose. Other ROIs showed differences depending on the method of dose reporting. TG-186 based calculations produce results that differ from TG-43 for the Axxent source, and the differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, the applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.
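    The D90 values compared above are DVH metrics; for equal-volume voxels, D90 is simply the 10th percentile of the PTV dose sample. A sketch with synthetic doses (not the patient data):

```python
import numpy as np

# D90: dose covering 90% of the PTV volume, from a voxel dose array.
def d_metric(doses, coverage=0.90):
    """Dose received by at least `coverage` of equal-volume voxels."""
    return float(np.percentile(doses, (1.0 - coverage) * 100.0))

rng = np.random.default_rng(0)
ptv_dose = rng.normal(34.0, 2.0, size=10_000)  # Gy, synthetic doses
d90 = d_metric(ptv_dose)
```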

  10. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and the nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slots, slot openings, and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced to link the rotor MEC with the analytical field distribution of the stator slots, slot openings, and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF), and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational cost, and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  11. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
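    The residual-based detection idea can be sketched as follows; the linear model, noise level, injected failure, and threshold are all assumptions for illustration, not the patented system:

```python
import numpy as np

# Residual-based failure detection: compare sensor outputs with a linear
# model's estimates and flag when the residual energy exceeds a threshold.
rng = np.random.default_rng(1)
A = np.array([[0.95, 0.10], [0.0, 0.90]])   # assumed linear state model
x = np.array([1.0, 0.5])
residuals = []
for k in range(200):
    x_pred = A @ x                          # model estimate of next state
    x = A @ x + rng.normal(0.0, 0.01, 2)    # "measured" state, with noise
    if k >= 100:                            # inject a sensor bias failure
        x += np.array([0.05, 0.0])
    residuals.append(np.linalg.norm(x - x_pred))

threshold = 3.0 * np.mean(residuals[:100])  # calibrated on the healthy segment
failure_detected = np.mean(residuals[100:]) > threshold
```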

  12. Mξ, Mαβ, Mγ and Mm X-ray production cross-sections for elements with 71 ⩽ Z ⩽ 92 at 5.96 keV photon energy

    NASA Astrophysics Data System (ADS)

    Sharma, Manju; Sharma, Veena; Kumar, Sanjeev; Puri, S.; Singh, Nirmal

    2006-11-01

    The Mξ, Mαβ, Mγ and Mm X-ray production (XRP) cross-sections have been measured for elements with 71 ⩽ Z ⩽ 92 at 5.96 keV incident photon energy, satisfying EM1 < Einc < EL3, where EM1 (EL3) is the M1 (L3) subshell binding energy. These XRP cross-sections have been calculated using photoionization cross-sections based on the relativistic Dirac-Hartree-Slater (RDHS) model with three sets of X-ray emission rates, fluorescence, Coster-Kronig, and super-Coster-Kronig yields based on (i) the non-relativistic Hartree-Slater (NRHS) potential model, (ii) the RDHS model, and (iii) the relativistic Dirac-Fock (RDF) model. For the third set, the Mi (i = 1-5) subshell fluorescence yields have been calculated using RDF model-based X-ray emission rates and total widths reevaluated to incorporate the RDF model-based radiative widths. The measured cross-sections have been compared with the calculated values to check the applicability of the physical parameters based on the different models.

  13. Evaluating the effect of human activity patterns on air pollution exposure using an integrated field-based and agent-based modelling framework

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Beelen, Rob M. J.; de Bakker, Merijn P.; Karssenberg, Derek

    2015-04-01

    Constructing spatio-temporal numerical models to support risk assessment, such as assessing human exposure to air pollution, often requires the integration of field-based and agent-based modelling approaches. Continuous environmental variables such as air pollution are best represented using the field-based approach, which considers phenomena as continuous fields having attribute values at all locations. When calculating human exposure to such pollutants it is, however, preferable to consider the population as a set of individuals, each with a particular activity pattern. This makes it possible to account for the spatio-temporal variation in a pollutant along the space-time paths travelled by individuals, determined, for example, by home and work locations, the road network, and travel times. Modelling this activity pattern requires an agent-based or individual-based modelling approach. In general, field- and agent-based models are constructed with separate software tools, while the two approaches should interact and preferably be combined into one modelling framework, which would allow efficient and effective implementation of models by domain specialists. To overcome this lack of integrated modelling frameworks, we aim at developing concepts and software for an integrated field-based and agent-based modelling framework. Concepts merging field- and agent-based modelling were implemented by extending PCRaster (http://www.pcraster.eu), a field-based modelling library implemented in C++, with components for 1) representation of discrete, mobile agents, 2) spatial networks and algorithms, by integrating the NetworkX library (http://networkx.github.io), allowing the calculation of e.g. shortest routes or total transport costs between locations, and 3) functions for field-network interactions, allowing field-based attribute values, such as aggregated or averaged concentration values, to be assigned to networks (i.e. as edge weights). We demonstrate the approach using six land use regression (LUR) models developed in the ESCAPE (European Study of Cohorts for Air Pollution Effects) project. These models calculate several air pollutants (e.g. NO2, NOx, PM2.5) for the entire Netherlands at a high (5 m) resolution. Using these air pollution maps, we compare exposure of individuals calculated at the x, y location of their home, at their work place, and aggregated over the close surroundings of these locations. In addition, total exposure is accumulated over daily activity patterns, summing exposure at home, at the work place, and while travelling between home and work place, by routing individuals over the Dutch road network using the shortest route. Finally, we illustrate how routes can be calculated with the minimum total exposure (instead of the shortest distance).
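    The closing idea, minimum-exposure routing, is ordinary shortest-path search with exposure rather than distance as the edge weight (the study uses NetworkX for this; below is a dependency-free sketch on an invented toy graph):

```python
import heapq

# Dijkstra over edges weighted by exposure (concentration x travel time)
# instead of distance; graph and values are illustrative.
def dijkstra(graph, src, dst):
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

exposure_graph = {                 # edges: (neighbor, exposure along edge)
    "home": [("A", 5.0), ("B", 1.0)],
    "A":    [("work", 1.0)],
    "B":    [("work", 2.0)],
    "work": [],
}
route, total = dijkstra(exposure_graph, "home", "work")
```

    Swapping the edge weights from road length to exposure along the edge is the only change needed to go from shortest routes to least-exposed routes.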

  14. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between AAA and MC calculated out-of-field dose profiles depends on the depth and is generally less than 1% for the in-water phantom comparisons and for the CT-based patient dose calculations for static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.

  15. Modeling the long-term evolution of space debris

    DOEpatents

    Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.

    2017-03-07

    A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects by fitting a curve to identify a time of closest approach between the simulation times and calculating the positions of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
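    The curve-fitting step can be sketched with a quadratic fit to distances sampled at the simulation times; the times and distances below are synthetic:

```python
import numpy as np

# Fit a parabola to sampled inter-object distances around a conjunction
# and solve for the time of closest approach (synthetic data).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # simulation times
d = (t - 2.3) ** 2 + 0.5                  # synthetic distance samples
a, b, c = np.polyfit(t, d, 2)             # d(t) ~ a t^2 + b t + c
t_min = -b / (2.0 * a)                    # vertex of the parabola
d_min = np.polyval([a, b, c], t_min)
# t_min falls between simulation times; both objects would then be
# propagated to t_min for the actual collision check
```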

  16. Windward Cooling: An Overlooked Factor in the Calculation of Wind Chill.

    NASA Astrophysics Data System (ADS)

    Osczevski, Randall J.

    2000-12-01

    Wind chill equivalent temperatures calculated from a recent vertical cylinder model of wind chill are several degrees colder than those calculated from a facial cooling model. The latter was based on experiments with a heated model of a face in a wind tunnel. Wind chill has sometimes been modeled as the overall heat transfer from the surface of a cylinder in cross flow, but such models average the cooling over the whole surface and thus minimize the effect of local cooling on the upwind side, particularly at low wind speeds. In this paper, a vertical cylinder model of wind chill has been modified so that just the cooling of its windward side is considered. Wind chill equivalent temperatures calculated with this new model compare favorably with those calculated by the facial cooling model.

  17. The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.

    ERIC Educational Resources Information Center

    Duke, B. J.; O'Leary, Brian

    1988-01-01

    Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)

  18. A quantum wave based compact modeling approach for the current in ultra-short DG MOSFETs suitable for rapid multi-scale simulations

    NASA Astrophysics Data System (ADS)

    Hosenfeld, Fabian; Horst, Fabian; Iñíguez, Benjamín; Lime, François; Kloes, Alexander

    2017-11-01

    Source-to-drain (SD) tunneling degrades device performance in MOSFETs with channel lengths below 10 nm. Modeling quantum mechanical effects, including SD tunneling, has therefore gained importance, especially for compact model developers. The non-equilibrium Green's function (NEGF) formalism has become a state-of-the-art method for nano-scale device simulation in recent years. In the sense of a multi-scale simulation approach, it is necessary to bridge the gap between compact models, with their fast and efficient calculation of the device current, and numerical device models, which consider quantum effects in nano-scaled devices. In this work, an NEGF based analytical model for nano-scaled double-gate (DG) MOSFETs is introduced. The model consists of a closed-form potential solution from a classical compact model and a 1D NEGF formalism for calculating the device current, taking quantum mechanical effects into account. The potential calculation omits the iterative coupling and allows straightforward current calculation. The model is based on a ballistic NEGF approach, whereby backscattering is considered as a second-order effect in closed form. The accuracy and scalability of the non-iterative DG MOSFET model are inspected in comparison with numerical NanoMOS TCAD data for various channel lengths. With the help of this model, investigations of short-channel and temperature effects are performed.

  19. Measurement-based model of a wide-bore CT scanner for Monte Carlo dosimetric calculations with GMCTdospp software.

    PubMed

    Skrzyński, Witold

    2014-11-01

    The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer (HVL) and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured HVL values. Doses calculated with GMCTdospp differed from doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root mean square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM 78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
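    The spectrum-to-HVL matching can be illustrated by solving for the aluminum thickness that halves the spectrum-weighted transmitted intensity. The spectrum and attenuation coefficients below are rough assumptions, not the scanner's commissioning data:

```python
import numpy as np

# Find the Al thickness that halves the transmitted intensity of an
# assumed spectrum: a toy version of matching a model to a measured HVL.
energies = np.array([40.0, 60.0, 80.0, 100.0])   # keV bins (illustrative)
fluence  = np.array([0.2, 0.4, 0.3, 0.1])        # relative spectrum
mu_al    = np.array([1.53, 0.75, 0.54, 0.46])    # 1/cm, assumed values

def transmitted(thickness_cm):
    return float(np.sum(fluence * np.exp(-mu_al * thickness_cm)))

lo, hi = 0.0, 10.0
target = 0.5 * transmitted(0.0)
for _ in range(60):                              # bisection on thickness
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if transmitted(mid) > target else (lo, mid)
hvl_cm = 0.5 * (lo + hi)
```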

  20. Modeling Anaerobic Soil Organic Carbon Decomposition in Arctic Polygon Tundra: Insights into Soil Geochemical Influences on Carbon Mineralization: Modeling Archive

    DOE Data Explorer

    Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David

    2018-06-13

    This anaerobic carbon decomposition model is developed with explicit representation of fermentation, methanogenesis, and iron reduction by combining three well-known modeling approaches developed in different disciplines. A pool-based model to represent upstream carbon transformations and replenishment of the DOC pool, a thermodynamically based model to calculate rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous-phase speciation and pH calculation are implemented in the open-source geochemical model PHREEQC (v3.0). Installation of PHREEQC is required to run this model.

  1. Modeling of the water clarification process and calculation of the reactor-clarifier to improve energy efficiency

    NASA Astrophysics Data System (ADS)

    Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy

    2017-10-01

    The article considers current questions in the technological modeling and calculation of a new facility for cleaning natural waters, the clarifier reactor, for its optimal operating mode; the facility was developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known hydraulic relationships is presented, and a calculation example for a structure using experimental data is considered. The maximum possible rate of the ascending flow of purified water was determined based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined with minimal expansion of the contact mass layer, which ensured the elimination of stagnant zones. The clarification cycle duration was refined from the parameters of the technological modeling by recalculating the maximum possible upward flow rate of clarified water, and the thickness of the contact mass layer was determined. Clarification reactors can likewise be calculated for any other clarification conditions.

  2. Quantification of root water uptake in soil using X-ray computed tomography and image-based modelling.

    PubMed

    Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina

    2018-01-01

    Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.

  3. Modeling of the radiation belt magnetosphere in decisional timeframes

    DOEpatents

    Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.

    2013-04-23

    Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as a physics-based model, at many times the speed, by developing a surrogate model trained to reproduce the physics-based model. The trained model can then beneficially process input data falling within its training range. The surrogate model can be a feedforward neural network, and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles; accordingly, a surrogate model having a closed drift shell can be used from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden-layer units, and at least one output unit for the value of L*.
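    The claimed feedforward surrogate can be sketched in a few lines of numpy; the weights here are random stand-ins for weights that would be trained against the physics-based model, and the parameter count is an assumption:

```python
import numpy as np

# Feedforward surrogate sketch: input units for the model parameters,
# one tanh hidden layer, a single output unit for L*.
rng = np.random.default_rng(42)
n_in, n_hidden = 6, 10                     # e.g. 6 driver parameters (assumed)
W1, b1 = rng.normal(size=(n_hidden, n_in)), rng.normal(size=n_hidden)
W2, b2 = rng.normal(size=n_hidden), rng.normal()

def surrogate_lstar(params):
    h = np.tanh(W1 @ params + b1)          # hidden layer
    return float(W2 @ h + b2)              # single output unit: L*

lstar = surrogate_lstar(rng.normal(size=n_in))
```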

  4. Activity-based costing: a practical model for cost calculation in radiotherapy.

    PubMed

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
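    The time-weighted allocation principle can be sketched directly; the cost and treatment figures below are illustrative, not the Leuven data:

```python
# Allocate an activity's resource cost over treatments in proportion to
# time consumed, weighted by a complexity factor (all numbers invented).
activity_cost = 100_000.0                       # e.g. yearly delivery cost
treatments = {
    "standard": {"minutes": 10, "complexity": 1.0, "sessions": 30},
    "IMRT":     {"minutes": 20, "complexity": 1.5, "sessions": 33},
}
weights = {k: t["minutes"] * t["complexity"] * t["sessions"]
           for k, t in treatments.items()}
total_w = sum(weights.values())
cost_per_treatment = {k: activity_cost * w / total_w
                      for k, w in weights.items()}
# prolonged, complex treatments absorb the larger share of the cost
```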

  5. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.

  6. [Parameters modification and evaluation of two evapotranspiration models based on Penman-Monteith model for summer maize].

    PubMed

    Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing

    2017-06-18

    The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model) based on the Penman-Monteith model was analyzed. First, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; second, the daily ET in 2015 calculated by the FAO-PM model and by the KP-PM model was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised with coefficients calculated for the different growth stages, and the performance of the revised KP-PM model was also evaluated. Statistical comparisons indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. These results provide guidelines for predicting ET with the two models.
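    The FAO-PM side rests on the FAO-56 Penman-Monteith reference evapotranspiration equation, which can be written down directly; the daily weather inputs below are invented for illustration:

```python
import math

# FAO-56 Penman-Monteith reference evapotranspiration (ET0), daily step.
def fao56_et0(rn, g, t, u2, es, ea, z=0.0):
    """ET0 [mm/day]: net radiation rn and soil heat flux g [MJ m-2 d-1],
    air temperature t [C], 2 m wind u2 [m/s], saturation and actual
    vapour pressures es, ea [kPa], elevation z [m]."""
    p = 101.3 * ((293.0 - 0.0065 * z) / 293.0) ** 5.26   # pressure, kPa
    gamma = 0.000665 * p                                 # psychrometric const.
    delta = 4098.0 * (0.6108 * math.exp(17.27 * t / (t + 237.3))) / (t + 237.3) ** 2
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# invented summer day: sunny, 25 C, light wind, moderately dry air
et0 = fao56_et0(rn=15.0, g=0.3, t=25.0, u2=2.0, es=3.17, ea=2.0)
```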

  7. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Previous to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic, but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than the IAC model.

  8. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on full-scale modeling experiments. Use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. In addition, much attention is given to calculation accuracy: errors arise when the distance between the charges is chosen incorrectly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculations, and examples of this approach are given. The article presents the results of this research, which allow the authors to recommend the use of this approach in the method of fundamental solutions for full-scale modeling tests of technical systems.

  9. A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes

    2016-06-01

    The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). Here we present four new radiation-related submodels: RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates radiative transfer, the submodel AEROPT calculates aerosol optical properties, the submodel CLOUDOPT calculates cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, but expanded with additional features. These features comprise, among others, user-friendly online radiative forcing calculations, flexibly controllable by namelists, through multiple diagnostic calls of the radiation routines. With this it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations in the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.

  10. Theoretical Modeling of Hydrogen Bonding in Biomolecular Solutions: The Combination of Quantum Mechanics and Molecular Mechanics

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Jiang, Nan; Li, Hui

    Hydrogen bonding interactions play an important role in solutions. The non-classical nature of hydrogen bonding requires resource-demanding quantum mechanical (QM) calculations. The molecular mechanics (MM) method, with a much lower computational load, is applicable to large systems. The combination of QM and MM is therefore an efficient way to treat solutions. Taking advantage of the low-cost energy-based fragmentation QM approach (in which the biomolecule is divided into several subsystems, and a QM calculation is carried out on each subsystem embedded in the background charges of the distant parts), fragmentation-based QM/MM and polarization models have been implemented for the modeling of biomolecules in aqueous solution. Within the framework of the fragmentation-based QM/MM hybrid model, the solute is treated by the fragmentation QM calculation while the numerous solvent molecules are described by MM. In the polarization model, polarizability is accounted for by allowing the partial charges and fragment-centered dipole moments to be variables, with values coming from the energy-based fragmentation QM calculations. Applications of these two methods to solvated long oligomers and cyclic peptides have demonstrated that hydrogen bonding interactions affect the dynamic change in the chain conformations of the backbone.

  11. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. These accelerations were calculated using a piecewise linear engine model and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with minimal expenditure of computer time.

  12. Estimation of Critical Gap Based on Raff's Definition

    PubMed Central

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, vehicle arrivals in the major stream and in the minor stream are assumed to be independent, and the headways of the major stream follow the M3 distribution. Based on Raff's definition of critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both use the total rejected coefficient. The models are compared by simulation and found to be valid. The M3 definition model is simple and valid. The revised Raff's model strictly obeys the definition of Raff's critical gap, has a wider field of application than the original Raff's model, and yields more accurate results. The M3 definition model and the revised Raff's model give consistent results. PMID:25574160
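
    Raff's definition itself is easy to operationalize: the critical gap is the gap length at which the cumulative share of accepted gaps equals the share of rejected gaps that exceed it. A minimal sketch using plain empirical distributions (not the paper's M3 definition model or revised model):

```python
import numpy as np

def raff_critical_gap(accepted, rejected, grid=None):
    """Raff's critical gap: the gap t at which the cumulative fraction of
    accepted gaps <= t equals the fraction of rejected gaps > t (the
    crossing point of the two empirical curves), found on a grid."""
    accepted = np.sort(np.asarray(accepted, float))
    rejected = np.sort(np.asarray(rejected, float))
    if grid is None:
        grid = np.linspace(0.0, max(accepted.max(), rejected.max()), 2001)
    F_acc = np.searchsorted(accepted, grid, side="right") / accepted.size
    S_rej = 1.0 - np.searchsorted(rejected, grid, side="right") / rejected.size
    return grid[np.argmin(np.abs(F_acc - S_rej))]
```

    With accepted gaps spread uniformly over 3-7 s and rejected gaps over 1-5 s, the two curves cross at 4 s, which the function recovers.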

  13. Estimation of critical gap based on Raff's definition.

    PubMed

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, vehicle arrivals in the major stream and in the minor stream are assumed to be independent, and the headways of the major stream follow the M3 distribution. Based on Raff's definition of critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both use the total rejected coefficient. The models are compared by simulation and found to be valid. The M3 definition model is simple and valid. The revised Raff's model strictly obeys the definition of Raff's critical gap, has a wider field of application than the original Raff's model, and yields more accurate results. The M3 definition model and the revised Raff's model give consistent results.

  14. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in such environments, making it difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions, and 3) calculation schemes. A description model file covers the first point, and partly the second, but the third is difficult to handle, because various calculation schemes are required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example coupling calculation scheme with three elementary models is shown.

  15. Model Calculations with Excited Nuclear Fragmentations and Implications of Current GCR Spectra

    NASA Astrophysics Data System (ADS)

    Saganti, Premkumar

    As a result of the fragmentation process in nuclei, energy from the excited states may also contribute to radiation damage to cell structure. Radiation-induced damage to the human body from the excited states of oxygen and several other nuclei and their fragments is of concern in the context of the measured abundances of the current galactic cosmic ray (GCR) environment. Nuclear shell model based calculations with the Selective-Core (Saganti-Cucinotta) approach are being expanded; results for the fragmentation of O-16 into N-15 with a proton knockout and into O-15 with a neutron knockout are very promising. In our ongoing expansion of these nuclear fragmentation model calculations and assessments, we present some of the prominent nuclear interactions from a total of 190 isotopes identified for the current model expansion based on the Quantum Multiple Scattering Fragmentation Model (QMSFRG) of Cucinotta. Radiation transport model calculations implementing these energy-level spectral characteristics are expected to enhance the understanding of radiation damage at the cellular level. Implications of these excited-energy spectral calculations for the assessment of radiation damage to the human body may provide an enhanced understanding of space radiation risk assessment.

  16. VHDL-AMS modelling and simulation of a planar electrostatic micromotor

    NASA Astrophysics Data System (ADS)

    Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.

    2003-09-01

    System level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values of the order of 10^-16 F and torque values of the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations for this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient from values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results for the starting voltage, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for the power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.
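
    The quasi-static torque of a variable-capacitance motor follows from the energy method: T = 0.5 V^2 dC/dtheta for a voltage-driven device. The sketch below differentiates a hypothetical sinusoidal capacitance profile of order 10^-16 F; the profile shape and drive voltage are illustrative assumptions, not the LAAS motor's measured values.

```python
import numpy as np

def electrostatic_torque(C_of_theta, theta, V):
    """Quasi-static torque of a variable-capacitance motor under constant
    drive voltage V: T(theta) = 0.5 * V**2 * dC/dtheta (energy method)."""
    C = C_of_theta(theta)
    dC_dtheta = np.gradient(C, theta)   # numerical derivative of C(theta)
    return 0.5 * V ** 2 * dC_dtheta

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
# Hypothetical capacitance profile with three electrode poles, ~1e-16 F.
C = lambda th: 1e-16 * (1.0 + 0.5 * np.sin(3.0 * th))
T = electrostatic_torque(C, theta, V=100.0)
```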

  17. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its parameters, including vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared with independent direct measurements made with an aerodynamic force platform for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much-needed meta-studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
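
    The three quasi-steady estimates compared in the study can be written down in a few lines. These are the textbook forms with illustrative inputs, not the paper's parameter choices:

```python
RHO = 1.225  # air density, kg/m^3

def lift_kutta_joukowski(gamma, span, U):
    """Kutta-Joukowski: L = rho * Gamma * b * U for circulation Gamma,
    vortex span b, and effective flow speed U."""
    return RHO * gamma * span * U

def lift_vortex_ring(gamma, ring_area, wingbeat_period):
    """Vortex-ring model: one ring of circulation Gamma and area A is shed
    per wingbeat; average weight support = ring impulse / wingbeat period."""
    impulse = RHO * gamma * ring_area
    return impulse / wingbeat_period

def lift_actuator_disk(induced_velocity, disk_area):
    """Actuator-disk (momentum-jet) estimate: L = 2 * rho * A * v_i**2
    for induced velocity v_i through disk area A."""
    return 2.0 * RHO * disk_area * induced_velocity ** 2
```

    The sensitivity reported in the abstract shows up directly here: each model takes a different wake parameter (circulation and span, ring area and period, or induced velocity and disk area), so measurement choices for those parameters drive the disagreement.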

  18. Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis

    NASA Astrophysics Data System (ADS)

    Mills, D. A.

    2017-10-01

    In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment offering various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated approach, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to reflect vaccination capabilities accurately. Recent studies suggest that, despite preventing major symptoms, the acellular pertussis vaccine does not prevent the colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the pertussis bacteria rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.

  19. Diffuse sorption modeling.

    PubMed

    Pivovarov, Sergey

    2009-04-01

    This work presents a simple solution of the diffuse double layer model, applicable to the calculation of surface speciation as well as to the simulation of ionic adsorption within the diffuse layer of solution in arbitrary salt media. Based on the Poisson-Boltzmann equation, the Gaines-Thomas selectivity coefficient for uni-bivalent exchange on clay, K_GT(Me^2+/M^+) = (Q_Me^0.5/Q_M) {M^+}/{Me^2+}^0.5 (Q is the equivalent fraction of the cation in the exchange capacity, and {M^+} and {Me^2+} are the ionic activities in solution), may be calculated as [surface charge, μeq/m^2]/0.61. The obtained solution of the Poisson-Boltzmann equation was applied to the calculation of ionic exchange on clays and to the simulation of the surface charge of ferrihydrite in 0.01-6 M NaCl solutions. In addition, a new model of acid-base properties was developed. This model is based on the assumption that the net proton charge is not located on the mathematical surface plane but is diffusely distributed within the subsurface layer of the lattice. It is shown that the obtained solution of the Poisson-Boltzmann equation makes such calculations possible, and that this approach is more efficient than the original diffuse double layer model.
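
    The quoted closed-form result can be applied directly. A minimal sketch, taking the abstract's expression at face value (surface charge in μeq/m^2, activities as dimensionless ionic activities); the example charge value is an illustrative assumption:

```python
def gaines_thomas_selectivity(surface_charge_ueq_per_m2):
    """K_GT(Me2+/M+) from the diffuse-layer result quoted in the abstract:
    K_GT = (surface charge in ueq/m^2) / 0.61."""
    return surface_charge_ueq_per_m2 / 0.61

def equivalent_fraction_ratio(K_GT, act_M, act_Me):
    """Rearranged definition K_GT = (Q_Me^0.5 / Q_M) * {M+} / {Me2+}^0.5:
    returns the composition ratio Q_Me^0.5 / Q_M for given activities."""
    return K_GT * act_Me ** 0.5 / act_M

# Hypothetical clay with a surface charge of 1.22 ueq/m^2:
K = gaines_thomas_selectivity(1.22)          # K_GT = 2.0
ratio = equivalent_fraction_ratio(K, act_M=0.1, act_Me=0.01)
```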

  20. GIAO-DFT calculation of 15N NMR chemical shifts of Schiff bases: Accuracy factors and protonation effects.

    PubMed

    Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B

    2018-02-09

    15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: the classical Tomasi polarizable continuum model (PCM), and PCM combined with the explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). The best results are achieved with the PCM and SM + PCM models, giving mean absolute errors of the calculated 15N NMR chemical shifts over the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, compared with 15.2 ppm in the gas phase, for a range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.

  1. The modeler's influence on calculated solubilities for performance assessments at the Aspo Hard-rock Laboratory

    USGS Publications Warehouse

    Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH, and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.

  2. Comparisons of calculated respiratory tract deposition of particles based on the NCRP/ITRI model and the new ICRP66 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Hsu-Chi; Phalen, R.F.; Chang, I.

    1995-12-01

    The National Council on Radiation Protection and Measurements (NCRP) in the United States and the International Commission on Radiological Protection (ICRP) have been independently reviewing and revising respiratory tract dosimetry models for inhaled radioactive aerosols. The newly proposed NCRP respiratory tract dosimetry model represents a significant change in philosophy from the old ICRP Task Group model. The proposed NCRP model describes respiratory tract deposition, clearance, and dosimetry for radioactive substances inhaled by workers and the general public and is expected to be published soon. In support of the proposed NCRP model, ITRI staff members have been developing computer software. Although this software is still incomplete, the deposition portion has been completed and can be used to calculate inhaled particle deposition within the respiratory tract for particle sizes from as small as radon and radon progeny (~1 nm) to particles larger than 100 μm. Recently, ICRP published its new dosimetric model for the respiratory tract, ICRP66. Based on ICRP66, the National Radiological Protection Board of the UK developed PC-based software, LUDEP, for calculating particle deposition and internal doses. The purpose of this report is to compare the calculated respiratory tract deposition of particles using the NCRP/ITRI model and the ICRP66 model, under the same particle size distribution and breathing conditions. In summary, the general trends of the deposition curves for the two models were similar.

  3. A colored petri nets based workload evaluation model and its validation through Multi-Attribute Task Battery-II.

    PubMed

    Wang, Peng; Fang, Weining; Guo, Beiyuan

    2017-04-01

    This paper proposes a colored Petri net based workload evaluation model. A formal interpretation of workload is first introduced, based on the mapping of Petri net components to tasks. A Petri net based description of Multiple Resources theory is given by approaching it from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to build a model of the task process. The calculation of workload has four main steps: determine each token's initial position and values; calculate the weights of directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out based on the Multi-Attribute Task Battery-II software. Our results show a strong correlation between the model values and NASA Task Load Index scores (r = 0.9513). In addition, the method can also distinguish behavioral characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

    Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, which are a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling of the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum. This calculation is in good agreement with the available observations.

  5. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs so as to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
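
    The rescaling idea, independent of the GPU implementation, can be sketched on the CPU: store the path lengths of detected photons from a single absorption-free ("white") Monte Carlo run, then re-weight them by Beer-Lambert attenuation for any new absorption coefficient. The exponential path-length distribution below is a synthetic stand-in for real simulation output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "single run": path lengths (cm) of detected photons from a
# simulation with absorption turned off (white Monte Carlo).
path_lengths = rng.exponential(scale=1.0, size=100_000)

def rescale_reflectance(mu_a):
    """Re-weight the stored paths for absorption coefficient mu_a (1/cm):
    each detected photon carries weight exp(-mu_a * L) by Beer-Lambert, so
    one stored run yields reflectance at any mu_a without re-simulation."""
    return np.exp(-mu_a * path_lengths).mean()

# Reflectance at several absorption values from the same stored run:
values = [rescale_reflectance(m) for m in (0.0, 0.1, 1.0)]
```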

  6. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
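
    The simulation-based power logic can be sketched with a deliberately simplified stand-in: a normal linear model with a binary covariate, tested with the likelihood-ratio statistic against the chi-square cutoff, in place of a NONMEM pharmacokinetic model. All settings below (effect size, residual scale, simulation count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
CHI2_95_DF1 = 3.841  # 95% chi-square critical value, 1 degree of freedom

def simulated_power(n, effect, sigma=1.0, n_sim=500):
    """Estimate the power to detect a binary covariate effect by simulating
    datasets, fitting models with and without the covariate, and counting
    how often the likelihood-ratio test rejects at the 5% level."""
    hits = 0
    for _ in range(n_sim):
        x = rng.integers(0, 2, n)                  # binary covariate
        y = effect * x + rng.normal(0.0, sigma, n)
        # Full model: separate group means; reduced model: grand mean.
        fitted = np.where(x == 1, y[x == 1].mean(), y[x == 0].mean())
        rss_full = np.sum((y - fitted) ** 2)
        rss_reduced = np.sum((y - y.mean()) ** 2)
        lrt = n * np.log(rss_reduced / rss_full)   # LRT stat, normal errors
        hits += lrt > CHI2_95_DF1
    return hits / n_sim
```

    Sweeping `n` until the estimated power exceeds 80% mirrors the sample-size workflow described in the abstract, with the PK model swapped for this toy regression.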

  7. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions

    NASA Astrophysics Data System (ADS)

    Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong

    2018-06-01

    In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified against the measured shell thickness and slab surface temperature, and the 3D bulging model was verified against maximum bulging deflections calculated using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.
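
    For one transverse slice, the heat-transfer part of such a model reduces to transient conduction. A minimal explicit finite-difference sketch with hypothetical property values and a fixed surface temperature (not the paper's measured water-flux boundary conditions):

```python
import numpy as np

def cool_slab_1d(n_nodes=21, dx=0.005, alpha=5e-6, dt=1.0, steps=2000,
                 T_init=1500.0, T_surf=1000.0):
    """Explicit finite differences for 1D transient conduction through half
    the slab thickness: the surface node is held at a (hypothetical) spray
    temperature, with a symmetry condition at the centerline. Units: m, s,
    degrees C; alpha is the thermal diffusivity in m^2/s."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    T = np.full(n_nodes, T_init)
    T[0] = T_surf                     # cooled surface (Dirichlet)
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        Tn[-1] = Tn[-2]               # zero-flux symmetry at the centerline
        T = Tn
    return T
```

    Uneven cooling, as in the paper, would correspond to giving different surface boundary conditions to different slices and feeding the resulting shell profile into the bulging model.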

  8. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions

    NASA Astrophysics Data System (ADS)

    Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong

    2018-02-01

    In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified against the measured shell thickness and slab surface temperature, and the 3D bulging model was verified against maximum bulging deflections calculated using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.

  9. Stability and mobility of Cu-vacancy clusters in Fe-Cu alloys: A computational study based on the use of artificial neural networks for energy barrier calculations

    NASA Astrophysics Data System (ADS)

    Pascuet, M. I.; Castin, N.; Becquart, C. S.; Malerba, L.

    2011-05-01

    An atomistic kinetic Monte Carlo (AKMC) method has been applied to study the stability and mobility of copper-vacancy clusters in Fe. This information, which cannot be obtained directly from experimental measurements, is needed to parameterise models describing the nanostructure evolution under irradiation of Fe alloys (e.g. model alloys for reactor pressure vessel steels). The physical reliability of the AKMC method has been improved by employing artificial intelligence techniques for the regression of the activation energies required by the model as input. These energies are calculated allowing for the effects of local chemistry and relaxation, using the nudged-elastic-band method with an interatomic potential fitted to reproduce them as accurately as possible. The model validation was based on comparison with available ab initio calculations for verification of the cohesive model used, as well as with other models and theories.

  10. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh, and the other, based on a parametric surface model, is more popular in practice. The outlines and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation is conducted. Numerical results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated with regard to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric surface based method benefits more from the powerful design tools in computer-aided design (CAD) software, but the triangular mesh based method is more robust and efficient.

  11. Atomistic full-quantum transport model for zigzag graphene nanoribbon-based structures: Complex energy-band method

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Nan; Luo, Win-Jet; Shyu, Feng-Lin; Chung, Hsien-Ching; Lin, Chiun-Yan; Wu, Jhao-Ying

    2018-01-01

    Using a non-equilibrium Green’s function framework in combination with the complex energy-band method, an atomistic full-quantum model for solving quantum transport problems for a zigzag-edge graphene nanoribbon (zGNR) structure is proposed. For transport calculations, the mathematical expressions from the theory for zGNR-based device structures are derived in detail. The transport properties of zGNR-based devices are calculated and studied in detail using the proposed method.

  12. Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts

    PubMed Central

    Wang, Xianlong; Wang, Chengfei; Zhao, Hui

    2012-01-01

Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, structures determined by single-crystal X-ray diffraction were used to build the isolated-molecule models for calculating the chemical shifts. The results were compared with those obtained using geometries calculated at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels show strong linear correlations even though the absolute values differ by tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate for calculating the chemical shifts, whereas obtaining an accurate molecular geometry is the more critical step. PMID:23203134

  13. Beta decay rates of neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel

    2015-10-01

Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on nuclear structure models for input. Of all the properties, beta-decay half-lives are among the most important due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available, based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, in which both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.

  14. Effect of wave function on the proton induced L XRP cross sections for 62Sm and 74W

    NASA Astrophysics Data System (ADS)

    Shehla, Kaur, Rajnish; Kumar, Anil; Puri, Sanjiv

    2015-08-01

The Lk (k = 1, α, β, γ) X-ray production cross sections have been calculated for 74W and 62Sm at incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.

  15. Calcul numérique des ondes de surface par une méthode de projection avec un maillage eulérien adaptatif

    NASA Astrophysics Data System (ADS)

    Guillou, Sylvain; Barbry, Nathaly; Nguyen, Kim Dan

A non-hydrostatic, vertical two-dimensional numerical model is proposed for calculating free-surface flows. The model is based on solving the full Navier-Stokes equations by a finite-difference method coupled with Chorin's projection method. An adaptive Eulerian grid in the sigma-coordinate system is used. The model permits the calculation of surface waves in estuarine and coastal zones. A benchmark test of soliton propagation is carried out to validate the model.

  16. Calculations of turbulent separated flows

    NASA Technical Reports Server (NTRS)

    Zhu, J.; Shih, T. H.

    1993-01-01

    A numerical study of incompressible turbulent separated flows is carried out by using two-equation turbulence models of the K-epsilon type. On the basis of realizability analysis, a new formulation of the eddy-viscosity is proposed which ensures the positiveness of turbulent normal stresses - a realizability condition that most existing two-equation turbulence models are unable to satisfy. The present model is applied to calculate two backward-facing step flows. Calculations with the standard K-epsilon model and a recently developed RNG-based K-epsilon model are also made for comparison. The calculations are performed with a finite-volume method. A second-order accurate differencing scheme and sufficiently fine grids are used to ensure the numerical accuracy of solutions. The calculated results are compared with the experimental data for both mean and turbulent quantities. The comparison shows that the present model performs quite well for separated flows.

  17. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may use fuel economy data from tests conducted on these vehicle configuration(s) at high altitude to...) Calculate the city, highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests...

  18. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...

  19. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...

  20. The reduced transition probabilities for excited states of rare-earths and actinide even-even nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghumman, S. S.

The theoretical B(E2) ratios have been calculated using the DF, DR and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities in recent data, are found to compare better with those calculated using the Krutov model and the SU(3) limit of the IBA than with the DR and DF models.

  1. Modeling of Pressure Drop During Refrigerant Condensation in Pipe Minichannels

    NASA Astrophysics Data System (ADS)

    Sikora, Małgorzata; Bohdal, Tadeusz

    2017-12-01

Investigation of refrigerant condensation in pipe minichannels is a very challenging and complicated issue. Because of the multitude of influencing factors, mathematical and computer modeling is very important: it allows calculations to be performed for many different refrigerants under different flow conditions. The large number of experimental results published in the literature allows experimental verification of the correctness of such models. This work presents a mathematical model for calculating flow resistance during condensation of refrigerants in a pipe minichannel. The model is based on conservation equations. The results of the calculations were verified against the authors' own experimental results.

  2. A modeling approach to account for toxicokinetic interactions in the calculation of biological hazard index for chemical mixtures.

    PubMed

    Haddad, S; Tardif, R; Viau, C; Krishnan, K

    1999-09-05

Biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation, at the present time, is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e. BHI > 1) can be determined by iterative simulation.
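The additive form of the index can be sketched as follows. The biomarker concentrations and BEI limits below are illustrative placeholders, not values from the study; the interactions-based variant would simply feed PBTK-simulated concentrations into the same sum:

```python
# Hedged sketch of the conventional additive BHI. All numeric values are
# hypothetical; the paper's interactions-based BHI replaces `simulated`
# with concentrations predicted by a PBTK model of the mixture.

def hazard_index(concentrations, limits):
    """BHI = sum over components of (biomarker concentration / its BEI limit)."""
    return sum(c / bei for c, bei in zip(concentrations, limits))

# Hypothetical venous-blood biomarker levels (mg/L) and BEI limits (mg/L)
# for a toluene / m-xylene / ethylbenzene mixture:
simulated = [0.4, 1.0, 0.6]
bei_limits = [1.0, 1.5, 1.5]

bhi = hazard_index(simulated, bei_limits)
print(f"BHI = {bhi:.2f}")  # BHI > 1 would flag the mixture exposure
```

A BHI above 1 indicates the combined biological levels exceed the tolerable limit, mirroring the conventional hazard index criterion.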

  3. Infiltration modeling guidelines for commercial building energy analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.

This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of the infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.

  4. TK Modeler version 1.0, a Microsoft® Excel®-based modeling software for the prediction of diurnal blood/plasma concentration for toxicokinetic use.

    PubMed

    McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A

    2012-07-01

    TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
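The superposition principle the abstract relies on can be sketched for a one-compartment bolus model. The dose amounts, volume of distribution and half-life below are hypothetical and not taken from TK Modeler:

```python
# Minimal sketch of one-compartment PK superposition: each bolus dose decays
# exponentially, and the blood concentration is the sum of all past doses.
# All parameter values are illustrative, not from TK Modeler.
import math

def one_compartment_conc(t, doses, V, k):
    """Concentration at time t (h) from bolus doses via superposition.

    doses: list of (dose_time_h, dose_amount_mg)
    V: volume of distribution (L); k: first-order elimination rate (1/h)
    """
    return sum((amt / V) * math.exp(-k * (t - t0))
               for t0, amt in doses if t >= t0)

# Hypothetical regimen: 100 mg at 0 h and 12 h; V = 10 L, half-life 6 h
k = math.log(2) / 6.0
doses = [(0.0, 100.0), (12.0, 100.0)]
c24 = one_compartment_conc(24.0, doses, V=10.0, k=k)  # trough at 24 h
```

Evaluating the profile on a fine time grid in this way is what allows a tool like TK Modeler to locate C(max) and choose sparse-sampling times for AUC(24h).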

  5. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the locus segments are then calculated with the minimum-traveltime tree algorithm for tracing rays by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  6. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    NASA Astrophysics Data System (ADS)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. Firstly, the 3D model is placed in a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views, and the shape feature sequence is learned into a BN model using a BN learning algorithm. Secondly, we propose a new 3D model retrieval method that calculates the Kullback-Leibler divergence (KLD) between BNLDs. Benefiting from the statistical learning, our BNLD is more robust to noise than the existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.
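The retrieval step ranks models by KLD between descriptors. For discrete distributions the divergence has a simple closed form; the sketch below assumes plain probability vectors rather than the paper's learned BN structures:

```python
# Hedged sketch of the KLD ranking step. The paper computes KLD between
# learned Bayesian-network descriptors; here we use plain discrete
# distributions to illustrate the same dissimilarity measure.
import math

def kl_divergence(p, q):
    """KLD(P || Q) for discrete distributions.

    Assumes q[i] > 0 wherever p[i] > 0 (absolute continuity); terms with
    p[i] == 0 contribute zero by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Query descriptor vs. a candidate model's descriptor (illustrative values)
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d = kl_divergence(p, q)  # smaller divergence = better retrieval match
```

Note that KLD is asymmetric (KLD(P||Q) != KLD(Q||P) in general), so a retrieval scheme must fix which argument is the query.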

  7. SU-E-T-466: Implementation of An Extension Module for Dose Response Models in the TOPAS Monte Carlo Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Perl, J

    2015-06-15

Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVHs and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population-based critical volume (CV), parallel-serial, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For the CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data were calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.
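Of the implemented models, the LKB NTCP model has a well-known closed form: the normal CDF of t = (EUD - TD50) / (m * TD50). A minimal sketch, with illustrative (not paper-specific) organ parameters:

```python
# Hedged sketch of the Lyman-Kutcher-Burman NTCP model. TD50 and m below
# are illustrative; real values are organ- and endpoint-specific.
import math

def lkb_ntcp(eud, td50, m):
    """LKB NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50).

    eud: equivalent uniform dose (Gy); td50: dose for 50% complication
    probability (Gy); m: dimensionless slope parameter.
    """
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical organ: TD50 = 50 Gy, m = 0.3
ntcp_mid = lkb_ntcp(50.0, 50.0, 0.3)  # EUD equal to TD50 gives NTCP = 0.5
```

The sigmoid shape (NTCP rising from 0 toward 1 as EUD increases past TD50) is what makes the model sensitive to DVH dose-point spacing, as the benchmark comparison above illustrates.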

  8. Computational model for fuel component supply into a combustion chamber of LRE

    NASA Astrophysics Data System (ADS)

    Teterev, A. V.; Mandrik, P. A.; Rudak, L. V.; Misyuchenko, N. I.

    2017-12-01

    A 2D-3D computational model for calculating a flow inside jet injectors that feed fuel components to a combustion chamber of a liquid rocket engine is described. The model is based on the gasdynamic calculation of compressible medium. Model software provides calculation of both one- and two-component injectors. Flow simulation in two-component injectors is realized using the scheme of separate supply of “gas-gas” or “gas-liquid” fuel components. An algorithm for converting a continuous liquid medium into a “cloud” of drops is described. Application areas of the developed model and the results of 2D simulation of injectors to obtain correction factors in the calculation formulas for fuel supply are discussed.

  9. Modeling and Ab initio Calculations of Thermal Transport in Si-Based Clathrates and Solar Perovskites

    NASA Astrophysics Data System (ADS)

    He, Yuping

    2015-03-01

We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that, by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values between 1 and 2, which could possibly be increased further by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.

  10. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
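The conceptual starting point is the textbook Ebers-Moll description of an npn transistor, which the TJC model extends with photogenerated current terms. A minimal sketch of the standard (dark) equations only, with illustrative saturation currents and transport factors:

```python
# Hedged sketch of the standard Ebers-Moll npn equations (dark currents only;
# the paper's illuminated-TJC photocurrent terms are NOT included).
# i_es, i_cs, a_f, a_r values are illustrative and satisfy the reciprocity
# relation a_f * i_es == a_r * i_cs.
import math

def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=2e-14,
               a_f=0.98, a_r=0.49, v_t=0.02585):
    """Emitter and collector currents (A) of an npn transistor.

    v_be, v_bc: junction voltages (V); v_t: thermal voltage at ~300 K.
    """
    d_e = math.exp(v_be / v_t) - 1.0  # emitter-junction diode term
    d_c = math.exp(v_bc / v_t) - 1.0  # collector-junction diode term
    i_e = i_es * d_e - a_r * i_cs * d_c
    i_c = a_f * i_es * d_e - i_cs * d_c
    return i_e, i_c

# Forward-active bias: emitter junction forward, collector junction reverse
i_e, i_c = ebers_moll(v_be=0.6, v_bc=-5.0)
```

In forward-active bias the collector current approaches a_f times the emitter current, which is the transistor action the TJC analysis builds on.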

  11. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect ...

  12. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

Little research has addressed prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so FI (fault indicator) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Because the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  13. Development and application of a backscatter lidar forward operator for quantitative validation of aerosol dispersion models and future data assimilation

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Strohbach, Jens; Förstner, Jochen; Potthast, Roland

    2017-12-01

A new backscatter lidar forward operator was developed which is based on the distinct calculation of the aerosols' backscatter and extinction properties. The forward operator was adapted to the COSMO-ART ash dispersion simulation of the Eyjafjallajökull eruption in 2010. While the particle number concentration was provided as a model output variable, the scattering properties of each individual particle type were determined by dedicated scattering calculations. Sensitivity studies were performed to estimate the uncertainties related to the assumed particle properties. Scattering calculations for several types of non-spherical particles required the use of T-matrix routines. Due to the distinct calculation of the backscatter and extinction properties of the model's volcanic ash size classes, the sensitivity studies could be made for each size class individually, which is not the case for forward models based on a fixed lidar ratio. Finally, the forward-modeled lidar profiles were compared to automated ceilometer lidar (ACL) measurements both qualitatively and quantitatively, with the attenuated backscatter coefficient chosen as a suitable physical quantity. As the ACL measurements were not calibrated automatically, their calibration had to be performed using satellite lidar and ground-based Raman lidar measurements. A slight overestimation of the model-predicted volcanic ash number density was observed. Major requirements for future data assimilation of ACL data have been identified, namely, the availability of calibrated lidar measurement data, a scattering database for atmospheric aerosols, a better representation and coverage of aerosols by the ash dispersion model, and further investigation into backscatter lidar forward operators that calculate the backscatter coefficient directly for each individual aerosol type. The introduced forward operator offers the flexibility to be adapted to a multitude of model systems and measurement setups.
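In its simplest single-scattering form, such a forward operator maps profiles of backscatter and extinction to the attenuated backscatter a ceilometer reports: beta_att(z) = beta(z) * exp(-2 * integral of alpha from 0 to z). The sketch below assumes this simplified form and a crude rectangle-rule integration, omitting the per-particle-type T-matrix scattering calculations the paper performs:

```python
# Hedged sketch of a single-scattering backscatter lidar forward operator.
# Assumes uniform range gates and a rectangle-rule optical depth; the paper's
# operator additionally sums contributions per aerosol type/size class.
import numpy as np

def attenuated_backscatter(beta, alpha, dz):
    """Forward-model attenuated backscatter from model profiles.

    beta: backscatter coefficient profile (1/(m*sr))
    alpha: extinction coefficient profile (1/m)
    dz: range-gate thickness (m)
    """
    tau = np.cumsum(alpha) * dz          # one-way optical depth per gate
    return beta * np.exp(-2.0 * tau)     # two-way attenuation

# Illustrative ash layer: constant beta and alpha over five 100 m gates
beta_att = attenuated_backscatter(np.full(5, 1e-6), np.full(5, 1e-4), 100.0)
```

Comparing such forward-modeled beta_att profiles against calibrated ACL measurements is what makes a quantitative model-observation comparison (and eventually data assimilation) possible.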

  14. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical field-independent source model developed specifically for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) is proposed. Each PSR contains a group of particles that are of the same type, close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary-photon PSR, scattered-photon PSR and electron PSR. Either a single 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we propose a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm.
For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. Over 98.5% passing rate was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrated the efficacy of our model in terms of accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.

  15. Areas of Anomalous Surface Temperature in Archuleta County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

This layer contains areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. The surface temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e. the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with a residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).
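The residual-thresholding step these layers describe can be sketched as follows. The dataset description is ambiguous about the exact statistic, so this sketch assumes the residual field is thresholded at its mean plus 2 standard deviations:

```python
# Hedged sketch of the thermal-anomaly step: subtract the insolation-modeled
# temperature from the ASTER temperature and flag pixels whose residual
# exceeds n_sigma standard deviations. The exact statistic used in the
# published layers is an assumption here.
import numpy as np

def thermal_anomalies(aster_temp, insolation_temp, n_sigma=2.0):
    """Boolean mask of pixels whose temperature residual is anomalously high.

    aster_temp: ASTER-derived surface temperature array
    insolation_temp: temperature attributable to solar radiation
    """
    residual = aster_temp - insolation_temp
    threshold = residual.mean() + n_sigma * residual.std()
    return residual > threshold
```

Pixels passing the mask would correspond to the "very warm surface exposures" mapped in these county layers.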

  16. Areas of Anomalous Surface Temperature in Dolores County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

This layer contains areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. The surface temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e. the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with a residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  17. Areas of Anomalous Surface Temperature in Chaffee County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

This layer contains areas of anomalous surface temperature in Chaffee County identified from ASTER thermal data and a spatially based insolation model. The surface temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e. the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with a residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  18. Areas of Anomalous Surface Temperature in Garfield County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

This layer contains areas of anomalous surface temperature in Garfield County identified from ASTER thermal data and a spatially based insolation model. The surface temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e. the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with a residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  19. Areas of Anomalous Surface Temperature in Routt County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  20. The Establishment of LTO Emission Inventory of Civil Aviation Airports Based on Big Data

    NASA Astrophysics Data System (ADS)

    Lu, Chengwei; Liu, Hefan; Song, Danlin; Yang, Xinyue; Tan, Qinwen; Hu, Xiang; Kang, Xue

    2018-03-01

    An estimation model for LTO (landing and take-off) emissions of civil aviation airports was developed in this paper. LTO big data were acquired by analysing the internet with Python, LTO emissions were calculated dynamically from daily LTO data, and an uncertainty analysis was conducted with the Monte Carlo method. Using the model, LTO emissions at Shuangliu International Airport were calculated, and the characteristics and temporal distribution of LTO cycles in 2015 were analysed. Results indicate that, compared with traditional methods, the established model can calculate the LTO emissions from different types of airplanes more accurately. Based on hourly LTO information from 302 valid days, the total number of LTO cycles at Chengdu Shuangliu International Airport was found to be 274,645; the annual emissions of SO2, NOx, VOCs, CO, PM10 and PM2.5 were estimated, and the uncertainty of the model was around 7% to 10%, varying by pollutant.
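A minimal sketch of the inventory arithmetic and the Monte Carlo uncertainty step, assuming illustrative emission factors and LTO counts (all values below are hypothetical; the paper's actual factors and fleet mix are not given here):

```python
import random

# Hypothetical emission factors (kg NOx per LTO cycle) per aircraft type
# and daily LTO counts; values are illustrative only.
emission_factors = {"A320": 9.0, "B737": 10.5, "A330": 35.0}
lto_counts = {"A320": 120, "B737": 150, "A330": 30}

def total_emissions(factors, counts):
    # Inventory = sum over aircraft types of (LTO cycles x emission factor).
    return sum(counts[t] * factors[t] for t in counts)

def monte_carlo_uncertainty(factors, counts, rel_sigma=0.1, n=5000, seed=1):
    # Perturb each emission factor with a 10 % (1 sigma) normal error and
    # report the relative spread of the resulting inventory, mimicking the
    # Monte Carlo uncertainty analysis described in the abstract.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        perturbed = {t: f * rng.gauss(1.0, rel_sigma) for t, f in factors.items()}
        samples.append(total_emissions(perturbed, counts))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5 / mean

base = total_emissions(emission_factors, lto_counts)
mean, rel_spread = monte_carlo_uncertainty(emission_factors, lto_counts)
```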

  1. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations.

    PubMed

    Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel

    2015-01-01

    Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Our aim was to develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. 77 FR 21595 - Applications and Amendments to Facility Operating Licenses and Combined Licenses Involving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... must be one which, if proven, would entitle the requestor/petitioner to relief. A requestor/ petitioner..., and fire modeling calculations, have been performed to demonstrate that the performance-based... may include engineering evaluations, probabilistic safety assessments, and fire modeling calculations...

  3. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect...

  4. Dose conversion coefficients based on the Chinese mathematical phantom and MCNP code for external photon irradiation.

    PubMed

    Qiu, Rui; Li, Junli; Zhang, Zhan; Liu, Liye; Bi, Lei; Ren, Li

    2009-02-01

    A set of conversion coefficients from kerma free-in-air to organ-absorbed dose is presented for external monoenergetic photon beams from 10 keV to 10 MeV, based on the Chinese mathematical phantom, a whole-body mathematical phantom model. The model was developed based on the methods of the Oak Ridge National Laboratory mathematical phantom series and data from the Chinese Reference Man and the Reference Asian Man. This work was carried out to obtain conversion coefficients based on this model, which represents the characteristics of the Chinese population, as the anatomical parameters of the Chinese differ from those of Caucasians. Monte Carlo simulation with the MCNP code was carried out to calculate the organ dose conversion coefficients. Before the calculation, the effects of the physics model and tally type were investigated, considering both calculation efficiency and precision. The irradiation conditions include anterior-posterior, posterior-anterior, right lateral, left lateral, rotational and isotropic geometries. Conversion coefficients from this study are compared with those recommended in Publication 74 of the International Commission on Radiological Protection (ICRP 74), since both sets of data were calculated with mathematical phantoms. Overall, consistency between the two sets of data is observed, and the difference for more than 60% of the data is below 10%. However, significant deviations are also found, mainly for the superficial organs (up to 65.9%) and the bone surface (up to 66%). The large difference in the dose conversion coefficients for the superficial organs at high photon energy can be ascribed to the kerma approximation used for the data in ICRP 74. Both anatomical variations between races and the calculation method contribute to the difference in the data for the bone surface.

  5. Calculation of optical parameters for covalent binary alloys used in optical memories/solar cells: a modified approach

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Promod K.; Gupta, Poonam; Singh, Laxman

    2001-06-01

    Chalcogenide-based alloys find applications in a number of devices such as optical memories, IR detectors, optical switches, photovoltaics, and compound semiconductor heterostructures. We have modified Gurman's statistical thermodynamic model (STM) of binary covalent alloys. In Gurman's model, entropy calculations are based on the number of structural units present. The need to modify this model arose from the fact that it gives equal probability to all the tetrahedra present in the alloy. We have modified Gurman's model by introducing the concept that the entropy is based on the bond arrangement rather than on the structural units present. In the present work, calculations based on this modification are presented for optical properties, which find application in optical switching/memories, solar cells and other optical devices. It is shown that the calculated optical parameters (for the typical case of GaxSe1-x) based on the modified model are closer to the available experimental results. These parameters include the refractive index, extinction coefficient, dielectric functions, optical band gap, etc. GaxSe1-x has also been found to be suitable for reversible optical memories, where a phase change (a to c and vice versa) takes place at specified physical conditions. DTA/DSC studies also suggest the suitability of this material for optical switching/memory applications. We have also suggested the possible use of GaxSe1-x (x = 0.4) in place of the oxide layer in a Metal-Oxide-Semiconductor type solar cell. The new structure is Metal-Ga2Se3-GaAs. The I-V characteristics and other parameters calculated for this structure are found to be much better than those for Si-based solar cells. Maximum output power is obtained at an intermediate layer thickness of approximately 40 angstroms for this typical solar cell.

  6. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this... Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  7. Effect of wave function on the proton induced L XRP cross sections for {sub 62}Sm and {sub 74}W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shehla,; Kaur, Rajnish; Kumar, Anil

    The L{sub k} (k = l, α, β, γ) X-ray production cross sections have been calculated for {sub 74}W and {sub 62}Sm at different incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.

  8. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  9. Nonlinear modelling of high-speed catenary based on analytical expressions of cable and truss elements

    NASA Astrophysics Data System (ADS)

    Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing

    2015-10-01

    Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation, and other methods in the previous literature. The proposed model is then combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software, and a SIEMENS simulation report. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
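The Newton-Raphson step for the initial equilibrium state can be illustrated on a one-dimensional stand-in problem: solving for the catenary parameter a that yields a prescribed sag over a given span. The full method iterates over all nodal coordinates of the cable and truss elements; this sketch shows only the iteration scheme itself, with illustrative numbers:

```python
import math

def catenary_parameter(span, sag, a0=50.0, tol=1e-10, max_iter=100):
    # Newton-Raphson for the catenary parameter a in
    #   sag = a * (cosh(span / (2a)) - 1),
    # a 1-D stand-in for the iterative solution of the full initial
    # equilibrium state described in the abstract.
    a = a0
    for _ in range(max_iter):
        u = span / (2.0 * a)
        f = a * (math.cosh(u) - 1.0) - sag
        df = math.cosh(u) - u * math.sinh(u) - 1.0
        step = f / df
        a -= step
        if abs(step) < tol:
            return a
    raise RuntimeError("Newton-Raphson did not converge")

# A 60 m span with 2 m of sag; the shallow-sag estimate span^2/(8*sag)
# puts a near 225, and the iteration refines that.
a = catenary_parameter(span=60.0, sag=2.0)
```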

  10. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. 
The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm2 fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
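The 1%/1 mm gamma analysis used for these comparisons can be sketched as a brute-force 1-D implementation of the standard gamma index; the toy dose curves below are hypothetical, not the TrueBeam data:

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.01, dta=1.0):
    # 1-D gamma index: for every reference point, minimize
    # sqrt((dose diff / dd)^2 + (distance / dta)^2) over the evaluated
    # curve; gamma <= 1 means the point passes the 1 %/1 mm criteria.
    dmax = dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        cap2 = ((dose_eval - dr) / (dd * dmax)) ** 2 + ((x - xr) / dta) ** 2
        gammas.append(np.sqrt(cap2.min()))
    return np.array(gammas)

# Toy depth-dose curves on a 0.1 mm grid; the evaluated curve is the
# reference shifted by 0.3 mm, well inside the 1 mm distance criterion.
x = np.arange(0.0, 50.0, 0.1)
dose_ref = np.exp(-x / 30.0)
dose_eval = np.exp(-(x - 0.3) / 30.0)
gam = gamma_1d(x, dose_ref, dose_eval)
pass_rate = float((gam <= 1.0).mean())
```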

  11. Pseudo-Boltzmann model for modeling the junctionless transistors

    NASA Astrophysics Data System (ADS)

    Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.

    2014-05-01

    Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities at high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
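The motivation for such an approximation can be reproduced numerically: the Boltzmann limit tracks the Fermi-Dirac integral of order 1/2 under non-degenerate conditions but over-estimates it for high doping (degenerate conditions). A sketch with a simple trapezoidal evaluation of the integral (the paper's own closed-form approximation is not reproduced here):

```python
import math

def fermi_dirac_half(eta, n=20000, emax=40.0):
    # Numerical Fermi-Dirac integral of order 1/2,
    # integral of sqrt(e) / (1 + exp(e - eta)) over e in [0, emax],
    # evaluated with a simple composite rule (endpoints are negligible).
    h = emax / n
    total = 0.0
    for i in range(1, n):
        e = i * h
        total += math.sqrt(e) / (1.0 + math.exp(e - eta))
    return total * h

def boltzmann_half(eta):
    # Boltzmann limit of the same integral: (sqrt(pi)/2) * exp(eta).
    return 0.5 * math.sqrt(math.pi) * math.exp(eta)

# Non-degenerate regime (eta << 0): Boltzmann is accurate.
nd_ratio = boltzmann_half(-5.0) / fermi_dirac_half(-5.0)
# Degenerate regime (eta > 0, heavy doping): Boltzmann over-estimates.
deg_ratio = boltzmann_half(3.0) / fermi_dirac_half(3.0)
```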

  12. Low-energy proton induced M X-ray production cross sections for 70Yb, 81Tl and 82Pb

    NASA Astrophysics Data System (ADS)

    Shehla; Mandal, A.; Kumar, Ajay; Roy Chowdhury, M.; Puri, Sanjiv; Tribedi, L. C.

    2018-07-01

    The cross sections for production of Mk (k = Mξ, Mαβ, Mγ, Mm1) X-rays of 70Yb, 81Tl and 82Pb induced by 50-250 keV protons have been measured in the present work. The experimental cross sections have been compared with earlier reported values and with those calculated using the ionization cross sections based on the ECPSSR model (which includes energy (E) loss, Coulomb (C) deflection, perturbed stationary state (PSS) and relativistic (R) corrections), the X-ray emission rates based on the Dirac-Fock model, and the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model. In addition, the present measured proton-induced X-ray production cross sections have also been compared with those calculated using the DHS model based ionization cross sections and those based on the Plane Wave Born Approximation (PWBA). The measured M X-ray production cross sections are, in general, found to be higher than the ECPSSR and DHS model based values and lower than the PWBA model based cross sections.

  13. Predicting pKa values from EEM atomic charges

    PubMed Central

    2013-01-01

    The acid dissociation constant pKa is a very important molecular property, and there is strong interest in the development of reliable and fast methods for pKa prediction. We have evaluated the pKa prediction capabilities of QSPR models based on empirical atomic charges calculated by the Electronegativity Equalization Method (EEM). Specifically, we collected 18 EEM parameter sets created for 8 different quantum mechanical (QM) charge calculation schemes. Afterwards, we prepared a training set of 74 substituted phenols. Additionally, for each molecule we generated its dissociated form by removing the phenolic hydrogen. For all the molecules in the training set, we then calculated EEM charges using the 18 parameter sets, and the QM charges using the 8 above-mentioned charge calculation schemes. For each type of QM and EEM charges, we created one QSPR model employing charges from the non-dissociated molecules (three-descriptor QSPR models), and one QSPR model based on charges from both dissociated and non-dissociated molecules (QSPR models with five descriptors). Afterwards, we calculated the quality criteria and evaluated all the QSPR models obtained. We found that QSPR models employing the EEM charges proved to be a good approach for the prediction of pKa (63% of these models had R2 > 0.9, while the best had R2 = 0.924). As expected, the QM QSPR models provided more accurate pKa predictions than the EEM QSPR models, but the differences were not significant. Furthermore, a big advantage of the EEM QSPR models is that their descriptors (i.e., EEM atomic charges) can be calculated markedly faster than the QM charge descriptors. Moreover, we found that the EEM QSPR models are not as strongly influenced by the selection of the charge calculation approach as the QM QSPR models. The robustness of the EEM QSPR models was subsequently confirmed by cross-validation. The applicability of EEM QSPR models to other chemical classes was illustrated by a case study focused on carboxylic acids. In summary, EEM QSPR models constitute a fast and accurate pKa prediction approach that can be used in virtual screening. PMID:23574978
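The QSPR fitting step can be illustrated with an ordinary least-squares line relating a single charge descriptor to pKa; the charges and pKa values below are synthetic, and the actual models in the paper use three or five EEM charge descriptors:

```python
# One-descriptor QSPR sketch: least-squares line from a (synthetic)
# atomic charge descriptor to pKa, with R^2 as the quality criterion.
charges = [0.31, 0.28, 0.35, 0.25, 0.40, 0.33]   # hypothetical EEM charges
pkas = [9.9, 10.2, 9.5, 10.6, 9.0, 9.7]          # hypothetical pKa values

n = len(charges)
mx = sum(charges) / n
my = sum(pkas) / n
sxx = sum((x - mx) ** 2 for x in charges)
sxy = sum((x - mx) * (y - my) for x, y in zip(charges, pkas))
slope = sxy / sxx            # higher positive charge -> lower pKa here
intercept = my - slope * mx

predictions = [intercept + slope * x for x in charges]
ss_res = sum((y - p) ** 2 for y, p in zip(pkas, predictions))
ss_tot = sum((y - my) ** 2 for y in pkas)
r2 = 1.0 - ss_res / ss_tot   # coefficient of determination
```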

  14. Configurations of base-pair complexes in solutions. [nucleotide chemistry

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Nir, S.; Rein, R.; Macelroy, R.

    1978-01-01

    A theoretical search for the most stable conformations (i.e., stacked or hydrogen bonded) of the base pairs A-U and G-C in water, CCl4, and CHCl3 solutions is presented. The calculations of free energies indicate a significant role of the solvent in determining the conformations of the base-pair complexes. The application of the continuum method yields preferred conformations in good agreement with experiment. Results of the calculations with this method emphasize the importance of both the electrostatic interactions between the two bases in a complex, and the dipolar interaction of the complex with the entire medium. In calculations with the solvation shell method, the last term, i.e., dipolar interaction of the complex with the entire medium, was added. With this modification the prediction of the solvation shell model agrees both with the continuum model and with experiment, i.e., in water the stacked conformation of the bases is preferred.

  15. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on NMTC and MCNP for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for the nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle of up to 4 degrees. © 2005 Published by Elsevier Ltd on behalf of COSPAR.

  16. 75 FR 75961 - Notice of Implementation of the Wind Erosion Prediction System for Soil Erodibility System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...

  17. A Theoretical Trombone

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2014-01-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…

  18. Nonadiabatic nonradial p-mode frequencies of the standard solar model, with and without helium diffusion

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.

    1994-01-01

    The nonadiabatic frequencies of a standard solar model and a solar model that includes helium diffusion are discussed. The nonadiabatic pulsation calculation includes physics that describes the losses and gains due to radiation. Radiative gains and losses are modeled in both the diffusion approximation, which is only valid in optically thick regions, and the Eddington approximation, which is valid in both optically thin and thick regions. The calculated pulsation frequencies for modes with l less than or equal to 1320 are compared to the observed spectrum of the Sun. Compared to a strictly adiabatic calculation, the nonadiabatic calculation of p-mode frequencies improves the agreement between model and observation. When helium diffusion is included in the model, the frequencies of the modes that are sensitive to regions near the base of the convection zone are improved (i.e., brought into closer agreement with observation), but the agreement is made worse for other modes. Cyclic variations in the frequency spacings of the Sun as a function of frequency and n are presented as evidence for a discontinuity in the structure of the Sun, possibly located near the base of the convection zone.

  19. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
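The central idea of the HMF model, filtration efficiency as a pore-size-PDF-weighted sum of single-collector contributions, can be sketched as follows. The lognormal pore-size density follows the abstract's porosimetry analysis; the collector-efficiency correlation here is a deliberately simplified toy, not the model's actual correlations:

```python
import math

def lognormal_pdf(d, mu, sigma):
    # Pore-diameter probability density (lognormal), representing the
    # heterogeneous, multiscale pore structure of the porous wall.
    return math.exp(-((math.log(d) - mu) ** 2) / (2 * sigma ** 2)) / (
        d * sigma * math.sqrt(2 * math.pi))

def collector_efficiency(d_pore, d_particle=0.1):
    # Toy single-collector efficiency: higher for smaller pores; purely
    # illustrative, not the HMF model's correlations.
    return min(1.0, 5.0 * d_particle / d_pore)

def filtration_efficiency(mu, sigma, d_lo=1.0, d_hi=100.0, n=2000):
    # Overall efficiency = pore-size-PDF-weighted sum of per-collector
    # contributions (midpoint rule over the pore-size range, in microns).
    h = (d_hi - d_lo) / n
    num = den = 0.0
    for i in range(n):
        d = d_lo + (i + 0.5) * h
        w = lognormal_pdf(d, mu, sigma) * h
        num += w * collector_efficiency(d)
        den += w
    return num / den

# A finer wall (smaller mean pore size) filters better than a coarser one.
fine = filtration_efficiency(mu=math.log(10.0), sigma=0.4)
coarse = filtration_efficiency(mu=math.log(30.0), sigma=0.4)
```

As in the abstract's sensitivity analysis, letting sigma shrink toward zero collapses this weighted sum onto the single mean-collector value.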

  20. Calculation of conductivities and currents in the ionosphere

    NASA Technical Reports Server (NTRS)

    Kirchhoff, V. W. J. H.; Carpenter, L. A.

    1975-01-01

    Formulas and procedures to calculate ionospheric conductivities are summarized. Ionospheric currents are calculated using a semidiurnal E-region neutral wind model and electric fields from measurements at Millstone Hill. The results agree well with ground based magnetogram records for magnetic quiet days.
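The conductivity formulas being summarized are, in textbook form, the Pedersen and Hall conductivities built from gyrofrequencies and collision frequencies. A sketch with a single ion species, in SI units; all input values are illustrative, not taken from the paper:

```python
# Textbook Pedersen and Hall conductivity formulas for one ion species,
# a sketch of the kind of conductivity calculation the abstract summarizes.
E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg, electron mass
M_ION = 5.14e-26       # kg, roughly NO+ (illustrative E-region ion)

def conductivities(n_e, b, nu_e, nu_i):
    # n_e: plasma density (m^-3), b: magnetic field (T),
    # nu_e / nu_i: electron / ion collision frequencies (s^-1).
    omega_e = E_CHARGE * b / M_E        # electron gyrofrequency
    omega_i = E_CHARGE * b / M_ION      # ion gyrofrequency
    fac_e = n_e * E_CHARGE ** 2 / M_E
    fac_i = n_e * E_CHARGE ** 2 / M_ION
    sigma_p = (fac_e * nu_e / (nu_e ** 2 + omega_e ** 2)
               + fac_i * nu_i / (nu_i ** 2 + omega_i ** 2))
    sigma_h = (fac_e * omega_e / (nu_e ** 2 + omega_e ** 2)
               - fac_i * omega_i / (nu_i ** 2 + omega_i ** 2))
    return sigma_p, sigma_h

# Rough daytime mid-E-region values (illustrative only): in this regime
# electrons are magnetized while ions are collision-dominated, so the
# Hall conductivity dominates the Pedersen conductivity.
sigma_p, sigma_h = conductivities(n_e=1e11, b=5e-5, nu_e=4e4, nu_i=2.5e3)
```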

  1. Full-Scale Model of Subionospheric VLF Signal Propagation Based on First-Principles Charged Particle Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.

    2016-12-01

    Changes in D-region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE), a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospherically propagating VLF signals from distant artificial transmitters depend sensitively on the free electron population created by precipitation of energetic charged particles. These include both primary (electrons, protons and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principles charged particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free electron densities are based on the MCNP6 (general-purpose Monte Carlo N-Particle) code, taking advantage of its capability for coupled neutron/photon/electron transport and its novel library of cross sections for low-energy electron and photon interactions with matter. Cosmic ray calculations of background ionization are based on source spectra obtained both from direct PAMELA cosmic-ray spectrum measurements and from the recently implemented MCNP6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurements. Conversion from calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons is based on the total ionization cross sections for oxygen and nitrogen molecules from the National Institute of Standards and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.

  2. Three-dimensional assessment of scoliosis based on ultrasound data

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Li, Hongjian; Yu, Bo

    2015-12-01

    In this study, an approach was proposed to assess the 3D scoliotic deformity based on ultrasound data. The 3D spine model was reconstructed by using a freehand 3D ultrasound imaging system. The geometric torsion was then calculated from the reconstructed spine model. A thoracic spine phantom set at a given pose was used in the experiment. The geometric torsion of the spine phantom calculated from the freehand ultrasound imaging system was 0.041 mm-1 which was close to that calculated from the biplanar radiographs (0.025 mm-1). Therefore, ultrasound is a promising technique for the 3D assessment of scoliosis.
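The geometric torsion of the reconstructed spine curve can be computed from sampled 3-D points with finite differences; here is a sketch validated on a helix, whose torsion is known in closed form (the spine-model and ultrasound details are not reproduced):

```python
import numpy as np

def torsion(points, dt):
    # Geometric torsion of a discretely sampled 3-D curve via central
    # finite differences: tau = ((r' x r'') . r''') / |r' x r''|^2.
    r1 = np.gradient(points, dt, axis=0)
    r2 = np.gradient(r1, dt, axis=0)
    r3 = np.gradient(r2, dt, axis=0)
    cross = np.cross(r1, r2)
    denom = (cross ** 2).sum(axis=1)
    return (cross * r3).sum(axis=1) / denom

# Helix (a cos t, a sin t, b t) has constant torsion b / (a^2 + b^2).
a, b = 2.0, 1.0
t = np.linspace(0.0, 4.0 * np.pi, 2001)
pts = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
tau = torsion(pts, t[1] - t[0])
mid = tau[100:-100].mean()   # drop boundary samples where gradients degrade
```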

  3. Models for the Configuration and Integrity of Partially Oxidized Fuel Rod Cladding at High Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siefken, L.J.

    1999-01-01

    Models were designed to resolve deficiencies in the SCDAP/RELAP5/MOD3.2 calculations of the configuration and integrity of hot, partially oxidized cladding. These models are expected to improve the calculations of several important aspects of fuel rod behavior. First, an improved mapping was established from a compilation of PIE results from severe fuel damage tests of the configuration of melted metallic cladding that is retained by an oxide layer. The improved mapping accounts for the relocation of melted cladding in the circumferential direction. Then, rules based on PIE results were established for calculating the effect of cladding that has relocated from above on the oxidation and integrity of the lower intact cladding upon which it solidifies. Next, three different methods were identified for calculating the extent of dissolution of the oxidic part of the cladding due to its contact with the metallic part. The extent of dissolution affects the stress, and thus the integrity, of the oxidic part of the cladding. Then, an empirical equation was presented for calculating the stress in the oxidic part of the cladding and evaluating its integrity based on this calculated stress. This empirical equation replaces the current criterion for loss of integrity, which is based on temperature and extent of oxidation. Finally, a new rule based on theoretical and experimental results was established for identifying the regions of a fuel rod with oxidation of both the inside and outside surfaces of the cladding. The implementation of these models is expected to eliminate the tendency of the SCDAP/RELAP5 code to overpredict the extent of oxidation of the upper part of fuel rods and to underpredict the extent of oxidation of the lower part of fuel rods and the part with a high concentration of relocated material. This report is a revision and reissue of the report entitled, Improvements in Modeling of Cladding Oxidation and Meltdown.

  4. Input-output model for MACCS nuclear accident impacts estimation¹

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
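The GDP-loss propagation described above can be sketched with a toy Leontief input-output calculation. This is a minimal illustration of the general technique, not the REAcct implementation; the sector coefficients, demand shock, and value-added ratios below are hypothetical placeholders.

```python
# Hedged sketch: Leontief input-output loss estimation. The direct
# final-demand shock propagates through inter-industry purchases, so the
# total output loss exceeds the direct loss; GDP loss is the value-added
# share of the total output loss. All numbers are illustrative.

def leontief_output(A, f, iters=200):
    """Solve x = f + A x by fixed-point iteration (valid when the
    spectral radius of A is below 1, as for real economies)."""
    n = len(f)
    x = list(f)
    for _ in range(iters):
        x = [f[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Hypothetical technical coefficients: A[i][j] = input from sector i
# needed per unit of output of sector j.
A = [[0.10, 0.20, 0.05],
     [0.15, 0.10, 0.10],
     [0.05, 0.15, 0.10]]

# Direct final-demand loss ($M) during the outage, by sector.
shock = [10.0, 5.0, 2.0]

# Total (direct + indirect) output loss propagated through supply chains.
output_loss = leontief_output(A, shock)

# GDP (value-added) loss: hypothetical value-added per unit of output.
va_ratio = [0.55, 0.40, 0.60]
gdp_loss = sum(v * x for v, x in zip(va_ratio, output_loss))

print([round(x, 2) for x in output_loss], round(gdp_loss, 2))
```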

  5. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375
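The compressive transformation from excitation to specific loudness that underpins these models, including the additive constant that keeps loudness finite at absolute threshold, can be sketched as follows. The functional form is the widely published one; the constants here are illustrative placeholders, not the calibrated model values.

```python
# Hedged sketch of a Moore/Glasberg-style specific-loudness transform:
#   N' = C * ((G*E + A)**alpha - A**alpha)
# Illustrative constants only.

def specific_loudness(E, C=0.047, G=1.0, A=4.6, alpha=0.2):
    """Specific loudness from excitation E (linear power units). The
    additive constant A keeps the result finite (and zero at E = 0)
    instead of diverging like a pure power law near threshold."""
    return C * ((G * E + A) ** alpha - A ** alpha)

# Loudness grows compressively with excitation:
for E in (1.0, 10.0, 100.0, 1000.0):
    print(E, round(specific_loudness(E), 4))
```

A tenfold increase in excitation produces far less than a tenfold increase in specific loudness, which is the compressive behaviour the models rely on.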

  6. DEVELOPMENT OF A WATERSHED-BASED MERCURY POLLUTION CHARACTERIZATION SYSTEM

    EPA Science Inventory

    To investigate total mercury loadings to streams in a watershed, we have developed a watershed-based source quantification model, the Watershed Mercury Characterization System. The system uses grid-based GIS modeling technology to calculate total soil mercury concentrations and ...

  7. A Comparison of Volume Scattering Strength Data with Model Calculations based on Quasisynoptically Collected Fishery Data

    DTIC Science & Technology

    1993-06-03

    ... volume scattering strength data with model calculations based on quasisynoptically collected fishery data ... and 5000 Hz in the Norwegian Sea in August 1988 and west of Great Britain in April 1989. Coincidentally, extensive fishery surveys were conducted at ...

  8. A review of the calculation procedure for critical acid loads for terrestrial ecosystems.

    PubMed

    van der Salm, C; de Vries, W

    2001-04-23

    Target loads for acid deposition in the Netherlands, as formulated in the Dutch environmental policy plan, are based on critical load calculations from the end of the 1980s. Since then, knowledge of the effect of acid deposition on terrestrial ecosystems has substantially increased. In the early 1990s a simple mass balance model was developed to calculate critical loads. This model was evaluated and the methods were adapted to represent current knowledge. The main changes in the model are the use of actual empirical relationships between Al and H concentrations in the soil solution, the addition of a constant base saturation as a second criterion for soil quality, and the use of tree-species-dependent critical Al/base cation (BC) ratios for Dutch circumstances. The changes in the model parameterisation and in the Al/BC criteria led to considerably (50%) higher critical loads for root damage. The addition of a second criterion in the critical load calculations for soil quality caused a decrease in the critical loads for soils with a medium to high base saturation, such as loess and clay soils. The adaptation hardly affected the median critical load for soil quality in the Netherlands, since only 15% of the Dutch forests occur on these soils. On a regional scale, however, critical loads were (much) lower in areas where those soils are located.
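A simple-mass-balance critical load with a species-dependent Al/BC criterion, as reviewed above, can be sketched under heavily simplified chemistry (the H+ leaching term is omitted here). All fluxes and ratios are illustrative, not the Dutch mapping values.

```python
# Hedged sketch of a simple-mass-balance (SMB) critical acid load.
# Fluxes in eq/ha/yr; every number below is an illustrative placeholder.

def critical_load_acidity(bc_dep, bc_w, bc_u, al_bc_crit):
    """CL(A) = BC_dep + BC_w - BC_u - ANC_le,crit.
    The critical ANC leaching is set by the critical Al/BC ratio: a tree
    species that tolerates more Al in solution gets a higher critical load."""
    bc_le = max(bc_dep + bc_w - bc_u, 0.0)   # base cations left to leach
    al_le_crit = 1.5 * al_bc_crit * bc_le    # critical Al leaching flux
    anc_le_crit = -al_le_crit                # schematic: H+ leaching omitted
    return bc_dep + bc_w - bc_u - anc_le_crit

# Illustrative stand: deposition 800, weathering 500, uptake 300 eq/ha/yr.
for ratio in (0.8, 1.5):   # hypothetical species-dependent Al/BC criteria
    print(ratio, critical_load_acidity(800, 500, 300, ratio))
```

The loop shows the effect described in the abstract: changing the Al/BC criterion moves the calculated critical load substantially.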

  9. Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices

    NASA Astrophysics Data System (ADS)

    Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The

    2018-01-01

    ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.

  10. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    NASA Astrophysics Data System (ADS)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
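For contrast with the nearest-neighbour model, the recoilless (Lamb-Mossbauer) fraction of a simple Debye solid can be computed in a few lines. The sketch below uses textbook 57Fe/alpha-iron numbers purely as an illustration; it is a baseline model, not the Kagan-Maslow calculation from the paper.

```python
import math

# Hedged sketch: Debye-model recoilless fraction f = exp(-2W), with
#   2W = (6 E_R / kB Theta_D) * (1/4 + (T/Theta_D)^2 * Int_0^{Theta_D/T} x/(e^x - 1) dx)
# Inputs are standard textbook values for 57Fe in alpha-iron.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def recoil_energy(e_gamma_kev, mass_amu):
    """Free-atom recoil energy E_R = E_gamma^2 / (2 M c^2), in eV."""
    e = e_gamma_kev * 1e3
    mc2 = mass_amu * 931.494e6  # rest energy in eV
    return e * e / (2.0 * mc2)

def debye_f(e_r, theta_d, temp, n=2000):
    """Recoilless fraction from the Debye-model Debye-Waller exponent."""
    y = theta_d / temp
    # trapezoid rule for Int_0^y x/(e^x - 1) dx; the integrand -> 1 at x = 0
    h = y / n
    s = 0.5 * (1.0 + y / (math.exp(y) - 1.0))
    for i in range(1, n):
        x = i * h
        s += x / (math.exp(x) - 1.0)
    integral = s * h
    two_w = (6.0 * e_r / (K_B * theta_d)) * (0.25 + (temp / theta_d) ** 2 * integral)
    return math.exp(-two_w)

e_r = recoil_energy(14.4, 57.0)   # 57Fe gamma line, ~2e-3 eV recoil
print(round(debye_f(e_r, 470.0, 300.0), 3))   # alpha-Fe at room temperature
```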

  11. Ab initio state-specific N2 + O dissociation and exchange modeling for molecular simulations

    NASA Astrophysics Data System (ADS)

    Luo, Han; Kulakhmetov, Marat; Alexeenko, Alina

    2017-02-01

    Quasi-classical trajectory (QCT) calculations are used in this work to calculate state-specific N2(X1Σ) + O(3P) → 2N(4S) + O(3P) dissociation and N2(X1Σ) + O(3P) → NO(X2Π) + N(4S) exchange cross sections and rates based on the 13A″ and 13A' ab initio potential energy surfaces by Gamallo et al. [J. Chem. Phys. 119, 2545-2556 (2003)]. The calculations consider translational energies up to 23 eV and temperatures between 1000 K and 20 000 K. Vibrational favoring is observed for the dissociation reaction over the whole range of collision energies and for the exchange reaction around the dissociation limit. For the same collision energy, cross sections for v = 30 are 4 to 6 times larger than those for the ground state. The exchange reaction has an effective activation energy that depends on the initial rovibrational level, unlike the dissociation reaction. In addition, the exchange cross sections have a maximum when the total collision energy (TCE) approaches the dissociation energy. The calculations are used to generate compact QCT-derived state-specific dissociation (QCT-SSD) and QCT-derived state-specific exchange (QCT-SSE) models, which describe over 1 × 10^6 cross sections with about 150 model parameters. The models can be used directly within direct simulation Monte Carlo and computational fluid dynamics simulations. Rate constants predicted by the new models are compared to experimental measurements, direct QCT calculations, and predictions by other models, including the TCE model, the Bose-Candler QCT-based exchange model, the Macheret-Fridman dissociation model, Macheret's exchange model, and Park's two-temperature model. The new models match QCT-calculated and experimental rates within 30% under nonequilibrium conditions, while other models underpredict by over an order of magnitude under vibrationally cold conditions.
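The step from state-specific cross sections to thermal rate constants, which underlies fits like QCT-SSD/QCT-SSE, can be sketched with a line-of-centers model cross section standing in for the QCT data. Reduced units (kB = mu = 1) and all parameters are illustrative assumptions, not values from the paper.

```python
import math

# Hedged sketch: thermal rate constant from a cross section,
#   k(T) = sqrt(8/(pi*mu)) * (kB*T)**-1.5 * Int sigma(E) E exp(-E/kB T) dE,
# evaluated by simple quadrature and checked against the analytic
# line-of-centers result k(T) = sigma0 * sqrt(8 T / pi) * exp(-Ea/T).

def sigma_loc(E, sigma0=1.0, Ea=2.0):
    """Line-of-centers model cross section: zero below threshold Ea."""
    return sigma0 * (1.0 - Ea / E) if E > Ea else 0.0

def rate_constant(T, sigma, Emax=60.0, n=20000):
    """Maxwell-Boltzmann average of sigma(E) over relative energy E."""
    pref = math.sqrt(8.0 / math.pi) * T ** -1.5
    h = Emax / n
    s = 0.0
    for i in range(1, n + 1):          # midpoint rule
        E = (i - 0.5) * h
        s += sigma(E) * E * math.exp(-E / T)
    return pref * s * h

T = 1.0
k_num = rate_constant(T, sigma_loc)
k_exact = math.sqrt(8.0 * T / math.pi) * math.exp(-2.0 / T)
print(round(k_num, 6), round(k_exact, 6))
```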

  12. Using a CBL Unit, a Temperature Sensor, and a Graphing Calculator to Model the Kinetics of Consecutive First-Order Reactions as Safe In-Class Demonstrations

    ERIC Educational Resources Information Center

    Moore-Russo, Deborah A.; Cortes-Figueroa, Jose E.; Schuman, Michael J.

    2006-01-01

    The use of Calculator-Based Laboratory (CBL) technology, the graphing calculator, and the cooling and heating of water to model the behavior of consecutive first-order reactions, where B is the reactant, I is the intermediate, and P is the product, is presented as an in-class demonstration. The activity demonstrates the spontaneous and consecutive…
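The consecutive first-order kinetics being modeled, B → I → P, have a closed-form solution against which the demonstration's temperature data can be compared. A sketch with arbitrary illustrative rate constants:

```python
import math

# Hedged sketch: closed-form concentrations for consecutive first-order
# steps B -> I -> P. Rate constants are arbitrary illustrative values.

def concentrations(t, b0=1.0, k1=0.30, k2=0.10):
    """Analytic solution for B -> I -> P (valid for k1 != k2)."""
    B = b0 * math.exp(-k1 * t)
    I = b0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    P = b0 - B - I                      # mass balance
    return B, I, P

# The intermediate peaks at t_max = ln(k2/k1) / (k2 - k1):
t_max = math.log(0.10 / 0.30) / (0.10 - 0.30)
print(round(t_max, 3))
```

The temperature of the water in the demonstration plays the role of the intermediate I: it rises, peaks at t_max, and then decays.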

  13. Validation of a Pressed Pentolite Donor for the Large Scale Gap Test (LGST) at DSTO

    DTIC Science & Technology

    2013-03-01

    ... 20.5 ... based on dent depth measurement and referenced to RT 60/40. 3.1.3 Modelling Data: The VoD and PCJ were calculated using CHEETAH 4.0 ... compositions as well as ideal explosives. It should be noted that CHEETAH assumes the charge is of infinite length, as it cannot model size effects. It can ... [Table: calculated (CHEETAH 4.0) versus experimental data: donor origin, density (g.cm-3), calculated PCJ (GPa), experimental P (GPa), calculated VoD (m.s-1)]

  14. Areas of Anomalous Surface Temperature in Alamosa and Saguache Counties, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residuals greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).
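The residual-screening step described above can be sketched on a tiny synthetic "scene"; the real workflow operates on gridded raster data, and the pixel values below are invented for illustration.

```python
import math

# Hedged sketch: subtract the insolation-driven temperature from the
# ASTER-derived temperature and flag pixels whose residual exceeds two
# standard deviations above the mean residual. Synthetic pixel values.

aster_temp = [21.0, 22.5, 20.8, 21.9, 30.5, 21.2, 22.0, 20.5]   # deg C
solar_temp = [20.5, 22.0, 20.6, 21.5, 21.0, 20.9, 21.6, 20.2]   # modeled

residual = [a - s for a, s in zip(aster_temp, solar_temp)]
mean = sum(residual) / len(residual)
sigma = math.sqrt(sum((r - mean) ** 2 for r in residual) / len(residual))

# Pixels more than 2 sigma above the mean residual -> thermal anomalies.
anomalies = [i for i, r in enumerate(residual) if r > mean + 2.0 * sigma]
print(anomalies)
```

Only the pixel whose warmth cannot be explained by insolation (index 4 in this synthetic scene) survives the screen.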

  15. Structure and energetics of carbon, hexagonal boron nitride, and carbon/hexagonal boron nitride single-layer and bilayer nanoscrolls

    NASA Astrophysics Data System (ADS)

    Siahlo, Andrei I.; Poklonski, Nikolai A.; Lebedev, Alexander V.; Lebedeva, Irina V.; Popov, Andrey M.; Vyrko, Sergey A.; Knizhnik, Andrey A.; Lozovik, Yurii E.

    2018-03-01

    Single-layer and bilayer carbon and hexagonal boron nitride nanoscrolls, as well as nanoscrolls made of a bilayer graphene/hexagonal boron nitride heterostructure, are considered. The structures of the stable states of the corresponding nanoscrolls prepared by rolling single-layer and bilayer rectangular nanoribbons are obtained based on an analytical model and numerical calculations. The lengths of nanoribbons for which stable and energetically favorable nanoscrolls are possible are determined. Barriers to rolling of single-layer and bilayer nanoribbons into nanoscrolls and barriers to nanoscroll unrolling are calculated. Based on the calculated barriers, nanoscroll lifetimes in the stable state are estimated. The elastic constants for bending of graphene and hexagonal boron nitride layers used in the model are found by density functional theory calculations.

  16. Enhancement of the output emission efficiency of thin-film photoluminescence composite structures based on PbSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.

    2010-12-15

    The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film, in which the central layer is formed of a composite medium, is proposed to calculate the reflectance spectra of the system. In the Bruggeman approximation of effective medium theory, the effective permittivity of the composite layer is calculated. The proposed model is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the results of the calculation, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
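The two calculation steps in this abstract, a Bruggeman effective permittivity for the composite layer followed by an antireflection-layer thickness estimate, can be sketched as follows. The permittivities, fill fraction, and wavelength are illustrative assumptions, not the PbSe/AsS4 values from the paper, and a simple quarter-wave rule stands in for the full reflectance-spectrum calculation.

```python
import math

# Hedged sketch: (1) Bruggeman effective-medium permittivity, solving
#   f*(e1-ee)/(e1+2ee) + (1-f)*(e2-ee)/(e2+2ee) = 0
# as the positive root of the equivalent quadratic 2*ee^2 - b*ee - e1*e2 = 0;
# (2) quarter-wave antireflection thickness d = lambda / (4 n_eff).

def bruggeman(eps1, eps2, f):
    """Effective permittivity of a two-phase mixture (f = fill fraction
    of phase 1); positive root of the Bruggeman quadratic."""
    b = (3.0 * f - 1.0) * eps1 + (2.0 - 3.0 * f) * eps2
    return (b + math.sqrt(b * b + 8.0 * eps1 * eps2)) / 4.0

eps_eff = bruggeman(6.0, 2.25, 0.3)      # hypothetical inclusion/host pair
n_eff = math.sqrt(eps_eff)
wavelength_um = 4.0                      # mid-IR, typical for PbSe emitters
d_quarter = wavelength_um / (4.0 * n_eff)
print(round(eps_eff, 3), round(d_quarter, 3))
```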

  17. Fundamental studies on kinetic isotope effect (KIE) of hydrogen isotope fractionation in natural gas systems

    USGS Publications Warehouse

    Ni, Y.; Ma, Q.; Ellis, G.S.; Dai, J.; Katz, B.; Zhang, S.; Tang, Y.

    2011-01-01

    Based on quantum chemistry calculations for normal octane homolytic cracking, a kinetic hydrogen isotope fractionation model for methane, ethane, and propane formation is proposed. The activation energy differences between D-substituted and non-substituted methane, ethane, and propane are 318.6, 281.7, and 280.2 cal/mol, respectively. In order to determine the effect of the entropy contribution for hydrogen isotopic substitution, a transition state for ethane bond rupture was determined based on density functional theory (DFT) calculations. The kinetic isotope effect (KIE) associated with bond rupture in D- and H-substituted ethane results in a frequency factor ratio of 1.07. Based on the proposed mathematical model of hydrogen isotope fractionation, one can potentially quantify natural gas thermal maturity from measured hydrogen isotope values. Calculated gas maturity values determined by the proposed mathematical model using δD values in ethane from several basins in the world are in close agreement with similar predictions based on the δ13C composition of ethane. However, gas maturity values calculated from field data of methane and propane using both hydrogen and carbon kinetic isotope models do not agree as closely. It is possible that δD values in methane may be affected by microbial mixing and that propane values might be more susceptible to hydrogen exchange with water or to analytical errors. Although the model used in this study is quite preliminary, the results demonstrate that kinetic isotope fractionation effects in hydrogen may be useful in quantitative models of natural gas generation, and that δD values in ethane might be more suitable for modeling than comparable values in methane and propane. © 2011 Elsevier Ltd.
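The kinetic isotope effect implied by the numbers in the abstract can be sketched with the standard Arrhenius ratio; the activation-energy differences and the ethane frequency-factor ratio are taken from the abstract, while the temperature is an illustrative source-rock value.

```python
import math

# Hedged sketch: k_H/k_D = (A_H/A_D) * exp(dEa / (R*T)), with dEa the
# activation-energy difference between D-substituted and non-substituted
# species. Only ethane's frequency-factor ratio (1.07) is reported.

R = 1.987  # cal/(mol K)

def kie(delta_ea_cal, t_kelvin, a_ratio=1.0):
    """Ratio of rate constants for the light vs heavy isotopologue."""
    return a_ratio * math.exp(delta_ea_cal / (R * t_kelvin))

# Activation-energy differences from the abstract (cal/mol):
d_ea = {"methane": 318.6, "ethane": 281.7, "propane": 280.2}

for gas, de in d_ea.items():
    a = 1.07 if gas == "ethane" else 1.0   # A-ratio given only for ethane
    print(gas, round(kie(de, 473.15, a), 3))   # ~200 C, illustrative
```

The fractionation shrinks with increasing temperature, which is what ties the measured δD values to thermal maturity.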

  18. Modelling crystal plasticity by 3D dislocation dynamics and the finite element method: The Discrete-Continuous Model revisited

    NASA Astrophysics Data System (ADS)

    Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.

    2014-02-01

    A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.

  19. Development of a coupled model of a distributed hydrological model and a rice growth model for optimizing irrigation schedule

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Kumiko; Homma, Koki; Koike, Toshio; Ohta, Tetsu

    2013-04-01

    A coupled model of a distributed hydrological model and a rice growth model was developed in this study. The distributed hydrological model used in this study is the Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) developed by Wang et al. (2009). This model includes a modified SiB2 (Simple Biosphere Model; Sellers et al., 1996) and the Geomorphology-Based Hydrological Model (GBHM), and thus it can physically calculate both water and energy fluxes. The rice growth model used in this study is the Simulation Model for Rice-Weather relations (SIMRIW)-rainfed developed by Homma et al. (2009). This is an updated version of the original SIMRIW (Horie et al., 1987) and can calculate rice growth by considering the yield reduction due to water stress. The purpose of the coupling is the integration of hydrology and crop science to develop a tool to support decision making 1) for determining the necessary agricultural water resources and 2) for allocating limited water resources to various sectors. Efficient water use and optimal water allocation in the agricultural sector are necessary to balance the supply and demand of limited water resources. In addition, variations in available soil moisture are a main cause of variations in rice yield. In our model, soil moisture and the Leaf Area Index (LAI) are calculated inside SIMRIW-rainfed, so that these variables can be simulated dynamically and more precisely with respect to rice growth than by the more general calculations in the original WEB-DHM. At the same time, by coupling SIMRIW-rainfed with WEB-DHM, the lateral flow of soil water, the increase in soil moisture and the reduction of river discharge due to irrigation, and their effects on rice growth can be calculated. Agricultural information such as planting date, rice cultivar, and fertilization amount is given in a fully distributed manner. The coupled model was validated using LAI and soil moisture in a small basin in western Cambodia (Sangker River Basin). This basin is mostly rainfed paddy, so the irrigation scheme was first switched off. Several simulations with varying irrigation schemes were then performed to determine the optimal irrigation schedule in this basin.

  20. Application of ATHLET/DYN3D coupled codes system for fast liquid metal cooled reactor steady state simulation

    NASA Astrophysics Data System (ADS)

    Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.

    2017-01-01

    In this paper the approaches used for developing the BN-800 reactor test model and for validating coupled neutronics and thermal-hydraulics calculations are described. The coupled codes ATHLET 3.0 (a code for thermal-hydraulic calculations of reactor transients) and DYN3D (a 3-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 core load. A homogeneous approach is used for the description of the reactor assemblies. Along with the main simplifications, the main BN-800 core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3D neutron-physics calculations were performed with a 28-group library based on the ENDF/B-7.0 evaluated nuclear data. The SCALE code was used for the preparation of group constants. The nodalization hydraulic model has boundary conditions given by the coolant mass-flow rate at the core inlet and by pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the reactor nominal state. The profiling of the coolant mass flow rate through the core is based on the reactor power distribution. Test thermohydraulic calculations made using the developed model showed acceptable results for the coolant mass flow rate distribution through the reactor core and for the axial temperature and pressure distributions. The developed model will be upgraded in future for the analysis of different transients in metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).

  1. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculations would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
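The division of labour described above can be sketched with a toy two-parameter Gauss-Newton calibration, in which a deliberately approximate Jacobian (the "proxy") drives the parameter upgrades while the "expensive" model is run only to evaluate residuals. Both models here are invented stand-ins, not PEST or any real simulator.

```python
# Hedged sketch: proxy-assisted gradient calibration. The proxy Jacobian
# is intentionally inexact (scaled by 0.9), mimicking a cheap surrogate;
# inexact-Newton iterations still converge to the true parameters.

def original_model(p):          # "expensive" simulator (toy stand-in)
    return [p[0] ** 2 + p[1], p[0] + 3.0 * p[1]]

def proxy_jacobian(p):          # cheap, only approximately correct Jacobian
    return [[2.0 * p[0] * 0.9, 1.0],
            [1.0, 3.0]]

def solve2(J, r):
    """Solve the 2x2 system J dp = r by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det]

obs = original_model([2.0, 1.0])    # synthetic observations, truth = (2, 1)
p = [0.5, 0.5]                      # initial parameter guess
for _ in range(20):
    sim = original_model(p)                     # expensive run: residuals
    r = [o - s for o, s in zip(obs, sim)]
    dp = solve2(proxy_jacobian(p), r)           # cheap run: Jacobian
    p = [pi + di for pi, di in zip(p, dp)]
print([round(v, 4) for v in p])
```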

  2. Nonlinear Model Reduction in Power Systems by Balancing of Empirical Controllability and Observability Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Wang, Jianhui; Liu, Hui

    Abstract: In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprising a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
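The core idea, building controllability and observability covariances from simulated trajectories and ranking state directions by their Hankel singular values, can be sketched on a toy two-state linear system (the paper applies the same machinery to nonlinear multi-machine power systems). Everything below is an illustrative stand-in.

```python
import math

# Hedged sketch: empirical gramian-style covariances for x' = A x with a
# diagonal stable A. Wc comes from the impulse response x(0) = B; Wo from
# observing y = C x for each unit initial state. The Hankel singular
# values sqrt(eig(Wc*Wo)) show which direction dominates input-output
# behaviour, i.e. which state a Galerkin projection should keep.

A = [[-1.0, 0.0], [0.0, -10.0]]     # slow and fast toy modes
B = [1.0, 1.0]
C = [1.0, 1.0]

def simulate(x0, dt=1e-3, t_end=8.0):
    """Forward-Euler trajectory of the (diagonal) system x' = A x."""
    xs, x = [], list(x0)
    for _ in range(int(t_end / dt)):
        xs.append(list(x))
        x = [x[0] + dt * A[0][0] * x[0], x[1] + dt * A[1][1] * x[1]]
    return xs, dt

# Empirical controllability covariance: integral of x(t) x(t)^T, x(0) = B.
xs, dt = simulate(B)
Wc = [[sum(x[i] * x[j] for x in xs) * dt for j in range(2)] for i in range(2)]

# Empirical observability covariance from outputs of unit initial states.
ys = []
for k in range(2):
    traj, _ = simulate([1.0 if i == k else 0.0 for i in range(2)])
    ys.append([C[0] * x[0] + C[1] * x[1] for x in traj])
Wo = [[sum(a * b for a, b in zip(ys[i], ys[j])) * dt for j in range(2)] for i in range(2)]

# Hankel singular values: sqrt of eigenvalues of Wc @ Wo (2x2 closed form).
M = [[sum(Wc[i][k] * Wo[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
tr, det = M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
hankel = [math.sqrt((tr + disc) / 2.0), math.sqrt((tr - disc) / 2.0)]
print([round(h, 4) for h in hankel])
```

The large gap between the two Hankel values says a one-state reduced model would capture most of the input-output behaviour of this toy system.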

  3. Experimental verification and comparison of the rubber V- belt continuously variable transmission models

    NASA Astrophysics Data System (ADS)

    Grzegożek, W.; Dobaj, K.; Kot, A.

    2016-09-01

    The paper includes an analysis of the rubber V-belt's cooperation with the CVT transmission pulleys. The analysis of the forces and torques acting in the CVT transmission was conducted based on the calculated characteristics of the centrifugal regulator and the torque regulator. Accurate estimation of the regulator surface curvature allowed calculation of the relation between the driving wheel axial force, the engine rotational speed, and the gear ratio of the CVT transmission. Simplified analytical models of the rubber V-belt-pulley cooperation are based on three basic approaches. The Dittrich model assumes two contact regions on the driven and driving wheels. The Kim-Kim model considers, in addition to the previous model, the radial friction; the radial friction results in the lack of a developed friction area on the driving pulley. The third approach, formulated in the Cammalleri model, assumes a variable sliding angle along the wrap arc and describes it as a result of the belt's longitudinal and cross flexibility. Theoretical torque on the driven and driving wheels was calculated on the basis of the known regulator characteristics, and the calculated torque was compared to the measured loading torque. The best agreement, over the centrifugal regulator's range of work, was obtained for the Kim-Kim model.

  4. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    PubMed Central

    Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has paid attention to prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so that FI (fault indicator) calculation often lacks rationality, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Since faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, using an established parameter-scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  5. Research on flow stress model and dynamic recrystallization model of X12CrMoWVNbN10-1-1 steel

    NASA Astrophysics Data System (ADS)

    Sui, Da-shan; Wang, Wei; Fu, Bo; Cui, Zhen-shan

    2013-05-01

    The plastic deformation behavior of X12CrMoWVNbN10-1-1 ferritic heat-resistant steel was studied systematically at high temperature. The stress-strain curves were measured at temperatures of 950°C-1250°C and strain rates of 0.0005 s-1 to 0.1 s-1 with a Gleeble thermo-mechanical simulator. The flow stress model and dynamic recrystallization model were established based on the Laasraoui two-stage model. The activation energy was calculated and the parameters were determined based on the experimental results and the Sellars creep equation. Verification showed that the calculated results agreed closely with the experimental data.
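The hot-deformation constitutive framework behind such models can be sketched with a Zener-Hollomon parameter combined with the Sellars hyperbolic-sine law. The material constants below are illustrative placeholders, not the fitted X12CrMoWVNbN10-1-1 values from the paper.

```python
import math

# Hedged sketch: Z = strain_rate * exp(Q / (R*T)) and the Sellars law
#   sigma = (1/alpha) * asinh((Z/A)**(1/n)).
# Q, A, alpha, n below are generic illustrative magnitudes.

R = 8.314  # J/(mol K)

def zener_hollomon(strain_rate, temp_k, Q=450e3):
    """Temperature-compensated strain rate."""
    return strain_rate * math.exp(Q / (R * temp_k))

def flow_stress(strain_rate, temp_k, A=1e16, alpha=0.012, n=5.0, Q=450e3):
    """Peak flow stress (MPa-scale with these constants)."""
    Z = zener_hollomon(strain_rate, temp_k, Q)
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Stress rises with strain rate and falls with temperature:
for T in (1223.15, 1523.15):            # 950 C and 1250 C
    print(T, round(flow_stress(0.1, T), 1), round(flow_stress(0.0005, T), 1))
```

This reproduces the qualitative trends the Gleeble curves are fitted against: higher stress at higher strain rate, lower stress at higher temperature.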

  6. Improvement of a 2D numerical model of lava flows

    NASA Astrophysics Data System (ADS)

    Ishimine, Y.

    2013-12-01

    I propose an improved procedure that reduces an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, JH, ed., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow water model combined with a constitutive equation for a Bingham fluid. It is simple but useful because it properly reproduces the distributions of actual lava flows. It has thus been regarded as a pioneering work in the numerical simulation of lava flows and is still widely used in practical hazard prediction maps for civil defense officials in Japan. However, the model includes an improper dependence of lava flow directions on the orientation of the DEM, because the model separately assigns the condition for the lava flow to stop due to yield stress along each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure produces a diamond-shaped distribution, as shown in Fig. 1, when calculating a lava flow supplied from a point source on a virtual flat plane, although the distribution should be circular. To remedy this drawback, I propose a modified procedure that uses the absolute value of the yield-stress condition derived from both orthogonal components of the slope steepness to assign the condition for lava flows to stop. This gives a better result, as shown in Fig. 2. Fig. 1. (a) Contour plots calculated with the original model of Ishihara et al. (1990). (b) Contour plots calculated with the proposed model.
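The fix described above amounts to replacing a per-axis stopping test with a rotation-invariant one. A minimal sketch, with an arbitrary critical value standing in for the Bingham yield-stress term:

```python
import math

# Hedged sketch of the grid-orientation artifact. "Per-axis": flow halts
# when each slope component alone is subcritical (original scheme).
# "Isotropic": flow halts only when the magnitude of the driving vector
# is subcritical (improved scheme). S_C is an arbitrary critical value.

S_C = 1.0

def stops_per_axis(sx, sy):
    """Original: stopping tested separately along each grid axis."""
    return abs(sx) < S_C and abs(sy) < S_C

def stops_isotropic(sx, sy):
    """Improved: stopping tested on the driving-vector magnitude."""
    return math.hypot(sx, sy) < S_C

# A diagonal driving vector of magnitude 1.2 (supercritical) exposes the
# artifact: each component is only ~0.85 < S_C, so the per-axis test
# wrongly stops the flow, while the isotropic test keeps it moving.
sx = sy = 1.2 / math.sqrt(2.0)
print(stops_per_axis(sx, sy), stops_isotropic(sx, sy))
```

The same supercritical vector aligned with a grid axis (sx = 1.2, sy = 0) is not stopped by either test, which is exactly the orientation dependence that produces the diamond-shaped footprint.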

  7. Generalization techniques to reduce the number of volume elements for terrain effect calculations in fully analytical gravitational modelling

    NASA Astrophysics Data System (ADS)

    Benedek, Judit; Papp, Gábor; Kalmár, János

    2018-04-01

    Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs about twice the runtime of the rectangular prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field at the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes "less" can be equivalent to "more" in the statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by three points per grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide more than a 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.

  8. Framework for scalable adsorbate–adsorbate interaction models

    DOE PAGES

    Hoffmann, Max J.; Medford, Andrew J.; Bligaard, Thomas

    2016-06-02

    Here, we present a framework for physically motivated models of adsorbate–adsorbate interaction between small molecules on transition and coinage metals based on modifications to the substrate electronic structure due to adsorption. We use this framework to develop one model for transition and one for coinage metal surfaces. The models for transition metals are based on the d-band center position, and the models for coinage metals are based on partial charges. The models require no empirical parameters, only two first-principles calculations per adsorbate as input, and therefore scale linearly with the number of reaction intermediates. By theory-to-theory comparison with explicit density functional theory calculations over a wide range of adsorbates and surfaces, we show that the root-mean-squared error for differential adsorption energies is less than 0.2 eV for up to 1 ML coverage.

  9. Virial Coefficients for the Liquid Argon

    NASA Astrophysics Data System (ADS)

    Korth, Micheal; Kim, Saesun

    2014-03-01

    We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that lead to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model from Aziz. From values of the pair correlation function at various densities, we were able to find virial coefficients of liquid argon. The low-order coefficients are in good agreement with theoretical hard-sphere coefficients, but appropriate data for argon against which these results might be compared are difficult to find.

  10. Effects of Residual Stress, Axial Stretch, and Circumferential Shrinkage on Coronary Plaque Stress and Strain Calculations: A Modeling Study Using IVUS-Based Near-Idealized Geometries

    PubMed Central

    Wang, Liang; Zhu, Jian; Samady, Habib; Monoly, David; Zheng, Jie; Guo, Xiaoya; Maehara, Akiko; Yang, Chun; Ma, Genshan; Mintz, Gary S.; Tang, Dalin

    2017-01-01

    Accurate stress and strain calculations are important for plaque progression and vulnerability assessment. Models based on in vivo data often need to form geometries with zero-stress/strain conditions. The goal of this paper is to use IVUS-based near-idealized geometries and introduce a three-step model construction process to include residual stress, axial stretch, and circumferential shrinkage and investigate their impacts on stress and strain calculations. In vivo intravascular ultrasound (IVUS) data of a human coronary artery were acquired for model construction. In vivo IVUS movie data were acquired and used to determine patient-specific material parameter values. A three-step modeling procedure was used to construct the model: (a) wrap the zero-stress vessel sector to obtain the residual stress; (b) stretch the vessel axially to its in vivo length; and (c) pressurize the vessel to recover its in vivo geometry. Eight models were constructed for our investigation. Wrapping led to reduced lumen and cap stress and increased outer-boundary stress. The model with axial stretch and circumferential shrink but no wrapping overestimated lumen and cap stress by 182% and 448%, respectively. The model with wrapping and circumferential shrink but no axial stretch predicted average lumen stress and cap stress of 0.76 kPa and −15 kPa, respectively. The same model with 10% axial stretch had 42.53 kPa lumen stress and 29.0 kPa cap stress. Skipping circumferential shrinkage leads to overexpansion of the vessel and incorrect stress/strain calculations. A vessel stiffness increase (100%) leads to a 75% lumen stress increase and a 102% cap stress increase. PMID:27814429

  11. Oxygen Pickup Ions Measured by MAVEN Outside the Martian Bow Shock

    NASA Astrophysics Data System (ADS)

    Rahmati, A.; Cravens, T.; Larson, D. E.; Lillis, R. J.; Dunn, P.; Halekas, J. S.; Connerney, J. E. P.; Eparvier, F. G.; Thiemann, E.; Mitchell, D. L.; Jakosky, B. M.

    2015-12-01

    The MAVEN (Mars Atmosphere and Volatile EvolutioN) spacecraft entered orbit around Mars on September 21, 2014 and has since been detecting energetic oxygen pickup ions with its SEP (Solar Energetic Particles) and SWIA (Solar Wind Ion Analyzer) instruments. The oxygen pickup ions detected outside the Martian bow shock and in the upstream solar wind are associated with the extended hot oxygen exosphere of Mars, which is created mainly by the dissociative recombination of molecular oxygen ions with electrons in the ionosphere. We use analytic solutions to the equations of motion of pickup ions moving in the undisturbed upstream solar wind magnetic and motional electric fields and calculate the flux of oxygen pickup ions at the location of MAVEN. Our model calculates the ionization rate of oxygen atoms in the exosphere based on the hot oxygen densities predicted by Rahmati et al. (2014); the sources of ionization include photo-ionization, charge exchange, and electron impact ionization. The photo-ionization frequency is calculated using the FISM (Flare Irradiance Spectral Model) solar flux model, based on MAVEN EUVM (Extreme Ultra-Violet Monitor) measurements. The frequency of charge exchange between a solar wind proton and an oxygen atom is calculated using MAVEN SWIA solar wind proton flux measurements, and the electron impact ionization frequency is calculated from MAVEN SWEA (Solar Wind Electron Analyzer) solar wind electron flux measurements. The solar wind magnetic field used in the model is taken from measurements by MAVEN MAG (magnetometer) in the upstream solar wind. The good agreement between our predicted pickup oxygen fluxes and those measured by MAVEN SEP and SWIA confirms the detection of oxygen pickup ions, and these model-data comparisons can be used to constrain models of hot oxygen densities and photochemical escape flux.
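
    The total ionization frequency in such a model is simply the sum of the three channels; a minimal sketch, with purely illustrative (not MAVEN-derived) numbers:

    ```python
    def pickup_ion_production(n_o, nu_ph, nu_cx, nu_ei):
        """Local O+ pickup-ion production rate: exospheric O density times the
        total ionization frequency (photoionization + charge exchange +
        electron impact), as summed in the model described above."""
        nu_total = nu_ph + nu_cx + nu_ei
        return n_o * nu_total

    # Illustrative numbers only: n_o in cm^-3, frequencies in s^-1
    rate = pickup_ion_production(n_o=1e4, nu_ph=8e-7, nu_cx=5e-7, nu_ei=2e-7)
    ```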

  12. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 40 CFR 600.208-12, Protection of Environment, Environmental Protection Agency (continued), Energy Policy: Fuel Economy and Carbon-Related Exhaust Emissions of Motor... (2010-07-01)

  13. SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge

    USGS Publications Warehouse

    Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.

    2010-01-01

    A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
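
    The daily soil-water-balance idea can be sketched in a few lines. This is a toy bucket model in the spirit of the modified Thornthwaite-Mather approach, not the SWB code itself; the function name, the neglect of runoff, interception, and snowmelt, and the simple surplus rule are all assumptions of the sketch:

    ```python
    def daily_recharge(precip, pet, capacity, sm0=0.0):
        """Toy daily soil-water balance: infiltrated water first satisfies
        evapotranspiration demand, the soil stores water up to `capacity`,
        and any surplus above capacity becomes groundwater recharge."""
        sm = sm0                         # current soil-moisture storage
        recharge = []
        for p, et in zip(precip, pet):
            sm += p                      # all precipitation infiltrates (sketch)
            aet = min(et, sm)            # actual ET limited by available water
            sm -= aet
            surplus = max(0.0, sm - capacity)
            sm = min(sm, capacity)       # soil cannot hold more than capacity
            recharge.append(surplus)
        return recharge
    ```

    Run for one grid cell per day; the SWB code performs the equivalent balance on every cell of a rectangular grid using gridded climate and land-cover inputs.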

  14. Microscopic approach based on a multiscale algebraic version of the resonating group model for radiative capture reactions

    NASA Astrophysics Data System (ADS)

    Solovyev, Alexander S.; Igashov, Sergey Yu.

    2017-12-01

    A microscopic approach to the description of radiative capture reactions, based on a multiscale algebraic version of the resonating group model, is developed. The main idea of the approach is to expand the wave functions of the discrete spectrum and the continuum of a nuclear system over different bases of the algebraic version of the resonating group model. These bases differ from each other in the value of the oscillator radius, which plays the role of a scale parameter. This allows us, in a unified way, to calculate total and partial cross sections (astrophysical S factors) as well as the branching ratio for the radiative capture reaction, to describe phase shifts for the colliding nuclei in the initial channel of the reaction, and at the same time to reproduce the breakup thresholds of the final nucleus. The approach is applied to the theoretical study of the mirror 3H(α,γ)7Li and 3He(α,γ)7Be reactions, which are of great interest to nuclear astrophysics. The calculated results are compared with existing experimental data and with our previous calculations in the framework of the single-scale algebraic version of the resonating group model.

  15. Stock price prediction using geometric Brownian motion

    NASA Astrophysics Data System (ADS)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction itself, two preliminary steps are carried out: formulating the expected stock price and fixing the confidence level at 95%. In stock price prediction with the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and computing the 95% confidence interval. Based on this research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, demonstrated by a forecast MAPE value ≤ 20%.
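
    The steps above (returns → volatility and drift → forecast → MAPE) can be sketched as follows. This is a standard GBM treatment, not the paper's exact code; the function names and the log-price confidence band are assumptions of the sketch:

    ```python
    import math
    import statistics

    def gbm_forecast(prices, horizon):
        """Forecast future prices under geometric Brownian motion.

        Drift and volatility are estimated from historical log-returns; each
        forecast step returns (central forecast, lower, upper), where the
        band is a ~95% interval on the log-price.
        """
        # Per-step log-returns of the historical series
        returns = [math.log(prices[i + 1] / prices[i])
                   for i in range(len(prices) - 1)]
        sigma = statistics.stdev(returns)   # volatility estimate per step
        nu = statistics.mean(returns)       # drift of the log-price per step
        s0 = prices[-1]
        forecast = []
        for t in range(1, horizon + 1):
            mean_log = math.log(s0) + nu * t
            half_width = 1.96 * sigma * math.sqrt(t)
            forecast.append((math.exp(mean_log),
                             math.exp(mean_log - half_width),
                             math.exp(mean_log + half_width)))
        return forecast

    def mape(actual, predicted):
        """Mean absolute percentage error; MAPE <= 20% is read as high accuracy."""
        return 100.0 * sum(abs(a - p) / a
                           for a, p in zip(actual, predicted)) / len(actual)
    ```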

  16. A Monte Carlo calculation model of electronic portal imaging device for transit dosimetry through heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh

    2016-05-15

    Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of an electronic portal imaging device (EPID) based on effective-atomic-number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code with density scaling of the EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested with various homogeneous and heterogeneous phantoms and field sizes by comparing calculations from the model with measurements in the EPID. To better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurements in the plane of the EPID. Results: In the EPID plane, the dose to water calculated by the code agreed with IC measurements within 1.8%, with the difference averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by the proximity of the maximum points to the penumbra and by MC noise. The EPID model agreed with measured EPID images within 1.3%, with a maximum point difference of 1.9%. The difference dropped below that of the code by employing a calibration, dependent on field size and thickness, for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend in the calibration factors, unlike the density-only-scaled model. The phase space file from the EGSnrc code sharpened the penumbra profiles significantly, improving the agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of the EPID has been developed, and its performance has been rigorously investigated for transit dosimetry.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonelli, Perry Edward

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
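
    A minimal sketch of the export/import idea, assuming a registry-style coupler; the class and method names are illustrative, not the interface described above:

    ```python
    class ModelInterface:
        """Toy model-to-model coupler: each model registers export/import
        functions keyed by a shared schema name, and the coupler moves data
        between them without either model knowing about the other."""
        def __init__(self):
            self._exports = {}
            self._imports = {}

        def register_export(self, schema, fn):
            self._exports[schema] = fn

        def register_import(self, schema, fn):
            self._imports[schema] = fn

        def transfer(self, schema):
            # Pull data from the exporting model, push it to the importer.
            data = self._exports[schema]()
            self._imports[schema](data)

    # Usage: a (hypothetical) atomistic model exports wall velocities that a
    # continuum model imports as a boundary condition.
    bus = ModelInterface()
    state = {}
    bus.register_export("wall_velocity", lambda: [0.1, 0.2, 0.3])
    bus.register_import("wall_velocity", lambda v: state.update(v=v))
    bus.transfer("wall_velocity")
    ```

    Neither side holds a reference to the other, which is what lets models be swapped without changing their own code.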

  18. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    PubMed

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
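
    The Monte-Carlo step can be illustrated with a toy one-parameter "fit" standing in for the full elementary-metabolite-unit flux fitting; the linear flux/labeling relation and noise level here are hypothetical, not FluxPyt's:

    ```python
    import random
    import statistics

    def monte_carlo_flux_sd(fit_flux, measurement, sigma,
                            n_samples=1000, seed=0):
        """Monte-Carlo uncertainty estimation as used in 13C-MFA tools:
        perturb the measurement with its known noise, re-fit the flux for
        each perturbed sample, and report the standard deviation of the
        fitted values. `fit_flux` stands in for the full fitting step."""
        rng = random.Random(seed)
        fluxes = [fit_flux(measurement + rng.gauss(0.0, sigma))
                  for _ in range(n_samples)]
        return statistics.stdev(fluxes)

    # Hypothetical linear relation: flux = 10 * labeling_fraction
    sd = monte_carlo_flux_sd(lambda m: 10.0 * m, measurement=0.4, sigma=0.01)
    ```

    With a linear fit the flux standard deviation is just the measurement noise scaled by the sensitivity, which makes the toy easy to check; the real value of the method is that it works the same way when the fit is a nonlinear optimization.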

  19. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses

    PubMed Central

    Desai, Trunil S.

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347

  20. Carbon dioxide capture using covalent organic frameworks (COFs) type material-a theoretical investigation.

    PubMed

    Dash, Bibek

    2018-04-26

    The present work deals with a density functional theory (DFT) study of porous organic framework materials containing - groups for CO2 capture. In this study, first-principles calculations were performed for CO2 adsorption using N-containing covalent organic framework (COF) models. Ab initio and DFT-based methods were used to characterize the N-containing porous model systems based on their interaction energies upon complexing with CO2 and nitrogen gas. Binding energies (BEs) of CO2 and N2 molecules with the polymer framework were calculated with DFT methods. The hybrid B3LYP and second-order MP2 methods, combined with the Pople 6-31G(d,p) basis set and the correlation-consistent basis sets cc-pVDZ, cc-pVTZ, and aug-cc-pVDZ, were used to calculate BEs. The effect of linker groups in the designed covalent organic framework model system on the CO2 and N2 interactions was studied using quantum calculations.

  1. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Archuleta County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  2. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, San Miguel County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in San Miguel County identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  3. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Fremont County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Fremont County identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  4. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Routt County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  5. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Alamosa and Saguache Counties, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  6. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Dolores County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. Temperature was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Incoming solar radiation was calculated with the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then obtained using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ are classified as "very warm modeled surface temperature" and shown in red on the map; areas between 1σ and 2σ are classified as "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  7. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method for perceptual quality assessment of the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute a perceptual video quality metric based on a no-reference method. The subtle differences between the original video and the encoded video, introduced by encoding processes such as intra and inter prediction, degrade the quality of the encoded picture. The proposed model accounts for the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts onto a subjective quality estimate is proposed. The model calculates the objective quality metric from subjective impairments (blockiness, blur, and jerkiness), in contrast to the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
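
    A weighted combination of the three impairments into a single no-reference score might look like the following; the weights, normalization, and 0-100 scaling are purely illustrative assumptions, not the model's fitted values:

    ```python
    def perceptual_quality(blockiness, blur, jerkiness,
                           weights=(0.4, 0.35, 0.25)):
        """Hypothetical no-reference quality score in the spirit of the
        proposal: combine three impairment scores (each normalized to
        [0, 1], higher = worse) into one quality value on a 0-100 scale
        (higher = better)."""
        w_block, w_blur, w_jerk = weights
        impairment = w_block * blockiness + w_blur * blur + w_jerk * jerkiness
        return 100.0 * (1.0 - impairment)
    ```

    In practice such weights would be fitted against subjective mean-opinion scores, which is the step the paper's objective modeling performs.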

  8. Predicting in-hospital mortality of traffic victims: A comparison between AIS- and ICD-9-CM-related injury severity scales when only ICD-9-CM is reported.

    PubMed

    Van Belleghem, Griet; Devos, Stefanie; De Wit, Liesbet; Hubloue, Ives; Lauwaert, Door; Pien, Karen; Putman, Koen

    2016-01-01

    Injury severity scores are important in the context of developing European and national goals on traffic safety, health-care benchmarking and improving patient communication. Various severity scores are available and are mostly based on the Abbreviated Injury Scale (AIS) or the International Classification of Diseases (ICD). The aim of this paper is to compare the predictive value for in-hospital mortality between the various severity scores when only International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) codes are reported. To estimate severity scores based on the AIS lexicon, ICD-9-CM codes were converted with the ICD Programmes for Injury Categorization (ICDPIC), and four AIS-based severity scores were derived: Maximum AIS (MaxAIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Exponential Injury Severity Score (EISS). Based on ICD-9-CM, six severity scores were calculated. Determined by the number of injuries taken into account and the means by which survival risk ratios (SRRs) were calculated, four different approaches were used to calculate the ICD-9-based Injury Severity Scores (ICISS). The Trauma Mortality Prediction Model (TMPM) was calculated with the ICD-9-CM-based model-averaged regression coefficients (MARC) for both the single worst injury and multiple injuries. Severity scores were compared via model discrimination and calibration. Model comparisons were performed separately for the severity scores based on the single worst injury and on multiple injuries. For ICD-9-based scales, the estimated area under the receiver operating characteristic curve (AUROC) ranges between 0.94 and 0.96, while AIS-based scales range between 0.72 and 0.76. The intercept in the calibration plots is not significantly different from 0 for MaxAIS, ICISS and TMPM. When only ICD-9-CM codes are reported, ICD-9-CM-based severity scores perform better than severity scores based on the conversion to AIS.
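
    The ICISS family of scores mentioned above is built from survival risk ratios; a minimal sketch of the multiplicative and single-worst-injury variants (SRR values in the usage line are made up for illustration):

    ```python
    import math

    def iciss(srrs):
        """ICD-9-based Injury Severity Score: the product of the survival
        risk ratios (SRRs) of a patient's injuries. An SRR is the empirical
        probability of surviving a given ICD-9-CM injury code, so a lower
        ICISS means a less survivable combination of injuries."""
        return math.prod(srrs)

    def iciss_worst(srrs):
        """Single-worst-injury variant: only the lowest SRR is used."""
        return min(srrs)

    # Hypothetical patient with three coded injuries
    score = iciss([0.99, 0.95, 0.9])
    ```

    The multiplicative form implicitly assumes the injuries contribute independently to survival, which is one reason the paper compares several SRR-derivation approaches.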

  9. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456

  10. Model of coordination melting of crystals and anisotropy of physical and chemical properties of the surface

    NASA Astrophysics Data System (ADS)

    Bokarev, Valery P.; Krasnikov, Gennady Ya

    2018-02-01

    Based on the evaluation of crystal properties such as the surface energy and its anisotropy, the surface melting temperature, the anisotropy of the electron work function, and the anisotropy of adsorption, the advantages of the model of coordination melting (MCM) in calculating the surface properties of crystals were demonstrated. The model of coordination melting makes it possible to calculate, with acceptable accuracy, the specific surface energy of crystals, the anisotropy of the surface energy, the habit of natural crystals, the surface melting temperature of a crystal, the anisotropy of the electron work function, and the anisotropy of the adhesive properties of single-crystal surfaces. The advantage of our model is the simplicity of evaluating the surface properties of a crystal from data given in the reference literature; no complex mathematical apparatus, such as quantum-chemical calculations or molecular dynamics modeling, is required.

  11. Modified creep and shrinkage prediction model B3 for serviceability limit state analysis of composite slabs

    NASA Astrophysics Data System (ADS)

    Gholamhoseini, Alireza

    2016-03-01

    Relatively little research has been reported on the time-dependent in-service behavior of composite concrete slabs with profiled steel decking as permanent formwork and little guidance is available for calculating long-term deflections. The drying shrinkage profile through the thickness of a composite slab is greatly affected by the impermeable steel deck at the slab soffit, and this has only recently been quantified. This paper presents the results of long-term laboratory tests on composite slabs subjected to both drying shrinkage and sustained loads. Based on laboratory measurements, a design model for the shrinkage strain profile through the thickness of a slab is proposed. The design model is based on some modifications to an existing creep and shrinkage prediction model B3. In addition, an analytical model is developed to calculate the time-dependent deflection of composite slabs taking into account the time-dependent effects of creep and shrinkage. The calculated deflections are shown to be in good agreement with the experimental measurements.
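The paper's analytical deflection model is not reproduced in the abstract. As a generic illustration of how creep enters a serviceability calculation, the sketch below uses the age-adjusted effective modulus for a simply supported strip under a sustained uniform load; the function and its parameter values are assumptions, not the authors' model:

```python
def longterm_deflection(w, L, E, I, phi, chi=0.8):
    """Long-term midspan deflection of a simply supported strip under a
    sustained uniform load, using the age-adjusted effective modulus
    E_eff = E / (1 + chi * phi) to account for creep.
    This is a generic serviceability sketch, not the paper's model.
    w: load per unit length, L: span, E: elastic modulus,
    I: second moment of area, phi: creep coefficient,
    chi: aging coefficient (commonly taken around 0.8).
    """
    E_eff = E / (1.0 + chi * phi)
    # Elastic midspan deflection formula with the reduced modulus.
    return 5.0 * w * L**4 / (384.0 * E_eff * I)
```

With a creep coefficient of 2.5, the long-term deflection is three times the instantaneous value, which is why creep and shrinkage dominate composite-slab serviceability checks.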

  12. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, J; Park, J; Kim, L

    2016-06-15

    Purpose: The newly published medical physics practice guideline (MPPG 5.a.) sets the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles, and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable with the difference curve. Subtle discrepancies revealed the limitations of the measurements, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (> 98% pass rate at a 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to high-quality beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.

  13. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

    A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low-energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of the pair and triplet (only equilateral triangle) correlation functions, g(2)(r) and g(3)(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with the composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m-2 is in good agreement with the value inferred by the precipitation model fit to experimental data. The proposed ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
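A minimal sketch of the descriptor idea: concatenated correlation-function histograms used as a feature vector, with ordinary least squares standing in for the paper's ML regressor. The histogram binning and the synthetic training data below are assumptions for illustration:

```python
import numpy as np

def feature_vector(pair_dists, triplet_sides, bins=20, rmax=10.0):
    """Concatenate histograms of pair distances (a proxy for g2(r)) and
    equilateral-triplet side lengths (a proxy for g3(r,r,r)) into one
    feature vector, a simplified stand-in for the paper's descriptor."""
    h2, _ = np.histogram(pair_dists, bins=bins, range=(0.0, rmax))
    h3, _ = np.histogram(triplet_sides, bins=bins, range=(0.0, rmax))
    return np.concatenate([h2, h3]).astype(float)

# Fit E(sigma) ~ X @ w on "DFT-labelled" configurations (synthetic here).
rng = np.random.default_rng(0)
X = np.stack([feature_vector(rng.uniform(0, 10, 200), rng.uniform(0, 10, 50))
              for _ in range(100)])
w_true = rng.normal(size=X.shape[1])
y = X @ w_true                       # synthetic "DFT energies"
w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w_fit                     # cheap surrogate for E(sigma)
```

Once fitted, the surrogate can score millions of configurations to build P(E) and select low-energy candidates for targeted DFT runs.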

  14. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and this accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to the choice of dose calculation algorithm. However, the PTV dose levels averaged over the patient population vary by up to 11% between algorithms. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24, or between 0.35 and 0.48, across the investigated dose algorithms depending on the chosen model parameter set. The influence of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations with density corrections yield quite different NTCP values from calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
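The abstract names the LKB model without giving its equations; the standard Lyman-Kutcher-Burman formulation on a differential DVH is sketched below. Parameter values used in the test are illustrative, not the paper's fitted sets:

```python
import math

def eud(dose_bins, vol_fracs, n):
    """Generalized equivalent uniform dose from a differential DVH:
    EUD = (sum_i v_i * D_i**(1/n))**n, with v_i the fractional volumes."""
    return sum(v * d ** (1.0 / n)
               for d, v in zip(dose_bins, vol_fracs)) ** n

def lkb_ntcp(dose_bins, vol_fracs, TD50, m, n):
    """LKB NTCP = Phi(t), t = (EUD - TD50) / (m * TD50),
    where Phi is the standard normal CDF (via the error function)."""
    t = (eud(dose_bins, vol_fracs, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

By construction, a uniform dose equal to TD50 gives NTCP = 0.5, and the strong dependence on the parameter set (TD50, m, n) is exactly the sensitivity the study reports.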

  15. Chromosphere Active Region Plasma Diagnostics Based On Observations Of Millimeter Radiation

    NASA Astrophysics Data System (ADS)

    Loukitcheva, M.; Nagnibeda, V.

    1999-10-01

    In this paper we present the results of millimeter radiation calculations for different elements of chromospheric and transition-region structures of the quiet Sun and the S-component: elements of the chromospheric network, sunspot groups, and plages. The calculations were done on the basis of standard optical and UV models (the models of Vernazza et al. (1981, VAL) and their modifications by Fontenla et al. (1993, FAL)). We also considered the sunspot model of Lites and Skumanich (1982, LS), the S-component model of Staude et al. (1984), and the modifications of the VAL and FAL models by Bocchialini and Vial (models NET and CELL). We compare these model calculations with the observed characteristics of the components of millimeter solar radiation for the quiet Sun and S-component obtained with the radio telescope RT-7.5 MGTU (wavelength 3.4 mm) and the Nobeyama radioheliograph (wavelength 17.6 mm). From the observations we derived the spectral characteristics of the millimeter sources and the active-region source structure. The comparison has shown that the observed radio data are clearly in disagreement with all the considered models. Finally, we propose further improvement of chromospheric and transition-region models based on optical and UV observations, so that information obtained from radio data can be used in the modelling.

  16. Models construction for acetone-butanol-ethanol fermentations with acetate/butyrate consecutively feeding by graph theory.

    PubMed

    Li, Zhigang; Shi, Zhongping; Li, Xin

    2014-05-01

    Several fermentations with consecutive feeding of acetate/butyrate were conducted in a 7 L fermentor, and the results indicated that exogenous acetate/butyrate enhanced solvent productivities by 47.1% and 39.2%, respectively, and greatly changed the butyrate/acetate ratios. Extracellular butyrate/acetate ratios were then used to calculate acid formation rates, and the results revealed that the acetate and butyrate formation pathways were almost completely blocked by feeding of the corresponding acids. In addition, models for the acetate/butyrate feeding fermentations were constructed by graph theory based on the calculation results and relevant reports. Solvent concentrations and butanol/acetone ratios of these fermentations were also calculated, and the model predictions matched the fermentation data accurately, demonstrating that the models were reasonably constructed.

  17. Band structure and orbital character of monolayer MoS2 with eleven-band tight-binding model

    NASA Astrophysics Data System (ADS)

    Shahriari, Majid; Ghalambor Dezfuli, Abdolmohammad; Sabaeian, Mohammad

    2018-02-01

    In this paper, based on a tight-binding (TB) model, we first present calculations of the eigenvalues as the band structure, and then the eigenvectors as the probability amplitudes for finding an electron in the atomic orbitals of monolayer MoS2 in the first Brillouin zone. The calculations consider hopping processes between nearest-neighbor Mo-S atoms, next-nearest-neighbor in-plane Mo-Mo atoms, and next-nearest-neighbor in-plane and out-of-plane S-S atoms in a three-atom unit cell of two-dimensional rhombic MoS2. The hopping integrals are expressed in terms of Slater-Koster and crystal-field parameters. These parameters are obtained by fitting the TB model to density functional theory (DFT) at the high-symmetry k-points (i.e. the K- and Γ-points). Our TB model includes all 4d Mo orbitals and 3p S orbitals, and a detailed analysis of the orbital character of each energy level at the main high-symmetry points of the Brillouin zone is given. In comparison with DFT calculations, our TB model shows very good agreement for bands near the Fermi level; for bands far from the Fermi level, some discrepancies between the TB model and DFT calculations are observed. Given accurate Slater-Koster and crystal-field parameters, and in contrast to DFT, our model provides enough accuracy to calculate all allowed transitions between energy bands, which is crucial for investigating the linear and nonlinear optical properties of monolayer MoS2.
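The full eleven-band Hamiltonian is beyond a short sketch, but the workflow (build a k-dependent Bloch Hamiltonian from hopping integrals, diagonalize it, read off band energies and orbital weights) can be illustrated with a minimal two-band chain. This is a toy model, not the MoS2 Hamiltonian:

```python
import numpy as np

def ssh_bands(k, t1=1.0, t2=0.6):
    """Bloch Hamiltonian of a two-site (SSH-type) chain: a minimal example
    of assembling a k-dependent tight-binding matrix from hopping integrals
    and diagonalizing it for band energies and orbital character.
    Returns (eigenvalues in ascending order, |amplitude|^2 per orbital)."""
    h = t1 + t2 * np.exp(-1j * k)               # inter-cell phase factor
    H = np.array([[0.0, h], [np.conj(h), 0.0]])  # 2x2 Hermitian Bloch matrix
    evals, evecs = np.linalg.eigh(H)
    weights = np.abs(evecs) ** 2                 # orbital character of bands
    return evals, weights
```

Each column of `weights` sums to one, the same bookkeeping used in the paper to assign Mo-4d versus S-3p character to each band.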

  18. The development and validation of a Monte Carlo model for calculating the out-of-field dose from radiotherapy treatments

    NASA Astrophysics Data System (ADS)

    Kry, Stephen

    Introduction. External beam photon radiotherapy is a common treatment for many malignancies, but results in the exposure of the patient to radiation away from the treatment site. This out-of-field radiation irradiates healthy tissue and may lead to the induction of secondary malignancies. Out-of-field radiation is composed of photons and, at high treatment energies, neutrons. Measurement of this out-of-field dose is time consuming, often difficult, and is specific to the conditions of the measurements. Monte Carlo simulations may be a viable approach to determining the out-of-field dose quickly, accurately, and for arbitrary irradiation conditions. Methods. An accelerator head, gantry, and treatment vault were modeled with MCNPX and 6 MV and 18 MV beams were simulated. Photon doses were calculated in-field and compared to measurements made with an ion chamber in a water tank. Photon doses were also calculated out-of-field from static fields and compared to measurements made with thermoluminescent dosimeters in acrylic. Neutron fluences were calculated and compared to measurements made with gold foils. Finally, photon and neutron dose equivalents were calculated in an anthropomorphic phantom following intensity-modulated radiation therapy and compared to previously published dose equivalents. Results. The Monte Carlo model was able to accurately calculate the in-field dose. From static treatment fields, the model was also able to calculate the out-of-field photon dose within 16% at 6 MV and 17% at 18 MV and the neutron fluence within 19% on average. From the simulated IMRT treatments, the calculated out-of-field photon dose was within 14% of measurement at 6 MV and 13% at 18 MV on average. The calculated neutron dose equivalent was much lower than the measured value but is likely accurate because the measured neutron dose equivalent was based on an overestimated neutron energy. 
Based on the calculated out-of-field doses generated by the Monte Carlo model, it was possible to estimate the risk of fatal secondary malignancy, which was consistent with previous estimates except for the neutron discrepancy. Conclusions. The Monte Carlo model developed here is well suited to studying the out-of-field dose equivalent from photons and neutrons under a variety of irradiation configurations, including complex treatments on complex phantoms. Based on the calculated dose equivalents, it is possible to estimate the risk of secondary malignancy associated with out-of-field doses. The Monte Carlo model should be used to study, quantify, and minimize the out-of-field dose equivalent and associated risks received by patients undergoing radiation therapy.

  19. Tree value system: description and assumptions.

    Treesearch

    D.G. Briggs

    1989-01-01

    TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...

  20. The Lα (λ = 121.6 nm) solar plage contrasts calculations.

    NASA Astrophysics Data System (ADS)

    Bruevich, E. A.

    1991-06-01

    The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model of the Lα solar flux is applied, using "Prognoz-10" and SME daily smoothed values of the Lα solar flux. The calculated contrast values are discussed and compared with experimental values based on "Skylab" data.

  1. Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution

    NASA Astrophysics Data System (ADS)

    Plekhov, O. A.; Kostina, A. A.

    2017-05-01

    This work proposes a numerical method for lifetime assessment of metallic materials based on consideration of the energy balance at the crack tip. The method rests on evaluating the stored energy per loading cycle. To calculate the stored and dissipated parts of the deformation energy, an elasto-plastic phenomenological model of the energy balance in metals under deformation and failure was proposed. The key point of the model is a strain-type internal variable describing the energy storage process. This parameter is introduced, based on a statistical description of defect evolution in metals, as a second-order tensor with the meaning of an additional strain due to the initiation and growth of defects. The fatigue crack rate was calculated in the framework of a stationary-crack approach (several loading cycles were considered at every crack length to estimate the energy balance at the crack tip). The application of the proposed algorithm is illustrated by calculating the lifetime of a Ti-1Al-1Mn compact tension specimen under cyclic loading.

  2. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
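The core of the method is convolving TPS-calculated profiles with the detector response function before matching them to measurement. A minimal sketch, assuming a Gaussian response with FWHM comparable to the chamber cavity size (the real CC13 response function differs):

```python
import numpy as np

def convolve_with_detector(profile, dx_mm, fwhm_mm):
    """Convolve a calculated beam profile with a Gaussian detector response
    so it can be compared directly with an ionization-chamber measurement.
    profile: dose samples on a uniform grid with spacing dx_mm;
    fwhm_mm: assumed detector response width (illustrative)."""
    sigma = fwhm_mm / 2.3548                      # FWHM -> standard deviation
    x = np.arange(-4 * sigma, 4 * sigma + dx_mm / 2, dx_mm)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                        # preserve total dose
    return np.convolve(profile, kernel, mode="same")
```

During beam-model optimization, the penumbra parameters are adjusted until the convolved calculated profile matches the chamber-measured one; since both carry the same volume averaging, the unconvolved model then approximates the true profile.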

  3. Deformed shell model study of event rates for WIMP-73Ge scattering

    NASA Astrophysics Data System (ADS)

    Sahu, R.; Kota, V. K. B.

    2017-12-01

    The event detection rates for Weakly Interacting Massive Particles (WIMPs, a dark matter candidate) are calculated with 73Ge as the detector. The calculations are performed within the deformed shell model (DSM) based on Hartree-Fock states. First, the energy levels and the magnetic moments of the ground state and two low-lying positive-parity states of this nucleus are calculated and compared with experiment; the agreement is quite satisfactory. The nuclear wave functions are then used to investigate the elastic and inelastic scattering of WIMPs from 73Ge; inelastic scattering, especially for the 9/2+ → 5/2+ transition, is studied for the first time. The nuclear structure factors, which are independent of the supersymmetric model, are also calculated as a function of the WIMP mass. The event rates are calculated for a given set of nucleonic current parameters. The calculation shows that 73Ge is a good detector for dark matter searches.

  4. Separated transonic airfoil flow calculations with a nonequilibrium turbulence model

    NASA Technical Reports Server (NTRS)

    King, L. S.; Johnson, D. A.

    1985-01-01

    Navier-Stokes transonic airfoil calculations based on a recently developed nonequilibrium, turbulence closure model are presented for a supercritical airfoil section at transonic cruise conditions and for a conventional airfoil section at shock-induced stall conditions. Comparisons with experimental data are presented which show that this nonequilibrium closure model performs significantly better than the popular Baldwin-Lomax and Cebeci-Smith equilibrium algebraic models when there is boundary-layer separation that results from the inviscid-viscous interactions.

  5. A Simple Sensor Model for THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Bryant, Robert G.

    2009-01-01

    A quasi-static (low-frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for obtaining rough estimates of sensor output for various loads, laminate configurations, and thicknesses.

  6. STEADY STATE FLAMMABLE GAS RELEASE RATE CALCULATION AND LOWER FLAMMABILITY LEVEL EVALUATION FOR HANFORD TANK WASTE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HU TA

    2009-10-26

    The purpose of this work is to assess the steady-state flammability level under normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for the 177 tanks for various scenarios.
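The report's flammability method is not detailed in the abstract; the standard way to combine hydrogen, ammonia, and methane contributions is Le Chatelier's mixing rule, sketched here with handbook LFL values. The function names and the percent-of-LFL framing are illustrative, not taken from the report:

```python
# Lower flammability limits in air (vol %); standard handbook values.
LFL = {"H2": 4.0, "NH3": 15.0, "CH4": 5.0}

def mixture_lfl(fractions):
    """Le Chatelier's rule for the LFL of a fuel mixture:
    LFL_mix = 1 / sum_i(y_i / LFL_i), where y_i is the fraction of
    species i relative to total fuel (fractions must sum to 1)."""
    return 1.0 / sum(y / LFL[s] for s, y in fractions.items())

def percent_of_lfl(conc_vol_pct, fractions):
    """Headspace fuel concentration expressed as a percentage of the
    mixture LFL, the kind of quantity compared against safety limits."""
    return 100.0 * conc_vol_pct / mixture_lfl(fractions)
```

For pure hydrogen the mixture LFL is simply 4 vol %, so a 1 vol % headspace concentration sits at 25% of the LFL.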

  7. Modeling the performance and cost of lithium-ion batteries for electric-drive vehicles.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, P. A.

    2011-10-20

    This report details the Battery Performance and Cost model (BatPaC) developed at Argonne National Laboratory for lithium-ion battery packs used in automotive transportation. The model designs the battery for a specified power, energy, and type of vehicle battery. The cost of the designed battery is then calculated by accounting for every step in the lithium-ion battery manufacturing process. The assumed annual production level directly affects each process step. The total cost to the original equipment manufacturer calculated by the model includes the materials, manufacturing, and warranty costs for a battery produced in the year 2020 (in 2010 US$). At the time this report is written, this is the only publicly available model that performs a bottom-up lithium-ion battery design and cost calculation. Both the model and the report have been publicly peer-reviewed by battery experts assembled by the U.S. Environmental Protection Agency. This report and the accompanying model include changes made in response to the comments received during the peer review. The purpose of the report is to document the equations and assumptions from which the model has been created. A user of the model will be able to recreate the calculations and, perhaps more importantly, understand the driving forces behind the results. Instructions for use and an illustration of model results are also presented. Almost every variable in the calculation may be changed by the user to represent a system different from the default values pre-entered into the program. The distinct advantage of using a bottom-up cost and design model is that the entire power-to-energy space may be traversed to examine the correlation between performance and cost. The BatPaC model accounts for the physical limitations of the electrochemical processes within the battery. Thus, unrealistic designs are penalized in energy density and cost, unlike cost models based on linear extrapolations.
Additionally, the consequences on cost and energy density of changes in cell capacity, parallel cell groups, and manufacturing capabilities are easily assessed with the model. New proposed materials may also be examined to translate bench-scale values into the design of full-scale battery packs, providing realistic energy densities and prices to the original equipment manufacturer. The model will be openly distributed to the public in the year 2011. Currently, the calculations are based in a Microsoft® Office Excel spreadsheet. Instructions are provided for use; however, the format is admittedly not user-friendly. A parallel development effort has created an alternate version based on a graphical user interface that will be more intuitive to some users. The more user-friendly version should allow for wider adoption of the model.

  8. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes, and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model shows the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the sensitivity of the calculated GHI values to changes in the microphysics, cumulus parameterization, and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeor classes, which helps to improve the calculation of the densities, sizes, and lifetimes of cloud droplets, whereas the single-moment schemes predict only the mass of fewer hydrometeor classes. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.
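As a small illustration of the evaluation step, here is one common definition of the relative bias between calculated and observed GHI; the paper's exact metric may differ:

```python
def relative_bias_pct(calculated, observed):
    """Mean bias of calculated GHI relative to the mean observation, in %.
    Positive values mean the model over-predicts irradiance on average."""
    n = len(calculated)
    mean_obs = sum(observed) / n
    bias = sum(c - o for c, o in zip(calculated, observed)) / n
    return 100.0 * bias / mean_obs
```

Under this definition, the reported improvement from 45% to 10% corresponds to the mean over-prediction shrinking from nearly half of the observed irradiance to a tenth of it.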

  9. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  10. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  11. Radiation damage to DNA in DNA-protein complexes.

    PubMed

    Spotheim-Maurizot, M; Davídková, M

    2011-06-03

    The most aggressive product of water radiolysis, the hydroxyl (OH) radical, is responsible for the indirect effect of ionizing radiation on DNA in solution under aerobic conditions. According to radiolytic footprinting experiments, the resulting strand breaks and base modifications are inhomogeneously distributed along the DNA molecule, whether it is irradiated free or bound to ligands (polyamines, thiols, proteins). RADACK, a Monte Carlo-based model that simulates the reaction of OH radicals with macromolecules, makes it possible to calculate the relative damage probability of each nucleotide of DNA irradiated alone or in complexes with proteins. RADACK calculations require knowledge of the three-dimensional structure of DNA and its complexes (determined by X-ray crystallography, NMR spectroscopy or molecular modeling). Confronting the calculated values with the results of the radiolytic footprinting experiments, together with molecular modeling calculations, shows that: (1) the extent and location of the lesions are strongly dependent on the structure of DNA, which in turn is modulated by the base sequence and by the binding of proteins, and (2) the regions in contact with the protein can be protected against attack by hydroxyl radicals via masking of the binding site and by scavenging of the radicals. 2011 Elsevier B.V. All rights reserved.

  12. Ecological footprint model using the support vector machine technique.

    PubMed

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports in total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
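    The two accuracy metrics quoted in this abstract (average absolute error and average relative error) can be sketched as follows; the EF values below are hypothetical illustrations, not data from the study:

```python
def average_errors(predicted, observed):
    """Return (average absolute error, average relative error in %)."""
    n = len(predicted)
    abs_err = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
    rel_err = 100.0 * sum(abs(p - o) / o for p, o in zip(predicted, observed)) / n
    return abs_err, rel_err

# Hypothetical per capita EF values (global hectares per person), for illustration only
predicted = [2.10, 4.95, 1.32]
observed = [2.00, 5.00, 1.30]
aae, are = average_errors(predicted, observed)
```

    The same two functions apply regardless of whether the predictions come from an SVM or any other regression model.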

  13. Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF

    NASA Astrophysics Data System (ADS)

    Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos

    2018-07-01

    In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For wear, a methodology based on Archard's wear theory is used, and the simulated wear depth is compared with profile measurements over 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm instead of FASTSIM to estimate the tangential stresses; the effect of the choice of algorithm on the damage prediction models is studied. The running distance between two consecutive reprofilings due to RCF is estimated using a Wöhler-like relationship, developed from laboratory test results in the literature, together with the Palmgren-Miner rule. The simulated crack locations and angles are compared with a five-year field study. The effects of electro-dynamic braking, track gauge, harder wheel material and increased axle load on wheel life are also investigated.
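    The Wöhler-plus-Palmgren-Miner damage accumulation mentioned above can be sketched as follows; the S-N curve constants C and k are hypothetical placeholders, not values from the cited laboratory tests:

```python
def cycles_to_failure(stress_amplitude, C=1.0e12, k=3.0):
    """Woehler-like S-N relation N = C * S**(-k); C and k are hypothetical placeholders."""
    return C * stress_amplitude ** (-k)

def miner_damage(load_spectrum):
    """Palmgren-Miner rule: D = sum(n_i / N_i); failure is predicted when D >= 1."""
    return sum(n_cycles / cycles_to_failure(s) for s, n_cycles in load_spectrum)

# (stress amplitude in MPa, cycles experienced at that amplitude) -- illustrative only
spectrum = [(500.0, 2.0e5), (800.0, 5.0e4)]
damage = miner_damage(spectrum)
```

    In a wheel-life context the accumulated damage per unit running distance gives the estimated distance between reprofilings.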

  14. Density-matrix based determination of low-energy model Hamiltonians from ab initio wavefunctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Changlani, Hitesh J.; Zheng, Huihuo; Wagner, Lucas K.

    2015-09-14

    We propose a way of obtaining effective low-energy Hubbard-like model Hamiltonians from ab initio quantum Monte Carlo calculations for molecular and extended systems. The Hamiltonian parameters are fit to best match the ab initio two-body density matrices and energies of the ground and excited states, and thus we refer to the method as ab initio density matrix based downfolding. For benzene (a finite system), we find good agreement with experimentally available energy gaps without using any experimental inputs. For graphene, a two-dimensional solid (extended system) with periodic boundary conditions, we find the effective on-site Hubbard U*/t to be 1.3 ± 0.2, comparable to a recent estimate based on the constrained random phase approximation. For molecules, such parameterizations enable calculation of excited states that are usually not accessible within ground-state approaches. For solids, the effective Hamiltonian enables large-scale calculations using techniques designed for lattice models.

  15. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304,517 of the 539,171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309

  16. The calculation of theoretical chromospheric models and the interpretation of the solar spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1994-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.

  17. The calculation of theoretical chromospheric models and the interpretation of solar spectra from rockets and spacecraft

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1993-01-01

    Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.

  18. Electromagnetic field radiation model for lightning strokes to tall structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motoyama, H.; Janischewskyj, W.; Hussein, A.M.

    1996-07-01

    This paper describes the observation and analysis of electromagnetic field radiation from lightning strokes to tall structures. Electromagnetic field waveforms and current waveforms of lightning strokes to the CN Tower have been measured simultaneously since 1991. A new calculation model of electromagnetic field radiation is proposed, consisting of a lightning current propagation and distribution model and an electromagnetic field radiation model. Electromagnetic fields calculated by the proposed model, based on the observed lightning current at the CN Tower, agree well with the fields observed 2 km north of the tower.

  19. Portrait of a Working Model for Calculating Student Retention.

    ERIC Educational Resources Information Center

    Shelton, Dick; And Others

    Since 1988, South Carolina's Piedmont Technical College (PTC) has been engaged in a process to develop a functional model for calculating student retention. The college has defined retention as a series of levels at which students and the college persist and work to fulfill goals. This definition is based on the ideas that there is no single…

  20. Activity-based differentiation of pathologists' workload in surgical pathology.

    PubMed

    Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M

    2009-06-01

    Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in the types and numbers of specimens handled, or in the protocols used, directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring pathologists' workload that can take such changes into account. The diagnostic process was analyzed and broken up into separate activities, and the time needed to perform these activities was measured. Based on linear regression analysis, the time needed for each activity was calculated as a function of the number of slides or blocks involved. The total pathologist time required for a range of specimens was calculated based on standard protocols and validated by comparison with the actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated with the number of blocks and/or slides per specimen, and the calculated workload per type of specimen was significantly correlated with the actually measured workload. Modeling pathologists' workload with formulas that express the workload per type of specimen as a function of the number of blocks and slides provides the basis for a comprehensive, yet flexible, activity-based costing system for pathology.
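    The per-activity linear workload model described above can be sketched as follows; the activity names and regression coefficients are hypothetical placeholders (the study fits its own from timed measurements):

```python
# Per-activity time models of the form t = intercept + slope * count, fitted
# (in the study) by linear regression; coefficients here are hypothetical.
ACTIVITY_MODELS = {
    "cut_up":     (3.0, 1.5),   # minutes = 3.0 + 1.5 * blocks
    "microscopy": (2.0, 2.5),   # minutes = 2.0 + 2.5 * slides
    "dictation":  (1.0, 0.5),   # minutes = 1.0 + 0.5 * slides
}

def specimen_workload(blocks, slides):
    """Total pathologist time (minutes) for one specimen, summed over activities."""
    total = 0.0
    for activity, (intercept, slope) in ACTIVITY_MODELS.items():
        count = blocks if activity == "cut_up" else slides
        total += intercept + slope * count
    return total

t = specimen_workload(blocks=4, slides=6)  # illustrative specimen
```

    Summing such per-specimen estimates over a year's case mix yields the activity-based cost basis the abstract describes.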

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, J.R.; Lu, Z.; Ring, D.M.

    We have examined a variety of structures for the {510} symmetric tilt boundary in Si and Ge, using tight-binding and first-principles calculations. These calculations show that the observed structure in Si is the lowest-energy structure, despite the fact that it is more complicated than what is necessary to preserve fourfold coordination. Contrary to calculations using a Tersoff potential, first-principles calculations show that the energy depends strongly upon the structure. A recently developed tight-binding model for Si produces results in very good agreement with the first-principles calculations. Electronic density of states calculations based upon this model show no evidence of midgap states and little evidence of electronic states localized to the grain boundary. © 1998 The American Physical Society.

  2. Spectral irradiance variations: comparison between observations and the SATIRE model on solar rotation time scales

    NASA Astrophysics Data System (ADS)

    Unruh, Y. C.; Krivova, N. A.; Solanki, S. K.; Harder, J. W.; Kopp, G.

    2008-07-01

    Aims: We test the reliability of the observed and calculated spectral irradiance variations between 200 and 1600 nm over a time span of three solar rotations in 2004. Methods: We compare our model calculations to spectral irradiance observations taken with SORCE/SIM, SoHO/VIRGO, and UARS/SUSIM. The calculations assume LTE and are based on the SATIRE (Spectral And Total Irradiance REconstruction) model. We analyse the variability as a function of wavelength and present time series in a number of selected wavelength regions covering the UV to the NIR. We also show the facular and spot contributions to the total calculated variability. Results: In most wavelength regions, the variability agrees well between all sets of observations and the model calculations. The model does particularly well between 400 and 1300 nm, but fails below 220 nm, as well as for some of the strong NUV lines. Our calculations clearly show the shift from faculae-dominated variability in the NUV to spot-dominated variability above approximately 400 nm. We also discuss some of the remaining problems, such as the low sensitivity of SUSIM and SORCE for wavelengths between approximately 310 and 350 nm, where currently the model calculations still provide the best estimates of solar variability.

  3. 77 FR 61604 - Exposure Modeling Public Meeting; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-10

    ..., birds, reptiles, and amphibians: Model Parameterization and Knowledge base Development. 4. Standard Operating Procedure for calculating degradation kinetics. 5. Aquatic exposure modeling using field studies...

  4. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.

  5. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; contributors, JET

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model, and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source for the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected as the main fusion product generator in the complete analysis calculation chain ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures, as well as cooling and balance-of-plant in DEMO applications and other reactor-relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from the ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended to synthetic gamma diagnostics and will also be used as part of the neutron transport calculation chain to model real diagnostics, instead of ideal synthetic diagnostics, for quantitative benchmarking.

  6. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  7. Dill: an algorithm and a symbolic software package for doing classical supersymmetry calculations

    NASA Astrophysics Data System (ADS)

    Lučić, Vladan

    1995-11-01

    An algorithm is presented that formalizes the different steps in a classical supersymmetric (SUSY) calculation. Based on this algorithm, Dill, a symbolic software package that can perform such calculations, was developed in the Mathematica programming language. While the algorithm is quite general, the package was created for the 4-D, N = 1 model. Nevertheless, with little modification, the package could be used for other SUSY models. The package has been tested and some of the results are presented.

  8. Electron- and positron-impact atomic scattering calculations using propagating exterior complex scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.

    2007-11-01

    Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = -(8 + (r1 - r2)^2)^(-1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.

  9. Calculating the dermal flux of chemicals with OELs based on their molecular structure: An attempt to assign the skin notation.

    PubMed

    Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir

    2010-09-01

    Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. Many studies are available in which formulae for the coefficient of permeability from saturated aqueous solutions (Kp) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms database, were used to calculate permeability coefficients. The dermal penetration rate was estimated from the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals, with critical fluxes defined from the literature recommended as reference values. The application of Abraham descriptors predicted from chemical structure and LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of the calculated Kp values with data obtained earlier from other models showed that the LFER predictions were comparable to those of some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone: both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict the skin notation on the basis of the LFER and other published models; of the 112 chemicals, 94 (84%) should carry the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified as having significant absorption and 65 (58%) as having the potential for dermal toxicity. We found major differences between alternative published analytical models in their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
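    The Kp-to-flux-to-notation pipeline described above can be sketched as follows. The LFER coefficients, descriptor values, and critical flux below are hypothetical placeholders, and the flux formula uses the standard steady-state definition (flux = Kp × saturated concentration), which is an assumption, not a value or formula taken from the study:

```python
def log_kp(E, S, A, B, V, coeffs=(-2.7, 0.3, -0.5, -1.6, -3.0, 2.3)):
    """Abraham-type LFER solvation equation:
    log Kp = c + e*E + s*S + a*A + b*B + v*V
    Coefficient values here are hypothetical placeholders."""
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

def dermal_flux(kp, c_sat):
    """Steady-state flux (mg/cm^2/h) = Kp (cm/h) * saturated concentration (mg/cm^3)."""
    return kp * c_sat

def needs_skin_notation(flux, critical_flux=0.1):
    """Assign the skin notation when the estimated flux reaches a critical reference value."""
    return flux >= critical_flux

kp = 10 ** log_kp(E=0.8, S=0.9, A=0.3, B=0.4, V=1.0)  # hypothetical descriptor values
flux = dermal_flux(kp, c_sat=5.0)
flag = needs_skin_notation(flux)
```

    With real Abraham descriptors and fitted coefficients, the same three steps reproduce the notation-assignment logic the abstract outlines.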

  10. An ab initio chemical reaction model for the direct simulation Monte Carlo study of non-equilibrium nitrogen flows.

    PubMed

    Mankodi, T K; Bhandarkar, U V; Puranik, B P

    2017-08-28

    A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field / complete active space second-order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high-energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.

  11. Method of calculating tsunami travel times in the Andaman Sea region

    PubMed Central

    Visuthismajarn, Parichart; Tanavud, Charlchai; Robson, Mark G.

    2014-01-01

    A new model to calculate tsunami travel times in the Andaman Sea region has been developed. The model specifically provides more accurate travel time estimates for tsunamis propagating to Patong Beach on the west coast of Phuket, Thailand. More generally, the model provides better understanding of the influence of the accuracy and resolution of bathymetry data on the accuracy of travel time calculations. The dynamic model is based on solitary wave theory, and a lookup function is used to perform bilinear interpolation of bathymetry along the ray trajectory. The model was calibrated and verified using data from an echosounder record, tsunami photographs, satellite altimetry records, and eyewitness accounts of the tsunami on 26 December 2004. Time differences for 12 representative targets in the Andaman Sea and the Indian Ocean regions were calculated. The model demonstrated satisfactory time differences (<2 min/h), despite the use of low resolution bathymetry (ETOPO2v2). To improve accuracy, the dynamics of wave elevation and a velocity correction term must be considered, particularly for calculations in the nearshore region. PMID:25741129

  12. Method of calculating tsunami travel times in the Andaman Sea region.

    PubMed

    Kietpawpan, Monte; Visuthismajarn, Parichart; Tanavud, Charlchai; Robson, Mark G

    2008-07-01

    A new model to calculate tsunami travel times in the Andaman Sea region has been developed. The model specifically provides more accurate travel time estimates for tsunamis propagating to Patong Beach on the west coast of Phuket, Thailand. More generally, the model provides better understanding of the influence of the accuracy and resolution of bathymetry data on the accuracy of travel time calculations. The dynamic model is based on solitary wave theory, and a lookup function is used to perform bilinear interpolation of bathymetry along the ray trajectory. The model was calibrated and verified using data from an echosounder record, tsunami photographs, satellite altimetry records, and eyewitness accounts of the tsunami on 26 December 2004. Time differences for 12 representative targets in the Andaman Sea and the Indian Ocean regions were calculated. The model demonstrated satisfactory time differences (<2 min/h), despite the use of low resolution bathymetry (ETOPO2v2). To improve accuracy, the dynamics of wave elevation and a velocity correction term must be considered, particularly for calculations in the nearshore region.
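    The core of the travel time calculation described above (long-wave speed from bathymetry, bilinear interpolation along the ray, summing dt = ds/c) can be sketched as follows; the grid values and path are hypothetical illustrations, not the study's ETOPO2v2 data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bilinear_depth(grid, x, y):
    """Bilinearly interpolate water depth (m) from a regular bathymetry grid
    indexed in grid-cell units."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * grid[j][i] + fx * (1 - fy) * grid[j][i + 1]
            + (1 - fx) * fy * grid[j + 1][i] + fx * fy * grid[j + 1][i + 1])

def travel_time(grid, path, cell_size_m):
    """Sum dt = ds / c along a ray, using the long-wave speed c = sqrt(g * depth)
    evaluated at each segment midpoint."""
    t = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        ds = math.hypot(x1 - x0, y1 - y0) * cell_size_m
        depth = bilinear_depth(grid, (x0 + x1) / 2, (y0 + y1) / 2)
        t += ds / math.sqrt(G * depth)
    return t

# Hypothetical 3x3 bathymetry (depths in metres) and a two-segment ray
bathy = [[4000, 4000, 4000], [4000, 3000, 2000], [2000, 1000, 100]]
t_seconds = travel_time(bathy, [(0.0, 0.0), (1.0, 1.0), (1.9, 1.9)], cell_size_m=20000)
```

    As the abstract notes, near shore this simple kinematic scheme degrades, since wave elevation dynamics and a velocity correction term are then needed.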

  13. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time, high-accuracy propeller model is required. A study is conducted to compare the real-time performance and precision of propeller models based on strip theory and lifting surface theory. The modeling by strip theory focuses on three points: first, FLUENT is adopted to calculate the lift and drag coefficients of the propeller; next, a method to calculate the induced velocity that occurs in the ground rig test is presented; finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has advantages in both real-time performance and accuracy, can meet the requirement.

  14. New generation of universal modeling for centrifugal compressors calculation

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.

    2015-08-01

    The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail, and it is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths, yet the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the computer program is already being applied successfully in design practice.

  15. Results of EAS characteristics calculations in the framework of the universal hadronic interaction model NEXUS

    NASA Astrophysics Data System (ADS)

    Kalmykov, N. N.; Ostapchenko, S. S.; Werner, K.

    An extensive air shower (EAS) calculation scheme based on cascade equations is presented, together with some EAS characteristics for energies of 10^14-10^17 eV. The universal hadronic interaction model NEXUS is employed to provide the necessary data on hadron-air collisions. The influence of model assumptions on the longitudinal EAS development is discussed in the framework of the NEXUS and QGSJET models. The prospects of combined Monte Carlo and numerical methods applied to EAS simulations are considered.

  16. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual components of the damage parameters are calculated as spring reaction forces, relative displacements, and energy release rates along the delamination front.

  17. Trust Model of Wireless Sensor Networks and Its Application in Data Fusion

    PubMed Central

    Chen, Zhenguo; Tian, Liqin; Lin, Chuang

    2017-01-01

    In order to ensure the reliability and credibility of the data in wireless sensor networks (WSNs), this paper proposes a trust evaluation model and a trust-based data fusion mechanism. First, the model structure is given; then, the calculation rules for trust are given. In the trust evaluation model, comprehensive trust consists of three parts: behavior trust, data trust, and historical trust. Data trust can be calculated by processing the sensor data. Based on the behavior of nodes in sensing and forwarding, the behavior trust is obtained. The initial value of historical trust is set to the maximum and updated with the comprehensive trust. Comprehensive trust is obtained by a weighted calculation, and the model is then used to construct the trust list and guide the process of data fusion. Simulation results indicate that, using the trust model, energy consumption can be reduced by an average of 15%, and the detection rate of abnormal nodes is at least 10% higher than that of the lightweight and dependable trust system (LDTS) model. Therefore, this model performs well in ensuring the reliability and credibility of the data, while greatly reducing transmission energy consumption. PMID:28350347
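    The weighted combination described above can be sketched in a few lines. This is a minimal illustration only: the weights, the history update rule, and the admission threshold below are assumptions made for the example, not the values used in the paper.

```python
# Hypothetical sketch of the weighted comprehensive-trust combination.
# Weights, update rule, and threshold are illustrative assumptions.

def comprehensive_trust(behavior, data, history, w=(0.4, 0.3, 0.3)):
    """Weighted combination of behavior, data, and historical trust (all in [0, 1])."""
    wb, wd, wh = w
    return wb * behavior + wd * data + wh * history

# Historical trust starts at the maximum (1.0) and is updated with the
# comprehensive trust of the current round.
history = 1.0
for behavior, data in [(0.9, 0.8), (0.7, 0.6)]:
    trust = comprehensive_trust(behavior, data, history)
    history = trust  # history follows the latest comprehensive trust

# A node would be admitted to the fusion trust list only above a threshold.
TRUST_THRESHOLD = 0.5
print(trust, trust >= TRUST_THRESHOLD)
```
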

  18. Calculated quantum yield of photosynthesis of phytoplankton in the Marine Light-Mixed Layers (59 deg N, 21 deg W)

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.

    1995-01-01

    The quantum yield of photosynthesis, phi (mol C/mol photons), was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three methods are presented for calculating the photons absorbed (AP) by phytoplankton for the purpose of calculating phi. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived from remote sensing measurements. We show that the values of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the values reported by other studies. In deep waters, however, the simple nonspectral model may produce quantum yield values much higher than is theoretically possible.
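    As a numerical illustration of the quantity being estimated: phi is the ratio of carbon fixed to photons absorbed. The sketch below uses a crude nonspectral AP estimate in which absorbed photons are apportioned by a mean phytoplankton absorption coefficient; all numbers (PAR, absorption coefficients, production rate) are hypothetical.

```python
# Illustrative quantum-yield calculation: phi = carbon fixed / photons absorbed.
# All values are hypothetical; a_ph is a mean phytoplankton absorption coefficient.

PAR = 100.0      # mol photons m^-2 d^-1 reaching the depth (hypothetical)
a_ph = 0.02      # mean phytoplankton absorption coefficient, m^-1 (hypothetical)
a_total = 0.10   # total absorption coefficient, m^-1 (hypothetical)

AP = PAR * a_ph / a_total   # crude nonspectral estimate of absorbed photons
PP = 0.8                    # primary production, mol C m^-2 d^-1 (hypothetical)
phi = PP / AP               # quantum yield, mol C per mol photons
print(round(phi, 3))
```
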

  19. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series, as well as of SrVO3, YTiO3, Ce, and Gd, has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparison with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results; the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
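    For orientation, RPA-based estimation of U is commonly written in the constrained-RPA form below, in which the polarization within the correlated d subspace is removed before screening; this is the standard schematic form, not necessarily the authors' exact implementation:

```latex
P_r(\omega) = P(\omega) - P_d(\omega), \qquad
W_r(\omega) = \left[ 1 - v\,P_r(\omega) \right]^{-1} v, \qquad
U(\omega) = \left\langle \phi_d \phi_d \right| W_r(\omega) \left| \phi_d \phi_d \right\rangle
```

Here $P$ is the full polarization, $P_d$ the d-d contribution, $v$ the bare Coulomb interaction, and the static value discussed in the abstract is $U(\omega = 0)$.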

  20. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows the estimation of cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities; it has an impact on tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations that are fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimation of the cutting forces, chip morphology, and chip flow direction. The second is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (an applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation step, such as the contact pressure, sliding velocity distributions, and contact area. Comparisons, on the one hand between experimental data and the first calculation, and on the other hand between temperatures measured with embedded thermocouples and the second calculation, show good agreement in terms of cutting forces, chip morphology, and cutting temperature.

  1. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake modelling calculation, where a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.
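    The Jensen model mentioned above is a simple top-hat wake model; a minimal sketch follows. All parameter values are illustrative, and letting stability enter through the wake decay constant k is only one common convention, not the stratification transformation developed in the paper.

```python
import math

# Minimal sketch of the (top-hat) Jensen wake model used in AEP calculations.
# Parameter values are illustrative, not those of the study.

def jensen_deficit(ct, rotor_diameter, x, k=0.05):
    """Fractional wind-speed deficit a distance x downstream of a turbine.

    ct: thrust coefficient; k: wake decay constant (often taken smaller for
    stable stratification and larger for unstable conditions).
    """
    r0 = rotor_diameter / 2.0
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

u_inf = 8.0  # free-stream wind speed, m/s (hypothetical)
deficit = jensen_deficit(ct=0.8, rotor_diameter=100.0, x=500.0)
u_wake = u_inf * (1.0 - deficit)
print(round(u_wake, 2))
```
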

  2. Examinations of electron temperature calculation methods in Thomson scattering diagnostics.

    PubMed

    Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin

    2012-10-01

    Electron temperature from the Thomson scattering diagnostic is derived through an indirect calculation based on a theoretical model. The χ-square test is commonly used in this calculation, and the reliability of the calculation method depends strongly on the noise level of the input signals. In the simulations, noise effects on the χ-square test are examined, and a scale factor test is proposed as an alternative method.
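    A toy version of the χ-square approach can be sketched as follows. The exponential "spectral model" and the channel constants here are hypothetical stand-ins for the actual Thomson scattering model; only the fitting logic (scan candidate temperatures, keep the χ² minimizer) is the point.

```python
import math
import random

# Toy chi-square fit for Te. The model s_i = exp(-E_i / Te) is a hypothetical
# stand-in for the real Thomson spectral-channel response.

def model(te, energies):
    return [math.exp(-e / te) for e in energies]

energies = [1.0, 2.0, 3.0]   # hypothetical channel constants (keV)
true_te = 1.5
random.seed(0)
measured = [m + random.gauss(0, 0.01) for m in model(true_te, energies)]

def chi_square(te):
    """Chi-square between measured signals and the model at temperature te."""
    return sum((y - m) ** 2 / 0.01 ** 2
               for y, m in zip(measured, model(te, energies)))

# Scan candidate temperatures and keep the minimizer.
candidates = [0.5 + 0.01 * i for i in range(300)]
best_te = min(candidates, key=chi_square)
print(round(best_te, 2))
```

The indirectness noted in the abstract is visible here: the reliability of `best_te` degrades as the noise level in `measured` grows.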

  3. Rocket exhaust ground cloud/atmospheric interactions

    NASA Technical Reports Server (NTRS)

    Hwang, B.; Gould, R. K.

    1978-01-01

    An attempt is made to identify and minimize the uncertainties and potential inaccuracies of the NASA Multilayer Diffusion Model (MDM) using data from selected Titan 3 launches. The study is based on detailed parametric calculations using the MDM code and a comparative study of several other diffusion models, the NASA measurements, and the MDM. The results are discussed and evaluated. In addition, the physical/chemical processes taking place during the rocket cloud rise are analyzed, and the exhaust properties and deluge water effects are evaluated. A time-dependent model for two aerosol coagulations is developed and documented, and calculations using this model for dry deposition during cloud rise are made. A simple model for calculating physical properties such as temperature and air mass entrainment during cloud rise is also developed and incorporated with the aerosol model.

  4. Collection Efficiency and Ice Accretion Characteristics of Two Full Scale and One 1/4 Scale Business Jet Horizontal Tails

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Papadakis, Michael

    2005-01-01

    Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.

  5. SU-E-T-272: Direct Verification of a Treatment Planning System Megavoltage Linac Beam Photon Spectra Models, and Analysis of the Effects On Patient Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leheta, D; Shvydka, D; Parsai, E

    2015-06-15

    Purpose: For photon dose calculation, the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac's standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either TPS-modeled or measured energy spectra. The differences were reviewed through comparison of isodose distributions and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. Anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in highly heterogeneous regions. The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.
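    The unfolding step can be illustrated with a toy two-bin version of the problem: transmission through increasing thicknesses is a linear combination of exponentials in the unknown spectral weights, and Tikhonov (ridge) regularization stabilizes the inversion. The attenuation coefficients, thicknesses, and spectrum below are hypothetical; the authors' spreadsheet implementation is not reproduced here.

```python
import math

# Toy two-bin spectrum unfolding from transmission data with Tikhonov (ridge)
# regularization. All coefficients are hypothetical.

mu = [0.3, 0.1]                        # low/high-energy attenuation, cm^-1
thicknesses = [0.0, 1.0, 2.0, 4.0, 8.0]
s_true = [0.4, 0.6]                    # "true" spectral weights (hypothetical)

# Forward model: T(t) = sum_i s_i * exp(-mu_i * t)
A = [[math.exp(-m * t) for m in mu] for t in thicknesses]
y = [sum(a * s for a, s in zip(row, s_true)) for row in A]   # noiseless data

# Regularized normal equations (A^T A + lam I) s = A^T y, solved directly (2x2).
lam = 1e-6
ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (lam if i == j else 0.0)
        for j in range(2)] for i in range(2)]
aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
s_est = [(ata[1][1] * aty[0] - ata[0][1] * aty[1]) / det,
         (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det]
print([round(s, 3) for s in s_est])
```

With noiseless data and a tiny `lam`, the estimate recovers `s_true`; with real measurement noise, `lam` trades fidelity for stability, which is the role regularization plays in the abstract.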

  6. Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.

    2003-01-01

    The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed; hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Bin; Li, Yongbao; Liu, Bo

    Purpose: The CyberKnife system is initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator (MLC) system was recently introduced in the latest generation of the system, capable of arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs at the penumbra region, and were further evaluated by the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR at the infield and outfield regions are both 0.5%. The distance to agreement (DTA) at the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source-to-axis distance (SAD). In the noncircular field validation, the pencil beam results agreed well with the film measurements of both the Iris collimators and the half-beam blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. The dose calculation accuracy is better than that of the standard linac-based system because the model parameters were specifically tuned to the CyberKnife system and its geometry correction factors. The model handles lateral scatter better and has the potential to be used for irregularly shaped fields. Comprehensive validation on MLC-equipped systems is necessary for its clinical implementation. It is reasonably fast to be used during plan optimization.
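    The core of a pencil beam algorithm is a convolution of the fluence with a lateral kernel; a one-dimensional sketch with a hypothetical Gaussian kernel (not the commissioned CyberKnife kernel) is shown below.

```python
import math

# One-dimensional sketch of a pencil beam convolution: dose is fluence
# convolved with a lateral kernel. Kernel shape and width are hypothetical.

def gaussian_kernel(sigma, half_width):
    """Normalized discrete Gaussian kernel of 2*half_width + 1 samples."""
    ks = [math.exp(-0.5 * (i / sigma) ** 2)
          for i in range(-half_width, half_width + 1)]
    s = sum(ks)
    return [k / s for k in ks]

fluence = [0.0] * 5 + [1.0] * 10 + [0.0] * 5   # open 10-pixel field
kernel = gaussian_kernel(sigma=1.5, half_width=4)

# Convolve, truncating at the array boundaries.
dose = [sum(fluence[i + j] * kernel[j + 4]
            for j in range(-4, 5) if 0 <= i + j < len(fluence))
        for i in range(len(fluence))]

# The kernel width controls the penumbra: dose falls off smoothly
# outside the geometric field edge.
print(round(dose[10], 3), round(dose[4], 3))
```

It is the lateral spread of this kernel that lets a pencil beam model account for the small-field lateral scatter the abstract targets.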

  8. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
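    The Lindenmayer-system rewriting that such object generation builds on can be sketched in a few lines. The grammar below is a textbook branching example, not DIANA's actual (modified) production rules.

```python
# Minimal Lindenmayer-system string rewriting of the kind plant-generation
# models build on. The grammar is a textbook example, not DIANA's rules.

def l_system(axiom, rules, iterations):
    """Repeatedly rewrite every symbol of the string by its production rule."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching grammar: F = grow forward, [ and ] = push/pop a branch.
rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, 1))        # -> F[+F]F[-F]F
print(len(l_system("F", rules, 2)))   # string grows rapidly with iterations
```

Rendering then interprets the resulting string turtle-graphics style, which is how iteration depth translates into simulated growth.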

  9. Biomechanical analysis of the effect of congruence, depth and radius on the stability ratio of a simplistic 'ball-and-socket' joint model.

    PubMed

    Ernstbrunner, L; Werthel, J-D; Hatta, T; Thoreson, A R; Resch, H; An, K-N; Moroder, P

    2016-10-01

    The bony shoulder stability ratio (BSSR) allows for quantification of the bony stabilisers in vivo. We aimed to biomechanically validate the BSSR, determine whether joint incongruence affects the stability ratio (SR) of a shoulder model, and determine the correct parameters (glenoid concavity versus humeral head radius) for calculation of the BSSR in vivo. Four polyethylene balls (radii: 19.1 mm to 38.1 mm) were used to mould four fitting sockets of four different depths (3.2 mm to 19.1 mm). The SR was measured in congruent and incongruent biomechanical experimental series. The experimental SR of a congruent system was compared with the calculated SR based on the BSSR approach. Differences in SR between congruent and incongruent experimental conditions were quantified. Finally, the experimental SR was compared with the SR calculated from either the socket concavity radius or the plastic ball radius. The experimental SR is comparable with the calculated SR (mean difference 10%, sd 8%; relative values). The incongruence experiments showed almost no differences (2%, sd 2%). The SR calculated on the basis of the socket concavity radius is superior in predicting the experimental SR (mean difference 10%, sd 9%) compared with the SR calculated from the plastic ball radius (mean difference 42%, sd 55%). The present biomechanical investigation confirmed the validity of the BSSR. Incongruence has no significant effect on the SR of a shoulder model. In the event of an incongruent system, calculation of the BSSR on the basis of the glenoid concavity radius is recommended. Cite this article: L. Ernstbrunner, J-D. Werthel, T. Hatta, A. R. Thoreson, H. Resch, K-N. An, P. Moroder. Biomechanical analysis of the effect of congruence, depth and radius on the stability ratio of a simplistic 'ball-and-socket' joint model. Bone Joint Res 2016;5:453-460. DOI: 10.1302/2046-3758.510.BJR-2016-0078.R1. © 2016 Ernstbrunner et al.

  10. Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanuki, T.; Honda, M.; Kajita, T.

    2007-02-15

    We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing them with simulations. We find that the atmospheric muon and neutrino fluxes respond similarly to errors in the π-production of the hadronic interaction, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004)] code with experimental measurements. The μ+ + μ- data show good agreement in the 1-30 GeV/c range, but a large disagreement above 30 GeV/c. The μ+/μ- ratio shows sizable differences at lower and higher momenta, in opposite directions. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007)].

  11. The Fowler-Nordheim behavior and mechanism of photo-sensitive field from SnS{sub 2} nanosheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryawanshi, Sachin R.; Chaudhari, Nilima S.; Warule, Sambhaji S.

    2015-06-24

    Herein, we report photo-sensitive field emission measurements of SnS2 nanosheets at a base pressure of ∼1×10^-8 mbar. The nonlinear Fowler-Nordheim (F-N) plot is elucidated according to an F-N model of calculation based on a shift in the saturation of the conduction band current density after light illumination and the prevalence of the valence band current density at high electric field values. The model of calculation suggests that the slope variation of the F-N plot before and after visible light illumination, in the high-field and low-field regions, depends not only on the magnitude of the saturation but also on the increase of the charge carrier (electron) concentration in the conduction band. The F-N model of calculation is important for a fundamental understanding of the photo-sensitive field emission mechanism of semiconducting SnS2. The replicated F-N plots exhibit features similar to those observed experimentally. The model calculation suggests that the nonlinearity of the F-N plot is a characteristic of the photo-enhanced energy band structure of the photo-sensitive semiconductor material.
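    For orientation, a Fowler-Nordheim plot graphs ln(J/F²) against 1/F, so a current density of the form J = a·F²·exp(-b/F) appears as a straight line of slope -b; a change of slope between field regions is what signals the nonlinearity analyzed above. The constants below are illustrative only, not fitted SnS2 values.

```python
import math

# F-N coordinates: J = a * F^2 * exp(-b / F) plots as a straight line of
# slope -b in ln(J/F^2) vs 1/F. Constants a, b are illustrative.

a, b = 1.0e-6, 50.0   # hypothetical F-N constants

def fn_point(F):
    """Return the (1/F, ln(J/F^2)) Fowler-Nordheim coordinates at field F."""
    J = a * F ** 2 * math.exp(-b / F)
    return 1.0 / F, math.log(J / F ** 2)

x1, y1 = fn_point(5.0)
x2, y2 = fn_point(10.0)
slope = (y2 - y1) / (x2 - x1)
print(round(slope, 6))   # recovers -b for a single-slope (linear) F-N plot
```

A nonlinear F-N plot, as reported in the abstract, corresponds to this slope differing between the low-field and high-field regions.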

  12. Analysis of thermal stress of the piston during non-stationary heat flow in a turbocharged Diesel engine

    NASA Astrophysics Data System (ADS)

    Gustof, P.; Hornik, A.

    2016-09-01

    In this paper, numerical calculations of the thermal stresses of the piston in a turbocharged Diesel engine in the initial phase of its work were carried out based on experimental studies and the data resulting from them. The calculations were made using a geometrical model of the piston in a five-cylinder turbocharged Diesel engine with a capacity of about 2300 cm3, direct fuel injection to the combustion chamber, and a power rating of 85 kW. In order to determine the thermal stress, the application of our own mathematical models of the heat flow on characteristic surfaces of the piston was required to represent the real processes occurring on the surface of the analysed component. The calculations were performed using the Geostar module of the COSMOS/M program. A three-dimensional geometric model of the piston was created in this program based on the real component, in order to enable the calculation and analysis of thermal stresses during non-stationary heat flow. Modelling of the thermal stresses of the piston was carried out for the engine speed n = 4250 min-1 and engine load λ = 1.69.

  13. Modeling Alkyl p-Methoxy Cinnamate (APMC) as UV absorber based on electronic transition using semiempirical quantum mechanics ZINDO/s calculation

    NASA Astrophysics Data System (ADS)

    Salmahaminati; Azis, Muhlas Abdul; Purwiandono, Gani; Arsyik Kurniawan, Muhammad; Rubiyanto, Dwiarso; Darmawan, Arif

    2017-11-01

    In this research, modeling of several alkyl p-methoxy cinnamates (APMC) based on electronic transitions using semiempirical quantum mechanical ZINDO/s calculations is performed. Alkyl cinnamates of the C1 (methyl) up to C7 (heptyl) homologs, with 1-5 example structures of each homolog, are used as materials. The quantum chemistry package Hyperchem 8.0 is used to draw the structures, optimize the geometry with the semiempirical Austin Model 1 algorithm, and perform single point calculations employing the semiempirical ZINDO/s technique. The ZINDO/s calculations use singly excited Configuration Interaction (CI), where the gap of the HOMO-LUMO energy transition and the maximum degeneracy level are set to 7 and 2, respectively. Analysis of the theoretical spectra is focused on the UV-B (290-320 nm) and UV-C (200-290 nm) regions. The results show that modeling of the compounds can be used to predict the type of UV protection activity, which depends on the electronic transitions in the UV region. Modification of the alkyl homolog does not appreciably change the absorption wavelength that indicates the UV protection activity. The alkyl cinnamate compounds are predicted to act as UV-B and UV-C sunscreens.

  14. Development of an Aeroelastic Analysis Including a Viscous Flow Model

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Bakhle, Milind A.

    2001-01-01

    Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct-access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional Euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the turbo-V3.1 code.

  15. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

    In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived, and the constraints determining its range of application are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations, and the global diffusivity is calculated by solving the closure problem.

  16. Molecular dynamics simulations for mechanical properties of borophene: parameterization of valence force field model and Stillinger-Weber potential

    PubMed Central

    Zhou, Yu-Ping; Jiang, Jin-Wu

    2017-01-01

    While most existing theoretical studies of borophene are based on first-principles calculations, the present work presents molecular dynamics simulations of the lattice dynamical and mechanical properties of borophene. The obtained mechanical quantities are in good agreement with previous first-principles calculations. The key ingredients for these molecular dynamics simulations are the two efficient empirical potentials developed in the present work for the interactions in borophene with the low-energy triangular structure. The first is the valence force field model, which is developed with the assistance of the phonon dispersion of borophene. The valence force field model is a linear potential, so it is rather efficient for the calculation of linear quantities in borophene. The second is the Stillinger-Weber potential, whose parameters are derived from the valence force field model. The Stillinger-Weber potential is applicable in molecular dynamics simulations of nonlinear physical or mechanical quantities in borophene. PMID:28349983
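    For orientation, the generic Stillinger-Weber form combines a two-body bond-stretching term and a three-body angular term; the borophene-specific parameter values are fitted in the paper and are not reproduced here:

```latex
V = \sum_{i<j} V_2(r_{ij}) + \sum_{i<j<k} V_3(r_{ij}, r_{jk}, \theta_{ijk}), \\
V_2(r) = A \left( B\,r^{-p} - r^{-q} \right) \exp\!\left( \frac{\sigma}{r - r_c} \right), \\
V_3 = \lambda \, \exp\!\left( \frac{\gamma}{r_{ij} - r_c} + \frac{\gamma}{r_{jk} - r_c} \right) \left( \cos\theta_{ijk} - \cos\theta_0 \right)^2
```

The exponential cutoff at $r_c$ makes both terms vanish smoothly at finite range, which is what makes the potential efficient in molecular dynamics; the angular term penalizes deviations from the equilibrium angle $\theta_0$ and supplies the nonlinearity the abstract refers to.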

  17. Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1993-01-01

    An axisymmetric single-engine Computational Fluid Dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work. The first was to calculate an axisymmetric approximation of the nozzle, plume, and base region flow fields of the S1-C/F1, relate/scale this to flight data, and apply this scaling factor to an NLS/STME axisymmetric calculation from a parallel effort. The second was to assess the differences in F1 and STME plume shear layer development and concentration of combustible gases; this second piece of information was input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for 10,000 feet and 57,000 feet altitude, at vehicle flight velocity and in a stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite rate chemistry model plus a soot burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to an STME plume. Conclusions are made about the validity and status of the analysis and of the NLS2 vehicle base thermal environment definition methodology.

  18. Development of a risk-based environmental management tool for drilling discharges. Summary of a four-year project.

    PubMed

    Singsaas, Ivar; Rye, Henrik; Frost, Tone Karin; Smit, Mathijs G D; Garpestad, Eimund; Skare, Ingvild; Bakke, Knut; Veiga, Leticia Falcao; Buffagni, Melania; Follum, Odd-Arne; Johnsen, Ståle; Moltu, Ulf-Einar; Reed, Mark

    2008-04-01

    This paper briefly summarizes the ERMS project and presents the developed model by showing results from environmental fates and risk calculations of a discharge from offshore drilling operations. The developed model calculates environmental risks for the water column and sediments resulting from exposure to toxic stressors (e.g., chemicals) and nontoxic stressors (e.g., suspended particles, sediment burial). The approach is based on existing risk assessment techniques described in the European Union technical guidance document on risk assessment and species sensitivity distributions. The model calculates an environmental impact factor, which characterizes the overall potential impact on the marine environment in terms of potentially impacted water volume and sediment area. The ERMS project started in 2003 and was finalized in 2007. In total, 28 scientific reports and 9 scientific papers have been delivered from the ERMS project (http://www.sintef.no/erms).
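
    The species-sensitivity-distribution logic underlying such risk calculations can be sketched as a potentially-affected-fraction computation; the log-normal SSD form, function name, and all parameter values below are illustrative assumptions, not the ERMS implementation:

    ```python
    import math

    def potentially_affected_fraction(pec, ssd_median, ssd_sigma_log10):
        """Potentially affected fraction of species, assuming a log-normal
        species sensitivity distribution (SSD).

        pec             : predicted environmental concentration (e.g. mg/L)
        ssd_median      : median of the SSD, same units as pec
        ssd_sigma_log10 : standard deviation of log10(species sensitivity)
        """
        if pec <= 0.0:
            return 0.0
        z = (math.log10(pec) - math.log10(ssd_median)) / ssd_sigma_log10
        # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Exposure at the SSD median affects half the species by construction
    print(potentially_affected_fraction(1.0, 1.0, 0.7))  # 0.5
    ```

    An environmental impact factor like the one described would then aggregate such fractions over the exposed water volume and sediment area.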

  19. Calculated mammographic spectra confirmed with attenuation curves for molybdenum, rhodium, and tungsten targets.

    PubMed

    Blough, M M; Waggener, R G; Payne, W H; Terry, J A

    1998-09-01

    A model for calculating mammographic spectra independent of measured data and fitting parameters is presented. This model is based on first principles. Spectra were calculated using various target and filter combinations such as molybdenum/molybdenum, molybdenum/rhodium, rhodium/rhodium, and tungsten/aluminum. Once the spectra were calculated, attenuation curves were calculated and compared to measured attenuation curves. The attenuation curves were calculated and measured using aluminum alloy 1100 or high purity aluminum filtration. Percent differences were computed between the measured and calculated attenuation curves resulting in an average of 5.21% difference for tungsten/aluminum, 2.26% for molybdenum/molybdenum, 3.35% for rhodium/rhodium, and 3.18% for molybdenum/rhodium. Calculated spectra were also compared to measured spectra from the Food and Drug Administration [Fewell and Shuping, Handbook of Mammographic X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1979)] and a comparison will also be presented.
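
    The attenuation-curve comparison described here reduces to Beer-Lambert transmission of a discrete spectrum plus a mean percent difference. A minimal sketch, in which the two-line spectrum and the attenuation coefficients are invented for illustration rather than taken from the paper's data:

    ```python
    import math

    def transmission(spectrum, mu_per_cm, thickness_cm):
        """Transmitted fluence fraction of a discrete x-ray spectrum through a
        filter (Beer-Lambert, narrow-beam geometry).

        spectrum  : {energy_keV: relative photon fluence}
        mu_per_cm : {energy_keV: linear attenuation coefficient of filter, 1/cm}
        """
        total = sum(spectrum.values())
        passed = sum(n * math.exp(-mu_per_cm[e] * thickness_cm)
                     for e, n in spectrum.items())
        return passed / total

    def mean_percent_difference(measured, calculated):
        """Average percent difference between measured and calculated curves."""
        return sum(abs(m - c) / m * 100.0
                   for m, c in zip(measured, calculated)) / len(measured)

    # Toy two-line Mo-target spectrum; mu values are illustrative only
    spec = {17.5: 0.7, 19.6: 0.3}
    mu = {17.5: 18.0, 19.6: 13.0}
    curve = [transmission(spec, mu, t) for t in (0.0, 0.05, 0.1, 0.2)]
    print(curve[0])  # 1.0 with no added filtration
    ```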

  20. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.

  1. A patient-specific EMG-driven neuromuscular model for the potential use of human-inspired gait rehabilitation robots.

    PubMed

    Ma, Ye; Xie, Shengquan; Zhang, Yanxin

    2016-03-01

    A patient-specific electromyography (EMG)-driven neuromuscular model (PENm) is developed for the potential use of human-inspired gait rehabilitation robots. The PENm is modified from current EMG-driven models by decreasing the calculation time while ensuring good prediction accuracy. To ensure calculation efficiency, the PENm is simplified to two EMG channels around one joint with minimal physiological parameters. In addition, a dynamic computation model is developed to achieve real-time calculation. To ensure calculation accuracy, patient-specific muscle kinematics information, such as the musculotendon lengths and the muscle moment arms during the entire gait cycle, is employed based on the patient-specific musculoskeletal model. Moreover, an improved force-length-velocity relationship is implemented to generate accurate muscle forces. Gait analysis data including kinematics, ground reaction forces, and raw EMG signals from six adolescents at three different speeds were used to evaluate the PENm. The simulation results show that the PENm has the potential to predict accurate joint moments in real time. The design of advanced human-robot interaction control strategies and human-inspired gait rehabilitation robots can benefit from the human internal state provided by the PENm. Copyright © 2016 Elsevier Ltd. All rights reserved.
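
    The force-length-velocity relationship at the heart of such EMG-driven models can be sketched with a generic Hill-type formulation; the curve shapes and constants below are standard illustrative choices, not the paper's improved relationship:

    ```python
    import math

    def muscle_force(activation, f_max, l_norm, v_norm):
        """Hill-type muscle force F = a * Fmax * f_L(l) * f_V(v).

        activation : neural activation in [0, 1] (from processed EMG)
        f_max      : maximum isometric force, N
        l_norm     : fiber length / optimal fiber length
        v_norm     : fiber velocity / max shortening velocity (negative = shortening)
        """
        # Gaussian active force-length curve, peak at optimal length
        f_l = math.exp(-((l_norm - 1.0) ** 2) / 0.45)
        if v_norm <= -1.0:        # at or beyond maximum shortening velocity
            f_v = 0.0
        elif v_norm <= 0.0:       # shortening branch of the Hill hyperbola
            f_v = (1.0 + v_norm) / (1.0 - v_norm / 0.25)
        else:                     # lengthening: force rises above isometric
            f_v = 1.0 + 0.5 * v_norm / (1.0 + v_norm)
        return activation * f_max * f_l * f_v

    # Isometric force at optimal fiber length equals full activation * Fmax
    print(muscle_force(1.0, 1000.0, 1.0, 0.0))  # 1000.0
    ```

    Joint moments would then follow by multiplying each muscle force by its patient-specific moment arm and summing over the two channels.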

  2. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    DOE PAGES

    Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...

    2014-10-12

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
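
    The idea of varying parameters within representative ranges and ranking their importance can be sketched with a one-at-a-time scan over a toy response; this is far simpler than the DAKOTA-driven analysis described, and the stand-in model below is not the BISON physics:

    ```python
    def one_at_a_time_sensitivity(model, nominal, rel_range):
        """Vary each parameter individually over +/- rel_range about its
        nominal value and record the output spread -- a crude importance
        ranking for the parameters of `model`."""
        base = model(**nominal)
        spread = {}
        for name, value in nominal.items():
            lo = model(**dict(nominal, **{name: value * (1.0 - rel_range)}))
            hi = model(**dict(nominal, **{name: value * (1.0 + rel_range)}))
            spread[name] = abs(hi - lo)
        return base, spread

    # Toy stand-in for a fission-gas-release response: release fraction
    # rises with diffusivity and falls with the re-solution rate
    def toy_release(diffusivity, resolution_rate):
        return diffusivity / (diffusivity + resolution_rate)

    base, spread = one_at_a_time_sensitivity(
        toy_release, {"diffusivity": 1.0, "resolution_rate": 4.0}, 0.5)
    print(base)  # 0.2
    ```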

  3. A physically based analytical spatial air temperature and humidity model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  4. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  5. Modeling and calculation of turbulent lifted diffusion flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J.P.H.; Lamers, A.P.G.G.

    1994-01-01

    Liftoff heights of turbulent diffusion flames have been modeled using the laminar diffusion flamelet concept of Peters and Williams. The strain rate of the smallest eddies is used as the stretch-describing parameter, instead of the more common scalar dissipation rate. The h(U) curve, which is the mean liftoff height as a function of fuel exit velocity, can be accurately predicted, while this was impossible with the scalar dissipation rate. Liftoff calculations performed in the flames as well as in the equivalent isothermal jets, using a standard k-ε turbulence model, yield approximately the same correct slope for the h(U) curve, while the offset has to be reproduced by choosing an appropriate coefficient in the strain rate model. For the flame calculations a model for the pdf of the fluctuating flame base is proposed. The results are insensitive to its width. The temperature field is qualitatively different from the field calculated by Bradley et al., who used a premixed flamelet model for diffusion flames.

  6. Morphology of the winter anomaly in NmF2 and Total Electron Content

    NASA Astrophysics Data System (ADS)

    Yasyukevich, Yury; Ratovsky, Konstantin; Yasyukevich, Anna; Klimenko, Maksim; Klimenko, Vladimir; Chirik, Nikolay

    2017-04-01

    We analyzed the winter anomaly manifestation in the F2 peak electron density (NmF2) and Total Electron Content (TEC) based on observation data and model calculation results. For the analysis we used 1998-2015 TEC Global Ionospheric Maps (GIM), NmF2 data from ground-based ionosondes, and COSMIC, CHAMP, and GRACE radio occultation data. We used the Global Self-consistent Model of the Thermosphere, Ionosphere, and Protonosphere (GSM TIP) and the International Reference Ionosphere model (IRI-2012). Based on the observation data and model calculation results, we constructed maps of the winter anomaly intensity in TEC and NmF2 for different solar and geomagnetic activity levels. The winter anomaly intensity was found to be higher in NmF2 than in TEC according to both observation and modeling. In this report we show the similarities and differences in the winter anomaly as revealed in experimental data and model results.

  7. Electrical and fluid transport in consolidated sphere packs

    NASA Astrophysics Data System (ADS)

    Zhan, Xin; Schwartz, Lawrence M.; Toksöz, M. Nafi

    2015-05-01

    We calculate geometrical and transport properties (electrical conductivity, permeability, specific surface area, and surface conductivity) of a family of model granular porous media from an image based representation of its microstructure. The models are based on the packing described by Finney and cover a wide range of porosities. Finite difference methods are applied to solve for electrical conductivity and hydraulic permeability. Two image processing methods are used to identify the pore-grain interface and to test correlations linking permeability to electrical conductivity. A three phase conductivity model is developed to compute surface conductivity associated with the grain-pore interface. Our results compare well against empirical models over the entire porosity range studied. We conclude by examining the influence of image resolution on our calculations.
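
    Correlations linking permeability to electrical conductivity are commonly tested through the formation factor. A sketch of an Archie/Katz-Thompson-style calculation; the cementation exponent and pore-size inputs are illustrative, not the paper's fitted values:

    ```python
    def formation_factor(porosity, m=1.8):
        """Archie-type formation factor F = sigma_fluid / sigma_rock
        = porosity**(-m).  The exponent m = 1.8 is an illustrative value
        for consolidated sphere packs."""
        return porosity ** (-m)

    def permeability_estimate(l_c_um, porosity, m=1.8):
        """Katz-Thompson-style link between permeability and conductivity:
        k ~ (l_c**2 / 226) / F, with l_c a critical pore diameter.
        Returns permeability in m^2; inputs here are illustrative."""
        l_c = l_c_um * 1e-6                 # micrometers -> meters
        return (l_c ** 2 / 226.0) / formation_factor(porosity, m)

    # Lower porosity -> higher formation factor -> lower permeability
    print(formation_factor(1.0))  # 1.0
    ```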

  8. Two-magnon excitations in resonant inelastic x-ray scattering studied within spin density wave formalism

    NASA Astrophysics Data System (ADS)

    Nomura, Takuji

    2017-10-01

    We study two-magnon excitations in resonant inelastic x-ray scattering (RIXS) at the transition-metal K edge. Instead of working with effective Heisenberg spin models, we work with a Hubbard-type model (d-p model) for a typical insulating cuprate, La2CuO4. For the antiferromagnetic ground state within the spin density wave (SDW) mean-field formalism, we calculate the dynamical correlation function within the random-phase approximation (RPA) and then obtain two-magnon excitation spectra by calculating its convolution. Coupling between the K-shell hole and the magnons in the intermediate state is calculated by means of a diagrammatic perturbation expansion in the Coulomb interaction. The calculated momentum dependence of the RIXS spectra agrees well with that of experiments. A notable difference from previous calculations based on Heisenberg spin models is that the RIXS spectra have a large two-magnon weight near the zone center, which may be confirmed by further careful high-resolution experiments.

  9. The forward modelling and analysis of magnetic field on the East Asia area using tesseroids

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Xu, G.

    2017-12-01

    As airborne and satellite magnetic surveys have progressed, high-resolution magnetic data can now be measured at different scales. In order to test and improve the accuracy of existing crustal models, forward modeling is usually used to simulate the magnetic field of the lithosphere. Traditional models for forward modeling the magnetic field are based on the Cartesian coordinate system and are typically used to calculate the magnetic field of local, small areas. However, the Cartesian coordinate system is not an ideal choice for calculating the magnetic field of a global or continental area at satellite altitude, where the Earth's curvature cannot be ignored. A spherical element (called a tesseroid) can be used as the model element in the spherical coordinate system to solve this problem. After studying the principle of this forward method, we focus on the selection of the data source and the mechanism of adaptive integration. We then calculate the magnetic anomaly data of the East Asia area based on the model Crust1.0. The results present the crustal susceptibility distribution, which is consistent with the basic tectonic features of the study area.

  10. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
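
    The FOSM step described here is a one-line quadratic form combining sensitivities with the input covariance. A minimal sketch; the two-input example values are invented:

    ```python
    def fosm_output_variance(sensitivities, input_cov):
        """First-Order Second Moment estimate: Var(y) ~ s^T C s, where
        s_i = dy/dx_i is the model sensitivity to input i and C is the
        input covariance matrix (given as lists of lists)."""
        n = len(sensitivities)
        return sum(sensitivities[i] * input_cov[i][j] * sensitivities[j]
                   for i in range(n) for j in range(n))

    # Two uncorrelated inputs: contributions 2.0**2 * 1.0 and 0.5**2 * 4.0
    head_variance = fosm_output_variance([2.0, 0.5], [[1.0, 0.0], [0.0, 4.0]])
    print(head_variance)  # 5.0
    ```

    A QDE loop would recompute this variance after each candidate sample and pick the location that reduces it most.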

  11. A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery.

    PubMed

    Shirk, Steven D; Mitchell, Meghan B; Shaughnessy, Lynn W; Sherman, Janet C; Locascio, Joseph J; Weintraub, Sandra; Atri, Alireza

    2011-11-11

    With the recent publication of new criteria for the diagnosis of preclinical Alzheimer's disease (AD), there is a need for neuropsychological tools that take premorbid functioning into account in order to detect subtle cognitive decline. Using demographic adjustments is one method for increasing the sensitivity of commonly used measures. We sought to provide a useful online z-score calculator that yields estimates of percentile ranges and adjusts individual performance based on sex, age and/or education for each of the neuropsychological tests of the National Alzheimer's Coordinating Center Uniform Data Set (NACC, UDS). In addition, we aimed to provide an easily accessible method of creating norms for other clinical researchers for their own, unique data sets. Data from 3,268 clinically cognitively-normal older UDS subjects from a cohort reported by Weintraub and colleagues (2009) were included. For all neuropsychological tests, z-scores were estimated by subtracting the raw score from the predicted mean and then dividing this difference score by the root mean squared error term (RMSE) for a given linear regression model. For each neuropsychological test, an estimated z-score was calculated for any raw score based on five different models that adjust for the demographic predictors of SEX, AGE and EDUCATION, either concurrently, individually or without covariates. The interactive online calculator allows the entry of a raw score and provides five corresponding estimated z-scores based on predictions from each corresponding linear regression model. The calculator produces percentile ranks and graphical output. 
An interactive, regression-based, normative score online calculator was created to serve as an additional resource for UDS clinical researchers, especially in guiding interpretation of individual performances that appear to fall in borderline realms and may be of particular utility for operationalizing subtle cognitive impairment present according to the newly proposed criteria for Stage 3 preclinical Alzheimer's disease.
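
    The calculator's core arithmetic, as described, reduces to a regression prediction, an RMSE-scaled difference score, and a normal-CDF percentile. A sketch with hypothetical coefficients; the published UDS norms are not reproduced here, and sign conventions vary by test:

    ```python
    import math

    def predicted_score(age, education, sex, coeffs):
        """Linear-regression prediction from demographics.  The coefficient
        values used below are hypothetical, not the fitted UDS norms."""
        intercept, b_age, b_edu, b_sex = coeffs
        return intercept + b_age * age + b_edu * education + b_sex * sex

    def uds_style_z(raw, mean, rmse):
        """z-score as described in the abstract: (predicted mean - raw) / RMSE
        of the fitted regression model."""
        return (mean - raw) / rmse

    def percentile_rank(z):
        """Percentile from the standard normal CDF."""
        return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Hypothetical norms for one test: mean = 30 - 0.1*age + 0.3*education
    mu = predicted_score(age=70, education=16, sex=0, coeffs=(30.0, -0.1, 0.3, 0.0))
    z = uds_style_z(raw=25, mean=mu, rmse=3.0)
    print(round(mu, 1))  # 27.8
    ```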

  12. Orthogonal model and experimental data for analyzing wood-fiber-based tri-axial ribbed structural panels in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2017-01-01

    This paper presents an analysis of 3-dimensional engineered structural panels (3DESP) made from wood-fiber-based laminated paper composites. Since the existing models for calculating the mechanical behavior of core configurations within sandwich panels are very complex, a new simplified orthogonal model (SOM) using an equivalent element has been developed. This model...

  13. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model obviously improves the computational efficiency of the three-dimensional phase-field simulation, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing performs better, achieving a 1.7-fold speedup over the basic multi-GPU model when 21 GPUs are used.

  14. Comparison of binding energies of SrcSH2-phosphotyrosyl peptides with structure-based prediction using surface area based empirical parameterization.

    PubMed Central

    Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.

    2000-01-01

    The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface area-based calculations for the prediction of the binding of the SrcSH2 domain with a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR) are also investigated. Here, we take into account the effects of proton linkage on binding, and test five different surface area-based models that include different treatments for the contributions to conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the SrcSH2 domain and the range of tyrosyl phosphopeptides. Furthermore, the adoption of the different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable. PMID:11106171
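
    Empirical surface-area parameterizations of the kind assessed here take the form ΔX = a·ΔASA_apolar + b·ΔASA_polar. A sketch for the heat-capacity change, with commonly quoted literature coefficients standing in for whichever parameter set a given model adopts:

    ```python
    def delta_cp_from_asa(d_asa_apolar, d_asa_polar, a=0.45, b=-0.26):
        """Binding heat-capacity change, cal/(mol K), from changes in buried
        apolar and polar solvent-accessible surface area (A^2).  The default
        coefficients are frequently cited literature values, used here purely
        for illustration."""
        return a * d_asa_apolar + b * d_asa_polar

    # Burying mostly apolar surface gives the classic negative dCp of binding
    print(delta_cp_from_asa(-500.0, -300.0))  # -147.0
    ```

    The study's point is that the ΔASA inputs themselves, and hence the predicted thermodynamics, shift dramatically with how flexibility and ordered solvent are treated.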

  15. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in radiotherapy centers in our country, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA-TECDOC-1583). A thorax anthropomorphic phantom (002LFC, CIRS Inc.) was used for 7 test cases that simulate the whole chain of external beam TPS planning. Doses were measured with ion chambers, and the deviation between measured and TPS-calculated doses was reported. This methodology, which employs the same phantom and the same setup test cases, was applied in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points of low density (lung) and high density (bone) decreased meaningfully with the advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with advancement of the TPS calculation algorithm. Large deviations were seen for some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  16. Internal dosimetry through GATE simulations of preclinical radiotherapy using a melanin-targeting ligand

    NASA Astrophysics Data System (ADS)

    Perrot, Y.; Degoul, F.; Auzeloux, P.; Bonnet, M.; Cachin, F.; Chezal, J. M.; Donnarieix, D.; Labarre, P.; Moins, N.; Papon, J.; Rbah-Vidal, L.; Vidal, A.; Miot-Noirault, E.; Maigne, L.

    2014-05-01

    The GATE Monte Carlo simulation platform based on the Geant4 toolkit is under constant improvement for dosimetric calculations. In this study, we explore its use for the dosimetry of the preclinical targeted radiotherapy of melanoma using a new specific melanin-targeting radiotracer labeled with iodine 131. Calculated absorbed fractions and S values for spheres and murine models (digital and CT-scan-based mouse phantoms) are compared between GATE and EGSnrc Monte Carlo codes considering monoenergetic electrons and the detailed energy spectrum of iodine 131. The behavior of Geant4 standard and low energy models is also tested. Following the different authors’ guidelines concerning the parameterization of electron physics models, this study demonstrates an agreement of 1.2% and 1.5% with EGSnrc, respectively, for the calculation of S values for small spheres and mouse phantoms. S values calculated with GATE are then used to compute the dose distribution in organs of interest using the activity distribution in mouse phantoms. This study gives the dosimetric data required for the translation of the new treatment to the clinic.
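
    S values computed this way feed a MIRD-style dose estimate, D(target) = Σ_src Ã(src)·S(target←src). A minimal sketch; the region names, cumulated activities, and S values below are hypothetical placeholders, not GATE output:

    ```python
    def absorbed_doses(cumulated_activity, s_values):
        """MIRD-style organ doses: D(target) = sum over sources of
        A_tilde(source) * S(target <- source).

        cumulated_activity : {source_region: Bq*s}
        s_values           : {target_region: {source_region: Gy per Bq*s}}
        """
        return {target: sum(cumulated_activity[src] * row[src]
                            for src in cumulated_activity)
                for target, row in s_values.items()}

    # Hypothetical two-region mouse example (all numbers are placeholders)
    a_tilde = {"tumour": 1.0e6, "liver": 5.0e5}
    s = {"tumour": {"tumour": 2.0e-9, "liver": 1.0e-11},
         "liver":  {"tumour": 1.0e-11, "liver": 5.0e-10}}
    dose = absorbed_doses(a_tilde, s)
    ```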

  17. Minimum detectable gas concentration performance evaluation method for gas leak infrared imaging detection systems.

    PubMed

    Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo

    2017-04-01

    Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to the evaluation of the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct-calculation and equivalent-calculation methods for the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, which indicates that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
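
    The equivalent-calculation idea can be sketched by inverting a Beer-Lambert contrast model for the concentration at which a plume's apparent temperature difference equals the MDTD. This illustrative form is my assumption, not the paper's derivation, and the example numbers are invented:

    ```python
    import math

    def mdgc_ppm(mdtd_k, alpha_per_ppm_m, path_length_m, scene_delta_t_k):
        """Gas concentration (ppm) at which the plume-induced apparent
        temperature difference just reaches the imager's MDTD, assuming
            dT_apparent = (1 - exp(-alpha * C * L)) * dT_scene

        mdtd_k          : MDTD of the imager at the relevant frequency, K
        alpha_per_ppm_m : gas absorption coefficient, 1/(ppm*m)
        path_length_m   : plume path length, m
        scene_delta_t_k : gas-to-background temperature contrast, K
        """
        if mdtd_k >= scene_delta_t_k:
            return float("inf")   # contrast can never reach the threshold
        return (-math.log(1.0 - mdtd_k / scene_delta_t_k)
                / (alpha_per_ppm_m * path_length_m))

    # 0.05 K MDTD, 10 K scene contrast, 1 m plume: roughly 5 ppm detectable
    print(round(mdgc_ppm(0.05, 1.0e-3, 1.0, 10.0), 2))  # 5.01
    ```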

  18. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many moremore » types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.« less

  19. Investigation of the Factors Influencing Volatile Chemical Fate During Steady-state Accretion on Wet-growing Hail

    NASA Astrophysics Data System (ADS)

    Michael, R. A.; Stuart, A. L.

    2007-12-01

    Phase partitioning during freezing affects the transport and distribution of volatile chemical species in convective clouds. This consequently can have impacts on tropospheric chemistry, air quality, pollutant deposition, and climate change. Here, we discuss the development, evaluation, and application of a mechanistic model for the study and prediction of volatile chemical partitioning during steady-state hailstone growth. The model estimates the fraction of a chemical species retained in a two-phase freezing hailstone. It is based upon mass rate balances over water and solute for accretion under wet-growth conditions. Expressions for the calculation of model components, including the rates of super-cooled drop collection, shedding, evaporation, and hail growth were developed and implemented based on available cloud microphysics literature. Solute fate calculations assume equilibrium partitioning at air-liquid and liquid-ice interfaces. Currently, we are testing the model by performing mass balance calculations, sensitivity analyses, and comparison to available experimental data. Application of the model will improve understanding of the effects of cloud conditions and chemical properties on the fate of dissolved chemical species during hail growth.

  20. A user-friendly one-dimensional model for wet volcanic plumes

    USGS Publications Warehouse

    Mastin, Larry G.

    2007-01-01

    This paper presents a user-friendly graphically based numerical model of one-dimensional steady state homogeneous volcanic plumes that calculates and plots profiles of upward velocity, plume density, radius, temperature, and other parameters as a function of height. The model considers effects of water condensation and ice formation on plume dynamics as well as the effect of water added to the plume at the vent. Atmospheric conditions may be specified through input parameters of constant lapse rates and relative humidity, or by loading profiles of actual atmospheric soundings. To illustrate the utility of the model, we compare calculations with field-based estimates of plume height (∼9 km) and eruption rate (>∼4 × 10⁵ kg/s) during a brief tephra eruption at Mount St. Helens on 8 March 2005. Results show that the atmospheric conditions on that day boosted plume height by 1–3 km over that in a standard dry atmosphere. Although the eruption temperature was unknown, model calculations most closely match the observations for a temperature that is below magmatic but above 100°C.

  1. Derivation of effective fission gas diffusivities in UO2 from lower length scale simulations and implementation of fission gas diffusion models in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Anders David Ragnar; Pastore, Giovanni; Liu, Xiang-Yang

    2014-11-07

    This report summarizes the development of new fission gas diffusion models from lower length scale simulations and assessment of these models in terms of annealing experiments and fission gas release simulations using the BISON fuel performance code. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations, continuum models for diffusion of xenon (Xe) in UO2 were derived for both intrinsic conditions and under irradiation. The importance of the large XeU3O cluster (a Xe atom in a uranium + oxygen vacancy trap site with two bound uranium vacancies) is emphasized, which is a consequence of its high mobility and stability. These models were implemented in the MARMOT phase field code, which is used to calculate effective Xe diffusivities for various irradiation conditions. The effective diffusivities were used in BISON to calculate fission gas release for a number of test cases. The results are assessed against experimental data and future directions for research are outlined based on the conclusions.

  2. Vibrational energy transport in acetylbenzonitrile described by an ab initio-based quantum tier model

    NASA Astrophysics Data System (ADS)

    Fujisaki, Hiroshi; Yagi, Kiyoshi; Kikuchi, Hiroto; Takami, Toshiya; Stock, Gerhard

    2017-01-01

    Performing comprehensive quantum-chemical calculations, a vibrational Hamiltonian of acetylbenzonitrile is constructed, on the basis of which a quantum-mechanical "tier model" is developed that describes the vibrational dynamics following excitation of the CN stretch mode. Taking into account 36 vibrational modes and cubic and quartic anharmonic couplings between up to three different modes, the tier model calculations are shown to qualitatively reproduce the main findings of the experiments of Rubtsov and coworkers (2011), including the energy relaxation of the initially excited CN mode and the structure-dependent vibrational transport. Moreover, the calculations suggest that the experimentally measured cross-peak among the CN and CO modes does not correspond to direct excitation of the CO normal mode but rather reflects excited low-frequency vibrations that anharmonically couple to the CO mode. Complementary quasiclassical trajectory calculations are found to be in good overall agreement with the quantum calculations.

  3. Deformed shell model calculations of half lives for β+/EC decay and 2ν β+β+/β+EC/ECEC decay in medium-heavy N~Z nuclei

    NASA Astrophysics Data System (ADS)

    Mishra, S.; Shukla, A.; Sahu, R.; Kota, V. K. B.

    2008-08-01

    The β+/EC half-lives of medium heavy N~Z nuclei with mass number A~64-80 are calculated within the deformed shell model (DSM) based on Hartree-Fock states by employing a modified Kuo interaction in the (2p3/2,1f5/2,2p1/2,1g9/2) space. The DSM has been quite successful in predicting many spectroscopic properties of N~Z medium heavy nuclei with A~64-80. The calculated β+/EC half-lives, for prolate and oblate shapes, compare well with the predictions of calculations with the Skyrme force by Sarriguren. Going further, following recent searches, half-lives for 2ν β+β+/β+EC/ECEC decay of the nucleus Kr78 are calculated using the DSM, and the results compare well with QRPA predictions.

  4. Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel

    NASA Astrophysics Data System (ADS)

    Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa

    This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground, and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground, on the other hand, was calculated using an empirical model, proposed on the basis of the relationship between frequency and the material damping coefficient α, in order to predict the attenuation in the ground with its frequency dependence taken into account. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, comprising the model for train/track/tunnel interaction and that for wave propagation, is applicable to the prediction of train-induced vibration propagating from railway tunnels.
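
The abstract does not give the empirical attenuation model itself, but its general shape can be illustrated: geometric spreading with distance plus an exponential material-damping term whose coefficient α grows with frequency, so that higher frequency bands decay faster. Both functional forms and every constant below are assumptions for illustration, not the paper's fitted relationship.

```python
import numpy as np

# Illustrative ground-vibration attenuation vs. distance: line-source
# geometric spreading plus frequency-dependent material damping.
# alpha_per_hz and the spreading law are made-up illustrative choices.
def attenuation_db(f_hz, r, r0=1.0, alpha_per_hz=2e-4):
    alpha = alpha_per_hz * f_hz            # damping grows with frequency
    geometric = 10.0 * np.log10(r0 / r)    # line-source spreading, in dB
    material = -8.686 * alpha * (r - r0)   # exp(-alpha*d) expressed in dB
    return geometric + material

for f in (16.0, 63.0, 125.0):
    print(f"{f:5.0f} Hz band: {attenuation_db(f, r=20.0):6.1f} dB at 20 m")
```

The qualitative behavior matches the motivation stated in the abstract: attenuation must be predicted "in consideration of frequency characteristics", since a single distance law misses the faster decay of high-frequency components.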

  5. Mathematical model of polyethylene pipe bending stress state

    NASA Astrophysics Data System (ADS)

    Serebrennikov, Anatoly; Serebrennikov, Daniil

    2018-03-01

    The introduction of new machines and technologies for polyethylene pipeline installation is usually based on the flexibility of the polyethylene pipe. It is necessary that the existing bending stresses do not lead to irreversible deformation of the polyethylene pipe or to violation of its strength characteristics. The derivation of a mathematical model that allows the bending stress level of polyethylene pipes to be calculated analytically, with nonlinear characteristics taken into consideration, is presented below. All analytical calculations made with the mathematical model have been proved and confirmed experimentally.
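
For orientation, the linear-elastic textbook estimate of peak bending stress in a pipe bent to radius R is σ = E·(D/2)/R. This is not the paper's nonlinear model, and the modulus and geometry below are illustrative values only.

```python
# Linear-elastic estimate of the peak bending stress in a pipe bent to
# radius R: sigma = E * (D/2) / R (small-strain beam bending). This is
# the textbook approximation, not the paper's nonlinear model.
def bending_stress_mpa(E_mpa, outer_diameter_m, bend_radius_m):
    return E_mpa * (outer_diameter_m / 2.0) / bend_radius_m

# Illustrative PE pipe: short-term modulus ~1000 MPa, 110 mm diameter,
# bent to 25 pipe diameters (2.75 m).
sigma = bending_stress_mpa(E_mpa=1000.0, outer_diameter_m=0.110,
                           bend_radius_m=2.75)
print(f"peak bending stress ≈ {sigma:.1f} MPa")
```

A nonlinear material law, as considered in the paper, would replace the constant modulus E with a strain-dependent stiffness, lowering the predicted stress at large curvatures.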

  6. Fast calculation of the line-spread-function by transversal directions decoupling

    NASA Astrophysics Data System (ADS)

    Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra

    2016-07-01

    We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
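
The decoupling idea can be illustrated in one dimension: with line-shaped illumination, the line-spread-function along the transversal direction reduces to the squared magnitude of a 1D Fourier transform of the pupil along that direction. The hard-edged pupil and unit-free coordinates below are illustrative assumptions; the comparison against the Bessel-function-based method is not reproduced here.

```python
import numpy as np

# 1D Fourier-optics sketch of a line-spread-function: transform a 1D
# slice of the pupil and take the squared magnitude. All sizes are
# arbitrary illustrative numbers.
N = 4096
x = np.linspace(-1.0, 1.0, N)
pupil = (np.abs(x) <= 0.25).astype(float)   # hard-edged 1D pupil slice

# ifftshift/fftshift keep the zero-frequency sample at the array center.
amplitude = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
lsf = np.abs(amplitude) ** 2
lsf /= lsf.max()                            # normalize peak to 1

print(f"peak index: {int(np.argmax(lsf))} of {N}")
```

Because only 1D FFTs are involved, this kind of separable treatment is much cheaper than evaluating Bessel-function integrals over the full two-dimensional aperture, which is the speed-up the abstract reports.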

  7. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...

  8. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...

  9. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related exhaust...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., and carbon-related exhaust emissions from the tests performed using gasoline or diesel test fuel. (ii... from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as determined... from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and...

  10. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... values from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as..., highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed...

  11. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
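
The LKB model named above has a standard closed form: the dose-volume histogram is reduced to a generalized EUD, and NTCP is the standard normal CDF of (gEUD − TD50)/(m·TD50). The sketch below uses that standard form; the DVH and the parameter values (TD50, m, n) are illustrative, not the fitted values from any of the four published studies discussed in the abstract.

```python
import math

# Minimal Lyman-Kutcher-Burman (LKB) NTCP calculation from a cumulative
# DVH reduced via generalized EUD. Parameters here are illustrative.
def gEUD(dvh, n):
    # dvh: list of (fractional_volume, dose_Gy); volumes sum to 1
    return sum(v * d ** (1.0 / n) for v, d in dvh) ** n

def lkb_ntcp(dvh, td50, m, n):
    t = (gEUD(dvh, n) - td50) / (m * td50)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

dvh = [(0.3, 5.0), (0.4, 15.0), (0.3, 30.0)]   # toy lung DVH
ntcp = lkb_ntcp(dvh, td50=30.8, m=0.37, n=0.99)
print(f"NTCP = {ntcp:.3f}")
```

Because the dose distribution entering `dvh` depends on the dose calculation algorithm (PB, PBC, AAA, or CC), holding (TD50, m, n) fixed while changing the algorithm shifts the computed NTCP, which is exactly the error the algorithm-specific reparameterization in the study is meant to remove.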

  12. Variability aware compact model characterization for statistical circuit design optimization

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
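
Linear propagation of variance, the core of the methodology above, is the first-order approximation Var[y] ≈ s·C·sᵀ, where s is the sensitivity vector of a metric y = f(p) at the nominal parameter point and C is the parameter covariance. The square-law drain-current function and all numbers below are made-up stand-ins, not the EKV-EPFL model.

```python
import numpy as np

# First-order variance propagation for a toy transistor metric:
# Var[Id] ≈ s @ C @ s, with s the finite-difference gradient of the
# (made-up) square-law drain current at the nominal parameter point.
def drain_current(vth, beta, vgs=1.0):
    return beta * (vgs - vth) ** 2          # square-law toy model

p0 = np.array([0.4, 2e-3])                  # nominal [Vth (V), beta (A/V^2)]
C = np.diag([0.01 ** 2, (5e-5) ** 2])       # assumed parameter covariance

# Central-difference sensitivities at the nominal point.
h = np.array([1e-6, 1e-9])
s = np.array([(drain_current(p0[0] + h[0], p0[1]) -
               drain_current(p0[0] - h[0], p0[1])) / (2 * h[0]),
              (drain_current(p0[0], p0[1] + h[1]) -
               drain_current(p0[0], p0[1] - h[1])) / (2 * h[1])])

var_id = s @ C @ s
print(f"sigma(Id) ≈ {np.sqrt(var_id):.3e} A")
```

The appeal of the approach is that once sensitivities are extracted from the test structures, metric variances follow from matrix algebra rather than from Monte Carlo re-simulation of the full compact model.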

  13. Calculation of the electric field resulting from human body rotation in a magnetic field

    NASA Astrophysics Data System (ADS)

    Cobos Sánchez, Clemente; Glover, Paul; Power, Henry; Bowtell, Richard

    2012-08-01

    A number of recent studies have shown that the electric field and current density induced in the human body by movement in and around magnetic resonance imaging installations can exceed regulatory levels. Although it is possible to measure the induced electric fields at the surface of the body, it is usually more convenient to use numerical models to predict likely exposure under well-defined movement conditions. Whilst the accuracy of these models is not in doubt, this paper shows that modelling of particular rotational movements should be treated with care. In particular, we show that v  ×  B rather than -(v  ·  ∇)A should be used as the driving term in potential-based modelling of induced fields. Although for translational motion the two driving terms are equivalent, specific examples of rotational rigid-body motion are given where incorrect results are obtained when -(v  ·  ∇)A is employed. In addition, we show that it is important to take into account the space charge which can be generated by rotations and we also consider particular cases where neglecting the space charge generates erroneous results. Along with analytic calculations based on simple models, boundary-element-based numerical calculations are used to illustrate these findings.
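
The distinction the paper draws can be made concrete in a few lines: for rigid-body rotation the velocity of a point is v = ω × r, and the recommended motional driving term is v × B. The field strength, rotation axis, and point position below are arbitrary illustrative numbers, not a case from the paper.

```python
import numpy as np

# Motional driving term v × B for rigid-body rotation: v = omega × r.
# All values are illustrative (B along z, rotation about x).
omega = np.array([1.0, 0.0, 0.0])   # rad/s, rotation about x
B = np.array([0.0, 0.0, 3.0])       # tesla, main field along z
r = np.array([0.0, 0.0, 0.1])       # m, point 10 cm from the axis

v = np.cross(omega, r)              # instantaneous velocity of the point
E_motional = np.cross(v, B)         # v × B driving term, V/m
print("v x B =", E_motional)
```

For pure translation, v is uniform and v × B coincides with the -(v · ∇)A form; for rotation the two differ, which is the failure mode the paper demonstrates, along with the need to account for rotation-generated space charge.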

  14. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    Aiming to solve the safety problems of high-speed railway operation and management, a new method is urgently needed, constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that underlies the measurement indexes of safe operation. After analyzing in detail the factors that influence high-speed railway operational safety, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper regroups the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment, and environment. The paper thereby provides a reasonable and effective theoretical method for the multiple-attribute measurement of safety in high-speed railway operation. Analyzing the operation data of 10 pivotal railway lines in China, this paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operational safety value. The calculation results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies the method's feasibility and effectiveness.

  15. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
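
A minimal channelized Hotelling observer can be sketched as: project signal-present and signal-absent image realizations onto a small set of channels, then compute the detectability index from the channel-space mean difference and covariance, d′ = √(Δv̄ᵀ K⁻¹ Δv̄). Gaussian channels, white noise, and all sizes below are assumptions for illustration, not the study's configuration.

```python
import numpy as np

# Channelized Hotelling observer sketch on synthetic disk images.
rng = np.random.default_rng(1)
n, npx = 64, 32                          # realizations, image side length

yy, xx = np.mgrid[:npx, :npx]
r2 = (xx - npx / 2) ** 2 + (yy - npx / 2) ** 2
disk = (r2 < 16).astype(float)           # small disk signal template

# Three Gaussian channels of increasing width (illustrative choice).
channels = np.stack([np.exp(-r2 / (2 * s ** 2)).ravel()
                     for s in (2.0, 4.0, 8.0)])

def channel_outputs(signal_amplitude):
    imgs = signal_amplitude * disk.ravel() + rng.normal(0.0, 1.0, (n, npx * npx))
    return imgs @ channels.T             # (n, 3) channel outputs

v1, v0 = channel_outputs(0.5), channel_outputs(0.0)
dmean = v1.mean(axis=0) - v0.mean(axis=0)
K = 0.5 * (np.cov(v1.T) + np.cov(v0.T))  # pooled channel covariance
d_prime = float(np.sqrt(dmean @ np.linalg.solve(K, dmean)))
print(f"d' ≈ {d_prime:.2f}")
```

Working in the low-dimensional channel space is what makes the covariance estimable from a practical number of image realizations, which is how the study achieves DI uncertainties below 3%.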

  16. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO.

    PubMed

    Hamad, I J; Manuel, L O; Aligia, A A

    2018-04-27

    Tetragonal CuO (T-CuO) has attracted attention because of its structure, which is similar to that of the cuprates. It has recently been proposed as a compound whose study can put an end to the long debate about the proper microscopic modeling of the cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and estimate its parameters based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half-filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  17. Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO

    NASA Astrophysics Data System (ADS)

    Hamad, I. J.; Manuel, L. O.; Aligia, A. A.

    2018-04-01

    Tetragonal CuO (T-CuO) has attracted attention because of its structure, which is similar to that of the cuprates. It has recently been proposed as a compound whose study can put an end to the long debate about the proper microscopic modeling of the cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and estimate its parameters based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half-filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.

  18. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  19. A High-Resolution Integrated Model of the National Ignition Campaign Cryogenic Layered Experiments

    DOE PAGES

    Jones, O. S.; Callahan, D. A.; Cerjan, C. J.; ...

    2012-05-29

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-40% of the calculated yields.

  20. Towards an Integrated Model of the NIC Layered Implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O S; Callahan, D A; Cerjan, C J

    A detailed simulation-based model of the June 2011 National Ignition Campaign (NIC) cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. The model adjustments brought much of the simulated data into closer agreement with the experiment, with the notable exception of the measured yields, which were 15-45% of the calculated yields.

  1. Fiducial-based fusion of 3D dental models with magnetic resonance imaging.

    PubMed

    Abdi, Amir H; Hannam, Alan G; Fels, Sidney

    2018-04-16

    Magnetic resonance imaging (MRI) is widely used in the study of maxillofacial structures. While MRI is the modality of choice for soft tissues, it fails to capture hard tissues such as bone and teeth. Virtual dental models, acquired by optical 3D scanners, are becoming more accessible for dental practice and are starting to replace conventional dental impressions. The goal of this research is to fuse high-resolution 3D dental models with MRI to enhance the value of imaging for applications where detailed analysis of maxillofacial structures is needed, such as patient examination, surgical planning, and modeling. A subject-specific dental attachment was digitally designed and 3D printed based on the subject's face width and dental anatomy. The attachment contained 19 semi-ellipsoidal concavities in predetermined positions where oil-based ellipsoidal fiducial markers were later placed. The MRI was acquired while the subject bit on the dental attachment. The spatial position of the center of mass of each fiducial in the resultant MR image was calculated by averaging its voxels' spatial coordinates. The rigid transformation to fuse the dental models to MRI was calculated based on the least-squares mapping of corresponding fiducials and solved via singular-value decomposition. The target registration error (TRE) of the proposed fusion process, calculated in a leave-one-fiducial-out fashion, was estimated at 0.49 mm. The results suggest that 6-9 fiducials suffice to achieve a TRE equal to half the MRI voxel size. Ellipsoidal oil-based fiducials produce distinguishable intensities in MRI and can be used as registration fiducials. The achieved accuracy of the proposed approach is sufficient to leverage the merged 3D dental models with the MRI data for a finer analysis of the maxillofacial structures where complete geometry models are needed.
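
The SVD-based least-squares rigid mapping mentioned above is the classic Kabsch/Umeyama construction: center both point sets, take the SVD of the cross-covariance, and correct the sign to exclude reflections. The fiducial coordinates below are synthetic, not the study's measured marker positions.

```python
import numpy as np

# Least-squares rigid registration of corresponding fiducials via SVD
# (Kabsch/Umeyama). Returns R, t minimizing sum ||R @ p + t - q||^2.
def rigid_transform(P, Q):
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)             # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Synthetic test: 9 fiducials, a known rotation about z, and a shift.
rng = np.random.default_rng(3)
P = rng.uniform(-50.0, 50.0, size=(9, 3))           # fiducials, mm
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_transform(P, Q)
fre = np.linalg.norm(P @ R.T + t - Q, axis=1).max()
print(f"max fiducial registration error: {fre:.2e} mm")
```

With noisy real fiducials the residual is nonzero, and the study's leave-one-fiducial-out TRE of 0.49 mm is the corresponding accuracy estimate at points not used in the fit.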

  2. Elastic and viscoelastic model of the stress history of sedimentary rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A model has been developed to calculate the elastic and viscoelastic stresses which develop in rocks at depth due to burial, uplift and diagenesis. This model includes the effect of the overburden load, tectonic or geometric strains, thermal strains, varying material properties, pore pressure variations, and viscoelastic relaxation. Calculations for some simple examples are given to show the contributions of the individual stress components due to gravity, tectonics, thermal effects and pore pressure. A complete stress history for Mesaverde rocks in the Piceance basin is calculated based on available burial history, thermal history and expected pore pressure, material property and tectonic strain variations through time. These calculations show the importance of including material property changes and viscoelastic effects. 15 refs., 48 figs.

  3. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound both for therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n  =  4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. 
Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.
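
For a narrow-band source like the one in this study, the passive beamforming step can be sketched with phasors: each receiver's signal carries a phase set by its time of flight, and the image value at a grid point is the coherently summed energy after applying that point's steering phases. A uniform sound speed and made-up array geometry stand in for the study's setup; a CT-derived per-element skull delay correction would simply be added to the time-of-flight term.

```python
import numpy as np

# Toy narrow-band delay-and-sum (phase-conjugate) passive beamformer.
c = 1500.0                                   # m/s, assumed uniform medium
f0 = 516e3                                   # Hz, source frequency from the study
rng = np.random.default_rng(7)

receivers = rng.uniform(-0.15, 0.15, (128, 3))   # sparse array positions, m
source = np.array([0.01, -0.02, 0.03])           # true emitter location, m

tof = np.linalg.norm(receivers - source, axis=1) / c
phasors = np.exp(-2j * np.pi * f0 * tof)         # idealized received data

def beamform(point):
    delays = np.linalg.norm(receivers - point, axis=1) / c
    return abs(np.sum(phasors * np.exp(2j * np.pi * f0 * delays))) ** 2

on = beamform(source)                            # focus at the true source
off = beamform(source + np.array([0.05, 0.0, 0.0]))
print(f"focal/off-focus energy ratio: {on / off:.1f}")
```

Uncorrected skull-induced phase errors degrade the coherent sum at the true source location, which is why the image-quality metrics in the study (main lobe volume, positional shift, sidelobe ratio) improve with accurate CT-based corrections.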

  4. Finite element validation of stress intensity factor calculation models for thru-thickness and thumb-nail cracks in double edge notch specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beres, W.; Koul, A.K.

    1994-09-01

    Stress intensity factors for thru-thickness and thumb-nail cracks in the double edge notch specimens, containing two different notch radius (R) to specimen width (W) ratios (R/W = 1/8 and 1/16), are calculated through finite element analysis. The finite element results are compared with predictions based on existing empirical models for SIF calculations. The effects of a change in R/W ratio on SIF of thru-thickness and thumb-nail cracks are also discussed. 34 refs.

  5. Correlation of predicted and measured thermal stresses on an advanced aircraft structure with dissimilar materials. [hypersonic heating simulation

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    Additional information was added to a growing data base from which estimates of finite element model complexities can be made with respect to thermal stress analysis. The manner in which temperatures were smeared to the finite element grid points was examined from the point of view of the impact on thermal stress calculations. The general comparison of calculated and measured thermal stresses is quite good and there is little doubt that the finite element approach provided by NASTRAN results in correct thermal stress calculations. Discrepancies did exist between measured and calculated values in the skin and the skin/frame junctures. The problems with predicting skin thermal stress were attributed to inadequate temperature inputs to the structural model rather than modeling insufficiencies. The discrepancies occurring at the skin/frame juncture were most likely due to insufficient modeling elements rather than temperature problems.

  6. SU-F-T-409: Modelling of the Magnetic Port in Temporary Breast Tissue Expanders for a Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, J; Heins, D; Zhang, R

    Purpose: To model the magnetic port in temporary breast tissue expanders and to improve the accuracy of dose calculation in Pinnacle, a commercial treatment planning system (TPS). Methods: A magnetic port in the tissue expander was modeled on a radiological measurement basis; we determined the dimensions and the density of the model from film images and from ion chamber measurements under the magnetic port, respectively. The model was then evaluated for various field sizes and photon energies by comparing depth dose values calculated by the TPS (using our new model) with ion chamber measurements in a water tank. The model was further evaluated using a simplified anthropomorphic phantom with realistic geometry by placing thermoluminescent dosimeters (TLDs) around the magnetic port. Dose perturbations in a real patient's treatment plan from the new model and from a current clinical model, which is based on subjective contouring by the dosimetrist, were also compared. Results: Dose calculations based on our model showed less than 1% difference from ion chamber measurements for various field sizes and energies under the magnetic port when the magnetic port was placed parallel to the phantom surface. When it was placed perpendicular to the phantom surface, the maximum difference was 3.5%, while average differences were less than 3.1% for all cases. For the simplified anthropomorphic phantom, the calculated point doses agreed with TLD measurements within 5.2%. By comparison with the current clinical model used by the TPS, it was found that the current model overestimates the effect of the magnetic port. Conclusion: Our new model showed good agreement with measurement for all cases. It could potentially improve the accuracy of dose delivery to breast cancer patients.

  7. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  8. Development of an inpatient operational pharmacy productivity model.

    PubMed

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models, one based on the Case Mix Index (CMI) and the other on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models, and it may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
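    The core of the WV calculation is a weighted sum of verified orders per medication class, which is then correlated against another workload index. A minimal sketch (the class names and complexity weights below are hypothetical, not the published time standards):

```python
# Hypothetical medication-complexity weights (e.g., minutes of effort per order).
weights = {"oral_solid": 1.0, "iv_admixture": 3.5, "chemotherapy": 6.0}

def weighted_verifications(order_counts):
    """Sum of (orders verified in class) x (class complexity weight)."""
    return sum(weights[cls] * n for cls, n in order_counts.items())

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two workload series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# One month of hypothetical verified-order counts:
wv = weighted_verifications({"oral_solid": 120, "iv_admixture": 40, "chemotherapy": 5})
```

    In practice a monthly series of WV totals would be compared with the CMI- or PIS-based series via `pearson_r`.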

  9. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on finite-difference solutions of the nonlinear Boltzmann kinetic equation and on solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of method depends on the problem to be solved and on the parameters of the calculated flows. Qualitative theoretical arguments help to determine the range of parameters over which each method is effective; however, to confirm this reasoning quantitatively, it is advisable to compare calculations of classical problems performed by the different methods with different parameters. The paper provides the results of calculations performed by the authors with the Direct Simulation Monte Carlo method and with finite-difference methods for solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are drawn on selecting a particular method for flow simulations in various ranges of flow parameters.

  10. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme rests on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  11. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. First, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
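    The copula-based transition probability can be sketched with a Clayton copula, one member of the Archimedean family: for adjacent-period inflows with uniform marginals u and v, the conditional CDF is P(V <= v | U = u) = dC(u, v)/du. This is a generic illustration of the construction, not the authors' code, and the parameter value theta is arbitrary:

```python
def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_conditional(v, u, theta):
    """Conditional CDF P(V <= v | U = u) = dC(u, v)/du for the Clayton copula."""
    return u ** (-theta - 1.0) * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 1.0)

def transition_prob(v1, v2, u, theta):
    """Probability of next-period inflow falling in (v1, v2], given U = u."""
    return clayton_conditional(v2, u, theta) - clayton_conditional(v1, u, theta)
```

    Binning the v-axis and evaluating `transition_prob` for each bin and each conditioning value u yields one row of a transition probability matrix.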

  12. Evaluating BTEX concentration in soil using a simple one-dimensional vadose zone model: application to a new fuel station in Valencia (Spain)

    NASA Astrophysics Data System (ADS)

    Rodrigo-Ilarri, Javier; Rodrigo-Clavero, María-Elena

    2017-04-01

    Specific studies of the impact of fuel spills on the vadose zone are currently required when applying for environmental permits for new fuel stations. One-dimensional mathematical models of the fate and transport of BTEX in the vadose zone can therefore be used to understand the behavior of the pollutants under different scenarios. VLEACH, a simple One-Dimensional Finite-Difference Vadose Zone Leaching Model, uses a numerical approximation of the Millington equation, a theoretically based model for gaseous diffusion in porous media. This equation has been widely used in the fields of soil physics and hydrology to calculate gaseous or vapor diffusion in porous media. The model describes the movement of organic contaminants within and between three different phases: (1) as a solute dissolved in water, (2) as a gas in the vapor phase, and (3) as an adsorbed compound in the soil phase. Initially, the equilibrium distribution of contaminant mass between the liquid, gas, and sorbed phases is calculated. Transport processes are then simulated. Liquid advective transport is calculated based on user-defined values for infiltration and soil water content. The contaminant in the vapor phase migrates into or out of adjacent cells based on the calculated concentration gradients between adjacent cells. After mass is exchanged between the cells, the total mass in each cell is recalculated and re-equilibrated between the different phases. At the end of the simulation, (1) an overall area-weighted groundwater impact for the entire modeled area and (2) the concentration profile of BTEX in the vadose zone are calculated. This work shows the results obtained when applying VLEACH to analyze the contamination scenario caused by a BTEX spill from a set of future underground storage tanks at a new fuel station in Aldaia (Valencia region, Spain).
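    The three-phase equilibrium step that models like VLEACH perform before transport can be illustrated with linear partitioning: the sorbed and vapor concentrations are tied to the dissolved concentration by a sorption coefficient Kd and a dimensionless Henry constant H, so the total mass per unit cell volume is M = Cw*(theta_w + H*theta_a + Kd*rho_b). A minimal sketch under that linear-equilibrium assumption (the parameter values are made up for illustration, not BTEX-specific data):

```python
def partition(total_mass, theta_w, theta_a, kd, rho_b, henry):
    """Split total contaminant mass per unit volume into the three phases.

    Returns (Cw, Cgas, Csorbed): dissolved, vapor, and sorbed concentrations,
    assuming linear equilibrium Cgas = H*Cw and Csorbed = Kd*Cw.
    """
    cw = total_mass / (theta_w + henry * theta_a + kd * rho_b)
    return cw, henry * cw, kd * cw

# Illustrative cell: water content 0.25, air content 0.15, Kd = 0.5 mL/g,
# bulk density 1.6 g/cm^3, dimensionless Henry constant 0.22.
cw, cg, cs = partition(total_mass=10.0, theta_w=0.25, theta_a=0.15,
                       kd=0.5, rho_b=1.6, henry=0.22)
```

    Re-equilibration after each transport step is just a repeat call of this split with the updated total mass.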

  13. A model of the atmospheric metal deposition by cosmic dust particles

    NASA Astrophysics Data System (ADS)

    McNeil, W. J.

    1993-11-01

    We have developed a model of the deposition of meteoric metals in Earth's atmosphere. The model takes as input the total mass influx of material to the Earth and calculates the deposition rate at all altitudes through solution of the drag and sublimation equations in a Monte Carlo-type computation. The diffusion equation is then solved to give the steady-state concentration of the complexes of a specific metal species, and kinetics are added to calculate the concentrations of the individual complexes. Concentrating on sodium, we calculate the Na(D) nightglow predicted by the model, and by introducing seasonal variations in lower tropospheric ozone based on experimental results, we are able to duplicate the seasonal variation of mid-latitude nightglow data.
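    A minimal sketch of the classical single-body drag and ablation (sublimation) equations that this kind of deposition calculation integrates, here with an exponential atmosphere, vertical entry, and forward-Euler stepping. This is not the authors' code, and every constant (drag coefficient, heat-transfer coefficient, heat of ablation, particle size) is illustrative:

```python
import math

GAMMA, LAMBDA, Q = 1.0, 0.5, 8.0e6   # drag coeff., heat-transfer coeff., heat of ablation (J/kg)
RHO_M = 3000.0                        # meteoroid density (kg/m^3)

def air_density(h):
    """Exponential atmosphere with a ~7 km scale height (kg/m^3)."""
    return 1.225 * math.exp(-h / 7000.0)

def step(h, v, m, dt):
    """One Euler step of the drag (dv/dt) and ablation (dm/dt) equations."""
    area = math.pi * (3.0 * m / (4.0 * math.pi * RHO_M)) ** (2.0 / 3.0)
    rho = air_density(h)
    dv = -GAMMA * rho * area * v * v / m          # drag deceleration
    dm = -LAMBDA * rho * area * v ** 3 / (2.0 * Q)  # ablation from kinetic energy flux
    return h - v * dt, v + dv * dt, max(m + dm * dt, 0.0)

# A 1-microgram particle entering vertically at 120 km with 30 km/s.
h, v, m = 120e3, 30e3, 1e-9
while m > 0.0 and h > 0.0:
    h, v, m = step(h, v, m, 1e-3)
```

    The altitude at which m reaches zero is where this particle deposits its metal; sampling many particles gives the deposition profile.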

  14. PNS calculations for 3-D hypersonic corner flow with two turbulence models

    NASA Technical Reports Server (NTRS)

    Smith, Gregory E.; Liou, May-Fun; Benson, Thomas J.

    1988-01-01

    A three-dimensional parabolized Navier-Stokes code has been used as a testbed to investigate two turbulence models, the McDonald-Camarata and Bushnell-Beckwith models, in the hypersonic regime. The Bushnell-Beckwith form-factor correction to the McDonald-Camarata mixing-length model has been extended to three-dimensional flow by inverse averaging of the resultant length-scale contributions from each wall. Two-dimensional calculations are compared with experiment for Mach 18 helium flow over a 4-deg wedge. Corner flow calculations have been performed at Mach 11.8 for a Reynolds number of 0.67 x 10^6, based on the duct half-width, and a freestream stagnation temperature of 1750 deg Rankine.

  15. Application of numerical method in calculating the internal rate of return of joint venture investment using diminishing musyarakah model

    NASA Astrophysics Data System (ADS)

    Ruslan, Siti Zaharah Mohd; Jaffar, Maheran Mohd

    2017-05-01

    Islamic banking in Malaysia offers a variety of products based on Islamic principles. One of these is the diminishing musyarakah, a concept that helps Muslims avoid transactions based on riba. A diminishing musyarakah can be defined as an agreement between a capital provider and an entrepreneur that enables the entrepreneur to buy out the equity in instalments, with profits and losses shared in an agreed ratio. The objective of this paper is to determine the internal rate of return (IRR) for a diminishing musyarakah model by applying a numerical method. There are several numerical methods for calculating the IRR, such as interpolation or trial and error using Microsoft Office Excel. In this paper we use the bisection method and the secant method as alternative ways of calculating the IRR. It was found that the diminishing musyarakah model can be adapted for managing the performance of joint venture investments. This paper may therefore encourage more companies to use the joint venture concept in managing their investment performance.
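    The IRR is the rate at which the net present value (NPV) of the cash-flow series crosses zero, so root-finding methods such as bisection apply directly. A minimal sketch (the cash flows below are illustrative, not from the paper):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of period t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisection(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """Bisection root-finding: the IRR is the rate where NPV changes sign."""
    f_lo = npv(lo, cashflows)
    if f_lo * npv(hi, cashflows) > 0:
        raise ValueError("NPV does not change sign on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) * f_lo > 0:
            lo, f_lo = mid, npv(mid, cashflows)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: invest 1000, then receive 500 at the end of each of 3 periods.
rate = irr_bisection([-1000.0, 500.0, 500.0, 500.0])
```

    The secant method mentioned in the abstract replaces the midpoint with the zero of the secant line through the last two NPV evaluations; it usually converges faster but, unlike bisection, is not guaranteed to stay bracketed.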

  16. Monte Carlo approach in assessing damage in higher order structures of DNA

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Schmidt, J. B.; Holley, W. R.

    1994-01-01

    We have developed a computer model of nuclear DNA in the form of chromatin fibre. The fibres are modeled as an ideal solenoid consisting of twenty helical turns with six nucleosomes per turn. The chromatin model, in combination with a Monte Carlo theory of radiation damage induced by charged particles based on general features of track structure and stopping power theory, has been used to evaluate the influence of DNA structure on initial damage. An interesting result has emerged from our calculations: they predict the existence of strong spatial correlations in damage sites associated with the symmetries of the solenoidal model. We have calculated spectra of short fragments of double-stranded DNA produced by multiple double-strand breaks induced by both high- and low-LET radiation. The spectra exhibit peaks at multiples of approximately 85 base pairs (the nucleosome periodicity) and approximately 1000 base pairs (the solenoid periodicity). Preliminary experiments investigating the fragment distributions from irradiated DNA, made by B. Rydberg at Lawrence Berkeley Laboratory, confirm the existence of short DNA fragments and are in substantial agreement with the predictions of our theory.

  17. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method in which the appearance model is described by double templates based on a timed motion history image with HSV color histogram features (tMHI-HSV). The main components include offline and online template initialization, tMHI-HSV-based calculation of candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. First, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Second, the tMHI-HSV is used to segment the motion region, and the color histograms of the candidate object patches are calculated to represent their appearance models. Finally, we use the DTM method to trace the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
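    The histogram-based appearance matching at the heart of such trackers can be sketched in a few lines: describe a patch by a normalized hue histogram and score candidates against a template with the Bhattacharyya coefficient (1.0 means identical histograms). This is a generic pure-Python illustration, not the authors' implementation, and the hue lists are invented:

```python
def hue_histogram(hues, bins=16):
    """Normalized histogram of hue values given in degrees [0, 360)."""
    hist = [0.0] * bins
    for h in hues:
        hist[int(h / 360.0 * bins) % bins] += 1.0
    total = sum(hist) or 1.0
    return [c / total for c in hist]

def bhattacharyya(p, q):
    """Similarity of two normalized histograms: sum of sqrt(p_i * q_i)."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

# Hypothetical pixel hues for a template patch and a candidate patch:
template = hue_histogram([10, 12, 200, 205, 210])
candidate = hue_histogram([11, 13, 198, 207, 212])
score = bhattacharyya(template, candidate)
```

    A tracker would evaluate this score for every candidate patch in the segmented motion region and pick the best match.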

  18. Thermodynamic description of multicomponent nickel-base superalloys containing aluminum, chromium, ruthenium and platinum: A computational thermodynamic approach coupled with experiments

    NASA Astrophysics Data System (ADS)

    Zhu, Jun

    Ru and Pt are candidate additional components for improving the high-temperature properties of Ni-base superalloys. A thermodynamic description of the Ni-Al-Cr-Ru-Pt system, serving as an essential knowledge base for better alloy design and processing control, was developed in the present study by means of thermodynamic modeling coupled with experimental investigations of phase equilibria. To deal with the order/disorder transition occurring in Ni-base superalloys, a physically sound model, the Cluster/Site Approximation (CSA), was used to describe the fcc phases. The CSA offers computational advantages, without loss of accuracy, over the Cluster Variation Method (CVM) in the calculation of multicomponent phase diagrams, and it has been successfully applied to fcc phases in calculating technologically important Ni-Al-Cr phase diagrams. Our effort in this study focused on two key ternary systems: Ni-Al-Ru and Ni-Al-Pt. The CSA-calculated Ni-Al-Ru ternary phase diagrams are in good agreement with the experimental results in the literature and from the current study. A thermodynamic description of the quaternary Ni-Al-Cr-Ru system was obtained based on the descriptions of the lower-order systems, and the calculated results agree with experimental data available in the literature and in the current study. The Ni-Al-Pt system was thermodynamically modeled based on the limited experimental data available in the literature and obtained from the current study. With the help of this preliminary description, a number of alloy compositions were selected for further investigation, and the information obtained was used to improve the modeling. A thermodynamic description of the Ni-Al-Cr-Pt quaternary was then obtained via extrapolation from its constituent lower-order systems, and the description for Ni-base superalloys containing Al, Cr, Ru and Pt was obtained via extrapolation in turn. It is believed to be reliable and useful in guiding alloy design and further experimental investigation.

  19. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
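    The Point-Kernel dose estimate with a Geometric-Progression (GP) buildup factor can be sketched as follows. The GP coefficients (b, c, a, Xk, d) here are placeholders rather than fitted values for any real material and energy, so this only illustrates the functional form, not the VRBAM implementation:

```python
import math

def gp_buildup(mfp, b, c, a, xk, d):
    """Geometric-progression buildup factor at a depth of `mfp` mean free paths."""
    k = c * mfp ** a + d * (math.tanh(mfp / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(k - 1.0) < 1e-12:
        return 1.0 + (b - 1.0) * mfp
    return 1.0 + (b - 1.0) * (k ** mfp - 1.0) / (k - 1.0)

def point_kernel_dose(source, mu, r, gp_coeffs):
    """Point-kernel response: S * B(mu r) * exp(-mu r) / (4 pi r^2).
    Flux-to-dose conversion factors are omitted for brevity."""
    mfp = mu * r
    return source * gp_buildup(mfp, *gp_coeffs) * math.exp(-mfp) / (4.0 * math.pi * r * r)

gp = (2.0, 1.3, 0.1, 15.0, 0.05)        # placeholder GP coefficients
near = point_kernel_dose(1.0e9, 0.1, 50.0, gp)
far = point_kernel_dose(1.0e9, 0.1, 100.0, gp)
```

    A full point-kernel calculation sums this contribution over all source point kernels, which is why the paper's adaptive simplification of the point-cloud model pays off for complex geometries.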

  20. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of integrated computational materials engineering (ICME) and the Materials Genome Initiative is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, we present our recent efforts to develop new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulations.

  1. Alchemical prediction of hydration free energies for SAMPL

    PubMed Central

    Mobley, David L.; Liu, Shuai; Cerutti, David S.; Swope, William C.; Rice, Julia E.

    2013-01-01

    Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475

  2. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.

  3. A fluctuating quantum model of the CO vibration in carboxyhemoglobin.

    PubMed

    Falvo, Cyril; Meier, Christoph

    2011-06-07

    In this paper, we present a theoretical approach to construct a fluctuating quantum model of the CO vibration in heme-CO proteins and its interaction with external laser fields. The methodology consists of mixed quantum-classical calculations for a restricted number of snapshots, which are then used to construct a parametrized quantum model. As an example, we calculate the infrared absorption spectrum of carboxyhemoglobin based on a simplified protein model and find the absorption linewidth in good agreement with the experimental results. © 2011 American Institute of Physics

  4. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique in today's semiconductor manufacturing process. In order to simulate an entire chip in realistic time, a compact resist model is commonly used; the model is formulated for fast calculation. An accurate compact resist model requires fitting a complicated non-linear model function, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), a deep learning technique. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  5. Development of a numerical model for calculating exposure to toxic and nontoxic stressors in the water column and sediment from drilling discharges.

    PubMed

    Rye, Henrik; Reed, Mark; Frost, Tone Karin; Smit, Mathijs G D; Durgut, Ismail; Johansen, Øistein; Ditlevsen, May Kristin

    2008-04-01

    Drilling discharges are complex mixtures of chemical components and particles which might lead to toxic and nontoxic stress in the environment. In order to be able to evaluate the potential environmental consequences of such discharges in the water column and in sediments, a numerical model was developed. The model includes water column stratification, ocean currents and turbulence, natural burial, bioturbation, and biodegradation of organic matter in the sediment. Accounting for these processes, the fate of the discharge is modeled for the water column, including near-field mixing and plume motion, far-field mixing, and transport. The fate of the discharge is also modeled for the sediment, including sea floor deposition, and mixing due to bioturbation. Formulas are provided for the calculation of suspended matter and chemical concentrations in the water column, and burial, change in grain size, oxygen depletion, and chemical concentrations in the sediment. The model is fully 3-dimensional and time dependent. It uses a Lagrangian approach for the water column based on moving particles that represent the properties of the release and an Eulerian approach for the sediment based on calculation of the properties of matter in a grid. The model will be used to calculate the environmental risk, both in the water column and in sediments, from drilling discharges. It can serve as a tool to define risk mitigating measures, and as such it provides guidance towards the "zero harm" goal.
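    The Lagrangian water-column approach can be illustrated in one dimension: each particle carries a share of the released mass and moves by current advection plus a random-walk step whose standard deviation, sqrt(2*K*dt), reproduces a turbulent diffusivity K. This is a generic textbook sketch, not the authors' model, and all values are illustrative:

```python
import random

def advect_diffuse(positions, u, diff, dt, steps, seed=1):
    """Advance particle x-positions: advection at speed u (m/s) plus a
    Gaussian random-walk step equivalent to diffusivity diff (m^2/s)."""
    rng = random.Random(seed)
    for _ in range(steps):
        positions = [x + u * dt + rng.gauss(0.0, (2.0 * diff * dt) ** 0.5)
                     for x in positions]
    return positions

# 2000 particles released at x = 0, current 0.1 m/s, K = 1 m^2/s,
# 10 steps of 60 s each:
cloud = advect_diffuse([0.0] * 2000, u=0.1, diff=1.0, dt=60.0, steps=10)
mean = sum(cloud) / len(cloud)
```

    Concentrations are then recovered by binning particle positions (and masses) onto a grid, which is where the Eulerian sediment grid in the paper's scheme takes over at the sea floor.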

  6. An Adaptive Nonlinear Basal-Bolus Calculator for Patients With Type 1 Diabetes

    PubMed Central

    Boiroux, Dimitri; Aradóttir, Tinna Björk; Nørgaard, Kirsten; Poulsen, Niels Kjølstad; Madsen, Henrik; Jørgensen, John Bagterp

    2016-01-01

    Background: Bolus calculators help patients with type 1 diabetes to mitigate the effect of meals on their blood glucose by administering a large amount of insulin at mealtime. Intraindividual changes in patients' physiology and the nonlinearity of insulin-glucose dynamics pose a challenge to the accuracy of such calculators. Method: We propose a method based on a continuous-discrete unscented Kalman filter to continuously track the postprandial glucose dynamics and the insulin sensitivity. We augment the Medtronic Virtual Patient (MVP) model to simulate noise-corrupted data from a continuous glucose monitor (CGM). The basal rate is determined by calculating the steady state of the model and is adjusted once a day before breakfast. The bolus size is determined by optimizing the postprandial glucose values based on an estimate of the insulin sensitivity and states, as well as the announced meal size. Following meal announcements, the meal compartment and the meal time constant are estimated; otherwise, insulin sensitivity is estimated. Results: We compare the performance of a conventional linear bolus calculator with the proposed bolus calculator. The proposed basal-bolus calculator significantly improves the time spent in the glucose target range (P < .01) compared to the conventional bolus calculator. Conclusion: An adaptive nonlinear basal-bolus calculator can efficiently compensate for physiological changes. Further clinical studies will be needed to validate the results. PMID:27613658
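    The conventional linear bolus calculator used as the comparison baseline combines a carbohydrate term and a correction term. A minimal sketch (the parameter values are illustrative, not from the study):

```python
def linear_bolus(carbs_g, glucose, target, icr, isf, iob=0.0):
    """Meal bolus (U): carbs/ICR + (glucose - target)/ISF - insulin on board.

    ICR: grams of carbohydrate covered per unit of insulin.
    ISF: mg/dL of glucose lowered per unit of insulin.
    """
    meal = carbs_g / icr
    correction = (glucose - target) / isf
    return max(meal + correction - iob, 0.0)  # never recommend a negative bolus

# 60 g meal, glucose 180 mg/dL vs. target 110, ICR 10 g/U, ISF 35 mg/dL/U:
dose = linear_bolus(60.0, 180.0, 110.0, icr=10.0, isf=35.0)
```

    The adaptive method in the paper effectively replaces the fixed ICR/ISF parameters with state and insulin-sensitivity estimates tracked by the unscented Kalman filter.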

  7. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 model [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
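    The pairwise GB energy that the treecode accelerates is commonly written in the Still functional form, with f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2/(4*Ri*Rj))) interpolating between the Coulomb and Born limits. A direct O(N^2) sketch of that sum (Coulomb constant omitted; charges, Born radii, and coordinates are illustrative, and this is not the tGB code):

```python
import math

def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=80.0):
    """Still-form GB polarization energy over all pairs, including i == j
    self-terms (f_GB reduces to the Born radius when r = 0)."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            dx = [a - b for a, b in zip(coords[i], coords[j])]
            r2 = sum(d * d for d in dx)
            rirj = born_radii[i] * born_radii[j]
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            e += pref * charges[i] * charges[j] / f_gb
    return e

# Two opposite unit charges 3 A apart, both with Born radius 1.5 A:
e = gb_energy([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)], [1.0, -1.0], [1.5, 1.5])
```

    The treecode replaces this double loop with a hierarchical far-field approximation, which is what brings the cost from O(N^2) down to O(N log N).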

  8. Use of a Microsoft Excel based add-in program to calculate plasma sinistrin clearance by a two-compartment model analysis in dogs.

    PubMed

    Steinbach, Sarah M L; Sturgess, Christopher P; Dunning, Mark D; Neiger, Reto

    2015-06-01

    Assessment of renal function by means of plasma clearance of a suitable marker has become a standard procedure for estimating glomerular filtration rate (GFR). Sinistrin, a polyfructan cleared solely by the kidney, is often used for this purpose. Pharmacokinetic modeling using adequate software is necessary to calculate the disappearance rate and half-life of sinistrin. The purpose of this study was to describe the use of a Microsoft Excel-based add-in program to calculate plasma sinistrin clearance, as well as additional pharmacokinetic parameters such as transfer rates (k), half-life (t1/2) and volume of distribution (Vss), for sinistrin in dogs with varying degrees of renal function. Copyright © 2015 Elsevier Ltd. All rights reserved.
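    For a two-compartment model after an IV bolus, the fitted plasma curve is a biexponential C(t) = A*exp(-a*t) + B*exp(-b*t), from which clearance = Dose/AUC and Vss = Dose*AUMC/AUC^2 follow directly. A minimal sketch of that post-fit arithmetic (the coefficients below are illustrative, not canine sinistrin data):

```python
def pk_two_compartment(dose, A, a, B, b):
    """Return (clearance, Vss, terminal half-life) from a biexponential fit
    C(t) = A*exp(-a*t) + B*exp(-b*t) after an IV bolus."""
    auc = A / a + B / b                 # area under the curve, 0..infinity
    aumc = A / a ** 2 + B / b ** 2      # area under the first-moment curve
    cl = dose / auc                     # plasma clearance
    vss = dose * aumc / auc ** 2        # steady-state volume of distribution
    t_half = 0.693147 / min(a, b)       # half-life of the slower (terminal) phase
    return cl, vss, t_half

# Illustrative fit: dose 500, A = 40, a = 0.12 /min, B = 10, b = 0.02 /min.
cl, vss, t_half = pk_two_compartment(dose=500.0, A=40.0, a=0.12, B=10.0, b=0.02)
```

    Fitting A, a, B, b to measured concentrations (e.g., by nonlinear least squares in a solver add-in) is the step the Excel program automates; the formulas above then convert the fit into the reported parameters.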

  9. A fast GPU-based Monte Carlo simulation of proton transport with detailed modeling of nonelastic interactions.

    PubMed

    Wan Chan Tseung, H; Ma, J; Beltran, C

    2015-06-01

    Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is ~20 s for 1 × 10^7 proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS.
Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.

  10. The Role of Scale and Model Bias in ADAPT's Photospheric Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas

    2015-05-20

    The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT is intended to provide a global map of the Sun's photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation to Kalman filtering, is used in the ADAPT calculations.
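    The EnKF analysis step can be sketched as follows; this is a generic stochastic EnKF for a single scalar observation, not the ADAPT implementation, and all dimensions and values are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """Stochastic Ensemble Kalman Filter update.

    ensemble: (n_members, n_state) forecast ensemble
    obs: scalar observation; obs_var: its error variance
    H: (n_state,) linear observation operator (y = H @ x)
    """
    X = np.asarray(ensemble, dtype=float)
    n = X.shape[0]
    Hx = X @ H                      # (n_members,) predicted observations
    x_mean = X.mean(axis=0)
    y_mean = Hx.mean()
    # Ensemble-estimated covariances
    P_xy = (X - x_mean).T @ (Hx - y_mean) / (n - 1)   # (n_state,)
    P_yy = np.var(Hx, ddof=1) + obs_var
    K = P_xy / P_yy                 # Kalman gain, (n_state,)
    # Perturbed observations keep the analysis spread consistent
    y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return X + np.outer(y_pert - Hx, K)

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, size=(50, 3))
H = np.array([1.0, 0.0, 0.0])
analysis = enkf_update(ens, obs=2.0, obs_var=0.1, H=H, rng=rng)
```

    The analysis mean of the observed component is pulled toward the observation in proportion to the ratio of ensemble variance to total variance.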

  11. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, big gaps remain between the current state of research and predictive modeling due to the inherent complexity of plasma processes. To address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model is proposed that accounts for simultaneous polymer deposition and oxide etching. This surface model supplies the speed function for the 3D topology simulation, which consists of a multiple level-set moving algorithm and a ballistic transport module. The time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, approaching real-time performance. Finally, we demonstrate that the surface kinetic model can be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
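    The level-set moving step mentioned above can be illustrated with a 1D upwind (Godunov) update of φ_t + F|∇φ| = 0; this is a schematic sketch with periodic boundaries and an illustrative speed F, not the paper's 3D multiple-level-set code:

```python
import numpy as np

def level_set_step(phi, speed, dx, dt):
    """One explicit upwind step of phi_t + F*|grad phi| = 0 in 1D,
    assuming F >= 0, with periodic boundaries via np.roll."""
    dminus = (phi - np.roll(phi, 1)) / dx    # backward difference
    dplus = (np.roll(phi, -1) - phi) / dx    # forward difference
    # Godunov upwind gradient magnitude for F >= 0
    grad = np.sqrt(np.maximum(dminus, 0.0) ** 2 + np.minimum(dplus, 0.0) ** 2)
    return phi - dt * speed * grad

# Signed distance to an interface at x = 0.5; the front advances in +x
x = np.linspace(0.0, 1.0, 101)
phi = x - 0.5
phi_new = level_set_step(phi, speed=1.0, dx=0.01, dt=0.005)
```

    The zero level set (the etch front) moves by F·dt per step; in the etch simulation F comes from the surface kinetic model.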

  12. A novel growth mode of Physarum polycephalum during starvation

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Oettmeier, Christina; Döbereiner, Hans-Günther

    2018-06-01

    Organisms constantly forage and respond to various environmental cues to maximize their chance of survival. This is reflected in the unicellular organism Physarum polycephalum, which is known to grow as an optimized network. Here, we describe a new growth pattern of the Physarum mesoplasmodium, in which sheet-like motile bodies termed ‘satellites’ are formed. This non-network pattern formation is induced only when nutrients are scarce, suggesting that it is a type of emergency response. Our goal is to construct a model describing the behaviour of satellites based on negative chemotaxis. We propose a diffusion-based model in which a signal molecule is detected above a threshold concentration, and we calculate how far the satellites must travel before the concentration falls below that threshold. The calculated distances are in good agreement with the distances at which satellites stop. Based on Akaike weight analysis, our threshold model is at least 2.3 times more likely than the other models we considered. From the model, we estimate the diffusion coefficient of this molecule, which corresponds to that of typical signalling molecules.
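    As a hedged illustration of the threshold idea, assume a steady-state point source with C(r) = Q/(4πDr); a satellite then stops where C drops below the detection threshold. The values of Q, D, and c_threshold below are illustrative, not the paper's fitted parameters:

```python
import math

def stopping_distance(Q, D, c_threshold):
    """Distance at which the steady-state concentration around a point
    source, C(r) = Q / (4*pi*D*r), falls below the detection threshold."""
    return Q / (4.0 * math.pi * D * c_threshold)

# The stopping distance scales inversely with the threshold concentration
r1 = stopping_distance(Q=1e-12, D=1e-9, c_threshold=1e-3)
r2 = stopping_distance(Q=1e-12, D=1e-9, c_threshold=2e-3)
```

    Doubling the threshold halves the stopping distance, which is the kind of scaling a model-comparison analysis can test against observed satellite trajectories.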

  13. TMAP: Tübingen NLTE Model-Atmosphere Package

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; Dreizler, Stefan; Rauch, Thomas

    2012-12-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) is a tool to calculate stellar atmospheres in spherical or plane-parallel geometry in hydrostatic and radiative equilibrium allowing departures from local thermodynamic equilibrium (LTE) for the population of atomic levels. It is based on the Accelerated Lambda Iteration (ALI) method and is able to account for line blanketing by metals. All elements from hydrogen to nickel may be included in the calculation with model atoms which are tailored for the aims of the user.

  14. Electronic structure and microscopic model of CoNb2O6

    NASA Astrophysics Data System (ADS)

    Molla, Kaimujjaman; Rahaman, Badiur

    2018-05-01

    We present first-principles density functional calculations to determine the underlying spin model of CoNb2O6. The first-principles calculations identify the main superexchange paths between Co spins in this compound. We discuss the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings. Microscopic modeling based on analysis of the electronic structure of this system places it in the interesting class of weakly coupled, geometrically frustrated isosceles triangular Ising antiferromagnets.

  15. Cosmic ray air shower characteristics in the framework of the parton-based Gribov-Regge model NEXUS

    NASA Astrophysics Data System (ADS)

    Bossard, G.; Drescher, H. J.; Kalmykov, N. N.; Ostapchenko, S.; Pavlov, A. I.; Pierog, T.; Vishnevskaya, E. A.; Werner, K.

    2001-03-01

    The purpose of this paper is twofold: first we want to introduce a new type of hadronic interaction model (NEXUS), which has a much more solid theoretical basis than, for example, presently used models such as QGSJET and VENUS, and ensures therefore a much more reliable extrapolation towards high energies. Secondly, we want to promote an extensive air shower (EAS) calculation scheme, based on cascade equations rather than explicit Monte Carlo simulations, which is very accurate in calculations of main EAS characteristics and extremely fast concerning computing time. We employ the NEXUS model to provide the necessary data on particle production in hadron-air collisions and present the average EAS characteristics for energies 10^14-10^17 eV. The experimental data of the CASA-BLANCA group are analyzed in the framework of the new model.

  16. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
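    For orientation, the Gaussian core of multiple Coulomb scattering is often estimated with the Highland approximation, θ0 = (13.6 MeV/βcp)·z·sqrt(x/X0)·[1 + 0.038 ln(x/X0)]; this is a standard simplification, not the full Molière treatment the paper uses, and the numbers below are illustrative:

```python
import math

def highland_theta0(beta_c_p_MeV, z, x_over_X0):
    """Highland approximation for the RMS multiple-scattering angle (rad).

    beta_c_p_MeV: beta*c*p of the particle in MeV
    z: projectile charge number; x_over_X0: thickness in radiation lengths
    """
    return (13.6 / beta_c_p_MeV) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0))

# Proton with beta*c*p ~ 250 MeV through 0.1 radiation lengths (illustrative)
theta = highland_theta0(beta_c_p_MeV=250.0, z=1, x_over_X0=0.1)
```

    The angle grows with traversed thickness in radiation lengths, which is why lateral spread accumulates with depth.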

  17. Geometric modeling of Plateau borders using the orthographic projection method for closed cell rigid polyurethane foam thermal conductivity prediction

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Wu, Tao; Peng, Chuang; Adegbite, Stephen

    2017-09-01

    A geometric Plateau border model for closed-cell polyurethane foam was developed based on volume integrations of an approximated 3D four-cusp hypocycloid structure. The tetrahedral structure of convex struts was orthogonally projected onto a 2D three-cusp deltoid with three central cylinders. The idealized single unit strut was modeled by superposition, and the volume of each component was calculated by geometric analysis. The strut solid fraction f_s and foam porosity coefficient δ were calculated based on representative elementary volumes of the Kelvin and Weaire-Phelan structures. The specific surface areas S_v derived from the packing structures and from the deltoid approximation model were compared as functions of the strut dimensional ratio ɛ. The characteristic foam parameters obtained from this semi-empirical model were then used to predict foam thermal conductivity.

  18. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.

  19. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN ADULT AND DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  20. A Meta-Analysis of Video-Modeling Based Interventions for Reduction of Challenging Behaviors for Students with EBD

    ERIC Educational Resources Information Center

    Losinski, Mickey; Wiseman, Nicole; White, Sherry A.; Balluch, Felicity

    2016-01-01

    The current study examined the use of video modeling (VM)-based interventions to reduce the challenging behaviors of students with emotional or behavioral disorders. Each study was evaluated using Council for Exceptional Children's (CEC's) quality indicators for evidence-based practices. In addition, study effects were calculated along the three…

  1. Implementation of Biofilm Permeability Models for Mineral Reactions in Saturated Porous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Saripalli, Kanaka P.; Bacon, Diana H.

    2005-02-22

    An approach based on continuous biofilm models is proposed for modeling permeability changes due to mineral precipitation and dissolution in saturated porous media. In contrast to the biofilm approach, implementation of the film depositional models within a reactive transport code requires a time-dependent calculation of the mineral films in the pore space. Two different methods for this calculation are investigated. The first method assumes a direct relationship between changes in mineral radii (i.e., surface area) and changes in the pore space. In the second method, an effective change in pore radii is calculated based on the relationship between permeability and grain size. Porous media permeability is determined by coupling the film permeability models (Mualem and Childs and Collis-George) to a volumetric model that incorporates both mineral density and reactive surface area. Results from single mineral dissolution and single mineral precipitation simulations provide reasonable estimates of permeability, though they underpredict the magnitude of permeability changes relative to the Kozeny-Carman model. However, a comparison of experimental and simulated data shows that the Mualem film model is the only one that can replicate the oscillations in permeability that occur as a result of simultaneous dissolution and precipitation reactions within the porous media.
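    The Kozeny-Carman reference model mentioned above ties permeability to porosity; a minimal sketch of the ratio form k/k0 = (φ/φ0)³·((1−φ0)/(1−φ))², with illustrative porosities rather than values from the simulations:

```python
def kozeny_carman_ratio(phi, phi0):
    """Permeability ratio k/k0 for a porosity change phi0 -> phi
    under the Kozeny-Carman relation."""
    return (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# Mineral precipitation reduces porosity, and permeability falls faster
ratio = kozeny_carman_ratio(phi=0.25, phi0=0.30)
```

    The cubic dependence on porosity is why modest precipitation can halve the permeability, as in this example.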

  2. Technical Note: spektr 3.0-A computational tool for x-ray spectrum modeling and analysis.

    PubMed

    Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H

    2016-08-01

    A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm^2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage differences arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
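    The added-filtration step amounts to applying Beer-Lambert attenuation to each energy bin of the spectrum; a schematic sketch in which the attenuation coefficients are made up for illustration (not NIST or spektr values):

```python
import numpy as np

def filter_spectrum(fluence, mu_over_rho, rho, thickness_cm):
    """Attenuate a binned spectrum through a filter (Beer-Lambert law).

    fluence: photons per energy bin; mu_over_rho: mass attenuation
    coefficients per bin (cm^2/g); rho: filter density (g/cm^3).
    """
    f = np.asarray(fluence, dtype=float)
    mu = np.asarray(mu_over_rho, dtype=float) * rho   # linear coefficient
    return f * np.exp(-mu * thickness_cm)

spectrum = np.array([1e5, 2e5, 3e5])   # photons in 3 energy bins (low to high)
mu_rho = np.array([0.5, 0.2, 0.1])     # cm^2/g; softer bins attenuate more
filtered = filter_spectrum(spectrum, mu_rho, rho=2.70, thickness_cm=0.2)
```

    Because low-energy bins attenuate more strongly, added filtration hardens the beam, which is what the optimization exploits to match measured output.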

  3. Large-Eddy Simulations of Radiatively Driven Convection: Sensitivities to the Representation of Small Scales.

    NASA Astrophysics Data System (ADS)

    Stevens, Bjorn; Moeng, Chin-Hoh; Sullivan, Peter P.

    1999-12-01

    Large-eddy simulations of a smoke cloud are examined with respect to their sensitivity to small scales as manifest in either the grid spacing or the subgrid-scale (SGS) model. Calculations based on a Smagorinsky SGS model are found to be more sensitive to the effective resolution of the simulation than are calculations based on the prognostic turbulent kinetic energy (TKE) SGS model. The difference between calculations based on the two SGS models is attributed to the advective transport, diffusive transport, and/or time-rate-of-change terms in the TKE equation. These terms are found to be leading order in the entrainment zone and allow the SGS TKE to behave in a way that tends to compensate for changes that result in larger or smaller resolved scale entrainment fluxes. This compensating behavior of the SGS TKE model is attributed to the fact that changes that reduce the resolved entrainment flux (viz., values of the eddy viscosity in the upper part of the PBL) simultaneously tend to increase the buoyant production of SGS TKE in the radiatively destabilized portion of the smoke cloud. Increased production of SGS TKE in this region then leads to increased amounts of transported, or fossil, SGS TKE in the entrainment zone itself, which in turn leads to compensating increases in the SGS entrainment fluxes. In the Smagorinsky model, the absence of a direct connection between SGS TKE in the entrainment and radiatively destabilized zones prevents this compensating mechanism from being active, and thus leads to calculations whose entrainment rate sensitivities as a whole reflect the sensitivities of the resolved-scale fluxes to values of upper PBL eddy viscosities.
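    The contrast between the two closures can be made concrete: the Smagorinsky viscosity is purely diagnostic, ν_t = (C_s Δ)²|S|, while the TKE closure, ν_t = C_k Δ √e, carries memory through the prognostic SGS TKE e that can be produced in one region and transported to another. The constants below are typical textbook values, not those of the simulations:

```python
def smagorinsky_viscosity(strain_rate_mag, grid_spacing, cs=0.17):
    """Smagorinsky SGS eddy viscosity: nu_t = (Cs * Delta)^2 * |S|.
    Depends only on the local, instantaneous resolved strain rate."""
    return (cs * grid_spacing) ** 2 * strain_rate_mag

def tke_viscosity(sgs_tke, grid_spacing, ck=0.1):
    """TKE-based SGS viscosity: nu_t = Ck * Delta * sqrt(e), where e is
    the prognostic SGS TKE, which may include transported ('fossil') TKE."""
    return ck * grid_spacing * sgs_tke ** 0.5

nu_smag = smagorinsky_viscosity(strain_rate_mag=10.0, grid_spacing=50.0)
nu_tke = tke_viscosity(sgs_tke=1.0, grid_spacing=50.0)
```

    In the entrainment zone, transported TKE keeps ν_t nonzero even where local production vanishes, which is the compensating mechanism the paper describes.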

  4. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. The reflectance model covers the values of spectral reflectance in the visible spectrum for Japanese women. The modified Lambert-Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The average RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, over this range.
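    The modified Lambert-Beer idea can be sketched as reflectance attenuated by layer-wise absorption along mean path lengths, R = R0·exp(-Σᵢ μₐ,ᵢ·lᵢ); the coefficients below are illustrative, not the paper's regression values:

```python
import math

def modified_lambert_beer(mu_a, path_length, r0=1.0):
    """Modified Lambert-Beer model of diffuse reflectance:
    R = R0 * exp(-sum_i mu_a_i * l_i), where l_i is the mean path
    length of light in layer i (wavelength-dependent in practice)."""
    return r0 * math.exp(-sum(m * l for m, l in zip(mu_a, path_length)))

# Two-layer skin: an epidermal (melanin) and a dermal (hemoglobin) term
r = modified_lambert_beer(mu_a=[0.5, 0.2], path_length=[0.3, 1.2])
```

    The regression then fits the effective path lengths so that this closed form reproduces the Monte Carlo reflectance samples.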

  5. Model Comparisons For Space Solar Cell End-Of-Life Calculations

    NASA Astrophysics Data System (ADS)

    Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel

    2011-10-01

    Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single and multijunction solar cell technologies. Two general semi-empirical approaches were used to generate these EOL calculation results: 1) the JPL equivalent fluence (EQFLUX) and 2) the NRL displacement damage dose (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.
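    The displacement-damage-dose approach condenses degradation into a single characteristic curve; a hedged sketch of the commonly used form RF = 1 − C·log10(1 + D/D_x), where the fit constants below are illustrative rather than NRL values:

```python
import math

def remaining_factor(dose, c, d_x):
    """Semi-empirical degradation curve used in displacement-damage-dose
    analyses: RF = 1 - C*log10(1 + D/Dx). C and Dx are fit parameters
    characterizing a given cell technology."""
    return 1.0 - c * math.log10(1.0 + dose / d_x)

# Remaining fraction of a cell parameter (e.g. Pmax) at a given dose
rf = remaining_factor(dose=1e10, c=0.07, d_x=1e9)
```

    The EOL calculation then reduces to converting the mission environment to a displacement damage dose and reading the remaining factor off this curve.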

  6. Assessment of the 3He pressure inside the CABRI transient rods - Development of a surrogate model based on measurements and complementary CFD calculations

    NASA Astrophysics Data System (ADS)

    Clamens, Olivier; Lecerf, Johann; Hudelot, Jean-Pascal; Duc, Bertrand; Cadiou, Thierry; Blaise, Patrick; Biard, Bruno

    2018-01-01

    CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA conditions. To produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) contained in transient rods inside the reactor core. The shape of a power transient depends on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by converting the 3He gas density into units of reactivity, so it is of utmost importance to properly model the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations and complemented with measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built from the CFD calculations and validated against preliminary tests in the CABRI transient system. The studies also show that the depressurization is harder to predict during the power transients because neutron/3He capture reactions heat the gas. This phenomenon can be studied with a multiphysics approach: reaction rates are calculated with a Monte Carlo code, and the resulting heating effect is studied with the validated CFD simulation.

  7. Numerical modelling of a fibre reflection filter based on a metal–dielectric diffraction structure with an increased optical damage threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terentyev, V S; Simonov, V A

    2016-02-28

    Numerical modelling demonstrates the possibility of fabricating an all-fibre multibeam two-mirror reflection interferometer based on a metal–dielectric diffraction structure in its front mirror. The calculations were performed using eigenmodes of a double-clad single-mode fibre. The calculation results indicate that, by using a metallic layer in the structure of the front mirror of such an interferometer together with a diffraction effect, one can reduce the Ohmic loss by a factor of several tens in comparison with a continuous thin metallic film. (laser crystals and Bragg gratings)

  8. A mathematical method for precisely calculating the radiographic angles of the cup after total hip arthroplasty.

    PubMed

    Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu

    2016-11-01

    We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup based on anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model of an oblique cone was established to simulate how AP pelvic radiographs are obtained and to relate the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex was the X-ray beam source, and the generatrix was the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas to reveal the differences between the true RA and RI cup angles and the measurement results obtained by traditional methods on AP pelvic radiographs, and to precisely calculate the RA and RI cup angles from post-operative AP pelvic radiographs. Statistical analysis indicated that traditional measurement methods should be used with caution when calculating the RA and RI cup angles from AP pelvic radiographs. The entire calculation process can be performed by an orthopedic surgeon with knowledge of basic matrix and vector equations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
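    The traditional measurement the paper cautions against can be sketched in one line: under an idealized parallel X-ray beam, the anteversion follows from the axes of the projected ellipse as RA = arcsin(b/a). This is the simplification whose distortions the paper's oblique-cone model corrects; the axis lengths below are illustrative:

```python
import math

def ra_angle_parallel_beam(short_axis, long_axis):
    """Radiographic anteversion (degrees) from the projected ellipse of
    the cup opening, RA = arcsin(b/a), assuming a parallel X-ray beam."""
    return math.degrees(math.asin(short_axis / long_axis))

# A 50 mm opening circle projecting to a 17.1 mm short axis
ra = ra_angle_parallel_beam(short_axis=17.1, long_axis=50.0)
```

    Because the real beam is a cone diverging from a point source, the projected ellipse is distorted away from the beam axis, which is where the parallel-beam formula loses accuracy.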

  9. Validation of a program for supercritical power plant calculations

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian

    2011-12-01

    This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using the in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers, and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values. The method used for the relative error calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
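    The relative-error comparison used in the validation can be sketched directly; the numbers below are illustrative, chosen to fall within the 0.1% bound reported above:

```python
def relative_error(reference, value):
    """Relative error (in percent) of a value against a reference."""
    return abs(value - reference) / abs(reference) * 100.0

# Net power from the two codes, e.g. 900.0 MW vs 900.5 MW (illustrative)
err = relative_error(reference=900.0e6, value=900.5e6)
```

    Applying this to each compared quantity (temperatures, pressures, powers, efficiencies) yields the error table the validation is based on.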

  10. Geoinformation modeling system for analysis of atmosphere pollution impact on vegetable biosystems using space images

    NASA Astrophysics Data System (ADS)

    Polichtchouk, Yuri; Ryukhko, Viatcheslav; Tokareva, Olga; Alexeeva, Mary

    2002-02-01

    The structure of a geoinformation modeling system for assessing the environmental impact of atmospheric pollution on the forest-swamp ecosystems of West Siberia is considered. A complex approach to assessing man-made impact, combining sanitary-hygienic and landscape-geochemical methods, is reported. Methodological problems in analyzing the impact of atmospheric pollution on vegetation using geoinformation systems and remote sensing data are addressed. The landscape structure of oil-production territories in the southern part of West Siberia is determined by processing space images from the spaceborne Resource-O satellite. Particular attention is given to modeling the atmospheric pollution zones caused by gas flaring in oil-field territories; for instance, pollution zones were delineated by modeling contaminant dispersal in the atmosphere with a standard model. Polluted landscape areas are calculated as a function of oil production volume, and the calculated data are shown to be well approximated by polynomial models.

  11. Linear Discriminant Analysis for the in Silico Discovery of Mechanism-Based Reversible Covalent Inhibitors of a Serine Protease: Application of Hydration Thermodynamics Analysis and Semi-empirical Molecular Orbital Calculation.

    PubMed

    Masuda, Yosuke; Yoshida, Tomoki; Yamaotsu, Noriyuki; Hirono, Shuichi

    2018-01-01

    We recently reported that the Gibbs free energy of hydrolytic water molecules (ΔG_wat) in acyl-trypsin intermediates calculated by hydration thermodynamics analysis could be a useful metric for estimating the catalytic rate constants (k_cat) of mechanism-based reversible covalent inhibitors. For thorough evaluation, the proposed method was tested with an increased number of covalent ligands that have no corresponding crystal structures. After modeling acyl-trypsin intermediate structures using flexible molecular superposition, ΔG_wat values were calculated according to the proposed method. The orbital energies of antibonding π* molecular orbitals (MOs) of carbonyl C=O in covalently modified catalytic serine (E_orb) were also calculated by semi-empirical MO calculations. Then, linear discriminant analysis (LDA) was performed to build a model that can discriminate covalent inhibitor candidates from substrate-like ligands using ΔG_wat and E_orb. The model was built using a training set (10 compounds) and then validated by a test set (4 compounds). As a result, the training set and test set ligands were perfectly discriminated by the model. Hydrolysis was slower when (1) the hydrolytic water molecule has lower ΔG_wat; (2) the covalent ligand presents higher E_orb (higher reaction barrier). Results also showed that the entropic term of the hydrolytic water molecule (-TΔS_wat) could be used for estimating k_cat and for covalent inhibitor optimization; when the rotational freedom of the hydrolytic water molecule is limited, the chance for favorable interaction with the electrophilic acyl group would also be limited. The method proposed in this study would be useful for screening and optimizing mechanism-based reversible covalent inhibitors.

  12. Nature of adsorption on TiC(111) investigated with density-functional calculations

    NASA Astrophysics Data System (ADS)

    Ruberto, Carlo; Lundqvist, Bengt I.

    2007-06-01

    Extensive density-functional calculations are performed for chemisorption of atoms in the first three periods (H, B, C, N, O, F, Al, Si, P, S, and Cl) on the polar TiC(111) surface. Calculations are also performed for O on TiC(001), for a full O(1×1) monolayer on TiC(111), as well as for bulk TiC and for the clean TiC(111) and (001) surfaces. Detailed results concerning atomic structures, energetics, and electronic structures are presented. For the bulk and the clean surfaces, previous results are confirmed. In addition, detailed results are given on the presence of C-C bonds in the bulk and at the surface, as well as on the presence of a Ti-based surface resonance (TiSR) at the Fermi level and of C-based surface resonances (CSRs) in the lower part of the surface upper valence band. For the adsorption, adsorption energies E_ads and relaxed geometries are presented, showing great variations characterized by pyramid-shaped E_ads trends within each period. An extraordinarily strong chemisorption is found for the O atom, 8.8 eV/adatom. On the basis of the calculated electronic structures, a concerted-coupling model for the chemisorption is proposed, in which two different types of adatom-substrate interactions work together to provide the obtained strong chemisorption: (i) adatom-TiSR and (ii) adatom-CSRs. This model is used to successfully describe the essential features of the calculated E_ads trends. The fundamental nature of this model, based on the Newns-Anderson model, should make it apt for general application to transition-metal carbides and nitrides and for predictive purposes in technological applications, such as cutting-tool multilayer coatings and MAX phases.

  13. Computation of Southern Pine Site Index Using a TI-59 Calculator

    Treesearch

    Robert M. Farrar

    1983-01-01

    A program is described that permits computation of site index in the field using a Texas Instruments model TI-59 programmable, hand-held, battery-powered calculator. Based on a series of equations developed by R.M. Farrar, Jr., for the site index curves in USDA Miscellaneous Publication 50, the program can accommodate any index base age, tree age, and height within...

  14. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W

    2008-01-01

    Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values; without them, real-time applications are limited in accuracy. Reanalysis of past conditions in the inner magnetosphere is used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data becomes feasible.
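    The surrogate idea can be illustrated in a few lines: fit a cheap approximator to sampled input/output pairs of an expensive model, then evaluate only the approximator. Here a polynomial stands in for the surrogate (the paper trains a neural network on TSK03-based L* values; everything below is illustrative):

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly L* field-line integration (illustrative)."""
    return np.sin(x) + 0.1 * x

# Train a cheap surrogate (a degree-5 polynomial here; the paper uses a
# neural network) on sampled input/output pairs of the expensive model
x_train = np.linspace(0.0, 3.0, 50)
coeffs = np.polyfit(x_train, expensive_model(x_train), deg=5)
surrogate = np.poly1d(coeffs)

# Evaluate the surrogate inside its training range and check the error
x_test = np.linspace(0.1, 2.9, 20)
max_err = float(np.max(np.abs(surrogate(x_test) - expensive_model(x_test))))
```

    The trade-off is exactly as described above: evaluation becomes nearly free, at the cost of a small, quantifiable approximation error inside the training domain.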

  15. Liquid-liquid equilibria for the ternary systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.; Kim, H.

    1995-03-01

    Sulfolane is widely used as a solvent for the extraction of aromatic hydrocarbons. Ternary phase equilibrium data are essential for the proper understanding of the solvent extraction process. Liquid-liquid equilibrium data for the systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene were determined at 298.15, 308.15, and 318.15 K. Tie line data were satisfactorily correlated by the Othmer and Tobias method. The experimental data were compared with the values calculated by the UNIQUAC and NRTL models, and good quantitative agreement was obtained with both. However, the calculated values based on the NRTL model were found to be better than those based on the UNIQUAC model.
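    For a binary pair, the NRTL activity coefficients take a closed form; a sketch with illustrative interaction parameters (not the fitted sulfolane/octane values):

```python
import math

def nrtl_gamma_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary mixture from
    the NRTL model; tau12/tau21 are dimensionless interaction parameters
    and alpha is the non-randomness parameter."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

g1, g2 = nrtl_gamma_binary(x1=0.3, tau12=1.5, tau21=0.8)
```

    In an LLE calculation, these activity coefficients enter the isoactivity condition x_i^I γ_i^I = x_i^II γ_i^II for each component in the two liquid phases.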

  16. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.

  17. Prediction of energy balance and utilization for solar electric cars

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    Solar irradiation and ambient temperature vary with region, season and time of day, which directly affects the performance of a solar-energy-based car system. In this paper, a model of a solar electric car operating in Xi'an is developed. Firstly, meteorological data are modelled to simulate the variation of solar irradiation and ambient temperature, and the temperature change of the solar cells is then calculated from a thermal equilibrium relation. Building on driving-resistance and solar-cell power generation models, the system is simulated under the varying irradiation conditions of a day. The daily power generation and the cruise mileage of the solar electric car can be predicted by calculating solar cell efficiency and power. This theoretical approach and the research results can be used for future solar electric car program design and optimization.

  18. Cross-validation of the osmotic pressure based on Pitzer model with air humidity osmometry at high concentration of ammonium sulfate solutions.

    PubMed

    Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong

    2014-01-01

    The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions and, as we recently reported, by air humidity osmometry over a much wider concentration range. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations using two one-sided tests (TOST) of equivalence with multiple-testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between freezing point osmometry measurements and Pitzer-model calculations at low concentration, air humidity osmometry is the only currently available osmometry applicable to high concentrations, and it serves as an economical addition to standard osmometry.
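The two one-sided tests (TOST) procedure mentioned above can be sketched as follows. This is a minimal normal-approximation version (z-tests rather than the t-tests a real analysis would use), and the paired differences are invented for illustration.

```python
from statistics import NormalDist, mean, stdev

def tost_equivalence(diffs, margin, alpha=0.05):
    """Two one-sided tests: is the mean difference within +/- margin?

    Normal approximation (z-tests) on paired differences; both one-sided
    null hypotheses (mean <= -margin, mean >= +margin) must be rejected.
    """
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / n ** 0.5
    p_lower = 1 - NormalDist().cdf((m + margin) / se)   # H0: mean <= -margin
    p_upper = NormalDist().cdf((m - margin) / se)       # H0: mean >= +margin
    return max(p_lower, p_upper) < alpha                # equivalent if both reject

# Hypothetical paired differences (osmometry minus Pitzer prediction, in %)
diffs = [0.4, -0.2, 0.1, 0.3, -0.1, 0.2, 0.0, 0.1, -0.3, 0.2]
print(tost_equivalence(diffs, margin=1.0))   # differences lie well inside +/-1%
```

Note the logic is inverted relative to an ordinary difference test: here rejecting both one-sided nulls is evidence *for* equivalence within the stated margin.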

  19. Temperature-dependent infrared optical properties of 3C-, 4H- and 6H-SiC

    NASA Astrophysics Data System (ADS)

    Tong, Zhen; Liu, Linhua; Li, Liangsheng; Bao, Hua

    2018-05-01

    The temperature-dependent optical properties of cubic (3C) and hexagonal (4H and 6H) silicon carbide are investigated in the infrared range of 2-16 μm by both experimental measurements and numerical simulations. The temperature in the experimental measurements reaches 593 K, while the numerical method can predict the optical properties at still higher temperatures. To capture the temperature effect, the temperature-dependent damping parameter in the Lorentz model is calculated with an anharmonic lattice dynamics method, in which the harmonic and anharmonic interatomic force constants are determined from first-principles calculations; the infrared phonon modes of silicon carbide are likewise obtained from first principles. The Lorentz model is thus parameterized without any experimental fitting data while the temperature effect is taken into account. We find that increasing temperature induces a small reduction of the reflectivity in the range of 10-13 μm. More importantly, our first-principles calculations can effectively predict the infrared optical properties at high temperatures, which are not easily obtained through experimental measurements.
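The temperature dependence enters the optical properties through the damping parameter of the Lorentz oscillator; a minimal sketch of how larger damping (loosely, higher temperature) reduces reststrahlen-band reflectivity is shown below. The oscillator parameters are illustrative only, not the fitted SiC values from the paper.

```python
import cmath

def reflectivity(omega, eps_inf, omega_to, omega_lo, gamma):
    """Normal-incidence reflectivity of a single Lorentz (TO-phonon) oscillator.

    eps(w) = eps_inf * (w_LO^2 - w^2 - i*gamma*w) / (w_TO^2 - w^2 - i*gamma*w)
    Frequencies in cm^-1; parameters below are illustrative, not fitted values.
    """
    eps = eps_inf * (omega_lo**2 - omega**2 - 1j * gamma * omega) / (
        omega_to**2 - omega**2 - 1j * gamma * omega)
    n = cmath.sqrt(eps)                      # complex refractive index
    return abs((n - 1) / (n + 1)) ** 2       # Fresnel reflectivity at normal incidence

# Evaluate inside an illustrative reststrahlen band (TO = 797, LO = 973 cm^-1)
for gamma in (5.0, 20.0):                    # larger damping ~ higher temperature
    print(gamma, round(reflectivity(900.0, 6.5, 797.0, 973.0, gamma), 3))
```

Between the TO and LO frequencies the dielectric function is nearly negative real, so the reflectivity sits close to one; increasing the damping adds absorption and pulls it down, which is the qualitative trend the paper reports in the 10-13 μm range.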

  20. National Stormwater Calculator - Version 1.1 (Model)

    EPA Science Inventory

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...

  1. On the use of an analytic source model for dose calculations in precision image-guided small animal radiotherapy.

    PubMed

    Granton, Patrick V; Verhaegen, Frank

    2013-05-21

    Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continues to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation and positioning, and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use either in investigating improvements in collimator design or in faster dose calculations. MC models using BEAMnrc were developed for circular and square field sizes from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSFMC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (ΦAM) and PSFAM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in a water and a mouse phantom differing only in the source used (PSFMC versus PSFAM) were found to agree within 7% and 4% for the smallest 1 and 2 mm collimators, respectively, and within 1% for all other field sizes based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions. Beam profile differences due to collimator alignment were found in both small and large collimators, which were sensitive to shifts of 1 mm with respect to the central axis.

  2. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded as degradation indicators, evolve nonlinearly and in multiple stages. It is therefore necessary to establish a nonlinear degradation model. In this research, the study was based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted-average (Gaussian) filter was used to improve the estimates of the model parameters. Then, to maximize the likelihood function for parameter estimation, a numerical optimization method, the simplex method, was applied iteratively. The modeling results show that the degradation mechanism changes during the degradation of a high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing QQ plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated by the multistage model.

  3. Recommended improvements to the DS02 dosimetry system's calculation of organ doses and their potential advantages for the Radiation Effects Research Foundation.

    PubMed

    Cullings, Harry M

    2012-03-01

    The Radiation Effects Research Foundation (RERF) uses a dosimetry system to calculate radiation doses received by the Japanese atomic bomb survivors based on their reported location and shielding at the time of exposure. The current system, DS02, completed in 2003, calculates detailed doses to 15 particular organs of the body from neutrons and gamma rays, using new source terms and transport calculations as well as some other improvements in the calculation of terrain and structural shielding, but continues to use methods from an older system, DS86, to account for body self-shielding. Although recent developments in models of the human body from medical imaging, along with contemporary computer speed and software, allow for improvement of the calculated organ doses, before undertaking changes to the organ dose calculations, it is important to evaluate the improvements that can be made and their potential contribution to RERF's research. The analysis provided here suggests that the most important improvements can be made by providing calculations for more organs or tissues and by providing a larger series of age- and sex-specific models of the human body from birth to adulthood, as well as fetal models.

  4. Enthalpy-based equation of state for highly porous materials employing modified soft sphere fluid model

    NASA Astrophysics Data System (ADS)

    Nayak, Bishnupriya; Menon, S. V. G.

    2018-01-01

    An enthalpy-based equation of state, built on a modified soft-sphere model for the fluid phase that includes vaporization and ionization effects, is formulated for highly porous materials. Earlier developments and applications of the enthalpy-based approach had not accounted for the fact that shocked states of materials with high porosity (e.g., porosity greater than two for Cu) lie in the expanded fluid region. We supplement the well-known soft-sphere model with a generalized Lennard-Jones formula for the zero-temperature isotherm, with parameters determined from the cohesive energy, specific volume and bulk modulus of the solid at normal conditions. Specific heats at constant pressure, ionic and electronic enthalpy parameters and thermal excitation effects are calculated using the modified approach and used in the enthalpy-based equation of state. We also incorporate the energy loss from the shock due to expansion of the shocked material when calculating the porous Hugoniot. Results obtained for Cu, even at initial porosities up to ten, show good agreement with experimental data.

  5. A theoretical trombone

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2014-09-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.

  6. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    NASA Technical Reports Server (NTRS)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.

  7. Control of electromagnetic stirring by power focusing in large induction crucible furnaces

    NASA Astrophysics Data System (ADS)

    Frizen, V. E.; Sarapulov, F. N.

    2011-12-01

    An approach is proposed for the calculation of the operating conditions of an induction crucible furnace at the final stage of melting with the power focused in various regions of melted metal. The calculation is performed using a model based on the method of detailed magnetic equivalent circuits. The combination of the furnace and a thyristor frequency converter is taken into account in modeling.

  8. Calculation of solar wind flows about terrestrial planets

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1982-01-01

    A computational model was developed for the determination of the plasma and magnetic field properties of the global interaction of the solar wind with terrestrial planetary magneto/ionospheres. The theoretical method is based on an established single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of supersonic, super Alfvenic solar wind flow past terrestrial planets. A summary is provided of the important research results.

  9. Evaluation of atmospheric nitrogen deposition model performance in the context of U.S. critical load assessments

    NASA Astrophysics Data System (ADS)

    Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc

    2017-02-01

    Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified wet inorganic N deposition performance of several widely-used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias, or that spatially interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, the results demonstrate that expressing bias as a percentage of critical loads, at a spatial scale consistent with the calculations, may be a useful exercise for those performing them. 
It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.

  10. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposition coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
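The finite-size pencil beam idea can be illustrated with a toy CPU version: each beamlet contributes an exponentially attenuated depth dose spread laterally by a kernel, and the total dose is the superposition over beamlets. The constants and the Gaussian kernel below are assumptions for illustration, not the authors' FSPB model, and there is no GPU parallelism here.

```python
import math

def fspb_dose(grid_n, beamlets, mu=0.005, sigma=2.0):
    """Toy finite-size pencil-beam superposition on a 2-D water grid (mm voxels).

    Each (x0, weight) beamlet deposits weight * exp(-mu * depth), spread
    laterally by a Gaussian of width sigma; all constants are illustrative.
    """
    dose = [[0.0] * grid_n for _ in range(grid_n)]   # dose[depth][lateral]
    for (x0, weight) in beamlets:
        for depth in range(grid_n):
            d_central = weight * math.exp(-mu * depth)       # attenuated depth dose
            for x in range(grid_n):
                lateral = math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
                dose[depth][x] += d_central * lateral        # superposition
    return dose

# Three unit-weight beamlets centered on a 40x40 mm grid
dose = fspb_dose(40, [(18, 1.0), (20, 1.0), (22, 1.0)])
print(round(dose[0][20], 3), round(dose[30][20], 3))   # dose falls off with depth
```

The inner loops over beamlets and voxels are independent of one another, which is exactly the data-parallel structure that makes this algorithm map well onto a GPU.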

  11. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10 GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1 K or less, root-mean-square errors on the order of 5 K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.

  12. Electronic coupling between Watson-Crick pairs for hole transfer and transport in desoxyribonucleic acid

    NASA Astrophysics Data System (ADS)

    Voityuk, Alexander A.; Jortner, Joshua; Bixon, M.; Rösch, Notker

    2001-04-01

    Electronic matrix elements for hole transfer between Watson-Crick pairs in desoxyribonucleic acid (DNA) of regular structure, calculated at the Hartree-Fock level, are compared with the corresponding intrastrand and interstrand matrix elements estimated for models comprised of just two nucleobases. The hole transfer matrix element of the GAG trimer duplex is calculated to be larger than that of the GTG duplex. "Through-space" interaction between two guanines in the trimer duplexes is comparable with the coupling through an intervening Watson-Crick pair. The gross features of bridge specificity and directional asymmetry of the electronic matrix elements for hole transfer between purine nucleobases in superstructures of dimer and trimer duplexes have been discussed on the basis of the quantum chemical calculations. These results have also been analyzed with a semiempirical superexchange model for the electronic coupling in donor-acceptor (nucleobase) DNA duplexes, which incorporates adjacent base-base electronic couplings and empirical energy gaps corrected for solvation effects; this perturbation-theory-based model interpretation allows a theoretical evaluation of experimental observables, i.e., the absolute values of donor-acceptor electronic couplings, their distance dependence, and the reduction factors for the intrastrand hole hopping or trapping rates upon increasing the size of the nucleobase bridge. The quantum chemical results point towards some limitations of the perturbation-theory-based modeling.

  13. 42 CFR § 512.307 - Subsequent calculations.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.307... the initial NPRA, using claims data and non-claims-based payment data available at that time, to account for final claims run-out, final changes in non-claims-based payment data, and any additional...

  14. Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80

    NASA Astrophysics Data System (ADS)

    Pruet, Jason; Fuller, George M.

    2003-11-01

    We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
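The partition-function subtlety the authors discuss stems from the Boltzmann-weighted sum over nuclear levels, where truncating the sum too early (or using unconverged level sets) biases high-temperature rates. A minimal sketch, with hypothetical level energies and spins, is:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def partition_function(levels, temp_k):
    """Nuclear partition function G(T) = sum_i (2J_i + 1) exp(-E_i / kT).

    `levels` is a list of (E_i in MeV, J_i) pairs; the levels below are
    hypothetical, chosen only to show the temperature sensitivity.
    """
    kt_mev = K_B * temp_k * 1e-6            # k_B * T converted from eV to MeV
    return sum((2 * j + 1) * math.exp(-e / kt_mev) for e, j in levels)

# Hypothetical low-lying scheme: J=0 ground state plus two excited states
levels = [(0.0, 0.0), (0.8, 2.0), (1.5, 4.0)]
for temp in (1e9, 5e9):                     # typical presupernova temperatures (K)
    print(temp, round(partition_function(levels, temp), 3))
```

At 1e9 K the excited-state weights are negligible and G is close to the ground-state degeneracy, while at 5e9 K the excited states contribute substantially; β-decay rates computed with an incomplete or unconverged level set inherit exactly this kind of error.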

  15. Discontinuous model with semi analytical sheath interface for radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Miyashita, Masaru

    2016-09-01

    Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source with an interior antenna. The plasma source is expected to deliver high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated through the sheath voltage remains problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous: an electron fluid equation in the plasma is connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution from the calculated sheath voltage waveform together with a sputtering yield model and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the tendency of the experimental results.

  16. Evaluation of potential toxicity from co-exposure to three CNS depressants (toluene, ethylbenzene, and xylene) under resting and working conditions using PBPK modeling.

    PubMed

    Dennison, James E; Bigelow, Philip L; Mumtaz, Moiz M; Andersen, Melvin E; Dobrev, Ivan D; Yang, Raymond S H

    2005-03-01

    Under OSHA and American Conference of Governmental Industrial Hygienists (ACGIH) guidelines, the mixture formula (unity calculation) provides a method for evaluating exposures to mixtures of chemicals that cause similar toxicities. According to the formula, if exposures are reduced in proportion to the number of chemicals and their respective exposure limits, the overall exposure is acceptable. This approach assumes that responses are additive, which is not the case when pharmacokinetic interactions occur. To determine the validity of the additivity assumption, we performed unity calculations for a variety of exposures to toluene, ethylbenzene, and/or xylene using the concentration of each chemical in blood in the calculation instead of the inhaled concentration. The blood concentrations were predicted using a validated physiologically based pharmacokinetic (PBPK) model to allow exploration of a variety of exposure scenarios. In addition, the Occupational Safety and Health Administration and ACGIH occupational exposure limits were largely based on studies of humans or animals that were resting during exposure. The PBPK model was also used to determine the increased concentration of chemicals in the blood when employees were exercising or performing manual work. At rest, a modest overexposure occurs due to pharmacokinetic interactions when exposure is equal to levels where a unity calculation is 1.0 based on threshold limit values (TLVs). Under workload, however, internal exposure was 87% higher than provided by the TLVs. When exposures were controlled by a unity calculation based on permissible exposure limits (PELs), internal exposure was 2.9 and 4.6 times the exposures at the TLVs at rest and workload, respectively. If exposure was equal to PELs outright, internal exposure was 12.5 and 16 times the exposure at the TLVs at rest and workload, respectively. 
These analyses indicate the importance of (1) selecting appropriate exposure limits, (2) performing unity calculations, and (3) considering the effect of work load on internal doses, and they illustrate the utility of PBPK modeling in occupational health risk assessment.
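The unity calculation itself is a simple sum of concentration-to-limit ratios; a minimal sketch follows. The air concentrations are invented, and the limits are illustrative values close to the ACGIH TLV-TWAs for the three solvents; this is the external-concentration form, not the blood-concentration variant used in the paper.

```python
def unity(exposures, limits):
    """ACGIH/OSHA mixture formula for similarly acting agents.

    Returns sum(C_i / limit_i); a value <= 1.0 indicates an acceptable
    combined exposure under the additivity assumption.
    """
    return sum(c / lim for c, lim in zip(exposures, limits))

# Invented air concentrations (ppm) against illustrative TLV-like limits for
# toluene (20 ppm), ethylbenzene (20 ppm) and xylene (100 ppm)
u = unity([8.0, 5.0, 30.0], [20.0, 20.0, 100.0])
print(round(u, 2))   # 0.95 -> just under unity, nominally acceptable
```

The paper's point is precisely that a result just under 1.0 by this external-concentration formula can still correspond to an internal overexposure once pharmacokinetic interactions and workload are accounted for.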

  17. A probabilistic maintenance model for diesel engines

    NASA Astrophysics Data System (ADS)

    Pathirana, Shan; Abeygunawardane, Saranga Kumudu

    2018-02-01

    In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on the practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines and experts' views. Reliability indices and costs were calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.

  18. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction; this evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
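A crude flavor of an interval predictor can be sketched as a least-squares line plus the tightest constant residual band that covers every observation. The real IPMs in the paper optimize the interval-valued model directly and handle outliers and reliability bounds; this is only a simplified analogue with made-up data.

```python
def interval_predictor(xs, ys):
    """Minimal interval predictor: least-squares line plus the tightest
    constant band [lo, hi] on the residuals that covers all observations.
    A genuine IPM would optimize parameters and spread jointly.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    lo, hi = min(residuals), max(residuals)

    def predict(x):                       # returns an interval, not a point
        center = slope * x + intercept
        return (center + lo, center + hi)

    return predict

xs = [0.0, 1.0, 2.0, 3.0, 4.0]            # made-up observations
ys = [0.1, 1.3, 1.9, 3.2, 3.9]
predict = interval_predictor(xs, ys)
low, high = predict(2.5)
print(low, high)    # an interval guaranteed to cover every training residual
```

By construction every training point falls inside its predicted interval; the paper's contribution is bounding the probability that *future* observations do too, under convexity and mild stochastic assumptions.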

  19. Development of a near-wall Reynolds-stress closure based on the SSG model for the pressure strain

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Aksoy, H.; Sommer, T. P.; Yuan, S. P.

    1994-01-01

    In this research, a near-wall second-order closure based on the Speziale et al. (1991) (SSG) model for the pressure-strain term is proposed. Unlike the LRR model, the SSG model is quasi-nonlinear and yields better results when applied to rotating homogeneous turbulent flows. An asymptotic analysis near the wall is applied to both the exact and modeled equations so that appropriate near-wall corrections to the SSG model and the modeled dissipation-rate equation can be derived to satisfy the physical wall boundary conditions as well as the asymptotic near-wall behavior of the exact equations. Two additional model constants are introduced, and they are determined by calibrating against one set of near-wall channel flow data. Once determined, their values are found to remain constant irrespective of the type of flow examined. The resultant model is used to calculate simple turbulent flows, near-separating turbulent flows, complex turbulent flows and compressible turbulent flows with a freestream Mach number as high as 10. In all the flow cases investigated, the calculated results are in good agreement with data. This new near-wall model is less ad hoc, physically and mathematically more sound, and eliminates the empiricism introduced by Zhang. Therefore, it is quite general, as demonstrated by the good agreement achieved with measurements covering a wide range of Reynolds numbers and Mach numbers.

  20. Road traffic noise prediction model for heterogeneous traffic based on ASJ-RTN Model 2008 with consideration of horn

    NASA Astrophysics Data System (ADS)

    Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.

    2018-04-01

    This research aimed to predict the noise produced by traffic on the road network of Makassar City using ASJ-RTN Model 2008, with the horn sound taken into account. Observations were taken at 37 roadside survey points, over the periods 06.00-18.00 and 06.00-21.00; the vehicle classes studied were motorcycles (MC), light vehicles (LV) and heavy vehicles (HV). The observed data were traffic volume, vehicle speed, the number of horn soundings, and traffic noise measured with a Tenmars TM-103 sound level meter. The results indicate that the prediction model accounting for horn sounds yields an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87 against the measurements. The ASJ-RTN Model 2008 prediction model with horn sounds included is therefore sufficiently accurate for predicting noise levels.
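
    The two scores the study reports, Pearson's correlation and RMSE between measured and predicted levels, can be sketched as follows. The noise values below are illustrative placeholders, not the paper's data.

```python
import numpy as np

def pearson_r(obs, pred):
    """Pearson correlation coefficient between observations and predictions."""
    return float(np.corrcoef(obs, pred)[0, 1])

def rmse(obs, pred):
    """Root-mean-square error between observations and predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

measured  = [76.0, 78.5, 80.2, 79.1]   # dB, hypothetical
predicted = [75.5, 78.9, 80.0, 79.6]   # dB, hypothetical
print(pearson_r(measured, predicted), rmse(measured, predicted))
```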

  1. Ecological Footprint and Ecosystem Services Models: A Comparative Analysis of Environmental Carrying Capacity Calculation Approach in Indonesia

    NASA Astrophysics Data System (ADS)

    Subekti, R. M.; Suroso, D. S. A.

    2018-05-01

    Environmental carrying capacity can be calculated by various approaches, and the selection of an appropriate approach determines the success of determining and applying it. This study compares the ecological footprint approach and the ecosystem services approach for calculating environmental carrying capacity, describing two relatively new models that require further explanation before being used for this purpose. In applying them, attention needs to be paid to their respective strengths and weaknesses. Conceptually, the ecological footprint model is more complete than the ecosystem services model, because it describes both the supply and the demand of resources, including the supportive and assimilative capacity of the environment, and yields a measurable output through a resource consumption threshold. However, this model also has weaknesses, such as not considering technological change or resources beneath the earth's surface, and it requires inter-regional trade data for calculations at the provincial and district levels. The ecosystem services model has its own advantages: it is in line with strategic environmental assessment (SEA) of ecosystem services, it uses spatial analysis based on ecoregions, and a draft regulation on calculation guidelines has been formulated by the government. Its weaknesses are that it only describes the supply of resources, that expert assessment of the different types of ecosystem services tends to be subjective, and that the output of the calculation lacks a resource consumption threshold.

  2. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structural detail, several finite element models of the welded detail were established in ANSYS, based on the surface-extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling of the weld toe, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the smaller the distance from the weld toe, the larger the differences in normal stress, in both the thickness and surface directions, among the models. At distances greater than 0.5t from the toe, the surface-direction normal stresses of solid models, shell models with welds, and shell models without welds converge, so it is recommended that the extrapolation points for this welded detail be selected beyond 0.5t. The calculations also show that shell models have good mesh stability and that the extrapolated hot spot stress of solid models is smaller than that of shell models; it is therefore suggested that formula 2 and the SOLID45 element be used for the hot spot stress extrapolation of this detail. For each finite element model, regardless of shell modeling method, the results calculated by formula 2 are smaller than those of the other two methods, and shell models with welds give the largest results. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the main-plate thickness increases, with a variation range within 7.5%.
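
    Surface extrapolation of hot spot stress reduces to evaluating the stress at two reference distances from the weld toe and extrapolating linearly back to the toe. The sketch below is the generic two-point linear scheme, not necessarily the paper's "formula 2" (which the abstract does not define); the distances and the stress profile are illustrative.

```python
def hot_spot_stress(sigma1, sigma2, d1, d2):
    """Linearly extrapolate stresses sigma1 at distance d1 and sigma2 at
    distance d2 (d2 > d1) back to the weld toe at distance 0."""
    return sigma1 + (sigma1 - sigma2) * d1 / (d2 - d1)

# For a stress field that is exactly linear near the toe, the
# extrapolation recovers the toe value. Hypothetical profile in MPa,
# with distances in multiples of the plate thickness t (chosen beyond
# 0.5t, as the paper recommends):
sigma = lambda d: 100.0 - 20.0 * d
print(hot_spot_stress(sigma(0.5), sigma(1.5), 0.5, 1.5))  # → 100.0
```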

  3. Czochralski crystal growth: Modeling study

    NASA Technical Reports Server (NTRS)

    Dudukovic, M. P.; Ramachandran, P. A.; Srivastava, R. K.; Dorsey, D.

    1986-01-01

    The modeling study of Czochralski (Cz) crystal growth is reported. The approach was to relate crystal quality to operating conditions and geometric variables in a quantitative manner, using models based on first principles. The finite element method is used for all calculations.

  4. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multi-zone Reaction Kinetics: Model Derivation and Validation

    NASA Astrophysics Data System (ADS)

    Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun

    2018-04-01

    A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered in the calculation of overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as functions of the process variables. A micro- and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets passing through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion; mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic change of interfacial area were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model, and the effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter, and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content of the slag and the kinetics of Mn and P in a BOF process.

  5. Long-term strength determination for cooled blades made of monocrystalline superalloys

    NASA Astrophysics Data System (ADS)

    Getsov, L. B.; Semenov, A. S.; Besschetnov, V. A.; Grishchenko, A. I.; Semenov, S. G.

    2017-04-01

    For the manufacture of blades for modern gas-turbine installations, monocrystalline alloys are used. Traditional methods for calculating the stress-strain state and safety factors, developed and verified for polycrystalline materials, need to be adjusted for these alloys. This paper presents methodological principles for the finite-element determination of the long-term static strength of cooled monocrystalline blades employed in gas-turbine installations, based on two different models (phenomenological and micromechanical) of the inelastic deformation of monocrystalline superalloys. The distribution of Schmid factors in the spherical triangle has been analyzed for primary and secondary octahedral and cubic slip systems. Calculations are performed using Larson-Miller parametric dependences that take into account the crystallographic orientation of the material, and a procedure for determining the anisotropy coefficients of long-term strength from data for different orientations is described. A comparative analysis of finite-element calculations made with the phenomenological and micromechanical (crystallographic) creep models shows that the locations of the most heavily loaded sections of such a blade coincide between the two models. The micromechanical deformation model yields the most conservative estimate of the long-term strength of turbine blades made of monocrystalline alloys, while calculations using models for materials with isotropic properties can produce considerable errors in determining blade durability. 
    The possibility of using 1D, 2D, and 3D models of monocrystalline turbine blades to determine their durability parameters is also considered.
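
    The Larson-Miller parametric dependence used in these long-term strength calculations couples temperature and rupture time as LMP = T (C + log10 t), so a known parameter value (from a stress-LMP master curve) yields the rupture time at any temperature. The constant C = 20 and the numbers below are the textbook convention and illustrative values, not the paper's calibrated data.

```python
def rupture_time_hours(lmp, temp_k, c=20.0):
    """Invert the Larson-Miller relation LMP = T*(C + log10 t) for the
    rupture time t (hours) at absolute temperature temp_k (K)."""
    return 10 ** (lmp / temp_k - c)

print(rupture_time_hours(23000.0, 1000.0))  # → 1000.0
```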

  6. Variability of pCO2 in surface waters and development of prediction model.

    PubMed

    Chung, Sewoong; Park, Hyungseok; Yoo, Jisu

    2018-05-01

    Inland waters are substantial sources of atmospheric carbon, but relevant data are rare in Asian monsoon regions, including Korea. Emissions of CO2 to the atmosphere depend largely on the partial pressure of CO2 (pCO2) in water; however, measured pCO2 data are scarce and calculated pCO2 can show large uncertainty. This study had three objectives: 1) to examine the spatial variability of pCO2 in diverse surface water systems in Korea; 2) to compare pCO2 calculated using pH-total alkalinity (Alk) and pH-dissolved inorganic carbon (DIC) with pCO2 measured by an in situ submersible nondispersive infrared detector; and 3) to characterize the major environmental variables determining the variation of pCO2, based on physical, chemical, and biological data collected concomitantly. Of 30 samples, 80% were found supersaturated in CO2 with respect to the overlying atmosphere. pCO2 calculated using pH-Alk and pH-DIC showed weak prediction capability and large variations with respect to measured pCO2. Error analysis indicated that calculated pCO2 is highly sensitive to the accuracy of pH measurements, particularly at low pH. Stepwise multiple linear regression (MLR) and random forest (RF) techniques were implemented to develop the most parsimonious model based on 10 potential predictor variables (pH, Alk, DIC, Uw, Cond, Turb, COD, DOC, TOC, Chla) by optimizing model performance. The RF model showed better performance than the MLR model, and the most parsimonious RF model (pH, Turb, Uw, Chla) improved pCO2 prediction capability considerably compared with the simple calculation approach, reducing the RMSE from 527-544 μatm to 105 μatm at the study sites. Copyright © 2017 Elsevier B.V. All rights reserved.
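
    The pH sensitivity the error analysis points to follows directly from carbonate equilibrium: from pH and DIC, dissolved CO2 is DIC / (1 + K1/[H+] + K1K2/[H+]²) and pCO2 = [CO2]/KH. The sketch below uses rounded freshwater equilibrium constants at 25 °C assumed for illustration, not values from the paper, and shows that a 0.1-unit pH error shifts the calculated pCO2 by roughly 20%.

```python
# Rounded freshwater carbonate-system constants at 25 °C (assumed):
K1, K2 = 10 ** -6.3, 10 ** -10.3   # carbonic acid dissociation constants
KH = 10 ** -1.5                    # Henry's law constant, mol L^-1 atm^-1

def pco2_uatm(ph, dic_mol_per_l):
    """pCO2 (micro-atm) calculated from pH and DIC via carbonate speciation."""
    h = 10 ** -ph
    co2 = dic_mol_per_l / (1 + K1 / h + K1 * K2 / h ** 2)
    return co2 / KH * 1e6

# Same DIC, pH shifted down by 0.1: the calculated pCO2 jumps by ~20%.
print(pco2_uatm(6.9, 1e-3) / pco2_uatm(7.0, 1e-3))
```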

  7. The Application of COMSOL Multiphysics Package on the Modelling of Complex 3-D Lithospheric Electrical Resistivity Structures - A Case Study from the Proterozoic Orogenic belt within the North China Craton

    NASA Astrophysics Data System (ADS)

    Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.

    2017-12-01

    At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured meshes cannot be well adapted to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate for complex and irregular 3-D regions and places lower requirements on function smoothness; however, the complexity of mesh generation and the limits of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver, and multiphysics full-coupling simulation package. It achieves highly accurate numerical simulations with high computational performance and outstanding bi-directional multi-field coupling capability, and its AC/DC and RF modules can be used to calculate the electromagnetic responses of complex geological structures; with adaptive unstructured grids, the calculation is much faster. To improve the discretization of the computing area, we combine Matlab with COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions, and phase tensors. The reliability of this procedure is verified by 1-D, 2-D, 3-D, and anisotropic forward modeling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there; the reliability of the model is further verified by induction vectors and phase tensors. Our model shows more detail and better resolution than the previously published 3-D model based on the finite difference method. 
    In conclusion, the COMSOL Multiphysics package is suitable for modeling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds and could be a good complement to existing finite-difference inversion algorithms.

  8. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the statistical bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase-space description extended with quark combinatorial factors and the possibility of more than one fireball forming. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model describes proton-antiproton annihilation at rest; in this case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons, using existing data to fit the relevant model parameters.
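
    A minimal ingredient of any such phase-space weight is the two-body final-state momentum in the centre of mass. The sketch below is the standard kinematic formula only, not the full statistical-bootstrap weight with quark combinatorics; masses and √s are in GeV.

```python
import math

def cm_momentum(sqrt_s, m1, m2):
    """Momentum p* of either particle in a two-body final state of
    invariant mass sqrt_s (standard Källén kinematics)."""
    s = sqrt_s ** 2
    return math.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2 * sqrt_s)

# Two equal 0.5 GeV particles from a 2 GeV fireball:
print(cm_momentum(2.0, 0.5, 0.5))
```

    The available momentum vanishes at threshold (√s = m1 + m2), which is what suppresses heavier final states in the statistical decay.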

  9. Areas of Weakly Anomalous to Anomalous Surface Temperature in Chaffee County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1σ and 2σ above the mean, as opposed to the greater-than-2σ temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Chaffee County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).
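
    The thresholding step shared by these county layers, subtracting the insolation-driven temperature from the ASTER temperature and flagging pixels by how many standard deviations the residual sits above its mean, can be sketched as follows. The arrays are synthetic stand-ins for the county rasters.

```python
import numpy as np

def classify_residual(t_aster, t_solar):
    """Flag pixels whose residual temperature sits 1-2 sigma (weak) or
    more than 2 sigma (anomalous) above the mean residual."""
    residual = np.asarray(t_aster, float) - np.asarray(t_solar, float)
    mu, sigma = residual.mean(), residual.std()
    weak = (residual > mu + sigma) & (residual <= mu + 2 * sigma)
    strong = residual > mu + 2 * sigma
    return weak, strong
```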

  10. Areas of Weakly Anomalous to Anomalous Surface Temperature in Garfield County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1σ and 2σ above the mean, as opposed to the greater-than-2σ temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Garfield County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature between 1σ and 2σ were considered ASTER-modeled warm surface exposures (thermal anomalies).

  11. Areas of Weakly Anomalous to Anomalous Surface Temperature in Routt County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1σ and 2σ above the mean, as opposed to the greater-than-2σ temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature between 1σ and 2σ were considered ASTER-modeled warm surface exposures (thermal anomalies).

  12. Areas of Weakly Anomalous to Anomalous Surface Temperature in Dolores County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1σ and 2σ above the mean, as opposed to the greater-than-2σ temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  13. Areas of Weakly Anomalous to Anomalous Surface Temperature in Archuleta County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1σ and 2σ above the mean, as opposed to the greater-than-2σ temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperature between 1σ and 2σ were considered ASTER-modeled warm surface exposures (thermal anomalies).

  14. Modeling the Secondary Drying Stage of Freeze Drying: Development and Validation of an Excel-Based Model.

    PubMed

    Sahni, Ekneet K; Pikal, Michael J

    2017-03-01

    Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, models of secondary drying have been confined to highly complex formulations. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer: drying time is divided into a large number of time steps, within each of which steady-state conditions prevail. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between the calculation and a calibration drying experiment. This tool allows the user to input the parameters specific to the product, process, container, and equipment. Temporal variations in average moisture content and product temperature are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
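
    The time-stepping idea, many short steps in each of which drying is treated as a steady-state process, can be sketched with a drastically reduced stand-in: a first-order approach of the moisture content toward its equilibrium value. The rate constant k would be fitted to a calibration run (as the paper does with Excel's Solver); all numbers here are illustrative, not the paper's coupled heat- and mass-transfer model.

```python
import math

def secondary_drying(w0, w_eq, k, dt, n_steps):
    """Return the moisture content (g water / g solid) after each time
    step, treating each step as a steady-state first-order approach to
    the equilibrium content w_eq."""
    w, history = w0, []
    for _ in range(n_steps):
        w = w_eq + (w - w_eq) * math.exp(-k * dt)  # exact within-step solution
        history.append(w)
    return history

profile = secondary_drying(w0=0.05, w_eq=0.005, k=0.5, dt=0.25, n_steps=40)
```

    Fitting k would then amount to minimizing the deviation between such a profile and measured water contents, exactly the least-squares role Solver plays in the spreadsheet.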

  15. A Simulation and Modeling Framework for Space Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.

  16. A MATHEMATICAL MODEL FOR CALCULATING ELECTRICAL CONDITIONS IN WIRE-DUCT ELECTROSTATIC PRECIPITATION DEVICES

    EPA Science Inventory

    The article reports the development of a new method of calculating electrical conditions in wire-duct electrostatic precipitation devices. The method, based on a numerical solution of the governing differential equations under a suitable choice of boundary conditions, accounts for ...

  17. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple, easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions as single integrals over the power spectrum of cosmological perturbations, avoiding the need for additional integrations. The high accuracy of these approximations is demonstrated here for CDM-theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.

  18. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load to the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until the response compares well with the measured response PSD at a specific time point. The final calculated loading can be used to compare different nozzle profiles during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to the timely and accurate characterization of test fixtures used for modal testing. A viewgraph presentation on the model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  19. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). The G3, SCS-MP2 and M11-L methods coupled with the SMD and SM8 solvation models perform well for alkanolamines, with mean unsigned errors below 0.20 pKa units in all cases. Extending this approach to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computed pKa values with the computationally low-cost SM8/M11-L density functional approach. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
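
    Whatever the electronic-structure method, the final step of such calculations is the standard thermodynamic relation pKa = ΔG_aq / (RT ln 10), with ΔG_aq the aqueous deprotonation free energy. The sketch below uses an illustrative ΔG in kcal/mol, not a value from the paper.

```python
import math

R_KCAL = 1.98720e-3  # gas constant, kcal mol^-1 K^-1

def pka_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Convert an aqueous deprotonation free energy to a pKa."""
    return dg_kcal_per_mol / (R_KCAL * temp_k * math.log(10))

# At 298.15 K, RT ln 10 is about 1.364 kcal/mol, so a 13.64 kcal/mol
# deprotonation free energy corresponds to a pKa near 10:
print(pka_from_dg(13.64))
```

    The slope also explains the paper's accuracy target: a 0.20 pKa-unit error corresponds to only ~0.27 kcal/mol in ΔG.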

  20. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostou, T; Papadimitroulas, P; Kagadis, GC

    2014-06-15

    Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied to two whole-body mouse models of differing organ size to determine the impact of organ mass on absorbed doses and S-values, evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine the dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, of 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physics model in Geant4. Comparisons of the dosimetric calculations on the two phantoms are presented for each radiopharmaceutical. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models differed by 0.69% in brain mass, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope, with the whole mouse body as the source organ. Differences in the S-factors were observed in the 6.0-30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate doses per organ and the most appropriate S-values are derived for specific preclinical studies. 
    The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in total mass), so accurate definition of the organ mass is a crucial parameter for calculating self-absorbed S-values. Our goal is to extend the study toward accurate estimations in small-animal imaging, since the anatomy of the organs is known to vary widely.
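
    The mass sensitivity reported here follows from the MIRD-style definition underlying S-values: energy emitted per decay, times the absorbed fraction, divided by the target mass, so S scales as 1/m for a fixed absorbed fraction. The emission spectrum, absorbed fraction and mass below are placeholders for illustration, not the simulated MOBY values.

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def s_value_mgy_per_mbq_s(emissions, absorbed_fraction, target_mass_kg):
    """MIRD-style S-value in mGy/(MBq*s).

    emissions: list of (yield per decay, energy in MeV) pairs."""
    e_per_decay_j = sum(y * e for y, e in emissions) * MEV_TO_J
    gy_per_decay = e_per_decay_j * absorbed_fraction / target_mass_kg
    return gy_per_decay * 1e6 * 1e3  # 1e6 decays/s per MBq, Gy -> mGy

# Single 0.1 MeV emission per decay, half absorbed, 1 g target organ:
print(s_value_mgy_per_mbq_s([(1.0, 0.1)], 0.5, 1e-3))
```

    Halving the organ mass doubles the self-absorbed S-value, which is why an 18% error in thyroid or marrow mass propagates so strongly into the dosimetry.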

  1. Prediction of prostate cancer in unscreened men: external validation of a risk calculator.

    PubMed

    van Vugt, Heidi A; Roobol, Monique J; Kranse, Ries; Määttänen, Liisa; Finne, Patrik; Hugosson, Jonas; Bangma, Chris H; Schröder, Fritz H; Steyerberg, Ewout W

    2011-04-01

    Prediction models need external validation to assess their value beyond the setting where the model was derived. To assess the external validity of the European Randomized study of Screening for Prostate Cancer (ERSPC) risk calculator (www.prostatecancer-riskcalculator.com) for the probability of having a positive prostate biopsy (P(posb)). The ERSPC risk calculator was based on data of the initial screening round of the ERSPC section Rotterdam and validated in 1825 and 531 men biopsied at the initial screening round in the Finnish and Swedish sections of the ERSPC, respectively. P(posb) was calculated using serum prostate-specific antigen (PSA), outcome of digital rectal examination (DRE), transrectal ultrasound, and ultrasound-assessed prostate volume. The external validity was assessed for the presence of cancer at biopsy by calibration (agreement between observed and predicted outcomes), discrimination (separation of those with and without cancer), and decision curves (for clinical usefulness). Prostate cancer was detected in 469 men (26%) of the Finnish cohort and in 124 men (23%) of the Swedish cohort. Systematic miscalibration was present in both cohorts (mean predicted probability 34% versus 26% observed, and 29% versus 23% observed, both p<0.001). The areas under the curves were 0.76 and 0.78, and substantially lower for the model with PSA only (0.64 and 0.68, respectively). The model proved clinically useful for any decision threshold compared with a model with PSA only, PSA and DRE, or biopsying all men. A limitation is that the model is based on sextant biopsy results. The ERSPC risk calculator discriminated well between those with and without prostate cancer among initially screened men, but overestimated the risk of a positive biopsy. Further research is necessary to assess the performance and applicability of the ERSPC risk calculator when a clinical setting is considered rather than a screening setting. Copyright © 2010 Elsevier Ltd. 
All rights reserved.
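    The validation workflow above (predicted probabilities from a logistic model, then calibration-in-the-large and discrimination) can be sketched as follows. The coefficients in `predicted_risk` are hypothetical placeholders, not the published ERSPC model, and the four-patient cohort is purely illustrative.

```python
import math

# Hedged sketch of validating a logistic-regression risk calculator of the
# ERSPC type. Coefficients and cohort data are hypothetical placeholders.

def predicted_risk(log_psa, dre_abnormal, log_volume):
    # hypothetical linear predictor: intercept + PSA + DRE + prostate volume
    lp = -1.6 + 1.2 * log_psa + 1.0 * dre_abnormal - 0.9 * log_volume
    return 1.0 / (1.0 + math.exp(-lp))

def auc(probs, outcomes):
    """Concordance (c) statistic: probability a case outranks a non-case."""
    cases = [p for p, y in zip(probs, outcomes) if y == 1]
    controls = [p for p, y in zip(probs, outcomes) if y == 0]
    pairs = [(c > n) + 0.5 * (c == n) for c in cases for n in controls]
    return sum(pairs) / len(pairs)

# Tiny illustrative cohort: (log PSA, DRE abnormal, log volume, biopsy outcome)
cohort = [(0.5, 0, 3.9, 0), (1.4, 1, 3.4, 1), (0.9, 0, 3.7, 0), (1.8, 1, 3.2, 1)]
probs = [predicted_risk(x, d, v) for x, d, v, _ in cohort]
outcomes = [y for *_, y in cohort]
print("mean predicted:", sum(probs) / len(probs))  # calibration-in-the-large
print("observed rate:", sum(outcomes) / len(outcomes))
print("AUC:", auc(probs, outcomes))
```

Comparing the mean predicted probability with the observed event rate is exactly the miscalibration check reported in the abstract (e.g. 34% predicted versus 26% observed), while the c statistic corresponds to the reported areas under the curve.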

  2. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    NASA Astrophysics Data System (ADS)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific-engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated in MTA EK with the aim to review and evaluate current methodologies and models applied in structural integrity calculations, including their scope of validity. 
The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well-grounded theoretical foundation for a new modeling framework of structural integrity. This paper presents the first findings of the research project.

  3. SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, B; Liu, S; Zhang, T

    2016-06-15

    Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all the edges of the aperture, a sum weighted by the distance from the calculation point to the aperture edge was used in the model. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose was significantly reduced.
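    The correction described above (a 1D Gaussian scatter term whose amplitude falls linearly with depth) can be sketched as follows. The ~8% amplitude at 4 cm and the vanishing at 15 cm come from the abstract; the Gaussian width `sigma_cm` and the exact amplitude parameterization are assumptions for illustration.

```python
import math

# Hedged sketch of the 1D Gaussian aperture-scatter model described above.
# The 8%-at-4-cm amplitude and 0-at-15-cm behaviour follow the abstract;
# sigma and the linear amplitude law are illustrative assumptions.

def scatter_amplitude(depth_cm, a4=0.08, d0=15.0):
    """Relative scatter-horn amplitude: 8% of total dose at 4 cm, 0 at 15 cm."""
    slope = a4 / (d0 - 4.0)
    return max(0.0, slope * (d0 - depth_cm))

def aperture_scatter_dose(x_cm, edge_cm, depth_cm, sigma_cm=0.5):
    """1D Gaussian scatter dose centred on the aperture edge (sigma assumed)."""
    amp = scatter_amplitude(depth_cm)
    return amp * math.exp(-0.5 * ((x_cm - edge_cm) / sigma_cm) ** 2)

# Corrected profile = TPS (non-contaminated) dose + modelled scatter term
print(round(scatter_amplitude(4.0), 3))   # peak relative amplitude at 4 cm
print(round(scatter_amplitude(15.0), 3))  # vanishes at 15 cm depth
```

For 2-D fields the study sums contributions from all aperture edges with distance-based weights; the single-edge term above is the building block of that sum.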

  4. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  5. Calculation of the surface tension of liquid Ga-based alloys

    NASA Astrophysics Data System (ADS)

    Dogan, Ali; Arslan, Hüseyin

    2018-05-01

    As is known, Eyring and his collaborators applied their structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi into Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviations from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and corresponding literature data indicates good agreement.

  6. Delimitation of areas under the real pressure from agricultural activities due to nitrate water pollution in Poland

    NASA Astrophysics Data System (ADS)

    Wozniak, E.; Nasilowska, S.; Jarocinska, A.; Igras, J.; Stolarska, M.; Bernoussi, A. S.; Karaczun, Z.

    2012-04-01

    The aim of the performed research was to determine catchments under nitrogen pressure in Poland in the period 2007-2010. The National Water Management Authority in Poland uses the elaborated methodology to fulfil requirements of the Nitrate Directive and the Water Framework Directive. A multicriteria GIS analysis was conducted on the basis of various types of environmental data, maps and remote sensing products. The final model of real agricultural pressure was made using two components: (i) the potential pressure connected with agriculture and (ii) the vulnerability of the area. The agricultural pressure was calculated using the amount of nitrogen in fertilizers and the amount of nitrogen produced by animal breeding. The animal pressure was based on information about the number of bred animals of each species for communes in Poland. The spatial distribution of fertilization pressure was calculated using kriging for the whole country, based on information about 5000 points with the nitrogen dose in fertilizers. The vulnerability model was elaborated only for arable lands. It was based on the probability of precipitation penetration to the ground water and runoff to surface waters. Catchment, hydrogeological, soil, relief and land cover maps allowed constant environmental conditions to be taken into account. Additionally, information about precipitation for each day of the analysis and evapotranspiration for every 16-day period (calculated from satellite images) was used to represent the influence of meteorological conditions on the vulnerability of the terrain. The risk model is the sum of the vulnerability model and the agricultural pressure model. In order to check the accuracy of the elaborated model, the authors compared the results with eutrophication measurements. The model accuracy is from 85.3% to 91.3%.
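    The final combination step is stated plainly: the risk model is the cell-wise sum of the vulnerability raster and the agricultural-pressure raster. A minimal sketch of that overlay, using tiny illustrative grids in place of the national rasters (the values are placeholders, not the study's data):

```python
# Hedged sketch of the final overlay: risk = vulnerability + agricultural
# pressure, cell by cell. The 2x2 grids stand in for the national rasters.

vulnerability = [
    [0.25, 0.5],
    [0.75, 0.125],
]
agricultural_pressure = [
    [0.5, 0.25],
    [0.125, 0.25],
]

risk = [
    [v + p for v, p in zip(vrow, prow)]
    for vrow, prow in zip(vulnerability, agricultural_pressure)
]
print(risk)  # cell-wise sum of the two component models
```

In practice this overlay is done on co-registered GIS rasters covering the whole country, but the arithmetic per cell is exactly this sum.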

  7. Use of A-Train Aerosol Observations to Constrain Direct Aerosol Radiative Effects (DARE) Comparisons with Aerocom Models and Uncertainty Assessments

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA are in between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3. 
We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.

  8. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications

    PubMed Central

    Bäck, Anna

    2013-01-01

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
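    One of the two NTCP formalisms used above, the Lyman-Kutcher-Burman (LKB) model, can be sketched directly: the dose-volume histogram is reduced to a generalized EUD, gEUD = (Σ vᵢ Dᵢ^(1/n))^n, and NTCP = Φ((gEUD − TD50)/(m·TD50)) with Φ the standard normal CDF. The parameter values and the toy DVH below are illustrative, not the fitted algorithm-specific parameters the study derives.

```python
import math

# Hedged sketch of the Lyman-Kutcher-Burman (LKB) NTCP model used in the study.
# TD50, m, n and the toy DVH are illustrative, not the paper's fitted values.

def geud(dose_volume_hist, n):
    """Generalized EUD from (dose_Gy, fractional_volume) pairs."""
    return sum(v * d ** (1.0 / n) for d, v in dose_volume_hist) ** n

def lkb_ntcp(dose_volume_hist, td50=24.5, m=0.18, n=0.87):
    """NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi = standard normal CDF."""
    t = (geud(dose_volume_hist, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative differential DVH: 30% of the lung at 20 Gy, 70% at 5 Gy
dvh = [(20.0, 0.3), (5.0, 0.7)]
print(f"NTCP = {lkb_ntcp(dvh):.4f}")
```

The study's point is that the same DVH fed through this formula with parameters fitted for one dose algorithm gives a different NTCP than with parameters refitted for another algorithm, so the (TD50, m, n) triple must be treated as algorithm-specific.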

  9. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

    Abstract. Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
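    The core of a channelized Hotelling observer calculation is: project signal-present and signal-absent images onto a small set of channels, then compute d'² = Δv̄ᵀ S⁻¹ Δv̄ from the channel-output mean difference Δv̄ and pooled covariance S. The sketch below uses two toy non-overlapping channels on synthetic 4-pixel images; the channel set, image size, and signal amplitude are illustrative assumptions, not the paper's configuration.

```python
import random

# Hedged sketch of a channelized Hotelling observer (CHO) detectability index
# on synthetic data; channels and signal model are illustrative placeholders.

def channelize(img, channels):
    """Project an image onto the channel templates (dot products)."""
    return [sum(c * p for c, p in zip(ch, img)) for ch in channels]

def detectability(signal_imgs, noise_imgs, channels):
    """d' with d'^2 = dv^T S^-1 dv on 2-channel outputs (closed-form 2x2 inverse)."""
    s = [channelize(i, channels) for i in signal_imgs]
    n = [channelize(i, channels) for i in noise_imgs]
    mean = lambda vs: [sum(col) / len(vs) for col in zip(*vs)]
    ms, mn = mean(s), mean(n)
    dv = [a - b for a, b in zip(ms, mn)]

    def cov(vs, mu):
        c = [[0.0, 0.0], [0.0, 0.0]]
        for v in vs:
            d = [v[0] - mu[0], v[1] - mu[1]]
            for i in range(2):
                for j in range(2):
                    c[i][j] += d[i] * d[j] / (len(vs) - 1)
        return c

    cs, cn = cov(s, ms), cov(n, mn)
    S = [[(cs[i][j] + cn[i][j]) / 2.0 for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
    return sum(dv[i] * Sinv[i][j] * dv[j] for i in range(2) for j in range(2)) ** 0.5

random.seed(0)
channels = [[1, 1, 0, 0], [0, 0, 1, 1]]  # two non-overlapping toy channels
noise = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(400)]
signal = [[random.gauss(0.8, 1.0) for _ in range(4)] for _ in range(400)]
di = detectability(signal, noise, channels)
print(f"detectability index d' = {di:.2f}")
```

Real implementations use many more channels (e.g. Gabor or Laguerre-Gauss) and a general matrix inverse, but the mean-difference/covariance structure is the same.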

  10. Radiation doses and neutron irradiation effects on human cells based on calculations

    NASA Astrophysics Data System (ADS)

    Radojevic, B. B.; Cukavac, M.; Jovanovic, D.

    In general, the main aim of our paper is to follow the influence of neutron radiation on materials, but also one of the possible applications of fast neutrons for therapeutic purposes, i.e., their influence on carcinoma cells of difficult geometries in the human body. Interactions between neutrons and human tissue cells are analysed here. We know that the light nuclei of hydrogen, nitrogen, carbon, and oxygen are the main constituents of human cells, and that different nuclear models are usually used to represent interactions of nuclear particles with the mentioned elements. Some of the most widely used pre-equilibrium nuclear models are: the intranuclear cascade (INC), Harp-Miller-Berne (HMB), geometry-dependent hybrid (GDH) and exciton (EM) models. In this paper the primary energy spectra of the secondary particles (neutrons, protons, and gammas) emitted from these interactions are studied and calculated, together with the corresponding integral cross sections, based on the exciton model (EM). The total emission cross section is the sum of emissions at all energy stages. Obtained spectra for interactions of type (n, n'), (n, p), and (n, γ), for various incident neutron energies in the interval from 3 MeV up to 30 MeV, are analysed as well. Some results of the calculations are presented here.

  11. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including Gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non-Gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes the cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
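    The longitudinal part of such a model, integrating the stopping power down the beam axis, is often approximated by the empirical Bragg-Kleeman rule R = αE^p, which stands in here for the Bethe-Bloch integration the paper performs. The α and p values below are textbook figures for protons in water, not MONET's parameters, and this sketch omits straggling and the nuclear term entirely.

```python
# Hedged sketch relating proton energy to range via the Bragg-Kleeman rule
# R = alpha * E^p, a common empirical stand-in for integrating Bethe-Bloch.
# alpha and p are textbook values for protons in water (assumptions here).

ALPHA_CM = 0.0022  # cm / MeV^p
P = 1.77

def csda_range_cm(energy_mev):
    """Approximate CSDA range of a proton in water."""
    return ALPHA_CM * energy_mev ** P

def residual_energy_mev(energy_mev, depth_cm):
    """Energy remaining at a given depth, by inverting the Bragg-Kleeman rule."""
    residual = csda_range_cm(energy_mev) - depth_cm
    if residual <= 0:
        return 0.0
    return (residual / ALPHA_CM) ** (1.0 / P)

e0 = 150.0
print(f"range of {e0:.0f} MeV protons: {csda_range_cm(e0):.1f} cm")
print(f"energy remaining at 10 cm depth: {residual_energy_mev(e0, 10.0):.1f} MeV")
```

The residual-energy inversion is the building block for a depth-dose curve: the local stopping power (and hence dose) rises sharply as the residual energy approaches zero, producing the Bragg peak.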

  12. Numerical estimation of cavitation intensity

    NASA Astrophysics Data System (ADS)

    Krumenacker, L.; Fortes-Patella, R.; Archer, A.

    2014-03-01

    Cavitation may appear in turbomachinery and in hydraulic orifices, venturis or valves, leading to performance losses, vibrations and material erosion. This study proposes a new method to predict the cavitation intensity of the flow, based on post-processing of unsteady CFD calculations. The paper presents analyses of the evolution of cavitating structures at two different scales: • A macroscopic one, in which the growth of cavitating structures is calculated using URANS software based on a homogeneous model. Simulations of cavitating flows are computed using a barotropic law that accounts for the presence of air and interfacial tension, with Reboud's correction on the turbulence model. • A smaller one, where a Rayleigh-Plesset solver calculates the acoustic energy generated by the implosion of the vapor/gas bubbles, with input parameters taken from the macroscopic scale. The volume damage rate of the material during the incubation time is assumed to be a fraction of the cumulated acoustic energy received by the solid wall. The proposed analysis method is applied to calculations on hydrofoil and orifice geometries. Comparisons between model results and experimental works concerning flow characteristics (size of cavity, pressure, velocity) as well as pitting (erosion area, relative cavitation intensity) are presented.
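    The small-scale step above integrates the Rayleigh-Plesset equation, R·R̈ + (3/2)Ṙ² = (p_B − p_∞)/ρ − 4νṘ/R − 2σ/(ρR), to get bubble-collapse dynamics. A minimal explicit-Euler sketch is below; fluid properties are for water, while the initial radius, driving pressure and time step are illustrative inputs, not the study's, and a production solver would use a stiff integrator.

```python
# Hedged sketch: explicit integration of the Rayleigh-Plesset equation for a
# collapsing bubble. Water properties; initial conditions are illustrative.

RHO = 1000.0     # kg/m^3, liquid density
SIGMA = 0.0728   # N/m, surface tension
NU = 1.0e-6      # m^2/s, kinematic viscosity
P_INF = 2.0e5    # Pa, ambient driving pressure (assumed)
P_B = 2.3e3      # Pa, vapour pressure inside the bubble

def collapse(r0=1.0e-4, dt=1.0e-9, steps=200000):
    """Euler integration of R*Rdd + 1.5*Rd^2 = dp/rho - 4*nu*Rd/R - 2*sigma/(rho*R)."""
    r, rd = r0, 0.0
    for _ in range(steps):
        rdd = ((P_B - P_INF) / RHO - 1.5 * rd * rd
               - 4.0 * NU * rd / r - 2.0 * SIGMA / (RHO * r)) / r
        rd += rdd * dt
        r += rd * dt
        if r < 0.05 * r0:  # stop near collapse, before the ODE becomes stiff
            break
    return r, rd

r_final, rd_final = collapse()
print(f"radius fell to {r_final:.2e} m, wall speed {abs(rd_final):.0f} m/s")
```

The large inward wall velocity reached near collapse is what drives the radiated acoustic energy that the study accumulates at the wall as a cavitation-intensity measure.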

  13. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    NASA Astrophysics Data System (ADS)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

    A set of mathematical models for calculating the reliability indexes (RI) of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the RI of industrial heat supply is based on the concept of a technological safety margin of technological processes. The definition of rationed RI values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for providing reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimization of schemes and parameters of combined heat and power plants and systems, as well as determining the efficiency of various redundancy methods to ensure specified reliability of power supply.

  14. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations of a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
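    The finite-volume structure of a Godunov-type scheme (interface fluxes from local Riemann problems, then a conservative cell update) can be shown on the simplest possible case, linear advection u_t + a·u_x = 0, where the Riemann flux reduces to upwinding. This toy is only the update skeleton; the production code solves the full gas-dynamics equations on unstructured cells.

```python
# Hedged sketch of a first-order Godunov finite-volume update on linear
# advection with periodic boundaries; for a > 0 the Riemann flux is upwind.

def godunov_step(u, a, dt, dx):
    """One conservative update u_i -= c*(u_i - u_{i-1}), c = a*dt/dx <= 1."""
    c = a * dt / dx  # CFL number; the scheme is stable and monotone for c <= 1
    return [ui - c * (ui - uim1) for ui, uim1 in zip(u, u[-1:] + u[:-1])]

# Advect a square pulse across one full period of the domain
n, a, dx = 100, 1.0, 0.01
dt = 0.5 * dx / a  # CFL = 0.5
u = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
for _ in range(int(round(n * dx / (a * dt)))):  # one domain traversal
    u = godunov_step(u, a, dt, dx)
print("total mass:", sum(u))  # conserved exactly by the flux form
```

Conservation holds because each interface flux is added to one cell and subtracted from its neighbour; the same property is what makes Godunov-type schemes attractive for the shock-laden flows inside a ramjet.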

  15. Direct calculation of wall interferences and wall adaptation for two-dimensional flow in wind tunnels with closed walls

    NASA Technical Reports Server (NTRS)

    Amecke, Juergen

    1986-01-01

    A method for the direct calculation of the wall-induced interference velocity in two-dimensional flow, based on Cauchy's integral formula, was derived. This one-step method allows the calculation of the residual corrections and the wall adaptation required for interference-free flow, starting from the wall pressure distribution without any model representation. Demonstration applications are given.

  16. The methodology of the gas turbine efficiency calculation

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz

    2016-12-01

    In the paper, a calculation methodology for the isentropic efficiency of a compressor and turbine in a gas turbine installation is presented, on the basis of polytropic efficiency characteristics. A gas turbine model is developed into software for power plant simulation. Calculation algorithms based on an iterative model are shown for the isentropic efficiency of the compressor and for the isentropic efficiency of the turbine based on the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines toward high compression ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that an increase of the pressure ratio above 50 is not justified, due to the slight increase in efficiency accompanied by a significant increase of the combustor outlet (turbine inlet) temperature.
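    The standard perfect-gas relations connecting polytropic (small-stage) and isentropic efficiency, which underlie characteristics of this kind, can be sketched directly. The heat-capacity ratio and the polytropic efficiency value are illustrative assumptions, and a real gas turbine model would iterate with temperature-dependent gas properties.

```python
# Hedged sketch of isentropic-from-polytropic efficiency relations for a
# perfect gas; kappa and eta_poly are illustrative assumptions.

KAPPA = 1.4  # ratio of specific heats for air (assumed constant)

def compressor_isentropic_eff(pi, eta_poly):
    """eta_is = (pi^k - 1) / (pi^(k/eta_p) - 1), k = (kappa-1)/kappa."""
    k = (KAPPA - 1.0) / KAPPA
    return (pi ** k - 1.0) / (pi ** (k / eta_poly) - 1.0)

def turbine_isentropic_eff(pi, eta_poly):
    """eta_is = (1 - pi^(-k*eta_p)) / (1 - pi^(-k))."""
    k = (KAPPA - 1.0) / KAPPA
    return (1.0 - pi ** (-k * eta_poly)) / (1.0 - pi ** (-k))

for pi in (10.0, 30.0, 50.0):
    print(pi, round(compressor_isentropic_eff(pi, 0.90), 4),
          round(turbine_isentropic_eff(pi, 0.90), 4))
```

The relations capture the qualitative result in the abstract: for a fixed polytropic efficiency, raising the pressure ratio pushes the compressor's isentropic efficiency down (and the turbine's up), so very high pressure ratios buy progressively less cycle efficiency.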

  17. Neutron-induced reactions on AlF3 studied using the optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Lv, Cui-Juan; Zhang, Guo-Qiang; Wang, Hong-Wei; Zuo, Jia-Xu

    2015-08-01

    Neutron-induced reactions on 27Al and 19F nuclei are investigated using the optical model implemented in the TALYS 1.4 toolkit. Incident neutron energies in a wide range from 0.1 keV to 30 MeV are calculated. The cross sections for the main channels (n, np), (n, p), (n, α), (n, 2n), and (n, γ) and the total reaction cross section (n, tot) are obtained. When the default parameters in TALYS 1.4 are adopted, the calculated results agree with the measured results. Based on the calculated results for the n + 27Al and n + 19F reactions, the results for n + AlF3 reactions are predicted. These results are useful both for the design of thorium-based molten salt reactors and for neutron activation analysis techniques.

  18. Simulation of ground-water flow and evaluation of water-management alternatives in the upper Charles River basin, eastern Massachusetts

    USGS Publications Warehouse

    DeSimone, Leslie A.; Walter, Donald A.; Eggleston, John R.; Nimiroski, Mark T.

    2002-01-01

    Ground water is the primary source of drinking water for towns in the upper Charles River Basin, an area of 105 square miles in eastern Massachusetts that is undergoing rapid growth. The stratified-glacial aquifers in the basin are high yield, but also are thin, discontinuous, and in close hydraulic connection with streams, ponds, and wetlands. Water withdrawals averaged 10.1 million gallons per day in 1989–98 and are likely to increase in response to rapid growth. These withdrawals deplete streamflow and lower pond levels. A study was conducted to develop tools for evaluating water-management alternatives at the regional scale in the basin. Geologic and hydrologic data were compiled and collected to characterize the ground- and surface-water systems. Numerical flow modeling techniques were applied to evaluate the effects of increased withdrawals and altered recharge on ground-water levels, pond levels, and stream base flow. Simulation-optimization methods also were applied to test their efficacy for management of multiple water-supply and water-resource needs. Steady-state and transient ground-water-flow models were developed using the numerical modeling code MODFLOW-2000. The models were calibrated to 1989–98 average annual conditions of water withdrawals, water levels, and stream base flow. Model recharge rates were varied spatially, by land use, surficial geology, and septic-tank return flow. Recharge was changed during model calibration by means of parameter-estimation techniques to better match the estimated average annual base flow; area-weighted rates averaged 22.5 inches per year for the basin. Water withdrawals accounted for about 7 percent of total simulated flows through the stream-aquifer system and were about equal in magnitude to model-calculated rates of ground-water evapotranspiration from wetlands and ponds in aquifer areas. 
Water withdrawals as percentages of total flow varied spatially and temporally within an average year; maximum values were 12 to 13 percent of total annual flow in some subbasins and of total monthly flow throughout the basin in summer and early fall. Water-management alternatives were evaluated by simulating hypothetical scenarios of increased withdrawals and altered recharge for average 1989–98 conditions with the flow models. Increased withdrawals to maximum State-permitted levels would result in withdrawals of about 15 million gallons per day, or about 50 percent more than current withdrawals. Model-calculated effects of these increased withdrawals included reductions in stream base flow that were greatest (as a percentage of total flow) in late summer and early fall. These reductions ranged from less than 5 percent to more than 60 percent of model-calculated 1989–98 base flow along reaches of the Charles River and major tributaries during low-flow periods. Reductions in base flow generally were comparable to upstream increases in withdrawals, but were slightly less than upstream withdrawals in areas where septic-system return flow was simulated. Increased withdrawals also increased the proportion of wastewater in the Charles River downstream of treatment facilities. The wastewater component increased downstream from a treatment facility in Milford from 80 percent of September base flow under 1989–98 conditions to 90 percent of base flow, and from 18 to 27 percent of September base flow downstream of a treatment facility in Medway. In another set of hypothetical scenarios, additional recharge equal to the transfer of water out of a typical subbasin by sewers was found to increase model-calculated base flows by about 12 percent of model-calculated base flows. Addition of recharge equal to that available from artificial recharge of residential rooftop runoff had smaller effects, augmenting simulated September base flow by about 3 percent. 
Simulation-optimization methods were applied to an area near Populatic Pond and the confluence of the Mill and Charles Rivers in Franklin,

  19. Infrared radiation scene generation of stars and planets in celestial background

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Hong, Yaohui; Xu, Xiaojian

    2014-10-01

    An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template [1] is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2–35 μm. In the current model, the initial locations of stars are calculated from the Midcourse Space Experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
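
    As a sketch of the planet component under the blackbody assumption described above, the spectral radiance can be evaluated with Planck's law and integrated over the model's 1.2–35 μm band. The function name and the 288 K example temperature are illustrative choices, not values from the paper:

```python
import numpy as np

def planck_spectral_radiance(wavelength_um, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 um^-1.

    wavelength_um : wavelength in micrometres
    T             : temperature in kelvin
    """
    h = 6.62607015e-34   # Planck constant, J s
    c = 2.99792458e8     # speed of light, m/s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    lam = wavelength_um * 1e-6  # convert to metres
    # Planck's law per unit wavelength, then convert from per-m to per-um
    radiance_per_m = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
    return radiance_per_m * 1e-6

# In-band radiance of a 288 K "planet" over the 1.2-35 um band used by the model
band = np.linspace(1.2, 35.0, 2000)
rad = planck_spectral_radiance(band, 288.0)
in_band = np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(band))  # trapezoid rule
```

    For a 288 K surface the spectral peak falls near 10 μm (Wien's law), comfortably inside the 1.2–35 μm band the model covers.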

  20. Bayesian algorithm implementation in a real time exposure assessment model on benzene with calculation of associated cancer risks.

    PubMed

    Sarigiannis, Dimosthenis A; Karakitsios, Spyros P; Gotti, Alberto; Papaloukas, Costas L; Kassomenos, Pavlos A; Pilidis, Georgios A

    2009-01-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorology and amount of fuel traded) determined by an appropriate sensor network. A set of artificial neural networks (ANNs) was developed to predict the benzene exposure pattern of the filling station employees. Furthermore, a physiologically based pharmacokinetic (PBPK) risk assessment model was developed to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was employed at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among the several algorithms available for developing the ANN exposure model, Bayesian regularization provided the best results and appears to be a promising technique for predicting the exposure pattern of this occupational population group. In assessing the estimated leukemia risk, with the aim of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.
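
    The Bayesian-regularization idea can be illustrated with MacKay's evidence approximation, which re-estimates the prior precision α and noise precision β from the data; this is the same machinery that Bayesian regularization applies to neural-network weights, shown here for a simple linear model on synthetic stand-in features (the data, coefficients and variable names are invented for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the sensor features (e.g. traffic, temperature, fuel traded)
X = rng.normal(size=(200, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + 0.1 * rng.normal(size=200)

Phi = np.hstack([X, np.ones((200, 1))])   # design matrix with a bias column
alpha, beta = 1.0, 1.0                     # prior precision, noise precision
for _ in range(50):
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    Sigma = np.linalg.inv(A)               # posterior covariance of the weights
    m = beta * Sigma @ Phi.T @ y           # posterior mean of the weights
    gamma = Phi.shape[1] - alpha * np.trace(Sigma)  # effective number of parameters
    alpha = gamma / (m @ m)                          # re-estimate prior precision
    beta = (len(y) - gamma) / np.sum((y - Phi @ m) ** 2)  # re-estimate noise precision

w_est = m
```

    The re-estimated β converges toward the reciprocal of the noise variance, so the regularization strength α/β is set by the data rather than by hand, which is the practical appeal of the technique for small occupational datasets.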

  2. Videogrammetric Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tian-Shu

    2001-01-01

    The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
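
    The target centroid calculation mentioned in the software list is typically an intensity-weighted ("grey-scale") centroid over a thresholded blob, which gives sub-pixel target locations. A minimal sketch, with illustrative names and a synthetic Gaussian target (not NASA's actual routine):

```python
import numpy as np

def target_centroid(image, threshold=0.0):
    """Intensity-weighted centroid of a bright target blob.

    Returns (row, col) in sub-pixel units; the threshold is subtracted
    so background intensity does not bias the estimate.
    """
    img = np.where(image > threshold, image - threshold, 0.0)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic 2D Gaussian target centred at (12.3, 7.6) in a 32x32 window
r, c = np.indices((32, 32))
spot = np.exp(-((r - 12.3) ** 2 + (c - 7.6) ** 2) / (2 * 2.0 ** 2))
cr, cc = target_centroid(spot, threshold=0.01)
```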

  3. Feasibility of supersonic diode pumped alkali lasers: Model calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barmashenko, B. D.; Rosenwaks, S.

    The feasibility of supersonic operation of diode pumped alkali lasers (DPALs) is studied for Cs and K atoms applying model calculations, based on a semi-analytical model previously used for studying static and subsonic flow DPALs. The operation of supersonic lasers is compared with that measured and modeled in subsonic lasers. The maximum power of supersonic Cs and K lasers is found to be higher than that of subsonic lasers with the same resonator and alkali density at the laser inlet by 25% and 70%, respectively. These results indicate that for scaling-up the power of DPALs, supersonic expansion should be considered.

  4. Calculating Henry’s Constants of Charged Molecules Using SPARC

    EPA Science Inventory

    SPARC Performs Automated Reasoning in Chemistry is a computer program designed to model physical and chemical properties of molecules solely based on their chemical structure. SPARC uses a toolbox of mechanistic perturbation models to model intermolecular interactions. SPARC has ...

  5. Global thermal analysis of air-air cooled motor based on thermal network

    NASA Astrophysics Data System (ADS)

    Hu, Tian; Leng, Xue; Shen, Li; Liu, Haidong

    2018-02-01

    Air-air cooled motors, with their high efficiency, large starting torque, strong overload capacity, low noise, small vibration and other favourable characteristics, are widely used across national industry, but their cooling structure is complex and places high demands on motor thermal management. The thermal network method is a common method for calculating the temperature field of a motor; it has the advantages of low computational cost and short run time, and can save a lot of time in the initial design phase of the motor. A thermal analysis of an air-air cooled motor and its cooler was carried out using the thermal network method: a combined thermal network model was built, the temperatures of the main components inside the motor and of the external cooler were calculated and analyzed, and the results were compared with temperature-rise test measurements to verify the correctness of the combined thermal network model. The calculation method can satisfy the needs of engineering design and provides a reference for the initial and optimum design of the motor.
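
    A thermal network of this kind reduces, at steady state, to a set of linear nodal heat-balance equations. A minimal three-node sketch (winding, core, frame coupled to ambient) with purely illustrative conductance and loss values, not those of the paper:

```python
import numpy as np

# Lumped thermal network: nodes = {winding, core, frame}.
# G[i, j] = thermal conductance between nodes i and j (W/K);
# g_amb[i] = conductance from node i to ambient (W/K). Values are illustrative.
G = np.array([[0.0, 5.0, 2.0],
              [5.0, 0.0, 8.0],
              [2.0, 8.0, 0.0]])
g_amb = np.array([1.0, 1.0, 20.0])
P = np.array([300.0, 100.0, 0.0])   # heat sources (losses) at each node, W
T_amb = 40.0                        # ambient temperature, deg C

# Steady-state nodal equations: (diag(row-sums of G + g_amb) - G) T = P + g_amb*T_amb
A = np.diag(G.sum(axis=1) + g_amb) - G
T = np.linalg.solve(A, P + g_amb * T_amb)
```

    The conductance matrix is symmetric and diagonally dominant, so the solve is well-conditioned; at steady state the heat rejected to ambient must equal the injected losses, which is a useful sanity check on any network model.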

  6. The effect of nanoparticle surfactant polarization on trapping depth of vegetable insulating oil-based nanofluids

    NASA Astrophysics Data System (ADS)

    Li, Jian; Du, Bin; Wang, Feipeng; Yao, Wei; Yao, Shuhan

    2016-02-01

    Nanoparticles can generate charge-carrier trapping and reduce the velocity of streamer development in insulating oils, ultimately leading to an enhancement of the breakdown voltage of insulating oils. Vegetable insulating oil-based nanofluids with three sizes of monodispersed Fe3O4 nanoparticles were prepared, and their trapping depths were measured by the thermally stimulated current (TSC) method. It is found that nanoparticle surfactant polarization can significantly influence the trapping depth of vegetable insulating oil-based nanofluids. A nanoparticle polarization model considering surfactant polarization was proposed to calculate the trapping depth of the nanofluids for different nanoparticle sizes and surfactant thicknesses. The results show that the calculated values of the model are in fairly good agreement with the experimental values.

  7. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    NASA Astrophysics Data System (ADS)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications including virtual reality. In CGH, holographic fringe patterns are generated by calculating them numerically on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH of a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using a sparse FFT (sFFT). We observe that the CGH of a single layer of a 3D object is sparse, so the dominant CGH components can be generated rapidly from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
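
    The layer-based step can be sketched with the standard FFT-based angular-spectrum transfer function: each depth layer of the point cloud is propagated to the hologram plane and the complex fields are summed (a plain numpy FFT is used here where the paper substitutes a sparse FFT for speed; the geometry and wavelength are illustrative):

```python
import numpy as np

def propagate_layer(field, wavelength, pitch, z):
    """Angular-spectrum propagation of one depth layer to the hologram plane."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function; evanescent components (arg < 0) are suppressed
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy point cloud: one point per depth layer; sum the propagated fields
n, wavelength, pitch = 128, 532e-9, 8e-6
layers = {1e-3: np.zeros((n, n), complex), 2e-3: np.zeros((n, n), complex)}
layers[1e-3][40, 40] = 1.0
layers[2e-3][90, 70] = 1.0
hologram = sum(propagate_layer(f, wavelength, pitch, z) for z, f in layers.items())
```

    Because |H| = 1 for all propagating frequencies at this pixel pitch, each layer's energy is conserved through the transform, which is a quick correctness check for the propagation kernel.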

  8. Modeling and calculation of impact friction caused by corner contact in gear transmission

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Chen, Siyu

    2014-09-01

    Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process of corner contact is divided into two stages, impact and scratch, and a calculation model including gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash and tooth profile modification on the line of action. The combined tooth compliance of the first point in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. Combining the equivalent error with the combined deflection, a criterion for locating the point in corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, a backlash model during corner contact is founded, and the impact force and friction coefficient are quantified. A numerical example is performed, and the averaged impact friction coefficient based on the presented calculation method is validated. This research provides results that can be used to understand the complex mechanism of tooth impact friction, to calculate the friction force and coefficient quantitatively, and to support exact gear design for tribology.

  9. Improvement of the Scintillation-Irregularity Model in WBMOD

    DTIC Science & Technology

    1983-02-28

    satellite over a small section of its orbit. 2-4 IMPLEMENTATION AT AFGWC One of the tasks carried out was to modify the most recent version of WBMOD...influence scintillation strength. OSRTN sets up the integral to calculate phase variance for a finite outer scale; ROMINT is a modified Romberg quadrature integration... orbit calculation, and implementation of an irregularity drift routine based on a recently published model of ionospheric convection at high latitudes
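
    The snippet names ROMINT, a modified Romberg quadrature routine. A generic Romberg integrator (trapezoid rule plus Richardson extrapolation) can be sketched as follows; this is the textbook algorithm, not the report's actual implementation:

```python
def romberg(f, a, b, n_levels=6):
    """Romberg quadrature: trapezoid estimates refined by Richardson extrapolation."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]           # coarsest trapezoid estimate
    for k in range(1, n_levels):
        h = (b - a) / 2 ** k
        # Halve the step: reuse the old estimate, add only the new midpoints
        mids = sum(f(a + (2 * i + 1) * h) for i in range(2 ** (k - 1)))
        R.append([0.5 * R[k - 1][0] + h * mids])
        for j in range(1, k + 1):
            # Each extrapolation column removes the O(h^{2j}) error term
            R[k].append(R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
    return R[-1][-1]
```

    For smooth integrands such as phase-variance kernels, the extrapolation converges far faster than the trapezoid rule alone, which is why Romberg schemes suit repeated in-model integrations.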

  10. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  11. SU-E-T-541: Measurement of CT Density Model Variations and the Impact On the Accuracy of Monte Carlo (MC) Dose Calculation in Stereotactic Body Radiation Therapy for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, H; Li, B; Behrman, R

    2015-06-15

    Purpose: To measure the CT density model variations between different CT scanners used for treatment planning and the impact on the accuracy of MC dose calculation in lung SBRT. Methods: A Gammex electron density phantom (RMI 465) was scanned on two 64-slice CT scanners (GE LightSpeed VCT64) and a 16-slice CT (Philips Brilliance Big Bore CT). All three scanners had been used to acquire CT for CyberKnife lung SBRT treatment planning. To minimize the influences of beam hardening and scatter for improving reproducibility, three scans were acquired with the phantom rotated 120° between scans. The mean CT HU of each density insert, averaged over the three scans, was used to build the CT density models. For 14 patient plans, repeat MC dose calculations were performed by using the scanner-specific CT density models and compared to a baseline CT density model in the base plans. All dose re-calculations were done using the same plan beam configurations and MUs. Comparisons of dosimetric parameters included PTV volume covered by prescription dose, mean PTV dose, V5 and V20 for lungs, and the maximum dose to the closest critical organ. Results: Up to 50.7 HU variations in CT density models were observed over the baseline CT density model. For 14 patient plans examined, maximum differences in MC dose re-calculations were less than 2% in 71.4% of the cases, less than 5% in 85.7% of the cases, and 5–10% for 14.3% of the cases. As all the base plans well exceeded the clinical objectives of target coverage and OAR sparing, none of the observed differences led to clinically significant concerns. Conclusion: Marked variations of CT density models were observed for three different CT scanners. Though the differences can cause up to 5–10% differences in MC dose calculations, it was found that they caused no clinically significant concerns.
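
    A CT density model of the kind discussed here is usually a piecewise-linear lookup from CT number (HU) to mass density, built from the phantom-insert measurements. A minimal sketch, with an invented calibration table (not the scanners' actual commissioning data), showing how an HU offset between scanners shifts the density seen by the dose engine:

```python
import numpy as np

# Illustrative (HU, g/cm^3) calibration pairs, roughly spanning the range of a
# Gammex-style electron density phantom -- invented values for demonstration.
hu_table      = np.array([-1000.0, -700.0, -90.0, 0.0, 50.0, 800.0, 1200.0])
density_table = np.array([  0.001,   0.30,  0.92, 1.00, 1.05, 1.45,   1.82])

def hu_to_density(hu):
    """Piecewise-linear CT density model, clamped at the table ends."""
    return np.interp(hu, hu_table, density_table)

# A 50 HU inter-scanner calibration offset (comparable to the 50.7 HU variation
# reported above) shifts the density used in the dose calculation:
shift = hu_to_density(100.0 + 50.0) - hu_to_density(100.0)
```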

  12. The human body metabolism process mathematical simulation based on Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Oliynyk, Andriy; Oliynyk, Eugene; Pyptiuk, Olexandr; DzierŻak, RóŻa; Szatkowska, Małgorzata; Uvaysova, Svetlana; Kozbekova, Ainur

    2017-08-01

    A mathematical model of the metabolism process in the human organism based on the Lotka-Volterra model has been proposed, considering the healing regime, the nutrition system, and the features of the insulin and sugar fragmentation process in the organism. A numerical algorithm for the model using the fourth-order Runge-Kutta method has been implemented. From the results of the calculations, conclusions have been drawn, recommendations on using the modeling results are given, and directions for further research are defined.
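
    The numerical scheme described can be sketched as a classical fourth-order Runge-Kutta step applied to a Lotka-Volterra pair standing in for the sugar/insulin interaction; the coefficients and initial state below are illustrative, not the paper's:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Lotka-Volterra system with illustrative coefficients
a, b, c, d = 1.0, 0.5, 0.8, 0.4

def lv(t, y):
    x1, x2 = y  # e.g. blood sugar and insulin levels
    return np.array([a * x1 - b * x1 * x2, -c * x2 + d * x1 * x2])

y0 = np.array([2.0, 1.0])
y = y0.copy()
h, n = 0.01, 2000
for i in range(n):
    y = rk4_step(lv, i * h, y, h)
```

    The classical Lotka-Volterra system conserves the quantity V = d·x − c·ln x + b·y − a·ln y along trajectories, so checking its drift is a convenient accuracy test for the integrator.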

  13. Managing Uncertainty in Runoff Estimation with the U.S. Environmental Protection Agency National Stormwater Calculator.

    EPA Science Inventory

    The U.S. Environmental Protection Agency National Stormwater Calculator (NSWC) simplifies the task of estimating runoff through a straightforward simulation process based on the EPA Stormwater Management Model. The NSWC accesses localized climate and soil hydrology data, and opti...

  14. ESTIMATION OF PHOSPHATE ESTER HYDROLYSIS RATE CONSTANTS. II. ACID AND GENERAL BASE CATALYZED HYDROLYSIS

    EPA Science Inventory

    SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate acid and neutral hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition states of a ...

  15. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method for setting up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as plug flow reactors (PFRs) and continuously stirred tank reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows a much faster calculation time (by a factor of about 10⁵) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
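
    The CSTR part of such a conceptual structure can be sketched as a chain of well-mixed reservoirs with first-order pollutant decay, integrated explicitly; the function name, reach volumes and rate constant below are invented for illustration, not the paper's calibrated values:

```python
import numpy as np

def simulate_cstr_chain(c_in, Q, volumes, k, dt, n_steps):
    """Conceptual river stretch as CSTRs in series with first-order decay.

    dC_i/dt = (Q / V_i) * (C_{i-1} - C_i) - k * C_i, with C_0 the upstream
    boundary concentration. Explicit Euler time stepping.
    """
    c = np.zeros(len(volumes))
    for _ in range(n_steps):
        upstream = np.concatenate(([c_in], c[:-1]))
        c = c + dt * (Q / volumes * (upstream - c) - k * c)
    return c

# Steady state of 3 reaches with a 1-day residence time each, decay k = 0.2 /day
conc = simulate_cstr_chain(c_in=10.0, Q=1000.0,
                           volumes=np.array([1000.0, 1000.0, 1000.0]),
                           k=0.2, dt=0.01, n_steps=5000)
```

    At steady state each tank attenuates the concentration by the factor (Q/V) / (Q/V + k), so the downstream profile decays geometrically, which makes the conceptual model easy to calibrate against the detailed simulation.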

  16. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R-free, or may leave it out completely.

  17. Modeling the Soil Water and Energy Balance of a Mixed Grass Rangeland and Evaluating a Soil Water Based Drought Index in Wyoming

    NASA Astrophysics Data System (ADS)

    Engda, T. A.; Kelleners, T. J.; Paige, G. B.

    2013-12-01

    Soil water content plays an important role in the complex interaction between terrestrial ecosystems and the atmosphere. Automated soil water content sensing is increasingly being used to assess agricultural drought conditions. A one-dimensional vertical model that calculates incoming solar radiation, the canopy energy balance, the surface energy balance, snowpack dynamics, soil water flow, and snow-soil heat exchange is applied to calculate water flow and heat transport in a rangeland soil located near Lingle, Wyoming. The model is calibrated and validated using three years of measured soil water content data. Long-term average soil water content dynamics are calculated using a 30-year historical data record. The difference between the long-term average and the observed soil water content is compared with plant biomass to evaluate the usefulness of soil water content as a drought indicator. A strong correlation between soil moisture surplus/deficit and plant biomass would support our hypothesis that soil water content is a good indicator of drought conditions. A soil-moisture-based drought index is calculated using modeled and measured soil water data as input and is compared with measured plant biomass data. A drought index that captures local drought conditions demonstrates the importance of a soil water monitoring network for Wyoming rangelands to fill the gap between large-scale drought indices, which are not detailed enough to assess conditions at the local level, and local drought conditions. Results from combined soil moisture monitoring and computer modeling, and a soil-water-based drought index, are presented to quantify vertical soil water flow, heat transport, historical soil water variations and drought conditions in the study area.

  18. HUMAN BODY SHAPE INDEX BASED ON AN EXPERIMENTALLY DERIVED MODEL OF HUMAN GROWTH

    PubMed Central

    Lebiedowska, Maria K.; Alter, Katharine E.; Stanhope, Steven J.

    2009-01-01

    Objectives To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5–18 years; to use the derived growth models to establish a new Human Body Shape Index (HBSI) based on natural age-related changes in HBS; and to compare various metrics of relative body weight (body mass index, ponderal index, HBSI) in a sample of 5–18 year old children. Study design Non-disabled Polish children (N=847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex with the allometric equation M = m_i·H^χ. HBSI was calculated separately for girls and boys, using sex-specific values for χ, and a general HBSI was calculated from the combined data. The customary body mass and ponderal indices were calculated and compared to HBSI values. Results The models of growth were M = 13.11·H^2.84 (R² = 0.90) and M = 13.64·H^2.68 (R² = 0.91) for girls and boys respectively. HBSI values contained less inherent variability and were less influenced by growth (age and height) than the customary indices. Conclusion Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of human body shape formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for the characterization of human body shape in children during the formative growth years. PMID:18154897

  19. Human body shape index based on an experimentally derived model of human growth.

    PubMed

    Lebiedowska, Maria K; Alter, Katharine E; Stanhope, Steven J

    2008-01-01

    To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5 to 18 years; to use these derived growth models to establish a new human body shape index (HBSI) based on natural age-related changes in human body shape (HBS); and to compare various metrics of relative body weight (body mass index [BMI], ponderal index [PI], and HBSI) in a sample of 5- to 18-year-old children. Nondisabled Polish children (n = 847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex using the allometric equation M = m(i) H(chi). HBSI was calculated separately for girls and boys, using sex-specific values for chi and a general HBSI from combined data. The customary BMI and PI were calculated and compared with HBSI values. The models of growth were M = 13.11H(2.84) (R2 = 0.90) for girls and M = 13.64H(2.68) (R2 = 0.91) for boys. HBSI values contained less inherent variability and were less influenced by growth (age and height) compared with BMI and PI. Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of HBS formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for characterizing HBS in children during the formative growth years.
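
    The allometric fit M = m_i·H^χ is conveniently estimated by least squares in log-log space, after which the shape index follows as M/H^χ. A sketch on synthetic data generated from the paper's reported girls' model (m_i = 13.11, χ = 2.84); the fitting procedure and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic height (m) / mass (kg) sample standing in for the cohort data,
# generated from the reported girls' model with multiplicative noise.
H = rng.uniform(1.1, 1.8, size=300)
M = 13.11 * H ** 2.84 * np.exp(0.03 * rng.normal(size=300))

# Fit M = m_i * H**chi by linear least squares in log-log space:
# log M = log m_i + chi * log H
chi, log_mi = np.polyfit(np.log(H), np.log(M), 1)
m_i = np.exp(log_mi)

# Shape index: HBSI = M / H**chi, height-independent by construction
hbsi = M / H ** chi
```

    Dividing mass by H^χ with the fitted exponent (rather than the geometric-similarity value of 3, or BMI's 2) is exactly what removes the growth dependence the abstract describes.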

  20. Volcanic ash dosage calculator: A proof-of-concept tool to support aviation stakeholders during ash events

    NASA Astrophysics Data System (ADS)

    Dacre, H.; Prata, A.; Shine, K. P.; Irvine, E.

    2017-12-01

    The volcanic ash clouds produced by the Icelandic volcano Eyjafjallajökull in April/May 2010 resulted in 'no-fly zones' which paralysed European aircraft activity and cost the airline industry an estimated £1.1 billion. In response to the crisis, the Civil Aviation Authority (CAA), in collaboration with Rolls-Royce, produced the 'safe-to-fly' chart. As ash concentrations are the primary output of dispersion model forecasts, the chart was designed to illustrate how engine damage progresses as a function of ash concentration. Concentration thresholds were subsequently derived based on previous ash encounters. Research scientists and aircraft manufacturers have since recognised the importance of volcanic ash dosages: the concentration accumulated over time. Dosages are an improvement over concentrations as they can be used to identify pernicious situations where ash concentrations are acceptably low but the exposure time is long enough to cause damage to aircraft engines. Here we present a proof-of-concept volcanic ash dosage calculator: an innovative, web-based research tool, developed in close collaboration with operators and regulators, which utilises interactive data visualisation to communicate the uncertainty inherent in dispersion model simulations and subsequent dosage calculations. To calculate dosages, we use NAME (Numerical Atmospheric-dispersion Modelling Environment) to simulate several Icelandic eruption scenarios, which result in tephra dispersal across the North Atlantic, UK and Europe. Ash encounters are simulated based on flight-optimal routes derived from aircraft routing software. Key outputs of the calculator include the along-flight dosage, exposure time and peak concentration. The design of the tool allows users to explore the key areas of uncertainty in the dosage calculation and to visualise how these change as the planned flight path is varied.
We expect that this research will result in better informed decisions from key stakeholders during volcanic ash events through a deeper understanding of the associated uncertainties in dosage calculations.
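
The three key outputs named above (along-flight dosage, exposure time, peak concentration) amount to simple operations on the concentration time series sampled along the route. A minimal sketch with invented concentrations and an assumed 0.2 mg/m³ reporting threshold (not the calculator's actual values):

```python
import numpy as np

# Ash concentration (mg/m^3) sampled every 5 minutes along the planned route;
# the values are invented for illustration.
dt_s = 300.0
concentration = np.array([0.0, 0.1, 0.5, 2.0, 3.5, 2.0, 0.4, 0.0])

# Dosage = time-integral of concentration (trapezoid rule), in mg s m^-3
dosage = float(np.sum(0.5 * (concentration[1:] + concentration[:-1]) * dt_s))

peak = concentration.max()                                       # mg/m^3
exposure_time = dt_s * np.count_nonzero(concentration > 0.2)     # s above 0.2 mg/m^3
```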

  1. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19–47 and 8–38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
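
    Applying the coefficient method reduces to a lookup-and-sum over fields: each field contributes area × NLC(crop, following crop, soil). The coefficient keys and values below are invented for illustration, not SOILNDB outputs:

```python
# Illustrative N leaching coefficients, kg N per ha per year,
# keyed by (main crop, following crop, soil type).
nlc = {
    ("spring barley", "winter wheat", "clay"): 22.0,
    ("winter wheat", "catch crop", "clay"): 12.0,
    ("ley", "ley", "sand"): 8.0,
}

# Hypothetical fields in a small catchment: (area in ha, crop, following crop, soil)
fields = [
    (10.0, "spring barley", "winter wheat", "clay"),
    (5.0, "winter wheat", "catch crop", "clay"),
    (20.0, "ley", "ley", "sand"),
]

total_load = sum(area * nlc[(crop, nxt, soil)] for area, crop, nxt, soil in fields)
mean_leaching = total_load / sum(f[0] for f in fields)   # kg ha^-1 yr^-1
```

    Evaluating a measure such as undersowing a catch crop then amounts to swapping the (crop, following crop) key for the affected fields and recomputing the sum.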

  2. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by estimating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are usually estimated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TanDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can produce DEM products with accuracy better than 2.43 m in flat areas and 6.97 m in mountainous areas, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and larger scales in flat and mountainous areas.

  3. One-Dimensional Hybrid Satellite Track Model for the Dynamics Explorer 2 (DE 2) Satellite

    NASA Technical Reports Server (NTRS)

    Deng, Wei; Killeen, T. L.; Burns, A. G.; Johnson, R. M.; Emery, B. A.; Roble, R. G.; Winningham, J. D.; Gary, J. B.

    1995-01-01

    A one-dimensional hybrid satellite track model has been developed to calculate the high-latitude thermospheric/ionospheric structure below the satellite altitude using Dynamics Explorer 2 (DE 2) satellite measurements and theory. This model is based on the Emery et al. satellite track code but also includes elements of the Roble et al. global mean thermosphere/ionosphere model. A number of parameterizations and data handling techniques are used to input satellite data from several DE 2 instruments into this model. Profiles of neutral atmospheric densities are determined from the MSIS-90 model and measured neutral temperatures. Measured electron precipitation spectra are used in an auroral model to calculate particle impact ionization rates below the satellite. These rates are combined with a solar ionization rate profile and used to solve the O(+) diffusion equation, with the measured electron density as an upper boundary condition. The calculated O(+) density distribution, as well as the ionization profiles, are then used in a photochemical equilibrium model to calculate the electron and molecular ion densities. The electron temperature is also calculated by solving the electron energy equation with an upper boundary condition determined by the DE 2 measurement. The model enables calculations of altitude profiles of conductivity and Joule heating rate along and below the satellite track. In a first application of the new model, a study is made of thermospheric and ionospheric structure below the DE 2 satellite for a single orbit which occurred on October 25, 1981. The field-aligned Poynting flux, which is independently obtained for this orbit, is compared with the model predictions of the height-integrated energy conversion rate. Good quantitative agreement between these two estimates has been reached. In addition, measurements taken at the incoherent scatter radar site at Chatanika (65.1 deg N, 147.4 deg W) during a DE 2 overflight are compared with the model calculations. Good agreement was found in the lower thermospheric conductivities and Joule heating rates.

  4. An analytical model for calculating microdosimetric distributions from heavy ions in nanometer site targets.

    PubMed

    Czopyk, L; Olko, P

    2006-01-01

    The analytical model of Xapsos used for calculating microdosimetric spectra is based on the observation that straggling of energy loss can be approximated by a log-normal distribution of energy deposition. The model was applied to calculate microdosimetric spectra in spherical targets of nanometer dimensions from heavy ions at energies between 0.3 and 500 MeV amu⁻¹. We recalculated the originally assumed 1/E² initial delta-electron spectrum by applying the Continuous Slowing Down Approximation for secondary electrons. We also modified the energy deposition from electrons of energy below 100 keV, taking into account the effective path length of the scattered electrons. Results of our model calculations agree favourably with results of Monte Carlo track structure simulations using MOCA-14 for light ions (Z = 1-8) of energy ranging from E = 0.3 to 10.0 MeV amu⁻¹ as well as with results of Nikjoo for a wall-less proportional counter (Z = 18).

  5. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, namely mean (μCN, μL, μR) and standard deviation (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with σL and σR changed randomly.
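The iterate-until-constant step can be sketched as follows. This is a simplified assumption-laden reading of the abstract (particles are assigned to whichever fragment centre is closer, and the centres are jittered instead of the widths), not the authors' actual code:

```python
import random

def toy_fission(n_particles=1000, mu_cn=0.0, sigma_cn=1.0,
                mu_l=-1.0, mu_r=1.0, max_iter=50, seed=1):
    """Toy scission: particles drawn around the compound-nucleus centre
    are assigned to whichever fragment centre lies closer, iterating
    until the fragment counts (N_L, N_R) stop changing.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    particles = [rng.gauss(mu_cn, sigma_cn) for _ in range(n_particles)]
    prev = None
    for _ in range(max_iter):
        n_left = sum(1 for x in particles if abs(x - mu_l) < abs(x - mu_r))
        n_right = n_particles - n_left
        if (n_left, n_right) == prev:
            break
        prev = (n_left, n_right)
        # jitter the fragment centres, mimicking the random smashing step
        mu_l += rng.gauss(0, 0.01)
        mu_r += rng.gauss(0, 0.01)
    return prev

n_l, n_r = toy_fission()
```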

  6. Statistical mechanics of light elements at high pressure. VI - Liquid-state calculations with Thomas-Fermi-Dirac theory

    NASA Technical Reports Server (NTRS)

    Macfarlane, J. J.

    1984-01-01

    A model free energy is developed for hydrogen-helium mixtures based on solid-state Thomas-Fermi-Dirac calculations at pressures relevant to the interiors of giant planets. Using a model potential similar to that for a two-component plasma, effective charges for the nuclei (which are in general smaller than the actual charges because of screening effects) are parameterized, constrained by calculations at a number of densities, compositions, and lattice structures. These model potentials are then used to compute the equilibrium properties of H-He fluids using a charged hard-sphere model. The results give critical temperatures of about 0 K, 500 K, and 1500 K for pressures of 10, 100, and 1000 Mbar, respectively. These phase separation temperatures are considerably lower than those found from calculations using free-electron perturbation theory (approximately 6,000-10,000 K), and suggest that H-He solutions should be stable against phase separation in the metallic zones of Jupiter and Saturn.

  7. Measurements and calculations of transport AC loss in second generation high temperature superconducting pancake coils

    NASA Astrophysics Data System (ADS)

    Yuan, Weijia; Coombs, T. A.; Kim, Jae-Ho; Han Kim, Chul; Kvitkovic, Jozef; Pamidi, Sastry

    2011-12-01

    Theoretical and experimental AC loss data for a superconducting pancake coil wound from second generation (2G) conductors are presented. An anisotropic critical state model is used to calculate the critical current and AC losses of the coil. In the coil there are two regions, the critical state region and the subcritical region; the model assumes that in the subcritical region the flux lines are parallel to the wide face of the tape. AC losses of the superconducting pancake coil are calculated using this model. Both calorimetric and electrical techniques were used to measure AC losses in the coil. The calorimetric method is based on measuring the boil-off rate of liquid nitrogen. The electrical method used a compensation circuit to eliminate the inductive component and measure the loss voltage of the coil. The experimental results are consistent with the theoretical calculations, validating the anisotropic critical state model for loss estimation in superconducting pancake coils.
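For context, a widely quoted analytical baseline for transport AC loss in a single thin superconducting strip is the Norris expression. This is not the paper's anisotropic critical-state coil model, only a common point of comparison for tape-level losses:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def norris_strip_loss(i_peak, i_c):
    """Transport AC loss per cycle per metre (J/m) of a thin
    superconducting strip carrying a sinusoidal current with peak
    i_peak, after Norris (1970).  F = i_peak / i_c must lie in (0, 1)."""
    f = i_peak / i_c
    if not 0 < f < 1:
        raise ValueError("peak current must satisfy 0 < I/Ic < 1")
    return (MU0 * i_c**2 / math.pi) * (
        (1 - f) * math.log(1 - f) + (1 + f) * math.log(1 + f) - f**2
    )
```

The loss grows steeply (roughly as F⁴ for small F), which is why coil models that account for the field of neighbouring turns, like the one in this record, are needed beyond the single-tape estimate.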

  8. Development of computer-aided design system of elastic sensitive elements of automatic metering devices

    NASA Astrophysics Data System (ADS)

    Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.

    2018-05-01

    The object of research is the element base of control and automation system devices, including annular elastic sensitive elements, methods for their modeling, calculation algorithms, and software complexes for automating their design. The article is devoted to the development of a computer-aided design system for the elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, as well as the results of static and dynamic analysis, the calculation of the elastic elements is carried out using the capabilities of modern software systems based on numerical simulation. In the simulation, the model was divided into a hexagonal grid of finite elements with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analysis are presented in this article.

  9. Comparing Budget-based and Tracer-based Residence Times in Butte Basin, California

    NASA Astrophysics Data System (ADS)

    Moran, J. E.; Visser, A.; Esser, B.; Buck, C.

    2017-12-01

    The California Sustainable Groundwater Management Act of 2014 (SGMA) calls for basin-scale Groundwater Sustainability Plans (GSPs) that include a water budget covering a 50 year planning horizon. A nine-layer Integrated Water Flow Model (IWFM) developed for Butte Basin, California, allows examination of water budgets within 36 sub-regions with varying land and water use, to inform SGMA efforts. Detailed land use, soil type, groundwater pumping, and surface water delivery data were applied in the finite element IWFM calibration. In a sustainable system, the volume of storage does not change over a defined time period, and the residence time can be calculated from the water storage volume divided by the flux (recharge or discharge rate). Groundwater ages based on environmental tracer data reflect the mean residence time of groundwater, or its inverse, the turnover rate. Comparisons between budget-based residence times determined from storage and flux, and residence times determined from isotopic tracers of groundwater age, can provide insight into data quality, model reliability, and system sustainability. Budget-based groundwater residence times were calculated from IWFM model output by assuming constant storage and dividing by either averaged annual net recharge or discharge. Calculated residence times range between approximately 100 and 1000 years, with shorter times in subregions where pumping dominates discharge. Independently, 174 wells within the model boundaries were analyzed for tritium-helium groundwater age as part of the California Groundwater Ambient Monitoring and Assessment program. Age distributions from isotopic tracers were compared to model-derived groundwater residence times from groundwater budgets within the subregions of Butte Basin.
Mean, apparent, tracer-based residence times are mostly between 20 and 40 years, but 25% of the long-screened wells that were sampled do not have detectable tritium, indicating residence times of more than about 60 years and broad age distributions. A key factor in making meaningful comparisons is to examine budget-based and tracer-based results over transmissive vertical sections, where pumping increases turnover time.
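The budget-based residence time described above is a simple ratio of storage to flux. A minimal sketch, with illustrative numbers rather than IWFM output:

```python
def budget_residence_time(storage_volume, flux):
    """Budget-based mean residence time: storage divided by net recharge
    or discharge (units must match, e.g. km^3 and km^3/yr).  Its inverse
    is the turnover rate."""
    if flux <= 0:
        raise ValueError("flux must be positive")
    return storage_volume / flux

# e.g. 50 km^3 in storage turned over by 0.1 km^3/yr of net recharge
# gives a mean residence time of 500 years (values illustrative):
t = budget_residence_time(50.0, 0.1)
```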

  10. Icing Branch Current Research Activities in Icing Physics

    NASA Technical Reports Server (NTRS)

    Vargas, Mario

    2009-01-01

    Current development: A grid block transformation scheme that allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities has been developed. A simple ice crystal and sand particle bouncing scheme has been included. An SLD splashing model based on the one developed by William Wright for the LEWICE 3.2.2 software has been added. A new area-based collection efficiency algorithm will be incorporated that calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection efficiency data between blade rows in turbomachinery calculations.

  11. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and radioactivity calculations based on the neutron flux obtained from this method were compared with the measured data.

  12. Development of a Model for Some Aspects of University Policy. Technical Report.

    ERIC Educational Resources Information Center

    Goossens, J. L. M.; And Others

    A method to calculate the need for academic staff per faculty, based on educational programs and numbers of students, is described which is based on quantitative relations between programs, student enrollment, and total budget. The model is described schematically and presented in a mathematical form adapted to computer processing. Its application…

  13. Physiologically-based Pharmacokinetic (PBPK) Models Application to Screen Environmental Hazards Related to Adverse Outcome Pathways (AOPs)

    EPA Science Inventory

    PBPK models are useful in estimating exposure levels based on in vitro to in vivo extrapolation (IVIVE) calculations. Linkage of large sets of chemically screened vitro signature effects to in vivo adverse outcomes using IVIVE is central to the concepts of toxicology in the 21st ...

  14. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process, where an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, numerical calculations of the synthetic photometric brightness based on different shape models are frequently performed. Lebedev quadrature is an efficient method for numerically calculating surface integrals on the unit sphere. By transforming the surface integral over the Cellinoid shape model into one over the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model describes light reflectance behavior from a physical viewpoint, while many empirical models are also used in numerical applications. Numerical simulations were performed to compare the Hapke model with three numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions fit well with synthetic lightcurves generated from the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve numerical efficiency and derive results similar to those of the Hapke model.
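The lowest-order Lebedev rule illustrates the idea behind this quadrature: a small set of symmetric nodes and weights that integrate low-degree polynomials over the unit sphere exactly. Production lightcurve codes use much higher-order Lebedev grids; this 6-point sketch is only for illustration:

```python
import math

# The simplest Lebedev rule: the six octahedron vertices with equal
# weights.  It integrates polynomials up to degree 3 exactly over the
# unit sphere.
LEBEDEV_6 = [((1, 0, 0), 1 / 6), ((-1, 0, 0), 1 / 6),
             ((0, 1, 0), 1 / 6), ((0, -1, 0), 1 / 6),
             ((0, 0, 1), 1 / 6), ((0, 0, -1), 1 / 6)]

def sphere_integral(f, rule=LEBEDEV_6):
    """Approximate the surface integral of f(x, y, z) over the unit sphere."""
    area = 4 * math.pi
    return area * sum(w * f(*p) for p, w in rule)

total = sphere_integral(lambda x, y, z: 1.0)      # ≈ 4*pi (surface area)
moment = sphere_integral(lambda x, y, z: x * x)   # ≈ 4*pi/3
```

For a brightness integral, f would be the local scattering law times the visibility/illumination factor mapped from the shape model onto the sphere, which is exactly the transformation the Cellinoid approach exploits.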

  15. MeltMigrator: A MATLAB-based software for modeling three-dimensional melt migration and crustal thickness variations at mid-ocean ridges following a rules-based approach

    NASA Astrophysics Data System (ADS)

    Bai, Hailong; Montési, Laurent G. J.; Behn, Mark D.

    2017-01-01

    MeltMigrator is a MATLAB®-based melt migration software developed to process three-dimensional mantle temperature and velocity data from user-supplied numerical models of mid-ocean ridges, calculate melt production and melt migration trajectories in the mantle, estimate melt flux along plate boundaries, and predict crustal thickness distribution on the seafloor. MeltMigrator is also capable of calculating compositional evolution depending on the choice of petrologic melting model. Programmed in modules, MeltMigrator is highly customizable and can be expanded to a wide range of applications. We have applied it to complex mid-ocean ridge model settings, including transform faults, oblique segments, ridge migration, asymmetrical spreading, background mantle flow, and ridge-plume interaction. In this technical report, we include an example application to a segmented mid-ocean ridge. MeltMigrator is available as a supplement to this paper, and it is also available from GitHub and the University of Maryland Geodynamics Group website.

  16. Influence maximization in social networks under an independent cascade-based model

    NASA Astrophysics Data System (ADS)

    Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan

    2016-02-01

    The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding the influential users who contribute most to information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative initial opinions. As more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who retained positive opinions was used to determine positive influence. The corresponding influential users with maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two baseline methods. The proposed model resulted in larger positive influence, indicating better performance than the baseline methods.
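The plain independent cascade mechanism underlying models like IMIC-OC can be sketched as follows. The graph and activation probability are illustrative, and the opinion-tracking layer specific to IMIC-OC is omitted:

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each inactive neighbour with
    probability p.  Returns the set of activated nodes."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Tiny illustrative graph (adjacency lists); with p=1 every reachable
# node activates.
g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
spread = independent_cascade(g, seeds={0}, p=1.0)
```

Influence maximization then searches for the seed set whose expected spread (averaged over many such runs) is largest.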

  17. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several studies, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new discoveries about the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.

  18. CL-20/DNB co-crystal based PBX with PEG: molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Gao, Pei; Xiao, Ji Jun; Zhao, Feng; Xiao, He Ming

    2016-12-01

    Molecular dynamics simulation was carried out for CL-20/DNB co-crystal based PBX (polymer-bonded explosive) blended with polymer PEG (polyethylene glycol). In this paper, the miscibility of the PBX models is investigated through the calculated binding energy. Pair correlation function (PCF) analysis is applied to study the interaction of the interface structures in the PBX models. The mechanical properties of PBXs are also discussed to understand the change of the mechanical properties after adding the polymer. Moreover, the calculated diffusion coefficients of the interfacial explosive molecules are used to discuss the dispersal ability of CL-20 and DNB molecules in the interface layer.

  19. Numerical modeling on carbon fiber composite material in Gaussian beam laser based on ANSYS

    NASA Astrophysics Data System (ADS)

    Luo, Ji-jun; Hou, Su-xia; Xu, Jun; Yang, Wei-jun; Zhao, Yun-fang

    2014-02-01

    Based on heat transfer theory and the finite element method, a macroscopic ablation model of a surface irradiated by a Gaussian laser beam is built, and the temperature field and the development of thermal ablation are calculated and analyzed using the finite element software ANSYS. The calculation results show that the ablation behavior of the material differs under different irradiation conditions. The laser-irradiated surface is a curved surface rather than a flat one, with the lowest point receiving the highest power density. The research shows that the higher the laser power density absorbed by the material surface, the faster the irradiated surface regresses.

  20. Trust-based information system architecture for personal wellness.

    PubMed

    Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd

    2014-01-01

    Modern eHealth, ubiquitous health and personal wellness systems operate in an unsecure and ubiquitous information space where no predefined trust exists. This paper presents a novel information model and architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider and to use it to design personal privacy policies for the trustworthy use of health and wellness services. For trust calculation, a novel set of measurable, context-aware and health information-sensitive attributes was developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service provider specific policies. Focus groups and information modelling were used to develop the wellness information model. A system analysis method based on sequential steps, combining the results of the privacy and trust analysis with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g. trust calculation, decision support, policy management and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.

  1. Metal Accretion onto White Dwarfs. II. A Better Approach Based on Time-Dependent Calculations in Static Models

    NASA Astrophysics Data System (ADS)

    Fontaine, G.; Dufour, P.; Chayer, P.; Dupuis, J.; Brassard, P.

    2015-06-01

    The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Inferences on the process based on diffusion timescale arguments make the implicit assumption that the concentration gradient of a given metal at the base of the convection zone is negligible. This assumption is, in fact, not rigorously valid, but it allows the decoupling of the surface abundance from the evolving distribution of a given metal in deeper layers. A better approach is a full time-dependent calculation of the evolution of the abundance profile of an accreting-diffusing element. We used the same approach as that developed by Dupuis et al. to model accretion episodes involving many more elements than those considered by these authors. Our calculations incorporate the improvements to diffusion physics mentioned in Paper I. The basic assumption in the Dupuis et al. approach is that the accreted metals are trace elements, i.e., that they have no effects on the background (DA or non-DA) stellar structure. This allows us to consider an arbitrary number of accreting elements.

  2. Nanomechanical properties of phospholipid microbubbles.

    PubMed

    Buchner Santos, Evelyn; Morris, Julia K; Glynos, Emmanouil; Sboros, Vassilis; Koutsos, Vasileios

    2012-04-03

    This study uses atomic force microscopy (AFM) force-deformation (F-Δ) curves to investigate for the first time the Young's modulus of a phospholipid microbubble (MB) ultrasound contrast agent. The stiffness of the MBs was calculated from the gradient of the F-Δ curves, and the Young's modulus of the MB shell was calculated by employing two different mechanical models based on Reissner and elastic membrane theories. We found that the relatively soft phospholipid-based MBs behave inherently differently from stiffer, polymer-based MBs [Glynos, E.; Koutsos, V.; McDicken, W. N.; Moran, C. M.; Pye, S. D.; Ross, J. A.; Sboros, V. Langmuir 2009, 25 (13), 7514-7522] and that elastic membrane theory is the most appropriate of the models tested for evaluating the Young's modulus of the phospholipid shell, agreeing with values available for living cell membranes, supported lipid bilayers, and synthetic phospholipid vesicles. Furthermore, we show that AFM F-Δ curves in combination with a suitable mechanical model can assess the shell properties of phospholipid MBs. The "effective" Young's modulus of the whole bubble was also calculated by analysis using Hertz theory. This analysis yielded values in agreement with results from studies that used Hertz theory to analyze similar systems such as cells.
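As a sketch of how a shell modulus can follow from the measured F-Δ gradient, the commonly used thin-shell (Reissner) point-load relation k = 4Eh²/(R√(3(1−ν²))) can be inverted for E. This is an assumption about the model form, not taken from the paper, and the numbers below are illustrative, not the study's measurements:

```python
import math

def youngs_modulus_reissner(stiffness, radius, thickness, poisson=0.5):
    """Shell Young's modulus (Pa) from the measured F-Δ gradient
    (stiffness, N/m) using the thin-shell point-load relation
    k = 4*E*h^2 / (R * sqrt(3*(1 - nu^2))), solved for E.
    radius and thickness in metres; poisson is the shell Poisson ratio."""
    return stiffness * radius * math.sqrt(3 * (1 - poisson**2)) / (4 * thickness**2)

# Hypothetical values: k = 0.02 N/m, R = 2 um bubble, h = 4 nm shell.
E = youngs_modulus_reissner(0.02, 2e-6, 4e-9)
```

Because E scales as 1/h², the inferred modulus is very sensitive to the assumed shell thickness, one reason comparing model families (Reissner, membrane, Hertz) matters for soft shells.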

  3. Quantum chemical calculations for polymers and organic compounds

    NASA Technical Reports Server (NTRS)

    Lopez, J.; Yang, C.

    1982-01-01

    The relativistic effects of the orbiting electrons on a model compound were calculated. The computational method used was based on 'Modified Neglect of Differential Overlap' (MNDO). The compound tetracyanoplatinate was used since empirical measurements and calculations along "classical" lines had yielded many known properties. The purpose was to show that for large molecules relativistic effects cannot be ignored, and that these effects can be calculated to yield data in closer agreement with empirical measurements. Both the energy band structure and molecular orbitals are depicted.

  4. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  5. Benchmark model correction of monitoring system based on Dynamic Load Test of Bridge

    NASA Astrophysics Data System (ADS)

    Shi, Jing-xian; Fan, Jiang

    2018-03-01

    Structural health monitoring (SHM) is an active field of research aimed at bridge safety and reliability assessment, which must be carried out on the basis of an accurate finite element model of the structure. A bridge finite element model simplifies the structural section form, support conditions, material properties and boundary conditions based on the design and construction drawings, and yields the calculation model and its results. However, a finite element model established according to the design and specification requirements cannot fully reflect the true state of the bridge, so it must be modified to obtain a more accurate model. Taking the Da-guan river crossing of the Ma-Zhao highway in Yunnan province as the background, a dynamic load test was performed. The impact coefficient of the theoretical model of the bridge differed greatly from the coefficient obtained in the actual test, and the variation also differed; the calculation model was therefore adjusted according to the actual situation to obtain the correct frequencies of the bridge. The revised impact coefficient showed that the modified finite element model is closer to the real state, providing a basis for correction of the finite element model.

  6. PEM-West trajectory climatology and photochemical model sensitivity study prepared using retrospective meteorological data

    NASA Technical Reports Server (NTRS)

    Merrill, John T.; Rodriguez, Jose M.

    1991-01-01

    Trajectory and photochemical model calculations based on retrospective meteorological data for the operations areas of the NASA Pacific Exploratory Mission (PEM)-West mission are summarized. The trajectory climatology discussed here is intended to provide guidance for flight planning and initial data interpretation during the field phase of the expedition by indicating the most probable path air parcels are likely to take to reach various points in the area. The photochemical model calculations which are discussed indicate the sensitivity of the chemical environment to various initial chemical concentrations and to conditions along the trajectory. In the post-expedition analysis these calculations will be used to provide a climatological context for the meteorological conditions which are encountered in the field.

  7. Improved version of the PHOBOS Glauber Monte Carlo

    DOE PAGES

    Loizides, C.; Nagle, J.; Steinberg, P.

    2015-09-01

    “Glauber” models are used to calculate geometric quantities in the initial state of heavy-ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and the LHC, use Glauber model calculations of various geometric observables to determine the collision centrality. In this document, we describe the assumptions inherent to the approach and provide an updated implementation (v2) of the Monte Carlo based Glauber model calculation originally used by the PHOBOS collaboration. The main improvement with respect to the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
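
    A minimal Monte Carlo Glauber sketch conveys the idea behind such calculations. This is not the PHOBOS code: nucleons are sampled uniformly in a sphere rather than from a Woods-Saxon profile, and the radius and nucleon-nucleon cross-section values are nominal assumptions:

```python
import math
import random

def toy_glauber(b, A=208, R=6.62, sigma_nn=6.4, seed=1):
    """Toy Monte Carlo Glauber (illustrative, not the PHOBOS implementation).

    Nucleons of two identical nuclei are sampled uniformly in a sphere of
    radius R (fm); two nucleons 'collide' when their transverse separation is
    below sqrt(sigma_nn / pi), with sigma_nn in fm^2 (6.4 fm^2 = 64 mb is a
    nominal inelastic nucleon-nucleon cross section). Returns N_part, the
    number of participating nucleons, for impact parameter b (fm)."""
    rng = random.Random(seed)
    d2 = sigma_nn / math.pi  # squared transverse interaction distance

    def nucleus():
        pts = []
        while len(pts) < A:
            x, y, z = (rng.uniform(-R, R) for _ in range(3))
            if x * x + y * y + z * z <= R * R:
                pts.append((x, y))  # only transverse coordinates matter
        return pts

    proj = [(x + b / 2, y) for x, y in nucleus()]
    targ = [(x - b / 2, y) for x, y in nucleus()]
    hit_p = sum(1 for px, py in proj
                if any((px - tx) ** 2 + (py - ty) ** 2 < d2 for tx, ty in targ))
    hit_t = sum(1 for tx, ty in targ
                if any((px - tx) ** 2 + (py - ty) ** 2 < d2 for px, py in proj))
    return hit_p + hit_t
```

    At b = 0 nearly every nucleon participates, while beyond b of roughly twice the nuclear radius the participant count drops to zero, which is the handle centrality determination relies on.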

  8. Core excitations across the neutron shell gap in 207Tl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.; Podolyák, Zs.; Grawe, H.

    2015-05-05

    The single closed-neutron-shell, one-proton-hole nucleus 207Tl was populated in deep-inelastic collisions of a 208Pb beam with a 208Pb target. The yrast and near-yrast level scheme has been established up to high excitation energy, comprising an octupole phonon state and a large number of core-excited states. Based on shell-model calculations, all observed single core excitations were established to arise from the breaking of the N=126 neutron core. While the shell-model calculations correctly predict the ordering of these states, their energies are compressed at high spins. It is concluded that this compression is an intrinsic feature of shell-model calculations using two-body matrix elements developed for the description of two-body states, and that multiple core excitations need to be considered in order to accurately calculate the energy spacings of the predominantly three-quasiparticle states.

  9. Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.

    PubMed

    Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong

    2012-10-17

    We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic orbitals (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved in calculating the spin splitting. The Hamiltonian of the 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.

  10. [Biometric bases: basic concepts of probability calculation].

    PubMed

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.
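
    The classical probability model mentioned here reduces to counting equally likely outcomes; a minimal worked example with two fair dice:

```python
from fractions import Fraction
from itertools import product

# Classical model: P(A) = |favourable outcomes| / |all equally likely outcomes|.
outcomes = list(product(range(1, 7), repeat=2))   # all 36 rolls of two fair dice
event = [o for o in outcomes if sum(o) == 7]      # the event "the sum is 7"
p = Fraction(len(event), len(outcomes))           # 6/36 = 1/6
```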

  11. QMEANclust: estimation of protein model quality by combining a composite scoring function with structural density information.

    PubMed

    Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce

    2009-05-20

    The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set, consisting of 20 target proteins each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
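
    The two-stage selection described above can be sketched schematically. The toy one-dimensional "models", the negative-RMSD similarity and the 50% pre-filter cut below are illustrative assumptions standing in for the real QMEAN terms and GDT-type similarity:

```python
import math

def consensus_scores(models, single_model_scores, keep_frac=0.5):
    """Toy version of the pre-filtered consensus idea (not the QMEAN code).

    Rank models by a single-model score, keep the top fraction as the
    reference set, then score every model by its mean similarity to that set.
    Models are toy coordinate lists; similarity is negative RMSD, so higher
    scores are better."""
    ranked = sorted(range(len(models)),
                    key=lambda i: single_model_scores[i], reverse=True)
    ref = ranked[:max(1, int(keep_frac * len(models)))]

    def neg_rmsd(a, b):
        return -math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    scores = []
    for i in range(len(models)):
        others = [j for j in ref if j != i] or ref   # guard for tiny sets
        scores.append(sum(neg_rmsd(models[i], models[j]) for j in others)
                      / len(others))
    return scores
```

    The pre-filter is exactly what protects the consensus from the failure mode mentioned in the abstract: an outlier far from the filtered cluster receives a poor consensus score even if many low-quality decoys resemble it.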

  12. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  13. Modeling of the Temperature-dependent Spectral Response of In(1-x)Ga(x)Sb Infrared Photodetectors

    NASA Technical Reports Server (NTRS)

    Gonzalez-Cuevas, Juan A.; Refaat, Tamer F.; Abedin, M. Nurul; Elsayed-Ali, Hani E.

    2006-01-01

    A model of the spectral responsivity of In(1-x) Ga(x) Sb p-n junction infrared photodetectors has been developed. This model is based on calculations of the photogenerated and diffusion currents in the device. Expressions for the carrier mobilities, absorption coefficient and normal-incidence reflectivity as a function of temperature were derived from extensions made to Adachi and Caughey-Thomas models. Contributions from the Auger recombination mechanism, which increase with a rise in temperature, have also been considered. The responsivity was evaluated for different doping levels, diffusion depths, operating temperatures, and photon energies. Parameters calculated from the model were compared with available experimental data, and good agreement was obtained. These theoretical calculations help to better understand the electro-optical behavior of In(1-x) Ga(x) Sb photodetectors, and can be utilized for performance enhancement through optimization of the device structure.
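
    The photogenerated-current part of such a responsivity model ultimately scales with the ideal photodiode relation R = ηqλ/(hc). A sketch of that textbook relation (the quantum-efficiency value is a placeholder, and the paper's temperature and material dependence is omitted):

```python
def responsivity_a_per_w(wavelength_um, quantum_efficiency):
    """Ideal photodiode responsivity R = eta * q * lambda / (h * c),
    i.e. approximately eta * lambda[um] / 1.23984 A/W. This is the standard
    textbook relation, not the paper's full drift/diffusion model."""
    return quantum_efficiency * wavelength_um / 1.23984

# Example: eta = 0.6 at 2 um (assumed values)
r = responsivity_a_per_w(2.0, 0.6)
```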

  14. Calculation and analysis of cross-sections for p+184W reactions up to 200 MeV

    NASA Astrophysics Data System (ADS)

    Sun, Jian-Ping; Zhang, Zheng-Jun; Han, Yin-Lu

    2015-08-01

    A set of optimal proton optical potential parameters for p+184W reactions is obtained at incident proton energies up to 250 MeV. Based on these parameters, the reaction cross sections, elastic scattering angular distributions, energy spectra and double-differential cross sections of proton-induced reactions on 184W are calculated and analyzed using theoretical models which integrate the optical model, distorted-wave Born approximation theory, the intra-nuclear cascade model, the exciton model, Hauser-Feshbach theory and the evaporation model. The calculated results are compared with existing experimental data and good agreement is achieved. Supported by the National Basic Research Program of China, Technology Research of Accelerator Driven Sub-critical System for Nuclear Waste Transmutation (2007CB209903) and the Strategic Priority Research Program of the Chinese Academy of Sciences, Thorium Molten Salt Reactor Nuclear Energy System (XDA02010100)

  15. Quantum chemical determination of Young's modulus of lignin. Calculations on a beta-O-4' model compound.

    PubMed

    Elder, Thomas

    2007-11-01

    The calculation of Young's modulus of lignin has been examined by subjecting a dimeric model compound to strain, coupled with the determination of energy and stress. The computational results, derived from quantum chemical calculations, are in agreement with available experimental results. Changes in geometry indicate that modifications in dihedral angles occur in response to linear strain. At larger levels of strain, bond rupture is evidenced by abrupt changes in energy, structure, and charge. Based on the current calculations, the bond scission may be occurring through a homolytic reaction between aliphatic carbon atoms. These results may have implications in the reactivity of lignin especially when subjected to processing methods that place large mechanical forces on the structure.
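
    The energy-strain route to a modulus can be illustrated with a finite-difference sketch on a synthetic harmonic energy surface. The volume and modulus values below are toy numbers, not lignin data, and the procedure is a generic stand-in for the quantum-chemical scan:

```python
def youngs_modulus_from_energy(strains, energies_j, volume_m3):
    """Estimate E from a three-point energy-strain scan.

    Assumes the harmonic relation U(eps) ~ U0 + 0.5 * V * E * eps^2 and a
    symmetric scan (-eps, 0, +eps); E is recovered from the central second
    difference of the energy. Generic sketch, not the paper's procedure."""
    e_minus, e0, e_plus = energies_j
    eps = strains[2]
    d2u_deps2 = (e_plus - 2.0 * e0 + e_minus) / eps ** 2
    return d2u_deps2 / volume_m3  # modulus in Pa

# Toy harmonic surface with E = 5 GPa and a molecular-scale volume:
E_true, V = 5.0e9, 1.0e-27
U = [0.5 * V * E_true * e * e for e in (-0.01, 0.0, 0.01)]
E_est = youngs_modulus_from_energy((-0.01, 0.0, 0.01), U, V)
```

    The abrupt energy changes reported at large strain are precisely where this harmonic picture stops being valid and bond rupture takes over.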

  16. Calculations of dose distributions using a neural network model

    NASA Astrophysics Data System (ADS)

    Mathieu, R.; Martin, E.; Gschwind, R.; Makovicka, L.; Contassot-Vivier, S.; Bahi, J.

    2005-03-01

    The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconvenience, namely time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map.
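
    As a toy illustration of the approach (far smaller than the clinical networks discussed here, with an assumed exponential depth-dose curve, an assumed architecture and a hand-rolled learning rate), a one-hidden-layer network can be fitted to a depth-dose profile with plain stochastic gradient descent:

```python
import math
import random

# Toy one-hidden-layer network fitted to an assumed attenuation curve.
# Illustrative only: the paper's networks are larger and predict 2-D dose maps.
rng = random.Random(0)
depths = [i / 10 for i in range(31)]            # depths 0..3 (arbitrary units)
dose = [math.exp(-0.5 * d) for d in depths]     # assumed depth-dose curve

H = 8                                           # hidden units (assumed)
w1 = [rng.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in zip(depths, dose)) / len(depths)

lr = 0.02
loss_before = mse()
for _ in range(2000):                           # per-sample SGD with backprop
    for x, y in zip(depths, dose):
        out, h = forward(x)
        err = out - y
        for j in range(H):
            grad_pre = err * w2[j] * (1.0 - h[j] ** 2)  # before updating w2[j]
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_pre * x
            b1[j] -= lr * grad_pre
        b2 -= lr * err
loss_after = mse()
```

    Once trained, evaluating the network is a handful of arithmetic operations per point, which is the source of the "almost instant" inference the abstract contrasts with hours of Monte Carlo computation.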

  17. Calculations of dose distributions using a neural network model.

    PubMed

    Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J

    2005-03-07

    The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconvenience, namely time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map.

  18. WaterSense Program: Methodology for National Water Savings Analysis Model Indoor Residential Water Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, Camilla Dunham; McNeil, Michael

    2008-02-28

    The U.S. Environmental Protection Agency (EPA) influences the market for plumbing fixtures and fittings by encouraging consumers to purchase products that carry the WaterSense label, which certifies those products as performing at low flow rates compared to unlabeled fixtures and fittings. As consumers decide to purchase water-efficient products, water consumption will decline nationwide. Decreased water consumption should prolong the operating life of water and wastewater treatment facilities. This report describes the method used to calculate national water savings attributable to EPA's WaterSense program. A Microsoft Excel spreadsheet model, the National Water Savings (NWS) analysis model, accompanies this methodology report. Version 1.0 of the NWS model evaluates indoor residential water consumption. Two additional documents, a Users' Guide to the spreadsheet model and an Impacts Report, accompany the NWS model and this methodology document. Altogether, these four documents represent Phase One of this project. The Users' Guide leads policy makers through the spreadsheet options available for projecting the water savings that result from various policy scenarios. The Impacts Report shows national water savings that will result from differing degrees of market saturation of high-efficiency water-using products. This detailed methodology report describes the NWS analysis model, which examines the effects of WaterSense by tracking the shipments of products that WaterSense has designated as water-efficient. The model estimates market penetration of products that carry the WaterSense label. Market penetration is calculated for both existing and new construction. The NWS model estimates savings based on an accounting analysis of water-using products and of building stock.
    Estimates of future national water savings will help policy makers further direct the focus of WaterSense and calculate stakeholder impacts from the program. Calculating the total gallons of water the WaterSense program saves nationwide involves integrating two components, or modules, of the NWS model. Module 1 calculates the baseline national water consumption of typical fixtures, fittings, and appliances prior to the program (as described in Section 2.0 of this report). Module 2 develops trends in efficiency for water-using products both in the business-as-usual case and as a result of the program (Section 3.0). The NWS model combines the two modules to calculate total gallons saved by the WaterSense program (Section 4.0). Figure 1 illustrates the modules and the process involved in modeling for the NWS model analysis. The output of the NWS model provides the base case for each end use, as well as a prediction of total residential indoor water consumption during the next two decades. Based on the calculations described in Section 4.0, we can project a timeline of water savings attributable to the WaterSense program. The savings increase each year as the program results in the installation of greater numbers of efficient products, which come to compose more and more of the product stock in households throughout the United States.
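
    The combination of the two modules reduces to simple accounting: savings are the difference between baseline and efficient consumption over the penetrated stock. A back-of-envelope sketch with hypothetical figures (all numbers below are invented for illustration, not NWS model inputs):

```python
def annual_savings_gal(households, gal_per_hh_day_base, gal_per_hh_day_eff,
                       penetration):
    """Back-of-envelope version of the two-module idea (figures hypothetical):
    Module 1 supplies baseline per-household use, Module 2 the efficient-case
    use and the labeled-product market penetration; annual savings are the
    per-household difference over the penetrated stock for one year."""
    per_hh_per_year = (gal_per_hh_day_base - gal_per_hh_day_eff) * 365
    return households * penetration * per_hh_per_year

# Hypothetical: 100M households, 70 vs 60 gal/day, 25% penetration.
saved = annual_savings_gal(100_000_000, 70.0, 60.0, 0.25)
```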

  19. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

    A simple model was developed to calculate the electronic transport parameters of disordered semiconductors in the strongly scattered regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.

  1. New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters

    PubMed Central

    Zhang, Fan; Song, Kaijun; Fan, Yong

    2017-01-01

    A two-dimensional (2D) diffraction model for the calculation of the diffraction field in 2D space, and its applications to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression for the diffraction field in 2D space, where the field is computed as a superposition integral. The results obtained from the proposed diffraction model agree well with those from the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with a superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array, and is applied as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters agree well with those obtained by HFSS, verifying the novel design method for power splitters and showing the good application prospects of the proposed 2D diffraction model. PMID:28181514
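
    The 2D superposition idea can be checked numerically. The sketch below uses a generic cylindrical-wavelet kernel exp(ikr)/sqrt(r) for the secondary sources, which is an assumption standing in for the paper's exact analytical expression; the slit geometry and wavelength are also invented:

```python
import cmath
import math

def field_2d(aperture_pts, obs, k):
    """2-D Huygens-type superposition (a sketch of the idea, not the paper's
    exact formula): each aperture sample radiates a cylindrical wavelet
    exp(i k r) / sqrt(r); the observed field is the discrete superposition."""
    total = 0j
    for xa, ya in aperture_pts:
        r = math.hypot(obs[0] - xa, obs[1] - ya)
        total += cmath.exp(1j * k * r) / math.sqrt(r)
    return total

k = 2 * math.pi / 1.0                              # wavelength 1 (arbitrary units)
slit = [(0.0, y / 100) for y in range(-200, 201)]  # 4-unit slit along y at x = 0
on_axis = abs(field_2d(slit, (50.0, 0.0), k))
off_axis = abs(field_2d(slit, (50.0, 30.0), k))
```

    The on-axis field dominates because the wavelets arrive nearly in phase there, while far off axis they largely cancel, reproducing the familiar diffraction lobes.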

  2. Study on the Mathematical Model of Dielectric Recovery Characteristics in High Voltage SF6 Circuit Breaker

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Wang, Feiming; Xu, Jianyuan; Xia, Yalong; Liu, Weidong

    2016-03-01

    According to streamer theory, this paper proposes a mathematical model of the dielectric recovery characteristic based on the two-temperature ionization equilibrium equation. Taking the dynamic variation of charged-particle ionization and attachment into account, this model can be used in collaboration with the Coulomb collision model, which gives the relationship between the heavy-particle temperature and the electron temperature, to calculate the electron density and temperature under different pressure and electric field conditions, and thereby the breakdown electric field strength under different pressures. Meanwhile, an experimental loop of the circuit breaker has been built to measure the breakdown voltage. It is shown that the calculated results are on the whole in conformity with the experimental results, while results based on the streamer criterion alone are larger than the experimental results. This indicates that the mathematical model proposed here, derived from the streamer model with some improvement and refinement, is more accurate for calculating the dielectric recovery characteristic, and it has great significance for increasing the simulation accuracy of the circuit breaker's interruption characteristic. Supported by the Science and Technology Project of State Grid Corporation of China (No. GY17201200063), the National Natural Science Foundation of China (No. 51277123), and the Basic Research Project of Liaoning Key Laboratory of Education Department (LZ2015055)
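
    A first-order consequence of the ionization-attachment balance underlying the streamer criterion is that the critical breakdown field in SF6 scales with gas density, hence (at fixed temperature) roughly linearly with pressure. A sketch with a nominal room-temperature critical-field constant (the value is a commonly quoted approximation, not taken from this paper):

```python
def sf6_breakdown_field_kv_cm(pressure_bar, e_crit_kv_cm_bar=89.0):
    """First-order streamer-criterion estimate for SF6: breakdown occurs near
    the critical reduced field E/N where ionization balances attachment, so
    the critical field scales linearly with density/pressure. The constant
    ~89 kV/(cm*bar) is a nominal room-temperature value."""
    return e_crit_kv_cm_bar * pressure_bar
```

    The full model in the paper goes beyond this linear picture by tracking the dynamic particle balance during the recovery phase, which is why its results track the experiments more closely than the bare criterion.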

  3. Dosimetry in x-ray-based breast imaging

    PubMed Central

    Dance, David R; Sechopoulos, Ioannis

    2016-01-01

    The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable. PMID:27617767

  4. Dosimetry in x-ray-based breast imaging

    NASA Astrophysics Data System (ADS)

    Dance, David R.; Sechopoulos, Ioannis

    2016-10-01

    The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable.
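
    The conversion-factor approach common to these protocols can be written down compactly. The factor names follow the Dance-style formulation (MGD = K g c s, with Monte Carlo tabulated factors), while the numeric values below are placeholders rather than protocol table entries:

```python
def mean_glandular_dose(incident_air_kerma_mgy, g, c=1.0, s=1.0):
    """Dance-style protocol estimate MGD = K * g * c * s: K is the measured
    incident air kerma, g converts kerma to glandular dose for a reference
    glandularity, c corrects for breast composition and s for the x-ray
    spectrum. The factors are Monte Carlo tabulated in the protocols; the
    example values below are placeholders."""
    return incident_air_kerma_mgy * g * c * s

# Placeholder factors for a hypothetical exposure of 5 mGy incident air kerma:
mgd = mean_glandular_dose(5.0, g=0.2, c=1.05, s=1.042)
```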

  5. Research into the rationality and the application scopes of different melting models of nanoparticles

    NASA Astrophysics Data System (ADS)

    Fu, Qingshan; Xue, Yongqiang; Cui, Zixiang; Duan, Huijuan

    2017-07-01

    A rational melting model is indispensable to address the fundamental issue regarding the melting of nanoparticles. To ascertain the rationality and the application scopes of the three classical thermodynamic models, namely the Pawlow, Rie, and Reiss melting models, corresponding accurate equations for the size-dependent melting temperature of nanoparticles were derived. Comparison of the melting temperatures of Au, Al, and Sn nanoparticles calculated by the accurate equations with available experimental results demonstrates that both the Reiss and Rie melting models are rational and capable of accurately describing the melting behaviors of nanoparticles at different melting stages. The former (surface pre-melting) is applicable to the stage from initial melting to the critical thickness of the liquid shell, while the latter (solid particles surrounded by a great deal of liquid) applies from the critical thickness to complete melting. The melting temperatures calculated by the accurate equation based on the Reiss melting model are in good agreement with experimental results within the whole size range of calculation, compared with those by other theoretical models. In addition, the critical thickness of the liquid shell is found to decrease with decreasing particle size and varies linearly with particle size. The accurate thermodynamic equations based on the Reiss and Rie melting models enable us to quantitatively and conveniently predict and explain the melting behaviors of nanoparticles over the whole size range and the whole melting process.
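
    To leading order, the classical models discussed here all reduce to a 1/d melting-point depression. A sketch with an assumed depression constant (the functional form is the common leading-order result; the constant for Au below is an assumed illustrative value, not fitted data from the paper):

```python
def melting_temperature(d_nm, t_bulk_k, beta_nm):
    """Leading-order size-dependent melting law T_m(d) = T_bulk * (1 - beta/d),
    shared by the Pawlow/Rie/Reiss-type models at first order; beta lumps the
    surface-energy and density terms and is treated here as a fitted constant."""
    return t_bulk_k * (1.0 - beta_nm / d_nm)

# Au: bulk T_m = 1337 K; beta = 1 nm is an assumed value that reproduces the
# strong depression reported for few-nanometre gold particles.
t_20nm = melting_temperature(20.0, 1337.0, 1.0)
t_4nm = melting_temperature(4.0, 1337.0, 1.0)
```

    The models differ in how beta is derived and in which melting stage they describe, which is exactly the distinction the accurate equations in the paper draw out.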

  6. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. 
The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
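
    The quoted confidence limits translate directly into a simple acceptance test. How the percentage and absolute limits combine (here as an OR, so a point passes if either criterion is met) is an assumption about the intended policy, not stated explicitly in the abstract:

```python
def passes_verification(d_tps_cgy, d_indep_cgy, prescribed_cgy,
                        off_axis_or_low_dose=False):
    """Acceptance test mirroring the tolerances quoted in the abstract:
    3% of the prescribed dose or 6 cGy in the standard case, relaxed to
    5% / 10 cGy for off-axis points (> 5 cm) or low-dose regions.
    Combining the two limits with OR is an assumption."""
    pct, abs_cgy = (5.0, 10.0) if off_axis_or_low_dose else (3.0, 6.0)
    deviation = abs(d_tps_cgy - d_indep_cgy)
    return deviation <= prescribed_cgy * pct / 100.0 or deviation <= abs_cgy
```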

  7. TensorCalculator: exploring the evolution of mechanical stress in the CCMV capsid

    NASA Astrophysics Data System (ADS)

    Kononova, Olga; Maksudov, Farkhad; Marx, Kenneth A.; Barsegov, Valeri

    2018-01-01

    A new computational methodology for the accurate numerical calculation of the Cauchy stress tensor, stress invariants, principal stress components, von Mises and Tresca tensors is developed. The methodology is based on the atomic stress approach which permits the calculation of stress tensors, widely used in continuum mechanics modeling of materials properties, using the output from the MD simulations of discrete atomic and C_α-based coarse-grained structural models of biological particles. The methodology mapped into the software package TensorCalculator was successfully applied to the empty cowpea chlorotic mottle virus (CCMV) shell to explore the evolution of mechanical stress in this specific example of a mechanically tested soft virus capsid. We found an inhomogeneous stress distribution in various portions of the CCMV structure and stress transfer from one portion of the virus structure to another, which also points to the importance of entropic effects, often ignored in finite element analysis and elastic network modeling. We formulate a criterion for elastic deformation using the first principal stress components. Furthermore, we show that von Mises and Tresca stress tensors can be used to predict the onset of a viral capsid’s mechanical failure, which leads to total structural collapse. TensorCalculator can be used to study stress evolution and dynamics of defects in viral capsids and other large-size protein assemblies.
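
    The von Mises quantity used for failure prediction follows from the standard stress-tensor invariant. A sketch of that textbook formula (this is not code from the TensorCalculator package, and the example tensors are toy values):

```python
import math

def von_mises(s):
    """von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor
    given as nested lists; the standard invariant formula, not TensorCalculator
    code. Units follow the input (e.g. MPa in, MPa out)."""
    return math.sqrt(0.5 * ((s[0][0] - s[1][1]) ** 2
                            + (s[1][1] - s[2][2]) ** 2
                            + (s[2][2] - s[0][0]) ** 2
                            + 6.0 * (s[0][1] ** 2 + s[1][2] ** 2 + s[0][2] ** 2)))

uniaxial = [[100.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]      # pure tension
hydrostatic = [[50.0, 0.0, 0.0], [0.0, 50.0, 0.0], [0.0, 0.0, 50.0]]  # pure pressure
```

    The hydrostatic case illustrates why von Mises is a shear-driven failure measure: uniform pressure produces zero equivalent stress, so only deviatoric loading counts toward the predicted onset of failure.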

  8. A New Reliability Analysis Model of the Chegongzhuang Heat-Supplying Tunnel Structure Considering the Coupling of Pipeline Thrust and Thermal Effect

    PubMed Central

    Zhang, Jiawen; He, Shaohui; Wang, Dahai; Liu, Yangpeng; Yao, Wenbo; Liu, Xiabing

    2018-01-01

    Based on the operating Chegongzhuang heat-supplying tunnel in Beijing, the reliability of its lining structure under the combined action of large pipeline thrust and thermal effects is studied. According to the service characteristics of a heat-supplying tunnel, a three-dimensional numerical analysis model was established based on mechanical tests of in-situ specimens. The stress and strain of the tunnel structure were obtained before and after the start of operation, and the rationality of the model was verified against field monitoring data. The internal forces of the lining structure were extracted to define the performance function, and an improved subset simulation method was proposed to calculate the reliability of the main control section of the tunnel. In contrast to the traditional calculation method, the analytic relationship between the number of samples required by the subset simulation method and by the Monte Carlo method was given. The results indicate that the lining structure is strongly influenced by the coupling within six meters of the fixed brackets, especially at the tunnel floor. The improved subset simulation method greatly reduces computation time and improves computational efficiency while ensuring the accuracy of the calculation. It is well suited to reliability calculations in tunnel engineering because "the lower the probability, the more efficient the calculation." PMID:29401691
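
Subset simulation's advantage over crude Monte Carlo is that a rare failure probability is built up as a product of larger conditional probabilities, one per level, so far fewer samples are needed at low probabilities. A simplified one-dimensional sketch (a basic modified-Metropolis variant with an illustrative limit-state function, not the paper's improved method):

```python
import numpy as np

def subset_simulation(g, n=1000, p0=0.1, max_levels=10, seed=0):
    """Estimate the small failure probability P(g(X) > 0) for standard-normal
    X as a product of conditional probabilities p0 per level."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = g(x)
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(y, 1.0 - p0)
        if thresh >= 0.0:                     # failure region reached
            return prob * float(np.mean(y > 0.0))
        prob *= p0
        chain = list(x[y > thresh])           # seeds in the intermediate region
        i = 0
        while len(chain) < n:                 # grow conditional sample back to n
            s = chain[i % len(chain)]
            cand = s + rng.standard_normal()
            # Metropolis accept/reject against the standard-normal prior
            if rng.random() < np.exp(0.5 * (s * s - cand * cand)):
                s = cand
            # keep the move only if it stays in the conditional region
            chain.append(s if g(np.array([s]))[0] > thresh else chain[i % len(chain)])
            i += 1
        x = np.asarray(chain[:n])
        y = g(x)
    return prob

# P(X > 3) for standard-normal X; the exact value is about 1.35e-3
p = subset_simulation(lambda x: np.asarray(x) - 3.0)
```

With n = 1000 per level this reaches a ~1e-3 probability in three levels, whereas crude Monte Carlo would need on the order of 10^5 samples for a comparable estimate.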

  9. A hybrid deep neural network and physically based distributed model for river stage prediction

    NASA Astrophysics Data System (ADS)

    Hitokoto, Masayuki; Sakuraba, Masaaki

    2016-04-01

    We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used, trained with deep learning techniques: the network weights were optimized by stochastic gradient descent based on back-propagation, with a denoising autoencoder used for pre-training. The inputs of the ANN model are the hourly change of water level and the hourly rainfall; the output is the water level at the downstream station. In general, desirable ANN inputs correlate strongly with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by catchment storage. Therefore, the change of catchment storage, i.e., rainfall minus downstream discharge, is a potent input candidate for the ANN model in place of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; next, the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from observed rainfall and discharge data. The developed model was applied to one catchment of the OOYODO River, a first-grade river in Japan. The modelled catchment is 695 square km. For the training data, 5 water-level gauging stations and 14 rain-gauge stations in the catchment were used, and the 24 largest flood events during 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, with a separate model developed for each lead time. To set proper learning parameters and network architecture for the ANN model, a sensitivity analysis was carried out as a case study. The predictions were evaluated on the 4 largest flood events by leave-one-out cross-validation. The basic 4-layer ANN performed better than a conventional 3-layer ANN model; however, it did not reproduce the biggest flood event well, presumably because of the lack of sufficiently high water levels among the training events. The hybrid model outperformed both the basic ANN model and the distributed model, and in particular improved on the basic ANN model for the biggest flood event.
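
The storage-change input that distinguishes the hybrid model is just rainfall minus downstream discharge expressed in consistent units; a small sketch (unit conventions and names are illustrative, not from the paper):

```python
import numpy as np

def storage_change_series(rain_mm_h, discharge_m3_s, area_km2):
    """Hourly catchment-storage change (mm/h): rainfall minus discharge,
    with discharge converted to an equivalent depth over the basin."""
    q = np.asarray(discharge_m3_s, dtype=float)
    # m^3/s -> mm/h over the catchment: q * 3600 s/h / (area in m^2) * 1000 mm/m
    q_mm_h = q * 3600.0 / (area_km2 * 1e6) * 1000.0
    return np.asarray(rain_mm_h, dtype=float) - q_mm_h

# One rainy hour and one dry hour for a 695 km^2 basin (the paper's area)
ds = storage_change_series([5.0, 0.0], [100.0, 100.0], area_km2=695.0)
```

During dry hours the series goes negative (the catchment drains), which is exactly the recession information a rainfall-only input cannot carry.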

  10. A battery power model for the EUVE spacecraft

    NASA Technical Reports Server (NTRS)

    Yen, Wen L.; Littlefield, Ronald G.; Mclean, David R.; Tuchman, Alan; Broseghini, Todd A.; Page, Brenda J.

    1993-01-01

    This paper describes a battery power model that has been developed to simulate and predict the behavior of the 50 ampere-hour nickel-cadmium battery that supports the Extreme Ultraviolet Explorer (EUVE) spacecraft in its low Earth orbit. First, for given orbit, attitude, solar array panel and spacecraft load data, the model calculates minute-by-minute values for the net power available for charging the battery for a user-specified time period (usually about two weeks). Next, the model is used to calculate minute-by-minute values for the battery voltage, current and state-of-charge for the time period. The model's calculations are explained for its three phases: sunrise charging phase, constant voltage phase, and discharge phase. A comparison of predicted model values for voltage, current and state-of-charge with telemetry data for a complete charge-discharge cycle shows good correlation. This C-based computer model will be used by the EUVE Flight Operations Team for various 'what-if' scheduling analyses.
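
The minute-by-minute bookkeeping such a model performs can be sketched as a simple state-of-charge integration (bus voltage, charge efficiency, and the single-phase treatment here are illustrative assumptions, not EUVE values):

```python
def simulate_battery(net_power_w, capacity_ah=50.0, bus_voltage=28.0,
                     soc0=1.0, charge_eff=0.85, dt_min=1.0):
    """Minute-by-minute state-of-charge bookkeeping for a NiCd battery,
    in the spirit of the EUVE model but without its separate sunrise-
    charging / constant-voltage / discharge phase logic."""
    soc = soc0
    history = []
    for p in net_power_w:               # net power available to the battery
        i = p / bus_voltage             # amperes (positive = charging)
        dah = i * dt_min / 60.0         # ampere-hours this minute
        if dah > 0:
            dah *= charge_eff           # charging losses
        soc = min(1.0, max(0.0, soc + dah / capacity_ah))
        history.append(soc)
    return history

# 30 min of eclipse discharge at 280 W, then 30 min of charging at 280 W
soc = simulate_battery([-280.0] * 30 + [280.0] * 30, soc0=0.9)
```

The asymmetry between the discharge and recharge legs (10% out, 8.5% back in) is why charge-efficiency modelling matters over many orbits.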

  11. Calculations on the Back of an Envelope Model: Applying Seasonal Fecundity Models to Species’ Range Limits

    EPA Science Inventory

    Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...

  12. Ab-Initio Molecular Dynamics Simulations of Molten Ni-Based Superalloys (Preprint)

    DTIC Science & Technology

    2011-10-01

    Fundamental properties of molten Ni-based alloys, required for modeling solidification instabilities, are assessed; in particular, the variation in liquid-metal density with composition and temperature across the solidification zone is evaluated in model Ni-Al-W and RENE-N4 alloys. Calculations are performed using a recently implemented constant-pressure (NPT) methodology.

  13. Model for Vortex Ring State Influence on Rotorcraft Flight Dynamics

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2005-01-01

    The influence of vortex ring state (VRS) on rotorcraft flight dynamics is investigated, specifically the vertical velocity drop of helicopters and the roll-off of tiltrotors encountering VRS. The available wind tunnel and flight test data for rotors in vortex ring state are reviewed. Test data for axial flow, non-axial flow, two rotors, unsteadiness, and vortex ring state boundaries are described and discussed. Based on the available measured data, a VRS model is developed. The VRS model is a parametric extension of momentum theory for calculation of the mean inflow of a rotor, hence suitable for simple calculations and real-time simulations. This inflow model is primarily defined in terms of the stability boundary of the aircraft motion. Calculations of helicopter response during VRS encounter were performed, and good correlation is shown with the vertical velocity drop measured in flight tests. Calculations of tiltrotor response during VRS encounter were performed, showing the roll-off behavior characteristic of tiltrotors. Hence it is possible, using a model of the mean inflow of an isolated rotor, to explain the basic behavior of both helicopters and tiltrotors in vortex ring state.
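
The momentum-theory baseline that the VRS model parametrically extends gives the mean induced velocity in axial flight; a minimal sketch (the helicopter numbers are illustrative, and the formula is valid only outside the VRS region, i.e., in climb or fast descent):

```python
import math

def induced_velocity(thrust_n, rho, rotor_radius_m, climb_vel=0.0):
    """Classical momentum-theory induced velocity for axial flight.
    Solves vi * (Vc + vi) = vh^2, with vh the hover induced velocity."""
    area = math.pi * rotor_radius_m ** 2
    vh = math.sqrt(thrust_n / (2.0 * rho * area))   # hover induced velocity
    # vi = -Vc/2 + sqrt((Vc/2)^2 + vh^2)
    return -climb_vel / 2.0 + math.sqrt((climb_vel / 2.0) ** 2 + vh ** 2)

# Hover: ~4500 kg helicopter, 8 m rotor radius, sea-level air (illustrative)
vh = induced_velocity(4500 * 9.81, 1.225, 8.0)
```

Between the climb and windmill-brake branches this momentum solution loses validity, which is precisely the descent regime the paper's parametric VRS extension and its stability boundary are built to cover.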

  14. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 1: Analysis and results

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct injection type is reported. This model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat transfer equation, and a mass transfer equation. This fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas-phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.

  15. Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions

    PubMed Central

    Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan

    2015-01-01

    In this study, the accuracy (precision and trueness) of digital impressions and the fitness of single crowns manufactured based on digital impressions were evaluated. Epoxy resin dentitions for teeth #14-17 were made, with a full-crown preparation of an extracted natural tooth embedded at #16. (1) To assess precision, deviations among repeated scan models made by the intraoral scanners TRIOS and MHT and the model scanners D700 and inEos were calculated through a best-fit algorithm and three-dimensional (3D) comparison; root mean square (RMS) values and color-coded difference images were obtained. (2) To assess trueness, micro computed tomography (micro-CT) was used to obtain the reference model (REF), and deviations between REF and the repeated scan models from (1) were calculated. (3) To assess fitness, single crowns were manufactured based on the TRIOS, MHT, D700 and inEos scan models, and the adhesive gaps were evaluated under a stereomicroscope after cross-sectioning. Digital impressions showed lower precision and better trueness. Except for MHT, the mean RMS values for precision were lower than 10 μm. Digital impressions showed better internal fitness, and the fitness of single crowns based on digital impressions was up to the clinical standard. Digital impressions could be an alternative method for single-crown manufacturing. PMID:28793417
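
The RMS deviation metric used for precision and trueness is straightforward once two scans are registered; a toy sketch (the alignment itself, e.g. a best-fit/ICP registration, is not shown):

```python
import numpy as np

def rms_deviation(scan_pts, ref_pts):
    """Root-mean-square of corresponding point-to-point deviations between
    two already-aligned point sets (N x 3 arrays)."""
    d = np.linalg.norm(np.asarray(scan_pts) - np.asarray(ref_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Two toy surfaces offset by 10 um along z (coordinates in mm)
a = np.zeros((4, 3))
b = a + np.array([0.0, 0.0, 0.010])
rms = rms_deviation(a, b)            # a uniform 10 um offset
```

A uniform offset of 10 μm yields an RMS of exactly 10 μm, the threshold the study uses to judge scanner precision.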

  16. Method Development for Clinical Comprehensive Evaluation of Pediatric Drugs Based on Multi-Criteria Decision Analysis: Application to Inhaled Corticosteroids for Children with Asthma.

    PubMed

    Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling

    2018-04-01

    Establishing a comprehensive clinical evaluation system is critical to enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; a systematic and objective method is therefore required to guide the process. An evidence-based multi-criteria decision analysis model involving an analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weights based on the AHP; (4) construct the body of evidence for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) place the scores on a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number 7) ranked highest, with a comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number 16), with a score of 79.59, and budesonide inhalation powder (drug ID number 6), with a score of 78.98. In the sensitivity analysis, the rankings of the top five and lowest five drugs remained unchanged, suggesting the model is generally robust. An evidence-based drug evaluation model based on the AHP was successfully developed.
The model incorporates sufficient utility and flexibility for aiding the decision-making process, and can be a useful tool for the CCES-P.
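
Step (3), deriving criterion weights from AHP pairwise comparisons, is conventionally done with the principal eigenvector of the comparison matrix; a sketch with an illustrative matrix (not the paper's criteria or judgments):

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix:
    the normalized principal eigenvector (standard AHP procedure)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3 (a perfectly consistent matrix).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 5/3],
     [1/5, 3/5, 1.0]]
w = ahp_weights(A)
```

For a consistent matrix like this one the weights are exactly proportional to any column, here (1, 1/3, 1/5) normalized; real elicited matrices are usually slightly inconsistent, which is what the eigenvector smooths over.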

  17. A Biomechanical Model for Lung Fibrosis in Proton Beam Therapy

    NASA Astrophysics Data System (ADS)

    King, David J. S.

    The physics of protons makes them well suited to conformal radiotherapy because of the well-known Bragg peak effect. Although a proton's range is set by its stopping power, uncertainty effects can cause a small amount of dose to overflow into an organ at risk (OAR). Previous models for calculating normal tissue complication probabilities (NTCPs) relied on the equivalent uniform dose (EUD) model, in which the organ was split into 1/3, 2/3, or whole-organ irradiation. However, the problem of dealing with volumes <1/3 of the total volume renders this EUD-based approach no longer applicable. In this work the case for an experimental-data-based replacement at low volumes is investigated. Lung fibrosis is investigated as an NTCP effect typically arising from dose overflow during irradiation of tumours at the spinal base. Using a 3D geometrical model of the lungs, irradiations are modelled with variable dose-overflow parameters. To calculate NTCPs without the EUD model, experimental data from the quantitative analysis of normal tissue effects in the clinic (QUANTEC) are used. Additional side projects are also investigated, introduced and explained at various points. A typical radiotherapy course of 30 fractions of 2 Gy is simulated, and a range of target-volume geometries and irradiation types is investigated. For X-rays, the majority of the data-point ratios (ratios of EUD values found from the calculation-based and data-based methods) were within 20% of unity, showing relatively close agreement. The ratios did not systematically favour one predictive method, and no Vx metric was found to consistently outperform another. Agreement is good in certain cases and not in others, as predicted in the literature. The overall results lead to the conclusion that there is no reason to discount the use of the data-based predictive method, particularly as a low-volume replacement predictive method.

  18. MDR-TB patients in KwaZulu-Natal, South Africa: Cost-effectiveness of 5 models of care

    PubMed Central

    Wallengren, Kristina; Reddy, Tarylee; Besada, Donela; Brust, James C. M.; Voce, Anna; Desai, Harsha; Ngozo, Jacqueline; Radebe, Zanele; Master, Iqbal; Padayatchi, Nesri; Daviaud, Emmanuelle

    2018-01-01

    Background South Africa has a high burden of MDR-TB, and to provide accessible treatment the government has introduced different models of care. We report the most cost-effective model after comparing the cost per patient successfully treated across 5 models of care: a centralized hospital, district hospitals (2), and community-based care through clinics or mobile injection teams. Methods In an observational study, five cohorts were followed prospectively. The cost analysis adopted a provider perspective, and the economic cost per patient successfully treated was calculated based on country protocols and the length of treatment per patient per model of care. Logistic regression was used to calculate propensity score weights to compare pairs of treatment groups while adjusting for baseline imbalances between groups. Propensity-score-weighted costs and treatment success rates were used in the ICER analysis. Sensitivity analysis focused on varying treatment success and length of hospitalization within each model. Results Of 1,038 MDR-TB patients, 75% were HIV-infected and 56% were successfully treated. The cost per successfully treated patient was 3 to 4.5 times lower in the community-based models with no hospitalization. Overall, the Mobile model was the most cost-effective. Conclusion Reducing the length of hospitalization and following community-based models of care improves the affordability of MDR-TB treatment without compromising its effectiveness. PMID:29668748
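
The ICER analysis referred to reduces to an incremental cost-effectiveness ratio between pairs of care models; a toy sketch with made-up figures (not the study's costs or success rates):

```python
def icer(cost_a, success_a, cost_b, success_b):
    """Incremental cost-effectiveness ratio: extra cost per additional
    successfully treated patient when moving from model B to model A."""
    return (cost_a - cost_b) / (success_a - success_b)

# Hospital-based model vs mobile community model (illustrative values only)
extra_cost_per_success = icer(20000.0, 0.58, 8000.0, 0.54)
```

Here a 4-percentage-point gain in treatment success costs an extra $12,000 per patient, i.e., $300,000 per additional success, the kind of trade-off the study quantifies with propensity-weighted inputs.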

  19. Calculating permittivity of semi-conductor fillers in composites based on simplified effective medium approximation models

    NASA Astrophysics Data System (ADS)

    Feng, Yefeng; Wu, Qin; Hu, Jianbing; Xu, Zhichao; Peng, Cheng; Xia, Zexu

    2018-03-01

    Interface-induced polarization has a significant impact on the permittivity of 0–3 type polymer composites with Si-based semi-conducting fillers. The polarity of the Si-based filler, the polarity of the polymer matrix and the grain size of the filler are closely connected with the induced polarization and the permittivity of the composites. However, unlike in 2–2 type composites, the real permittivity of Si-based fillers in 0–3 type composites cannot be directly measured. Therefore, obtaining the theoretical permittivity of fillers in 0–3 composites through effective medium approximation (EMA) models is necessary. In this work, the real permittivity of Si-based semi-conducting fillers in ten different 0–3 polymer composite systems was calculated by linear fitting of simplified EMA models, based on the particular parameters reported for those composites. The results further confirmed the proposed interface-induced polarization and verified the significant influence of filler polarity, polymer polarity and filler size on the induced polarization and permittivity of the composites. High self-consistency was obtained between the present modelling and prior measurements. This work may offer a facile and effective route to obtaining the difficult-to-measure dielectric properties of the discrete filler phase in some special polymer-based composite systems.
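
One common simplified EMA is the Maxwell-Garnett relation for spherical inclusions; a sketch of back-calculating the filler permittivity from a measured composite permittivity (the paper's fitted models and parameter values may differ):

```python
def mg_effective(eps_m, eps_f, f):
    """Maxwell-Garnett effective permittivity for spherical inclusions of
    permittivity eps_f at volume fraction f in a matrix eps_m."""
    num = eps_f + 2 * eps_m + 2 * f * (eps_f - eps_m)
    den = eps_f + 2 * eps_m - f * (eps_f - eps_m)
    return eps_m * num / den

def mg_filler(eps_eff, eps_m, f):
    """Invert Maxwell-Garnett: recover the filler permittivity from the
    measured composite permittivity (closed-form algebraic inverse)."""
    r = eps_eff / eps_m
    return eps_m * (2 - 2 * f - 2 * r - r * f) / (r * (1 - f) - 1 - 2 * f)

# Illustrative numbers: polymer eps = 3, Si-like filler eps = 50, f = 0.2
eps_eff = mg_effective(3.0, 50.0, 0.2)
recovered = mg_filler(eps_eff, 3.0, 0.2)   # round-trips back to the filler value
```

The round-trip recovery is the same logic as the paper's procedure: measure the composite, assume an EMA form, and solve for the inaccessible filler permittivity.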

  20. Density functional and theoretical study of the temperature and pressure dependency of the plasmon energy of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attarian Shandiz, M., E-mail: mohammad.attarianshandiz@mail.mcgill.ca; Gauvin, R.

    The temperature and pressure dependency of the volume plasmon energy of solids was investigated by density functional theory calculations. In the free electron model, the volume change of the crystal is the major factor responsible for the variation of the valence electron density and plasmon energy. Hence, to introduce the effects of temperature and pressure into the density functional theory calculations of the plasmon energy, the temperature and pressure dependency of the lattice parameter was used. Also, by combining the free electron model with an equation of state based on the pseudo-spinodal approach, the temperature and pressure dependency of the plasmon energy was modeled. The suggested model is in good agreement with the results of the density functional theory calculations and with available experimental data for elements with free-electron behavior.
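
In the free electron model the plasmon energy is E_p = ħ√(ne²/(ε₀m)), so temperature and pressure enter only through the lattice parameter via n ∝ 1/a³; a quick sketch using aluminium as a textbook check:

```python
import math

HBAR = 1.054571817e-34   # J s
E = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12  # F/m
ME = 9.1093837015e-31    # kg

def plasmon_energy_ev(n_valence, lattice_a_m, atoms_per_cell):
    """Free-electron plasmon energy E_p = hbar * sqrt(n e^2 / (eps0 m)),
    with the valence-electron density n computed from the lattice
    parameter, which is how T and P enter in the abstract's scheme."""
    n = n_valence * atoms_per_cell / lattice_a_m ** 3   # electrons per m^3
    wp = math.sqrt(n * E ** 2 / (EPS0 * ME))            # plasma frequency
    return HBAR * wp / E                                # energy in eV

# Aluminium: 3 valence electrons, fcc cell a = 4.05 Angstrom, 4 atoms/cell
ep = plasmon_energy_ev(3, 4.05e-10, 4)   # close to the measured ~15 eV
```

Shrinking `lattice_a_m` (higher pressure) raises E_p, and thermal expansion lowers it, reproducing the qualitative trend the model captures.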

  1. FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nevin, J.P.; Connor, J.A.; Newell, C.J.

    1997-12-31

    A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users in determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on, and represents an enhancement to, the Domenico analytical groundwater transport model. These enhancements include an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the time needed for a plume to reach steady-state conditions. FATE 5 was developed in Microsoft® Excel and is controlled by means of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution to help the user determine whether the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.
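
The calibration idea, adjusting a first-order decay rate until an analytical solution matches measured concentrations, can be sketched with a stripped-down steady-state Domenico centerline solution (transverse dispersion omitted; a toy stand-in for FATE 5's Excel Solver fit):

```python
import math

def centerline_conc(c0, x, v, alpha_x, lam):
    """Steady-state centerline concentration with longitudinal dispersion
    alpha_x, seepage velocity v, and first-order decay rate lam
    (1-D Domenico-type form; transverse/vertical terms omitted)."""
    return c0 * math.exp(x / (2 * alpha_x) * (1 - math.sqrt(1 + 4 * lam * alpha_x / v)))

def calibrate_decay(c0, obs, v, alpha_x, lam_hi=1.0, tol=1e-10):
    """Bisection on the decay rate so the model matches one observed
    downgradient concentration (concentration decreases as lam grows)."""
    x, c_obs = obs
    lo, hi = 0.0, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if centerline_conc(c0, x, v, alpha_x, mid) > c_obs:
            lo = mid          # too little decay predicted -> increase rate
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative site: 10 mg/L source, 1 mg/L observed 100 m downgradient,
# v = 0.1 m/d, alpha_x = 10 m
lam = calibrate_decay(10.0, (100.0, 1.0), v=0.1, alpha_x=10.0)
```

Once calibrated, the same function predicts concentrations at any downgradient distance, which is the basis for the upper-bound prediction and SSTL back-calculation uses described above.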

  2. Comparison of three GIS-based models for predicting rockfall runout zones at a regional scale

    NASA Astrophysics Data System (ADS)

    Dorren, Luuk K. A.; Seijmonsbergen, Arie C.

    2003-11-01

    Site-specific information about the level of protection that mountain forests provide is often not available for large regions. Information regarding rockfalls is especially scarce. The most efficient way to obtain information about rockfall activity and the efficacy of protection forests at a regional scale is to use a simulation model. At present, it is still unknown which forest parameters could best be incorporated in such models. The purpose of this study was therefore to test and evaluate a model for rockfall assessment at a regional scale in which simple forest stand parameters, such as the number of trees per hectare and the diameter at breast height, are incorporated. To this end, a newly developed Geographical Information System (GIS)-based distributed model is compared with two existing rockfall models. The developed model is the only one of the three that calculates the rockfall velocity on the basis of energy loss due to collisions with trees and with the soil surface. The two existing models calculate energy loss over the distance between two cell centres, while the newly developed model is able to calculate multiple bounces within a pixel. The patterns of rockfall runout zones produced by the three models are compared with patterns of rockfall deposits derived from geomorphological field maps, and the modelled rockfall velocities are also compared. The models produced rockfall runout zone maps of rather similar accuracy. However, the developed model performs best on forested hillslopes, and it also produces velocities that match field estimates best on both forested and nonforested hillslopes, irrespective of the slope gradient.

  3. A one-dimensional peridynamic model of defect propagation and its relation to certain other continuum models

    NASA Astrophysics Data System (ADS)

    Wang, Linjuan; Abeyaratne, Rohan

    2018-07-01

    The peridynamic model of a solid does not involve spatial gradients of the displacement field and is therefore well suited for studying defect propagation. Here, bond-based peridynamic theory is used to study the equilibrium and steady propagation of a lattice defect - a kink - in one dimension. The material transforms locally, from one state to another, as the kink passes through. The kink is in equilibrium if the applied force is less than a certain critical value that is calculated, and propagates if it exceeds that value. The kinetic relation giving the propagation speed as a function of the applied force is also derived. In addition, it is shown that the dynamical solutions of certain differential-equation-based models of a continuum are the same as those of the peridynamic model provided the micromodulus function is chosen suitably. A formula for calculating the micromodulus function of the equivalent peridynamic model is derived and illustrated. This ability to replace a differential-equation-based model with a peridynamic one may prove useful when numerically studying more complicated problems such as those involving multiple and interacting defects.
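
The gradient-free character of the bond-based formulation is visible in the discrete force operator, a sum over bonds rather than a spatial derivative; a minimal 1-D sketch with a constant micromodulus (an illustrative consistency check, not the paper's kink problem or its micromodulus formula):

```python
import numpy as np

def pd_internal_force(u, dx, m, c):
    """Internal force density at each node of a 1-D bond-based peridynamic
    bar: f(x_i) = sum over neighbours within the horizon (m nodes each
    side) of c * (u_j - u_i) * dx, for a constant micromodulus c.
    Boundary nodes with incomplete horizons are left at zero for brevity."""
    n = len(u)
    f = np.zeros(n)
    for i in range(m, n - m):
        for j in range(1, m + 1):
            f[i] += c * (u[i + j] - u[i]) * dx   # bond to the right
            f[i] += c * (u[i - j] - u[i]) * dx   # bond to the left
    return f

# A homogeneous (affine) deformation u = k*x must produce zero internal
# force in the interior: the left and right bond contributions cancel.
x = np.linspace(0.0, 1.0, 101)
f = pd_internal_force(0.01 * x, dx=x[1] - x[0], m=3, c=1.0e3)
```

This cancellation under affine deformation is the discrete analogue of a gradient-based model producing zero force for uniform strain, and it is the kind of equivalence the paper formalizes through its micromodulus formula.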

  4. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    NASA Astrophysics Data System (ADS)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two types of software, Ansys and Plaxis, both based on FEM calculations. The two programs differ in how they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses were prepared in the form of a parametric study in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both programs permit the modelling of pile foundations. The Plaxis software offers advanced material models as well as the modelling of the effects of groundwater and overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with more than 95 % accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows for the common modelling of large structural systems together with a foundation system.

  5. [Case study on health risk assessment based on site-specific conceptual model].

    PubMed

    Zhong, Mao-Sheng; Jiang, Lin; Yao, Jue-Jun; Xia, Tian-Xiang; Zhu, Xiao-Ying; Han, Dan; Zhang, Li-Na

    2013-02-01

    Site investigation was carried out in an area to be redeveloped as a subway station, located directly downstream, in terms of groundwater flow, of a former chemical plant. The results indicate that the subsurface soil and groundwater in the area are both heavily polluted by 1,2-dichloroethane originating from the upstream chemical plant, with a highest soil concentration of 104.08 mg/kg at 8.6 m below ground and a highest groundwater concentration of 18,500 μg/L. Further, a site-specific contamination conceptual model, taking account of the specific structural configuration of the station, was developed, and the corresponding risk calculation equation was derived. The carcinogenic risks calculated with models developed on the generic site conceptual model and on the site-specific conceptual model derived herein were compared. Both models indicate that the carcinogenic risk is significantly higher than the acceptable level of 1 × 10⁻⁶. The comparison reveals that the risks calculated with the former models for soil and groundwater are higher than those calculated with the latter models by factors of 2 and 1.5, respectively. The findings indicate that the generic risk assessment model may underestimate the risk if specific site conditions and structural configuration are not considered.
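
The generic carcinogenic-risk calculation referred to has the standard form risk = chronic daily intake × slope factor; a sketch for soil ingestion (the exposure defaults and the slope factor are illustrative assumptions, not the paper's site-specific equation):

```python
def carcinogenic_risk(conc_mg_kg, intake_rate_mg_d=100.0, ef_d_y=350.0,
                      ed_y=30.0, bw_kg=70.0, at_d=70.0 * 365.0,
                      slope_factor=0.091):
    """Generic soil-ingestion carcinogenic risk, CDI x SF form.
    Defaults (adult ingestion rate, exposure frequency/duration, body
    weight, averaging time, and a 1,2-DCA-like oral slope factor in
    (mg/kg-d)^-1) are illustrative textbook-style values."""
    # chronic daily intake in mg of contaminant per kg body weight per day
    cdi = (conc_mg_kg * 1e-6 * intake_rate_mg_d * ef_d_y * ed_y) / (bw_kg * at_d)
    return cdi * slope_factor

# The paper's highest soil concentration, run through the generic equation
risk = carcinogenic_risk(104.08)
```

Even this generic form puts the risk above the 1 × 10⁻⁶ acceptable level; the paper's point is that such generic equations can still misstate the risk when the station's actual structure controls the exposure pathway.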

  6. Cost calculator methods for estimating casework time in child welfare services: A promising approach for use in implementation of evidence-based practices and other service innovations.

    PubMed

    Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia

    2014-04-01

    Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.

  7. IPR 1.0: an efficient method for calculating solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Chen, W.; Li, J.

    2013-12-01

    Climate change may alter the spatial distribution, composition, structure, and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous, with sparse trees, and their dynamics are mainly determined by the growth and competition of the individual plants in the community. It is therefore necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate the solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is based on geometrical-optical relationships, assuming the crowns of woody plants are rectangular boxes with uniform leaf area density. It calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community; the solar radiation received on the ground is also calculated. We tested the model by comparing its results with analytical solutions for random distributions of plants; the tests show that the model results are very close to the averages over the random distributions. The model is computationally efficient and is suitable for use in ecological models simulating long-term transient responses of plant communities to climate change.

  8. A coupled surface-water and ground-water flow model (MODBRANCH) for simulation of stream-aquifer interaction

    USGS Publications Warehouse

    Swain, Eric D.; Wexler, Eliezer J.

    1996-01-01

    Ground-water and surface-water flow models traditionally have been developed separately, with interaction between subsurface flow and streamflow either not simulated at all or accounted for by simple formulations. In areas with dynamic and hydraulically well-connected ground-water and surface-water systems, stream-aquifer interaction should be simulated using deterministic responses of both systems coupled at the stream-aquifer interface. Accordingly, a new coupled ground-water and surface-water model was developed by combining the U.S. Geological Survey models MODFLOW and BRANCH; the interfacing code is referred to as MODBRANCH. MODFLOW is the widely used modular three-dimensional, finite-difference ground-water model, and BRANCH is a one-dimensional numerical model commonly used to simulate unsteady flow in open-channel networks. MODFLOW was originally written with the River package, which calculates leakage between the aquifer and stream, assuming that the stream's stage remains constant during one model stress period. A simple streamflow routing model has been added to MODFLOW, but is limited to steady flow in rectangular, prismatic channels. To overcome these limitations, the BRANCH model, which simulates unsteady, nonuniform flow by solving the St. Venant equations, was restructured and incorporated into MODFLOW. Terms that describe leakage between stream and aquifer as a function of streambed conductance and differences in aquifer and stream stage were added to the continuity equation in BRANCH. Thus, leakage between the aquifer and stream can be calculated separately in each model, or leakages calculated in BRANCH can be used in MODFLOW. Total mass in the coupled models is accounted for and conserved. The BRANCH model calculates new stream stages for each time interval in a transient simulation based on upstream boundary conditions, stream properties, and initial estimates of aquifer heads.
Next, aquifer heads are calculated in MODFLOW based on stream stages calculated by BRANCH, aquifer properties, and stresses. This process is repeated until convergence criteria are met for head and stage. Because time steps used in ground-water modeling can be much longer than time intervals used in surface-water simulations, provision has been made for handling multiple BRANCH time intervals within one MODFLOW time step. An option was also added to BRANCH to allow the simulation of channel drying and rewetting. The coupled model was verified using data from previous studies; by comparing results with output from a simpler, four-point implicit, open-channel flow model linked with MODFLOW; and by comparison to field studies of the L-31N canal in southern Florida.
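
    The iterate-until-convergence cycle described above can be sketched with scalar stand-ins for the two solvers. The real codes solve a 3-D groundwater grid and the 1-D St. Venant equations; the one-line relaxation updates here are only a toy illustration of the coupling loop:

```python
def solve_stage(head, stage):
    # "BRANCH" step: stream stage relaxes toward a value set by aquifer head.
    return 0.5 * stage + 0.5 * (head + 0.2)

def solve_head(head, stage):
    # "MODFLOW" step: aquifer head relaxes toward a value set by stream stage.
    return 0.5 * head + 0.5 * (stage - 0.2)

head, stage = 10.0, 11.0          # initial estimates
for iteration in range(200):
    new_stage = solve_stage(head, stage)   # BRANCH uses current head estimate
    new_head = solve_head(head, new_stage) # MODFLOW uses the updated stage
    if abs(new_head - head) < 1e-9 and abs(new_stage - stage) < 1e-9:
        break                              # convergence criteria met
    head, stage = new_head, new_stage
# At convergence the two "models" agree on a consistent head/stage pair.
```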

  9. Adsorption and diffusion of fructose in zeolite HZSM-5: selection of models and methods for computational studies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, L.; Curtiss, L. A.; Assary, R. S.

    The adsorption and protonation of fructose in HZSM-5 have been studied for the assessment of models for accurate reaction energy calculations and the evaluation of molecular diffusivity. The adsorption and protonation were calculated using 2T, 5T, and 46T clusters as well as a periodic model. The results indicate that the reaction thermodynamics cannot be predicted correctly using small cluster models, such as 2T or 5T, because these small cluster models fail to represent the electrostatic effect of a zeolite cage, which provides additional stabilization to the ion pair formed upon the protonation of fructose. Structural parameters optimized using the 46T cluster model agree well with those of the full periodic model; however, the calculated reaction energies are in significant error due to the poor account of dispersion effects by density functional theory. The dispersion effects contribute -30.5 kcal/mol to the binding energy of fructose in the zeolite pore based on periodic model calculations that include dispersion interactions. The protonation of the fructose ternary carbon hydroxyl group was calculated to be exothermic by 5.5 kcal/mol with a reaction barrier of 2.9 kcal/mol using the periodic model with dispersion effects. Our results suggest that the internal diffusion of fructose in HZSM-5 is very likely to be energetically limited and only occurs at high temperature due to the large size of the molecule.
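
    The role of the dispersion term can be made concrete with the usual supermolecular decomposition of a binding energy. Apart from the -30.5 kcal/mol dispersion contribution quoted above, every number below is a hypothetical placeholder, not a value from the study:

```python
# Supermolecular binding energy:
#   E_bind = E(zeolite + fructose) - E(zeolite) - E(fructose)
e_complex = -1040.5      # total energy of the adsorption complex (hypothetical)
e_zeolite = -700.0       # isolated zeolite (hypothetical)
e_fructose = -300.0      # isolated fructose (hypothetical)

e_bind_total = e_complex - e_zeolite - e_fructose   # kcal/mol

# Splitting off the dispersion contribution reported for the periodic model
# shows how much weaker the binding looks without it.
e_dispersion = -30.5                                # from the abstract
e_bind_no_disp = e_bind_total - e_dispersion
```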

  11. Model for Correlating Real-Time Survey Results to Contaminant Concentrations - 12183

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Stuart A.

    2012-07-01

    The U.S. Environmental Protection Agency (EPA) Superfund program is developing a new counts-per-minute (CPM) calculator to correlate real-time survey results, which are often expressed in counts per minute, to contaminant concentrations that are more typically provided in risk assessments or for cleanup levels, usually expressed in pCi/g or pCi/m². Currently there is no EPA guidance for Superfund sites on correlating counts-per-minute field survey readings back to risk-, dose-, or other ARAR-based concentrations. The CPM calculator is a web-based model that estimates a gamma detector response for a given level of contamination. The intent of the CPM calculator is to facilitate more real-time measurement within a Superfund response framework. The draft of the CPM calculator is still undergoing internal EPA review, which will be followed by external peer review. It is expected that the CPM calculator will at least be in peer review by the time of WM2012 and possibly finalized at that time. The CPM calculator should facilitate greater use of real-time measurement at Superfund sites and may also standardize the process of converting lab data to real-time measurements. It will thus lessen the amount of lab sampling needed for site characterization and confirmation surveys, but it will not remove the need for sampling. (authors)
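
    The kind of conversion such a calculator automates can be sketched as turning a net count rate into a surface-activity concentration. The detector efficiency and probe area below are illustrative assumptions, not EPA or instrument-specific values:

```python
gross_cpm = 1200.0       # field survey reading, counts per minute
background_cpm = 300.0   # background count rate
efficiency = 0.10        # counts registered per emission reaching the probe (assumed)
probe_area_m2 = 0.0015   # sensitive area of the detector, m^2 (assumed)

net_cps = (gross_cpm - background_cpm) / 60.0     # net counts per second
emissions_per_s = net_cps / efficiency            # emissions/s over the probe area
activity_per_m2 = emissions_per_s / probe_area_m2 # emissions/s per m^2
# 1 pCi = 0.037 disintegrations per second
activity_pci_per_m2 = activity_per_m2 / 0.037
```

A real detector-response model also folds in source depth, gamma yield per decay, and shielding, which is why the CPM calculator models the full geometry rather than using a single efficiency factor.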

  12. Comments on "Modified wind chill temperatures determined by a whole body thermoregulation model and human-based convective coefficients" by Ben Shabat, Shitzer and Fiala (2013) and "Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments" by Ben Shabat and Shitzer (2012)

    NASA Astrophysics Data System (ADS)

    Osczevski, Randall J.

    2014-08-01

    Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) present revised charts for wind chill equivalent temperatures (WCET) and facial skin temperatures (FST) that differ significantly from currently accepted charts. They credit these differences to their more sophisticated calculation model and to the human-based equation that it used for finding the convective heat transfer coefficient (Ben Shabat and Shitzer, Int J Biometeorol 56:639-651, 2012). Because a version of the simple model that was used to create the current charts accurately reproduces their results when it uses the human-based equation, the differences that they found must be entirely due to this equation. In deriving it, Ben Shabat and Shitzer assumed that all of the heat transfer from the surface of their cylindrical model was due to forced convection alone. Because several modes of heat transfer were occurring in the human experiments they were attempting to simulate, notably radiation, their coefficients are actually total external heat transfer coefficients, not purely convective ones, as the calculation models assume. Data from the one human experiment that used heat flux sensors supports this conclusion and exposes the hazard of using a numerical model with several adjustable parameters that cannot be measured. Because the human-based equation is faulty, the values in the proposed charts are not correct. The equation that Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) propose to calculate WCET should not be used.
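
    The criticism above, that a coefficient fitted to total heat loss lumps radiation in with convection, can be illustrated numerically: a linearized radiative coefficient of a few W m⁻² K⁻¹ rides on top of forced convection. The values below are representative assumptions, not data from the cited experiments:

```python
sigma = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.95                 # emissivity of skin (typical literature value)
t_skin, t_air = 288.0, 263.0   # skin and air temperatures, K (assumed)

# Radiative coefficient linearized about the mean temperature:
t_mean = 0.5 * (t_skin + t_air)
h_rad = 4.0 * eps * sigma * t_mean ** 3   # ~4-5 W m^-2 K^-1

h_conv = 25.0              # assumed forced-convection coefficient, W m^-2 K^-1
h_total = h_conv + h_rad   # what a fit to total heat loss actually recovers

# Treating h_total as purely convective overstates convection by this fraction:
overestimate_pct = 100.0 * h_rad / h_conv
```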

  13. A virtual photon energy fluence model for Monte Carlo dose calculation.

    PubMed

    Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan

    2003-03-01

    The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations, or widths, and the relative weights of each source are free parameters; five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take off-axis softening into account. The VEF model is implemented, together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator), into the XVMC dose calculation engine; implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments were performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water. It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
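
    The two-source idea can be sketched as a weighted sum of normalized Gaussians for the in-air energy fluence. The weights and widths below, and the omission of the horn-correction parameters and collimator projection, are simplifying assumptions for illustration, not commissioned VEF parameters:

```python
import math

w_primary, w_secondary = 0.93, 0.07   # relative source weights (assumed)
sigma_p, sigma_s = 1.0, 12.0          # source widths in mm (assumed)

def gaussian_source(x, sigma):
    """Normalized 1-D Gaussian profile at off-axis position x (mm)."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def energy_fluence(x):
    """Relative in-air energy fluence: narrow primary (target) source plus a
    broad secondary (flattening-filter) source."""
    return (w_primary * gaussian_source(x, sigma_p)
            + w_secondary * gaussian_source(x, sigma_s))
```

The broad secondary source is what lets the fit reproduce out-of-field fluence that a single point source cannot.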

  14. Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.

    PubMed

    Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi

    2018-05-10

    Research on continuum manipulators is increasingly developing in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a major challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the robot's behavior through development of an approximate behavioral model, which can be kinematic or dynamic depending on the operating conditions of the robot. Kinematically, two types of modeling methods exist to describe the robot's behavior: quantitative (model-based) methods and qualitative (learning-based) methods. In kinematic modeling of continuum manipulators, a constant-curvature assumption is often made to simplify the model formulation. In this work, a quantitative modeling method is proposed, based on Pythagorean hodograph (PH) curves. The aim is to obtain a three-dimensional reconstruction of the shape of the continuum manipulator with variable curvature, allowing the calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling of continuum manipulators performs considerably better than other kinematic modeling methods in position accuracy, shape reconstruction, and time/cost of the model calculation, for two cases: free-load manipulation and variable-load manipulation. The modeling method is applied to the compact bionic handling assistant (CBHA) manipulator for validation, and the results are compared with other IKMs developed for the CBHA manipulator.
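
    The property that makes PH curves attractive for shape reconstruction is that the speed |r'(t)| is itself a polynomial, so arc length comes without square roots. A planar PH curve can be built from a complex "preimage" polynomial w(t) via r'(t) = w(t)²; the coefficients below are arbitrary illustrative choices:

```python
def w(t):
    # Complex preimage polynomial w(t) = (1 - t) * w0 + t * w1 (assumed coeffs).
    w0, w1 = complex(1.0, 0.5), complex(0.8, -0.3)
    return (1 - t) * w0 + t * w1

def hodograph(t):
    # PH construction: the hodograph r'(t) is the square of the preimage.
    z = w(t) ** 2
    return z.real, z.imag          # (x'(t), y'(t))

def speed(t):
    xd, yd = hodograph(t)
    return (xd * xd + yd * yd) ** 0.5

# PH property: |r'(t)| equals |w(t)|^2, a polynomial in t (no square root),
# so arc length and curvature come in closed form.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(speed(t) - abs(w(t)) ** 2) < 1e-12
```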

  15. Calculation of Brown Carbon Optical Properties in the Fifth version Community Atmospheric Model (CAM5) and Validation with a Case Study in Kanpur, India

    NASA Astrophysics Data System (ADS)

    Xu, L.; Peng, Y.; Ram, K.

    2017-12-01

    The presence of an absorbing component of organic carbon in atmospheric aerosols (brown carbon, BrC) has recently received much attention from the scientific community because of its absorbing nature, especially in the UV and visible regions. Attempts to account for BrC in radiative forcing calculations in climate models are rather scarce, primarily due to observational constraints and the difficulty of incorporating BrC in model-based studies. Because BrC is not treated in off-line models, there is a large discrepancy between model- and observation-based estimates of the direct radiative effect of carbonaceous aerosols. In this study, we included BrC absorption and optical characteristics in the fifth version of the Community Atmospheric Model (CAM5) to better understand the radiative impact of BrC over northern India and to improve aerosol radiative calculations in climate models. We used aerosol chemical composition measurements conducted at an urban site, Kanpur, in the Indo-Gangetic Plain (IGP) during 2007-2008 to construct the optical properties of BrC in the CAM5 model. Model radiative simulations from sensitivity tests showed good agreement with observations. The effects of varying the imaginary part of the BrC refractive index and the relative mass ratio of BrC to organic aerosol, in combination with core-shell mixing of BrC with other anthropogenic aerosols, are also analyzed to understand the BrC impact on simulated aerosol absorption.
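
    One of the sensitivity knobs mentioned above, the BrC mass fraction, can be sketched with a simple mass-weighted mix of imaginary refractive indices. The values below are illustrative, not the ones used in the CAM5 runs:

```python
k_brc = 0.03    # imaginary refractive index of BrC near 550 nm (assumed)
k_oa = 0.0      # purely scattering organic aerosol

# Absorption of the organic mixture scales with the assumed BrC mass fraction.
fractions = (0.0, 0.25, 0.5)
k_mix = [f * k_brc + (1 - f) * k_oa for f in fractions]
```

More realistic treatments (e.g. the core-shell mixing tested in the study) require full Mie calculations rather than a linear mix, but the qualitative dependence on the BrC fraction is the same.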

  16. Sea-Level Allowances along the World Coastlines

    NASA Astrophysics Data System (ADS)

    Vandewal, R.; Tsitsikas, C.; Reerink, T.; Slangen, A.; de Winter, R.; Muis, S.; Hunter, J. R.

    2017-12-01

    Sea level changes as a result of climate change. For projections, we take ocean mass changes and volume changes into account; including gravitational and rotational fingerprints, this provides regional sea-level changes. Hence we can calculate sea-level rise patterns based on CMIP5 projections. To account for the variability around the mean state that follows from the climate models, we use the concept of allowances: the allowance indicates the height by which a coastal structure needs to be raised to maintain the likelihood of sea-level extremes. Here we use a global reanalysis of storm surges and extreme sea levels, based on a global hydrodynamic model, to calculate allowances. The model compares favourably in most regions with tide gauge records from the GESLA data set. Combining the CMIP5 projections and the global hydrodynamic model, we calculate sea-level allowances along the global coastlines and expand the number of points by a factor of 50 relative to tide-gauge-based results. Results show that allowances increase gradually along continental margins, with the largest values near the equator. In general, values are lower at midlatitudes in both the Northern and Southern Hemispheres. The increase in risk of extremes is typically a factor of 10³-10⁴ for the majority of the coastline under the RCP8.5 scenario at the end of the century. Finally, we show preliminary results of the effect of changing wave heights based on the coordinated ocean wave project.
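
    The allowance concept has a closed form in Hunter's formulation when the projected rise is normally distributed and extremes follow a Gumbel distribution: to preserve the expected number of exceedances, raise the structure by A = μ + σ²/(2λ), where λ is the Gumbel scale parameter. The numbers below are illustrative, not values from the study:

```python
import math

mu = 0.5      # m, mean projected sea-level rise (illustrative)
sigma = 0.15  # m, standard deviation of the projection (illustrative)
lam = 0.05    # m, Gumbel scale parameter of the extreme-value fit (illustrative)

# Allowance: the mean rise plus a premium for projection uncertainty.
allowance = mu + sigma ** 2 / (2 * lam)

# Factor by which exceedance frequency would grow if the structure were
# raised by only mu instead of the full allowance.
risk_factor = math.exp((allowance - mu) / lam)
```

The exponential dependence on λ is why modest uncertainty in the projection can translate into the large (10³-10⁴) risk increases quoted above.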

  17. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
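
    The calculation chain can be condensed to: plane-of-array irradiance → DC power with a temperature correction → AC power via a fixed derate. The parameter values below mirror commonly published PVWatts v1 defaults but should be treated as assumptions here, not a substitute for the technical reference:

```python
p_dc0 = 4000.0    # nameplate DC rating, W
gamma = -0.005    # power temperature coefficient, 1/degC (typical default)
derate = 0.77     # overall DC-to-AC derate factor (v1 default)

poa = 800.0       # plane-of-array irradiance, W/m^2 (example input)
t_cell = 45.0     # cell temperature, degC (example input)

# DC power scales with irradiance and loses ~0.5% per degC above 25 degC.
p_dc = (poa / 1000.0) * p_dc0 * (1.0 + gamma * (t_cell - 25.0))
# A single derate factor lumps inverter, wiring, soiling, and mismatch losses.
p_ac = derate * p_dc
```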

  18. Equilibrium Phase Behavior of the Square-Well Linear Microphase-Forming Model.

    PubMed

    Zhuang, Yuan; Charbonneau, Patrick

    2016-07-07

    We have recently developed a simulation approach to calculate the equilibrium phase diagram of particle-based microphase formers. Here, this approach is used to calculate the phase behavior of the square-well linear model for different strengths and ranges of the linear long-range repulsive component. The results are compared with various theoretical predictions for microphase formation. The analysis further allows us to better understand the mechanism for microphase formation in colloidal suspensions.
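
    The pair potential named in the title can be written out explicitly: a hard core, a short-range square-well attraction, and a long-range repulsion that decays linearly to zero at a cutoff. The parameter values are illustrative choices, not the strengths and ranges scanned in the study:

```python
def sw_linear(r, sigma=1.0, well_depth=1.0, well_range=1.5,
              rep_strength=0.2, rep_cutoff=4.0):
    """Square-well linear pair potential (illustrative parameters)."""
    if r < sigma:
        return float("inf")        # hard core
    if r < well_range:
        return -well_depth         # short-range square-well attraction
    if r < rep_cutoff:
        # long-range repulsive ramp, decaying linearly to zero at rep_cutoff
        return rep_strength * (rep_cutoff - r) / (rep_cutoff - well_range)
    return 0.0
```

The competition between the short-range attraction and the long-range repulsion is what frustrates bulk phase separation and produces microphases.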

  19. Post-1906 stress recovery of the San Andreas fault system calculated from three-dimensional finite element analysis

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    The M = 7.8 1906 San Francisco earthquake cast a stress shadow across the San Andreas fault system, inhibiting other large earthquakes for at least 75 years. The duration of the stress shadow is a key question in San Francisco Bay area seismic hazard assessment. This study presents a three-dimensional (3-D) finite element simulation of post-1906 stress recovery. The model reproduces observed geologic slip rates on major strike-slip faults and produces surface velocity vectors comparable to geodetic measurements. Fault stressing rates calculated with the finite element model are evaluated against values calculated using deep dislocation slip. In the finite element model, tectonic stressing is distributed throughout the crust and upper mantle, whereas tectonic stressing calculated with dislocations is focused mostly on faults. In addition, the finite element model incorporates postseismic effects such as deep afterslip and viscoelastic relaxation in the upper mantle. More distributed stressing and postseismic effects in the finite element model lead to lower calculated tectonic stressing rates and longer stress shadow durations (17-74 years compared with 7-54 years). All models considered indicate that the 1906 stress shadow was completely erased by tectonic loading no later than 1980. However, the stress shadow still affects present-day earthquake probability. Use of stressing rate parameters calculated with the finite element model yields a 7-12% reduction in 30-year probability caused by the 1906 stress shadow as compared with calculations not incorporating interactions. The aggregate interaction-based probability on selected segments (not including the ruptured San Andreas fault) is 53-70% versus the noninteraction range of 65-77%.
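
    The first-order arithmetic behind a stress-shadow duration is simply the coseismic stress change divided by the tectonic stressing rate, which is why the lower finite-element stressing rates stretch the shadow. The numbers below are illustrative assumptions chosen only to show the scaling, not values from the study:

```python
stress_drop = 0.5          # MPa of coseismic stress reduction (assumed)
rate_dislocation = 0.020   # MPa/yr, dislocation-based stressing rate (assumed)
rate_fem = 0.010           # MPa/yr, lower finite-element rate (assumed)

# Years for tectonic loading to recover the coseismic stress change:
shadow_dislocation = stress_drop / rate_dislocation
shadow_fem = stress_drop / rate_fem
```

Halving the stressing rate doubles the recovery time, mirroring the 7-54 vs 17-74 year ranges reported above.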

  20. User's guide to PHREEQC (Version 2) : a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, David L.; Appelo, C.A.J.

    1999-01-01

    PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits. New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.
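
    One of the core quantities in capability (1), the saturation index, is SI = log₁₀(IAP/Ksp): positive when the water is supersaturated with respect to a mineral, negative when undersaturated. A sketch for a gypsum-like dissolution reaction, with illustrative round-number activities rather than PHREEQC database values:

```python
import math

# Gypsum-like reaction: CaSO4·2H2O = Ca2+ + SO4^2- + 2 H2O
activity_ca = 1.0e-2     # activity of Ca2+ (illustrative)
activity_so4 = 1.0e-2    # activity of SO4^2- (illustrative)
ksp = 10 ** -4.58        # solubility product, order of gypsum's log K

iap = activity_ca * activity_so4       # ion activity product
si = math.log10(iap / ksp)             # saturation index
```

In PHREEQC itself the activities come from solving the full ion-association speciation problem, not from assumed values.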
